\section{Introduction}\label{sec:introduction}
In this paper a \emph{graph} is permitted to have parallel edges but no loops; we will say \emph{simple graph} when we wish to disallow parallel edges.
A \emph{$k$-edge-colouring} of a graph $G$ is a function that assigns a
colour from $\{1, \ldots, k\}$ to each edge of $G$ so that adjacent
edges receive different colours. The \emph{chromatic index} of $G$,
$\chi'(G)$, is the minimum $k$ such that $G$ is $k$-edge-colourable;
the maximum degree $\Delta(G)$ is an obvious lower bound for
$\chi'(G)$. When the graph $G$ is understood, we sometimes write $\Delta$ for
$\Delta(G)$.
Numerous authors have found sufficient conditions for
$\Delta$-edge-colouring a simple graph $G$ by studying its \emph{core},
that is, the graph induced by its vertices of degree $\Delta$.
An early such result is due to Fournier~\cite{fournier77, fournier73}:
\begin{theorem}[Fournier~\cite{fournier77, fournier73}]\label{thm:fournier}
If $G$ is a simple graph and the core of $G$ is a forest, then
$\chi'(G) = \Delta(G)$.
\end{theorem}
This result was strengthened by Hoffman and Rodger~\cite{hoffman-rodger}, who showed that if $B$ is the core of a graph $G$, and $B$ permits a specific vertex-ordering called a \emph{full $B$-queue}, then $G$ is $\Delta$-edge-colourable. We defer a precise definition of \emph{full $B$-queue} to Section~\ref{sec:bqueue} of this paper, but we state their result now, noting that if $B$ is a forest, then it indeed has a full $B$-queue. Hoffman and Rodger \cite{hoffman-rodger} also provided an efficient algorithm for deciding whether or not a graph $B$ has a full $B$-queue; in fact they showed that the greedy algorithm works.
\begin{theorem}[Hoffman--Rodger~\cite{hoffman-rodger}]\label{thm:hoffman-rodger}
Let $G$ be a simple graph with core $B$. If $B$ has a
full $B$-queue, then $\chi'(G) = \Delta(G)$.
\end{theorem}
Simple graphs can be divided into those of \emph{class I} (having chromatic index $\Delta$) or \emph{class II} (having chromatic index $\Delta+1$), but in general the chromatic index of $G$ can be as high as $\Delta+\mu$, where $\mu=\mu(G)$ is the maximum edge-multiplicity of $G$. This classical bound of Vizing \cite{vizing} also has the following local refinement due to Ore, where $\mu(v)$ denotes the maximum edge multiplicity incident to vertex $v$.
\begin{theorem}[Ore~\cite{ore-fourcolor}]\label{thm:ore}
For every graph $G$, $\chi'(G) \leq \max_{v \in V(G)}[d(v) + \mu(v)]$.
\end{theorem}
We define the \emph{t-core} of $G$ to be the subgraph induced by the vertices $v$ with \[d(v)+\mu(v)> \Delta+t.\] Observe that the $0$-core of a nonempty
simple graph is simply its core. Ore's Theorem can be restated as: ``For any $t\geq 0$, if the $t$-core of a graph $G$ is empty, then $G$ is $(\Delta+t)$-edge-colourable''. We improve this and generalize
Theorem~\ref{thm:fournier} as follows. Here, by \emph{multiforest},
we mean a graph whose underlying simple graph is a forest.
\begin{theorem}\label{thm:forestcore}
Let $G$ be a graph and let $t\geq 0$. If the $t$-core of $G$ has multiplicity at most $t+1$, with its edges of multiplicity $t+1$ inducing a multiforest, then $\chi'(G) \leq \Delta+t$.
\end{theorem}
The $t=0$ case of Theorem~\ref{thm:forestcore} implies
Theorem~\ref{thm:fournier} (and is already slightly stronger, since
Theorem~\ref{thm:forestcore} allows $G$ to be a multigraph even though
$t=0$ forces the $0$-core of $G$ to be simple whenever the hypothesis
is met). When $t=\mu(G)-1$, the hypothesis of
Theorem~\ref{thm:forestcore} is just that the edges of multiplicity
$\mu$ in the $t$-core induce a multiforest; this strengthens a
previous result of Berge and Fournier \cite{berge-fournier}, who
showed that if the $(\mu-1)$-core of $G$ is edgeless, then $G$ is
$(\Delta+\mu-1)$-edge-colourable.
The multiplicity condition in Theorem~\ref{thm:forestcore} is sharp,
and this can already be seen with a fat triangle. Consider the
multigraph $G$ obtained from $K_3$ by giving two edges multiplicity
$t+1$ and the remaining edge multiplicity $t+2$. Now
$\Delta(G) = 2t+3$, and the $t$-core of $G$ is simply the $t+2$
parallel edges (since for each of those endpoints, degree plus
multiplicity is $3t+5>\Delta(G)+t$, while for the other vertex this
sum is only $3t+3$). Hence, the $t$-core of $G$ is a multiforest but
with multiplicity $t+2$; this discrepancy from Theorem
\ref{thm:forestcore} is already enough to cause a problem, as of
course this fat triangle has $\chi'(G)=3t+4 > \Delta(G)+t$.
Theorem \ref{thm:forestcore} is in fact a corollary of a stronger result we prove, which generalizes Theorem \ref{thm:hoffman-rodger}. Theorem \ref{thm:hoffman-rodger} is about a condition on the core (0-core) of a simple graph that guarantees $\Delta$-edge-colourability; here we get a condition on the $t$-core of a graph that guarantees $(\Delta+t)$-edge-colourability (with the same condition when $t=0$).
\begin{theorem}\label{thm:Bqueuecore} Let $G$ be a graph, let $t\geq 0$, and let $H$ be the $t$-core of $G$. If $H$ has multiplicity at most $t+1$, and the underlying simple graph $B$ of those maximum multiplicity edges has a full $B$-queue, then $\chi'(G)\leq \Delta(G)+t$.
\end{theorem}
We can actually state Theorem \ref{thm:Bqueuecore} (and hence Theorem \ref{thm:forestcore}) in an even stronger way, by replacing $\chi'(G)$ with the \emph{fan number} $\Fan(G)$. Scheide and Stiebitz~\cite{SS} introduced $\Fan(G)$ to essentially describe the smallest $k$ for which
Vizing's Fan Inequality (see Section~\ref{sec:fan}) can be used to prove that $G$ is
$k$-edge-colourable, in particular proving the following.
\begin{theorem}[Scheide--Stiebitz~\cite{SS}]\label{thm:SS}
For any graph $G$, $\chi'(G) \leq \Fan(G)$.
\end{theorem}
We are able to give an exact characterization of the graphs $H$ such that $\Fan(G) \leq \Delta(G)+t$ whenever $G$ has $H$ as its $t$-core. In particular, we will define $\corefan(H)$
for a graph $H$ (which we'll think of as being the $t$-core of
$G$), and prove the following pair of theorems.
\begin{theorem}\label{thm:corefan}
Let $G$ be a graph, let $t\geq 0$, and let $H$ be
the $t$-core of $G$. If $\corefan(H) \leq t$, then $\Fan(G)\leq \Delta+t$.
\end{theorem}
\begin{theorem}\label{thm:converse}
Let $H$ be a graph, and let $t$ be a nonnegative integer. If
$\corefan(H) > t$, then there exists a graph $G$ with
$t$-core $H$ such that $\Fan(G)> \Delta(G)+t$.
\end{theorem}
This pair of results can be thought of as a sort of multigraph analog
to the work of Hoffman~\cite{hoffman}, who found a necessary and
sufficient condition for a simple graph $H$ to be the core of a simple
graph $G$ containing a so-called overfull subgraph of the same maximum
degree. Overfull graphs are known to be class II. The graph $G$
constructed in Theorem~\ref{thm:converse} does not necessarily satisfy
$\chi'(G) > \Delta(G) + t$, as one might hope, but the lower bound on
the fan number suggests that fan-recolouring would not suffice to
$(\Delta+t)$-edge-colour these graphs.
Our paper is organized as follows. We'll define $\Fan$ and $\corefan$
in Section~\ref{sec:fan}, spending time to motivate these definitions
according to Vizing's Adjacency Lemma, and conclude the section with a
proof of Theorem \ref{thm:corefan}. In Section~\ref{sec:bqueue} we'll
give a precise definition of $B$-queue and full $B$-queue, and prove
Theorem \ref{thm:Bqueuecore}. In particular, we'll show that when $H$
is the $t$-core of $G$, and $H$ satisfies all the assumptions of Theorem
\ref{thm:Bqueuecore}, then $\corefan(H)\leq t$, and hence Theorems
\ref{thm:SS} and \ref{thm:corefan} imply that
$\chi'(G)\leq \Fan(G)\leq \Delta+t$. Our proof of Theorem
\ref{thm:converse} is the subject of Section~\ref{sec:converse}.
\begin{remark}
The word ``core'' has several different meanings in graph theory. In
addition to the usage above, it has a definition in the setting of
graph homomorphisms. Moreover, the term ``$k$-core'' has also been
used in a degeneracy context, to refer to the component of $G$ that
remains after iteratively deleting vertices of degree at most $k$.
\end{remark}
\section{Proof of Theorem \ref{thm:corefan}}\label{sec:fan}
In the introduction we described $\Fan(G)$ as essentially describing the
smallest $k$ for which the following theorem, Vizing's Fan Inequality, can be used to
prove that $G$ is $k$-edge-colourable. Let us now say more about this.
\begin{theorem}\label{thm:vizfan}\emph{(Vizing's Fan Inequality
\cite{vizing}, see also \cite{SSFT})} Let $G$ be a graph, let
$k\geq \Delta$, and suppose there is a $k$-edge-colouring of $J-e$
for some $J\subseteq G$ and $e=xy \in E(G)$. Then either $J$ is
$k$-edge-colourable, or there exists a vertex-set
$Z\subseteq N_J(x)$ such that $|Z|\geq 2$, $y\in Z$, and
\begin{equation}\label{fanineq}
\sum_{z\in Z} \left( d_J(z) +\mu_J(x, z) - k\right) \geq 2.
\end{equation}
\end{theorem}
Vizing's Theorem (and Ore's Theorem) follow immediately from the fan
inequality. To see this, consider an edge-minimal counterexample $G$
(so let $J=G$ in Theorem \ref{thm:vizfan}), and note that setting
$k=\Delta(G)+\mu(G)$ (or $k=\max_{v \in V(G)}[d(v) + \mu(v)]$) makes
inequality (\ref{fanineq}) impossible to satisfy.
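Indeed, with $k=\Delta(G)+\mu(G)$ every summand in (\ref{fanineq}) satisfies
\[ d_J(z) + \mu_J(x,z) - k \leq \Delta(G) + \mu(G) - k = 0, \]
so no choice of $Z$ can make the sum reach $2$; replacing $k$ by $\max_{v \in V(G)}[d(v) + \mu(v)]$ and bounding each summand by $d(z)+\mu(z)-k\leq 0$ gives Ore's Theorem in the same way.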
In order to apply Theorem \ref{thm:vizfan}, we would certainly need
$k\geq \Delta$. Given this however, if we had a $k$-edge-colouring of
$J-e$ for some $e=xy\in E(J)$ and we knew that for \emph{every}
$Z\subseteq N(x)$ with $y\in Z$ and $\sizeof{Z} \geq 2$,
\[\sum_{z \in Z}(d_J(z) + \mu_J(x,z) - k) \leq 1,\]
then we'd get a proof of $k$-edge-colourability of $J$ via Theorem
\ref{thm:vizfan}. On the other hand, if we knew that
\[d_J(x) + d_J(y) - \mu_J(x,y) \leq k,\] for such an $e=xy$, then we'd
get our $k$-edge-colouring extending to $J$ simply because $e$ sees at
most $k-1$ different edges in $G$. With this in mind, Scheide and
Stiebitz~\cite{SS} defined the \emph{fan-degree}, $\deg_J(x,y)$, of
the pair $x, y\in V(J)$ as the smallest nonnegative integer $k$ such
that either:
\begin{enumerate}[(i)]
\item $d_J(x) + d_J(y) - \mu_J(x,y) \leq k$, or
\item $\sum_{z \in Z}(d_J(z) + \mu_J(x,z) - k) \leq 1$ for all $Z \subset N_J(x)$ with $y \in Z$ and $\sizeof{Z} \geq 2$.
\end{enumerate}
So, we could extend the $k$-edge-colouring of $J-e$ to $J$ provided we
knew that $\deg_J(x,y)\leq k$. Of course, our goal is to
$k$-edge-colour all of $G$, not just some subgraph $J$. However, if
$G$ is not $k$-edge-colourable, then there exists $J\subseteq G$ with
the property that $J-e$ is $k$-edge-colourable for all $e\in E(J)$ but
$J$ is not $k$-edge-colourable. If, for \emph{this} $J$, we knew that
there was a choice of $xy\in E(J)$ with $\deg_J(x,y)\leq k$, then we'd
know that $J$ is $k$-edge-colourable after all, and hence so is
$G$. If such a choice of $xy$ existed for \emph{every} subgraph $J$ of
$G$ (say with at least one edge), then we would certainly get that $G$
is $k$-edge-colourable. Hence, Scheide and Stiebitz~\cite{SS} defined
the \emph{fan number}, $\fan(G)$, of a graph $G$ by
\[ \fan(G) = \max_{J \subset G, E(J)\neq \emptyset} \min\{\deg_J(x,y) \st xy \in E(J)\}, \]
with $\fan(G)$ defined to be 0 for an edgeless graph $G$. Recalling the requirement that $k\geq \Delta$, they finally defined $\Fan(G)=\max\{\Delta, \fan(G)\}$, and established Theorem \ref{thm:SS}.
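For a fixed subgraph $J$ and pair $(x,y)$, the fan degree can be computed by a finite scan: both conditions are monotone in $k$, and for a given $k$ the worst set $Z$ in (ii) consists of $y$, every other neighbour of $x$ contributing a positive term, and, if no such neighbour exists, one further neighbour to meet the requirement $\sizeof{Z}\geq 2$. The following Python sketch illustrates this; the input conventions (a neighbour list for $x$, dictionaries of degrees and of multiplicities at $x$) are our own, not from \cite{SS}.
\begin{verbatim}
def fan_degree(adj_x, d, mu_x, x, y):
    """deg_J(x, y): smallest k >= 0 satisfying (i) or (ii)."""
    k1 = max(d[x] + d[y] - mu_x[y], 0)      # threshold for condition (i)
    others = [z for z in adj_x if z != y]
    if not others:
        return 0                            # (ii) holds vacuously at k = 0
    for k in range(k1):                     # look for (ii) holding below k1
        term = lambda z: d[z] + mu_x[z] - k
        extra = sum(max(term(z), 0) for z in others)
        if extra == 0:                      # |Z| >= 2 forces a second vertex
            extra = max(term(z) for z in others)
        if term(y) + extra <= 1:
            return k
    return k1
\end{verbatim}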
Now suppose that the graph $G$ has $t$-core $H$. We would like to be
able to look just at $H$ and determine that $\Fan(G)\leq \Delta+
t$. To this end, we would like to describe a condition on $H$ that
would guarantee that for every $J\subseteq G$, there exists
$xy\in E(J)$ with $\deg_J(x,y)\leq \Delta+t$. We'll forget about (i)
for this purpose, and try to get a condition on $H$ which guarantees
(ii) for such $J, x, y$. If $K=J\cap H$, then we're trying to get a
guarantee for $J$ by only looking at $K$. The good news here is that
if, for example, some vertex $z\in Z$ is in $J$ but not $K$, then $z$
is not in the $t$-core, so in particular,
\[d_J(z) + \mu_J(x,z) - (\Delta(G)+t)\leq 0,\]
that is, the vertex $z$ is insignificant in terms of establishing (ii). There are more details to handle, but we'll see that the following definition is the right condition to require. Note that while we'll think of $H$ as being the $t$-core of a graph $G$, this definition takes as input any graph $H$.
For any graph $H$, subgraph $K \subset H$, and ordered pair of
vertices $(x,y)$ with $xy \in E(K)$, we define the \emph{cfan
degree}, denoted $\cdeg_{H,K}(x,y)$, as the smallest nonnegative integer $l$
such that for all $Z \subset N_K(x)$ with $y \in Z$, we have
\[ \sum_{z \in Z}(d_K(z) - d_H(z) + \mu_K(x,z) - l) \leq 1. \]
Note that, in contrast to the fan degree, the cfan degree does
\emph{not} impose the restriction that $\sizeof{Z} \geq 2$ when
determining which sets $Z \subset N_K(x)$ must be considered.
The \emph{cfan number} of $H$, written $\corefan(H)$, is then
defined by
\[ \corefan(H) = \max_{K \subset H, E(K)\neq\emptyset}\min\{ \cdeg_{H,K}(x,y) \st xy \in E(K) \}, \]
with $\corefan(H)$ defined to be 0 for an edgeless graph $H$.
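The cfan degree admits a similar finite computation as the fan degree above: writing $c(z) = d_K(z)-d_H(z)+\mu_K(x,z)$, the worst set $Z$ for a given $\ell$ is $\{y\}$ together with every $z$ having $c(z) > \ell$, so one may scan $\ell = 0, 1, 2, \ldots$; the scan terminates since every term eventually becomes nonpositive. A Python sketch, with the same hypothetical input conventions as before:
\begin{verbatim}
def cfan_degree(adj_x, d_K, d_H, mu_x, y):
    """cdeg_{H,K}(x, y); note there is no |Z| >= 2 restriction."""
    c = {z: d_K[z] - d_H[z] + mu_x[z] for z in adj_x}
    l = 0
    while True:
        worst = (c[y] - l) + sum(max(c[z] - l, 0)
                                 for z in adj_x if z != y)
        if worst <= 1:
            return l
        l += 1
\end{verbatim}
With these definitions established, we can now prove Theorem \ref{thm:corefan}.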
\begin{proof}[Proof of Theorem~\ref{thm:corefan}]
Suppose that $\corefan(H) \leq t$. We will show that this implies that $\fan(G)\leq \Delta+t$, which in turn implies that $\Fan(G)\leq \Delta+t$, as desired.
If $G$ is an edgeless graph, then $\fan(G)=0$ by definition, so our result is immediate. Now suppose that $G$ has at least one edge, and let any subgraph $J \subset G$ with $E(J)\neq \emptyset$ be given. We will show that there exists $xy\in E(J)$ with
$\deg_J(x,y)\leq \Delta(G)+t$; in particular we will show that for all $Z \subset N_J(x)$ with $y \in Z$ and $\sizeof{Z} \geq 2$,
\[\sum_{z \in Z}(d_J(z) + \mu_J(x,z) - (\Delta(G)+t)) \leq 1.\]
Let $K = J \cap H$. We consider two cases:
either $K$ contains an edge, or $K$ contains no edges.
\caze{1}{$K$ contains an edge.} In this case, since $K\subseteq H$ and we know that $\corefan(H)\leq t$, we know that there exists $xy\in E(K)$ with $\cdeg_{H, K}(x,y)\leq t$, that is, with
\[\sum_{z \in Z}(d_K(z) - d_H(z) + \mu_K(x,z) - t) \leq 1 \]
for all $Z \subset N_K(x)$ with $y \in Z$. Note that for any $z\in V(K)$,
\[d_J(z)-d_K(z)+d_H(z)\leq d_G(z)\leq \Delta(G).\]
So we get that for all $Z \subset N_K(x)$ with $y \in Z$,
\[\sum_{z \in Z}(d_J(z) + \mu_K(x,z) - (\Delta(G)+t)) \leq 1.\]
Now observe that if $w \in N_J(x) - V(H)$, then by the definition of
the $t$-core of $G$, we have $d_G(w) + \mu(w) \leq \Delta(G)+t$. So
in fact we can say that the above bound holds for all
$Z\subseteq N_J(x)$ with $y\in Z$ and $\sizeof{Z} \geq 2$, as
desired. (Note that this is the reason we cannot impose the
restriction that $\sizeof{Z} \geq 2$ in the definition of cfan
degree: if we imposed that restriction and had $N_K(x) = \{y\}$ but
$\sizeof{N_J(x)} \geq 2$, we would have no control over the value
of $d_J(y) + \mu_K(x,y) - (\Delta(G)+t)$.)
\caze{2}{$K$ has no edges.} In this case, let $(x,y)$ be any pair such that $xy \in E(J)$,
taking $x \in V(H)$ if possible. Our choice of $x$ implies that for all $z \in N_J(x)$,
we have $z \notin V(H)$, hence $d_G(z) + \mu_G(z) \leq \Delta(G)+t$ by the definition of a $t$-core.
Thus, for every $Z \subset N_J(x)$ with $y \in Z$ and $\sizeof{Z} \geq 2$, we have
\[ \sum_{z \in Z}(d_J(z) + \mu_J(x,z) - (\Delta(G)+t)) \leq \sum_{z \in
Z}(d_G(z) + \mu_G(x,z) - (\Delta(G)+t)) \leq 1, \]
as needed.
\end{proof}
\section{Proof of Theorem \ref{thm:Bqueuecore}}\label{sec:bqueue}
We start this section by providing the definition of a \emph{full $B$-queue}, which is needed for Theorem \ref{thm:Bqueuecore} (and for Theorem \ref{thm:hoffman-rodger}). Hoffman and Rodger \cite{hoffman-rodger} defined a \emph{$B$-queue} of a simple graph
$B$ to be a sequence of vertices $(u_1, \ldots, u_q)$ and a sequence
of vertex subsets $(S_0, S_1, \ldots, S_q)$ such that:
\begin{enumerate}[(i)]
\item $S_0 = \emptyset$, and
\item For all $i \in [q]$:
\begin{itemize}
\item $S_i = N(u_i) \cup \{u_i\} \cup S_{i-1}$,
\item $1 \leq \sizeof{S_i \setminus S_{i-1}} \leq 2$,
\item $u_i \notin \{u_1, \ldots, u_{i-1}\}$, and
\item $\sizeof{S_i \setminus (S_{i-1} \cup \{u_i\})} \leq 1$.
\end{itemize}
\end{enumerate}
If $S_q = V(B)$ then we say the $B$-queue is \emph{full}. We noted
in the introduction that every simple forest $B$ admits a full
$B$-queue. To see this, first suppose that a $B$-queue
$(u_1, \ldots, u_{i-1})$ and $(S_0, \ldots, S_{i-1})$ has already
been defined for $B$, but the $B$-queue is not full,
i.e.\ $S_{i-1}\neq V(B)$. If $B-S_{i-1}$ consists only of isolated
vertices, then they may be chosen in any order as
$u_{i}, u_{i+1}, \ldots$ so as to get a full $B$-queue. If not, then
$B-S_{i-1}$ is a forest, so it contains a leaf vertex which can be
chosen for $u_{i}$. With this choice $\sizeof{S_{i}\setminus S_{i-1}}=2$ and
$\sizeof{S_{i} \setminus (S_{i-1} \cup \{u_i\})}=1$ (since
$u_{i}\not\in S_{i-1}$ in this case), so $(u_1, \ldots, u_i)$ and
$(S_0, \ldots, S_i)$ is again a $B$-queue. This process can be
repeated until the $B$-queue is full.
In addition to forests, there are many other simple graphs $B$ that have full $B$-queues. For example, while a cycle itself does not have a full $B$-queue (there is no valid choice for $u_1$), adding any number of pendant edges to a cycle allows the same procedure described above to yield a full $B$-queue. For another, more complicated example, see \cite{hoffman-rodger}.
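Since the greedy algorithm is guaranteed to work, deciding whether a simple graph has a full $B$-queue is straightforward to implement. The following Python sketch (our own illustration; the adjacency-dictionary input is hypothetical) extends the current queue by any valid vertex and reports failure only when no valid choice remains:
\begin{verbatim}
def full_B_queue(adj):
    """Greedy search for a full B-queue.

    adj maps each vertex to the set of its neighbours.  Returns the
    vertex sequence (u_1, ..., u_q) if a full B-queue exists, and
    None otherwise; by the Hoffman--Rodger result, greedy choices
    never need to be revisited.
    """
    S, used, seq = set(), set(), []
    while S != set(adj):
        for u in adj:
            if u in used:
                continue
            new = (adj[u] | {u}) - S            # S_i minus S_{i-1}
            if 1 <= len(new) <= 2 and len(new - {u}) <= 1:
                seq.append(u); used.add(u); S |= new
                break
        else:
            return None                          # stuck: no full B-queue
    return seq
\end{verbatim}
On a cycle the very first step already fails (any candidate $u_1$ would add three new vertices), while attaching pendant edges provides valid starting vertices, matching the examples above.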
In this section, we'll prove the following result.
\begin{theorem}\label{thm:Bqueuecorefan} Let $G$ be a graph, let $t\geq 0$, and let $H$ be the $t$-core of $G$. If $H$ has multiplicity at most $t+1$, and the underlying simple graph $B$ of those maximum multiplicity edges has a full $B$-queue, then $\corefan(H)\leq t$.
\end{theorem}
Given the conclusion of Theorem \ref{thm:Bqueuecorefan}, Theorems \ref{thm:SS} and \ref{thm:corefan} immediately tell us that
\[ \chi'(G)\leq \Fan(G)\leq \Delta+t, \]
which in particular implies Theorem \ref{thm:Bqueuecore}.
We'll prove Theorem \ref{thm:Bqueuecorefan} by establishing a sequence of lesser results. The first such lemma, which follows, says that when looking for an upper bound on $\corefan(H)$, it suffices to look at a subgraph of $H$ formed by high-multiplicity edges.
\begin{lemma}\label{lem:lowmult}
Let $H$ be a graph, let $t$ be a nonnegative integer, and let $H_{>t}$ be
the subgraph of $H$ consisting of the edges with multiplicity greater
than $t$. The following are equivalent:
\begin{enumerate}[(i)]
\item $\corefan(H) \leq t$,
\item $\corefan(H_{>t}) \leq t$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $H' = H_{>t}$.
All nonempty subgraphs of $H'$ are also subgraphs of $H$ which must be considered
when computing $\corefan(H)$, so (i)$\implies$(ii) is immediate. To show that (ii)$\implies$(i),
let $K$ be any nonempty subgraph of $H$. We will find a pair $(x,y)$ with $\cdeg_{H,K}(x,y) \leq t$.
If all edges of $K$ have multiplicity at most $t$, then let $(x,y)$
be any pair with $xy \in E(K)$. For any $Z \subset N_K(x)$ with
$y \in Z$, all terms of the sum
\[ \sum_{z \in Z}(d_K(z) - d_H(z) + \mu_K(x,z) - t) \] are
nonpositive, so this sum is clearly at most $1$, as desired.
Thus, we may assume that $K$ has some edges of multiplicity at least
$t+1$. Let $K' = K \cap H'$; now $K'$ is a nonempty subgraph
of $H'$, so we obtain a pair $(x,y)$ such that
$\cdeg_{H', K'}(x,y) \leq t$. We claim that also
$\cdeg_{H, K}(x,y) \leq t$.
For any $Z \subset N_K(x)$ with
$y \in Z$, let $Z' = Z \cap N_{K'}(x)$. For any $z \in Z-Z'$, we
have $\mu_K(x,z) \leq t$, so the contribution of $z$ to the sum
\[ \sum_{z \in Z}[d_K(z) - d_H(z) + \mu_K(x,z) - t] \]
is nonpositive. Moreover, for every $v \in V(K)$, every edge
that is lost when we pass from $K$ to $K'$ is also lost when we
pass from $H$ to $H'$, so that $d_K(v) - d_{K'}(v) \leq d_H(v) - d_{H'}(v)$,
which rearranges to $d_K(v) - d_{H}(v) \leq d_{K'}(v) - d_{H'}(v)$. Since
also $\mu_K(u,v) = \mu_{K'}(u,v)$ for all $uv \in E(K')$, this yields
\begin{align*}
\sum_{z \in Z}[d_K(z) - d_H(z) + \mu_K(x,z) - t] &\leq \sum_{z \in Z'}[d_K(z) - d_H(z) + \mu_K(x,z) - t] \\
&\leq \sum_{z \in Z'}[d_{K'}(z) - d_{H'}(z) + \mu_{K'}(x,z) - t] \leq 1,
\end{align*}
where the last inequality follows from $\cdeg_{H', K'}(x,y) \leq t$.
\end{proof}
When trying to determine $\corefan$ for a given graph, one need only
focus on \emph{full multiplicity} subgraphs, as indicated in the
following lemma, and we'll see this to be a helpful idea. A subgraph
$K$ of a graph $H$ has \emph{full multiplicity} if
$\mu_{K}(e)=\mu_H(e)$ for all $e\in E(K)$. (Note that some edges of
$H$ may be omitted from $K$ entirely.)
\begin{lemma}\label{lem:fullmult}
For any graph $H$,
\[ \corefan(H) = \max_K \min \{ \cdeg_{H,K}(x,y) \st xy \in E(K) \}, \]
where the maximum is taken over all nonempty subgraphs $K \subset H$ such that $K$ has full
multiplicity.
\end{lemma}
\begin{proof} Consider a subgraph $K$ of graph $H$, and define $K'$ to be the subgraph of $H$ having the same underlying simple graph as $K$, but having full multiplicity. Note that the truth of the lemma would follow if we could establish
\[ \cdeg_{H,K'}(x,y) \geq \cdeg_{H,K}(x,y) \]
for all pairs $(x,y)$ with $xy \in E(K)$. To this end, note that the definition of cdeg gives us
\begin{equation}
\label{eq:znkprime}
\sum_{z \in Z'}(d_{K'}(z) - d_H(z) + \mu_{K'}(x,z) - \cdeg_{H,K'}(x,y)) \leq 1
\end{equation}
for all $Z' \subset N_{K'}(x)$ with $y \in Z'$.
Since $d_{K'}(z) \geq d_{K}(z)$ for all $z$ and $\mu_{K'}(x,z) \geq \mu_K(x,z)$ for all $x,z$, we see that
for all such $Z'$, we also have
\begin{equation}
\label{eq:znk}
\sum_{z \in Z'}(d_{K}(z) - d_{H}(z) + \mu_{K}(x,z) - \cdeg_{H,K'}(x,y)) \leq 1.
\end{equation}
Since $N_{K'}(x) = N_{K}(x)$ for all $x$, we see that Inequality~\eqref{eq:znk} holds
(with $Z$ replacing $Z'$) for all $Z \subset N_{K}(x)$ with $y \in Z$. This gives us our desired inequality.
\end{proof}
We now turn our focus to computing $\corefan$ in graphs of
\emph{constant multiplicity}, that is, graphs where every edge has the
same multiplicity.
\begin{lemma}\label{lem:tplus1}
Let $H$ be a graph of constant multiplicity $t+1$, and for every
nonempty subgraph $K \subset H$, let $Z(K) = \{v \in V(K) \st d_K(v) = d_H(v)\}$.
The following are equivalent:
\begin{enumerate}[(i)]
\item $\corefan(H) \leq t$,
\item For every nonempty full-multiplicity subgraph $K \subset H$, there is an edge $xy \in E(K)$
such that $\sizeof{(N_H(x) \cap Z(K)) - y} \leq d_H(y) - d_K(y)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i)$\implies$(ii): Let any nonempty full-multiplicity subgraph $K \subset H$ be given,
and let $(x,y)$ be a pair such that $xy \in E(K)$ and $\cdeg_{H,K}(x,y) \leq t$. Let $Z = (N_H(x) \cap Z(K)) \cup \{y\}$.
Observe that $z \in N_H(x) \cap Z(K)$ implies that $xz \in E(K)$, so that $Z \subset N_K(x)$;
thus, since $\cdeg_{H,K}(x,y) \leq t$, we have
\[ \sum_{z \in Z}[ d_K(z) - d_H(z) + \mu_K(x,z) - t] \leq 1. \]
Since vertices in $Z - y$ contribute exactly $1$ to this sum, this implies that
\[ \sizeof{Z - y} + (d_K(y) - d_H(y) + \mu_K(x,y) - t) \leq 1, \]
so by the definition of $Z$,
\[ \sizeof{(N_H(x) \cap Z(K)) -y} \leq d_H(y) - d_K(y) - \mu_K(x,y) + t + 1 = d_H(y) - d_K(y), \]
where in the last equality we have used that $\mu_K(x,y) = t+1$, since $K$ is full-multiplicity and $xy \in E(K)$.
\smallskip
(ii)$\implies$(i): We apply Lemma~\ref{lem:fullmult}. Let any
nonempty full-multiplicity subgraph $K \subset H$ be given, and let
$xy$ be an edge such that
$\sizeof{(N_H(x) \cap Z(K)) - y} \leq d_H(y) - d_K(y)$. We claim
that $\cdeg_{H,K}(x,y) \leq t$.
Let $Z$ be any subset of $N_K(x)$ containing $y$. We must show that
$\sum_{z \in Z}[ d_K(z) - d_H(z) + \mu_K(x,z) - t] \leq 1$. Observe
that any elements of $Z - Z(K)$ contribute
a nonpositive term to this sum, while elements of $Z(K)$ contribute $1$, so that
\begin{align*}
\sum_{z \in Z}[ d_K(z) - d_H(z) + \mu_K(x,z) - t] &\leq \sum_{z \in (N_H(x)\cap Z(K)) \cup \{y\}}[ d_K(z) - d_H(z) + \mu_K(x,z) - t ] \\
&\leq \sizeof{(N_H(x) \cap Z(K)) - y} + (d_K(y) - d_H(y) + 1) \\
&\leq (d_H(y) - d_K(y)) + (d_K(y) - d_H(y) + 1) \\
&= 1.\qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lem:flatten}
Let $B$ be a simple graph, and let $B_s$ and $B_t$ be graphs of
constant multiplicity $s+1$ and $t+1$ respectively, with underlying
simple graph $B$. If $0\leq s < t$ and $\corefan(B_s) \leq s$, then
$\corefan(B_t) \leq t$.
\end{lemma}
\begin{proof}
First observe that for every vertex $v \in V(B)$, we have
$N_B(v) = N_{B_s}(v) = N_{B_t}(v)$; thus, we suppress the
subscripts and simply write $N(v)$. To avoid double-subscripts, we
will also write $d_s$ and $\mu_s$ as shorthand for $d_{B_s}$ and
$\mu_{B_s}$, and likewise for $t$.
We verify Condition~(ii) of Lemma~\ref{lem:tplus1} for $B_t$. Let
$K$ be any nonempty full-multiplicity subgraph of $B_t$, and let
$K'$ be the full-multiplicity subgraph of $B_s$ having the same
underlying simple graph. Observe that $Z(K') = Z(K)$, and that $K'$
is nonempty since it has the same underlying simple graph as $K$.
Applying Lemma~\ref{lem:tplus1} to $B_s$, there is an edge $xy \in E(K')$ such that
\[
\sizeof{(N(x) \cap Z(K')) - y} \leq d_{s}(y) - d_{K'}(y).
\]
By the definition of $K'$, we have $\mu_K(x,y) > 0$, that is, $xy \in E(K)$.
Since $Z(K') = Z(K)$, it therefore suffices to show that $d_{s}(y) - d_{K'}(y) \leq d_{t}(y) - d_{K}(y)$. This follows from observing that if $J$ is the common underlying simple
graph of $K$ and $K'$, then
\[ d_{s}(y) - d_{K'}(y) = (s+1)[d_{B}(y) - d_{J}(y)] \leq (t+1)[d_{B}(y) - d_J(y)] = d_{t}(y) - d_{K}(y). \qedhere \]
\end{proof}
\begin{remark}
The converse of Lemma~\ref{lem:flatten} is not true: for a simple graph $H$, it is possible that
$\corefan(H_1) \leq 1$ yet $\corefan(H) > 0$, where we consider $H$ itself as the graph $B_s$ for $s=0$, and $H_1$, of constant multiplicity $2$, as the graph $B_t$ for $t=1$.
Consider the simple graph $H$ shown in Figure~\ref{fig:flattenconverse}.
\begin{figure}
\centering
\begin{tikzpicture}
\rpoint{w} (u1) at (0cm, 1cm) {};
\apoint{} (u2) at (-1cm, 0cm) {};
\apoint{} (u3) at (1cm, 0cm) {};
\rpoint{u} (u4) at (0cm, -1cm) {};
\rpoint{v} (v) at (0cm, -2cm) {};
\draw (u1) -- (u2) -- (u4) -- (u3) -- (u1); \draw (u2) -- (u3); \draw (v) -- (u4);
\end{tikzpicture}
\caption{Simple graph $H$ such that $\corefan(H_1) \leq 1$ but $\corefan(H) > 0$.}
\label{fig:flattenconverse}
\end{figure}
To see that $\corefan(H) > 0$, consider the subgraph $K = H-v$. If $\corefan(H)\leq 0$, then there exists $xy\in E(K)$ with
\[ \sum_{z \in Z}(d_K(z) - d_H(z) +1 - 0) \leq 1 \] for all
$Z\subseteq N_K(x)$ with $y\in Z$. However, the only vertex in $K$
that does not have the same degree in $K$ as it does in $H$ is $u$,
and this difference in degree is only one. Hence $u$ is the only
vertex that could contribute a nonpositive amount to this sum. If
$x$ is not $u$ or $w$, then $x$ has degree $3$ in $K$, so the sum
cannot be at most 1. On the other hand, if $x$ is $u$ or $w$, then
while $x$ has only degree two in $K$, neither of these neighbours is
$u$.
To see that $\corefan(H_1) \leq 1$, let any full-multiplicity
subgraph $K \subset H_1$ with $E(K)\neq \emptyset$ be given. According
to Lemma \ref{lem:tplus1} we need only find $xy \in E(K)$ with
\[\sizeof{(N_{H_1}(x) \cap Z(K)) - y} \leq d_{H_1}(y) - d_K(y),\]
where $Z(K) = \{v \in V(K) \st d_K(v) = d_{H_1}(v)\}$. If $uv \notin E(K)$ but $K$ does have at least one edge incident to $u$, then we may take $y=u$ and $x \in N_K(u)$, so that
\[ \sizeof{(N_{H_1}(x) \cap Z(K)) - y} \leq 2 \leq d_{H_1}(y) -
d_{K}(y). \] (Recall that $H_1$ has constant multiplicity $2$,
so at a minimum, the two copies of $uv$ are missing at $y$ in the
subgraph $K$.) Otherwise, we may choose $xy$ so that
$(N_{H_1}(x) \cap Z(K)) - y =\emptyset$ and hence we immediately
get our desired inequality: if $vu\in E(K)$ then we take
$(x,y) = (v,u)$, and if $K$ has no edges incident to $u$ then
$Z(K)$ is either $\emptyset$ or $\{w\}$, and in the latter
case we may choose $y=w$.
\end{remark}
The following is the final result we need in order to write our proof of Theorem \ref{thm:Bqueuecorefan}.
\begin{theorem}\label{thm:qcorefan}
Let $B$ be a simple graph. If $B$ has a full $B$-queue, then $\corefan(B) \leq 0$.
\end{theorem}
\begin{proof}
Consider a full $B$-queue with vertex sequence $(u_1, \ldots, u_q)$ and set sequence $(S_0, \ldots, S_q)$.
Let $K\subseteq B$ with $E(K)\neq\emptyset$. According to Lemma \ref{lem:tplus1} (applied with $t=0$) we need only find $xy \in E(K)$ with
\[\sizeof{(N_{B}(x) \cap Z(K)) - y} \leq d_B(y) - d_K(y),\]
where $Z(K) = \{v \in V(K) \st d_K(v) = d_B(v)\}$. We consider two cases:
\caze{1}{$K$ contains an edge incident to $u_i$ for some $i$.}
Choose $i$ to be the smallest such index, and let $x = u_i$. If
there is some vertex in $N_K(u_i) \cap (S_i \setminus S_{i-1})$,
let $y$ be such a vertex; otherwise, let $y$ be an arbitrary
element of $N_K(x)$. We claim that
$N_{B}(x) \cap Z(K) \subset \{y\}$. To this end, consider any
$z \in N_K(x)$ with $z \neq y$. The definition of a full $B$-queue implies
that $z \in S_j$ for some $j \leq i$. Take the smallest such
$j$. If $j=i$, then necessarily $z=y$, since
$\sizeof{S_i \setminus (S_{i-1} \cup \{u_i\})} \leq 1$. Otherwise,
$j < i$, and since $z\neq u_j$ (by choice of $i$) we know that
$u_j \in N_B(z)$. But again, by our choice of $i$, the edge $u_jz$
cannot be in $K$, and so $z \notin Z(K)$. It follows
that \[ \sizeof{(N_B(x) \cap Z(K)) - y} = \sizeof{(N_K(x) \cap Z(K)) - y} = 0 \leq d_B(y) - d_K(y),\] as
desired.
\caze{2}{$K$ has no edges incident to $u_i$ for any $i$.} In this case, choose any $xy \in E(K)$. Since the $B$-queue is full, every vertex of $B$ either is some $u_i$ or is adjacent to some $u_i$, so $N(x) \cap Z(K) =\emptyset$. Thus again we have
\[\sizeof{(N(x) \cap Z(K)) - y} = 0 \leq d_B(y) - d_K(y)\]
as desired.
\end{proof}
\begin{remark} The converse of Theorem~\ref{thm:qcorefan} does not
hold. In particular, there is an infinite family
$\{H_p\}_{p \geq 4}$ of simple graphs such that for all $p$,
$\corefan(H_p) = 0$ but $B=H_p$ does not have a full $B$-queue. To
see this, let $B=H_p$ be the graph obtained from the complete graph
$K_p$ by designating a special vertex $v$ and attaching $p-2$
pendant edges to $v$. Let $z_1, \ldots, z_{p-2}$ be the vertices of
degree $1$ adjacent to $v$. The graph $H_4$ is shown in
Figure~\ref{fig:bqueueconverse}.
\begin{figure}
\centering
\begin{tikzpicture}
\apoint{} (w) at (0cm, 0cm) {};
\rpoint{v} (u1) at (-90 : .67cm) {};
\rpoint{} (u2) at (30 : 1cm) {};
\rpoint{} (u3) at (150 : 1cm) {};
\draw (u1) -- (u2) -- (u3) -- (u1);
\foreach \i in {1,2,3} { \draw (w) -- (u\i); }
\bpoint{z_1} (z1) at (-.5cm, -1.5cm) {};
\bpoint{z_2} (z2) at (.5cm, -1.5cm) {};
\draw (z2) -- (u1) -- (z1);
\end{tikzpicture}
\caption{The graph $H_4$, a simple graph with $\corefan(H_4) = 0$ that does not admit a full $B$-queue.}
\label{fig:bqueueconverse}
\end{figure}
If $H_p$ has a full $B$-queue with vertex sequence
$(u_1, u_2, \ldots)$ then at least one vertex of the $K_p$ must
occur as a $u_i$; choose the smallest such $i$. If $u_i \in S_{i-1}$,
then $u_i=v$ and $\sizeof{S_i \setminus S_{i-1}} \geq \sizeof{N(u_i)\setminus S_{i-1}}\geq 3$, a
contradiction. If $u_i\not\in S_{i-1}$, then $\sizeof{S_i\setminus S_{i-1}} \geq 3$,
again a contradiction. Hence $H_p$ does not admit a full $B$-queue.
Now we claim that $\corefan(H_p) \leq 0$. We apply Lemma~\ref{lem:tplus1}.
Let $K$ be any subgraph of $H_p$ with $E(K)\neq \emptyset$. If $K$ does not contain any of the pendant edges incident to $v$, but $v$ does have at least one incident edge in $K$,
then let $y=v$ and take any $x \in N_K(v)$. Since $\sizeof{N_{H_p}(x)}\leq p-1$ and $d_{H_p}(y) - d_K(y) \geq p-2$, we get that
\[ \sizeof{(N_{H_p}(x) \cap Z(K)) - y} \leq p-2 \leq d_{H_p}(y) - d_K(y), \]
as desired. Otherwise we can choose $xy\in E(K)$ so that $(N_{H_p}(x) \cap Z(K)) - y=\emptyset$, and hence we immediately get our desired inequality: if some pendant edge $z_iv$ lies in $K$, we take $x=z_i$ and $y=v$, so that $N_{H_p}(x)=\{v\}$; otherwise $K$ has no edges incident to $v$, and since $v$ is joined to every other vertex of $H_p$, this yields $Z(K) = \emptyset$, so that $N_{H_p}(x) \cap Z(K) = \emptyset$
no matter which $x$ we choose.
\end{remark}
We can now prove Theorem \ref{thm:Bqueuecorefan}.
\begin{proof}[Proof of Theorem \ref{thm:Bqueuecorefan}] By Lemma \ref{lem:lowmult}, it suffices to show
that $\corefan(H_{>t})\leq t$, where $H_{>t}$ is the subgraph of $H$
consisting of the edges with multiplicity greater than $t$. If $H$
has multiplicity at most $t$, then $H_{>t}$ is edgeless, and hence
$\corefan(H_{>t})=0$ by definition. So, we may assume that $H$ has
maximum multiplicity exactly $t+1$, and that $H_{>t}$ is the subgraph $H_t$ of $H$
consisting precisely of all the edges in $H$ of multiplicity
$t+1$. By Lemma \ref{lem:flatten}, we get our desired result of
$\corefan(H_t)\leq t$ provided $\corefan(B)\leq 0$. Since $B$ has a
full $B$-queue, Theorem \ref{thm:qcorefan} indeed tells us that
$\corefan(B)\leq 0$, thus completing our proof.
\end{proof}
\section{Proof of Theorem \ref{thm:converse}}\label{sec:converse}
Before we begin the main proof of Theorem~\ref{thm:converse}, it will
help to record a lemma about constructing regular graphs with perfect
matchings.
\begin{lemma}\label{lem:regmatching}
Let $n$, $k$, and $r$ be positive integers with $r\leq n$ and $k<n$. If $k$ and $r$ are even,
then there is a $k$-regular simple graph $G$ on $n$ vertices containing a
vertex set $S_r$ of size $r$ such that the induced subgraph $G[S_r]$
has a perfect matching.
\end{lemma}
\begin{proof}
For any even $k$ and any $n>k$, the standard circulant graph
construction (see, e.g., Chapter~12 of \cite{handbook-algo}) yields
a $k$-regular simple graph on $n$ vertices with a matching $M$ that covers
at least $n-1$ vertices. In particular, $M$ covers $n$ vertices if
$n$ is even, and $n-1$ vertices if $n$ is odd. Since $r$ is even, we
see that the number of vertices covered by $M$ is at least
$r$. Thus, one may choose any set of $r/2$ edges in $M$, and take
$S_r$ to be the set of vertices covered by those edges.
\end{proof}
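For concreteness, one possible construction is the circulant graph on $\mathbb{Z}_n$ with connection offsets $1, \ldots, k/2$: it is $k$-regular, and since offset $1$ is present, the consecutive pairs $(0,1), (2,3), \ldots$ form a matching covering $n$ or $n-1$ vertices. A minimal Python sketch (our own illustration, not necessarily the construction of \cite{handbook-algo}):
\begin{verbatim}
def circulant_with_matching(n, k):
    """k-regular circulant graph on {0,...,n-1} (k even, k < n)
    with a matching of consecutive pairs covering >= n-1 vertices."""
    assert k % 2 == 0 and 0 < k < n
    edges = {tuple(sorted((i, (i + d) % n)))
             for i in range(n) for d in range(1, k // 2 + 1)}
    # (2j, 2j+1) is always an edge because offset 1 is a connection
    matching = [(2 * j, 2 * j + 1) for j in range(n // 2)]
    return edges, matching
\end{verbatim}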
\begin{proof}[Proof of Theorem \ref{thm:converse}] Our goal is to build a graph $G$ whose $t$-core is $H$ and with $\fan(G)>\Delta(G)+t$ (and hence $\Fan(G)>\Delta+t$). Since $\corefan(H)>t$ there exists $K\subseteq H$, $E(K)\neq\emptyset$ with $\cdeg_{H,K}(x,y)> t$ for all $xy \in E(K)$, that is, with
\[\sum_{z \in Z}(d_K(z) - d_H(z) + \mu_K(x,z) - t) > 1\]
for some $Z\subseteq N_K(x)$ with $y\in Z$. Note that we may choose $K$ so that it has no isolated vertices. Choose positive integers $D$ and $r$ satisfying all of the following conditions:
\begin{enumerate}[(a)]
\item $r\geq \Delta(H) + 6 + t$,
\item $r$ is even,
\item $D \geq 3r + t$,
\item $D \geq \Delta(H) + 2r^2$,
\item $D+t$ is an even multiple of $r-1$, with this multiple being at least 4.
\end{enumerate}
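Such a choice of $D$ and $r$ always exists; for instance (a minimal sketch, with a hypothetical helper name):
\begin{verbatim}
def choose_parameters(Delta_H, t):
    """Return (r, D) satisfying conditions (a)-(e)."""
    r = Delta_H + 6 + t                          # (a)
    if r % 2: r += 1                             # (b), preserving (a)
    lower = max(3 * r + t, Delta_H + 2 * r * r)  # (c), (d)
    m = max(4, -(-(lower + t) // (r - 1)))       # ceiling division
    if m % 2: m += 1                             # (e): even multiple >= 4
    return r, m * (r - 1) - t                    # D + t = m (r - 1)
\end{verbatim}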
Initialize $G=H$. Our construction proceeds in several
stages; at each stage, we will add vertices and/or edges to $G$. When the construction is complete we will verify that $G$ indeed has $t$-core $H$ and $\fan(G)>\Delta(G)+t$.
\stage{1} In this stage, we will add vertices and edges to our initial $G=H$ in
order to guarantee that $d_G(x) = D$ for all $x \in V(K)$.
Let $p = \sizeof{V(K)}$, and write $V(K) = \{x_1, \ldots,
x_p\}$. Note that since $K$ is not edgeless, we know that $p\geq
2$. For each $x_i \in V(K)$, let $d_i = D - \deg_H(x_i)$. We can
write each $d_i$ as $d_i = \alpha_i (r-1) + \beta_i$ where $\alpha_i$
and $\beta_i$ are integers with $0 \leq \beta_i \leq r-2$. We
rewrite this equation as
\[ d_i = (\alpha_i - \beta_i)(r-1) + \beta_i r. \] Let
$a_{i,r-1} = \alpha_i - \beta_i$ and let $a_{i,r} = \beta_i$. We know that $a_{i,r}\in\{0, \ldots, r-1\}$; note also that by assumption (d),
\begin{align*}
a_{i, r-1}&=\alpha_i-\beta_i
= \left(\tfrac{d_i-\beta_i}{r-1}\right)-\beta_i
=\left(\tfrac{D-\deg_H(x_i)-\beta_i}{r-1}\right)-\beta_i\\
&\geq \left(\tfrac{D-\Delta(H)-\beta_i}{r-1}\right)-\beta_i \geq \left(\tfrac{2r^2-\beta_i}{r-1}\right)-\beta_i \geq r
\end{align*}
If $\sum a_{i,r}$ is odd, then we redefine the first pair
$(a_{1,r}, a_{1,r-1})$ to be $(a_{1,r} +(r-1), a_{1,r-1} -r)$, which
will change the parity of the sum since $r$ is even by assumption
(b). Given the above inequality, we still have that
$a_{i, r}, a_{i, r-1} \geq 0$ for all $i$; in fact we know that
$a_{i, r-1}\geq r$ except possibly when $i=1$. We also still have that
\[d_i=a_{i, r}(r)+ a_{i, r-1}(r-1).\]
For $\ell \in \{r-1, r\}$, let $s_{\ell} = \sum_{i=1}^p a_{i,\ell}$,
and let $S_{\ell}$ be a set of $s_{\ell}$ new vertices added to
$G$. Our definition of the numbers $a_{i, \ell}$ guarantees that
$s_r$ is even, and this will be helpful for us in a later stage of
our construction.
For each $x_i \in V(K)$, choose a disjoint set $T$ of $a_{i,\ell}$
vertices from $S_{\ell}$, and add an edge of multiplicity $\ell$
from $x_i$ to each vertex of $T$. Once we complete this procedure
for all $i$ and both $\ell$, we see that every vertex in $K$ has
degree $D$. Let $S = S_{r-1} \cup S_r$ with $s=|S|$.
\stage{2} In this stage, we will add edges within $S$ to ensure that
every vertex in $S_{\ell}$ ends with degree $D-{\ell}+t$.
Our strategy in this stage will be, roughly, to first paste in a
regular multigraph on the vertex set $S$ to bring the vertices of
$S_{r-1}$ up to degree $D-(r-1)+1$ and the vertices of $S_r$ up to
degree $D-r+t+2$, and then remove parallel copies of a (carefully
planted) perfect matching from $S_r$ so that those vertices end with
degree $D-r+t$.
Let $k = \frac{D - 2(r-1)+t}{r-1} = \frac{D+t}{r-1} - 2$. By
assumption (e), $k$ is an even integer and $k\geq 2$. Since $sk$ is
even, we can construct a $k$-regular simple graph on the vertex set
$S$ provided $s>k$. To verify this, start by observing the
following, where we are using $r \geq \Delta(H)$ (by a weak version
of assumption (a)):
\[ s = \sum_{i=1}^p (a_{i,r-1} + a_{i,r}) \geq \sum_{i=1}^p \frac{d_i}{r} \geq \sum_{i=1}^p \frac{D-r}{r} \geq 2\frac{D-r}{r} = \frac{2D}{r} - 2. \]
To show $s>k$, it remains to prove that $2D/r > (D+t)/(r-1)$, which is true iff $D>t\left(\tfrac{r}{r-2}\right)$. This follows from $D > 2t$ (by assumption (c) coupled with a weak version of assumption (a)) and $r \geq 4$ (by an even weaker version of assumption (a)).
Let $A$ be a $k$-regular simple graph on the vertex set $S$. Since
$s_r$ is even, as established in the previous stage of construction,
Lemma~\ref{lem:regmatching} allows us to choose $A$ so that it
contains a perfect matching $M$ on the vertex set $S_r$.
We now modify $G$ as follows: make $G[S]$ have underlying graph $A$
with edges in $A-M$ having multiplicity $r-1$ and edges in $M$
having multiplicity $r-3$.
Observe that at this point, every vertex in $S_{r-1}$ has degree
\[ (r-1) + (r-1)k = (r-1) + (D+t - 2(r-1)) = D - (r-1)+t, \]
while every vertex in $S_r$ has degree
\[ r + (r-1)k - 2 = r + (D+t - 2(r-1)) - 2 = D-r+t. \]
\stage{3} In this last stage, we will bring each vertex $v \in V(H) - V(K)$
up to degree $D$. We do this by simply adding a single pendant edge
of multiplicity $r-1$ to $v$, followed by enough pendant edges of
multiplicity $1$ to obtain the desired degree. Note that this is possible since, by assumption (d),
$D \geq \Delta(H) +2r^2 \geq \Delta(H)+ r-1$, so the pendant edge of multiplicity
$r-1$ cannot itself make the degree greater than $D$.
This completes our construction of $G$.
\textbf{Verification of Properties.} We begin by verifying that $H$ is the $t$-core of $G$. To this end, note that $d(v) = D$ for
all $v \in V(H)$. For all $v \in S$, we have
$d(v) \leq D-(r-1)+t$, which is less than $D$ since $r-1 > t$ by a weak version of assumption (a). The endpoints of the pendant edges from Stage~3
have $d(v) \leq r-1 < D$ (using assumption (e) weakly). Hence $\Delta(G) = D$, with this degree achieved precisely by the vertices in $H$. Now, note that every vertex
of $H$ is incident to an edge of multiplicity $r-1$ or greater. So, for any vertex $v\in V(H)$,
\[d(v) + \mu(v) \geq D + (r-1) > D+t.\]
For $v \in S_r$ we have
\[d(v) + \mu(v) = (D-r+t) + r = D+t,\]
while for $v \in S_{r-1}$ we
have
\[d(v) + \mu(v) = (D - (r-1)+t) + (r-1) = D+t.\]
The endpoints
of the pendant edges added in Stage~3 have
\[d(v) + \mu(v) \leq (r-1)+(r-1) \leq D+t,\]
where the last inequality is another weak application of (e). Hence $H$ is indeed the $t$-core of $G$.
To verify that $\fan(G) > D+t$, we choose $J=G[K\cup S]$ and show that
$\deg_{J}(x,y) > D+t$ for all $xy \in E(J)$.
We know that $\mu(x,y) \leq r$ for all $xy \in E(J)$, with this coming from a weakened assumption (a) (and the fact that $\Delta(H)\geq \mu(H)$) when $xy\in E(K)$. Using the computations above, we know that
$d_J(v) \geq D-r$ for all $v \in V(J)$, with this coming from $r\geq \Delta(H)$ (again by assumption (a)) when $v\in V(K)$. We thus get
\[ d_J(x) + d_J(y) - \mu(x,y) \geq 2D - 3r > D+t, \]
where the last inequality is by assumption (c). Thus, Condition~(i) of the definition
of $\deg_J(x,y)$ fails for $k=D+t$.
Next we show that Condition~(ii) fails for $k=D+t$ as well. Consider
any $xy \in E(J)$. We consider two cases,
according to the location of $x$.
\caze{1}{$x \in V(K)$.} If $y \in V(K)$, let $y' = y$. Otherwise,
since $x$ is not isolated in $K$, we can take some $y' \in N_K(x)$. Since
$\cdeg_{H,K}(x,y') > t$ by our choice of $K$, there is a set $Z' \subset N_K(x)$ with
$y' \in Z'$ such that
\[ \sum_{z \in Z'}[ d_K(z) - d_H(z) + \mu_K(x,z) - t ] > 1. \]
Since $J$ includes all the edges of $K$, we have $Z' \subset N_J(x)$, with
$\mu_K(x,z) = \mu_J(x,z)$ for all $z \in Z'$. Furthermore, $d_H(z) - d_K(z) = D - d_J(z)$
for all $z \in Z'$ since $d_G(z) = D$ and the only $G$-edges incident to $z$ not
included in $J$ are the edges in $E(H) - E(K)$. Thus,
\begin{equation}
\label{ieq:xcase1}
\sum_{z \in Z'}[ d_J(z) + \mu_J(x,z) - (D + t)] = \sum_{z \in Z'}[d_K(z) - d_H(z) + \mu_K(x,z) - t] > 1.
\end{equation}
If $y \in Z'$ then this immediately implies that $\deg_J(x,y) > D + t$. Otherwise,
by our choice of $y'$, we have $y \in S_{\ell}$ for some $\ell$, in which case
\[ d_J(y) + \mu(x,y) - (D+t) = (D-\ell+t) + \ell - (D+t) = 0, \] so letting
$Z = Z' \cup \{y\}$ does not change the sum in
Inequality~\eqref{ieq:xcase1}, and the set $Z$ witnesses
$\deg_J(x,y) > D+t$.
\caze{2}{$x \in S$.} Let $y'$ be the unique neighbor of $x$ in $V(K)$.
Observe that
\begin{equation}
\label{ieq:yprime}
d_J(y') + \mu(x,y') - (D + t) \geq (D - \Delta(H)) + (r-1) - (D + t) = r-1-\Delta(H) - t \geq 5,
\end{equation}
where the last inequality comes from assumption (a).
Observe that for any $z \in N_J(x)$, even if $z \in S$ we still have
\begin{equation}
\label{ieq:minus3}
d_J(z) + \mu(x,z) - (D+t) \geq (D-r+t) + (r-3) - (D+t) = -3.
\end{equation}
If $y = y'$, then taking any $z \in N_J(x) - y$ and putting $Z = \{z,y\}$, we see that Inequalities
\eqref{ieq:yprime} and \eqref{ieq:minus3} together imply that $Z$ witnesses $\deg_J(x,y) > D+t$.
(Such a $z$ exists because, by our construction, $d_J(x) \geq (r-1)k \geq 2\mu(x)$.)
If $y \neq y'$, then since $y \in N_J(x)$, taking $Z = \{y, y'\}$
again yields a set witnessing $\deg_J(x,y) > D+t$, via the same
inequalities.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
\label{intro}
Several phases of matter appear only in the quantum degenerate regime, namely, superconductivity, superfluidity (SF) and supersolid (SS) phases. While the former two have been successfully explained, the supersolid phase \cite{Kim, Choi}, consistent with thermodynamic stability criteria \cite{Mendoza}, remains a subject of theoretical investigation. On the other hand, not necessarily at low temperatures, but also manifesting many-body quantum statistical behavior, the high-$T_c$ (HTc) superconductivity phenomenon still continues as an open question in the context of condensed matter.
Experiments with ultracold neutral gases are at present the closest candidates to quantum simulate, and thus address the description of, such not yet fully understood quantum phases \cite{Hofstetter, Chan, Liu, Buhler, Wang, Fujihara}.
Recent experimental studies have shown how quantum phases such as SF, SS, Mott insulator and charge density wave emerge from competing short- and long-range interactions among ultracold Bose atoms confined in an optical lattice coupled to a high-finesse optical cavity \cite{Esslinger}. Those phases arise as a result of exploiting the matter-light coupling, since in such a case interactions can be tuned on demand. There is, however, an alternative way of handling both the range and the direction of interactions in ultracold neutral gases, which is by confining dipolar atoms or molecules in optical lattices. As has been shown from the theoretical perspective, the combination of both the long-range anisotropic character of dipolar interactions and the controllable lattice structure where the atoms/molecules lie makes the many-body physics very rich \cite{Baranov2, Lahaye, Chen, Zinner}.
In this work we consider a model proposed previously \cite{Vanhala, Camacho, Ancilotto} to demonstrate that ordered density wave (DW), SS and SF phases can be accessed by changing the external fields that set the system. In Fig. \ref{Fig1} we show a scheme of the quantum simulator that can be created in the laboratory to explore the referred phases. The model system is composed of dipolar Fermi molecules lying in a bilayer array of square lattices in 2D. Although such a configuration has not been realized yet, the current experimental panorama of ultracold dipolar gases, in particular the potential capacity of loading long-lived Fermi molecules of NaK, KRb and NaLi \cite{Woo, Ni} in optical lattices, as well as the recently produced ro-vibrational ground state in NaK molecules, is promising for realizing the array considered here.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.0in]{Fig1}
\end{center}
\caption{(Color online) Schematic representation of the dipolar Fermi gas. The molecules affected by a harmonic trap potential lie in the lattice sites in up and down layers.}
\label{Fig1}
\end{figure}
Previous mean-field analyses of dipolar fermions placed on a single-layer square lattice, considering dipoles with arbitrary orientation and with fixed orientation, have predicted the melting between SF and DW phases \cite{Zinner1, Gadsbolle1, Gadsbolle2, Bhongale} and a variety of DW phases \cite{Mikelsons}, respectively. Also, an extended model including a mixture of Fermi molecules with contact interactions, loaded in a bilayer array, predicted density-ordered phases as well as superfluid phases \cite{Vanhala, Prasad, Baranov, Pikovski, Potter}. The possibility of supersolid phases in these dipolar Fermi gases has also been studied \cite{He,Gadsbolle1,Gadsbolle2}. On the other side, SF, Mott insulating, DW and SS phases of He have been investigated within a mean-field context too \cite{Rica,Ye,Nozieres}. In the present study we consider dipolar Fermi molecules situated in a double array of parallel optical lattices, with dipole orientation perpendicular to the lattice and in the presence of a harmonic trap, to demonstrate that in addition to SF and DW patterns there is a region of coexistence in the phase diagram where SS phases emerge. Working within the Bogoliubov-de Gennes (BdG) approach we show that, depending upon carefully controlled parameters, these phases can be accessed under current experimental conditions.
The paper is organized as follows. In Section \ref{Model} we introduce the model considered in our study and describe the theoretical approach employed. In Section \ref{SS} we illustrate the coexistence and spatial overlap among superfluid and DW phases for several values of the temperature. We summarize our findings in the phase diagram at finite and zero temperature. Finally, we present our conclusions in Section \ref{Conclusion}.
\section{Model}
\label{Model}
We consider Fermi molecules of dipole moment $d$ and mass $m$ lying in two parallel square lattices of lattice constant $a$ separated by a distance $\lambda$, and a harmonic trap with frequency $\omega$ (see Fig. \ref{Fig1}). In the presence of an electric field perpendicular to the layers, the dipoles align along the same direction. Fermions in the same layer repel each other always, however, dipoles in different layers attract each other at short range, while also repelling each other at large distances. Thus, interaction between fermions in the same and in different layers is given respectively by
\begin{eqnarray}\nonumber
&&V^{\alpha,\alpha}(\vec{r})=d^2\frac{1}{r^3},\\
&&V^{\alpha,\beta}(\vec{r})=d^2\frac{r^2-2\lambda^2}{(r^2+\lambda^2)^{5/2}},
\label{V_int}
\end{eqnarray}
where $r$ is the in-plane distance between two fermions. Greek indices label the layer where the molecule is placed. Thus, superscripts $\alpha,\alpha$ ($\alpha,\beta$) indicate that interaction occurs between fermions in same (different) layers. For clarity, we denote the intralayer interaction by $V^{\alpha,\alpha}(\vec{r})=V(\vec{r})$, and the interlayer interaction by $V^{\alpha,\beta}(\vec{r})=U(\vec{r})$.
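The sign structure of Eq.~(\ref{V_int}) is easy to inspect numerically: the interlayer potential is attractive ($U<0$) for $r<\sqrt{2}\,\lambda$ and repulsive beyond. A short Python sketch (illustrative only; units with $d=\lambda=1$ are our own choice):
\begin{verbatim}
import numpy as np

def V_intra(r, d=1.0):
    # V(r) = d^2 / r^3: always repulsive within a layer
    return d**2 / r**3

def U_inter(r, d=1.0, lam=1.0):
    # U(r) = d^2 (r^2 - 2 lam^2) / (r^2 + lam^2)^(5/2):
    # attractive for r < sqrt(2)*lam, repulsive at larger r
    return d**2 * (r**2 - 2 * lam**2) / (r**2 + lam**2)**2.5

r = np.linspace(0.5, 3.0, 2501)
print(r[np.argmax(U_inter(r) > 0)])   # first repulsive r, ~ sqrt(2)
\end{verbatim}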
The system is described by the Hubbard model with the Hamiltonian given by $\hat{H}=\hat{H}_0+\hat{V}+\hat{U}$, with{\small
\begin{eqnarray} \nonumber
&& \hat{H}_0=\sum_{\alpha=A,B}\sum_{\vec{k}}(\epsilon_{\vec{k}}-\mu_\alpha)\hat{n}_{\vec{k}}^{\alpha}+\sum_{\alpha=A,B}\sum_{\vec{i}}\frac{m\omega^2}{2} r^2(i)\hat{n}_{\vec{i}}^{\alpha}\\ \nonumber
&&\hat{V}=\frac{1}{2\Omega}\sum_{\alpha=A,B}\sum_{\vec{k},\vec{k'},\vec{q}}V(\vec{q})\hat{c}^\dagger_{\vec{k}+\vec{q},\alpha}\hat{c}_{\vec{k},\alpha}\hat{c}^\dagger_{\vec{k'}-\vec{q},\alpha}\hat{c}_{\vec{k'},\alpha}\\ \nonumber
&&\hat{U}=\frac{1}{\Omega}\sum_{\vec{k},\vec{k'},\vec{q}}U(\vec{k}-\vec{k'})\hat{c}^\dagger_{\vec{q}/2+\vec{k},A}\hat{c}^\dagger_{\vec{q}/2-\vec{k},B}\hat{c}_{\vec{q}/2-\vec{k'},B}\hat{c}_{\vec{q}/2+\vec{k'},A}\\
\label{H}
\end{eqnarray}}
where $\hat{c}^\dagger_{\vec{k},\alpha}, \hat{c}_{\vec{k},\alpha}$ are the standard creation and annihilation operators, $\hat{n}_{\vec{k},\alpha}=c^\dagger_{\vec{k},\alpha}c_{\vec{k},\alpha}$, and $\epsilon_{\vec k}= -2t(\cos{k_x a}+ \cos{k_y a})$ is the in-plane energy dispersion of the ideal Fermi gas within the tight-binding approximation, with $t$ the hopping amplitude between nearest neighbors. $V(\vec q)$ and $U(\vec k - \vec k')$ are the Fourier transforms of $V(\vec r- \vec r')$ and $U(\vec r)$, respectively, and $\Omega$ is the number of sites. The terms containing the harmonic confinement are written in the Fock basis of sites, where the vector position in the lattice is denoted by $\vec{r}(i)=a(i_x,i_y)$, and $\omega$ is the frequency of the harmonic trap that confines the molecules. In what follows, all the energies will be scaled with respect to $t$. We also introduce two relevant physical quantities: the dipolar interaction strength $a_d=m_{eff}d^2/\hbar^2$, with $m_{eff}= \hbar^2/2ta^2$ the effective mass, and the dimensionless parameters $\Lambda= \lambda/a$ and $\chi=a_d/a$.
The proposed model can be mapped into a system of fermions in two different hyperfine spin states $\uparrow, \downarrow$ ($A\rightarrow \uparrow, B\rightarrow \downarrow$) confined in a 2D lattice. The terms $V$ and $U$ describe repulsive and attractive interactions among fermions in the same and different hyperfine states respectively. We should stress that $U$ is an interaction that is attractive at short distances while becoming repulsive at long distances. Thus, by controlling the separation between the layers $\lambda$, the proposed model represents a promising candidate to study the quantum phases arising from competing short- and long-range interactions as well as attractive versus repulsive interactions. The inclusion of the harmonic potential also plays a crucial role since, as is well known, the global thermodynamics of the phase transition is qualitatively different from that of the homogeneous case \cite{Ayala}. A remarkable signature of this fact is that the magnitude of the coherence length can be as large as the typical confinement distance.
To investigate the physics of the model described above, we use mean-field theory including the usual BCS pairing terms and the Hartree contributions \cite{Hartree}. We expect this approximation to be reasonably accurate in the weakly interacting regime. The mean-field Hamiltonian is diagonalized by solving the Bogoliubov-de Gennes (BdG) equations \cite{Convergence}. This mean-field approach is commonly used for studying competing magnetic and superconducting phases in the context of high-$T_c$ superconductivity \cite{Chen, Chen2} and in the context of ultracold fermions \cite{Andersen}. It has also recently been employed for describing strongly correlated systems, like effective $p$-wave interactions and topological superfluids, in lower-dimensional (1D and 2D) quantum gases \cite{Wang}. The equation to be solved is,
$$
\sum_j\left( \begin{array}{cc}
H^{0}_{ij,\alpha} & \Delta_{i,j} \\
\Delta_{i,j} & -H_{ij,\bar{\alpha}}^0 \end{array} \right)\left( \begin{array}{c}
u_{j,\alpha}^n\\
v_{j,\bar{\alpha}}^n \end{array} \right)=E_n\left( \begin{array}{c}
u_{j,\alpha}^n\\
v_{j,\bar{\alpha}}^n \end{array} \right),$$
where the matrix elements $H^0_{ij,\alpha}$ incorporate the tunneling among nearest neighbors $t\delta_{\langle i,j\rangle}$, the effect of the harmonic confinement $\epsilon_i=\frac{m\omega^2}{2} r^2(i)$ and the inter-site interaction at the Hartree level, which is expected to dominate \cite{Hartree}, that is, $H^0_{ij,\alpha}=-t\delta_{\langle i,j\rangle}+\left(\sum_{l\neq i}V_{li}\langle n_{l,\alpha}\rangle+\epsilon_i-\mu\right)\delta_{i,j}$ with $\delta_{\langle i,j\rangle}$ the Kronecker delta for nearest neighbors. $\Delta_{i,j}=U_{i,j}\langle c_{i,A} c_{j,B}\rangle$ is the superfluid parameter. The eigenvalues denoted by $E_n$ are self-consistently obtained through the usual relations $n_{i,A}=\sum_{n}|u_{i,A}|^2f(E_n)$ and $n_{i,B}=\sum_{n}|v_{i,B}|^2(1-f(E_n))$ with $(u^n_{i\alpha},v^n_{i,\bar{\alpha}})$ the local Bogoliubov quasiparticle amplitudes, $f(E_n)$ the Fermi distribution and $n_{i,\alpha}$ the expectation value of $\hat{n}_{i,\alpha}$. For simplicity we shall assume that both layers are equally populated.
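To make the self-consistency cycle concrete, the following Python (NumPy) sketch implements a stripped-down version of the scheme just described on a small $L\times L$ lattice: for brevity it keeps only an on-site pairing of strength $U_0>0$ (playing the role of an attractive coupling magnitude in this convention) and omits the Hartree term and the long-range interaction tails, so it illustrates the iteration structure rather than the full model; all parameter names and values are our own choices, not the paper's.
\begin{verbatim}
import numpy as np

def bdg_selfconsistent(L, t, mu, m_omega2, U0, beta,
                       iters=300, mix=0.5):
    N = L * L
    H0 = np.zeros((N, N))
    for ix in range(L):
        for iy in range(L):
            i = ix * L + iy
            r2 = (ix - (L - 1) / 2)**2 + (iy - (L - 1) / 2)**2
            H0[i, i] = 0.5 * m_omega2 * r2 - mu    # trap minus mu
            if ix + 1 < L:                          # nearest-neighbour hopping
                H0[i, i + L] = H0[i + L, i] = -t
            if iy + 1 < L:
                H0[i, i + 1] = H0[i + 1, i] = -t
    f = lambda E: 1.0 / (np.exp(np.clip(beta * E, -50, 50)) + 1.0)
    delta = 0.1 * np.ones(N)                        # initial gap guess
    for _ in range(iters):
        H = np.block([[H0, np.diag(delta)],
                      [np.diag(delta), -H0]])
        E, W = np.linalg.eigh(H)                    # BdG spectrum E_n
        u, v = W[:N, :], W[N:, :]
        # gap update Delta_i from <c_{i,A} c_{i,B}>; the factor 1/2
        # compensates for summing over both +E_n and -E_n eigenstates
        new = U0 * np.sum(u * v * (1 - 2 * f(E)), axis=1) / 2
        delta = mix * new + (1 - mix) * delta       # linear mixing
    nA = np.sum(u**2 * f(E), axis=1)                # n_{i,A} = sum |u|^2 f(E_n)
    return delta, nA
\end{verbatim}
Linear mixing of the old and new gap profiles is one standard way to stabilize such iterations; a production code would also fix sign and factor conventions against the definitions above.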
To include the effects of the harmonic trap, in addition to the usual order parameters that globally describe the system, we introduce local order parameters. The global density order parameter is given by $\rho_{\vec{Q},\alpha}=\frac{1}{N_\alpha}\sum_{\vec{k}}c^\dagger_{\vec{k}+\vec{Q}}c_{\vec{k}}$, where the vector $\vec{Q}$ identifies either a checkerboard pattern $\vec{Q}=(\pi/a,\pi/a)$, or a stripe density order $\vec{Q}=(\pi/a,0)$ or $(0,\pi/a)$. The local density order parameter is given by $\phi_{i}=\sum_{j_i} (-1)^{j_x+j_y}n_j$, where the index $j_i$ denotes that the sum runs over the first and second nearest neighbors. This local order parameter describes a checkerboard density pattern in a $3\times 3$ sublattice centered at site $i$. The superfluid local order parameter is given by $\Delta_i=\sum_{j_{i}}\Delta_{i,j}$, and the average superfluid behavior is studied through $\Delta=\sum_{i}\Delta_{i}/N_A$.
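As a concrete illustration, $\phi_i$ is a staggered sum of the densities over the $3\times 3$ block (site $i$ together with its first and second neighbours) centred at $i$; a minimal Python sketch, with boundary sites ignored for brevity and conventions our own:
\begin{verbatim}
def local_checkerboard(n, ix, iy):
    # phi_i = sum over the 3x3 block centred at (ix, iy) of
    # (-1)^(jx + jy) * n_j, with n a 2D array of site densities
    return sum((-1)**(jx + jy) * n[jx, jy]
               for jx in range(ix - 1, ix + 2)
               for jy in range(iy - 1, iy + 2))
\end{verbatim}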
\section{Supersolid: coexistence of Superfluid and Density Ordered phases}
\label{SS}
To determine the density profile and the behavior of the gap across the lattice, we solve the BdG equations for lattices of size $\Omega = 2 \times 37 \times 37$, keeping the chemical potential fixed at $\mu/t=1.5$ \cite{Romero}. This restriction causes the total number of fermions to increase with temperature, that is, at zero temperature the number of fermions is $N_A+N_B= 320$ while for $k_B T/t=0.5$ this value is increased to $335$. We also keep fixed the values of the interaction strength and the harmonic frequency at $\chi=0.3$ and $\frac{1}{2}m(\omega a)^2/t =0.025$, respectively.
To illustrate the competition among different phases we have selected the cases $\Lambda=0.8, 0.85, 0.9 $ and $1.0$. We plot the density profile and the gap parameter profile for several values of the temperature. In particular, we chose $k_B T/t=0.0, 0.10, 0.25$ and $0.45$.
We start by considering $\Lambda=0.8$. From Fig.~\ref{Fig2} we observe a homogeneous distribution at the center of the trap at zero temperature for both the density and gap profiles. At finite temperature, the density profile remains homogeneous at the center of the trap, while the gap decreases as the temperature is increased, until it vanishes at a temperature of $k_{B}T/t=0.28$. Previous studies \cite{Camacho, Ancilotto} reported a BCS superfluid phase in the weakly interacting regime, while in the strongly interacting regime dimers form, leading to a Bose superfluid of dimers. In the present work we focus on the BCS superfluid to DW-supersolid phase transition. Therefore, the values of the parameters are restricted to those for which the weakly interacting regime is ensured.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.0in]{Fig2}
\end{center}
\caption{(Color online) Density order profile (top) and superfluid order parameter (bottom) as a function of the temperature. From left to right $k_B T/t=0.0, 0.10, 0.25$ and $0.45$. For $\Lambda=0.80$ and $\chi=0.3$.}
\label{Fig2}
\end{figure}
When the interlayer spacing is increased, the competition between attractive and repulsive dipole interactions becomes evident. As plotted in Fig. \ref{Fig3}, for $\Lambda=0.85$ at low temperatures there is a large region in the center of the trap where a superfluid order parameter coexists with a checkerboard density order. This is the signature of a supersolid phase. When the temperature is increased, the radius of the superfluid region shrinks and the order parameter completely vanishes at a critical temperature of $k_B T/t=0.13$, while the checkerboard phase melts at a temperature of $k_B T/t=0.26$. That is, the supersolid phase exists in this system for a certain range of temperatures, as further shown below in the corresponding phase diagram. Cross sections of the local order parameters are shown in Fig.~\ref{Fig4}, where the presence of a supersolid phase at the center of the trap can be appreciated as the spatial overlap of the superfluid and DW orders.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.2in]{Fig3}
\end{center}
\caption{(Color online) Density order profile (top) and superfluid order parameter (bottom) as a function of the temperature. From left to right $k_B T/t=0.0, 0.10, 0.25$ and $0.45$. For $\Lambda=0.85$ and $\chi=0.3$.}
\label{Fig3}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.0in]{Fig4}
\end{center}
\caption{(Color online) Local order parameters. We plot a cross section of the local density $\phi(x,0)$ and the local gap $\Delta(x,0)$ through the center of the trap for $k_B T/t=0.0, 0.10, 0.25$ and $0.45$. For $\Lambda=0.85$ and $\chi=0.3$.}
\label{Fig4}
\end{figure}
For $\Lambda=0.9$ the repulsive intralayer interaction starts to dominate at the center of the trap. From Fig. \ref{Fig5} one can see that there is a wide region at the center of the trap exhibiting a checkerboard DW pattern. Such patterns persist below temperatures of $k_B T/t=0.26$. We also observe that the superfluid order parameter appears to surround the checkerboard pattern and that this superfluid region completely vanishes when the temperature reaches $k_B T/t=0.06$. The cross sections shown in Fig. \ref{Fig6} exhibit a small region where both phases spatially overlap. Other studies \cite{Kurdestany,Pai} of different systems in the presence of a harmonic trap have shown coexistence of phases without spatial overlap; for instance, in the extended Bose-Hubbard model the Mott insulator and superfluid phases tend to form rings and disks. In 2D those studies agree quantitatively with quantum Monte Carlo calculations and more sophisticated methods.
\begin{figure}[H]
\begin{center}
\includegraphics[width=5.2in]{Fig5}
\end{center}
\caption{(Color online) Density order profile (top) and superfluid order parameter (bottom) as a function of the temperature. From left to right $k_B T/t=0.0, 0.10, 0.25$ and $0.45$. For $\Lambda=0.90$ and $\chi=0.3$.}
\label{Fig5}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.0in]{Fig6}
\end{center}
\caption{(Color online) Local order parameters. We plot a cross section of the local density $\phi(x,0)$ and the local gap $\Delta(x,0)$ through the center for $k_B T/t=0.0, 0.10, 0.24$ and $0.45$. For $\Lambda=0.90$ and $\chi=0.3$.}
\label{Fig6}
\end{figure}
Finally, when the interlayer spacing is large enough, the intralayer repulsive interaction dominates over the interlayer attraction and the superfluid order parameter almost vanishes. This behavior is found for $\Lambda=1.0$, where pairing is inhibited. For larger values of $\Lambda$ no pairs can be formed, but a DW checkerboard pattern still persists at the center of the trap (see Fig. \ref{Fig7}). This value of $\Lambda$ signals the limit from which each layer can be studied separately. The single-layer model has been studied previously considering arbitrary dipole moment orientations \cite{Gadsbolle1,Gadsbolle2}. We find our predictions to be in good agreement with those results: for perpendicular orientation of the dipole moments, checkerboard phases emerge below a critical temperature that depends on the interaction strength.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.2in]{Fig7}
\end{center}
\caption{(Color online) Density order profile (top) and superfluid order parameter (bottom) as a function of the temperature. From left to right $k_B T/t=0.0, 0.10, 0.25$ and $0.45$. For $\Lambda=1.0$ and $\chi=0.3$.}
\label{Fig7}
\end{figure}
In Fig. \ref{Fig8} we plot the two global order parameters $\Delta$ and $\rho$ as a function of the temperature for values of $\Lambda$ in the region of coexistence. As can be appreciated from this figure, Bogoliubov-de Gennes diagonalization predicts continuous phase transitions for the considered model. One can observe that, for a given value of $\Lambda$, the critical temperature at which the superfluid phase emerges coincides with that at which the derivative of the DW order parameter shows a discontinuity. Numerical calculations performed for lattices of larger size ($\Omega= 2\times 57\times 57$) show that the discontinuity of the derivatives of $\rho$ and $\Delta$ at the critical temperature becomes more pronounced as $\Omega$ grows; that is, the global order parameters $\Delta$ and $\rho$ change more abruptly at the critical temperature as the lattice size is increased. It is also important to stress that the referred discontinuity in the derivative becomes less evident when the interlayer spacing is increased.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5.3in]{Fig8}
\end{center}
\caption{(Color online) Order parameters obtained from BdG diagonalization. A continuous (second-order) phase transition is shown.}
\label{Fig8}
\end{figure}
In Fig. \ref{Fig9} we present the phase diagram of this model, obtained from the Bogoliubov-de Gennes equations at finite temperature. In the inset we show the phase diagram at zero temperature. The region of coexistence between the superfluid and DW phases in both diagrams is the supersolid phase of our system. As expected, in the attractive interaction regime the superfluid phase destroys any density order pattern. When the interlayer spacing $\lambda$ becomes comparable with the lattice constant $a$, the superfluid and DW phases start to compete. For $\Lambda<0.83$ there is no formation of a density order pattern, while the critical temperature of the BCS superfluid phase decreases monotonically. Close to $\Lambda\approx 0.83$ a density order pattern is formed, and the critical temperature of this phase quickly jumps to a constant value. For values of $\Lambda$ larger than $0.83$ the critical temperature of the DW phase becomes almost independent of the interlayer spacing. On the other hand, the superfluid parameter $\Delta$ suddenly decreases at $\Lambda\approx 0.83$ and then again decreases monotonically. In contrast with the predictions obtained for the single-layer system, where the critical temperatures may be calculated using the values of the parameters at the center of the trap, this may not be completely true for the system studied here due to the possible formation of disks and rings.
The maximum value of the critical temperature for the supersolid phase predicted by our model, considering the parameters of current experimental systems, is $k_{B}T/t\approx 0.23$. Although this temperature is one order of magnitude smaller than those measured recently in experiments with fermionic KRb \cite{Ni} and NaK \cite{Woo}, current efforts in controlling and lowering the temperature of molecules are promising and may reach such critical temperatures in the near future.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=4.3in]{Fig9}
\end{center}
\caption{(Color online) Phase diagram for lattices of size $2\times 37 \times 37$, as a function of the dimensionless inter-layer separation $\Lambda = \lambda/a$. The interaction strength is $\chi=0.3$.}
\label{Fig9}
\end{figure}
\section{Conclusion}
\label{Conclusion}
We have studied the thermodynamic phases exhibited by dipolar Fermi molecules placed at the sites of a bilayer array of square optical lattices in 2D in the presence of a harmonic confinement. Due to the nature of the dipolar interaction, where attractive and repulsive interactions are present, several phases are shown to form. While the attractive interaction between molecules in different layers leads us to predict superfluid phases, density ordered phases like checkerboard patterns result from the repulsive interaction. The competition between these phases gives rise to the formation of supersolid phases where the SF and DW phases coexist and spatially overlap. An exhaustive exploration of the space of parameters is summarized in the phase diagrams at zero and finite temperatures, see Fig. \ref{Fig9}. Our predictions allowed us to identify clearly the influence of the harmonic potential on the occurrence of the transitions with respect to the thermodynamics of the homogeneous case reported in previous literature. The system studied here, in combination with the capability of trapping Fermi molecules in optical lattices as well as the recently reported production of $^{23}$Na$^{40}$K molecules in their rovibrational and hyperfine ground state, constitutes a promising candidate to study the competition between BEC and BCS superfluid phases in coexistence with an ordered structure, thus offering the opportunity to quantum simulate a supersolid phase in ultracold experiments.
\section{Acknowledgments}
\noindent
This work was partially funded by grants IN107014 DGAPA (UNAM) and LN-232652 (CONACYT). A.C.G. acknowledges a scholarship from CONACYT.
\section{Introduction}
Large parts of the world are still suffering from a pandemic caused by the Severe Acute Respiratory Syndrome CoronaVirus 2 (SARS-CoV-2) that raised its ugly head somewhere in late 2019, and that was first identified in December 2019 in Wuhan, mainland China~\autocite{vandorp2020emergence-sars-cov2}. Early work of Ferretti~\etal~\autocite{ferretti2020covid}, modelling the infectiousness of SARS-CoV-2, showed that (under a number of strong assumptions) digital contact tracing could in principle help reduce the spread of the virus. This spurred the development of \term{contact tracing} apps,
(also known as \term{proximity tracing} or \term{exposure notification} apps),
that aim to support the health authorities in their quest to quickly determine who has been in close and sustained contact with a person infected by this virus~\autocite{who2020contact-tracing,martin2020demystifying-covid19-tracing}.
The main idea underlying digital contact tracing is that many people carry a smartphone most of the time, and that this smartphone could potentially be used to more or less automatically collect information about the people someone has been in close contact with. Even though the effectiveness of contact tracing is contested~\autocite{bay2020no-panacea} and there are ethical concerns~\autocite{morley2020ethics-covid19}, especially Bluetooth-based contact tracing apps have quickly been embraced by governments across the globe (even though Bluetooth signal strength is a rather poor proxy for being in close contact~\autocite{dehaye2020bluetooth}). Bluetooth based contact tracing apps broadcast an ephemeral identifier on the short range Bluetooth radio network at regular intervals, while at the same time collecting such identifiers transmitted by other smartphones in the vicinity. The signal strength is used as an estimate for the distance between the two smartphones, and when this distance is determined to be short (within $1$--$2$ meters) for a certain period of time (typically $10$--$20$ minutes) the smartphones register each other's ephemeral identifiers as a potentially risky contact.
Contact tracing is by its very nature a privacy invasive affair, but the level of privacy infringement depends very much on the particular system used. In particular it makes a huge difference whether \emph{all} contacts are registered \emph{centrally} (on the server of the national health authority for example) or in a \emph{decentralised} fashion (on the smartphones of the users that installed the contact tracing app)~\autocite{martin2020demystifying-covid19-tracing,hoepman2021hansel}.\footnote{%
Note that essentially all systems for contact tracing require a central server to coordinate some of the tasks. The distinction between centralised and decentralised systems is therefore \emph{not} made based on whether such a central server exists, but based on where the matching of contacts takes place.
}
In the first case, the authorities have a complete and perhaps even real time view of the social graph of all participants. In the second case, information about one's contacts is only released (with consent) when someone tests positive for the virus.
One of the first contact tracing systems was the TraceTogether app deployed in Singapore.\footnote{%
See \url{https://www.tracetogether.gov.sg}.
}
This inherently centralised approach lets phones exchange regularly changing pseudonyms over Bluetooth. A phone of a person who tests positive is requested to submit all pseudonyms it collected to the central server of the health authorities, who are able to recover phone numbers and identities from these pseudonyms. The Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) consortium quickly followed suit with a similar centralised proposal for contact tracing to be rolled out in Europe.\footnote{%
See this WikiPedia entry \url{https://en.wikipedia.org/wiki/Pan-European_Privacy-Preserving_Proximity_Tracing} (the original domain \url{https://www.pepp-pt.org} has been abandoned, but some information remains on the project Github pages \url{https://github.com/pepp-pt}).
}
This consortium had quite some traction at the European policy level, but there were serious privacy concerns due to its centralised nature. As a response, a large group of academics, led by Carmela Troncoso and her team at EPFL, broke their initial engagement to the PEPP-PT consortium and
rushed to publish the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol\footnote{%
See \url{https://github.com/DP-3T/documents}.
}
as a decentralised alternative for contact tracing with better privacy guarantees~\autocite{dp3t-whitepaper}. See~\autocite{veale2020sovereignty} for some details on the history.
All these protocols require low-level access to the Bluetooth network stack on a smartphone to transmit and receive the ephemeral identifiers used to detect nearby contacts. However, both Google's Android and Apple's iOS use a smartphone permission system to restrict access to critical or sensitive resources, like the Bluetooth network. This proved to be a major hurdle for practical deployment of these contact tracing apps, especially on iPhones as Apple refused to grant the necessary permissions.
Perhaps in an attempt to prevent them from being manoeuvred into a position where they would have to grant to access to any contact tracing app (regardless of its privacy risk), Google and Apple instead developed their own platform for contact tracing called Google Apple Exposure Notification (GAEN). Around the time that DP-3T released their first specification to the public (early April 2020) Google and Apple released a joint specification for contact tracing as well (which they later updated and renamed to exposure notification\footnote{%
See \url{https://techcrunch.com/2020/04/24/apple-and-google-update-joint-coronavirus-tracing-tech-to-improve-user-privacy-and-developer-flexibility/}.
})
with the aim to embed the core technology in the operating system layer of both recent Android and iOS powered smartphones\footnote{%
See \url{https://www.google.com/covid19/exposurenotifications/} and
\url{https://www.apple.com/covid19/contacttracing/}.
}.
Their explicit aim was to offer a more privacy-friendly form of exposure notification (as it is based on a distributed architecture) instead of allowing apps direct access to the Bluetooth stack to implement contact tracing or exposure notification themselves.
In this paper we study the consequences of pushing exposure notification down the stack from the app(lication) layer into the operating system layer. We first explain how contact tracing and exposure notification works when implemented at the app layer in section~\ref{sec-how}. We then describe the GAEN framework in section~\ref{sec-gaen}, and the technical difference between the two approaches in section~\ref{sec-difference}. Section~\ref{sec-critique} then discusses the concerns raised by pushing exposure notification down the stack: it creates a dormant functionality for mass surveillance at the operating system layer, it does not technically prevent the health authorities from implementing a purely centralised form of contact tracing (even though that is the stated aim), it allows Google and Apple to dictate how contact tracing is (or rather isn't) implemented in practice by health authorities, and it creates risks of function creep.\footnote{%
This paper is based on two blog posts written by the author earlier this year: \url{https://blog.xot.nl/2020/04/19/google-apple-contact-tracing-gact-a-wolf-in-sheeps-clothes/} and
\url{https://blog.xot.nl/2020/04/11/stop-the-apple-and-google-contact-tracing-platform-or-be-ready-to-ditch-your-smartphone/}.
}
\section{How contact tracing and exposure notification works}
\label{sec-how}
As mentioned before, centralised systems for digital contact tracing automatically register all contacts of all people that installed the contact tracing app in a central database maintained by the health authorities. Once a patient tests positive, their contacts can immediately be retrieved from this database. But as the central database collects contacts regardless of infection, someone's contacts can be retrieved by the authorities at any time. Hence the huge privacy risks associated with such a centralised approach.
Decentralised systems for digital contact tracing only record contact information locally on the smartphones of the people that installed the app: there is no central database. Therefore, the immediate privacy risk is mitigated.\footnote{%
Certain privacy risks remain, however. See for example~\autocite{vaudenay2020dp3t,dp3t-whitepaper}.
}
Once a person tests positive however, some of the locally collected data is revealed. Some schemes (\eg DESIRE\footnote{%
See
\url{https://github.com/3rd-ways-for-EU-exposure-notification/project-DESIRE}.
},
and see~\autocite{hoepman2021hansel}) reveal the identities of the people that have been in close contact with the person that tested positive to the health authorities. Those variants are still contact tracing schemes. Most distributed schemes however only notify the persons that have been in close contact with the person that tested positive, by displaying a message on their smartphone. The central health authorities are not automatically notified (and remain in the dark unless the people notified take action and get tested, for example). Such systems implement \emph{exposure notification}.
\begin{figure}
\centering
\includegraphics{./fig/en.eps}
\caption{Exposure notification using an app}
\label{fig-en}
\end{figure}
Most exposure notification systems (like DP-3T and GAEN) distinguish a \emph{collection} phase and a \emph{notification} phase, that each work roughly as follows (see also figure~\ref{fig-en}).
During the collection phase the smartphone of a participating user generates a random ephemeral proximity identifier $\EI{d}{i}$ every $10$--$20$ minutes, and broadcasts this identifier over the Bluetooth network every few minutes.\footnote{
In this notation $\EI{d}{i}$ denotes the ephemeral proximity identifier generated for the $i$-th $10$--$20$ minute time interval on day $d$.
}
The phone also stores a copy of this identifier locally. The smartphones of other nearby participating users receive this identifier and, when the signal strength indicates the user is within the threshold distance of $1$--$2$ meters, store this identifier (provided they see the same identifier several times within a $10$--$20$ minute time interval). A smartphone of a participant thus contains a database $S$ of identifiers it sent itself and another database $R$ of identifiers it received from others. The time an identifier was sent or received is also stored, at varying levels of precision (in hours, or days, for example). The databases are automatically pruned to delete any identifiers that are no longer epidemiologically relevant (for COVID-19, this is any identifier that was collected more than $14$ days ago).
The notification phase kicks in as soon as a participating user tests positive for the virus and agrees to notify their contacts. In that case the user instructs their app to upload the database $S$ of identifiers the app sent itself to the server of the health authorities.\footnote{%
A contact tracing version of the app would instead request the smartphone of the user to upload the database $R{}$ of \emph{received} identifiers that the app collected.
}
The smartphone app of other participants regularly queries this server for recently uploaded identifiers of infected people, and matches any new identifiers it receives from the server with entries in the database of identifiers it received from others in close proximity the last few weeks. If there is a match, sometime during the last few weeks the app must have received and stored an identifier of someone who just tested positive for the virus. The app therefore notifies its user that they have been in close contact with an infected person recently (sometimes indicating the day this contact took place). It then typically offers advice on how to proceed, like offering pointers to more information, and strongly suggesting to contact the health authorities, get tested, and go into self-quarantine.
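The two phases can be condensed into a few lines of pseudocode. The following sketch is purely illustrative (all names are hypothetical): it keeps the databases $S$ and $R$ keyed by day, prunes entries older than $14$ days, and flags a potential exposure when an identifier published by the server of the health authorities matches an entry of $R$.
\begin{verbatim}
RELEVANT_DAYS = 14

class ExposureStore:
    def __init__(self):
        self.S = {}   # day -> identifiers this phone broadcast
        self.R = {}   # day -> identifiers received from nearby phones

    def prune(self, today):
        for db in (self.S, self.R):
            for day in [d for d in db if today - d > RELEVANT_DAYS]:
                del db[day]

    def record_sent(self, day, eid):
        self.S.setdefault(day, set()).add(eid)

    def record_received(self, day, eid):
        self.R.setdefault(day, set()).add(eid)

    def upload_on_positive_test(self):
        # decentralised variant: only *sent* identifiers leave the phone
        return self.S

    def check_exposure(self, published):
        # published: day -> identifiers uploaded by infected users
        hits = []
        for day, ids in published.items():
            if self.R.get(day, set()) & ids:
                hits.append(day)   # notify: risky contact on this day
        return hits
\end{verbatim}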
\section{The GAEN framework}
\label{sec-gaen}
Google and Apple's framework for exposure notification follows the same paradigm, with the notable exception that instead of an app implementing all the functionality, most of the framework is implemented at the operating system layer.\footnote{%
The summary of GAEN and its properties is based on the documentation offered by both Google (\url{https://www.google.com/covid19/exposurenotifications/}) and Apple
(\url{https://www.apple.com/covid19/contacttracing/}) online, and was last checked December 2020.
The documentation offered by Google and Apple is terse and scattered.
The extensive documentation of the Dutch CoronaMelder at
\url{https://github.com/minvws} proved to be very helpful.
}
Although GAEN is a joint framework, there are minor differences in how it is implemented on Android (Google) and iOS (Apple).
GAEN works on Android version 6.0 (API level 23) or higher, and on some devices as low as version 5.0 (API level 21).\footnote{%
See \url{https://developers.google.com/android/exposure-notifications/exposure-notifications-api}.
}
On Android GAEN is implemented as a Google Play service. GAEN works for Apple devices running iOS 13.5 or higher.
At the Bluetooth and cryptographic layers, however, GAEN works the same on both platforms. This implies that ephemeral proximity identifiers sent by any Android device can be received and interpreted by any iOS device in the world and vice versa. In other words: users can \emph{in principle} get notified of exposures to infected people independent of the particular operating system their smartphone runs, and independent of which country they are from. (In practice some coordination between the exposure notification apps and the back-end servers of the different health authorities involved is required.)
\begin{figure}
\centering
\begin{overpic}[abs,unit=1pt]{./fig/gaen-keys-x.eps}%
\def$\TEK{d}${$\TEK{d}$}
\def$\EI{d}{i} = f(\TEK{d},i)${$\EI{d}{i} = f(\TEK{d},i)$}
\input{./fig/gaen-keys.overpic}%
\end{overpic}
\caption{Temporary exposure keys and ephemeral proximity identifiers.}
\label{fig-gaen-keys}
\end{figure}
As an optimisation step, devices do not randomly generate each and every ephemeral proximity identifier independently. Instead, the ephemeral proximity identifier $\EI{d}{i}$ to use for a particular interval $i$ on day $d$ is derived from a \emph{temporary exposure key} $\TEK{d}$ (which \emph{is} randomly generated each day) using some public deterministic function $f$ (the details of which do not matter for the current paper). In other words $\EI{d}{i} = f(\TEK{d},i)$, see figure~\ref{fig-gaen-keys}. With this optimisation, devices only need to store exposure keys in $S$, as the actual ephemeral proximity identifiers can always be reconstructed from these keys.
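In pseudocode the derivation reads as follows. This is an illustrative stand-in built on HMAC-SHA256; the actual GAEN cryptography specification fixes its own key-derivation function and identifier length, so the sketch only conveys the structure $\EI{d}{i}=f(\TEK{d},i)$.
\begin{verbatim}
import hashlib, hmac, os

def new_temporary_exposure_key():
    return os.urandom(16)          # freshly random each day

def ephemeral_proximity_identifier(tek, interval):
    # EI_{d,i} = f(TEK_d, i): deterministic, public, one-way
    mac = hmac.new(tek, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]

tek_today = new_temporary_exposure_key()
identifiers = [ephemeral_proximity_identifier(tek_today, i)
               for i in range(144)]   # e.g. one per 10-minute slot
\end{verbatim}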
\begin{figure}
\centering
\includegraphics{./fig/gaen.eps}
\caption{The GAEN framework.}
\label{fig-gaen}
\end{figure}
Generating, broadcasting, and collecting ephemeral proximity identifiers happens automatically at the operating system layer, but only if the user has explicitly enabled this by installing an exposure notification app and setting the necessary permissions,\footnote{%
Both Android and iOS require Bluetooth to be enabled. On Android 10 and lower, the device location setting needs to be turned on as well (see \url{https://support.google.com/android/answer/9930236}).
}
or by enabling exposure notifications in the operating system settings.\footnote{%
For those countries where the national health authorities have not developed their own exposure notification app but instead rely on Exposure Notification Express (see \url{https://developers.google.com/android/exposure-notifications/en-express}).
}
Apple and Google do not allow exposure notification apps to access your device location.\footnote{%
See \url{https://support.google.com/android/answer/9930236} and \url{https://covid19-static.cdn-apple.com/applications/covid19/current/static/contact-tracing/pdf/ExposureNotification-FAQv1.2.pdf}.
}
By default, exposure notification is disabled on both platforms. When enabled, the database $S$ of exposure keys and the database $R$ of identifiers received are stored at the operating system layer, which ensures that the data is not directly accessible by any app installed by the user.
Actual notifications are the responsibility of the exposure notification app. In order to use the data collected at the operating system layer, the app needs to invoke the services of the operating system through the GAEN Application Programming Interface (API). Apps can only access this API after obtaining explicit permission from Google or Apple. The API offers the following main functions (see also figure~\ref{fig-gaen}).
\begin{itemize}
\item
\emph{Retrieve} the set of exposure keys (stored in $S$). ``The app must provide functionality that confirms that the user has been positively diagnosed with COVID-19.''\footnote{%
See \url{https://developers.google.com/android/exposure-notifications/exposure-notifications-api}. Also see the verification system Google designed for this
\url{https://developers.google.com/android/exposure-notifications/verification-system}.
}
But this is not enforced at the API layer. In other words, the app (once approved and given access to the API) has access to the exposure keys.
\item
\emph{Match} a (potentially large) set of exposure keys against the set of ephemeral proximity identifiers received from other devices earlier (stored in $R$), and return a list of risk scores (either a list of daily summaries, or a list of individual $<30$ minute exposure windows). This function is rate limited to a few calls per day.\footnote{%
On iOS 13.7 and later, the most recent version of the API limits the use of this method to a maximum of six times per 24-hour period. On Android, the most recent version of the API also allows at most six calls per day, but 'allowlisted' accounts (used by health authorities for testing purposes) are allowed 1,000,000 calls per day.
}
\end{itemize}
The API also ensures that the user is asked for consent whenever an app enables exposure notification for the first time, and whenever user keys are retrieved for upload to the server of the health authorities after the user tested positive for COVID-19. The API furthermore offers functions to tune the computation of the risk scores.
The idea is that through the API a user that tests positive for COVID-19 can instruct the app to upload all their recent (the last 14, actually) temporary exposure keys to the server of the health authorities. The exposure notification app of another user can regularly query the server of the health authorities for recently uploaded exposure keys of infected devices. The second GAEN API function allows the app to submit these exposure keys to the operating system which, based on the database $R$ of recently collected proximity identifiers, checks whether there is a match with such an exposure key (by deriving the proximity identifiers locally). A list of matches is returned that contains the day of the match and an associated risk score; the actual key and identifier matched are not returned however. The day of the contact, the duration of the contact, the signal strength (as a proxy for the distance of the contact), and the type of test used to determine infection are used to compute the risk score. Developers can influence how this risk score is computed by providing weights for all the parameters. Using the returned list the app can decide to notify the user when there appears to be a significant risk of infection. Note that by somewhat restricting the way risk scores are computed, GAEN makes it harder for a malicious app to determine exactly which exposure key triggered a warning (and hence makes it harder to determine exactly with whom someone has been in physical proximity).
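Schematically, and reusing the helpers from the sketches above, the matching entry point behaves as follows (hypothetical names and a simplistic risk score; the real API additionally authenticates the calling app and applies the configured per-parameter weights): it consumes a batch of exposure keys, re-derives the corresponding identifiers locally, and returns only per-day summaries, never the matching key itself.
\begin{verbatim}
MAX_MATCH_CALLS_PER_DAY = 6

def provide_exposure_keys(keys, store, weight, calls_today):
    # keys: list of (day, temporary exposure key) pairs from the server
    if calls_today >= MAX_MATCH_CALLS_PER_DAY:
        raise RuntimeError("rate limit exceeded")
    summaries = {}
    for day, tek in keys:
        derived = {ephemeral_proximity_identifier(tek, i)
                   for i in range(144)}
        overlap = derived & store.R.get(day, set())
        if overlap:
            # only an aggregate per-day risk score reaches the app
            summaries[day] = summaries.get(day, 0.0) + weight * len(overlap)
    return sorted(summaries.items())
\end{verbatim}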
\section{How the GAEN framework differs from a purely app based approach}
\label{sec-difference}
Given its technical architecture, the GAEN framework fundamentally differs from a purely app based approach to exposure notification in the following four aspects.
First of all, the functionality and necessary code for the core steps of exposure notification (namely broadcasting, collecting and matching ephemeral proximity identifiers) comes pre-installed on all modern Google and Apple devices. In a purely app based approach this functionality and code is solely contained in the app itself, and not present on the device when the app is not installed (and removed when the app is de-installed).
Second, all relevant data (ephemeral proximity identifiers and their associated metadata like date, time and possibly location) are collected and stored at the operating system level. In a purely app based approach this data is collected and stored at the user/app level. This distinction is relevant as in modern computing devices the operating system runs in a privileged mode that renders the data it processes inaccessible to 'user land' apps. Data processed by apps is accessible to the operating system in raw form (in the sense that the operating system has access to all bytes of memory used by the app), but the interpretation of that data (which information is stored where) is not necessarily easy to determine.
Moreover, the framework is interoperable at the global level: users can \emph{in principle} get notified of exposures to infected people independent of the particular operating system their smartphone runs, and independent of which country they are from. This would not necessarily be the case (and probably in practice be impossible to achieve) in a purely app based approach.
Finally the modes of operation are set by Google and Apple: the system notifies users of exposure; it does not automatically inform the health authorities. The app is limited to computing a risk score, and does not receive the exact location nor the exact time when a 'risky' contact took place. In a purely app based approach the developers of the app themselves determine the full functionality of the app (within the possible limits imposed by the app stores).
\section{A critique of the GAEN framework}
\label{sec-critique}
It is exactly for the above properties that the GAEN framework appears to protect privacy: the health authorities (and the users) are prevented from obtaining details about the time and location of a risky contact, thus protecting the privacy of infected individuals, and the matching is forced to take place in a decentralised fashion (which prevents the health authorities from directly obtaining the social graph of users).
However, there is more to this than meets the eye, and there are certainly broader concerns that stem from the way GAEN works.
\begin{itemize}
\item
By pushing exposure notification down the stack, GAEN creates a dormant functionality for mass surveillance at the operating system layer.
\item
Moreover, the exposure notification microdata (exposure keys and proximity identifiers) are under Google/Apple's control.
\item
GAEN does not \emph{technically prevent} health authorities from implementing a purely centralised form of contact tracing (although it clearly discourages it). A decentralised framework like GAEN can be re-centralised. The actual protection offered therefore remains \emph{procedural}: we need to trust Google and Apple to disallow centralised implementations of contact tracing apps offered through their app stores.
\item
GAEN leverages Google and Apple's control over how exposure notification works because exposure notification apps are required to use it. In particular it allows Google and Apple to dictate how contact tracing is (or rather isn't) implemented in practice by health authorities.
\item
GAEN introduces significant risks of function creep.
\end{itemize}
These concerns are discussed in detail in the following sections. These are by no means the only ones,\footnote{%
See \url{https://www.eff.org/deeplinks/2020/04/apple-and-googles-covid-19-exposure-notification-api-questions-and-answers}.
}
see for example also~\eg\autocite{sharon2020blind-sided,duarte2020gaen,klein2020corona}, but these are the ones that derive directly from the architectural choices made.
\subsection{GAEN creates a dormant mass surveillance tool}
Instead of implementing all exposure notification functionality in an app, Google and Apple push the technology down the stack into the operating system layer, creating a Bluetooth-based exposure notification platform. This means the technology is available all the time, for all kinds of applications beyond just exposure notification. As will be explained in the next section, GAEN can be (ab)used to implement centralised forms of contact tracing as well. Exposure notification is therefore no longer limited in time, or limited in use purely to trace and contain the spread of COVID-19. This means that two very important safeguards to protect our privacy are thrown out of the window.
Moving exposure notification down the stack fundamentally changes the amount of control users have: you can uninstall a (exposure notification) app, you cannot uninstall the entire OS (although on Android you can in theory disable and even delete Google Play Services). The only thing a user can do is disable exposure notification using an operating system level setting. But this does not remove the actual code implementing this functionality.
But the bigger picture is this: it creates a platform for contact tracing in the more general sense of mapping who has been in close physical contact with whom (regardless of whether there is a pandemic that needs to be fought). Moreover, this platform for contact tracing works all across the globe for most modern smartphones (Android Marshmallow and up, and iOS 13 capable devices) across both OS platforms. Unless appropriate safeguards are in place, this would create a global mass-surveillance system that would reliably track who has been in contact with whom, at what time and for how long.\footnote{%
GAEN does not currently make all this information available in exact detail through its API, but it \emph{does} collect this information at the lower operating system level. It is unclear whether GAEN records location data at all (although it would be easy to add this, and earlier versions of the API did in fact offer this information).
}
GAEN works much more reliably and extensively to determine actual physical contact than any other system based on either GPS or mobile phone location data (based on cell towers) would be able to (under normal conditions). It is important to stress this point because some people believe this is something that companies like Google (using their GPS and WiFi-name based location history tool) have already been able to do for years. This is not the case. This type of contact tracing really brings it to another level.
In those regions that opt for Exposure Notification Express\footnote{%
See \url{https://developers.google.com/android/exposure-notifications/en-express}.
}
the data collection related to exposure notification starts as soon as you accept the operating system update and enable it in the settings. In other regions this only happens when people install a exposure notification app that uses the API to find contacts based on the data phones have already collected. But this assumes that both Apple and Google indeed refrain from offering other apps access to the exposure notification platform (through the API) through force or economic incentives, or suddenly decide to use the platform themselves. GAEN creates a dormant functionality for mass surveillance~\autocite{veale2020gact}, that can be turned on with the flip of a virtual switch at Apple or Google HQ.
All in all this means we all have to place massive trust in Apple and Google to properly monitor the use of the GAEN API by others.
\subsection{Google and Apple control the exposure notification microdata}
Because exposure notification is implemented at the operating system layer, Google and Apple fully control how it works and have full access to all microdata generated and collected. In particular they have, in theory, full access to the temporary exposure keys and the ephemeral proximity identifiers, and control how these keys are generated. We have to trust that the temporary exposure keys are really generated at random, and not stealthily derived from a user identifier that would allow Google or Apple to link proximity identifiers to a particular user. And even if these keys are generated truly at random, at any point in time Google or Apple could decide to surreptitiously retrieve these keys from a certain device, again with the aim of linking previously collected proximity identifiers to this particular device. In other words, we have to trust that Google and Apple will not abuse GAEN themselves. They do not necessarily have an impeccable track record that warrants such trust.
\subsection{Distributed can be made centralised}
The discussion in the preceding paragraphs implicitly assumes that the GAEN platform truly enforces a decentralised form of exposure notification, and that it prevents exposure notification apps from automatically collecting information on a central server about who was in contact with whom. This assumption is not necessarily valid however (although it can be enforced provided Apple and Google are very strict in the vetting process used to grant apps access to the GAEN platform). In fact, GAEN can easily be used to create a centralised form of exposure notification, at least when we limit our discussion to centrally storing information about who has been in contact with an infected person.
The idea is as follows. GAEN allows a exposure notification app on a phone to test daily exposure keys of infected users against the proximity identifiers collected by the phone over the last few days. This test is local; this is why GAEN is considered decentralised. However, the app could immediately report back the result of this test to the central server, without user intervention (or without the user even noticing).\footnote{%
Recall that the API enforces user consent when retrieving exposure keys, but not when matching them.
}
It could even send a user specific identifier along with the result, thus allowing the authorities to immediately contact anybody who has recently been in the proximity of an infected person. This is the hallmark of a centralised solution.
In other words: the GAEN technology itself does not prevent a centralised solution. The only thing preventing it would be Apple and Google being strict in vetting exposure notification apps. But they could already do so now, without rolling out their GAEN platform, by strictly policing which apps they allow access to the Bluetooth networks stack, and which apps they allow on their app stores.
A malicious app could do other things as well. By design GAEN does not reveal which infected person a user has been in contact with when matching keys on the user's phone. Calls to the matching function in the API are rate limited to a few calls each day, the idea being that a large number of keys can be matched in batch without revealing which particular key resulted in a match. But this still allows a malicious app (and accompanying malicious server) to test a few daily tracing keys (for example of persons of interest) one by one, to keep track of each daily tracing key for which the test was positive, and to report these back to the server. As the server knows which daily tracing key belongs to which infected person, this allows the server to know exactly which infected persons of interest the user of this phone has been in contact with. If the app is malicious, even non-infected persons are at risk, because such an app could retrieve the exposure notification keys even if a user is not infected (provided it can trick the user into consenting to this).
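In sketch form (again with hypothetical names, building on the matching call above) the attack is trivial to express: by spending the daily budget of matching calls on single keys of persons of interest, a malicious app learns exactly which of them its user encountered, and can report this back.
\begin{verbatim}
def probe_persons_of_interest(targets, store):
    # targets: list of (name, day, temporary exposure key) of interest
    met = []
    for calls, (name, day, tek) in \
            enumerate(targets[:MAX_MATCH_CALLS_PER_DAY]):
        if provide_exposure_keys([(day, tek)], store, 1.0, calls):
            met.append(name)   # non-empty summary: the user met this person
    return met                 # exfiltrated to the malicious server
\end{verbatim}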
Clearly a malicious exposure notification app not based on GAEN could do the same (and much more). But this does show that GAEN by itself does not protect against such scenarios, while making the impact of such scenarios far greater because of its global reach.
\subsection{Google and Apple dictate how contact tracing works}
Apple and Google’s move is significant for another reason: especially on Apple iOS devices, access to the hardware is severely restricted. This is also the case for access to Bluetooth. In fact, without approval from Apple, you cannot use Bluetooth ‘in the background’ for your app (which is functionality that you need to collect information about nearby phones even when the user's phone is locked). You could argue that this could potentially \emph{improve} privacy as it adds another checkpoint where some entity (in this case Apple) decides whether to allow the proposed app or not. But Apple (and by extension Google) use this power as leverage to grab control over how contact tracing or exposure notification will work. This is problematic as it allows them to set the terms and conditions, without any form of oversight. With this move, Apple and Google make themselves indispensable, ensuring that this potentially global surveillance technology is forced upon us. And as a consequence all microdata underlying any contact tracing system is stored on the phones they control.
For example, the GAEN framework prevents notified contacts from learning the nature of the contact and making a well-informed decision about the most effective response: get tested, or go into self-quarantine immediately. It also prevents the health authorities from learning the nature of the contact and hence makes it impossible to build a model of contacts. ``The absence of transmission data limits the scope of analysis, which might, in the future, give freedom to people who can work, travel and socialise, while more precisely targeting others who risk spreading the virus.''~\autocite{ilves2020google-apple-dictating}. This happens because the GAEN framework is based on a rather corporate understanding of privacy as giving control and asking for consent. But under certain specific conditions, and a public health emergency like the current pandemic is surely one, individual consent is not appropriate: ``to be effective, public-health surveillance needs to be comprehensive, not opt-in.''~\autocite{cohen2020danger-tech-covid19}.
\subsection{Function creep}
The use of the exposure notification functionality as offered through GAEN is not limited to controlling just the spread of the COVID-19 virus. As this is not the first corona-type virus, it is only a matter of time until a new dangerous virus rears its ugly head. In other words, exposure notification is here to stay.
And with that, the risk of function creep appears: with the technology rolled out and ready to be (re)activated, other uses of exposure notification will at some point in time be considered, and deemed proportionate. Unless Apple and Google strictly police the access to the GAEN API (based on some publicly agreed upon rules) and ensure that it is only used by the health authorities, and only for controlling a pandemic like COVID-19, the following risks are apparent.
Consider the following hypothetical example of a government that wants to trace the contacts or whereabouts of certain people, which could ensue when Google and Apple fail to strictly enforce access. Such a government could coerce developers to embed this tracking technology in innocent looking apps, in apps that you are more or less required to have installed, or in software libraries used by such apps. Perhaps it could even coerce Apple and Google themselves to silently enable exposure notifications for all devices sold in their country, even if the users do not install any app.\footnote{%
Note that when Google and Apple first announced their exposure notification platform the idea was that your phone would start emitting and collecting proximity identifiers as soon as the feature was enabled in the operating settings, even if no exposure notification app was installed, see~\url{https://techcrunch.com/2020/04/13/apple-google-coronavirus-tracing/}.
}
It is known that Google and Apple in some cases do bow to government pressure to enable or disable certain features: like filter search results\footnote{%
See \url{https://www.sfgate.com/business/article/Google-bows-to-China-pressure-2505943.php}.
},
remove apps from the app store\footnote{%
See \url{https://www.nytimes.com/2019/10/10/business/dealbook/apple-china-nba.html}.
}
and even move cloud storage servers,\footnote{%
See \url{https://www.reuters.com/article/us-china-apple-icloud-insight/apple-moves-to-store-icloud-keys-in-china-raising-human-rights-fears-idUSKCN1G8060}.
}
offering Chinese authorities far easier access to text messages, email and other data stored in the cloud.
Because the ephemeral proximity identifiers are essentially random they cannot be authenticated. In other words: any identifier with the right format advertised over Bluetooth with the correct service identifier will be accepted and recorded by any device with GAEN active. Moreover, because the way ephemeral identifiers are generated from daily exposure keys is (necessarily) public, anybody can build a cheap device broadcasting ephemeral identifiers from chosen daily exposure keys that will be accepted and stored by a nearby device with the GAEN platform enabled. A government could install such Bluetooth beacons at fixed locations of interest for monitoring purposes. The daily exposure keys of these devices could be tested against phones of people of interest running the apps, as explained above. Clearly this works only for a limited number of locations because of rate limiting, but note that at least under Android this limit is not imposed for `allowlisted' apps for testing purposes, and then the question is again whether Google can be forced to allowlist a certain government app. China could use it to further monitor Uyghurs. Israel could use it to further monitor Palestinians. You could monitor the visitors of abortion clinics, coffee shops, gay bars, \ldots
Indeed the exact same functionality offered by exposure notification could allow the police to quickly see who has been close to a murder victim: simply report the victim's phone as being 'infected'. Some might say this is not a bug but a feature, but the same mechanism could be used to find whistleblowers, or the sources of a journalist.
For centralised contact tracing apps we already see function creep creeping in.
The recent use of the term `contact tracing' in the context of tracking protesters in Minnesota after demonstrations erupted over the death of George Floyd at the hands of a police officer\footnote{%
See \url{https://bgr.com/2020/05/30/minnesota-protest-contact-tracing-used-to-track-demonstrators/}.
}
is ominous, even if the term refers to traditional police investigating methods\footnote{%
And what to think of the following message posted by Anita Hazenberg, the Director Innovation Directorate at Interpol: ``Is your police organisation considering how tracing apps will influence the way we will police in the future? If you are a (senior) officer dealing with policy challenges in this area, please join our discussion on Wednesday 6 May (18.00 Singapore time) during a INTERPOL Virtual Discussion Room (VDR). Please contact [email protected] for more info. Only reactions from law enforcement officers are appreciated.''. See:
\url{https://www.linkedin.com/posts/anita-hazenberg-b0b48516_is-your-police-organisation-considering-how-activity-6663040380965130242-q8Vk}.
}.
More concrete evidence is the discovery that Australia’s intelligence agencies were `incidentally' collecting data from the country’s COVIDSafe contact-tracing app.
\footnote{%
\url{https://techcrunch.com/2020/11/24/australia-spy-agencies-covid-19-app-data/}
}
The Singapore authorities recently announced that the police can access COVID-19 contact tracing data for criminal investigations.\footnote{%
\url{https://www.zdnet.com/article/singapore-police-can-access-covid-19-contact-tracing-data-for-criminal-investigations/}
}
Now one could argue that these examples are an argument supporting the privacy friendly approach taken by Google and Apple. After all, by design exposure notification does not have a central database that is easily accessible by law enforcement or intelligence agencies. But as explained above, this is not (and cannot be) strictly enforced by the GAEN framework.
Contact tracing also has tremendous commercial value. A company could install Bluetooth beacons equipped with this software at locations of interest (e.g. shopping malls). By reporting a particular beacon as 'infected', all phones (that have been lured into installing a loyalty app or that somehow have the SDK of the company embedded in some of the apps they use) will report that they were in the area. Facebook used a crude version of contact tracing (using the access it had to WhatsApp address books) to recommend friends on Facebook~\autocite{tait2019facebook-friends,hill2016facebook-psychiatrist}. The kind of contact tracing offered by GAEN (and other Bluetooth based systems) gives a much more detailed, real-time insight into people’s social graph and its dynamics. How much more precise could targeted advertising become? Will Google and Apple forever be able to resist this temptation? If you have Google Home at home, Google could use this mechanism to identify all the people that have visited your place. Remember: they set the restrictions on the API. They can at any time decide to change and loosen these restrictions.
\section{Conclusion}
\label{sec-conclusion}
We have described how the shift, by Google and Apple, to push exposure notification down the stack from the app layer to the operating system layer fundamentally changes the risk associated with exposure notification systems, and despite the original intention, unfortunately not for the better. We have shown that from a technical perspective it creates a dormant functionality for global mass surveillance at the operating system layer, that it takes away the power to decide how contact tracing works from the national health authorities and the national governments, and how it increases the risks of function creep already nascent in digital exposure notification and contact tracing systems.
These risks can only be mitigated by Google and Apple as they are the sole purveyors of the framework and have sole discretionary power over whom to allow access to the framework, and under which conditions. We fully rely on their faithfulness and vigilance to enforce the rules and restrictions they have committed to uphold, and have very few tools to verify this independently.
Given $(M,g)$ a smooth compact Riemannian manifold without boundary, the Yamabe problem is to find, in the conformal class of $g$, a metric of constant scalar curvature. The geometric problem has a PDE formulation, i.e. the metric $\tilde g=u^{4\over n-2}g$ has the required properties if the function $u$ is a smooth positive solution to the critical equation
\begin{equation}
L_{g}u=\kappa u^{n+2\over n-2}\ \hbox{in }M, \label{yamabe}
\end{equation}
for some constant $\kappa$.
Here $L_{g}:=\Delta_{g}-\frac{n-2}{4(n-1)}R_{g}$ is the conformal Laplacian,
$\Delta_{g}$ is
the Laplace Beltrami operator and $R_{g}$ is the scalar curvature of $(M,g).$
Solutions to \eqref{yamabe} are critical points of the functional
$$E(u):={\int\limits_M \left(|\nabla u|^2+{n-2\over4(n-1)}R_gu^2\right)dv_g\over\left(\int\limits_M|u|^{2n\over n-2}dv_g\right)^{n-2\over n}}, \ u\in H^1_g(M),$$
where $dv_g$ denotes the volume form on $M.$
The exponent $2n\over n-2$ is critical for the Sobolev embedding $H^1_g(M)\hookrightarrow L^{2n\over n-2}(M).$
The existence of a minimizing solution to the Yamabe problem is well-known and follows
from the combined works of Yamabe \cite{yam}, Trudinger \cite{tru}, Aubin \cite{aub} and Schoen \cite{sch}.
\\
One of the generalizations of this problem on manifolds $(M,g)$ with boundary was proposed by Escobar in \cite{E92} and it consists of finding in the conformal class of $g$, a scalar-flat metric of constant boundary mean curvature. Also in this case the geometric problem has a PDE formulation, i.e.
the metric $\tilde g=u^{4\over n-2}g$ has the required properties if the function $u$ is a smooth positive solution to the critical boundary value problem
\begin{equation}
\left\{\begin{aligned}
&L_{g}u=0\ \hbox{in }M\\
&{\partial_\nu}u+\frac{n-2}{2}H_{g}u=\kappa u^{\frac{2(n-1)}{n-2}-1} \ \hbox{on}\ \partial M.
\end{aligned}\right.\label{eq:probK}
\end{equation}
for some constant $\kappa$.
Here $\nu$ is the outward unit normal
vector to $\partial M$ and
$H_{g}$ is the mean curvature on $\partial M$ with respect to $g.$
Solutions to \eqref{eq:probK} are critical points of the functional
$$Q(u):={\int\limits_M \left(|\nabla u|^2+{n-2\over4(n-1)}R_gu^2\right)dv_g+\int\limits_{\partial M}{n-2\over2}H_g u^2d\sigma_g\over\left(\int\limits_{\partial M}|u|^{2(n-1)\over n-2}d\sigma_g\right)^{n-2\over n-1}}, \ u\in H,$$
where $dv_g$ and $d\sigma_g$ denote the volume forms on $M$ and $\partial M,$ respectively, and the space
$$H:=\left\{u\in H^1_g(M)\ :\ u\not=0\ \hbox{on}\ \partial M\right\}.$$
Escobar in \cite{E92} introduced the Sobolev quotient
\begin{equation}\label{sob-quo}
Q(M,\partial M):=\inf\limits_H Q(u),
\end{equation}
which is conformally invariant and always satisfies
\begin{equation}\label{ineq}
Q(M,\partial M)\le Q(\mathbb B^n,\partial \mathbb B^n),
\end{equation}
where $\mathbb B^n$ is the unit ball in $\mathbb R^n$ endowed with the euclidean metric $\mathfrak g_0$.
Following Aubin's approach (see \cite{aub}), Escobar proved that if $Q(M,\partial M)$ is finite and the strict inequality in \eqref{ineq} holds, i.e.
\begin{equation}\label{strict}
Q(M,\partial M)< Q(\mathbb B^n,\partial \mathbb B^n),
\end{equation}
then the infimum \eqref{sob-quo} is achieved and a solution to problem \eqref{eq:probK} does exist.
In the non-positive case, i.e. $Q(M,\partial M)\le0$, it is clear that \eqref{strict} holds, since $Q(\mathbb B^n,\partial \mathbb B^n)>0$.
The positive case, i.e. $Q(M,\partial M)>0$, is the most difficult one, and the proof of the validity of \eqref{strict} has required a lot of work.
Assuming $(M,g)$ is not conformally equivalent to $(\mathbb B^n,\mathfrak g_0)$, \eqref{strict} has been proved by Escobar in \cite{E92} if
\begin{itemize}
\item[$\diamond$] $n=3,$
\item[$\diamond$] $n=4,5$ and $\partial M$ is umbilic,
\item[$\diamond$] $n\ge6$, $\partial M$ is umbilic and $M$ is locally conformally flat,
\item[$\diamond$] $n\ge6$ and $M$ has a non-umbilic point,
\end{itemize}
by Marques in \cite{Ma1,Ma2} if
\begin{itemize}
\item[$\diamond$] $n=4,5$ and $\partial M$ is not umbilic,
\item[$\diamond$] $n\ge8$ and $\overline{\textrm Weyl}_g(\xi)\not=0$ for some $\xi\in\partial M$,
\item[$\diamond$] $n\ge9$ and ${\textrm Weyl}_g(\xi)\not=0$ for some $\xi\in\partial M$,
\end{itemize}
by Almaraz in \cite{A1} if
\begin{itemize}
\item[$\diamond$] $n=6,7,8$, $\partial M$ is umbilic and $ {\textrm Weyl}_g(\xi)\not=0$ for some $\xi\in\partial M.$
\end{itemize}
We recall that
a point $\xi\in\partial M$ is said to be {\it umbilic} if the tensor
$T_{ij}=h_{ij}-H_g g_{ij}$ vanishes at $\xi,$ where $h_{ij}$ are the coefficients of the second fundamental form and $H_g={1\over n-1}g^{ij}h_{ij}$ is the mean curvature.
The boundary $\partial M$ is said to be umbilic if all its points are umbilic.
Moreover, $\overline{\textrm Weyl}_g(\xi)$ denotes the Weyl tensor of the restriction of the metric to the boundary.
The strategy to prove that the strict inequality \eqref{strict} holds consists in finding good test functions, which involve the minimizer of the Sobolev quotient in $\mathbb R^n_+:=\left\{(x,t)\ :\ x\in\mathbb R^{n-1},\ t>0\right\}, $ namely the so-called {\it bubble}
\begin{equation}\label{bubble}
U_{\delta,y}(x,t):=\delta^{-{n-2\over2}}U\left({x-y\over\delta},{t\over\delta}\right),\ \delta>0,\ x,y\in\mathbb R^{n-1},\ t>0\end{equation}
where
\begin{equation}\label{limite} U(x,t):={1\over\left((1+t)^2+|x|^2\right)^{n-2\over2}}.
\end{equation}
Indeed Beckner in \cite{B} and Escobar \cite{E88} proved that
$$Q(\mathbb B^n,\partial \mathbb B^n)=\inf\left\{{\int\limits_{\mathbb R^n_+} |\nabla u|^2dx \over\left(\int\limits_{\partial\mathbb R^n_+}|u|^{2(n-1)\over n-2}dx\right)^{n-2\over n-1}}\ :\ u\in H^1(\mathbb R^n_+),\ u\not=0\ \hbox{on}\ \partial \mathbb R^n_+\right\}.$$ The infimum is achieved by the functions
$U_{\delta,y}$ which are the only positive solutions to the limit problem
\begin{equation}\label{limite1}
\left\{\begin{aligned}&\Delta u=0\ \hbox{in}\ \mathbb R^n_+\\
&\partial_\nu u=(n-2) u^{n\over n-2}\ \hbox{on}\ \partial\mathbb R^n_+.\\
\end{aligned}\right.
\end{equation}
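One can check \eqref{limite1} directly on \eqref{limite}: since
\[
\partial_t U(x,t)=-(n-2)\,\frac{1+t}{\left((1+t)^2+|x|^2\right)^{n\over2}},
\]
at $t=0$ we get $\partial_t U(x,0)=-(n-2)U^{n\over n-2}(x,0)$, which is the boundary condition in \eqref{limite1} once we recall that the outward unit normal to $\partial\mathbb R^n_+$ is $\nu=-\partial_t$; moreover $\Delta U=0$ in $\mathbb R^n_+$ because $U(x,t)=|(x,t)-(0,-1)|^{2-n}$ is, up to a constant, the fundamental solution of the Laplacian with pole at $(0,-1)\notin\overline{\mathbb R^n_+}$.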
\\
Once the existence of solutions of problems \eqref{yamabe} or (\ref{eq:probK}) is settled, a natural question concerns the structure of
the full set of positive solutions of \eqref{yamabe} or (\ref{eq:probK}).
Concerning the Yamabe problem on manifolds without boundary, Schoen (see \cite{s3})
raised the question of compactness
of the set of solutions of problem \eqref{yamabe}. The question has been
recently resolved by S. Brendle, M. A. Khuri, F. C. Marques and R. Schoen in a series of works \cite{bre,bre-mar,khu-mar-sch} (see also the survey by Marques \cite{mar}).
By their results, the set of solutions of the Yamabe problem \eqref{yamabe} is compact on any compact manifold of dimension $n\le24$ which is not conformally equivalent to the round sphere, while it is not compact on some compact manifolds of dimension $n\ge25.$
Therefore, it is natural to address the question of compactness of the set of positive solutions of (\ref{eq:probK}).
If $Q(M,\partial M)<0$ the solution is unique and if $Q(M,\partial M)=0$ the solution is unique up to a constant factor.
If $Q(M,\partial M)>0$ the situation turns out to be more delicate. Indeed in the case of the euclidean ball $(\mathbb B^n,\mathfrak g_0)$ the set of solutions is not compact!
Felli and Ould-Ahmedou \cite{FO03} proved that compactness holds
when $n\ge3$, $(M,g)$ is locally conformally flat and $\partial M$ is umbilic.
Almaraz in \cite{A3} proved that compactness also holds if $n\ge7$ and the trace-free second fundamental form of $\partial M$ is nonzero everywhere; this last assumption is generic, as a transversality argument shows.
To our knowledge, the only non-compactness result is due to Almaraz: in \cite{A2} he constructs a sequence of blowing-up conformal metrics with zero scalar curvature and constant boundary mean curvature on a ball of dimension $n\ge 25.$
It is unknown whether the dimension $25$ is sharp for compactness, namely whether problem (\ref{eq:probK}) is compact for $n\le24$.\\
In this paper we are interested in the existence of blowing-up solutions to problems which are linear perturbations of the geometric problem \eqref{eq:probK}.
More precisely, we address the following question.
{\it Does the problem
\begin{equation}\label{eq:Peps}
\left\{\begin{aligned}
&L_{g}u=0\ \hbox{in }M\\
&{\partial_\nu}u+\frac{n-2}{2}H_{g}u+\varepsilon \gamma u= u^{ \frac{2(n-1)}{n-2}-1} \ \hbox{on}\ \partial M.
\end{aligned}\right.
\end{equation}
where $\gamma\in C^2(M)$, have positive blowing-up solutions as the positive parameter $\varepsilon$ approaches zero?}
We give a positive answer under suitable geometric assumptions on $M$ and on the sign of the linear perturbation term $\gamma.$ Our main result reads as follows.
\begin{thm}
\label{thm:main} Assume $n\ge7$, $Q(M,\partial M)>0$ and
the trace-free second fundamental form of $\partial M$ is nonzero everywhere.
If the function $\gamma\in C^{1}(M)$ is strictly positive, then for
$\varepsilon>0$ small there exists a positive solution $u_{\varepsilon}$
of (\ref{eq:Peps}) such that $\|u_{\varepsilon}\|_{H^{1}}$ is bounded
and $u_{\varepsilon}$ blows up at a suitable point $q_{0}\in\partial M$
as $\varepsilon\rightarrow0.$
\end{thm}
\begin{rem}
The proof of our result relies on a Ljapunov--Schmidt procedure. We build solutions to \eqref{eq:Peps} which at main order look like the bubble \eqref{bubble} centered at a point $q_0$ on the boundary.
As usual, the blowing-up point $q_0$ turns out to be a critical point of the reduced energy, whose leading term is a function (see \eqref{ridotta}) defined on the boundary
which cannot be written explicitly in terms of the geometric quantities of the boundary. The difficulty comes from the fact that we cannot find an explicit expression for the correction term we need to add
to the bubble to get a good approximation. The correction term solves the linear problem \eqref{eq:vqdef} and it gives a significant contribution to the reduced energy (see \eqref{phiq}).
Actually, we conjecture that the term \eqref{phiq} is, up to a negative constant factor, nothing but the squared norm of the trace-free second fundamental form at $q_0$, so that the blowing-up point $q_0$ would be a critical point of the function
$$q\to{\|\pi(q)\|^2\over\gamma^2(q)},\ q\in\partial M,$$
where $\pi(q)$ denotes the trace-free second fundamental form at $q$.
\end{rem}
\begin{rem}
Theorem \ref{thm:main} states that problem (\ref{eq:Peps}) is not compact if the linear perturbation term is strictly positive on $\partial M.$ We strongly believe that compactness is recovered if the linear perturbation is negative somewhere on $\partial M$. This is what happens for linear perturbations of the Yamabe problem \eqref{yamabe}.
Indeed, consider the perturbed problem
\begin{equation}
L_{g}u+\varepsilon f u=\kappa u^{n+2\over n-2}\ \hbox{in }M, \label{per-yamabe}
\end{equation}
where $\varepsilon$ is a positive parameter and $f\in C^2(M)$.
Druet in \cite{dru} showed that if $f\le0$ in $M,$ blow-up does not occur for $3\le n\le5.$
When $f$ is positive somewhere in $M$, blow-up is possible, as shown by Druet and Hebey in \cite{dru-heb} in the case of the sphere
and by Esposito, Pistoia, and V\'etois in \cite{EPV14} on general compact manifolds.
\end{rem}
\begin{rem}
Almaraz in \cite{A3} studied the compactness of problem \eqref{eq:probK} when the exponent in the boundary non-linearity is below the critical one, and he proved the following result.
\begin{thm}\label{almaraz} Assume $n\ge7$, $Q(M,\partial M)>0$ and
the trace-free second fundamental form of $\partial M$ is nonzero everywhere.
Then the problem
\begin{equation}\label{peps}
\left\{\begin{aligned}
&L_{g}u=0\ \hbox{in }M\\
&{\partial_\nu}u+\frac{n-2}{2}H_{g}u= u^{ \frac{2(n-1)}{n-2}-1-\varepsilon} \ \hbox{on}\ \partial M.
\end{aligned}\right.
\end{equation}
is compact, namely there exist $\varepsilon_0>0$ and a positive constant $C$ such that for any $\varepsilon\in(0,\varepsilon_0)$ any positive solution $u_\varepsilon$ of \eqref{peps} satisfies
$\|u_\varepsilon\|_{C^{2,\alpha}(M)}\le C$ for some $\alpha\in(0,1).$\end{thm} In other words, problem \eqref{peps}
does not have any blowing-up solutions as the positive parameter $\varepsilon$ approaches zero.
Let us point out that, combining our argument with some ideas developed in the previous paper \cite{GMP16}, we can also obtain the existence of blowing-up solutions for problem \eqref{peps} when the parameter
$\varepsilon$ is {\it negative} and small. Hence the compactness result Theorem \ref{almaraz} is sharp: problem \eqref{peps} is compact
if the exponent in the boundary non-linearity approaches the critical one from below, and it is non-compact if the exponent approaches the critical one from above.
\end{rem}
The paper is organized as follows. In Section \ref{uno} we set the problem in a suitable scheme, in Section \ref{due} we perform the finite-dimensional reduction, in Section \ref{tre} we study the reduced problem and in Section \ref{quattro} we prove Theorem \ref{thm:main}. The Appendix contains some technical results.
\section{Variational framework and preliminaries}\label{uno}
It is well known \cite{E92} that there exists a global conformal transformation
which maps the manifold $M$ into a manifold whose boundary has identically zero mean
curvature, so we can choose the metric $g$ on $M$ such
that $H_{g}\equiv0$. This can be done by the global conformal transformation
$g=\varphi_{1}^{4/(n-2)}\bar{g}$, where $\varphi_{1}$ is the positive
eigenfunction associated to the first eigenvalue $\lambda_{1}$ of the problem
\[
\left\{ \begin{array}{ccc}
-L_{\bar g}\varphi+\lambda_{1}\varphi=0 & & \text{in }M;\\
B_{\bar g}\varphi=0 & & \text{on \ensuremath{\partial}}M,
\end{array}\right.
\]
where $B_{\bar g}:=\partial_{\nu}+\frac{n-2}{2}H_{\bar g}$ is the conformal boundary operator.
It is useful to point out that, if $\pi$ denotes the second fundamental form related to $g$ and $q\in\partial M$,
then $\pi(q)$ is non-zero if and only if the trace-free second fundamental form related to $\bar g$ at the point $q$ is non-zero.
By the assumption $Q(M,\partial M)>0$ we have $\kappa>0$ in (\ref{eq:probK}),
so we can normalize it to be $(n-2)$. Moreover, to gain in readability,
we set $a=\frac{n-2}{4(n-1)}R_{g}$, so Problem (\ref{eq:Peps})
reads as
\begin{equation}
\left\{ \begin{array}{ccc}
-\Delta_{g}u+au=0 & & \text{on }M;\\
\frac{\partial u}{\partial\nu}+\varepsilon\gamma u=(n-2)\left(u^{+}\right)^{\frac{n}{n-2}} & & \text{on \ensuremath{\partial}}M.
\end{array}\right.\label{eq:P}
\end{equation}
Since $Q(M,\partial M)>0$, we can endow $H^{1}(M)$ with the following
equivalent scalar product
\[
\left\langle \left\langle u,v\right\rangle \right\rangle _{H}=\int_{M}(\nabla_{g}u\nabla_{g}v+auv)d\mu_{g}
\]
which leads to the equivalent norm $\|\cdot\|_{H}$. We have the well-known
maps
\begin{align*}
i: & H^{1}(M)\rightarrow L^{t}(\partial M)\\
i^{*}: & L^{t'}(\partial M)\rightarrow H^{1}(M)
\end{align*}
for $1\le t\le\frac{2(n-1)}{n-2}$ (and for $1\le t<\frac{2(n-1)}{n-2}$
the embedding $i$ is compact).
Given $f\in L^{\frac{2(n-1)}{n}}(\partial M)$ there exists a unique
$u\in H^{1}(M)$ such that
\begin{align}
u=i^{*}(f) & \iff\left\langle \left\langle u,\varphi\right\rangle \right\rangle _{H}=\int_{\partial M}f\varphi d\sigma\text{ for all }\varphi\in H^{1}(M)\nonumber \\
& \iff\left\{ \begin{array}{ccc}
-\Delta_{g}u+au=0 & & \text{on }M;\\
\frac{\partial u}{\partial\nu}=f & & \text{on \ensuremath{\partial}}M.
\end{array}\right.\label{eq:istella}
\end{align}
The functional defined on $H^{1}(M)$ associated to (\ref{eq:P})
is
\[
J_{\varepsilon}(u):=\frac{1}{2}\int_{M}|\nabla_{g}u|^{2}+au^{2}d\mu_{g}+\frac{1}{2}\int_{\partial M}\varepsilon\gamma u^{2}d\sigma-\frac{(n-2)^{2}}{2(n-1)}\int_{\partial M}\left(u^{+}\right)^{\frac{2(n-1)}{n-2}}d\sigma.
\]
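For future reference, a standard computation gives, for $u,\varphi\in H^{1}(M)$,
\[
J_{\varepsilon}'(u)[\varphi]=\int_{M}\left(\nabla_{g}u\nabla_{g}\varphi+au\varphi\right)d\mu_{g}+\int_{\partial M}\varepsilon\gamma u\varphi\, d\sigma-(n-2)\int_{\partial M}\left(u^{+}\right)^{\frac{n}{n-2}}\varphi\, d\sigma,
\]
so that critical points of $J_{\varepsilon}$ are exactly the weak solutions of (\ref{eq:P}).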
Solving problem (\ref{eq:P}) is equivalent to finding $u\in H^{1}(M)$
such that
\begin{equation}
u=i^{*}(f(u)-\varepsilon\gamma u)\label{eq:P*}
\end{equation}
where $f(u)=(n-2)\left(u^{+}\right)^{\frac{n}{n-2}}$. We remark
that, if $u\in H^{1}(M)$, then $f(u)\in L^{\frac{2(n-1)}{n}}(\partial M)$.
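Indeed, by the trace embedding $u\in L^{\frac{2(n-1)}{n-2}}(\partial M)$, and therefore
\[
\int_{\partial M}|f(u)|^{\frac{2(n-1)}{n}}d\sigma=(n-2)^{\frac{2(n-1)}{n}}\int_{\partial M}\left(u^{+}\right)^{\frac{n}{n-2}\cdot\frac{2(n-1)}{n}}d\sigma=(n-2)^{\frac{2(n-1)}{n}}\int_{\partial M}\left(u^{+}\right)^{\frac{2(n-1)}{n-2}}d\sigma<\infty,
\]
so that $i^{*}(f(u))$ is well defined.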
Given $q\in\partial M$, let $\psi_{q}^{\partial}:\mathbb{R}_{+}^{n}\rightarrow M$
denote the Fermi coordinates in a neighborhood of $q$. We define
\begin{align*}
W_{\delta,q}(\xi) & =U_{\delta}\left(\left(\psi_{q}^{\partial}\right)^{-1}(\xi)\right)\chi\left(\left(\psi_{q}^{\partial}\right)^{-1}(\xi)\right)=\\
 & =\frac{1}{\delta^{\frac{n-2}{2}}}U\left(\frac{y}{\delta}\right)\chi(y)=\frac{1}{\delta^{\frac{n-2}{2}}}U\left(x\right)\chi(\delta x)
\end{align*}
where $y=(z,t)$, with $z\in\mathbb{R}^{n-1}$ and $t\ge0$, $\delta x=y=\left(\psi_{q}^{\partial}\right)^{-1}(\xi)$,
and $\chi$ is a radial cut-off function with support in a ball of
radius $R$.
Here $U_{\delta}(y)=\frac{1}{\delta^{\frac{n-2}{2}}}U\left(\frac{y}{\delta}\right)$
is the one-parameter family of solutions of the problem
\begin{equation}
\left\{ \begin{array}{ccc}
-\Delta U_{\delta}=0 & & \text{on }\mathbb{R}_{+}^{n};\\
\frac{\partial U_{\delta}}{\partial t}=-(n-2)U_{\delta}^{\frac{n}{n-2}} & & \text{on \ensuremath{\partial}}\mathbb{R}_{+}^{n}.
\end{array}\right.\label{eq:Udelta}
\end{equation}
and ${\displaystyle U(z,t):=\frac{1}{\left[(1+t)^{2}+|z|^{2}\right]^{\frac{n-2}{2}}}}$
is the standard bubble in $\mathbb{R}_{+}^{n}$.
Moreover, we consider the functions
\begin{eqnarray*}
j_{i}=\frac{\partial U}{\partial z_{i}},\ i=1,\dots, n-1 & & j_{n}=\frac{n-2}{2}U+\sum_{b=1}^{n}y_{b}\frac{\partial U}{\partial y_{b}}
\end{eqnarray*}
which are solutions of the linearized problem
\begin{equation}
\left\{ \begin{array}{ccc}
-\Delta\phi=0 & & \text{on }\mathbb{R}_{+}^{n};\\
\frac{\partial\phi}{\partial t}+nU^{\frac{2}{n-2}}\phi=0 & & \text{on \ensuremath{\partial}}\mathbb{R}_{+}^{n}.
\end{array}\right.\label{eq:linearizzato}
\end{equation}
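The functions $j_{b}$ arise from the invariances of the limit problem: since the whole family \eqref{bubble} solves \eqref{limite1}, differentiating with respect to the parameters at $(\delta,y)=(1,0)$ produces solutions of the linearized problem. In particular, $j_{i}$ corresponds to translations of the bubble tangent to the boundary, while for the dilations one computes
\[
\left.\frac{\partial}{\partial\delta}\right|_{\delta=1}\delta^{-\frac{n-2}{2}}U\left(\frac{y}{\delta}\right)=-\left(\frac{n-2}{2}U(y)+\sum_{b=1}^{n}y_{b}\frac{\partial U}{\partial y_{b}}(y)\right)=-j_{n}(y).
\]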
Given $q\in\partial M$ we define, for $b=1,\dots,n$
\[
Z_{\delta,q}^{b}(\xi)=\frac{1}{\delta^{\frac{n-2}{2}}}j_{b}\left(\frac{1}{\delta}\left(\psi_{q}^{\partial}\right)^{-1}(\xi)\right)\chi\left(\left(\psi_{q}^{\partial}\right)^{-1}(\xi)\right)
\]
and we decompose $H^{1}(M)$ in the direct sum of the following two
subspaces
\begin{align*}
K_{\delta,q} & =\text{Span}\left\langle Z_{\delta,q}^{1},\dots,Z_{\delta,q}^{n}\right\rangle \\
K_{\delta,q}^{\bot} & =\left\{ \varphi\in H^{1}(M)\ :\ \left\langle \left\langle \varphi,Z_{\delta,q}^{b}\right\rangle \right\rangle _{H}=0,\ b=1,\dots,n\right\}
\end{align*}
and we define the projections
\begin{eqnarray*}
\Pi:H^{1}(M)\rightarrow K_{\delta,q} & & \Pi^{\bot}:H^{1}(M)\rightarrow K_{\delta,q}^{\bot}.
\end{eqnarray*}
Given $q\in\partial M$ we also define in a similar way
\[
V_{\delta,q}(\xi)=\frac{1}{\delta^{\frac{n-2}{2}}}v_{q}\left(\frac{1}{\delta}\left(\psi_{q}^{\partial}\right)^{-1}(\xi)\right)\chi\left(\left(\psi_{q}^{\partial}\right)^{-1}(\xi)\right),
\]
and
\begin{equation}
\left(v_{q}\right)_{\delta}(y)=\frac{1}{\delta^{\frac{n-2}{2}}}v_{q}\left(\frac{y}{\delta}\right);\label{eq:vqdelta}
\end{equation}
here $v_{q}:\mathbb{R}_{+}^{n}\rightarrow\mathbb{R}$ is the unique
solution of the problem
\begin{equation}
\left\{ \begin{array}{ccc}
-\Delta v=2h_{ij}(q)t\partial_{ij}^{2}U & & \text{on }\mathbb{R}_{+}^{n};\\
\frac{\partial v}{\partial t}+nU^{\frac{2}{n-2}}v=0 & & \text{on \ensuremath{\partial}}\mathbb{R}_{+}^{n}.
\end{array}\right.\label{eq:vqdef}
\end{equation}
such that $v_{q}$ is $L^{2}(\mathbb{R}_{+}^{n})$-orthogonal to $j_{b}$
for all $b=1,\dots,n$. Here $h_{ij}$ is the second fundamental form
and we use the Einstein convention of repeated indices. We remark that
\begin{equation}
|\nabla^{r}v_{q}(y)|\le C(1+|y|)^{3-r-n}\text{ for }r=0,1,2,\label{eq:gradvq}
\end{equation}
\begin{equation}
\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{n}{n-2}}v_{q}=0\label{eq:Uvq}
\end{equation}
and
\begin{equation}\label{new}
\int_{\mathbb{R}_{+}^{n}}\Delta v_{q}v_qdzdt\le 0,
\end{equation}
(see \cite[Proposition 5.1 and estimate (5.9)]{A3}).
\begin{prop}
The map $q\mapsto v_{q}$ is in $C^{2}(\partial M)$.\end{prop}
\begin{proof}
Let $q_{0}\in\partial M$. If $q\in\partial M$ is sufficiently close
to $q_{0}$, in Fermi coordinates we have $q=q(y)=\exp_{q_{0}}y$,
with $y\in\mathbb{R}^{n-1}$. So $v_{q}=v_{\exp_{q_{0}}y}$ and we
define
\[
\Gamma_{i}=\left.\frac{\partial}{\partial y_{i}}v_{\exp_{q_{0}}y}\right|_{y=0}.
\]
We prove the result for $\Gamma_{1}$, the other cases being completely
analogous. By (\ref{eq:vqdef}) we have that $\Gamma_{1}$ solves
\[
\left\{ \begin{array}{ccc}
-\Delta\Gamma_{1}=2\left(\left.\frac{\partial}{\partial y_{1}}\left(h_{ij}(q(y))\right)\right|_{y=0}\right)t\partial_{ij}^{2}U & & \text{on }\mathbb{R}_{+}^{n};\\
\frac{\partial\Gamma_{1}}{\partial t}+nU^{\frac{2}{n-2}}\Gamma_{1}=0 & & \text{on \ensuremath{\partial}}\mathbb{R}_{+}^{n}.
\end{array}\right.
\]
and, by the results of \cite{A3}, we know that $\Gamma_{1}$ exists.
We can proceed in an analogous way for the second derivatives.
\end{proof}
We define the useful integral quantity
\[
I_{m}^{\alpha}=\int_{0}^{\infty}\frac{\rho^{\alpha}}{(1+\rho^{2})^{m}}d\rho
\]
and in the appendix (Remark \ref{lem:I-a-m}) we recall some useful
estimates of these integrals.
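For instance, two identities used repeatedly below can be checked directly: an integration by parts (with vanishing boundary terms for $n>3$) gives
\[
I_{n-1}^{n}=\int_{0}^{\infty}\rho^{n-1}\,\frac{\rho\, d\rho}{(1+\rho^{2})^{n-1}}=\frac{n-1}{2(n-2)}\int_{0}^{\infty}\frac{\rho^{n-2}}{(1+\rho^{2})^{n-2}}d\rho=\frac{n-1}{2(n-2)}I_{n-2}^{n-2},
\]
while writing $\rho^{n}=\rho^{n-2}(1+\rho^{2})-\rho^{n-2}$ yields $I_{n-1}^{n}=I_{n-2}^{n-2}-I_{n-1}^{n-2}$, and hence $I_{n-1}^{n-2}=\frac{n-3}{n-1}I_{n-1}^{n}$.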
Finally, we recall the Taylor expansions of the metric
$g$ and of the volume form of $M$ in Fermi coordinates.
Since, without loss of generality, we have chosen a metric for which
$H_{g}\equiv0$, we have the following expansions in a neighborhood
of $y=0$, with the usual notation $y=(z,t)$, where $z\in\mathbb{R}^{n-1}$
and $t\ge0$. Here and in the following, we use the Einstein convention
on the sum of repeated indices. Moreover, we use the convention that
$a,b,c,d=1,\dots,n$ and $i,j,k,l=1,\dots,n-1$.
\begin{align}
|g(y)|^{1/2}= & 1-\frac{1}{2}\left[\|\pi\|^{2}+\ric(0)\right]t^{2}-\frac{1}{6}\bar{R}_{ij}(0)z_{i}z_{j}+O(|y|^{3})\label{eq:|g|}\\
g^{ij}(y)= & \delta_{ij}+2h_{ij}(0)t+\frac{1}{3}\bar{R}_{ikjl}(0)z_{k}z_{l}+2\frac{\partial h_{ij}}{\partial z_{k}}(0)tz_{k}\nonumber \\
& +\left[R_{injn}(0)+3h_{ik}(0)h_{kj}(0)\right]t^{2}+O(|y|^{3})\label{eq:gij}\\
g^{an}(y)= & \delta_{an}\label{eq:gin}
\end{align}
where $\pi$ is the second fundamental form and $h_{ij}(0)$ are its
coefficients, $\bar{R}_{ikjl}(0)$ and $R_{abcd}(0)$ are the curvature
tensors of $\partial M$ and $M$, respectively, $\bar{R}_{ij}(0)=\bar{R}_{ikjk}(0)$
are the coefficients of the Ricci tensor of $\partial M$, and $\ric(0)=R_{nini}(0)=R_{nn}(0)$
(see \cite{E92}).
\section{Finite dimensional reduction}\label{due}
In order to find a good approximation of a solution of problem (\ref{eq:P*}),
we look for solutions of the form
\[
u=W_{\delta,q}+\delta V_{\delta,q}+\Phi,\text{ with }\Phi\in K_{\delta,q}^{\bot},
\]
and we project (\ref{eq:P*}) onto $K_{\delta,q}^{\bot}$ and $K_{\delta,q}$,
obtaining
\begin{align}
\Pi^{\bot}\left\{ W_{\delta,q}+\delta V_{\delta,q}+\Phi-i^{*}\left(f(W_{\delta,q}+\delta V_{\delta,q}+\Phi)-\varepsilon\gamma(W_{\delta,q}+\delta V_{\delta,q}+\Phi)\right)\right\} & =0;\label{eq:P-Kort}\\
\Pi\left\{ W_{\delta,q}+\delta V_{\delta,q}+\Phi-i^{*}\left(f(W_{\delta,q}+\delta V_{\delta,q}+\Phi)-\varepsilon\gamma(W_{\delta,q}+\delta V_{\delta,q}+\Phi)\right)\right\} & =0.\label{eq:P-K}
\end{align}
To solve (\ref{eq:P-Kort}) we define the linear operator $L=L_{\delta,q}:K_{\delta,q}^{\bot}\rightarrow K_{\delta,q}^{\bot}$
as
\begin{equation}
L(\Phi)=\Pi^{\bot}\left\{ \Phi-i^{*}\left(f'(W_{\delta,q}+\delta V_{\delta,q})[\Phi]\right)\right\} \label{eq:defL}
\end{equation}
and a nonlinear term $N(\Phi)$ and a remainder term $R$ as
\begin{align}
N(\Phi)= & \Pi^{\bot}\left\{ i^{*}\left(f(W_{\delta,q}+\delta V_{\delta,q}+\Phi)-f(W_{\delta,q}+\delta V_{\delta,q})-f'(W_{\delta,q}+\delta V_{\delta,q})[\Phi]\right)\right\} \label{eq:defN}\\
R= & \Pi^{\bot}\left\{ i^{*}\left(f(W_{\delta,q}+\delta V_{\delta,q})\right)-W_{\delta,q}-\delta V_{\delta,q}\right\} \label{eq:defR}
\end{align}
so equation (\ref{eq:P-Kort}) can be rewritten as
\[
L(\Phi)=N(\Phi)+R-\Pi^{\bot}\left\{ i^{*}\left(\varepsilon\gamma(W_{\delta,q}+\delta V_{\delta,q}+\Phi)\right)\right\} .
\]
\begin{lem}
\label{prop:L}Let $\delta=\varepsilon\lambda$. For $a,b\in\mathbb{R}$
with $0<a<b$ there exists a positive constant $C=C(a,b)$ such that, for
$\varepsilon$ small, for any $q\in\partial M$, for any $\lambda\in[a,b]$
and for any $\phi\in K_{\delta,q}^{\bot}$ there holds
\[
\|L_{\delta,q}(\phi)\|_{H}\ge C\|\phi\|_{H}.
\]
\end{lem}
The proof of this lemma is postponed to the Appendix.
\begin{lem}
\label{lem:R}Assume $n\ge7$ and $\delta=\lambda\varepsilon$. Then
it holds
\[
\|R\|_{H}=O\left(\varepsilon^{2}\right)
\]
$C^{0}$-uniformly for $q\in\partial M$ and $\lambda$ in a compact
set of $(0,+\infty)$.\end{lem}
\begin{proof}
We recall that there is a unique $\Gamma$ such that
\[
\Gamma=i^{*}\left(f(W_{\delta,q}+\delta V_{\delta,q})\right),
\]
which, according to (\ref{eq:istella}), is equivalent to saying that there
exists a unique $\Gamma$ solving
\[
\left\{ \begin{array}{ccc}
-\Delta_{g}\Gamma+a\Gamma=0 & & \text{on }M;\\
\frac{\partial\Gamma}{\partial\nu}=(n-2)\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}} & & \text{on \ensuremath{\partial}}M.
\end{array}\right.
\]
By definition of $i^{*}$, and since $R$ is the $H$-orthogonal projection of $\Gamma-W_{\delta,q}-\delta V_{\delta,q}$ onto $K_{\delta,q}^{\bot}$, we have that
\begin{align*}
\|R\|_{H}^{2}= & \left\langle \left\langle \Gamma-W_{\delta,q}-\delta V_{\delta,q},R\right\rangle \right\rangle _{H}\\
= & \int_{M}\left[-\Delta_{g}(\Gamma-W_{\delta,q}-\delta V_{\delta,q})+a(\Gamma-W_{\delta,q}-\delta V_{\delta,q})\right]R\, d\mu_{g}\\
 & +\int_{\partial M}\left[\frac{\partial}{\partial\nu}(\Gamma-W_{\delta,q}-\delta V_{\delta,q})\right]R\, d\sigma\\
= & \int_{M}\left[\Delta_{g}(W_{\delta,q}+\delta V_{\delta,q})-a(W_{\delta,q}+\delta V_{\delta,q})\right]R\, d\mu_{g}\\
 & +\int_{\partial M}\left[(n-2)\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}(W_{\delta,q}+\delta V_{\delta,q})\right]R\, d\sigma.
\end{align*}
We have
\begin{equation}
\int_{M}aW_{\delta,q}R\, d\mu_{g}\le c\|W_{\delta,q}\|_{L^{\frac{2n}{n+2}}(M)}\|R\|_{L^{\frac{2n}{n-2}}(M)}\le c\delta^{2}\|U\|_{L^{\frac{2n}{n+2}}(\mathbb{R}_{+}^{n})}\|R\|_{H}\label{eq:WR}
\end{equation}
and $\|U\|_{L^{\frac{2n}{n+2}}(\mathbb{R}_{+}^{n})}$ is finite since
$n>6$. Moreover
\begin{equation}
\delta\int_{M}aV_{\delta,q}R\, d\mu_{g}\le c\delta\|V_{\delta,q}\|_{L^{2}(M)}\|R\|_{L^{2}(M)}\le c\delta^{2}\|v_{q}\|_{L^{2}(\mathbb{R}_{+}^{n})}\|R\|_{H}\label{eq:VR}
\end{equation}
and, in light of (\ref{eq:gradvq}), $\|v_{q}\|_{L^{2}(\mathbb{R}_{+}^{n})}$
is finite since $n>6$.
We have
\begin{align*}
\int_{\partial M}\left[(n-2)W_{\delta,q}^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}W_{\delta,q}\right]Rd\sigma & \le\left\Vert (n-2)W_{\delta,q}^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}W_{\delta,q}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}\|R\|_{H}\\
& \le c\delta^{2}\|R\|_{H}
\end{align*}
since $U$ is a solution of (\ref{eq:Udelta}). In fact
\begin{multline*}
\left\Vert (n-2)W_{\delta,q}^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}W_{\delta,q}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}=\\
\left(\int_{\partial\mathbb{R}_{+}^{n}}|g(\delta z,0)|^{\frac{1}{2}}\left[(n-2)U^{\frac{n}{n-2}}(z,0)\chi^{\frac{n}{n-2}}(\delta z,0)-\chi(\delta z,0)\frac{\partial U}{\partial t}(z,0)\right]^{\frac{2(n-1)}{n}}dz\right)^{\frac{n}{2(n-1)}}\\
\le C\left(\int_{\mathbb{R}^{n-1}}\left[(n-2)U^{\frac{n}{n-2}}(z,0)\left[\chi^{\frac{n}{n-2}}(\delta z,0)-\chi(\delta z,0)\right]\right]^{\frac{2(n-1)}{n}}dz\right)^{\frac{n}{2(n-1)}}=O(\delta^{2}).
\end{multline*}
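The last bound can be seen as follows: $\chi^{\frac{n}{n-2}}(\delta z,0)-\chi(\delta z,0)$ vanishes wherever $\chi(\delta z,0)\in\{0,1\}$, so the integrand is supported in a region $\{|z|\ge\rho/\delta\}$ for some $\rho>0$, where $U^{\frac{n}{n-2}\cdot\frac{2(n-1)}{n}}(z,0)=(1+|z|^{2})^{-(n-1)}$; therefore
\[
\int_{|z|\ge\rho/\delta}\frac{dz}{(1+|z|^{2})^{n-1}}\le C\delta^{n-1},
\]
and raising to the power $\frac{n}{2(n-1)}$ gives $O(\delta^{\frac{n}{2}})=O(\delta^{2})$ for $n\ge4$.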
Now we estimate
\begin{multline*}
\int_{\partial M}\left\{ (n-2)\left[\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}}-W_{\delta,q}^{\frac{n}{n-2}}\right]-\delta\frac{\partial V_{\delta,q}}{\partial\nu}\right\} Rd\sigma\\
\le c\left\Vert (n-2)\left[\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}}-W_{\delta,q}^{\frac{n}{n-2}}\right]-\delta\frac{\partial V_{\delta,q}}{\partial\nu}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}\|R\|_{H}
\end{multline*}
and, by Taylor expansion and by the definition of the function $v_{q}$
(see (\ref{eq:vqdef})),
\begin{multline*}
\left\Vert (n-2)\left[\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}}-W_{\delta,q}^{\frac{n}{n-2}}\right]-\delta\frac{\partial V_{\delta,q}}{\partial\nu}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}\\
\le\left\Vert (n-2)\left[\left((U+\delta v_{q})^{+}\right)^{\frac{n}{n-2}}-U^{\frac{n}{n-2}}\right]+\delta\frac{\partial v_{q}}{\partial t}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial\mathbb{R}_{+}^{n})}+o(\delta^{2})\\
\le\delta\left\Vert n\left((U+\theta\delta v_{q})^{+}\right)^{\frac{2}{n-2}}v_{q}+\frac{\partial v_{q}}{\partial t}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial\mathbb{R}_{+}^{n})}+o(\delta^{2})\\
=\delta n\left\Vert \left((U+\theta\delta v_{q})^{+}\right)^{\frac{2}{n-2}}v_{q}-U^{\frac{2}{n-2}}v_{q}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial\mathbb{R}_{+}^{n})}+o(\delta^{2}).
\end{multline*}
We observe that, for any fixed large positive $R$, we have $U+\theta\delta v_{q}>0$
in $B(0,R)$ for $\delta$ small enough. Moreover, on the complement of
this ball, we have $\frac{c}{|y|^{n-2}}\le U(y)\le\frac{C}{|y|^{n-2}}$
and $|v_{q}(y)|\le\frac{C_{1}}{|y|^{n-3}}$ for some positive constants
$c,C,C_{1}$. So it is possible to prove that, for $\delta$ small
enough, $U+\theta\delta v_{q}>0$ if $|y|\le1/\delta$. At this point
\begin{multline*}
\int_{\partial\mathbb{R}_{+}^{n}}\left[\left|\left((U+\theta\delta v_{q})^{+}\right)^{\frac{2}{n-2}}-U^{\frac{2}{n-2}}\right||v_{q}|\right]^{\frac{2(n-1)}{n}}dz\\
=\int_{U+\theta\delta v_{q}>0}\left[\left|\left((U+\theta\delta v_{q})^{+}\right)^{\frac{2}{n-2}}-U^{\frac{2}{n-2}}\right||v_{q}|\right]^{\frac{2(n-1)}{n}}dz\\
+\int_{U+\theta\delta v_{q}\le0}\left[\left|\left((U+\theta\delta v_{q})^{+}\right)^{\frac{2}{n-2}}-U^{\frac{2}{n-2}}\right||v_{q}|\right]^{\frac{2(n-1)}{n}}dz\\
=\delta^{\frac{2(n-1)}{n}}\int_{U+\theta\delta v_{q}>0}\left(U+\theta_{1}\delta v_{q}\right)^{\frac{-2(n-1)(n-4)}{n(n-2)}}|v_{q}|^{\frac{4(n-1)}{n}}dz\\
+\int_{U+\theta\delta v_{q}\le0}U^{\frac{4(n-1)}{n(n-2)}}|v_{q}|^{\frac{2(n-1)}{n}}dz\\
\le\delta^{\frac{2(n-1)}{n}}\int_{U+\theta\delta v_{q}>0}\left(U+\theta_{1}\delta v_{q}\right)^{\frac{-2(n-1)(n-4)}{n(n-2)}}|v_{q}|^{\frac{4(n-1)}{n}}dz\\
+\int_{|z|>\frac{1}{\delta}}U^{\frac{4(n-1)}{n(n-2)}}|v_{q}|^{\frac{2(n-1)}{n}}dz
\end{multline*}
and, since $n>6$, one can check that $\int_{U+\theta\delta v_{q}>0}\left(U+\theta_{1}\delta v_{q}\right)^{\frac{-2(n-1)(n-4)}{n(n-2)}}|v_{q}|^{\frac{4(n-1)}{n}}dz$
is bounded and that
\begin{align*}
\int_{|z|>\frac{1}{\delta}}U^{\frac{4(n-1)}{n(n-2)}}|v_{q}|^{\frac{2(n-1)}{n}}dz & \le C\int_{|z|>\frac{1}{\delta}}\frac{1}{|z|^{\frac{4(n-1)}{n}}}\frac{1}{|z|^{\frac{2(n-1)(n-3)}{n}}}dz\\
 & \le C\int_{\frac{1}{\delta}}^{\infty}r^{n-2}r^{-\frac{2(n-1)^{2}}{n}}dr=O(\delta^{\frac{(n-1)(n-2)}{n}})=o(\delta^{\frac{2(n-1)}{n}})
\end{align*}
thus $\left\Vert (n-2)\left[\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}}-W_{\delta,q}^{\frac{n}{n-2}}\right]-\delta\frac{\partial V_{\delta,q}}{\partial\nu}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}=O(\delta^{2})$
and
\[
\int_{\partial M}\left\{ (n-2)\left[\left((W_{\delta,q}+\delta V_{\delta,q})^{+}\right)^{\frac{n}{n-2}}-W_{\delta,q}^{\frac{n}{n-2}}\right]-\delta\frac{\partial V_{\delta,q}}{\partial\nu}\right\} Rd\sigma\le c\delta^{2}\|R\|_{H}.
\]
To complete the proof we have to estimate
\[
\int_{M}\left[\Delta_{g}(W_{\delta,q}+\delta V_{\delta,q})\right]Rd\mu_{g}\le\|\Delta_{g}(W_{\delta,q}+\delta V_{\delta,q})\|_{L^{\frac{2n}{n+2}}(M)}\|R\|_{H}.
\]
We recall that in local charts the Laplace--Beltrami operator is
\begin{eqnarray*}
\Delta_{g}W_{\delta,q} & = & \Delta_{\text{euc}}\left(U_{\delta}(y)\chi(y)\right)+[g^{ij}(y)-\delta_{ij}]\partial_{ij}^{2}\left(U_{\delta}(y)\chi(y)\right)\\
 &  & -g^{ij}(y)\Gamma_{ij}^{k}(y)\partial_{k}\left(U_{\delta}(y)\chi(y)\right)
\end{eqnarray*}
where $i,j,k=1,\dots,n-1$, $\Delta_{\text{euc}}$ is the euclidean
Laplacian, and $\Gamma_{ij}^{k}$ are the Christoffel symbols. Notice
that, by (\ref{eq:|g|}) and (\ref{eq:gij}), we have that $\Gamma_{ij}^{k}(y)=O(|y|)$.
Now, by (\ref{eq:Udelta}) and (\ref{eq:gij}) we have, in variables
$y=\delta x$,
\begin{eqnarray}
\Delta_{g}W_{\delta,q} & = & U_{\delta}(y)\Delta_{\text{euc}}\left(\chi(y)\right)+2\nabla U_{\delta}(y)\nabla\chi(y)\nonumber \\
 &  & +[g^{ij}(y)-\delta_{ij}]\partial_{ij}^{2}\left(U_{\delta}(y)\chi(y)\right)-g^{ij}(y)\Gamma_{ij}^{k}(y)\partial_{k}\left(U_{\delta}(y)\chi(y)\right)\nonumber \\
 & = & \frac{1}{\delta^{\frac{n-2}{2}}}\left(2h_{ij}(0)\delta x_{n}\frac{1}{\delta^{2}}\partial_{ij}^{2}U(x)+g^{ij}(\delta x)\Gamma_{ij}^{k}(\delta x)\frac{1}{\delta}\partial_{k}U(x)+o(\delta)c(x)\right)\nonumber \\
 & = & \frac{1}{\delta^{\frac{n}{2}}}\left(2h_{ij}(0)x_{n}\partial_{ij}^{2}U(x)+O(\delta)c(x)\right)\label{eq:R1}
\end{eqnarray}
where, with abuse of notation, we call $c(x)$ a suitable function
such that $\left|\int_{\mathbb{R}^{n}}c(x)dx\right|\le C$ for some
$C\in\mathbb{R}^{+}$.
In a similar way, by (\ref{eq:vqdef}) and by (\ref{eq:gij}) we have
\begin{multline}
\delta\Delta_{g}V_{\delta,q}=\\
\frac{\delta}{\delta^{\frac{n-2}{2}}}\left(\frac{1}{\delta^{2}}\Delta_{\text{euc}}v_{q}(x)+\frac{1}{\delta^{2}}[g^{ij}(\delta x)-\delta_{ij}]\partial_{ij}^{2}v_{q}(x)+g^{ij}(\delta x)\Gamma_{ij}^{k}(\delta x)\frac{1}{\delta}\partial_{k}v_{q}(x)+o(\delta^{2})c(x)\right)\\
=\frac{1}{\delta^{\frac{n}{2}}}\left(-2h_{ij}(0)x_{n}\partial_{ij}^{2}U(x)+O(\delta)c(x)\right)\label{eq:R2}
\end{multline}
Thus, in local charts, by (\ref{eq:R1}) and (\ref{eq:R2}) we get
\begin{equation}
\|\Delta_{g}(W_{\delta,q}+\delta V_{\delta,q})\|_{L^{\frac{2n}{n+2}}(M)}=\delta^{\frac{n+2}{2}}\frac{1}{\delta^{\frac{n}{2}}}O(\delta)=O(\delta^{2})\label{eq:deltaw+v}
\end{equation}
and we obtain the proof, once we set $\delta=\lambda\varepsilon$. \end{proof}
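\begin{rem}
For the reader's convenience, we record the scaling computation behind \eqref{eq:deltaw+v}: if $F=\delta^{-\frac{n}{2}}c\left(\frac{\cdot}{\delta}\right)$ in Fermi coordinates, with $c\in L^{\frac{2n}{n+2}}(\mathbb{R}_{+}^{n})$ (as one can check for the functions appearing in \eqref{eq:R1} and \eqref{eq:R2}), then the change of variables $y=\delta x$ gives
\[
\|F\|_{L^{\frac{2n}{n+2}}(M)}\le C\delta^{\frac{n+2}{2}}\delta^{-\frac{n}{2}}\|c\|_{L^{\frac{2n}{n+2}}(\mathbb{R}_{+}^{n})}=C\delta\|c\|_{L^{\frac{2n}{n+2}}(\mathbb{R}_{+}^{n})},
\]
since $dy=\delta^{n}dx$ contributes $\delta^{\frac{n}{p}}=\delta^{\frac{n+2}{2}}$ for $p=\frac{2n}{n+2}$; the extra factor $O(\delta)$ in \eqref{eq:R1} and \eqref{eq:R2} then yields the rate $O(\delta^{2})$.
\end{rem}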
\begin{rem}
\label{rem:N}We have that the nonlinear operator $N$ (see (\ref{eq:defN}))
is a contraction on small balls. By the properties of $i^{*}$ and using the expansion
of $f(W_{\delta,q}+\phi_{1}+\delta V_{\delta,q})$ centered
at $W_{\delta,q}+\phi_{2}+\delta V_{\delta,q}$ we have
\begin{multline*}
\|N(\phi_{1})-N(\phi_{2})\|_{H}\\
\le\left\Vert \left(f'\left(W_{\delta,q}+\theta\phi_{1}+(1-\theta)\phi_{2}+\delta V_{\delta,q}\right)-f'(W_{\delta,q}+\delta V_{\delta,q})\right)[\phi_{1}-\phi_{2}]\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}
\end{multline*}
and, since $|\phi_{1}-\phi_{2}|^{\frac{2(n-1)}{n}}\in L^{\frac{n}{n-2}}(\partial M)$
and $|f'(\cdot)|^{\frac{2(n-1)}{n}}\in L^{\frac{n}{2}}(\partial M)$,
by the H\"older inequality we have
\begin{multline*}
\|N(\phi_{1})-N(\phi_{2})\|_{H}\\
\le\left\Vert f'\left(W_{\delta,q}+\theta\phi_{1}+(1-\theta)\phi_{2}+\delta V_{\delta,q}\right)-f'(W_{\delta,q}+\delta V_{\delta,q})\right\Vert _{L^{n-1}(\partial M)}\|\phi_{1}-\phi_{2}\|_{H}\\
=\beta\|\phi_{1}-\phi_{2}\|_{H}
\end{multline*}
where
$$\beta=\left\Vert f'\left(W_{\delta,q}+\theta\phi_{1}+(1-\theta)\phi_{2}+\delta V_{\delta,q}\right)-f'(W_{\delta,q}+\delta V_{\delta,q})\right\Vert _{L^{n-1}(\partial M)}<1,$$
provided $\|\phi_{1}\|_{H}$ and $\|\phi_{2}\|_{H}$ are sufficiently
small.
In the same way one can prove that $\|N(\phi)\|_{H}\le\bar\beta\|\phi\|_{H}$
with $\bar\beta<1$ if $\|\phi\|_{H}$ is sufficiently small.\end{rem}
\begin{prop}
\label{prop:EsistenzaPhi}Let $\delta=\varepsilon\lambda$. For $a,b\in\mathbb{R}$
with $0<a<b$ there exists a positive constant $C=C(a,b)$ such that, for
$\varepsilon$ small, for any $q\in\partial M$ and for any $\lambda\in[a,b]$
there exists a unique $\Phi=\Phi_{\varepsilon,\delta,q}\in K_{\delta,q}^{\bot}$
which solves (\ref{eq:P-Kort}) and satisfies
\[
\|\Phi\|_{H}\le C\varepsilon^{2}.
\]
\end{prop}
\begin{proof}
By Remark \ref{rem:N} we have that $N$ is a contraction. Moreover,
by Lemma \ref{prop:L} and by Lemma \ref{lem:R} there exists $C>0$
such that
\[
\left\Vert L^{-1}\left(N(\phi)+R-\Pi^{\bot}\left\{ i^{*}\left(\varepsilon\gamma(W_{\delta,q}+\delta V_{\delta,q}+\phi)\right)\right\} \right)\right\Vert _{H}\le C\left((\beta+\varepsilon)\|\phi\|_{H}+\varepsilon^{2}\right).
\]
In fact, we have
\begin{align*}
\left\Vert i^{*}\left(\varepsilon\gamma(W_{\varepsilon\lambda,q}+\varepsilon\lambda V_{\varepsilon\lambda,q}+\phi)\right)\right\Vert _{H} & \le c\varepsilon\left(\left\Vert W_{\varepsilon\lambda,q}+\varepsilon\lambda V_{\varepsilon\lambda,q}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}+\left\Vert \phi\right\Vert _{H}\right)\\
 & \le C(\varepsilon^{2}+\varepsilon\left\Vert \phi\right\Vert _{H}).
\end{align*}
Notice that, given $C>0$, by Remark \ref{rem:N} it is possible (choosing
$\|\phi\|_{H}$ sufficiently small) to have $0<C(\beta+\varepsilon)<1/2$.
Now, if $\|\phi\|_{H}\le2C\varepsilon^{2}$ then the map
\[
T(\phi):=L^{-1}\left(N(\phi)+R-\Pi^{\bot}\left\{ i^{*}\left(\varepsilon\gamma(W_{\delta,q}+\delta V_{\delta,q}+\phi)\right)\right\} \right)
\]
is a contraction from the ball $\|\phi\|_{H}\le2C\varepsilon^{2}$
into itself, so, by the Banach fixed point theorem, there exists a unique $\Phi$
with $\|\Phi\|_{H}\le2C\varepsilon^{2}$ solving (\ref{eq:P-Kort}).
The regularity of the map $q\mapsto\Phi$ can be proven via the implicit
function theorem.
\end{proof}
\section{The reduced functional}\label{tre}
\begin{lem}
\label{lem:JWpiuPhi}Assume $n\ge7$ and $\delta=\lambda\varepsilon$.
Then it holds
\[
J_{\varepsilon}(W_{\delta,q}+\delta V_{\delta,q}+\Phi)-J_{\varepsilon}(W_{\delta,q}+\delta V_{\delta,q})=o\left(\varepsilon^{2}\right)
\]
$C^{0}$-uniformly for $q\in\partial M$ and $\lambda$ in a compact
set of $(0,+\infty)$.\end{lem}
\begin{proof}
We know that $\|\Phi\|_{H}=O(\varepsilon^{2})$, so we estimate, for
some $\theta\in(0,1)$
\begin{multline*}
J_{\varepsilon}(W_{\delta,q}+\delta V_{\delta,q}+\Phi)-J_{\varepsilon}(W_{\delta,q}+\delta V_{\delta,q})=J_{\varepsilon}'(W_{\delta,q}+\delta V_{\delta,q})[\Phi]\\
+\frac12 J_{\varepsilon}''(W_{\delta,q}+\delta V_{\delta,q}+\theta\Phi)[\Phi,\Phi]\\
=\int_{M}\left(\nabla_{g}W_{\delta,q}+\delta\nabla_{g}V_{\delta,q}\right)\nabla\Phi+a\left(W_{\delta,q}+\delta V_{\delta,q}\right)\Phi d\mu_{g}\\
+\int_{\partial M}\varepsilon\gamma\left(W_{\delta,q}+\delta V_{\delta,q}\right)\Phi d\sigma-(n-2)\int_{\partial M}\left(\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{+}\right)^{\frac{n}{n-2}}\Phi d\sigma\\
+\frac12 \int_{M}|\nabla\Phi|^{2}+a\Phi^{2}d\mu_{g}+\frac12\int_{\partial M}\varepsilon\gamma\Phi^{2}d\sigma\\
-\frac{n}2\int_{\partial M}\left(\left(W_{\delta,q}+\delta V_{\delta,q}+\theta\Phi\right)^{+}\right)^{\frac{2}{n-2}}\Phi^{2}d\sigma.
\end{multline*}
Immediately we have, by the H\"older inequality and setting $\delta=\varepsilon\lambda$,
\[
\int_{M}|\nabla\Phi|^{2}+a\Phi^{2}d\mu_{g}+\int_{\partial M}\varepsilon\gamma\Phi^{2}d\sigma\le C\|\Phi\|_{H}^{2}=o(\varepsilon^{2});
\]
\[
\int_{M}aW_{\delta,q}\Phi d\mu_{g}\le C\|W_{\delta,q}\|_{L^{\frac{2n}{n+2}}(M)}\|\Phi\|_{L^{\frac{2n}{n-2}}(M)}\le C\delta^{2}\|\Phi\|_{H}=o(\varepsilon^{2});
\]
\[
\delta\int_{M}aV_{\delta,q}\Phi d\mu_{g}\le C\delta\|V_{\delta,q}\|_{L^{2}(M)}\|\Phi\|_{L^{2}(M)}\le C\delta^{2}\|\Phi\|_{H}=o(\varepsilon^{2});
\]
\begin{align*}
\int_{\partial M}\varepsilon\gamma\left(W_{\delta,q}+\delta V_{\delta,q}\right)\Phi d\sigma & \le C\varepsilon\|W_{\delta,q}+\delta V_{\delta,q}\|_{L^{\frac{2(n-1)}{n}}(\partial M)}\|\Phi\|_{L^{\frac{2(n-1)}{n-2}}(\partial M)}\\
& \le\varepsilon C\delta\|\Phi\|_{H}=o(\varepsilon^{2})
\end{align*}
\begin{align*}
\int_{\partial M}\left(\left(W_{\delta,q}+\delta V_{\delta,q}+\theta\Phi\right)^{+}\right)^{\frac{2}{n-2}}\Phi^{2}d\sigma & \le C\|\Phi\|_{H}^{2}\left(\left\Vert W_{\delta,q}+\delta V_{\delta,q}+\theta\Phi\right\Vert _{L^{\frac{2(n-1)}{n-2}}(\partial M)}^{\frac{2}{n-2}}\right)\\
& \le C\|\Phi\|_{H}^{2}=o(\varepsilon^{2});
\end{align*}
By integration by parts we have
\begin{multline*}
\int_{M}\left(\nabla_{g}W_{\delta,q}+\delta\nabla_{g}V_{\delta,q}\right)\nabla\Phi\, d\mu_{g}=-\int_{M}\Delta_{g}\left(W_{\delta,q}+\delta V_{\delta,q}\right)\Phi\, d\mu_{g}\\
+\int_{\partial M}\left(\frac{\partial}{\partial\nu}W_{\delta,q}+\delta\frac{\partial}{\partial\nu}V_{\delta,q}\right)\Phi\, d\sigma.
\end{multline*}
and, as in (\ref{eq:deltaw+v}) we get
\[
\int_{M}\Delta_{g}\left(W_{\delta,q}+\delta V_{\delta,q}\right)\Phi d\mu_{g}\le\|\Delta_{g}(W_{\delta,q}+\delta V_{\delta,q})\|_{L^{\frac{2n}{n+2}}(M)}\|\Phi\|_{H}=O(\delta^{2})\|\Phi\|_{H}=o(\varepsilon^{2})
\]
once we set $\delta=\varepsilon\lambda$. Moreover, by the H\"older inequality,
\[
\int_{\partial M}\delta\frac{\partial}{\partial\nu}V_{\delta,q}\Phi\, d\sigma\le\delta\left\Vert \frac{\partial}{\partial\nu}V_{\delta,q}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}\|\Phi\|_{L^{\frac{2(n-1)}{n-2}}(\partial M)}\le O(\delta)\|\Phi\|_{H}=o(\varepsilon^{2}).
\]
In the end we need to verify that
\begin{multline*}
\int_{\partial M}\left[(n-2)\left(\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{+}\right)^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}W_{\delta,q}\right]\Phi\, d\sigma\\
\le\left\Vert (n-2)\left(\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{+}\right)^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}W_{\delta,q}\right\Vert _{L^{\frac{2(n-1)}{n}}(\partial M)}\|\Phi\|_{L^{\frac{2(n-1)}{n-2}}(\partial M)}\\
=o(1)\|\Phi\|_{H}=o(\varepsilon^{2}).
\end{multline*}
In fact, by (\ref{eq:vqdelta}), (\ref{eq:vqdef}) and by Taylor expansion
we have
\begin{multline*}
\int_{\partial M}\left[(n-2)\left(\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{+}\right)^{\frac{n}{n-2}}-\frac{\partial}{\partial\nu}W_{\delta,q}\right]^{\frac{2(n-1)}{n}}d\sigma\\
\le\int_{\partial\mathbb{R}_{+}^{n}}\left[(n-2)\left(\left(U_{\delta}+\delta\left(v_{q}\right)_{\delta}\right)^{+}\right)^{\frac{n}{n-2}}+\frac{\partial}{\partial t}U_{\delta}\right]^{\frac{2(n-1)}{n}}dz+o(1)\\
\le\int_{\partial\mathbb{R}_{+}^{n}}\left[n\left(\left(U_{\delta}+\theta\delta\left(v_{q}\right)_{\delta}\right)^{+}\right)^{\frac{2}{n-2}}\delta\left(v_{q}\right)_{\delta}\right]^{\frac{2(n-1)}{n}}dz+o(1)=o(1),
\end{multline*}
which concludes the proof.\end{proof}
\begin{prop}
\label{lem:expJeps}Assume $n\ge7$ and $\delta=\lambda\varepsilon$.
It holds
\[
J_{\varepsilon}(W_{\lambda\varepsilon,q}+\lambda\varepsilon V_{\lambda\varepsilon,q})=A+\varepsilon^{2}\left[\lambda B\gamma(q)+\lambda^{2}\varphi(q)\right]+o(\varepsilon^{2}),
\]
$C^{0}$-uniformly for $q\in\partial M$ and $\lambda$ in a compact
set of $(0,+\infty)$, where (see \eqref{new})
\begin{equation}\label{phiq}
\varphi(q)= \frac{1}{2}\int_{\mathbb{R}_{+}^{n}}\Delta v_{q}v_qdzdt-\frac{(n-6)(n-2)\omega_{n-1}I_{n-1}^{n}}{4(n-1)^{2}(n-4)}\|\pi(q)\|^{2}\le0.
\end{equation}
\[
B=\frac{n-2}{n-1}\omega_{n-1}I_{n-1}^{n}>0
\]
and
\begin{align*}
A & =\frac{1}{2}\int_{\mathbb{R}_{+}^{n}}|\nabla U(z,t)|^{2}dzdt-\frac{(n-2)^{2}}{2(n-1)}\int_{\partial\mathbb{R}_{+}^{n}}U(z,0)^{\frac{2(n-1)}{n-2}}dz\\
& =\frac{(n-2)(n-3)}{2(n-1)^{2}}\omega_{n-1}I_{n-1}^{n}>0
\end{align*}
\end{prop}
\begin{rem}
Notice that $A$ is the energy level $J_{\infty}(U)=\inf_{u\in H^{1}(\mathbb{R}_{+}^{n})}J_{\infty}(u)$,
where $J_{\infty}$ is the functional associated to the limit equation
(\ref{eq:Udelta}).\end{rem}
\begin{proof}
We expand in $\delta$ the functional
\begin{align*}
J_{\varepsilon}(W_{\delta,q}+\delta V_{\delta,q})= & \frac{1}{2}\int_{M}|\nabla_{g}W_{\delta,q}+\delta\nabla_{g}V_{\delta,q}|^{2}d\mu_{g}+\frac{1}{2}\int_{M}a\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{2}d\mu_{g}\\
& +\frac{1}{2}\int_{\partial M}\varepsilon\gamma\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{2}d\sigma\\
& -\frac{(n-2)^{2}}{2(n-1)}\int_{\partial M}\left[\left(\left(W_{\delta,q}+\delta V_{\delta,q}\right)^{+}\right)^{\frac{2(n-1)}{n-2}}-\left(W_{\delta,q}\right)^{\frac{2(n-1)}{n-2}}\right]d\sigma\\
& -\frac{(n-2)^{2}}{2(n-1)}\int_{\partial M}\left(W_{\delta,q}\right)^{\frac{2(n-1)}{n-2}}d\sigma=I_{1}+I_{2}+I_{3}+I_{4}+I_{5}.
\end{align*}
For the term $I_{2}$, by Remark \ref{lem:I-a-m} in the appendix,
we have, by change of variables,
\begin{align}
I_{2} & =\frac{1}{2}\delta^{2}\int_{\mathbb{R}_{+}^{n}}\tilde{a}(\delta y)\left(U(y)\chi(\delta y)+\delta v_{q}(y)\chi(\delta y)\right)^{2}|g(\delta y)|^{1/2}dy\nonumber \\
& =\frac{1}{2}\delta^{2}a(q)\int_{\mathbb{R}_{+}^{n}}U(y)^{2}dy+o(\delta^{2})\nonumber \\
& =\delta^{2}a(q)\frac{n-2}{(n-1)(n-4)}\omega_{n-1}I_{n-1}^{n}+o(\delta^{2})\label{eq:sviluppo1}
\end{align}
in fact by Remark \ref{lem:I-a-m} we have
\begin{align*}
\int_{\mathbb{R}_{+}^{n}}U(y)^{2}dy & =\frac{1}{n-4}\omega_{n-1}I_{n-2}^{n-2}=\frac{2(n-2)}{(n-4)(n-1)}\omega_{n-1}I_{n-1}^{n}
\end{align*}
For the term $I_{3}$, recalling that $y=(z,t)$ with $z\in\mathbb{R}^{n-1}$,
$t\ge0$, we have, by Remark \ref{lem:I-a-m},
\begin{align}
I_{3} & =\frac{\varepsilon\delta}{2}\int_{\mathbb{R}^{n-1}}\tilde{\gamma}(\delta z,0)\left(U(z,0)\chi(\delta z,0)+\delta v_{q}(z,0)\chi(\delta z,0)\right)^{2}|g(\delta z,0)|^{1/2}dz\nonumber \\
 & =\frac{\varepsilon\delta}{2}\gamma(q)\int_{\mathbb{R}^{n-1}}U(z,0)^{2}dz+o(\varepsilon\delta)=\frac{\varepsilon\delta}{2}\gamma(q)\int_{\mathbb{R}^{n-1}}\frac{dz}{\left[1+|z|^{2}\right]^{n-2}}\nonumber \\
 & =\varepsilon\delta\frac{\gamma(q)}{2}\omega_{n-1}I_{n-2}^{n-2}=\varepsilon\delta\gamma(q)\frac{n-2}{n-1}\omega_{n-1}I_{n-1}^{n}\label{eq:sviluppo2}
\end{align}
For the term $I_{5}$, by (\ref{eq:|g|}) we have
\begin{align*}
I_{5} & =-\frac{(n-2)^{2}}{2(n-1)}\int_{\mathbb{R}^{n-1}}\left(U(z,0)\chi(\delta z,0)\right)^{\frac{2(n-1)}{n-2}}|g(\delta z,0)|^{1/2}dz\\
 & =-\frac{(n-2)^{2}}{2(n-1)}\int_{\mathbb{R}^{n-1}}U(z,0)^{\frac{2(n-1)}{n-2}}\left(1-\frac{\delta^{2}}{6}\bar{R}_{ij}(q)z_{i}z_{j}\right)dz+o(\delta^{2});
\end{align*}
by Remark \ref{lem:I-a-m} it holds
\[
\int_{\mathbb{R}^{n-1}}U(z,0)^{\frac{2(n-1)}{n-2}}dz=\omega_{n-1}I_{n-1}^{n-2}
\]
and, by symmetry reasons,
\begin{align*}
\bar{R}_{ij}(q)\int_{\mathbb{R}^{n-1}}U(z,0)^{\frac{2(n-1)}{n-2}}z_{i}z_{j}dz & =\sum_{i=1}^{n-1}\bar{R}_{ii}(q)\int_{\mathbb{R}^{n-1}}U(z,0)^{\frac{2(n-1)}{n-2}}z_{i}^{2}dz\\
= & \frac{\bar{R}_{ii}(q)}{n-1}\int_{\mathbb{R}^{n-1}}\frac{|z|^{2}dz}{(1+|z|^{2})^{n-1}}=\frac{\bar{R}_{ii}(q)}{n-1}\omega_{n-1}I_{n-1}^{n}.
\end{align*}
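Here and in what follows we repeatedly use the elementary second-moment identity
\[
\int_{\mathbb{R}^{n-1}}z_{i}z_{j}\, f(|z|)\, dz=\frac{\delta_{ij}}{n-1}\int_{\mathbb{R}^{n-1}}|z|^{2}f(|z|)\, dz,
\]
valid for any radial $f$ with finite second moments: the off-diagonal terms vanish by oddness, while the $n-1$ diagonal terms coincide and sum to $\int_{\mathbb{R}^{n-1}}|z|^{2}f\, dz$.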
Thus, since $I_{n-1}^{n-2}=\frac{n-3}{n-1}I_{n-1}^{n}$ by Remark
\ref{lem:I-a-m},
\begin{align}
I_{5} & =-\frac{(n-2)^{2}}{2(n-1)}\omega_{n-1}\left(I_{n-1}^{n-2}-\frac{\delta^{2}}{6(n-1)}\bar{R}_{ii}(q)I_{n-1}^{n}\right)+o(\delta^{2})\nonumber \\
 & =-\frac{(n-2)^{2}(n-3)}{2(n-1)^{2}}\omega_{n-1}I_{n-1}^{n}+\delta^{2}\frac{(n-2)^{2}}{12(n-1)^{2}}\bar{R}_{ii}(q)\omega_{n-1}I_{n-1}^{n}+o(\delta^{2}).\label{eq:sviluppo3}
\end{align}
For the term $I_{1}$ we write
\[
I_{1}=\frac{1}{2}\int_{M}|\nabla_{g}W_{\delta,q}|^{2}+\frac{1}{2}\int_{M}2\delta\nabla W_{\delta,q}\nabla V_{\delta,q}+\delta^{2}|\nabla_{g}V_{\delta,q}|^{2}d\mu_{g}=I_{1}'+I_{1}''+I_{1}'''
\]
and we proceed by estimating each term separately. By (\ref{eq:|g|}),
(\ref{eq:gin}), (\ref{eq:gij}), we have (here $a,b=1,\dots,n$ and
$i,j,m,l=1,\dots,n-1$)
\begin{multline*}
I_{1}'=\frac{1}{2}\int_{\mathbb{R}_{+}^{n}}g^{ab}(\delta y)\frac{\partial}{\partial y_{a}}(U(y)\chi(\delta y))\frac{\partial}{\partial y_{b}}(U(y)\chi(\delta y))|g(\delta y)|^{1/2}dy\\
=\int_{\mathbb{R}_{+}^{n}}\left[\frac{|\nabla U|^{2}}{2}+\left(\delta h_{ij}t+\frac{\delta^{2}}{6}\bar{R}_{ikjl}z_{k}z_{l}+\delta^{2}\frac{\partial h_{ij}}{\partial z_{k}}tz_{k}+\frac{\delta^{2}}{2}\left[R_{injn}+3h_{ik}h_{kj}\right]t^{2}\right)\frac{\partial U}{\partial z_{i}}\frac{\partial U}{\partial z_{j}}\right]\\
\times\left(1-\frac{\delta^{2}}{2}\left[\|\pi\|^{2}+\ric(0)\right]t^{2}-\frac{\delta^{2}}{6}\bar{R}_{lm}(0)z_{l}z_{m}\right)dzdt+o(\delta^{2}).
\end{multline*}
Since $\frac{\partial U}{\partial z_{i}}=(2-n)\frac{z_{i}}{\left[(1+t)^{2}+|z|^{2}\right]^{\frac{n}{2}}}$,
by symmetry reasons and since $h_{ii}\equiv0$ we have that
\[
h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial U}{\partial z_{i}}\frac{\partial U}{\partial z_{j}}dzdt=(n-2)^{2}h_{ii}(q)\int_{\mathbb{R}_{+}^{n}}\frac{tz_{i}z_{i}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n}}=0
\]
\[
\frac{\partial h_{ij}}{\partial z_{k}}(q)\int_{\mathbb{R}_{+}^{n}}tz_{k}\frac{\partial U}{\partial z_{i}}\frac{\partial U}{\partial z_{j}}dzdt=(n-2)^{2}\frac{\partial h_{ij}}{\partial z_{k}}(q)\int_{\mathbb{R}_{+}^{n}}\frac{tz_{k}z_{i}z_{j}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n}}=0;
\]
in a similar way, using the symmetries of the curvature tensor, one
can check that
\begin{align*}
\bar{R}_{ikjl}(q)\int_{\mathbb{R}_{+}^{n}}z_{k}z_{l}\frac{\partial U}{\partial z_{i}}\frac{\partial U}{\partial z_{j}}dzdt & =(n-2)^{2}\bar{R}_{ikjl}(q)\int_{\mathbb{R}_{+}^{n}}\frac{z_{i}z_{j}z_{k}z_{l}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n}}\\
 & =(n-2)^{2}\frac{\alpha}{3}\left(\bar{R}_{ikik}(q)+\bar{R}_{ikki}(q)+\bar{R}_{iijj}(q)\right)=0
\end{align*}
where $\alpha=\int_{\mathbb{R}_{+}^{n}}\frac{z_{1}^{4}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n}}$.
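The factor $\frac{\alpha}{3}$ comes from the classical fourth-moment identity for radial weights,
\[
\int_{\mathbb{R}_{+}^{n}}z_{i}z_{j}z_{k}z_{l}\, f(|z|,t)\, dzdt=\frac{\alpha}{3}\left(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right),\qquad\alpha=\int_{\mathbb{R}_{+}^{n}}z_{1}^{4}\, f(|z|,t)\, dzdt,
\]
whose contraction with $\bar{R}_{ikjl}(q)$ produces the three traces displayed above, which cancel by the symmetries of the curvature tensor.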
Thus, using again symmetry
\begin{align*}
I_{1}'= & \int_{\mathbb{R}_{+}^{n}}\left[\frac{|\nabla U|^{2}}{2}+\frac{\delta^{2}}{2}\left[R_{injn}+3h_{ik}h_{kj}\right]t^{2}\frac{\partial U}{\partial z_{i}}\frac{\partial U}{\partial z_{j}}\right]\\
 & \times\left(1-\frac{\delta^{2}}{2}\left[\|\pi\|^{2}+\ric(0)\right]t^{2}-\frac{\delta^{2}}{6}\bar{R}_{lm}(0)z_{l}z_{m}\right)dzdt+o(\delta^{2})\\
= & \frac{(n-2)^{2}}{2}\int_{\mathbb{R}_{+}^{n}}\frac{dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n-1}}\\
 & +\frac{\delta^{2}}{2}\frac{(n-2)^{2}}{n-1}\left[\ric(q)+3\|\pi(q)\|^{2}\right]\int_{\mathbb{R}_{+}^{n}}\frac{|z|^{2}t^{2}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n}}\\
 & -\frac{\delta^{2}(n-2)^{2}}{4}\left[\|\pi(q)\|^{2}+\ric(q)\right]\int_{\mathbb{R}_{+}^{n}}\frac{t^{2}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n-1}}\\
 & -\frac{\delta^{2}}{12}\frac{(n-2)^{2}}{n-1}\bar{R}_{ll}(q)\int_{\mathbb{R}_{+}^{n}}\frac{|z|^{2}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n-1}}+o(\delta^{2}).
\end{align*}
Thus, by Remark \ref{lem:I-a-m},
\begin{align}
I_{1}'= & \frac{(n-2)\omega_{n-1}I_{n-1}^{n-2}}{2}+\delta^{2}\frac{(n-2)\omega_{n-1}I_{n}^{n}}{(n-1)(n-3)(n-4)}\left[\ric(q)+3\|\pi(q)\|^{2}\right]\nonumber \\
& -\delta^{2}\frac{(n-2)\omega_{n-1}I_{n-1}^{n-2}}{2(n-3)(n-4)}\left[\ric(q)+\|\pi(q)\|^{2}\right]\nonumber \\
& -\delta^{2}\frac{(n-2)^{2}\omega_{n-1}I_{n-1}^{n}}{12(n-1)(n-4)}\bar{R}_{ll}(q)+o(\delta^{2})\nonumber \\
= & \frac{(n-2)(n-3)}{2(n-1)}\omega_{n-1}I_{n-1}^{n}+\delta^{2}\frac{(n-2)}{2(n-1)^{2}(n-4)}\omega_{n-1}I_{n-1}^{n}\left[\ric(q)+3\|\pi(q)\|^{2}\right]\nonumber \\
& -\delta^{2}\frac{(n-2)}{2(n-1)(n-4)}\omega_{n-1}I_{n-1}^{n}\left[\ric(q)+\|\pi(q)\|^{2}\right]\nonumber \\
& -\delta^{2}\frac{(n-2)^{2}}{12(n-1)(n-4)}\bar{R}_{ll}(q)\omega_{n-1}I_{n-1}^{n}+o(\delta^{2})\label{eq:sviluppo4}
\end{align}
For the term $I_{1}''$, by (\ref{eq:|g|}), (\ref{eq:gij}), (\ref{eq:gin})
and by definition of $V_{\delta,q}$ and $v_{q}$ we have
\begin{align}
I_{1}'' & =\delta\int_{M}\nabla W_{\delta,q}\nabla V_{\delta,q}d\mu_{g}=\delta\int_{\mathbb{R}_{+}^{n}}g^{ab}(\delta y)\frac{\partial}{\partial y_{a}}(U(y)\chi(\delta y))\frac{\partial}{\partial y_{b}}(v_{q}(y)\chi(\delta y))|g(\delta y)|^{1/2}dy\nonumber \\
 & =\delta\int_{\mathbb{R}_{+}^{n}}\nabla U\nabla v_{q}dy+2\delta^{2}h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial U}{\partial z_{i}}\frac{\partial v_{q}}{\partial z_{j}}dy+o(\delta^{2})\nonumber \\
 & =2\delta^{2}h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial U}{\partial z_{i}}\frac{\partial v_{q}}{\partial z_{j}}dy+o(\delta^{2})\label{eq:sviluppo5}
\end{align}
in fact
\begin{align*}
\int_{\mathbb{R}_{+}^{n}}\nabla U\nabla v_{q}dy & =-\int_{\mathbb{R}_{+}^{n}}U\Delta v_{q}dy-\int_{\partial\mathbb{R}_{+}^{n}}U(z,0)\frac{\partial v_{q}}{\partial t}dz\\
 & =2h_{ij}\int_{\mathbb{R}_{+}^{n}}Ut\frac{\partial^{2}U}{\partial z_{i}\partial z_{j}}+n\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{n}{n-2}}(z,0)v_{q}dz=0
\end{align*}
since the first term is zero by symmetry, using that $h_{ii}=0$,
and the second term is zero by (\ref{eq:vqdef}) and (\ref{eq:Uvq}).
For the term $I_{1}'''$, immediately we have
\begin{equation}
I_{1}'''=\frac{\delta^{2}}{2}\int_{M}|\nabla V_{\delta,q}|^{2}d\mu_{g}=\frac{\delta^{2}}{2}\int_{\mathbb{R}_{+}^{n}}|\nabla v_{q}|^{2}dzdt+o(\delta^{2}),\label{eq:sviluppo6}
\end{equation}
so
\begin{equation}
I_{1}''+I_{1}'''=\delta^{2}2h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial U}{\partial z_{i}}\frac{\partial v_{q}}{\partial z_{j}}dzdt+\frac{\delta^{2}}{2}\int_{\mathbb{R}_{+}^{n}}|\nabla v_{q}|^{2}dzdt+o(\delta^{2})\label{eq:sviluppo7}
\end{equation}
For the term $I_{4}$, by (\ref{eq:Uvq}) and (\ref{eq:|g|}), and
recalling that $y=(z,t)$ we have
\begin{align}
I_{4}= & -\frac{(n-2)^{2}}{2(n-1)}\int_{\partial\mathbb{R}_{+}^{n}}\left[\left(\left(U+\delta v_{q}\right)^{+}\right)^{\frac{2(n-1)}{n-2}}-U^{\frac{2(n-1)}{n-2}}\right]|g(\delta z,0)|^{\frac{1}{2}}dz+o(\delta^{2})\nonumber \\
= & -\delta(n-2)\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{n}{n-2}}v_{q}dz-\delta^{2}\frac n2\int_{\partial\mathbb{R}_{+}^{n}}\left(\left(U+\delta v_{q}\right)^{+}\right)^{\frac{2}{n-2}}v_{q}^{2}dz+o(\delta^{2})\nonumber \\
= & -\delta^{2}\frac n2\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}v_{q}^{2}dz+o(\delta^{2}).\label{eq:sviluppo8}
\end{align}
At this point we observe that
\begin{equation}
2h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial U}{\partial z_{i}}\frac{\partial v_{q}}{\partial z_{j}}dzdt-n\int_{\mathbb{R}^{n-1}}U^{\frac{2}{n-2}}v_{q}^{2}dz=-\int_{\mathbb{R}_{+}^{n}}|\nabla v_{q}|^{2}dzdt\label{eq:vqriduzione}
\end{equation}
in fact, by (\ref{eq:vqdef}) we get
\begin{align}
2h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial U}{\partial z_{i}}\frac{\partial v_{q}}{\partial z_{j}}dzdt & =-2h_{ij}(q)\int_{\mathbb{R}_{+}^{n}}t\frac{\partial^{2}U}{\partial z_{j}\partial z_{i}}v_{q}dzdt=\int_{\mathbb{R}_{+}^{n}}\left(\Delta v_{q}\right)v_{q}dzdt\nonumber \\
& =-\int_{\mathbb{R}_{+}^{n}}|\nabla v_{q}|^{2}dzdt+\int_{\partial\mathbb{R}_{+}^{n}}v_{q}\frac{\partial v_{q}}{\partial\nu}dz\nonumber \\
& =-\int_{\mathbb{R}_{+}^{n}}|\nabla v_{q}|^{2}dzdt+n\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}v_{q}^{2}dz.\label{eq:sviluppo9}
\end{align}
Hence by (\ref{eq:sviluppo7}), (\ref{eq:sviluppo8}), (\ref{eq:sviluppo9}) and \eqref{eq:vqdef}
it holds
\begin{equation}\begin{aligned}
I_{1}''+I_{1}'''+I_{4}&= \delta^{2} \left(-\frac12\int_{\mathbb{R}_{+}^{n}}|\nabla v_{q}|^{2}dzdt+\frac n2\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}v_{q}^{2}dz\right) +o(\delta^{2})\\
&= \frac12\delta^{2} \int_{\mathbb{R}_{+}^{n}}\Delta v_{q}v_qdzdt+o(\delta^{2})\end{aligned}\label{eq:sviluppo10}
\end{equation}
In light of (\ref{eq:sviluppo1}), (\ref{eq:sviluppo2}), (\ref{eq:sviluppo3}),
(\ref{eq:sviluppo4}), (\ref{eq:sviluppo10}), finally we get
\begin{multline*}
J_{\varepsilon}(W_{\delta,q}+\delta V_{\delta,q})=\frac{(n-2)(n-3)}{2(n-1)^{2}}\omega_{n-1}I_{n-1}^{n}+\varepsilon\delta\gamma(q)\frac{n-2}{n-1}\omega_{n-1}I_{n-1}^{n}\\
+\frac12\delta^{2} \int_{\mathbb{R}_{+}^{n}}\Delta v_{q}v_qdzdt+\delta^{2}a(q)\frac{n-2}{(n-1)(n-4)}\omega_{n-1}I_{n-1}^{n}\\
-\delta^{2}\frac{(n-2)^{2}}{4(n-1)^{2}(n-4)}\omega_{n-1}I_{n-1}^{n}\left[2\ric(q)+2\frac{n-4}{n-2}\|\pi(q)\|^{2}+\bar{R}_{ii}(q)\right]+o(\delta^{2})
\end{multline*}
Now, we choose $\delta=\lambda\varepsilon$, with $\lambda\in[\alpha,\beta]$
for some positive $\alpha,\beta.$ Recalling that $a=\frac{n-2}{4(n-1)}R_{g}$
and that $R_{g}(q)=2\ric(q)+\bar{R}_{ii}(q)+\|\pi(q)\|^{2}$ (see
\cite{E92}), we obtain the claimed expansion.
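Indeed, with $\delta=\lambda\varepsilon$ the term $\varepsilon\delta\gamma(q)\frac{n-2}{n-1}\omega_{n-1}I_{n-1}^{n}$ becomes $\varepsilon^{2}\lambda B\gamma(q)$, while, after substituting $a(q)$, the coefficient of $\delta^{2}$ reduces to
\begin{multline*}
\frac{1}{2}\int_{\mathbb{R}_{+}^{n}}\Delta v_{q}v_{q}dzdt+\frac{(n-2)^{2}\omega_{n-1}I_{n-1}^{n}}{4(n-1)^{2}(n-4)}\left[\left(2\ric(q)+\bar{R}_{ii}(q)+\|\pi(q)\|^{2}\right)\right.\\
\left.-\left(2\ric(q)+2\frac{n-4}{n-2}\|\pi(q)\|^{2}+\bar{R}_{ii}(q)\right)\right]=\frac{1}{2}\int_{\mathbb{R}_{+}^{n}}\Delta v_{q}v_{q}dzdt-\frac{(n-6)(n-2)\omega_{n-1}I_{n-1}^{n}}{4(n-1)^{2}(n-4)}\|\pi(q)\|^{2},
\end{multline*}
which is exactly $\varphi(q)$, since $1-\frac{2(n-4)}{n-2}=-\frac{n-6}{n-2}$.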
\end{proof}
\section{Proof of Theorem \ref{thm:main}}\label{quattro}
\begin{lem}
\label{lem:punticritici}If $(\bar{\lambda},\bar{q})\in(0,+\infty)\times\partial M$
is a critical point for the reduced functional
\[
I_{\varepsilon}(\lambda,q):=J_{\varepsilon}(W_{\varepsilon\lambda,q}+\varepsilon\lambda V_{\varepsilon\lambda,q}+\Phi_{\varepsilon\lambda,q})
\]
then the function $W_{\varepsilon\bar{\lambda},\bar{q}}+\varepsilon\bar{\lambda}V_{\varepsilon\bar{\lambda},\bar{q}}+\Phi_{\varepsilon\bar{\lambda},\bar{q}}$
is a solution of (\ref{eq:P}). Here $\Phi_{\varepsilon\lambda,q}=\Phi_{\varepsilon,\lambda\varepsilon,q}$
is defined in Proposition \ref{prop:EsistenzaPhi}.\end{lem}
\begin{proof}
Set $q=q(y)=\psi_{\bar{q}}^{\partial}(y)$. Since $(\bar{\lambda},\bar{q})$
is a critical point of $I_{\varepsilon}(\lambda,q)$, we have,
for $h=1,\dots,n-1$,
\begin{align*}
0= & \left.\frac{\partial}{\partial y_{h}}I_{\varepsilon}(\bar{\lambda},q(y))\right|_{y=0}\\
= & \langle\!\langle W_{\varepsilon\bar{\lambda},q(y)}+\varepsilon\bar{\lambda}V_{\varepsilon\bar{\lambda},q(y)}+\Phi_{\varepsilon\bar{\lambda},q(y)}-i^{*}\left(f(W_{\varepsilon\bar{\lambda},q(y)}+\varepsilon\bar{\lambda}V_{\varepsilon\bar{\lambda},q(y)}+\Phi_{\varepsilon\bar{\lambda},q(y)})\right)\\
& -\varepsilon\gamma(W_{\varepsilon\bar{\lambda},q(y)}+\varepsilon\bar{\lambda}V_{\varepsilon\bar{\lambda},q(y)}+\Phi_{\varepsilon\bar{\lambda},q(y)}),\left.\frac{\partial}{\partial y_{h}}(W_{\varepsilon\bar{\lambda},q(y)}+\varepsilon\bar{\lambda}V_{\varepsilon\bar{\lambda},q(y)}+\Phi_{\varepsilon\bar{\lambda},q(y)})\rangle\!\rangle_{H}\right|_{y=0}\\
= & \sum_{i=1}^{n}c_{\varepsilon}^{i}\left.\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\frac{\partial}{\partial y_{h}}(W_{\varepsilon\bar{\lambda},q(y)}+\varepsilon\bar{\lambda}V_{\varepsilon\bar{\lambda},q(y)}+\Phi_{\varepsilon\bar{\lambda},q(y)})\rangle\!\rangle_{H}\right|_{y=0}\\
= & \sum_{i=1}^{n}c_{\varepsilon}^{i}\left.\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\frac{\partial}{\partial y_{h}}W_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H}\right|_{y=0}+\varepsilon\bar{\lambda}\sum_{i=1}^{n}c_{\varepsilon}^{i}\left.\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\frac{\partial}{\partial y_{h}}V_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H}\right|_{y=0}\\
 & -\sum_{i=1}^{n}c_{\varepsilon}^{i}\left.\langle\!\langle\frac{\partial}{\partial y_{h}}Z_{\varepsilon\bar{\lambda},q(y)}^{i},\Phi_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H}\right|_{y=0}
\end{align*}
using that $\Phi_{\varepsilon\bar{\lambda},q(y)}$ is a solution of
(\ref{eq:P-Kort}) and that
\[
\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\frac{\partial}{\partial y_{h}}\Phi_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H}=-\langle\!\langle\frac{\partial}{\partial y_{h}}Z_{\varepsilon\bar{\lambda},q(y)}^{i},\Phi_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H},
\]
which follows by differentiating the identity $\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\Phi_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H}=0$, valid since $\Phi_{\varepsilon\bar{\lambda},q(y)}\in K_{\varepsilon\bar{\lambda},q(y)}^{\bot}$
for any $y$.
Arguing as in Lemma 6.1 and Lemma 6.2 of \cite{MP09} we have
\begin{eqnarray*}
\left\Vert \frac{\partial}{\partial y_{h}}Z_{\varepsilon\bar{\lambda},q(y)}^{i}\right\Vert _{H}=O\left(\frac{1}{\varepsilon}\right) & & \left\Vert \frac{\partial}{\partial y_{h}}W_{\varepsilon\bar{\lambda},q(y)}\right\Vert _{H}=O\left(\frac{1}{\varepsilon}\right)\\
\left\Vert \frac{\partial}{\partial y_{h}}V_{\varepsilon\bar{\lambda},q(y)}\right\Vert _{H}=O\left(\frac{1}{\varepsilon}\right)
\end{eqnarray*}
so we get
\begin{align*}
\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\frac{\partial}{\partial y_{h}}W_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H}= & \frac{1}{\lambda\varepsilon}\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},Z_{\varepsilon\bar{\lambda},q(y)}^{h}\rangle\!\rangle_{H}+o(1)=\frac{\delta_{ih}}{\lambda\varepsilon}+o(1)\\
\langle\!\langle Z_{\varepsilon\bar{\lambda},q(y)}^{i},\frac{\partial}{\partial y_{h}}V_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H} & \le\left\Vert Z_{\varepsilon\bar{\lambda},q(y)}^{i}\right\Vert _{H}\left\Vert \frac{\partial}{\partial y_{h}}V_{\varepsilon\bar{\lambda},q(y)}\right\Vert _{H}=O\left(\frac{1}{\varepsilon}\right)\\
\langle\!\langle\frac{\partial}{\partial y_{h}}Z_{\varepsilon\bar{\lambda},q(y)}^{i},\Phi_{\varepsilon\bar{\lambda},q(y)}\rangle\!\rangle_{H} & \le\left\Vert \frac{\partial}{\partial y_{h}}Z_{\varepsilon\bar{\lambda},q(y)}^{i}\right\Vert _{H}\left\Vert \Phi_{\varepsilon\bar{\lambda},q(y)}\right\Vert _{H}=o(1).
\end{align*}
We conclude that
\[
0=\frac{1}{\lambda\varepsilon}\sum_{i=1}^{n}c_{\varepsilon}^{i}\left(\delta_{ih}+O(\varepsilon)\right)
\]
and so, for $\varepsilon$ small, $c_{\varepsilon}^{i}=0$ for $i=1,\dots,n$.
Analogously we proceed for $\left.\frac{\partial}{\partial\lambda}I_{\varepsilon}(\lambda,\bar{q})\right|_{\lambda=\bar{\lambda}}$.
\end{proof}
For the sake of completeness, we recall the definition of $C^{0}$-stable
critical point before proving Theorem \ref{thm:main}.
\begin{defn}
Let $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ be a $C^{1}$ function
and let $K=\left\{ \xi\in\mathbb{R}^{n}\ :\ \nabla f(\xi)=0\right\} $.
We say that $\xi_{0}\in\mathbb{R}^{n}$ is a $C^{0}$-stable critical
point if $\xi_{0}\in K$ and there exist a neighborhood $\Omega$ of
$\xi_{0}$ with $\partial\Omega\cap K=\emptyset$ and an $\eta>0$
such that any $g:\mathbb{R}^{n}\rightarrow\mathbb{R}$ of class
$C^{1}$ with $\|g-f\|_{C^{0}(\bar{\Omega})}\le\eta$ has a critical
point in $\Omega$.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let us define
\begin{equation}\label{ridotta}
G(\lambda,q)=\lambda B\gamma(q)+\lambda^{2}\varphi(q).
\end{equation}
If we find a $C^{0}$-stable critical point for $G(\lambda,q)$ then
we find a critical point for $I_{\varepsilon}(\lambda,q):=J_{\varepsilon}(W_{\lambda\varepsilon,q}+\lambda\varepsilon V_{\lambda\varepsilon,q}+\Phi)$
for $\varepsilon$ small enough (see Lemma \ref{lem:JWpiuPhi} and
Proposition \ref{lem:expJeps}), hence a solution of Problem (\ref{eq:P}),
by Lemma \ref{lem:punticritici}.
Since we assumed the trace-free second fundamental form to be nonzero
everywhere, we have $\|\pi\|^{2}>0$, so $\varphi(q)<0$.
Also, we assumed $\gamma(q)$ to be strictly positive on $\partial M$,
so there exists a maximum point $(\lambda_{0},q_{0})$ of $G(\lambda,q)$
with $\lambda_{0}>0$. Moreover, $(\lambda_{0},q_{0})$ is a $C^{0}$-stable
critical point of $G(\lambda,q)$. Then, for any sufficiently small
$\varepsilon>0$ there exists a critical point $(\lambda_{\varepsilon},q_{\varepsilon})$
of $I_{\varepsilon}(\lambda,q)$, and this completes
the proof of our main result: indeed, we have found a sequence $\lambda_{\varepsilon}$
bounded away from zero, a sequence of points $q_{\varepsilon}\in\partial M$,
and a sequence of positive functions
\[
u_{\varepsilon}=W_{\lambda_{\varepsilon}\varepsilon,q_{\varepsilon}}+\lambda_{\varepsilon}\varepsilon V_{\lambda_{\varepsilon}\varepsilon,q_{\varepsilon}}+\Phi
\]
which are solutions of (\ref{eq:P}) with $q_{\varepsilon}\rightarrow q_{0}$. \end{proof}
\begin{rem}
We give another example of a function $\gamma(q)$ such that problem
(\ref{eq:P}) admits a positive solution. Let $q_{0}\in\partial M$
be a maximum point for $\varphi$; such a point exists since $\partial M$
is compact. Now choose $\gamma\in C^{2}(\partial M)$ such that $\gamma$
has a positive local maximum at $q_{0}$. Then the pair $(\lambda_{0},q_{0})=\left(-\frac{B\gamma(q_{0})}{2\varphi(q_{0})},q_{0}\right)$
is a $C^{0}$-stable critical point for $G(\lambda,q)$.
\end{rem}
In fact, we have
\[
\nabla_{\lambda,q}G=(B\gamma(q)+2\lambda\varphi(q),\lambda B\nabla_{q}\gamma(q)+\lambda^{2}\nabla_{q}\varphi(q))
\]
which vanishes for $(\lambda_{0},q_{0})=\left(-\frac{B\gamma(q_{0})}{2\varphi(q_{0})},q_{0}\right)$.
Moreover the Hessian matrix is
\[
G_{\lambda,q}^{''}\left(-\frac{B\gamma(q_{0})}{2\varphi(q_{0})},q_{0}\right)=\left(\begin{array}{cc}
2\varphi(q_{0}) & 0\\
0 & -\frac{B^{2}\gamma(q_{0})}{2\varphi(q_{0})}\gamma''_{q}(q_{0})+\frac{B^{2}\gamma^{2}(q_{0})}{4\varphi^{2}(q_{0})}\varphi''_{q}(q_{0})
\end{array}\right)
\]
which is negative definite. Thus $(\lambda_{0},q_{0})=\left(-\frac{B\gamma(q_{0})}{2\varphi(q_{0})},q_{0}\right)$
is a $C^{0}$-stable maximum point of $G(\lambda,q)$.
\section{Appendix}
\begin{proof}[Proof of Lemma \ref{prop:L}]
We argue by contradiction. We suppose that there exist two sequences
of real numbers $\varepsilon_{m}\rightarrow0$, $\lambda_{m}\in[a,b]$,
a sequence of points $q_{m}\in\partial M$, and a sequence of functions
$\phi_{\varepsilon_{m}\lambda_{m},q_{m}}\in K_{\varepsilon_{m}\lambda_{m},q_{m}}^{\bot}$
such that
\[
\|\phi_{\varepsilon_{m}\lambda_{m},q_{m}}\|_{H}=1\text{ and }\|L_{\varepsilon_{m}\lambda_{m},q_{m}}(\phi_{\varepsilon_{m}\lambda_{m},q_{m}})\|_{H}\rightarrow0\text{ as }m\rightarrow+\infty.
\]
For the sake of simplicity, we set $\delta_{m}=\varepsilon_{m}\lambda_{m}$
and we define
\[
\tilde{\phi}_{m}:=\delta_{m}^{\frac{n-2}{2}}\phi_{\delta_{m},q_{m}}(\psi_{q_{m}}^{\partial}(\delta_{m}y))\chi(\delta_{m}y)\text{ for }y=(z,t)\in\mathbb{R}_{+}^{n},\text{ with }z\in\mathbb{R}^{n-1}\text{ and }t\ge0.
\]
Since $\|\phi_{\varepsilon_{m}\lambda_{m},q_{m}}\|_{H}=1$, by change
of variables we easily get that $\left\{ \tilde{\phi}_{m}\right\} _{m}$
is bounded in $D^{1,2}(\mathbb{R}_{+}^{n})$ (but not in $H^{1}(\mathbb{R}_{+}^{n})$).
Thus there exists $\tilde{\phi}\in D^{1,2}(\mathbb{R}_{+}^{n})$ such
that $\tilde{\phi}_{m}\rightharpoonup\tilde{\phi}$ weakly in $D^{1,2}(\mathbb{R}_{+}^{n})$,
in $L^{\frac{2n}{n-2}}(\mathbb{R}_{+}^{n})$ and in $L^{\frac{2(n-1)}{n-2}}(\partial\mathbb{R}_{+}^{n})$,
strongly in $L_{\text{loc}}^{s}(\partial\mathbb{R}_{+}^{n})$ for
$s<\frac{2(n-1)}{n-2}$, and almost everywhere.
Since $\phi_{\delta_{m},q_{m}}\in K_{\delta_{m},q_{m}}^{\bot}$, and
taking into account (\ref{eq:linearizzato}), we get, for $i=1,\dots,n$,
\begin{equation}
0=\int_{\mathbb{R}_{+}^{n}}\nabla\tilde{\phi}\nabla j_{i}dzdt=n\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}(z,0)j_{i}(z,0)\tilde{\phi}(z,0)dz.\label{eq:L3}
\end{equation}
Indeed, by change of variables we have
\begin{align*}
0= & \left\langle \left\langle \phi_{\delta_{m},q_{m}},Z_{\delta_{m},q_{m}}^{i}\right\rangle \right\rangle _{H}=\int_{M}\left(\nabla_{g}\phi_{\delta_{m},q_{m}}\nabla_{g}Z_{\delta_{m},q_{m}}^{i}+a\phi_{\delta_{m},q_{m}}Z_{\delta_{m},q_{m}}^{i}\right)d\mu_{g}\\
= & \int_{\mathbb{R}_{+}^{n}}\delta_{m}^{\frac{n-2}{2}}\frac{\partial}{\partial\eta_{\alpha}}j_{i}(y)\frac{\partial}{\partial\eta_{\alpha}}\phi_{\delta_{m},q_{m}}(\psi_{q_{m}}^{\partial}(\delta_{m}y))dy\\
& +\int_{\mathbb{R}_{+}^{n}}\delta_{m}^{\frac{n+2}{2}}a(\psi_{q_{m}}^{\partial}(\delta_{m}y))j_{i}(y)\phi_{\delta_{m},q_{m}}(\psi_{q_{m}}^{\partial}(\delta_{m}y))dy+o(1)\\
= & \int_{\mathbb{R}_{+}^{n}}\nabla j_{i}(y)\nabla\tilde{\phi}_{m}(y)+\delta_{m}^{2}a(q_{m})j_{i}(y)\tilde{\phi}_{m}(y)dy+o(1)\\
= & \int_{\mathbb{R}_{+}^{n}}\nabla j_{i}(y)\nabla\tilde{\phi}(y)dy+o(1).
\end{align*}
By definition of $L_{\delta_{m},q_{m}}$ we have
\begin{multline}
\phi_{\delta_{m},q_{m}}-i^{*}\left(f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\right)-L_{\delta_{m},q_{m}}\left(\phi_{\delta_{m},q_{m}}\right)\\
=\sum_{i=1}^{n}c_{m}^{i}Z_{\delta_{m},q_{m}}^{i}.\label{eq:L4}
\end{multline}
We want to prove that, for all $i=1,\dots,n$, $c_{m}^{i}\rightarrow0$
as $m\rightarrow\infty$. Multiplying equation (\ref{eq:L4}) by
$Z_{\delta_{m},q_{m}}^{k}$ we obtain, by definition (\ref{eq:istella})
of $i^{*}$,
\begin{align*}
\sum_{i=1}^{n}c_{m}^{i}\left\langle \left\langle Z_{\delta_{m},q_{m}}^{i},Z_{\delta_{m},q_{m}}^{k}\right\rangle \right\rangle _{H}= & \left\langle \left\langle i^{*}\left(f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\right),Z_{\delta_{m},q_{m}}^{k}\right\rangle \right\rangle _{H}\\
= & \int_{\partial M}f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]Z_{\delta_{m},q_{m}}^{k}d\sigma.
\end{align*}
Now
\begin{multline*}
\int_{\partial M}f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]Z_{\delta_{m},q_{m}}^{k}d\sigma\\
=n\int_{\partial M}\left((W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})^{+}\right)^{\frac{2}{n-2}}\phi_{\delta_{m},q_{m}}Z_{\delta_{m},q_{m}}^{k}d\sigma\\
=n\int_{\partial\mathbb{R}_{+}^{n}}\left((U+\delta_{m}v_{q_{m}})^{+}\right)^{\frac{2}{n-2}}\tilde{\phi}_{m}j_{k}dz+o(1)=n\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}\tilde{\phi}j_{k}dz+o(1)=o(1)
\end{multline*}
since $\tilde{\phi}_{m}\rightharpoonup\tilde{\phi}$ weakly in $L^{\frac{2(n-1)}{n-2}}(\partial\mathbb{R}_{+}^{n})$,
$\|v_{q_{m}}\|_{L^{\infty}}$ is bounded independently of $q_{m}$
by (\ref{eq:gradvq}), and by equation (\ref{eq:L3}). At this point,
since
\[
\left\langle \left\langle Z_{\delta_{m},q_{m}}^{i},Z_{\delta_{m},q_{m}}^{j}\right\rangle \right\rangle _{H}=C\delta_{ij}+o(1),
\]
we conclude that $c_{m}^{i}\rightarrow0$ as $m\rightarrow\infty$
for each $i=1,\dots,n$. By (\ref{eq:L4}), and recalling that $\|L_{\varepsilon_{m}\lambda_{m},q_{m}}(\phi_{\varepsilon_{m}\lambda_{m},q_{m}})\|_{H}\rightarrow0$,
this implies
\begin{multline}
\left\Vert \phi_{\delta_{m},q_{m}}-i^{*}\left(f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\right)\right\Vert _{H}\\
\le\sum_{i=1}^{n}\vert c_{m}^{i}\vert\,\|Z_{\delta_{m},q_{m}}^{i}\|_{H}+o(1)=o(1).\label{eq:L5}
\end{multline}
Now, choose a smooth function $\varphi\in C_{0}^{\infty}(\mathbb{R}_{+}^{n})$
and define
\[
\varphi_{m}(x)=\frac{1}{\delta_{m}^{\frac{n-2}{2}}}\varphi\left(\frac{1}{\delta_{m}}\left(\psi_{q_{m}}^{\partial}\right)^{-1}(x)\right)\chi\left(\left(\psi_{q_{m}}^{\partial}\right)^{-1}(x)\right)\text{ for }x\in M.
\]
We have that $\|\varphi_{m}\|_{H}$ is bounded and, by (\ref{eq:L5}),
that
\begin{align*}
\left\langle \left\langle \phi_{\delta_{m},q_{m}},\varphi_{m}\right\rangle \right\rangle _{H}= & \int_{\partial M}f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\varphi_{m}d\sigma\\
& +\left\langle \left\langle \phi_{\delta_{m},q_{m}}-i^{*}\left(f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\right),\varphi_{m}\right\rangle \right\rangle _{H}\\
= & \int_{\partial M}f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\varphi_{m}d\sigma+o(1)\\
= & n\int_{\partial\mathbb{R}_{+}^{n}}\left((U+\delta_{m}v_{q_{m}})^{+}\right)^{\frac{2}{n-2}}\tilde{\phi}_{m}\varphi dz+o(1)\\
= & n\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}\tilde{\phi}\varphi dz+o(1),
\end{align*}
by the strong $L_{\text{loc}}^{t}(\partial\mathbb{R}_{+}^{n})$ convergence
of $\tilde{\phi}_{m}$ for $t<\frac{2(n-1)}{n-2}$. On the other hand
\[
\left\langle \left\langle \phi_{\delta_{m},q_{m}},\varphi_{m}\right\rangle \right\rangle _{H}=\int_{\mathbb{R}_{+}^{n}}\nabla\tilde{\phi}\nabla\varphi\,dzdt+o(1),
\]
so $\tilde{\phi}$ is a weak solution of (\ref{eq:linearizzato})
and we conclude that
\[
\tilde{\phi}\in\text{Span}\left\{ j_{1},\dots,j_{n}\right\} .
\]
This, combined with (\ref{eq:L3}), gives $\tilde{\phi}=0$. Proceeding
as before, we have
\begin{align*}
\left\langle \left\langle \phi_{\delta_{m},q_{m}},\phi_{\delta_{m},q_{m}}\right\rangle \right\rangle _{H}= & \int_{\partial M}f'(W_{\delta_{m},q_{m}}+\delta_{m}V_{\delta_{m},q_{m}})[\phi_{\delta_{m},q_{m}}]\phi_{\delta_{m},q_{m}}d\sigma+o(1)\\
= & n\int_{\partial\mathbb{R}_{+}^{n}}\left((U+\delta_{m}v_{q_{m}})^{+}\right)^{\frac{2}{n-2}}\tilde{\phi}_{m}^{2}dz+o(1)\\
= & n\int_{\partial\mathbb{R}_{+}^{n}}U^{\frac{2}{n-2}}\tilde{\phi}_{m}^{2}dz+o(1)=o(1)
\end{align*}
since $\tilde{\phi}_{m}^{2}\rightharpoonup\tilde{\phi}^{2}=0$ weakly in $L^{\frac{n-1}{n-2}}(\partial\mathbb{R}_{+}^{n})$
and $U^{\frac{2}{n-2}}\in L^{n-1}(\partial\mathbb{R}_{+}^{n})$.
This gives $\left\Vert \phi_{\delta_{m},q_{m}}\right\Vert _{H}\rightarrow0$,
which contradicts $\|\phi_{\delta_{m},q_{m}}\|_{H}=1$.
\end{proof}
We have the following relations (see \cite[Lemma 9.4 and Lemma 9.5]{A3}).
\begin{rem}
\label{lem:I-a-m}It holds that
\begin{align*}
I_{m}^{\alpha}:=\int_{0}^{\infty}\frac{\rho^{\alpha}}{(1+\rho^{2})^{m}}d\rho=\frac{2m}{\alpha+1}I_{m+1}^{\alpha+2} & \text{ for }\alpha+1<2m\\
I_{m}^{\alpha}=\frac{2m}{2m-\alpha-1}I_{m+1}^{\alpha} & \text{ for }\alpha+1<2m\\
I_{m}^{\alpha}=\frac{2m-\alpha-3}{\alpha+1}I_{m}^{\alpha+2} & \text{ for }\alpha+3<2m.
\end{align*}
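The first relation follows from an integration by parts (the boundary
terms vanish precisely when $\alpha+1<2m$):
\[
I_{m}^{\alpha}=\int_{0}^{\infty}\frac{(\rho^{\alpha+1})'}{\alpha+1}\frac{d\rho}{(1+\rho^{2})^{m}}=\frac{2m}{\alpha+1}\int_{0}^{\infty}\frac{\rho^{\alpha+2}}{(1+\rho^{2})^{m+1}}d\rho=\frac{2m}{\alpha+1}I_{m+1}^{\alpha+2},
\]
and the other two are obtained by combining this one with the identity
$I_{m+1}^{\alpha}=I_{m}^{\alpha}-I_{m+1}^{\alpha+2}$.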
In particular we have $I_{n}^{n}=\frac{n-3}{2(n-1)}I_{n-1}^{n}$,
$I_{n-1}^{n-2}=\frac{n-3}{n-1}I_{n-1}^{n}$, $I_{n-2}^{n-2}=\frac{2(n-2)}{n-1}I_{n-1}^{n}$.
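For instance, the first of these special values follows from the second
relation with $m=n-1$ and $\alpha=n$:
\[
I_{n-1}^{n}=\frac{2(n-1)}{2(n-1)-n-1}I_{n}^{n}=\frac{2(n-1)}{n-3}I_{n}^{n},\qquad\text{i.e. }I_{n}^{n}=\frac{n-3}{2(n-1)}I_{n-1}^{n}.
\]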
Moreover, for $m>k+1$, $m,k\in\mathbb{N}$, we have
\[
\int_{0}^{\infty}\frac{t^{k}}{(1+t)^{m}}dt=\frac{k!}{(m-1)(m-2)\cdots(m-k-1)}
\]
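This is a standard Beta function identity; indeed,
\[
\int_{0}^{\infty}\frac{t^{k}}{(1+t)^{m}}dt=B(k+1,m-k-1)=\frac{\Gamma(k+1)\Gamma(m-k-1)}{\Gamma(m)}=\frac{k!}{(m-1)(m-2)\cdots(m-k-1)}.
\]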
By explicit computation, using the previous formulas, we then obtain:
\[
\int_{\mathbb{R}_{+}^{n}}\frac{dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n-1}}=\frac{\omega_{n-1}I_{n-1}^{n-2}}{(n-2)}
\]
\[
\int_{\mathbb{R}_{+}^{n}}\frac{|z|^{2}t^{2}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n}}=\frac{2\omega_{n-1}I_{n}^{n}}{(n-2)(n-3)(n-4)}
\]
\[
\int_{\mathbb{R}_{+}^{n}}\frac{t^{2}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n-1}}=\frac{2\omega_{n-1}I_{n-1}^{n-2}}{(n-2)(n-3)(n-4)}
\]
\[
\int_{\mathbb{R}_{+}^{n}}\frac{|z|^{2}dzdt}{\left[(1+t)^{2}+|z|^{2}\right]^{n-1}}=\frac{\omega_{n-1}I_{n-1}^{n}}{(n-4)}.
\]
\end{rem}
\section{Introduction}
Because it models Hawking radiation as the very physical process of quantum fields tunneling through a barrier, an approach called the tunneling method has recently gained popularity in the field of black hole thermodynamics. This method also offers several advantages which go beyond merely providing intuition; indeed, because it considers Hawking radiation to be a purely local phenomenon, it can be used to study spacetimes with multiple horizons, such as black holes embedded in de Sitter spacetimes. Many spacetimes have thus been explored in this way: Kerr-Newman \cite{Jiang2006,Zhang2006}, Black Rings \cite{Zhao2006}, Taub-NUT \cite{Kerner2006}, AdS black holes \cite{Hemming2001}, BTZ \cite{Agheben2005,BTZ,Modak2009}, Vaidya \cite{Vaidya}, dynamical black holes \cite{Cri2007}, Kerr-G\"{o}del \cite{Kerner2007}, de Sitter horizons \cite{dS}, constant curvature black holes \cite{Yale2010}, as well as generic weakly isolated horizons \cite{Wu2007}.
\\\\
Moreover, this method is especially powerful in that it allows quantum fields to be considered explicitly. As such, the Hawking radiation of scalars to every order in $\hbar$ \cite{Majhi1,Majhi4}, spin-$1/2$ fermions to first order \cite{Kerner2008,Kerner2008b,Li2008,Zhang2006,Jiang2006} and later to every order \cite{Majhi2}, higher-spin fermions \cite{Yale2009,Majhi3}, and $U(1)$ gauge bosons to second order \cite{Majhi3} has been investigated. Expanding on these results, we will consider, to every order, the tunneling of scalars, fermions of any spin, and arbitrary gauge bosons from a generic near-horizon black hole metric. Some of the results we will present are not technically new; for example, the scalar field was calculated exactly in \cite{Majhi1}. Nevertheless, we will include them not only for completeness, but also to provide a more thorough derivation and to interpret the results in a slightly different way.
\\\\
The tunneling method comes in two flavours. The first originates from the works of Volovik \cite{Volovik} and later of Kraus and Wilczek \cite{NullGeo}, who analyzed the radiation process semi-classically by considering modes near the event horizon; the idea was later generalized as a tunneling process by Parikh and Wilczek \cite{PW}. Near a Schwarzschild horizon, radial null geodesics obey $\frac{dr}{dt} = \dot{r} = \pm (1 - \frac{2M}{r} )$, where the $\pm$ denotes the outgoing and incoming geodesics. The contribution to the imaginary part of the action comes from two parts: a temporal contribution $2E \text{Im} \Delta t = 4 \pi M E$ from the discontinuity of the time coordinate at the horizon \cite{Akhmedov2008b}, and a spatial contribution $\text{Im} \oint p_r dr$, where we integrate along an infinitesimal complex path around the pole at the horizon. This closed path integral is to be understood as a normalizing process which subtracts the infalling radiation from the outgoing one. Using the Hamilton equations of motion $p_r = \int_0^E \frac{dH}{\dot{r}}$, we perform this integral to find $\text{Im} I = \text{Im} E \oint \frac{dr}{\dot{r}} = 4 \pi M E$. We then relate the tunneling rate to the action by $\Gamma \propto e^{- \text{Im} I} = e^{-8 \pi M E}$, thus finding the correct Hawking temperature of $T = \frac{1}{8 \pi M}$.
\\\\
The second flavour, which we will use throughout this paper, comes from the works of Padmanabhan and his collaborators \cite{PaddyTun}. This method was initially developed, and later formulated more algorithmically \cite{Agheben2005}, as a means of studying the quantum tunneling of scalar particles through a gravitational barrier. It is rooted in the WKB approximation and consists of solving the Hamilton-Jacobi equations (thus earning it the nickname of Hamilton-Jacobi method) for a quantum field passing through an event horizon. For example, for a massless scalar field $\phi = ae^{\frac{-i}{\hbar}I}$, the equation of motion is the Klein-Gordon equation $\partial_\mu \partial^\mu \phi=0$, whose leading term in $\hbar$ is the Hamilton-Jacobi equation $\partial_\mu I \partial^\mu I + {\cal O}(\hbar)= 0$. This leads to $I = \pm \oint \frac{E dr}{1-2M/r} = 4 \pi i M E$, yielding once more $T = \frac{1}{8 \pi M}$. A goal of this paper is to show that this $\hbar \rightarrow 0$ approximation, which is also made for fermions and bosons, is unnecessary: since we are only interested in near-horizon physics, we can use $g_{tt} \rightarrow 0$ in place of $\hbar \rightarrow 0$ to retrieve the correct Hawking temperature. This is somewhat unexpected since taking this limit is so common in semiclassical treatments of Hawking radiation; it means that we can consider this method to be a gravitational analog to the quantum WKB approximation.
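For instance, in the Schwarzschild example the closed contour integral is evaluated as a residue at the simple pole $r=2M$:
\myeq{ \oint \frac{E \, dr}{1-2M/r} = \oint \frac{E \, r \, dr}{r-2M} = 2 \pi i E \left. r \right|_{r=2M} = 4 \pi i M E. }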
\\\\
It is in that sense that our method calculates the temperature exactly to all orders. We will find that the action for a Klein-Gordon field obeys $\partial_r I= \pm \frac{\partial_t I}{f(r)}$ near the horizon. This implies two important points: first, since $\partial_t I$ is conserved, we can simply integrate this quantity to find the action without ever having assumed $\hbar$ to be small: as such, our method is exact to every order in $\hbar$. Second, this formula implies that if we expand the action $I$ in powers of $\hbar$ as $I = \sum \hbar^i I_i$, then each $I_i$ obeys this same equation: $\partial_r I_i= \pm \frac{\partial_t I_i}{f(r)} = \mp \frac{E_i}{f(r)}$, where the last equality defines the conserved quantity $E_i = -\partial_t I_i$. This then means that $\partial_r I = \mp\frac{E_0}{f(r)} \left( 1 + \sum_{i=1}^\infty \frac{\hbar^i E_i}{E_0} \right) = \partial_r I_0 \left( 1 + \sum_{i=1}^\infty \frac{\hbar^i E_i}{E_0} \right)$.
\\\\
Besides $\hbar$, there exists another parameter which is important to the problem of Hawking radiation: the ratio $\frac{E}{M}$ between the energy of the emitted particle and the mass of the black hole; this controls the amount of back-reaction on the black hole. Early in its development, the tunneling method was used to show that this back-reaction modified the thermal nature of the emitted radiation \cite{PW}. More recently, it was used to study correlations between emitted particles, and may provide a solution to the information puzzle to the lowest order \cite{Zhang2009,Israel2010,Singleton2010}. This area, however, is beyond the scope of the present paper. Thus, even though we will expand to every order in $\hbar$, we will ignore back-reaction entirely.
\\\\
Our calculations will be done using a generic near-horizon line element in Schwarzschild-like coordinates:
\myeq{\label{metric} ds^2 = -f(r) dt^2 + \frac{1}{f(r)}dr^2 + d x^2_\bot,}
where $f(r)$ vanishes at the horizon. This form is quite general and does not restrict us to spherically symmetric spacetimes. For example, in the near-horizon limit at fixed $\theta=\theta_0$, the Kerr metric can be written
\myeq{
ds^2 &= -A(r,\theta) dt^2 + \frac{dr^2}{B(r,\theta)} + C(r,\theta) \left[ d \phi - D(r,\theta)dt \right]^2 + F(r,\theta)d\theta^2 \\
&= -A_r(r_0,\theta_0)(r-r_0) dt^2 + \frac{dr^2}{B_r(r_0,\theta_0)(r-r_0)} + C(r_0,\theta_0) \left[ d \phi - \Omega dt\right]^2,
}
where $\Omega = D(r_0,\theta_0)$ is the angular velocity of the black hole. This line element is of the form $(\ref{metric})$ up to a redefinition of the $r$ and $\phi$ coordinates. An unfortunate side-effect of using such a generic metric is a slight misdefinition of the energy, which we will illustrate here for the Kerr spacetime. From symmetry arguments, we know that the action is of the form $I = -Et + J \phi + W(r,\theta)$. However, the redefinition of the $\phi$ coordinate $\phi = \chi + \Omega t$ near the horizon means that the action is actually $I = -(E-\Omega J)t + J \chi + W(r,\theta)$. Therefore, our energy actually corresponds to $(E - \Omega J)$ in standard coordinates. Moreover, the temperature that we calculate is not redshifted: for an asymptotically flat spacetime, it represents the temperature measured by an observer at infinity.
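Explicitly, substituting $\phi = \chi + \Omega t$ into the symmetric form of the action gives
\myeq{ I = -Et + J(\chi + \Omega t) + W(r,\theta) = -(E - \Omega J)t + J\chi + W(r,\theta), }
which is why the conserved quantity conjugate to $t$ in these coordinates is $E - \Omega J$ rather than $E$.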
\\\\
The purpose of this paper is therefore twofold. First, we will provide a generic treatment of bosons, which have so far only been studied in the Abelian case. Second, we will show that, ignoring back-reaction, terms of higher order in $\hbar$ do not modify the radiation process; in particular, we find that taking the near-horizon limit has the same effect as sending $\hbar \rightarrow 0$. In the next three sections, we will analyze, respectively, Klein-Gordon, Rarita-Schwinger and Non-Abelian Yang-Mills fields radiating from the near-horizon metric $(\ref{metric})$. Then, in Section \ref{sec:temperature}, we calculate the temperature associated with these particles, while, finally, in Section \ref{sec:discussion}, we discuss our conclusions and link our results with similar recent work in the field.
\section{Scalars} \label{sec:scalars}
We begin by considering a massive scalar field $\phi$ which we write $\phi = e^{\frac{-i}{\hbar} I}$. Although this form is based on the WKB approximation, we will not take the $\hbar \rightarrow 0$ limit which usually accompanies this approximation. The scalar field $\phi$ obeys the Klein-Gordon equation:
\myeq{ \label{scalar} 0 &= g^{\mu \nu} \left( -\partial_\mu \partial_\nu \phi + \Gamma^\sigma_{\mu \nu} \partial_\sigma \phi \right) + \frac{m^2}{\hbar^2} \phi \\
&= \frac{i}{\hbar}g^{\mu \nu} \left( \partial_\mu \partial_\nu I - \Gamma^\sigma_{\mu \nu} \partial_\sigma I \right) + \frac{1}{\hbar^2} \left( g^{\mu \nu} \partial_\mu I \partial_\nu I + m^2 \right).
}
Upon reaching this stage, one commonly truncates the equation to leading order in $\hbar$ and takes the small-mass limit to retrieve the Hamilton-Jacobi equation $\partial_\mu I \partial^\mu I = 0$. This truncation, it turns out, is unnecessary, and we can analyze the situation to every order in $\hbar$. Indeed, we begin by solving the above equation by assuming that $\partial_\mu I$ is real; the imaginary contribution to the action will come from integrating this divergent real quantity around a pole at the event horizon. We will ultimately find a solution for $I$ which does solve the entire (complex) equation $(\ref{scalar})$, thus justifying this assumption. Hence, taking the real part of equation $(\ref{scalar})$ and solving for $\partial_r I$, we find
\myeq{ \partial_r I = \pm \sqrt{ \frac{-g^{tt}}{g^{rr}}(\partial_t I)^2 + \frac{-g^{\theta \theta}}{g^{rr}}(\partial_\theta I)^2 + \frac{-g^{\phi \phi}}{g^{rr}}(\partial_\phi I)^2 - \frac{m^2}{g^{rr}} }. }
Because of the near-horizon symmetry, all of $\partial_t I, \partial_\theta I$ and $\partial_\phi I$ are conserved quantities, and therefore finite. Moreover, $g^{tt}$ will diverge, such that only the first term inside the square root will contribute. Thus, we find
\myeq{ \label{scalarFinal} \partial_r I = \pm \sqrt{ \frac{-g^{tt}}{g^{rr}} } \partial_t I .}
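For the metric (\ref{metric}), where $g^{tt} = -1/f(r)$ and $g^{rr} = f(r)$, this is precisely the relation quoted in the introduction:
\myeq{ \partial_r I = \pm \frac{\partial_t I}{f(r)} . }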
As we mentioned earlier, this solution also solves the complex part of equation $(\ref{scalar})$. Indeed, a simple calculation can show that, for $\partial_r I$ defined by $(\ref{scalarFinal})$, we have
\myeq{ g^{\mu \nu} \partial_\mu \partial_\nu I = g^{rr} \partial_r^2 I = \partial_r I \left( g^{rr} \Gamma^r_{rr} + g^{tt} \Gamma^r_{tt} \right) = g^{\mu \nu} \Gamma_{\mu \nu}^\sigma \partial_\sigma I, }
since most Christoffel symbols vanish near the horizon.
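Explicitly, for the metric (\ref{metric}) the only Christoffel symbols entering this computation are
\myeq{ \Gamma^r_{rr} = -\frac{f'(r)}{2f(r)} \hspace{1.cm} \Gamma^r_{tt} = \frac{f(r) f'(r)}{2}, }
so that $g^{rr} \Gamma^r_{rr} = g^{tt} \Gamma^r_{tt} = -\frac{f'(r)}{2}$, and both sides of the last equality reduce to $\mp \frac{f'(r)}{f(r)} \partial_t I$.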
\\\\
This result implies that $I$ will obey the same equation even when we are not taking the limit of $\hbar$ going to zero. Instead, we take the limit of our metric approaching the event horizon; this is physically justified since that is where the tunneling takes place. Although more involved mathematically, we will find similar results for fermions and bosons in the following sections.
\section{Fermions} \label{sec:fermions}
It is intuitive that fermions must be emitted at the same temperature as scalar particles. Indeed, a fermion field of spin $(n+\frac{1}{2})$ is a tensor-valued spinor $\Psi_{\mu_1 \cdots \mu_n a}$ which obeys the Rarita-Schwinger equations. Although these are commonly written, for a spin-$3/2$ field, as
\myeq{\left( \epsilon^{\sigma \nu \rho \mu} \gamma^5 \gamma_\nu \partial_\rho - i m \sigma^{\sigma \mu} \right) \Psi_{\mu a}=0, }
it is, for our purposes, much more enlightening to write them as
\myeq{\left( -i \gamma^\mu D_\mu + m \right) \Psi_{\mu_1 \cdots \mu_n a} &= 0 \\
\gamma^{\mu_1} \Psi_{\mu_1 \cdots \mu_n a} &= 0,
}
where
\myeq{D_\mu = \partial_\mu - \frac{1}{8} \Gamma^\alpha_{\mu \nu}g^{\nu \beta}[\gamma_\alpha,\gamma_\beta];}
this is the form in which they were originally studied \cite{Rarita1941}. In flat space, we can multiply by $(i \gamma^\nu D_\nu + m)$ to notice that each component of $\Psi_{\mu_1 \cdots \mu_n a}$ must obey the Klein-Gordon equation. Additional constraints then relate the components of $\Psi_{\mu_1 \cdots \mu_n a}$ with one another: the Dirac equation relates the Dirac indices $a$, while the other Rarita-Schwinger equations relate the higher-spin Lorentz indices $\mu_1 \cdots \mu_n$. Since we write the field as $\Psi_{\mu_1 \cdots \mu_n a} = a_{\mu_1 \cdots \mu_n a} e^{\frac{-i}{\hbar} I}$ and are interested in the action, these extra relations play no role in the calculation of the Hawking temperature, and therefore fermions must be emitted at the same temperature as scalar particles.
\\\\
The Hawking temperature of fermions has already been calculated to leading order in $\hbar$ \cite{Kerner2008,Li2008,Zhang2006,Jiang2006,Yale2009} and, more recently, to every order \cite{Majhi2,Majhi3} for the massless case. We will perform the calculation for the massive arbitrary-spin case to every order in $\hbar$, and will retrieve the results from \cite{Majhi3}, albeit with an additional term which will not contribute to the Hawking temperature. The appearance of this term is due to our calculations being more thorough than previous ones, as we attempt to fill the gaps left behind by previous works. Moreover, our work has a slightly different interpretation than that of \cite{Majhi3}, as we will discuss in Section \ref{sec:discussion}.
\\\\
The Dirac equation implies that
\myeq{ \label{dirac} 0 = -i\gamma^\mu \left( \partial_\mu a_{\mu_1 \cdots \mu_n a} - \frac{i}{\hbar}a_{\mu_1 \cdots \mu_n a}\partial_\mu I \right) + ma_{\mu_1 \cdots \mu_n a} + \frac{i}{8} \gamma^\mu g^{\nu \beta}\Gamma^\alpha_{\mu \nu}[\gamma_\alpha,\gamma_\beta]a_{\mu_1 \cdots \mu_n a},}
while the other Rarita-Schwinger equations will simply relate the various $\mu_i$ indices and will have no effect on the action; more details can be found in \cite{Yale2009}. We define the vierbein $e^I_\mu$ so that $e^I_\mu e^J_\nu \eta^{\mu \nu} = g^{IJ}$; for the metric $(\ref{metric})$, this means $e_a^b = \sqrt{|g^{aa}|} \delta_a^b$. We also define the Dirac matrices $\gamma^I = e^I_\mu \hat{\gamma}^\mu$, where the $\hat{\gamma}^\mu$ represent the flat-space $\gamma$ matrices in Majorana representation:
\myeq{ \hat{\gamma}^0= \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) \hspace{1.cm}
\hat{\gamma}^i= \left( \begin{array}{cc} 0 & \sigma^i \\ \sigma^i & 0 \end{array} \right),
}
where the $\sigma^i$ are the standard Pauli matrices. In particular, we have $\left\{ \gamma^I,\gamma^J \right\} = 2 e^I_\mu e^J_\nu \eta^{\mu \nu} = 2g^{IJ}$. Near the horizon, the metric only depends on the radial coordinate, which means that $\partial_\mu$ for $\mu \neq r$ represents a Killing vector. This then implies that $a_{\mu_1 \cdots \mu_n a}$ can only be a function of $r$: the dependence of the fermion field on the other coordinates is restricted to a phase (such as $e^{iEt}$). Therefore, $\gamma^\mu \partial_\mu a_{\mu_1 \cdots \mu_n a} = \gamma^r \partial_r a_{\mu_1 \cdots \mu_n a} = \sqrt{g^{rr}} \hat{\gamma}^r \partial_r a_{\mu_1 \cdots \mu_n a} = 0$, since $g^{rr}$ vanishes while $a_{\mu_1 \cdots \mu_n a}$ remains finite. Hence, near the horizon, the Dirac equations become a system of equations linear in $a_{\mu_1 \cdots \mu_n a}$:
\myeq{ 0 = \left( \frac{-1}{\hbar}\gamma^\mu \partial_\mu I + m + \frac{i}{8} \gamma^\mu g^{\nu \beta}\Gamma^\alpha_{\mu \nu}[\gamma_\alpha,\gamma_\beta]\right) a_{\mu_1 \cdots \mu_n a} .}
Reading this as a matrix equation in the Dirac indices, it is obvious that $\left( \frac{-1}{\hbar}\gamma^\mu \partial_\mu I + m + \frac{i}{8} \gamma^\mu g^{\nu \beta}\Gamma^\alpha_{\mu \nu}[\gamma_\alpha,\gamma_\beta]\right)$ being invertible would imply $a_{\mu_1 \cdots \mu_n a} = 0$. Thus, we demand
\myeq{ 0 &= \text{Det} \left( \frac{-1}{\hbar}\gamma^\mu \partial_\mu I + m + \frac{i}{8} \gamma^\mu g^{\nu \beta}\Gamma^\alpha_{\mu \nu}[\gamma_\alpha,\gamma_\beta]\right) \\
&= \text{Det} \left( \begin{array}{c c c c}
-\hbar m & 0 & A & B \\
0 & -\hbar m & C & D \\
-D & B & -\hbar m & 0 \\
C & -A & 0 & -\hbar m
\end{array} \right) \\
&= (AD-BC)^2 + 2\hbar^2 m^2(AD-BC) + \hbar^4 m^4 = \left( AD-BC+\hbar^2 m^2 \right)^2
}
where we defined
\myeq{
A &= \sqrt{-g^{tt}}\partial_t I + \sqrt{g^{rr}} \partial_r I + \frac{i \hbar}{4} \sqrt{g^{tt}}(g^{tt} g^{rr})^{3/2} g_{tt,r} \\
B &= \sqrt{g^{\theta \theta}} \partial_\theta I + i \sqrt{g^{\phi \phi}} \partial_\phi I \\
C &= \sqrt{g^{\theta \theta}} \partial_\theta I - i \sqrt{g^{\phi \phi}} \partial_\phi I \\
D &= \sqrt{-g^{tt}}\partial_t I - \sqrt{g^{rr}} \partial_r I - \frac{i \hbar}{4} \sqrt{g^{tt}}(g^{tt} g^{rr})^{3/2} g_{tt,r}.
}
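A quick way to obtain this determinant is to exploit the block structure of the matrix: writing $X = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right)$, the lower-left block is $-\text{adj}(X)$; since the lower blocks commute, the determinant equals $\text{Det}\left( \hbar^2 m^2 \mathbb{I} - X \left( -\text{adj}(X) \right) \right)$, and $X \, \text{adj}(X) = \text{Det}(X) \, \mathbb{I}$. This gives
\myeq{ \text{Det} \left( \begin{array}{cc} -\hbar m \, \mathbb{I} & X \\ -\text{adj}(X) & -\hbar m \, \mathbb{I} \end{array} \right) = \left( AD - BC + \hbar^2 m^2 \right)^2 . }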
As we approach the horizon, the first two terms in $A$ and $D$ diverge while $B$, $C$ and the mass term remain finite, so that the latter do not contribute. We ultimately find $AD=0$, which implies
\myeq{ \label{fermionFinal} \partial_r I = \sqrt{ \frac{ -g^{tt} }{ g^{rr} } } \left(\pm E - \frac{i \hbar}{4}(g^{tt}g^{rr})^{3/2}g_{tt,r} \right) .}
\\\\
As discussed in \cite{Kerner2008}, studying fermions provides us with insight which is absent from the scalar case: a direct meaning for the $\pm$ in $(\ref{fermionFinal})$. Indeed, consider the massless spin-$1/2$ case, where the fermion field is $\Psi_a = a_ae^{\frac{i}{\hbar}I}$. The spin-up case corresponds to
\myeq{ a_a = \left[ \begin{array}{c} \xi_+ \alpha \\ \xi_+ \beta \end{array} \right]
= \left[ \begin{array}{c} \alpha \\ 0 \\ \beta \\ 0 \end{array} \right] ,}
where $\xi_+$ is the positive-spin eigenvector of $\sigma_r$. Then, combining equations $(\ref{dirac})$ and $(\ref{fermionFinal})$, we find that either $A=0$ or $B=0$. If $A=0$, $a_a$ will be an eigenvector of $\gamma^5$ with positive eigenvalue and therefore right-handed, whereas if $B=0$, $a_a$ will be left-handed. Thus, since they have the same spin, the two solutions of $(\ref{fermionFinal})$ correspond to particles of opposite momenta: one is falling into the black hole whereas the other is outgoing.
\section{Bosons} \label{sec:bosons}
Although very little attention has been given to bosons using the tunneling method, the emission of a $U(1)$ field from a generic black hole has recently been considered up to second order in $\hbar$ \cite{Majhi3}. We will here expand on this result to find an exact formula for $\partial_r I(r)$ for an arbitrary non-Abelian Yang-Mills theory. We therefore consider a vector field $A_\mu^a = a_\mu^a e^{\frac{-i}{\hbar} I}$ obeying the equations of motion
\myeq{ 0 &= \nabla^\nu F_{\mu \nu}^a \\
&= g^{\nu \alpha} \left[ \partial_\alpha F_{\mu \nu}^a - \Gamma_{\alpha \mu}^\lambda F_{\lambda \nu}^a - \Gamma_{\alpha \nu}^\lambda F_{\mu \lambda}^a + gf^{abc} A_\nu^b F_{\alpha \mu}^c \right] ,
}
where we defined $F_{\mu \nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + gf^{abc}A_\mu^b A_\nu^c$. Expanding this according to $A \propto a e^{\frac{-i}{\hbar}I}$, we find the expression
\myeq{ \label{long1}
0 = g^{\nu \alpha} \bigg[
& \partial_\alpha \partial_\mu a_\nu^a - \frac{i}{\hbar}\partial_\alpha a_\nu^a \partial_\mu I - \frac{1}{\hbar^2}a_\nu^a \partial_\alpha I \partial_\mu I - \frac{i}{\hbar}a_\nu^a \partial_\alpha \partial_\mu I - \frac{i}{\hbar}\partial_\mu a_\nu^a \partial_\alpha I \\
&-\partial_\alpha \partial_\nu a_\mu^a + \frac{i}{\hbar}\partial_\alpha a_\mu^a \partial_\nu I + \frac{1}{\hbar^2}a_\mu^a \partial_\alpha I \partial_\nu I + \frac{i}{\hbar} a_\mu^a \partial_\alpha \partial_\nu I + \frac{i}{\hbar}\partial_\nu a_\mu^a \partial_\alpha I \\
&+ gf^{abc}A_\nu^c \left( \partial_\alpha a_\mu^b - \frac{i}{\hbar}a_\mu^b \partial_\alpha I \right) \\
&+ gf^{abc}A_\mu^b \left( \partial_\alpha a_\nu^c - \frac{i}{\hbar}a_\nu^c \partial_\alpha I \right) \\
&-\Gamma^\lambda_{\alpha \mu}\left( \partial_\lambda a_\nu^a - \frac{i}{\hbar}a_\nu^a\partial_\lambda I - \partial_\nu a_\lambda^a + \frac{i}{\hbar}a_\lambda^a \partial_\nu I + gf^{abc}A_\lambda^b A_\nu^c \right) \\
&+\Gamma^\lambda_{\alpha \nu}\left( \partial_\lambda a_\mu^a - \frac{i}{\hbar}a_\mu^a \partial_\lambda I - \partial_\mu a_\lambda^a + \frac{i}{\hbar}a_\lambda^a \partial_\mu I + gf^{abc}A_\lambda^b A_\mu^c \right) \\
&+ gf^{abc}A_\nu^b \left( \partial_\alpha a_\mu^c - \frac{i}{\hbar}a_\mu^c \partial_\alpha I - \partial_\mu a_\alpha^c + \frac{i}{\hbar}a_\alpha^c \partial_\mu I + gf^{cde}A_\mu^d A_\alpha^e \right) \bigg].
}
We first simplify this expression by fixing the gauge:
\myeq{ 0 &= \nabla_\mu A^{a \mu} \\
&= g^{\nu \alpha} \left[ \partial_\alpha a_\nu^a - \frac{i}{\hbar}a_\nu^a \partial_\alpha I - \Gamma^\lambda_{\nu \alpha}a_\lambda^a \right];
}
then, $(\ref{long1})$ becomes
\myeq{ \label{long2}
0 = g^{\nu \alpha} \bigg[
& \partial_\alpha \partial_\mu a_\nu^a - \frac{i}{\hbar}a_\nu^a \partial_\alpha \partial_\mu I - \frac{i}{\hbar}\partial_\mu a_\nu^a \partial_\alpha I \\
&-\partial_\alpha \partial_\nu a_\mu^a + \frac{i}{\hbar}\partial_\alpha a_\mu^a \partial_\nu I + \frac{1}{\hbar^2}a_\mu^a \partial_\alpha I \partial_\nu I + \frac{i}{\hbar} a_\mu^a \partial_\alpha \partial_\nu I + \frac{i}{\hbar}\partial_\nu a_\mu^a \partial_\alpha I \\
&+ gf^{abc}A_\nu^c \left( \partial_\alpha a_\mu^b - \frac{i}{\hbar}a_\mu^b \partial_\alpha I \right) \\
&+ gf^{abc}A_\mu^b \left( \partial_\alpha a_\nu^c - \frac{i}{\hbar}a_\nu^c \partial_\alpha I \right) \\
&-\Gamma^\lambda_{\alpha \mu}\left( \partial_\lambda a_\nu^a - \frac{i}{\hbar}a_\nu^a\partial_\lambda I - \partial_\nu a_\lambda^a + \frac{i}{\hbar}a_\lambda^a \partial_\nu I + gf^{abc}A_\lambda^b A_\nu^c \right) \\
&+\Gamma^\lambda_{\alpha \nu}\left( \partial_\lambda a_\mu^a - \frac{i}{\hbar}a_\mu^a \partial_\lambda I - \partial_\mu a_\lambda^a + gf^{abc}A_\lambda^b A_\mu^c \right) \\
&+ gf^{abc}A_\nu^b \left( \partial_\alpha a_\mu^c - \frac{i}{\hbar}a_\mu^c \partial_\alpha I - \partial_\mu a_\alpha^c + \frac{i}{\hbar}a_\alpha^c \partial_\mu I + gf^{cde}A_\mu^d A_\alpha^e \right) \bigg].
}
We now focus on the $\mu=t$ equation. Setting the time derivatives of $a_\mu$ to zero and simplifying slightly, we find
\myeq{
0 &= g^{tt} \left[ \frac{1}{\hbar^2}a_t^a (\partial_t I)^2 + \frac{i}{2\hbar} g^{rr}g_{tt,r} a_r^a \partial_t I \right] \\
&+ g^{rr} \bigg[
- \frac{i}{\hbar}a_r^a \partial_r \partial_t I
-\partial_r \partial_r a_t^a + \frac{i}{\hbar}\partial_r a_t^a \partial_r I + \frac{1}{\hbar^2}a_t^a \partial_r I \partial_r I + \frac{i}{\hbar} a_t^a \partial_r \partial_r I + \frac{i}{\hbar}\partial_r a_t^a \partial_r I \\
&+ gf^{abc}A_r^c \partial_r a_t^b
+ gf^{abc}A_t^b \partial_r a_r^c\\
&-\Gamma^t_{rt}\left( - \frac{i}{\hbar}a_r^a\partial_t I - \partial_r a_t^a + \frac{i}{\hbar}a_t^a \partial_r I + gf^{abc}A_t^b A_r^c \right) \\
&+\Gamma^r_{rr}\left( \partial_r a_t^a - \frac{i}{\hbar}a_t^a \partial_r I + \frac{i}{\hbar} a_r^a \partial_t I + gf^{abc}A_r^b A_t^c \right) \\
&+ gf^{abc}A_r^b \left( \partial_r a_t^c - \frac{i}{\hbar}a_t^c \partial_r I + \frac{i}{\hbar}a_r^c \partial_t I + gf^{cde}A_t^d A_r^e \right) \bigg] \\
&+ (\cdots),
}
where the $(\cdots)$ refers to the $\theta$ and $\phi$ sectors of the equation. We omitted those terms since they will not contribute to $\partial_r I$ near the horizon, because they will remain finite while other terms, such as $g^{tt} (\partial_t I)^2$, diverge. Using the fact that the $a_\mu^a$ are normalized to be finite everywhere near the horizon (such that, for example, $g^{rr} a_\mu \rightarrow 0$), and looking only at the real part of this equation \footnote{We are solving for $\text{Im} I$ by integrating $\partial_r I$ around a pole at the horizon. Thus, only real divergent terms can contribute.}, we find
\myeq{
0 &= g^{tt} \left[ \frac{1}{\hbar^2}a_t^a (\partial_t I)^2 \right] \\
&+ g^{rr} \left[
\frac{1}{\hbar^2}a_t^a \partial_r I \partial_r I + \Gamma_{rt}^t \partial_r a_t^a - \Gamma^t_{rt} gf^{abc}A_t^b A_r^c - g\frac{i}{\hbar}f^{abc}A_r^b a_t^c \partial_r I \right].
}
Since $\Gamma^t_{rt}=\frac{1}{2}g^{tt}g_{tt,r}$, we have $g^{tt} \gg g^{rr} \Gamma^t_{rt}$, so that the two middle terms in the square brackets vanish. Our expression is therefore a quadratic polynomial in $\partial_r I$, which we will denote $0=ax^2 + bx + c$, where $a$, $b$ and $\frac{1}{c}$ all go to zero at the same speed. Then, since $\frac{b}{a}$ is finite while $\frac{c}{a}$ diverges, we find $x = \pm \sqrt{ \frac{-c}{a}}$. Hence:
\myeq{ \partial_r I = \pm \sqrt{ \frac{-g^{tt}}{g^{rr}}} \partial_t I .}
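The last step follows from the quadratic formula: since $\frac{b}{a}$ remains finite while $\frac{c}{a}$ diverges near the horizon,
\myeq{ x = -\frac{b}{2a} \pm \sqrt{ \frac{b^2}{4a^2} - \frac{c}{a} } = \pm \sqrt{\frac{-c}{a}} \left( 1 + o(1) \right), }
so the finite part $-\frac{b}{2a}$ is subleading.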
\section{Temperature} \label{sec:temperature}
Calculating the Hawking temperature from $\partial_r I$ has long been a contested issue, as many questions surrounding the covariance of the method have been raised. In particular, it appeared to yield different Hawking temperatures, differing by a factor of two \cite{Pilling2008}, depending on the coordinate system used. Now that the tunneling method has matured and become better understood, it is generally felt that we have a good handle on this issue.
\\\\
While many potential techniques have been proposed \cite{Mitra2006,Akhmedov2008,Stotyn2009,Majhi5,Akhmedov2006,Akhmedov2006b,Chowdhury2006}, we will here calculate the temperature using the approach recently summarized in \cite{Gill2010} which, in particular, assumes that the tunneling rate follows a thermal distribution. The temperature gets two contributions: one from the integration over the radial coordinate and one from the discontinuity in the time coordinate:
\myeq{ \label{temp} T_H = \frac{E}{\text{Im} \left( \int \partial_r I_+ - \int \partial_r I_- + 2E \Delta t \right)}.}
We begin by calculating the contribution from $\partial_r I$. In all cases, defining the energy as $E = -\partial_t I$, we have $\partial_r I = \frac{1}{f(r)} \left(\pm E + C \right)$ for some finite function $C$. It is clear, then, that
\myeq{ \int \partial_r I_+ - \int \partial_r I_- &= \int \frac{1}{f(r)} \left( E + C \right) - \int \frac{1}{f(r)} \left( -E + C \right) \\
&= E \oint \frac{dr}{f(r)} \\
&= \frac{2 \pi i E}{f'(r_H)}.
}
Next, we find the contribution from the discontinuity in the time coordinate, $\Delta t$, across the horizon. The metric $(\ref{metric})$ corresponds to an accelerated observer in flat space who follows the path
\myeq{ t_{\text{out}} &= \frac{ \sqrt{f(r)}}{a} \sinh(a t) \hspace{2.cm} t_{\text{in}} = \frac{ \sqrt{-f(r)}}{a} \cosh(a t) \\
x_{\text{out}} &= \frac{ \sqrt{f(r)}}{a} \cosh(a t) \hspace{2.cm} x_{\text{in}} = \frac{ \sqrt{-f(r)}}{a} \sinh(a t),
}
where the ``in'' and ``out'' subscripts refer to whether we are considering $r \leq r_H$ or $r > r_H$, and where $a = \frac{f'(r_H)}{2}$. Thus, as the horizon is crossed, we need $t \rightarrow t - \frac{i \pi}{2a}$, so $\Delta t = \frac{i \pi}{f'(r_H)}$. Hence, $2E \Delta t = \frac{2E i \pi}{f'(r_H)}$ and, from $(\ref{temp})$, we get the Hawking temperature
\myeq{ T_H = \frac{f'(r_H)}{4\pi} }
for every type of particle. This agrees with the temperature commonly found in the literature, which is usually calculated only to leading order in $\hbar$.
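As a consistency check, for the Schwarzschild metric $f(r) = 1 - \frac{2M}{r}$ we have $r_H = 2M$ and $f'(r_H) = \frac{1}{2M}$, so that
\myeq{ T_H = \frac{1}{8 \pi M} , }
recovering the standard Hawking temperature quoted in the introduction.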
\section{Discussion} \label{sec:discussion}
We've completed the study of spin-$1$ bosons, initiated in \cite{Majhi3}, by extending it to the non-Abelian case, by giving proper physical motivation for a number of terms dropping out, and by calculating all higher-order terms. Combined with previous results for scalars and fermions, this finally confirms that Hawking radiation is independent of the type of particle involved. Although our results show that bosons are emitted at the Hawking temperature regardless of the symmetries of the underlying theory, we needed to fix the gauge ($\nabla_\mu A^\mu = 0$) in order to perform the calculations.
\\\\
We've also shown that the $\hbar \rightarrow 0$ limit is unnecessary when using the tunneling method. Indeed, since the method is highly local in considering the emission of a field from a pole at the horizon, we are forced to take the limit $r \rightarrow r_H$. It is therefore natural to take $g_{tt} \rightarrow 0$ instead of $\hbar \rightarrow 0$, such that the tunneling method can truly be understood as the gravitational analog to the quantum WKB method. There are, however, some important drawbacks to this approach. First, by assuming that there is no back-reaction, we are drastically restricting quantum processes which may affect the radiation process. Second, one might expect to see grey-body corrections; the fact that these are missed by the tunneling approach makes this method suspect \footnote{The author would like to thank T Padmanabhan for bringing this point to his attention.}.
\\\\
It is also important to distinguish our results from those of Majhi et al.\ \cite{Majhi1,Majhi2,Majhi3}, who have recently calculated non-zero contributions to the Hawking temperature coming from higher-order terms in the tunneling method. This mismatch is simply a consequence of differing definitions of energy, which we can illustrate using the free scalar field. For this case, our calculations yield equation $(\ref{scalarFinal})$, which is also found in \cite{Majhi1}. We define the energy as $E = -\partial_t I$, and therefore find no additional contributions to the Hawking temperature. On the other hand, \cite{Majhi1} defines the energy as $E = -\partial_t I_0$ where $I_0$ is the leading term in the action $I = \sum \hbar^i I_i$; this then produces higher-order corrections. We are not the first to question these corrections; indeed, \cite{Wang2010,Mitra} have pointed out that they are caused by an odd definition of the field's energy and concluded that the Hawking temperature is not modified by higher-order terms. Moreover, in previous works on this topic, terms with no explicit dependence on the action (such as the last term in our equation $(\ref{fermionFinal})$) are automatically set to zero; we've filled this gap by providing the necessary justification as to why such terms cannot contribute to the Hawking temperature.
\\\\
In conclusion, we have calculated, within the tunneling approach and to every order in $\hbar$, the temperature associated with the emission of scalars, fermions of arbitrary spin, and spin-$1$ bosons from a generic black hole spacetime, as long as back-reaction is neglected.
\acknowledgments{
This work was supported by the Natural Sciences and Engineering Research Council of Canada. The author would like to thank Ross Diener, Nima Doroud, as well as the reviewer for insightful comments on the manuscript.
}
\section{Introduction}
It is well known that tumor tissues are often stiffer than normal tissues. For instance, a normal mammary gland has an elastic modulus of about $2$ hPa, which may dramatically increase for a breast tumor up to about $4$ kPa \cite{paszek2005tensional}. For this reason self-palpation is often a successful tool of pre-diagnosis for the detection of possible stiffer nodules, and is therefore encouraged. In most cases, the increased stiffness is due to the presence of a denser and more fibrous stroma \cite{butcher2009tense,kass2007mammary,takeuchi1976variation} coming from a considerable change in the content of ExtraCellular Matrix (ECM). Indeed, as reported in \cite{paszek2005tensional}, doubling the percent amount of collagen would increase the stiffness of a tissue by almost one order of magnitude ($328$ Pa and $1589$ Pa for a $2$ mg/ml and a $4$ mg/ml collagen mixture, respectively). The percentage of ECM also changes within the same tumor type during tumor progression \cite{zhang2003characteristics}.
The continuous remodeling of ECM is a physiologically functional process, because it keeps the stroma young and reactive. In fact, prolonged rest is detrimental for bones and muscles, while physical training has the opposite effect. The ECM is constantly renewed through the concomitant production of Matrix
MetalloProteinases (MMPs) and new ECM components. In stationary conditions, remodeling of ECM is a slow process: for instance, in human lungs the physiological turnover of ECM is $10$ to $15\%$ per day \cite{johnson2001role}, which leads to an estimated complete turnover in a period of nearly one week. However, when a new tissue has to be formed, e.g. to heal a wound, the rate of production is one or two orders of magnitude faster \cite{chiquet1996regulation,dejana1990fibrinogen}. It is also well known that the remodeling process is strongly affected by the stress and the strain the tissue undergoes, as clearly occurs for bones, teeth, and muscles \cite{kim2002gene,kjaer2004role,mao2004growth}. Hence, the relation between the rate of production/degradation of ECM constituents and the pressure felt by the cells is rather complicated.
An increased presence of ECM characterizes not only many tumors, but has also been observed in other pathologies such as intima hyperplasia, cardiac, liver, and pulmonary fibrosis, asthma, and colon cancer \cite{berk2007ecm,brewster1990myofibroblasts,iredale2007models,johnson2001role,liotta2001microenvironment,pinzani2000liver}. The alteration in the ECM composition can be due to several, probably concurring, reasons, including increased synthesis of ECM proteins, decreased activity of MMPs, and upregulation of Tissue-specific Inhibitors of MetalloProteinases (TIMPs).
The interaction between ECM and cells is also attracting the attention of many researchers for other reasons. Indeed, on the one hand cells must adhere properly in order to survive, otherwise they undergo a form of death called \emph{anoikis}, and they must be anchored to the ECM to undergo mitosis. On the other hand, the interaction with the stroma has been argued to be one of the causes of tumor progression \cite{butcher2009tense,hautekeete1997hepatic,
kass2007mammary,liotta2001microenvironment,ruiter2002melanoma}.
In this paper we propose a general multiphase mathematical model able to describe the formation of fibrosis through either excessive production of ECM or underexpression of MMPs. The model is based on the frameworks deduced in \cite{chaplain2006mml,MR2471305}, taking also cell-ECM adhesion into account. In particular, ECM is regarded as a rigid scaffold while the cell populations (tumor and healthy cells) are assumed to behave similarly to elastic fluids. More realistic constitutive models, taking cell-cell adhesion into account and comparing theoretical and experimental results, can be found in \cite{ambrosi2008cam,preziosi2009evp}. For the sake of conciseness, we refrain from citing here all papers dealing with multiphase models of tumor growth, and refer to the recent reviews \cite{MR2253816,preziosi2009mmt,tracqui2009bmt} for more references.
In more detail, Sect. \ref{sect:multiphase} derives and describes the model, which is then studied from the qualitative point of view in Sect. \ref{sect:spathomog}, having in mind the general dependence on the parameters stemming from biology. Existence, uniqueness, and continuous dependence of the solution on the initial data are proved in the spatially homogeneous case, and equilibrium configurations are discussed. These theoretical investigations reveal several interesting features of the model, for instance the fact that it predicts no other equilibria but the fully physiological and the fully pathological ones, featuring no tumor cells and no healthy cells, respectively. The physiological equilibrium turns out to be stable in the manifold with no tumor cells, but becomes unstable as soon as a few tumor cells are present, which trigger the formation of fibrotic tissue.
\section{Multiphase modeling: general picture and particular cases}
\label{sect:multiphase}
In the multiphase modeling approach, tumors are regarded as a mixture of several interacting components whose main state variables are the volume ratios, i.e., their percent amounts within the mixture. With a view to providing a simplified, though still realistic, description of the system, we confine our attention to two cell populations: tumor cells, with volume ratio $\phi_T$, and healthy cells, with volume ratio $\phi_H$, moving within a remodeling extracellular matrix with volume ratio $\phi_M$. Clearly, $0\leq\phi_\alpha\leq 1$ for all $\alpha=T,\,H,\,M$.
\subsubsection*{Balance equations for the cellular matter}
\label{sect:celleq}
Following \cite{MR2471305}, we obtain the main governing equations for the cellular matter by joining the mass balance equation and the corresponding balance of linear momentum (with inertial effects neglected):
\begin{equation}
\frac{\partial\phi_\alpha}{\partial t}-\nabla\cdot\left(\phi_\alpha\left(\frac{\phi_\alpha}{\phi}
-\frac{\sigma_{\alpha M}}{\vert\nabla{(\phi\Sigma(\phi))}\vert}\right)^+\mathbb{K}_{\alpha M}
\nabla{(\phi\Sigma(\phi))}\right)=\Gamma_\alpha\phi_\alpha,
\label{eq:celleq}
\end{equation}
where $\phi:=\phi_T+\phi_H$, $\Gamma_\alpha$ is the duplication/death rate, and $\mathbb{K}_{\alpha M}$ the cell motility tensor within the matrix. Cells are regarded as elastic balloons forming an isotropic fluid, and are assumed to feature equal mechanical properties, hence their stress tensor is $\mathbb{T}=-\Sigma(\phi)\mathbb{I}$ for a pressure-like function $\Sigma$. In addition, the model accounts for the attachment/detachment of the cells to/from the matrix by means of a stress threshold $\sigma_{\alpha M}\geq 0$, which switches cell velocity on or off according to the magnitude of the actual stress sustained by cells in interaction with the matrix (see \cite{MR2471305} for further details).
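Here $(\cdot)^{+}$ denotes the positive part, $(s)^{+}=\max\{s,\,0\}$: cells detach from the matrix, and hence move, only when the stress they sustain in the interaction with it exceeds the threshold $\sigma_{\alpha M}$.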
In the application to matrix remodeling and fibrosis, we consider that cells duplicate and die mainly on the basis of the amount of matrix present in the mixture. In general, also the availability of some nutrients plays a major role, but in the present context we assume that they are always abundantly supplied to the cells. Specifically, we set
\begin{equation}
\Gamma_\alpha=\gamma_\alpha(\phi_M)H_{\epsilon_\alpha}(\psi_\alpha-\psi)-\delta_\alpha
-\delta'_\alpha H_{\epsilon_M}(m_\alpha-\phi_M),
\label{eq:Gamma}
\end{equation}
where $\psi:=\phi_T+\phi_H+\phi_M$ is the overall volume ratio occupied by cells and matrix, and $\gamma_\alpha(\cdot)$ is the net growth rate of the cell population $\alpha$, tempered by the free space rate $H_{\epsilon_\alpha}$. In particular, the $H_{\epsilon_\alpha}$'s are functions bounded between $0$ and $1$, which vanish on $(-\infty,\,0)$ and equal $1$ on $(\epsilon_\alpha,\,+\infty)$ (further analytical details in Sect. \ref{sect:spathomog}, Assumption \ref{hp:param}). Cell growth is inhibited when the amount of free space locally available is too small ($\psi\geq\psi_\alpha$) with respect to a threshold $\psi_\alpha\in[0,\,1]$. At the same time, either apoptosis or anoikis can trigger cell death at rates $\delta_\alpha,\,\delta_\alpha'>0$, respectively, the latter taking place when too small an amount of ECM ($\phi_M\leq m_\alpha$) with respect to a given threshold $m_\alpha\in[0,\,1]$ results in an insufficient number of available adhesion sites.
Usually $\gamma_T(\cdot)=\gamma_H(\cdot)$, $\delta_T=\delta_H$, $m_T=m_H$, $\epsilon_T=\epsilon_H=\epsilon_M$. Instead, a difference between $\psi_T$ and $\psi_H$, with $\psi_T>\psi_H$, may be related to a smaller sensitivity to contact inhibition cues by tumor cells \cite{chaplain2006mml}. On the whole, we notice that it must be
\begin{equation}
\Gamma_T(\phi_M,\,\psi)>\Gamma_H(\phi_M,\,\psi), \qquad \forall\,(\phi_M,\,\psi)\in[0,\,1]\times[0,\,1]
\label{eq:Gamma-ineq}
\end{equation}
which holds if: (i) $\delta'_T<\delta'_H$ (smaller sensitivity to anoikis by tumor cells), (ii) $\epsilon_T>\epsilon_H$ (different speed for the switch mechanism, e.g. because of a different uncertainty in the response to mechanical stimuli), (iii) $m_T<m_H$ (higher capability of tumor cells to escape anoikis by surviving a greater lack of adhesion sites).
\subsubsection*{Matrix remodeling}
In general, the ECM is a quite complicated fibrous me\-dium. For the sake of simplicity, we model it as a rigid scaffold, which makes it unnecessary to detail its stress tensor because the internal stress is indeterminate due to the rigidity constraint. Under this assumption, the evolution equation for the volume ratio $\phi_M$ reads
\begin{equation}
\frac{\partial\phi_M}{\partial t}=\Gamma_M,
\label{eq:ecmeq}
\end{equation}
where $\Gamma_M$ is the source/sink of ECM accounting for remodeling and degradation due to the motion of the cells within the scaffold. Notice that, in general, $\phi_M$ depends on both time $t$ and space $x$, although the latter acts mainly as a parameter in the above differential equation.
Matrix is globally remodeled by cells and degraded by MMPs, whose concentration per unit volume is denoted by $e=e(t,\,x)$:
\begin{equation}
\Gamma_M=\sum_{\alpha=T,\,H}\mu_\alpha(\phi_M)H_{\epsilon_M}(\psi_M-\psi)\phi_\alpha-\nu e\phi_M.
\label{eq:GammaM-1}
\end{equation}
Here, $\mu_\alpha$ is a nonnegative, nonincreasing function (cf. Sect. \ref{sect:spathomog}, Assumption \ref{hp:param}) representing the net matrix production rate by the cell population $\alpha$ tempered by the free space rate $H_{\epsilon_M}$, and $\nu>0$ is the degradation rate by the enzymes. As usual, the latter are not regarded as a constituent of the mixture, but rather as massless macromolecules diffusing in the extracellular fluid according to a reaction-diffusion equation: $e_t=D\Delta{e}+\sum_{\alpha=T,\,H}\pi_\alpha\phi_\alpha-e/\tau$,
for net production rates $\pi_\alpha>0$ and enzyme half-life $\tau>0$. Actually, enzyme dynamics is much faster than that involving cell growth and death, hence it is possible to work under a quasi-stationary approximation. Furthermore enzyme action is usually very local \cite{barker2000cws}, so that also diffusion can be neglected and finally $e=\tau\sum_{\alpha=T,\,H}\pi_\alpha\phi_\alpha.$
Inserting this expression into Eq. \eqref{eq:GammaM-1} and defining $\nu_\alpha:=\nu\tau\pi_\alpha$ ultimately yields
\begin{equation*}
\Gamma_M=\sum_{\alpha=T,\,H}\left(\mu_\alpha(\phi_M)H_{\epsilon_M}(\psi_M-\psi)
-\nu_\alpha\phi_M\right)\phi_\alpha.
\label{eq:GammaM-2}
\end{equation*}
The pathological cases possibly leading to fibrosis are either $\mu_T(\cdot)>\mu_H(\cdot)$ or $\nu_T<\nu_H$, which imply that tumor cells produce either more ECM or fewer MMPs than healthy cells, respectively.
\section{The spatially homogeneous problem}
\label{sect:spathomog}
The spatially homogeneous problem describes the evolution of the system under the main assumption of absence of spatial variation of the state variables $\phi_T$, $\phi_H$, $\phi_M$. In particular, this allows us to describe the equilibria, and the related basins of attraction, as functions of the parameters of the model.
In the sequel we will be concerned with the following initial value problem:
\begin{equation}
\left\{
\begin{array}{rcll}
\dfrac{d\phi_\alpha}{dt} & = & \left[\gamma_\alpha(\phi_M)H_{\epsilon_\alpha}(\psi_\alpha-\psi)-\delta_\alpha
-\delta_\alpha'H_{\epsilon_M}(m_\alpha-\phi_M)\right]\phi_\alpha, \quad \alpha=T,\,H \\[0.3cm]
\dfrac{d\phi_M}{dt} & = & \displaystyle{\sum_{\alpha=T,\,H}}
(\mu_\alpha(\phi_M)H_{\epsilon_M}(\psi_M-\psi)-\nu_\alpha\phi_M)\phi_\alpha \\[0.6cm]
\phi_\alpha(0) & = & \phi_\alpha^0\in[0,\,1], \quad \alpha=T,\,H,\,M
\end{array}
\right.
\label{eq:ode}
\end{equation}
over a time interval $(0,\,T]$, $T>0$. Some preliminary technical assumptions are in order:
\begin{assumption}
For $\alpha=T,\,H,\,M$ as appropriate, we assume $0\leq\psi_\alpha,\,m_\alpha\leq 1$, $\delta_\alpha,\,\delta_\alpha',\,\nu_\alpha>0$, and in addition that $\gamma_\alpha,\,\mu_\alpha:[0,\,1]\to\mathbb{R}_+$ are Lipschitz continuous, with $\gamma_\alpha$ nondecreasing, $\gamma_\alpha(0)=0$, and $\mu_\alpha$ nonincreasing.
Moreover, we assume that the functions $H_{\epsilon_\alpha}:\mathbb{R}\to [0,\,1]$ are Lipschitz continuous and vanishing on $(-\infty,\,0]$.
\label{hp:param}
\end{assumption}
The monotonicity of $\gamma_\alpha,\,\mu_\alpha$ is dictated by the fact that cell proliferation is fostered by the presence of ECM, whereas production of new matrix is progressively inhibited by the accumulation of other matrix. Owing to the properties recalled in Assumption \ref{hp:param}, $\gamma_\alpha$, $\mu_\alpha$, and $H_{\epsilon_\alpha}$ satisfy
\begin{gather}
\gamma_\alpha(s)\leq\Lip{\gamma_\alpha}s, \ \gamma_\alpha(s)\leq\gamma_\alpha(1), \
\mu_\alpha(1)\leq\mu_\alpha(s)\leq\mu_\alpha(0), \ \forall\,s\in[0,\,1],
\label{eq:propgm}
\\
H_{\epsilon_\alpha}(s-\beta)\leq\Lip{H_{\epsilon_\alpha}}\vert s\vert, \quad
\forall\,s\in\mathbb{R},\,\beta\geq 0.
\label{eq:propH}
\end{gather}
The functions $H_{\epsilon_\alpha}$ may be taken to be mollifications of the Heaviside function, for instance $H_{\epsilon_\alpha}(s)=0$ if $s<0$, $H_{\epsilon_\alpha}(s)=\epsilon_\alpha^{-1}s$ if $0\leq s\leq\epsilon_\alpha$, and $H_{\epsilon_\alpha}(s)=1$ if $s>\epsilon_\alpha$, or even smoother.
Let us introduce the space $V^d:=C([0,\,T];\,\mathbb{R}^d)$ of continuous functions $\u:[0,\,T]\to\mathbb{R}^d$, endowed with the $\infty$-norm $\|\u\|_\infty=\max_{t\in[0,\,T]}\|\u(t)\|_1$. In proving our results we will utilize $d=3$ and $d=4$.
\subsubsection*{Well-posedness}
We start by studying existence, uniqueness, and continuous dependence on the data of the solution $\boldsymbol{\phi}=(\phi_T,\,\phi_H,\,\phi_M)$ to problem \eqref{eq:ode}. We will then also discuss its regularity. Since the $\phi_\alpha$'s are volume ratios, we are interested in nonnegative solutions such that $\psi(t)\leq 1$ for all $t\geq 0$.
\begin{theorem}[Existence, uniqueness, and continuous dependence]
\label{theo:wellpos}
For each initial datum $\boldsymbol{\phi}^0\geq 0$ with $\|\boldsymbol{\phi}^0\|_1\leq 1$, problem \eqref{eq:ode} admits a unique nonnegative global solution $\boldsymbol{\phi}\in C([0,\,+\infty);\,\mathbb{R}^3)$ such that $\|\boldsymbol{\phi}\|_\infty\leq 1$. In addition, if $\boldsymbol{\phi}_1,\,\boldsymbol{\phi}_2$ are the solutions corresponding to initial data $\boldsymbol{\phi}_1^0,\,\boldsymbol{\phi}_2^0$, then for each $T>0$ there exists a constant $C=C(T)>0$ such that
$$ \|\boldsymbol{\phi}_2-\boldsymbol{\phi}_1\|_\infty\leq C(T)\|\boldsymbol{\phi}_2^0-\boldsymbol{\phi}_1^0\|_1 $$
in the interval $[0,\,T]$.
\end{theorem}
\begin{proof}
\begin{enumproof}
\item Let us introduce the function $\varphi(t):=1-\psi(t)$ (which, in mixture theory, identifies the free space available to be filled by some extracellular fluid) and consider the auxiliary problem given by the set of equations \eqref{eq:ode} plus $\varphi'=-\sum_\alpha\phi_\alpha'$, along with $\varphi^0:=\varphi(0)=1-\sum_\alpha\phi_\alpha^0$. Clearly, a triple $(\phi_T,\,\phi_H,\,\phi_M)$ is a solution to problem \eqref{eq:ode} if and only if the quadruple $(\phi_T,\,\phi_H,\,\phi_M,\,\varphi)$ is a solution to the auxiliary problem.
\item We put the auxiliary problem in compact form as
\begin{equation}
\begin{cases}
\dfrac{d\boldsymbol{\Phi}}{dt}=J[\boldsymbol{\Phi}], \qquad t>0 \\[0.2cm]
\boldsymbol{\Phi}(0)=\boldsymbol{\Phi}^0,
\end{cases}
\label{eq:ode-compact}
\end{equation}
where $\boldsymbol{\Phi}=(\boldsymbol{\phi},\,\varphi)$ and $J:V^4\to V^4$ is given componentwise by the right-hand sides of the differential equations in \eqref{eq:ode} plus $J_\varphi=-\sum_\alpha J_\alpha$.
Next we make the substitution $\boldsymbol{\Phi}(t)=\boldsymbol{\Psi}(t)e^{-\lambda t}$, where $\lambda>0$ will be properly selected. Due to the specific expression of $J$, the term $J[\boldsymbol{\Psi}(t)e^{-\lambda t}]$ can be given the form $I[\boldsymbol{\Psi}](t)e^{-\lambda t}$ for a suitable operator $I:V^4\to V^4$, which allows us to rewrite problem \eqref{eq:ode-compact} in terms of $\boldsymbol{\Psi}$ as
\begin{equation*}
\begin{cases}
\dfrac{d\boldsymbol{\Psi}}{dt}=I[\boldsymbol{\Psi}]+\lambda\boldsymbol{\Psi}, \qquad t>0 \\[0.2cm]
\boldsymbol{\Psi}(0)=\boldsymbol{\Phi}^0
\end{cases}
\end{equation*}
or, in mild form, as
\begin{equation*}
\boldsymbol{\Psi}(t)=\boldsymbol{\Phi}^0+\int\limits_0^t\Bigl(I[\boldsymbol{\Psi}](\tau)+\lambda\boldsymbol{\Psi}(\tau)\Bigr)\,d\tau=:G[\boldsymbol{\Psi}](t).
\end{equation*}
\item Let us look for a mild solution in the following set of admissible functions:
$$ \CMcal{A}=\{\u\in V^4\,:\,\u(t)\geq 0,\ \|\u(t)\|_1=e^{\lambda t}\ \text{for all\ } t\in[0,\,T]\}. $$
Notice that $\boldsymbol{\Psi}\in\CMcal{A}$ amounts, in particular, to $\phi_\alpha(t),\,\varphi(t)\geq 0$ with $\sum_\alpha\phi_\alpha(t)+\varphi(t)=1$ for all $t\in[0,\,T]$, thus $\sum_\alpha\phi_\alpha(t)\leq 1$, which is precisely what the saturation constraint requires of the volume ratios of the constituents of the mixture.
Any mild solution of $\boldsymbol{\Psi}(t)=G[\boldsymbol{\Psi}](t)$ is a fixed point of the operator $G$, therefore the task is to show that $G$ admits a unique fixed point in $\CMcal{A}$.
\item Owing to Assumption \ref{hp:param} and properties \eqref{eq:propgm}, \eqref{eq:propH}, if $\u(t)\geq 0$ then
\begin{align*}
G_\alpha[\u](t) &\geq \phi_\alpha^0+(\lambda-C_\alpha)\int\limits_0^t u_\alpha(\tau)\,d\tau
\quad (C_{T,H}=\delta_{T,H}+\delta'_{T,H},\ C_M=\nu_T+\nu_H), \\
G_\varphi[\u](t) &\geq \varphi^0+\left(\lambda-\sum_{\alpha=T,\,H}(\gamma_\alpha(1)\Lip{H_{\epsilon_\alpha}}
+\mu_\alpha(0)\Lip{H_{\epsilon_M}})\right)\int\limits_0^tu_\varphi(\tau)\,d\tau,
\end{align*}
hence we can choose $\lambda>0$ so large that $G[\u](t)\geq 0$ as well. If in addition $\|\u(t)\|_1=e^{\lambda t}$ then, using $I_\varphi=-\sum_\alpha I_\alpha$, we discover $\|G[\u](t)\|_1=e^{\lambda t}$. In conclusion, $\u\in\CMcal{A}$ implies $G[\u]\in\CMcal{A}$, i.e., $G$ maps $\CMcal{A}$ into itself.
\item Take now $\u,\,\v\in\CMcal{A}$ and observe that
$$ \|G[\u](t)-G[\v](t)\|_1\leq\int\limits_0^t\left(\|I[\u](\tau)-I[\v](\tau)\|_1+
\lambda\|\u(\tau)-\v(\tau)\|_1\right)\,d\tau. $$
Lipschitz continuity of $\gamma_\alpha,\,\mu_\alpha,\,H_{\epsilon_\alpha}$ along with $H_{\epsilon_\alpha}(s)\leq 1$ and properties \eqref{eq:propgm}, \eqref{eq:propH} imply that there exists $C>0$, independent of $T$, such that $\vert I_\alpha[\u](t)-I_\alpha[\v](t)\vert\leq C\|\u(t)-\v(t)\|_1$ for each $\alpha=T,\,H,\,M$. Since $I_\varphi=-\sum_\alpha I_\alpha$, an analogous relationship also holds for $I_\varphi$, hence finally $\|G[\u]-G[\v]\|_\infty\leq T(C+\lambda)\|\u-\v\|_\infty$, which proves that $G$ is Lipschitz continuous on $\CMcal{A}$.
\item From the above calculations we see that we can choose $T>0$ so small that $G$ is a contraction on $\CMcal{A}$. Since $\CMcal{A}$ is a closed subset of $V^4$, the Banach fixed point theorem asserts that $G$ has a unique fixed point $\boldsymbol{\Psi}\in\CMcal{A}$. Therefore, the auxiliary problem \eqref{eq:ode-compact} admits a unique nonnegative local solution $\boldsymbol{\Phi}\in V^4$ such that $\|\boldsymbol{\Phi}(t)\|_1=1$. The first three components of $\boldsymbol{\Phi}$ form the unique nonnegative solution $\boldsymbol{\phi}\in V^3$ to problem \eqref{eq:ode} with $\|\boldsymbol{\phi}(t)\|_1\leq 1$.
Next, taking $\boldsymbol{\phi}(T)$ as new initial condition and observing that it matches all the hypotheses satisfied by $\boldsymbol{\phi}^0$, we uniquely extend $\boldsymbol{\phi}$ over the time interval $[T,\,2T]$ in such a way that $\boldsymbol{\phi}(t)\geq 0$ and $\|\boldsymbol{\phi}(t)\|_1\leq 1$ for all $t\in[0,\,2T]$. Proceeding inductively, we ultimately end up with a unique nonnegative global solution $\boldsymbol{\phi}\in C([0,\,+\infty);\,\mathbb{R}^3)$, for which the estimate $\|\boldsymbol{\phi}\|_\infty\leq 1$ easily follows from $\|\boldsymbol{\phi}(t)\|_1\leq 1$ for all $t\geq 0$.
\item Let now $\boldsymbol{\Psi}_1,\,\boldsymbol{\Psi}_2\in\CMcal{A}$ be the two mild solutions corresponding to initial data $\boldsymbol{\Phi}_1^0,\,\boldsymbol{\Phi}_2^0$. Using the previous estimates we discover
$$ \|\boldsymbol{\Psi}_2(t)-\boldsymbol{\Psi}_1(t)\|_1\leq\|\boldsymbol{\Phi}_2^0-\boldsymbol{\Phi}_1^0\|_1+
(C+\lambda)\int\limits_0^t\|\boldsymbol{\Psi}_2(\tau)-\boldsymbol{\Psi}_1(\tau)\|_1\,d\tau, $$
whence, invoking Gronwall's inequality,
$$ \|\boldsymbol{\Psi}_2(t)-\boldsymbol{\Psi}_1(t)\|_1\leq\left[1+(C+\lambda)te^{(C+\lambda)t}\right]\|\boldsymbol{\Phi}_2^0-\boldsymbol{\Phi}_1^0\|_1. $$
Returning to $\boldsymbol{\Phi}_1,\,\boldsymbol{\Phi}_2$ and observing that $\|\boldsymbol{\Phi}_2^0-\boldsymbol{\Phi}_1^0\|_1\leq 2\|\boldsymbol{\phi}_2^0-\boldsymbol{\phi}_1^0\|_1$ we finally get the desired estimate of continuous dependence, after taking the maximum of both sides for $t\in[0,\,T]$. \qedhere
\end{enumproof}
\end{proof}
\begin{theorem}[Regularity]
If the functions $\gamma_\alpha,\,\mu_\alpha,\,H_{\epsilon_\alpha}$ are of class $C^k$ on $[0,\,1]$ then the solution $\boldsymbol{\phi}$ is of class $C^{k+1}$ on $[0,\,+\infty)$.
\end{theorem}
\begin{proof}
According to Theorem \ref{theo:wellpos}, the $\phi_\alpha$'s are continuous on $[0,\,+\infty)$, therefore the right-hand sides of the differential equations in \eqref{eq:ode} define continuous functions on $[0,\,+\infty)$. It follows that the $\phi_\alpha'$'s are continuous as well, i.e., the solution $\boldsymbol{\phi}$ is actually $C^1$ on $[0,\,+\infty)$. If $\gamma_\alpha,\,\mu_\alpha,\,H_{\epsilon_\alpha}$ are of class $C^k$ then, by differentiating the ODEs in \eqref{eq:ode} $k$ times, this reasoning can be applied inductively to discover $\boldsymbol{\phi}\in C^{k+1}([0,\,+\infty);\,\mathbb{R}^3)$.
\end{proof}
\subsubsection*{Stability of the equilibrium configurations}
Next we study the equilibria of model \eqref{eq:ode}. It is immediately seen that $\phi_T=\phi_H=0$ gives rise to an equilibrium solution for any $\phi_M\in[0,\,1]$, corresponding to the situation in which all cells have died, leaving some ECM behind. In order to investigate nontrivial equilibrium configurations, we first consider the two important sub-cases in which either $\phi_T=0$ or $\phi_H=0$, but $\phi_T,\,\phi_H$ do not vanish at the same time. The former will be referred to as the fully physiological case, the latter as the fully pathological one. In the following, $\phi_\alpha$ will be the nonzero volume ratio for either $\alpha=T$ or $\alpha=H$, meaning that the other one is identically zero. For the sake of simplicity, let us fix $\psi_M=1$ and examine the case $\phi_\alpha+\phi_M\leq 1-\eta$ for some arbitrarily small $\eta>0$, in such a way that, choosing $\epsilon_M<\eta$, we have the simplification $H_{\epsilon_M}(\psi_M-\phi_\alpha-\phi_M)=1$.
Suppose that the function $\mu_\alpha(s)-\nu_\alpha s$ has exactly one zero, say $s=M_\alpha$. Since, for physiological reasons, we must further have $\mu_\alpha(\psi_\alpha)<\nu_\alpha\psi_\alpha$, it follows that $M_\alpha\in(0,\,\psi_\alpha)$. In this case, there is one nontrivial equilibrium given by
\begin{equation}
\phi_M=M_\alpha, \qquad \phi_\alpha=\psi_\alpha-M_\alpha-
H_{\epsilon_\alpha}^{-1}\left(\frac{\delta_\alpha+
\delta'_\alpha H_{\epsilon_M}(m_\alpha-M_\alpha)}{\gamma_\alpha(M_\alpha)}\right),
\label{eq:equilH}
\end{equation}
which is readily checked to be stable. Notice that the function $H_{\epsilon_\alpha}^{-1}(s)$ is well defined for $s\in(0,\,1)$.
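As a sanity check, the equilibrium \eqref{eq:equilH} can be evaluated numerically; with the piecewise-linear mollifier one has $H_{\epsilon_\alpha}^{-1}(s)=\epsilon_\alpha s$ on $(0,\,1)$. The sketch below uses illustrative functional forms and parameter values.
\begin{verbatim}
# Sketch: nontrivial equilibrium (eq:equilH) for one population alpha.
# Functional forms and parameter values are illustrative assumptions.
from scipy.optimize import brentq

mu    = lambda s: 0.5 - 0.5 * s          # nonincreasing matrix production
gamma = lambda s: 1.0 * s                # proliferation rate
nu, delta, deltap = 1.0, 0.05, 0.3
psi_a, m_a, eps = 0.8, 0.2, 0.01

M = brentq(lambda s: mu(s) - nu * s, 0.0, psi_a)   # unique zero M_alpha
H_M = min(max((m_a - M) / eps, 0.0), 1.0)          # H_{eps_M}(m_alpha - M_alpha)
# piecewise-linear mollifier => H^{-1}(s) = eps * s on (0, 1)
phi_a = psi_a - M - eps * (delta + deltap * H_M) / gamma(M)
\end{verbatim}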
The trivial equilibrium, in which also $\phi_\alpha=0$, can essentially be reached in two situations. The first one is when $\phi_M$ is initially too small, so that the growth rate of the cells is lower than the apoptotic rate and anoikis occurs. This corresponds to initial conditions located in the lower-left corner of the phase portrait illustrated in Fig. \ref{fig:phaseport}, left. The equation $\phi_\alpha=\phi_\alpha(\phi_M)$ of the curve delimiting this basin of attraction in the phase space is obtained by integrating
\begin{equation}
\frac{d\phi_\alpha}{d\phi_M}=\frac{\gamma_\alpha(\phi_M)H_{\epsilon_\alpha}(\psi_\alpha-\phi_\alpha-\phi_M)
-\delta_\alpha-\delta'_\alpha H_{\epsilon_M}(m_\alpha-\phi_M)}{\mu_\alpha(\phi_M)-\nu_\alpha\phi_M}
\label{eq:ode-phaseport}
\end{equation}
with the condition $\phi_\alpha(\phi_{M\alpha}^\star)=0$, $\phi_{M\alpha}^\star\in(0,\,1)$ being the smaller root of the equation $\Gamma_\alpha(\phi_M)=0$ with $\phi_T=\phi_H=0$, cf. Eq. \eqref{eq:Gamma}, characterized by $\Gamma_\alpha'(\phi_{M\alpha}^\star)>0$. This region does not exist if $\phi_{M\alpha}^\star<0$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth,clip]{phaseport.pdf} \qquad
\includegraphics[width=0.3\textwidth,clip]{phaseport3D.pdf}
\end{center}
\caption{Left: phase portrait in the fully physiological or pathological case. Right: phase portrait for the full model.}
\label{fig:phaseport}
\end{figure}
The second case is when $\phi_M$ is initially too large, namely the ECM is overly dense and cells are so compressed that the growth rate is again lower than the apoptotic rate because $H_{\epsilon_\alpha}(\psi_\alpha-\phi_\alpha-\phi_M)\approx 0$. This corresponds to initial conditions located in the lower-right corner of the phase portrait depicted in Fig. \ref{fig:phaseport}, left. The curve delimiting the basin of attraction is again obtained by integrating Eq. \eqref{eq:ode-phaseport}, now with the condition $\phi_\alpha(\phi_{M\alpha}^{\star\star})=0$, $\phi_{M\alpha}^{\star\star}\in(0,\,1)$ being the larger root of the equation $\Gamma_\alpha(\phi_M)=0$ with $\phi_T=\phi_H=0$, so that $\phi_{M\alpha}^\star\leq\phi_{M\alpha}^{\star\star}$. In this case $\Gamma_\alpha'(\phi_{M\alpha}^{\star\star})<0$. The region does not exist if $\phi_{M\alpha}^{\star\star}>1$.
In order to get the complete picture, we further have to investigate whether a nontrivial equilibrium with $\phi_T,\,\phi_H>0$ may exist. For this, we recall that the duplication/death rates $\Gamma_\alpha$ are constructed so as to match the biological requirement $\Gamma_T(\phi_M,\,\psi)>\Gamma_H(\phi_M,\,\psi)$ for all $(\phi_M,\,\psi)\in[0,\,1]\times[0,\,1]$, cf. Eq. \eqref{eq:Gamma-ineq}. As a consequence, we see that it is impossible for the right-hand sides of the first two equations of problem \eqref{eq:ode} to vanish simultaneously at an equilibrium point $(\phi_T,\,\phi_H,\,\phi_M)$ with $\phi_T,\,\phi_H\ne 0$, for this would imply that there exist $\phi_M\in[0,\,1]$ and $\psi\in(0,\,1]$ such that $\Gamma_\alpha(\phi_M,\,\psi)=0$ for both $\alpha=T,\,H$, which contradicts the above-mentioned Eq. \eqref{eq:Gamma-ineq}.
Hence, the only possible equilibria of the system are those arising in the fully physiological or pathological situation. In addition, condition \eqref{eq:Gamma-ineq} makes the nontrivial physiological equilibrium unstable and the nontrivial pathological one stable, as can be seen from the three-dimensional phase portrait shown in Fig. \ref{fig:phaseport}, right.
\vskip0.3cm
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth,clip]{HMevolution.pdf} \qquad
\includegraphics[width=0.3\textwidth,clip]{THMevolution.pdf}
\caption{Left: formation of normal tissue in the physiological case. Right: formation of hyperplastic fibrotic tissue due to a small initial amount of tumor cells.}
\label{fig:evolution}
\end{center}
\end{figure}
Figure \ref{fig:evolution} (left) shows an example of a temporal evolution of the system giving rise to the formation of normal tissue in the fully physiological case. The initial death of healthy cells is due to anoikis: indeed, cells are seeded in an environment completely devoid of ECM, which they must build quickly enough. The decrease stops as soon as the amount of ECM produced is such that $\gamma_H(\phi_M)H_{\epsilon_H}(\psi_H-\psi)\geq\delta_H+\delta'_HH_{\epsilon_M}(m_H-\phi_M)$, then the number of cells starts increasing, eventually leading to the stationary solution predicted by Eq. \eqref{eq:equilH} for $\alpha=H$. Conversely, if the initial amount of cells is insufficient to produce ECM rapidly enough then the entire population will die.
Figure \ref{fig:evolution} (right) gives instead an example of a complete temporal history ending with the formation of hyperplastic and fibrotic tissue. Although the initial conditions $\phi_H^0,\,\phi_M^0$ coincide with the equilibrium values reached after the formation of normal tissue, the presence of a small amount of tumor cells ($\phi_T^0>0$) dramatically changes the outcome of the evolution, leading to a full depletion of healthy cells. This evolution can be duly compared with that shown in Fig. \ref{fig:spatinhomog}, which refers to the simulation of the full spatial and temporal model in one space dimension, cf. Eqs. \eqref{eq:celleq}, \eqref{eq:ecmeq}. Starting from the same initial conditions as in the spatially homogeneous case, the presence of a small amount of tumor cells at the beginning generates a traveling wave, which progressively depletes healthy cells and produces further fibrotic matrix while invading the normal tissue.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.32\textwidth,clip]{healthy.pdf}
\includegraphics[width=0.32\textwidth,clip]{tumor.pdf}
\includegraphics[width=0.32\textwidth,clip]{matrix.pdf}
\caption{Traveling wave solutions for $\phi_H$ (blue), $\phi_T$ (red), $\phi_M$ (green) in the full spatial and temporal evolution.}
\label{fig:spatinhomog}
\end{center}
\end{figure}
\subsection{The QPE population to date}
J0249 represents the fifth member of a growing QPE population. Here we consider the relationships between various physical and observed flaring properties (quiescent $0.5-2$ keV luminosity $L_X$, flare amplitude, flare duration, and flare recurrence time $t_{\mathrm{rec}}$) of QPE sources.
Fig.~\ref{fig:qpepop} shows scaling relationships of mean $t_{\mathrm{rec}}$ with mean duration and mean amplitude; data are taken directly from the original discovery papers \citep{Miniutti19,Giustini20,Arcodia21}, with error bars provided where possible. We note that these quantities, particularly amplitude, can be highly variable even within individual QPE sources: in the most extreme case of eRO-QPE1, the amplitudes of different flares can differ by a factor of up to 10. Within sources showing high variability of QPE amplitude (RX J1301.9+2747, eRO-QPE1), a shorter $t_{\mathrm{rec}}$ is associated with a larger amplitude, the inverse of the trend seen between separate QPE sources. QPE sources also show alternating ``long-short'' recurrence times associated with smaller and larger amplitude flares, respectively.
Recent work has explored the possibility of QPEs being generated by accretion from orbiting bodies such as extreme-mass ratio inspirals/EMRIs \citep{Arcodia21, Metzger21} or by the partial tidal disruption of a star \citep{King20}. These scenarios provide compelling arguments for the quasi-periodic nature of QPEs by allowing for some residual orbital eccentricity to modulate the recurrence time ($t_{\mathrm{rec}}$) and duration of the flares. They would also account for the inverse relationship seen between the X-ray luminosity $L_X$ and characteristic QPE timescales in Fig.~\ref{fig:qpepop}{\color{WildStrawberry}c}, as a larger accretion rate (which scales closely with $L_X$ in this low-mass SMBH regime) leads to a thicker disk, and thus a shorter $t_{\mathrm{rec}}$ and duration.
J0249 falls near the shorter end of QPE characteristic timescales, closely matching the durations and recurrence times seen in eRO-QPE2, in spite of its host galaxy sharing the most physical similarities with GSN 069, a comparatively intermediate QPE source. eRO-QPE1 is a conspicuous outlier compared with the four other QPE hosts in terms of characteristic timescales, quiescent luminosity, flare amplitude, and blackbody temperature. Discovery of further QPE hosts in the intermediate regime would provide important information on the scaling relationships illustrated in Fig.~\ref{fig:qpepop}.
\subsection{The UV variability}
J0249 is unique among QPE sources in that it shows evidence for dips in the UV around the same time as the X-ray flares. Due to the low number of observed flares in J0249, we cannot claim a definitive correlation with the UV, but the light curves are suggestive. \citet{Arcodia21} showed that in the two eROSITA QPEs, there was no corresponding variability in the optical/UV. Similarly, no significant UV activity was seen in GSN~069 \citep{Miniutti19}, though a re-analysis of RX J1301.9+2747 from the May 2019 observation did show that one of the three X-ray flares is accompanied by a slight dip in the UV (but with a much smaller amplitude than in J0249).
\citet{Arcodia21} suggest that the lack of UV/optical variability may be due to particularly small accretion disks in these two previously quiescent galaxies. Coincident UV activity in J0249 may instead suggest that the QPE phenomenon also occurs on larger scales of $\sim$thousand gravitational radii from the black hole. Or perhaps, since the host allows for the presence of an AGN, J0249 may have had a large pre-existing accretion disk that couples to the QPE phenomena at small scales. Future observations of QPEs in both quiescent galaxies and known AGN will elucidate these findings.
\section{Introduction} \label{sec:intro}
\input{introduction}
\section{Observations and Methods} \label{sec:methods}
\input{methods}
\section{Results} \label{sec:results}
\input{results}
\section{Discussion} \label{sec:discussion}
\input{discussion}
\section{Conclusion} \label{sec:conclusion}
\input{conclusion}
\subsection{Quasi-periodic Automated Transit Search (\texttt{QATS})}
Our algorithm of choice for searching through the {\it XMM-Newton}\ archive was the Quasi-periodic Automated Transit Search (\texttt{QATS}). The algorithm, originally developed in \cite{Carter13}, was designed to find exoplanet transit timing variations (TTVs) in {\it Kepler}\ optical data. \texttt{QATS}\ is a maximum-likelihood algorithm which models a candidate transit at each feasible cadence in a light curve, then compares the $\chi^2$ fit to a polynomial continuum representing the baseline, identifying quasi-periodic signals where the transit fit outperforms the continuum. Apart from TTVs, \texttt{QATS}\ has also been used to find ``inverted transit'' systems, e.g. self-lensing binary stars \citep{Kruse14} showing quasi-periodic symmetric brightenings rather than dimmings, similar to the behavior of QPEs. \texttt{QATS}\ thus provides an attractive option for en-masse triaging of {\it XMM-Newton}\ archival light curves.
We made use of data from the 4XMM {\it XMM-Newton}\ serendipitous source catalog compiled by the 10 institutes of the {\it XMM-Newton}\ Survey Science Center (SSC) selected by ESA \citep{Webb20}. Preprocessed 0.2-12 keV light curves from the {\it XMM-Newton}\ SSC pipeline were retrieved directly from the web interface\footnote{\href{http://xmm-catalog.irap.omp.eu}{http://xmm-catalog.irap.omp.eu}} developed by \cite{Zolotukhin17}. In total, this consisted of 302,773 broadband light curves taken from 11,647 observations made during 2000-2019. We then ran \texttt{QATS}\ on all of these light curves and sorted them by the \texttt{QATS}\ merit function $S$, a quantity derived from the Gaussian log-likelihood. The \texttt{QATS}\ merit function can be interpreted through its relation to the signal-to-noise ratio, $S \equiv \sigma \times (S/N)_{\mathrm{total}}$. High-performing signals were then vetted by eye, and the corresponding observations were reduced and analyzed manually.
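To make the triage statistic concrete, the sketch below computes the per-cadence best-fit amplitude S/N of a fixed flare template against a constant baseline. This is only the single-event matched filter that the \texttt{QATS}\ merit function aggregates, not the full dynamic-programming search, and the array names are placeholders.
\begin{verbatim}
# Sketch: per-cadence matched-filter S/N underlying the QATS merit function
# S = sigma * (S/N)_total. Not the full QATS search; arrays are placeholders.
import numpy as np

def event_snr(rate, err, template, baseline):
    """S/N of the best-fit template amplitude at each starting cadence."""
    m = len(template)
    snr = np.empty(len(rate) - m + 1)
    for i in range(len(snr)):
        r, e = rate[i:i + m] - baseline, err[i:i + m]
        snr[i] = (np.sum(template * r / e**2)
                  / np.sqrt(np.sum(template**2 / e**2)))
    return snr
# Over K detected events, a QATS-like merit is sigma * sqrt(sum_k snr_k**2).
\end{verbatim}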
\subsection{Data Reduction}
``Promising'' candidates from our \texttt{QATS}\ search pipeline (i.e. showing one or multiple high-amplitude variability events separated by stable quiescent periods in their broadband light curves) were subsequently reduced and analyzed using the {\it XMM-Newton}\ Science Analysis System (SAS) v18.0.0, following the standard SAS threads recommended by the {\it XMM-Newton}\ Science Operations Center. Spectral fitting of EPIC-pn data was performed using HEASoft v6.28 with Xspec v12.11.1. In almost all cases, false-positives were vetted out during this stage (roughly 40 total), for reasons including (I) lack of spectral hardening during flare states; (II) energy-resolved light curves showing flares not in agreement with the QPE energy dependence, i.e. higher amplitude and smaller duration in higher energy bands, or flares not isolated to a narrow energy range; or (III) excessive soft proton background flaring indicating that observed flux variability is unlikely to be confined to a central source. Common false-positive sources include X-ray binaries and stochastic variability from AGNs/quasars. Importantly, publicly archived {\it XMM-Newton}\ observations of GSN 069 and RX J1301.9+2747 exhibiting QPE flares were recovered by this process. Ultimately, J0249 was the only novel candidate which passed all of our false-positive tests.
The {\it Swift}\ data used to produce its long-term light curve (Fig.~\ref{fig:lc}) were analyzed with the online processing tool maintained by the University of Leicester\footnote{\href{https://www.swift.ac.uk/user_objects}{https://www.swift.ac.uk/user\_objects}}.
\subsection{Observations of J0249}
After the initial detection from the {\it XMM-Newton}\ Slew Survey, J0249 was observed again in 2006 with an {\it XMM-Newton}\ pointed observation lasting 9.9 ks in the EPIC-PN detector and 11.7 ks in the EPIC-MOS detectors (OBSID: 0411980401). This is the observation initially flagged by the QATS algorithm.
The source was observed 15 times from 2006-2017 with the \textit{Swift} XRT, revealing a gradual long-term dimming by over an order of magnitude and generally following a $t_{yrs}^{-5/3}$ behavior (Fig.~\ref{fig:lc}), as expected for the fallback rate of a TDE \citep{Rees88}. We also attempted to fit the dimming by a $t_{yrs}^{-9/4}$ model as expected of the fallback rate from a partial tidal disruption event \citep{Miles20}, but this fit was considerably poorer.
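The comparison of the two decay laws amounts to a pair of one-parameter power-law fits, sketched below; \texttt{t\_yr}, \texttt{flux}, and \texttt{ferr} are placeholders for the epochs (in years since disruption) and XRT fluxes.
\begin{verbatim}
# Sketch: comparing TDE fallback decay laws on the long-term light curve.
# t_yr, flux, ferr are placeholder arrays of epochs (years) and XRT fluxes.
import numpy as np
from scipy.optimize import curve_fit

for n in (5.0 / 3.0, 9.0 / 4.0):   # full vs. partial tidal disruption
    model = lambda t, A: A * t**(-n)
    popt, _ = curve_fit(model, t_yr, flux, sigma=ferr, p0=[flux[0]])
    chi2 = np.sum(((flux - model(t_yr, *popt)) / ferr) ** 2)
    print(f"n = {n:.2f}: chi2 = {chi2:.1f}")
\end{verbatim}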
After our initial discovery of the source as a QPE candidate, we requested a 5 ks \textit{Swift} Target-of-Opportunity observation, which was carried out in June 2021 and resulted in an upper limit consistent with the observed dimming. We then requested a longer 33.8 ks {\it XMM-Newton}\ Director's Discretionary Time (DDT) observation that was performed on August 6, 2021 (OBSID: 0891800601), which revealed that the flares were no longer present but the source was still detectable. As only 1.5 flares in total were detected from the source, the classification of J0249 as a true QPE source is less clear than for previous ones. However, given that the characteristics of the flares and quiescence align closely with known QPEs, and that these flares are distinct from those seen in other channels of AGN variability, we refer to the source as a QPE candidate.
\subsection{Light curve analysis}
During the 2006 observation, 1.5 symmetric flares separated by 9 ks and confined almost entirely to the 0.8-2 keV band were detected in the EPIC-PN, MOS1, and MOS2 \citep{Strueder01, Turner01} light curves (Fig.~\ref{fig:lc}, Fig.~\ref{fig:eresolved}). The 0.5-2 keV X-ray luminosity increased from a quiescent level of $L_{0.5-2}=1.6\times 10^{41}$ erg s$^{-1}$ to a flare level of $L_{0.5-2}=3.4\times 10^{41}$ erg s$^{-1}$. Assuming a black hole mass of $8.5 \times 10^4 M_\odot$, the quiescent Eddington ratio $R_{\mathrm{Edd}}$ is $\approx 0.13$. Correspondingly, \cite{Wevers19} found an average peak $R_{\mathrm{Edd}}$ of 0.27 among their sample of soft X-ray detected TDE candidates including J0249.
Similar to the four other QPE sources, the light curve shows a relatively stable quiescent flux apart from these rapid variability events. By the August 2021 {\it XMM-Newton}\ pointed observation, the dimming of the source had resulted in a flux decrease of over an order of magnitude. While the 2021 {\it XMM-Newton}\ observation was designed to be long enough to catch 2--3 QPE flares, there is no longer significant X-ray variability, meaning that within the 15 years after the original QPE detections, the phenomenon has ceased (Fig.~\ref{fig:lc}).
In order to quantify the QPE duration and recurrence time, we model the light curves of the 2006 {\it XMM-Newton}\ observation using a constant baseline equal to the mean quiescent count rate, and represent the symmetric QPE flares using Gaussians. The QPE amplitudes, peaking times and durations correspond to the Gaussian amplitude, centroid and FWHM, respectively. For the first flare detected in the 2006 observation, as we probe higher energy bands from 0.3-1.3 keV, the amplitudes increase greatly, durations generally decrease moderately, and peaking times generally decrease slightly (Fig.~\ref{fig:eresolved}). We refrain from quoting flare amplitudes in the 1.3-2 keV band due to poor S/N. As only part of the second flare was seen by all three cameras, it is unclear what its amplitude, peaking time, or duration were. Above 2 keV, the background dominates, and no flaring behavior is seen. The energy dependence of the flare properties aligns with trends seen in the other QPE sources.
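A minimal version of this fit, with placeholder arrays \texttt{t}, \texttt{rate}, and \texttt{err} standing for a binned EPIC light curve, is sketched below.
\begin{verbatim}
# Sketch: constant baseline + Gaussian flare fit used to measure the QPE
# amplitude, peaking time and duration (FWHM). t, rate, err are placeholders.
import numpy as np
from scipy.optimize import curve_fit

FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))   # FWHM = 2.355 * sigma

def flare(t, base, amp, t0, sigma):
    return base + amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

p0 = [rate.mean(), rate.max() - rate.mean(), t[np.argmax(rate)], 500.0]
popt, pcov = curve_fit(flare, t, rate, p0=p0, sigma=err)
base, amp, t0, sigma = popt
print(f"amplitude = {amp / base:.2f} x quiescence,"
      f" duration = {FWHM * sigma:.0f} s")
\end{verbatim}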
The UVW1 filter of the {\it XMM-Newton}\ optical monitor (OM) instrument \citep{Mason01} also shows flux variability during the 2006 pointed observation, though it is not strictly coincident with the soft X-ray flares, perhaps due to the long UVW1 exposure time compared to the X-ray time binning. Lower flux states are seen shortly preceding and following the first X-ray flare, with a third lower flux exposure aligned with the beginning of the second flare. During the 2021 observation, the first two OM exposures used the U filter, resulting in two detections of the source; however, subsequent OM exposures used the UVW2 filter and did not detect J0249.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{spec.pdf}
\caption{Flux-resolved EPIC-PN spectra of J0249 for both {\it XMM-Newton}\ pointed observations, along with \texttt{tbabs$\times$ztbabs$\times$(diskbb+powerlaw)} model fits (Table~\ref{tab:specfit}). Error bars are plotted at 90\% confidence. Note the different y-axes for the two observations resulting from the large flux difference of J0249 over the elapsed period. The 2006 observation is separately fit during the quiescent and flaring phases with $N_H(z)$ and blackbody temperature \& normalization tied. The observed spectral variability in 2006 closely matches what is seen in other QPE candidates, i.e. fast transitions from a disk-dominated quiescent phase to a state with harder emission from additional hot component.}
\label{fig:spec}
\end{figure*}
\subsection{Spectral analysis}
We perform flux-resolved spectral analysis on the 2006 0.3-2 keV EPIC-PN data (because of its higher $S/N$ and hence time resolution compared to the MOS detectors). We divide the data into flare and quiescent states using a threshold of 0.4 cts sec$^{-1}$ to separate low and high flux. For the 2006 observation we group the background-subtracted spectra with a minimum of 1 count per energy bin, and use the Cash statistic \citep{Cash79} to fit various models to the data. All spectral analysis results are reported in Table~\ref{tab:specfit}. The most notable result from spectral fitting is that, similar to other QPEs, we see a hardening of the X-ray spectrum during flaring states.
We also analyze the spectra of the 2021 0.3-2 keV EPIC-PN data, but do not resolve them into separate high- and low-flux phases due to the lack of variability following the QPE ``turn-off'' between 2006 and 2021. The decrease in flux of J0249 to the level of $\sim 4\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ makes constraining spectral parameters difficult due to low S/N, but for completeness we nonetheless report the results of spectral fitting here. As with the 2006 data, we group with 1 count per energy bin and fit using the Cash statistic.
In all fits, we model galactic line-of-sight absorption using the \texttt{tbabs} Xspec model \citep{Wilms00}. \cite{Strotjohann16} found that statistically acceptable fits of J0249 favor high intrinsic $N_H$, which we model using \texttt{ztbabs}. For our different fits, we test various combinations of blackbodies (\texttt{diskbb} and \texttt{bbody}) and powerlaws.
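The common fitting setup can be transcribed in PyXspec roughly as follows, shown here for the simplest absorbed-disk combination discussed next; the file name and parameter values are placeholders, and the actual analysis involves more careful data handling.
\begin{verbatim}
# Minimal PyXspec sketch of the common setup; file name and parameter
# values are placeholders, not the values used in the actual analysis.
import xspec

xspec.AllData("pn_quiescent.pha")        # grouped EPIC-pn spectrum
xspec.AllData.ignore("**-0.3 2.0-**")    # restrict to 0.3-2 keV
xspec.Fit.statMethod = "cstat"           # Cash statistic for low counts

m = xspec.Model("tbabs*ztbabs*diskbb")
m.TBabs.nH = 0.03                        # Galactic column (1e22 cm^-2), assumed
m.TBabs.nH.frozen = True
m.zTBabs.Redshift = 0.02                 # placeholder source redshift
xspec.Fit.perform()
\end{verbatim}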
Model 1, \texttt{tbabs$\times$ztbabs$\times$diskbb} (Tab.~\ref{tab:specfit}), is the simplest model for which we obtain reasonable spectral fits, with a reduced $\chi^2 \leq 1.31$ for 325 and 145 degrees of freedom corresponding to 2006 and 2021 spectra, respectively. We allow for the blackbody temperature and normalization to vary between quiescent and flare periods in the 2006 observation, but require that they have the same intrinsic $N_H$. Interestingly, the change in disk temperature during the high-flux state does not follow $L \propto T^4$ emission as expected from the Stefan-Boltzmann Law; this pattern is also seen in other QPE sources. For example, the Model 1 fit of J0249 has $(kT_{flare}/kT_{quiescent})^4 = 6.2$, but only $L_{flare}/L_{quiescent} = 2.08$. This motivates the use of a more complex model, where changes are not simply due to the disk blackbody.
We test a model where the disk blackbody temperature and normalization remain constant from quiescent to flare periods, but where the QPE flares are due to the presence of an additional harder component. In Model 2, we model this harder component as a powerlaw: \texttt{tbabs$\times$ztbabs$\times$(diskbb+powerlaw)} (Fig. ~\ref{fig:spec}). During the quiescent phase, the normalization of the powerlaw component is consistent with zero, favoring a pure disk spectrum, whereas during the flare state the addition of this component results in a considerably better fit statistic, with a reduced $\chi^2 < 1.13$. The photon index of the additional powerlaw is extremely soft, and does not behave like a regular AGN hot corona ($\Gamma \sim 1.8$).
We then test whether the harder component is better described by a second, hotter blackbody, rather than a soft powerlaw: \texttt{tbabs}$\times$\texttt{ztbabs}$\times$(\texttt{diskbb}+\texttt{bbody}) (Model 3). Again, as in Model 2, we keep the lower temperature disk blackbody tied between quiescent and flaring periods. Comparing to Model 2, Model 3 provides a better statistical fit for the 2006 observation, but a slightly worse fit for the 2021 observation. Additionally, the secondary blackbody normalization during quiescence is also consistent with zero, confirming the finding from Model 2 that a pure disk spectrum is the favored model during the quiescent phase.
Model 1 is then repeated using \texttt{zxipcf} in place of \texttt{ztbabs} to explore the effect of assuming ionized instead of neutral gas. Part of the motivation behind this model is the residuals around 0.7-0.8 keV in the previous models, which have been speculated to be due to OVII or OVIII absorption features from an outflow \citep{Brandt97}. Similar absorption-like features have been seen in the soft X-ray spectra of TDEs (e.g. ASASSN-14li; \citealt{kara18}). For this fit, we leave the internal column density $N_H(z)$ free to vary and fix $\log(\xi)=2.95$, a choice motivated by \cite{Strotjohann16}. The model performs worse than the previous three (reduced $\chi^2 \leq 1.52$), leaving the ionization features of the internal gas in J0249 uncertain. Attempting a fit of multiple emitters alongside the disk using ionized gas (e.g. \texttt{tbabs$\times$zxipcf$\times$(diskbb+bbody)}) does not result in significant improvement in the fit statistic.
We conclude from the fitting results that the high- and low-flux spectra are best described by the same non-variable disk component, while the QPE flares are due to an additional hotter component described by either a second blackbody emitter or a powerlaw behavior. This additional component does not behave like a regular AGN hot corona with $\Gamma \sim 1.8$, but is instead much softer, as is typical of the AGN soft excess. Finally, the harder component persists in 2021, but now the photon index is harder ($\Gamma=1.8\pm1.6$), which is more aligned with expectations of AGN hot coronae. The late time hard tail (perhaps the late time appearance of a hot corona) has been seen in a few X-ray TDEs (e.g. \citealt{kara18}; \citealt{saxton20} for a review) and Changing-Look AGN (e.g. \citealt{ricci20}).
\section{Introduction}
In recent years, the emergence of seq2seq models \cite{kalchbrenner2013recurrent,sutskever2014sequence,ChoMGBSB14}
has revolutionized the field of MT by replacing traditional phrase-based approaches with
neural machine translation (NMT) systems based on the encoder-decoder paradigm. In the first
architectures that surpassed the quality of phrase-based MT, both
the encoder and decoder were implemented as Recurrent Neural Networks
(RNNs), interacting via a soft-attention mechanism \cite{BahdanauCB15}.
The RNN-based NMT approach, or RNMT, was quickly
established as the de-facto standard for NMT,
and gained rapid adoption into large-scale systems in industry,
e.g.~Baidu \cite{DBLP:journals/corr/ZhouCWLX16},
Google \cite{DBLP:journals/corr/WuSCLNMKCGMKSJL16},
and Systran \cite{DBLP:journals/corr/CregoKKRYSABCDE16}.
Following RNMT, convolutional neural network based approaches
\cite{LeCun:1998:CNI:303568.303704} to NMT have recently drawn research
attention due to their ability to
fully parallelize training to take advantage of modern fast computing devices,
such as GPUs and Tensor Processing Units (TPUs) \cite{DBLP:journals/corr/JouppiYPPABBBBB17}.
Well known examples are ByteNet
\cite{DBLP:journals/corr/KalchbrennerESO16} and ConvS2S
\cite{DBLP:journals/corr/GehringAGYD17}.
The ConvS2S model was shown to outperform the original RNMT architecture in terms
of quality, while also providing greater training speed.
Most recently, the Transformer model
\cite{DBLP:journals/corr/VaswaniSPUJGKP17}, which is based solely on
a self-attention mechanism \cite{Parikh2016ADA} and feed-forward connections, has further advanced
the field of NMT, both in terms of translation quality and speed of convergence.
In many instances, new architectures are accompanied by a novel set of
techniques for performing training and inference that have been carefully
optimized to work in concert. This `bag of tricks' can be crucial to the
performance of a proposed architecture, yet it is typically under-documented and
left for the enterprising researcher to discover in publicly released code (if
any) or through anecdotal evidence. This is not simply a problem for
reproducibility; it obscures the central scientific question of how much of the
observed gains come from the new architecture and how much can be
attributed to the associated training and inference techniques. In
some cases, these new techniques may be broadly applicable to other
architectures and thus constitute a major, though implicit, contribution
of an architecture paper. Clearly, they need to be considered in order
to ensure a fair comparison across different model architectures.
In this paper, we therefore take a step back and look at which
techniques and methods contribute
significantly to the success of recent architectures, namely ConvS2S and
Transformer, and explore applying
these methods to other architectures, including RNMT models.
In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+,
that significantly outperforms all individual architectures in our setup.
We further introduce new
architectures built with different components borrowed from
RNMT+, ConvS2S and Transformer.
In order to ensure a fair setting for comparison, all architectures
were implemented in the same framework, use the same
pre-processed data and apply no further post-processing as this may
confound bare model performance.
Our contributions are three-fold:
\begin{enumerate}
\item In ablation studies, we quantify the effect of several modeling
improvements (including
multi-head attention and layer normalization) as well as
optimization techniques (such as synchronous replica training and
label-smoothing), which are used in recent architectures.
We demonstrate that these techniques are applicable
across different model architectures.
\item Combining these improvements with the RNMT model, we propose the new RNMT+
model, which significantly outperforms all fundamental architectures on
the widely-used WMT'14 En$\rightarrow$Fr
and En$\rightarrow$De benchmark datasets. We provide a detailed
model analysis and comparison of RNMT+, ConvS2S and Transformer
in terms of model quality, model size, and training and inference speed.
\item Inspired by our understanding of the relative strengths and weaknesses
of individual model architectures, we propose new model architectures that
combine components from the RNMT+ and the Transformer model, and achieve better
results than both individual architectures.
\end{enumerate}
We quickly note two prior works that provided empirical solutions
to the difficulty of training NMT architectures (specifically RNMT).
In \cite{britz-EtAl:2017:EMNLP2017}
the authors systematically explore which elements of NMT architectures have a
significant impact on translation quality.
In \cite{denkowski-neubig:2017:NMT}
the authors recommend three specific techniques for strengthening NMT systems
and empirically demonstrated how incorporating those techniques improves the
reliability of the experimental results.
\section{Background}
\label{sec:background}
In this section, we briefly discuss the commonly used NMT architectures.
\subsection{RNN-based NMT Models - RNMT}
RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an
attention network. The encoder summarizes the input sequence into a set of
vectors while the decoder conditions on the encoded input sequence through an
attention mechanism, and generates the output sequence one token at a time.
The most successful RNMT models consist of stacked RNN encoders with one or
more bidirectional RNNs, and stacked decoders with unidirectional RNNs. Both
encoder and decoder RNNs consist of either LSTM \cite{hochreiter1997long} or
GRU units \cite{ChoMGBSB14}, and make extensive use of residual
\cite{DBLP:journals/corr/HeZRS15} or highway
\cite{DBLP:journals/corr/SrivastavaGS15} connections.
In Google-NMT (GNMT) \cite{DBLP:journals/corr/WuSCLNMKCGMKSJL16},
the best performing RNMT model on the datasets we consider,
the encoder network consists of one bi-directional LSTM layer, followed by 7
uni-directional LSTM layers. The decoder is equipped with a single attention
network and 8 uni-directional LSTM layers.
Both the encoder and the decoder use residual skip connections between
consecutive layers.
In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture,
following the public NMT codebase\footnote{https://github.com/tensorflow/nmt}.
\subsection{Convolutional NMT Models - ConvS2S}
In the most successful convolutional sequence-to-sequence model
\cite{DBLP:journals/corr/GehringAGYD17}, both the encoder and decoder
are constructed by stacking multiple convolutional layers, where each
layer contains 1-dimensional convolutions followed by gated linear units
(GLU) \cite{DBLP:journals/corr/DauphinFAG16}.
Each decoder layer computes a separate dot-product
attention by using the current decoder layer output and the final encoder layer
outputs. Positional embeddings are used to provide explicit positional information to the model.
Following the practice in \cite{DBLP:journals/corr/GehringAGYD17}, we scale the
gradients of the encoder layers to stabilize training. We also use residual
connections across each convolutional layer and apply weight normalization
\cite{DBLP:journals/corr/SalimansK16} to speed up convergence. We follow
the public ConvS2S codebase\footnote{https://github.com/facebookresearch/fairseq-py} in our experiments.
\subsection{Conditional Transformation-based NMT Models - Transformer}
\label{subsec:transf}
The Transformer model \cite{DBLP:journals/corr/VaswaniSPUJGKP17} is motivated
by two major design choices that aim to address deficiencies in the former two
model families: (1) Unlike RNMT, but similar to the ConvS2S, the
Transformer model avoids any sequential dependencies in both the encoder and
decoder networks to maximally parallelize training. (2) To address the limited
context problem (limited receptive field) present in ConvS2S,
the Transformer model makes pervasive use of self-attention networks
\cite{Parikh2016ADA} so that
each position in the current layer has access to information from all other
positions in the previous layer.
The Transformer model still follows the encoder-decoder paradigm. Encoder
transformer layers are built with two sub-modules: (1) a self-attention network
and (2) a feed-forward network. Decoder transformer layers have an additional
cross-attention layer sandwiched between the self-attention and feed-forward
layers to attend to the encoder outputs.
There are two details which we found very important to the model's
performance: (1) Each sub-layer in the transformer (i.e.~self-attention,
cross-attention, and the feed-forward sub-layer) follows a strict computation
sequence: \textit{normalize} $\rightarrow$ \textit{transform} $\rightarrow$
\textit{dropout} $\rightarrow$ \textit{residual-add}.
(2) In addition to per-layer normalization, the final encoder output is again
normalized to prevent a blow up after consecutive residual additions.
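A schematic PyTorch-style rendering of this computation sequence, where \texttt{transform} stands for any of the three sub-layer types, is given below; it is an illustration, not the reference Tensor2Tensor implementation.
\begin{verbatim}
# Sketch of the strict sub-layer order: normalize -> transform -> dropout
# -> residual-add. Illustrative PyTorch code, not the reference codebase.
import torch.nn as nn

class PreNormSubLayer(nn.Module):
    def __init__(self, d_model, transform, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)     # (1) normalize
        self.transform = transform            # (2) attention or feed-forward
        self.dropout = nn.Dropout(dropout)    # (3) dropout

    def forward(self, x, *args):
        return x + self.dropout(self.transform(self.norm(x), *args))  # (4) add
\end{verbatim}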
In this paper, we follow the latest version of the Transformer model in the
public Tensor2Tensor\footnote{https://github.com/tensorflow/tensor2tensor} codebase.
\begin{figure*}[t!] \centering
\includegraphics[width=0.8\textwidth,height=7cm]{gnmtv2} \caption{Model
architecture of RNMT+. On the left side, the encoder network has 6 bidirectional
LSTM layers. At the end of each bidirectional layer, the outputs of the forward
layer and the backward layer are concatenated. On the right side, the decoder
network has 8 unidirectional LSTM layers, with the first layer used for
obtaining the attention context vector through multi-head additive attention. The
attention context vector is then fed directly into the rest of the decoder
layers as well as the softmax layer.} \label{fig:gnmtv2} \end{figure*}
\subsection{A Theory-Based Characterization of NMT Architectures}
From a theoretical point of view, RNNs belong to the most expressive
members of the neural network family \cite{SIEGELMANN1995132}\footnote{Assuming that data
complexity is satisfied.}. Possessing an infinite Markovian structure (and thus
an infinite receptive field) equips them to model sequential data \cite{ELMAN1990179},
especially natural language \cite{Grefenstette:2015:LTU:2969442.2969444} effectively.
In practice, RNNs are notoriously hard to train \cite{Bengio:1994:LLD:2325857.2328340},
confirming the well-known dilemma of trainability versus expressivity.
Convolutional layers are adept at capturing local context and local
correlations by design. A fixed and narrow receptive field for each
convolutional layer limits their capacity when the architecture is shallow. In
practice, this weakness is mitigated by stacking
more convolutional layers (e.g.~15 layers as in the
ConvS2S model), which makes the model harder to train and
demands meticulous initialization schemes and carefully designed
regularization techniques.
The transformer network is capable of approximating arbitrary squashing
functions \cite{HORNIK1989359}, and can be considered a strong
feature extractor with extended receptive fields capable of linking salient
features from the entire sequence. On the other hand, lacking a memory component
(as present in the RNN models) prevents the network from modeling a state space,
reducing its theoretical strength as a sequence model, thus it requires
additional positional information (e.g. sinusoidal positional encodings).
The above theoretical characterizations will drive our explorations in the
following sections.
\section{Experiment Setup}
\label{sec:eval}
\vspace{-5px}
We train our models on the standard WMT'14 En$\rightarrow$Fr and En$\rightarrow$De
datasets that comprise 36.3M and 4.5M sentence pairs, respectively.
Each sentence was encoded into a sequence of sub-word units obtained
by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into sub-word
units (also known as ``wordpieces'') using the approach described in \cite{wordpiece_schuster}.
We use a shared vocabulary of 32K sub-word units for each
source-target language pair. No further manual or rule-based post
processing of the output was performed beyond combining the sub-word
units to generate the targets. We report all our results on newstest
2014, which serves as the test set. A combination of newstest 2012 and
newstest 2013 is used for validation.
To evaluate the models, we compute the BLEU metric on tokenized, true-case
output.\footnote{This procedure is used in the literature to which we compare
\cite{DBLP:journals/corr/GehringAGYD17,DBLP:journals/corr/WuSCLNMKCGMKSJL16}.}
For each training run, we evaluate the
model every 30 minutes on the dev set. Once the model converges, we
determine the best window based on the average dev-set BLEU score over
21 consecutive evaluations.
We report the mean test score and standard
deviation over the selected window. This allows us to compare model
architectures based on their mean performance after convergence rather than
individual checkpoint evaluations, as the latter can be quite noisy for some models.
To enable a fair comparison of architectures, we use the same pre-processing and
evaluation methodology for all our experiments. We refrain from using checkpoint
averaging (exponential moving averages of parameters) \cite{junczys2016amu} or
checkpoint ensembles \cite{jean2015using,DBLP:journals/corr/abs-1710-03282}
to focus on evaluating the performance of individual models.
\section{RNMT+}
\label{sec:v2}
\subsection{Model Architecture of RNMT+}
The newly proposed RNMT+ model architecture is shown in Figure~\ref{fig:gnmtv2}.
Here we highlight the key architectural choices that are different between
the RNMT+ model and the GNMT model.
There are 6 bidirectional LSTM layers in
the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional
layers as in GNMT. For each
bidirectional layer, the outputs of the forward layer and the backward layer
are concatenated before being fed into the next layer.
The decoder network consists of 8 unidirectional LSTM layers similar to the
GNMT model.
Residual connections are
added to the third layer and above for both the encoder and decoder. Inspired by
the Transformer model, per-gate
layer normalization \cite{DBLP:journals/corr/BaKH16} is applied within each LSTM cell. Our empirical results
show that layer normalization greatly stabilizes training. No non-linearity is
applied to the LSTM output. A projection layer is added to the encoder final
output.\footnote{Additional projection aims to reduce the dimensionality of
the encoder output representations to match the decoder stack dimension.}
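A simplified sketch of an LSTM cell with per-gate layer normalization is shown below; it illustrates the idea only and is not the exact cell used in our implementation.
\begin{verbatim}
# Sketch: LSTM cell with per-gate layer normalization (illustrative only).
import torch
import torch.nn as nn

class LayerNormLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        # one LayerNorm per gate: input, forget, cell candidate, output
        self.norms = nn.ModuleList(nn.LayerNorm(hidden_size) for _ in range(4))

    def forward(self, x, state):
        h, c = state
        gates = self.linear(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        i, f, g, o = (ln(gate) for ln, gate in zip(self.norms, gates))
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
\end{verbatim}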
Multi-head additive attention is used instead of the single-head
attention in the GNMT model. Similar to GNMT, we use the bottom decoder layer
and the final encoder layer output after projection for obtaining the recurrent
attention context. In addition to feeding the attention context to all decoder
LSTM layers, we also feed it to the softmax.
This is important for both the
quality of the models with multi-head attention and the stability of the
training process.
Since the encoder network in RNMT+ consists solely of bi-directional LSTM
layers, model parallelism is not used during training. We compensate for
the resulting longer per-step time with increased data parallelism
(more model replicas), so that the overall time to reach
convergence of the RNMT+ model is still comparable to that of GNMT.
We apply the following regularization techniques during training.
\begin{itemize}
\item \textbf{Dropout:} We apply dropout to both embedding layers and each LSTM
layer output before it is added to the next layer's input. Attention dropout is also applied.
\item \textbf{Label Smoothing:} We use uniform label smoothing with an
uncertainty=0.1 \cite{DBLP:journals/corr/SzegedyVISW15}; a minimal sketch of this
loss follows the list. Label smoothing was shown to have a positive impact on both
Transformer and RNMT+ models, especially in the case of RNMT+ with
multi-head attention. Similar to the observations in \cite{DBLP:journals/corr/ChorowskiJ16}, we found it beneficial to use a larger beam size
(e.g. 16, 20, etc.) during decoding when models are trained with label
smoothing.
\item \textbf{Weight Decay:} For the WMT'14 En$\rightarrow$De task, we apply L2
regularization to the weights with $\lambda = 10^{-5}$. Weight decay is only
applied to the En$\rightarrow$De task as the corpus is smaller and thus more
regularization is required.
\end{itemize}
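The sketch below shows one common formulation of the smoothed loss mentioned above, spreading the uncertainty mass uniformly over the non-reference tokens; it is an illustration under this assumption, not our exact implementation.
\begin{verbatim}
# Sketch: uniform label smoothing with uncertainty 0.1 (one common
# variant; illustrative, not the exact production loss).
import torch.nn.functional as F

def smoothed_xent(logits, targets, uncertainty=0.1):
    vocab = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, vocab).float()
    soft = (1.0 - uncertainty) * one_hot \
         + uncertainty / (vocab - 1) * (1.0 - one_hot)
    return -(soft * log_probs).sum(dim=-1).mean()
\end{verbatim}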
We use the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with $\beta_1 = 0.9, \beta_2 = 0.999,
\epsilon = 10^{-6}$ and vary the learning rate according to this schedule:
\begin{equation}
lr = 10^{-4} \cdot \min\Big(1 + \frac{t \cdot (n-1)}{np}, n, n \cdot (2n)^{\frac{s- nt}{e - s}}\Big)
\label{eq:rnmt_lr}
\end{equation}
Here, $t$ is the current step, $n$ is the number of concurrent model replicas used in training, $p$
is the number of warmup steps, $s$ is the start step of the exponential decay, and
$e$ is the end step of the decay. Specifically, we first increase the learning
rate linearly during the $p$ warmup steps, keep it constant until the
decay start step $s$, then exponentially decay until the decay end step $e$,
and keep it at $5 \cdot 10^{-5}$ after the decay ends. This learning rate
schedule is motivated by a similar schedule that was successfully
applied in training the Resnet-50 model with a very large batch size
\cite{DBLP:journals/corr/GoyalDGNWKTJH17}.
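Eq.~\eqref{eq:rnmt_lr} translates directly into code; the sketch below also applies the $5 \cdot 10^{-5}$ floor described above.
\begin{verbatim}
# Direct transcription of the schedule in Eq. (eq:rnmt_lr), with the
# 5e-5 floor applied once the decay ends.
def rnmt_plus_lr(t, n, p, s, e):
    """t: step, n: #replicas, p: warmup steps, s/e: decay start/end step."""
    warmup = 1.0 + t * (n - 1) / (n * p)
    # clamp the exponent to avoid float overflow; the min() below caps the
    # schedule at n anyway whenever the exponent is this large
    expo = min((s - n * t) / (e - s), 1.0)
    decay = n * (2.0 * n) ** expo
    return max(1e-4 * min(warmup, n, decay), 5e-5)
\end{verbatim}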
In contrast to the asynchronous training used for GNMT
\cite{downpoursgd}, we train RNMT+ models with synchronous training
\cite{DBLP:journals/corr/ChenMBJ16}. Our empirical results suggest that when
hyper-parameters are tuned properly, synchronous training often leads
to improved convergence speed and superior model quality.
To further stabilize training, we also use adaptive
gradient clipping. We discard a training step completely if an anomaly
in the gradient norm value is
detected, which is usually an indication of an imminent gradient explosion.
More specifically, we keep track of a moving average and a moving
standard deviation of the $\log$ of the gradient norm values, and we
abort a step if the norm of the gradient exceeds four standard deviations
of the moving average.
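One possible realization of this rule is sketched below; the moving-average decay constant is an illustrative choice.
\begin{verbatim}
# Sketch: skip a training step if log(grad_norm) exceeds the moving mean
# by 4 moving standard deviations. The decay constant is illustrative.
import math

class GradNormGuard:
    def __init__(self, decay=0.99, n_sigma=4.0):
        self.decay, self.n_sigma = decay, n_sigma
        self.mean, self.var = 0.0, 1.0

    def should_skip(self, grad_norm):
        x = math.log(grad_norm)   # grad_norm assumed strictly positive
        skip = x > self.mean + self.n_sigma * math.sqrt(self.var)
        if not skip:  # update statistics only on accepted steps
            d = self.decay
            self.mean = d * self.mean + (1.0 - d) * x
            self.var = d * self.var + (1.0 - d) * (x - self.mean) ** 2
        return skip
\end{verbatim}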
\subsection{Model Analysis and Comparison}
In this section, we compare the results of RNMT+ with ConvS2S and Transformer.
All models were trained with synchronous training.
RNMT+ and ConvS2S were trained with 32
NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16
GPUs.
For RNMT+, we use sentence-level cross-entropy loss. Each training batch
contained 4096 sentence pairs (4096 source sequences and 4096 target sequences).
For ConvS2S and Transformer models, we use token-level cross-entropy loss. Each
training batch contained 65536 source tokens and 65536 target tokens. For the
GNMT baselines on both tasks, we cite the largest BLEU score reported in
\cite{DBLP:journals/corr/WuSCLNMKCGMKSJL16} without reinforcement learning.
Table~\ref{table:enfr} shows our results on the WMT'14 En$\rightarrow$Fr
task. Both the Transformer Big model and RNMT+ outperform GNMT and ConvS2S by
about 2 BLEU points. RNMT+ is slightly better than the Transformer Big model in
terms of its mean BLEU score. RNMT+ also yields a much lower standard
deviation, and hence we observed much less fluctuation in the training
curve. It takes approximately 3 days for the Transformer Base model to
converge, while both RNMT+ and the Transformer Big model require about
5 days to converge. Although the batching schemes are quite different
between the Transformer Big and the RNMT+ model, they have processed about the
same amount of training samples upon convergence.
\begin{table}[!htbp]
\centering
\begin{tabular}{ c|c|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1.1cm}}
\hline
\hline
Model & Test BLEU & Epochs & Training Time \\
\hline
GNMT & 38.95 & - & -\\
ConvS2S \footnotemark[7] & 39.49 $\pm$ 0.11 & 62.2 & 438h\\
Trans. Base & 39.43 $\pm$ 0.17& 20.7 & 90h\\
Trans. Big \footnotemark[8] & 40.73 $\pm$ 0.19 & 8.3 & 120h\\
RNMT+ & 41.00 $\pm$ 0.05 & 8.5 & 120h\\
\hline
\end{tabular}
\caption{Results on WMT14 En$\rightarrow$Fr. The numbers before and after
'$\pm$' are the mean and standard deviation of test BLEU score over an
evaluation window.}
\label{table:enfr}
\end{table}
Table~\ref{table:ende} shows our results on the WMT'14 En$\rightarrow$De task.
The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU
points while the Big model improves by over 3 BLEU points. RNMT+ further
outperforms the Transformer Big model and establishes a new state of
the art with an averaged value of 28.49. In this case, RNMT+ converged
slightly faster than the Transformer Big model and maintained much more
stable performance after convergence with a very small standard
deviation, which is similar to what we observed on the En-Fr task.
\footnotetext[7]{Since the ConvS2S model convergence is very slow we did
not explore further tuning on En$\rightarrow$Fr, and
validated our implementation on En$\rightarrow$De.}
\footnotetext[8]{The BLEU scores for Transformer model are slightly lower than those
reported in \cite{DBLP:journals/corr/VaswaniSPUJGKP17} due to four differences:
1) We report the mean test BLEU score using the strategy described in section~\ref{sec:eval}.
2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models.
3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation.
We observed a significant BLEU increase (about 0.6) on applying these post processing techniques.
4) In \cite{DBLP:journals/corr/VaswaniSPUJGKP17}, reported
BLEU scores are calculated using mteval-v13a.pl from Moses,
which re-tokenizes its input.}
\begin{table}[!htbp]
\centering
\begin{tabular}{ c|c|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1.1cm}}
\hline
\hline
Model & Test BLEU & Epochs & Training Time \\
\hline
GNMT & 24.67 & - & -\\
ConvS2S & 25.01 $\pm$0.17 & 38 & 20h\\
Trans. Base & 27.26 $\pm$ 0.15 & 38 & 17h\\
Trans. Big & 27.94 $\pm$ 0.18 & 26.9 & 48h\\
RNMT+ & 28.49 $\pm$ 0.05 & 24.6 & 40h\\
\hline
\end{tabular}
\caption{Results on WMT14 En$\rightarrow$De.}
\label{table:ende}
\end{table}
Table~\ref{table:perf} summarizes training performance and model statistics.
The Transformer Base model is the fastest model in terms of training speed. RNMT+ is slower
to train than the Transformer Big model on a per-GPU basis.
However, since the RNMT+ model is quite stable, we were able to
offset the lower per-GPU throughput with higher concurrency by
increasing the number of model replicas, and
hence the overall time to convergence was not slowed down much.
We also computed the number of floating point operations (FLOPs)
in the model's forward path as well as the number of total parameters
for all architectures (cf.~Table~\ref{table:perf}). RNMT+ requires
fewer FLOPs than the Transformer Big model, even though both models
have a comparable number of parameters.
\begin{table}[!htbp]
\centering
\begin{tabular}{ c|>{\centering\arraybackslash}m{1.7cm}|>{\centering\arraybackslash}m{1.2cm}|>{\centering\arraybackslash}m{1.1cm}}
\hline
\hline
Model & Examples/s & FLOPs & Params\\
\hline
ConvS2S & 80 & 15.7B & 263.4M\\
Trans. Base & 160 & 6.2B & 93.3M\\
Trans. Big & 50 & 31.2B & 375.4M\\
RNMT+ & 30 & 28.1B & 378.9M\\
\hline
\end{tabular}
\caption{Performance comparison. Examples/s are normalized by the number of
GPUs used in the training job. FLOPs are computed assuming that source
and target sequence length are both 50.}
\label{table:perf}
\end{table}
\vspace{-10px}
\section{Ablation Experiments}
In this section, we evaluate the importance of four main techniques
for both the RNMT+ and the Transformer Big models. We believe that these
techniques are universally applicable across different model
architectures, and should always be employed by NMT practitioners for
best performance.
We take our best RNMT+ and
Transformer Big models and remove each one of these techniques
independently. By doing this we hope to learn two things about each
technique: (1) How much does it affect the model performance? (2) How useful is
it for stable training of other techniques and hence the final model?
\begin{table}[!htbp]
\centering
\begin{tabular}{c|c|c}
\hline
\hline
Model & RNMT+ & Trans. Big \\
\hline
Baseline & 41.00 & 40.73\\
- Label Smoothing & 40.33 & 40.49\\
- Multi-head Attention & 40.44 & 39.83\\
- Layer Norm. & * & *\\
- Sync. Training & 39.68 & *\\
\hline
\end{tabular}
\caption{Ablation results of RNMT+ and the Transformer Big model on WMT'14
En $\rightarrow$ Fr. We report average BLEU
scores on the test set. An asterisk '\mbox{*}' indicates an unstable training run
(training halts due to non-finite elements).}
\label{table:enfr_ablation}
\end{table}
From Table~\ref{table:enfr_ablation} we draw the following conclusions about the
four techniques:
\begin{itemize}
\item \textbf{Label Smoothing}
We observed that label smoothing improves both models, leading to an
average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models
(a minimal sketch of the smoothed loss follows this list).
\item \textbf{Multi-head Attention}
Multi-head attention contributes significantly to the quality of
both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU
for Transformer Big models.
\item \textbf{Layer Normalization}
Layer normalization is most critical to stabilize the
training process of either model, especially when multi-head attention is used.
Removing layer normalization results in unstable training runs for both models.
Since, by design, we remove one technique at a time in our ablation experiments,
we were unable to quantify how much layer
normalization helped in either case. To be able to successfully train a model
without layer normalization, we would have to adjust other parts of the model
and retune its hyper-parameters.
\item \textbf{Synchronous training}
Removing synchronous training has different effects on RNMT+ and Transformer.
For RNMT+, it results in a significant quality drop, while for the
Transformer Big model, it causes the model to become unstable. We also
notice that synchronous training is only successful when coupled
with a tailored learning rate schedule that has a warmup stage at
the beginning (cf.~Eq.~\ref{eq:rnmt_lr} for RNMT+ and
Eq.~\ref{eq:trans_lr} for Transformer). For RNMT+, removing this
warmup stage during synchronous training causes the model to become
unstable.
\end{itemize}
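For reference, here is a minimal sketch of label-smoothed cross-entropy; the
uniform-smoothing formulation and the value of \texttt{eps} are common
defaults, assumed for illustration rather than taken from our exact
configuration.
\begin{verbatim}
# Label smoothing with a uniform prior over the vocabulary;
# eps = 0.1 is a typical value, assumed for illustration.
import torch
import torch.nn.functional as F

def label_smoothed_loss(logits, targets, eps=0.1):
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # cross-entropy vs. uniform
    return ((1.0 - eps) * nll + eps * uniform).mean()
\end{verbatim}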
\section{Hybrid NMT Models}
\label{sec:hybrids}
\vspace{-5px}
In this section, we explore hybrid architectures that shed
some light on the salient behavior of each model family. These hybrid models
outperform the individual architectures on both benchmark datasets and provide
a better understanding of the capabilities and limitations of each model family.
\subsection{Assessing Individual Encoders and Decoders}
In an encoder-decoder architecture, a natural assumption
is that the role of an encoder is to build feature representations that can
best encode the meaning of the source sequence, while a decoder should
be able to process and interpret the representations from the encoder and,
at the same time, track the current target history.
Decoding is inherently auto-regressive,
and keeping track of the state information should therefore
be intuitively beneficial for conditional generation.
We set out to study which family of encoders
is more suitable to extract rich representations from a given input sequence,
and which family of decoders can make the best of such rich representations.
We start by combining the encoder and decoder from different
model families. Since it takes a significant amount of time for a
ConvS2S model to converge, and because the final translation quality
was not on par with the other models, we focus on two types of
hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with
Transformer decoder.
\begin{table}[!htbp]
\centering
\begin{tabular}{c|c|c}
\hline \hline
Encoder & Decoder & En$\rightarrow$Fr Test BLEU \\ \hline
Trans. Big & Trans. Big & 40.73 $\pm$ 0.19 \\
RNMT+ & RNMT+ & 41.00 $\pm$ 0.05 \\
Trans. Big & RNMT+ & \textbf{41.12 $\pm$ 0.16} \\
RNMT+ & Trans. Big & 39.92 $\pm$ 0.21 \\ \hline
\end{tabular}
\caption{Results for encoder-decoder hybrids.}
\label{table:hybrids_encdec}
\end{table}
From Table~\ref{table:hybrids_encdec}, it is clear that the Transformer
encoder is better at encoding or feature extraction than the RNMT+
encoder, whereas RNMT+ is better at decoding or conditional language
modeling, confirming our intuition that a stateful decoder is
beneficial for conditional language generation.
\subsection{Assessing Encoder Combinations}
Next, we explore how the features extracted by an encoder can be
further enhanced by incorporating additional
information. Specifically, we investigate the combination of
transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations.
We exclusively use RNMT+ decoders in the following architectures since stateful
decoders show better performance according to Table~\ref{table:hybrids_encdec}.
We study two mixing schemes in the encoder (see Fig.~\ref{fig:enc_hybrids}):
(1) \textit{Cascaded Encoder}: The cascaded encoder aims at combining
the representational power of RNNs and self-attention. The idea is to
enrich a set of stateful representations by cascading a feature
extractor with a focus on vertical mapping, similar to
\cite{DBLP:journals/corr/PascanuGCB13,D17-1300}. Our best performing
cascaded encoder involves fine tuning transformer layers stacked on
top of a pre-trained frozen RNMT+ encoder. Using a pre-trained
encoder avoids optimization difficulties while significantly enhancing encoder
capacity. As shown in Table~\ref{table:hybrids-perf},
the cascaded encoder improves over the Transformer encoder by more
than 0.5 BLEU points on the WMT'14 En$\rightarrow$Fr task. This
suggests that the Transformer encoder is able to extract richer
representations if the input is augmented with sequential context.
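A minimal sketch of this cascading scheme is given below; the module names
and freezing mechanics are our own illustrative choices (the actual
configuration is described in Appendix~\ref{vertical_mixing}).
\begin{verbatim}
# Hypothetical sketch of the cascaded encoder: Transformer layers
# fine-tuned on top of a pre-trained, frozen RNMT+ encoder.
import torch
import torch.nn as nn

class CascadedEncoder(nn.Module):
    def __init__(self, rnmt_encoder, transformer_layers):
        super().__init__()
        self.rnmt = rnmt_encoder
        for p in self.rnmt.parameters():
            p.requires_grad = False  # keep the RNMT+ encoder frozen
        self.transformer = transformer_layers

    def forward(self, x):
        with torch.no_grad():
            states = self.rnmt(x)        # stateful representations
        return self.transformer(states)  # enrich via self-attention
\end{verbatim}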
(2) \textit{Multi-Column Encoder}:
As illustrated in Fig.~\ref{fig:mcol}, a multi-column encoder merges
the outputs of several independent encoders into a single
combined representation.
Unlike a cascaded encoder, the multi-column encoder enables us to investigate
whether an RNMT+ decoder can distinguish information received
from two different channels and benefit from its combination.
A crucial operation in a multi-column encoder is therefore how
different sources of information are merged into
a unified representation. Our best multi-column encoder performs a simple
concatenation of individual column outputs.
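A sketch of this merge operation, assuming each column returns a tensor of
shape [batch, time, dim], is:
\begin{verbatim}
# Hypothetical sketch of the multi-column merge: simple concatenation
# of independent column outputs along the feature dimension.
import torch

def multi_column_encode(columns, x):
    outputs = [column(x) for column in columns]  # independent encoders
    return torch.cat(outputs, dim=-1)            # unified representation
\end{verbatim}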
The model details and hyperparameters
of the above two encoders are described in
Appendix~\ref{vertical_mixing} and \ref{horizontal_mixing}. As shown
in Table~\ref{table:hybrids-perf}, the multi-column encoder followed
by an RNMT+ decoder achieves better results than the
Transformer and the RNMT+ model on both WMT'14 benchmark tasks.
\begin{table}[!htbp]
\centering
\begin{tabular}{ c|c|c}
\hline
\hline
Model & En$\rightarrow$Fr BLEU & En$\rightarrow$De BLEU\\ \hline
Trans. Big & 40.73 $\pm$ 0.19 & 27.94 $\pm$ 0.18 \\
RNMT+ & 41.00 $\pm$ 0.05 & 28.49 $\pm$ 0.05 \\
Cascaded & \textbf{41.67 $\pm$ 0.11} & 28.62 $\pm$ 0.06\\
MultiCol & 41.66 $\pm$ 0.11 & \textbf{28.84 $\pm$ 0.06}\\
\hline
\end{tabular}
\caption{Results for hybrids with cascaded encoder and multi-column encoder.}
\label{table:hybrids-perf}
\end{table}
\begin{figure}
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=.6\linewidth]{stacked.png}
\caption{Cascaded Encoder}
\label{fig:stacked}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
\centering
\includegraphics[width=.85\linewidth]{mcol.png}
\caption{Multi-Column Encoder}
\label{fig:mcol}
\end{subfigure}
\caption{Vertical and horizontal mixing of Transformer and RNMT+ components in an encoder.}
\label{fig:enc_hybrids}
\end{figure}
\vspace{-5px}
\section{Conclusion}
\label{sec:conclusion}
\vspace{-5px}
In this work we explored the efficacy of several architectural and training
techniques proposed in recent studies on seq2seq
models for NMT. We demonstrated that many of these techniques
are broadly applicable to multiple model architectures.
Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT
model that
significantly outperforms the three fundamental architectures
on WMT'14 En$\rightarrow$Fr and En$\rightarrow$De tasks.
We further presented several hybrid models developed by combining
encoders and decoders from the Transformer and RNMT+ models, and empirically
demonstrated the superiority of the Transformer encoder and the RNMT+
decoder in comparison with their counterparts. We then enhanced the encoder
architecture by horizontally and vertically mixing components borrowed from
these architectures, leading to hybrid architectures that obtain further improvements
over RNMT+.
We hope that our work will motivate NMT researchers to further
investigate
generally applicable training and optimization techniques,
and that our exploration of hybrid architectures will open
paths for new architecture search efforts for NMT.
Our focus on a standard single-language-pair translation task leaves
important open questions to be answered:
How do our new architectures compare in multilingual settings, i.e., modeling
an \textit{interlingua}? Which architecture is more efficient and
powerful in processing finer grained inputs and outputs, e.g., characters or
bytes? How transferable are the representations learned by the different
architectures to other tasks? And what are the characteristic errors that
each architecture makes, e.g., linguistic plausibility?
\ifblindreview
\else
\section*{Acknowledgments}
We would like to thank the entire Google Brain Team and Google Translate Team for their foundational contributions to this project. We would also like to thank Noam Shazeer, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, and the entire Tensor2Tensor development team for their useful inputs and discussions.
\fi
\section{Conclusion}
In this work, we introduce the multi-filter Seq2Seq model, which addresses the problem of heterogeneous features in the dataset. Our model outperforms the original LSTM encoder-decoder network by concentrating on heterogeneous features simultaneously through the multi-filter architecture.
We have also shown that the model's performance improves as the reinforcement learning algorithm improves the clustering quality. Under the assumption that data with similar features are clustered together, better clustering quality leads to more accurate feature analysis: each filter is trained on a set of records that share similar features, and hence the overall performance improves.
\section{Experiments}
The experiments demonstrate the effectiveness of the multi-filter architecture. To this end, we conduct several comparative experiments that compare the performance of our latent-enhanced multi-filter model with the traditional encoder-decoder model, as well as other baselines. The experiments are conducted on two classical sequence-to-sequence tasks: semantic parsing and machine translation.
In addition to showing the performance improvement, our experiments also demonstrate a positive correlation between the model's performance and the quality of the latent space clustering, which establishes the necessity of our latent-enhancing reinforcement learning algorithm.
\subsection{Performance Examination on Semantic Parsing}
We evaluate the proposed architecture on a semantic parsing task, where we use the Geo-query dataset \citep{zelle_mooney} that contains a set of geographical questions. We use token-level accuracy and denotation accuracy \citep{Jia_2016, liang-etal-2011-learning} to quantify the performance of this architecture.
We implemented the proposed architecture in PyTorch with the specifications stated in Table~\ref{tab:specification}.
To enhance the latent space clustering, we utilize a multi-layer perceptron (MLP) as the policy for SAC. In the SAC algorithm, we set the maximum number of learning steps to 500, $k=100$ and $b=25$ in Equation~\ref{eq:reward}, and the target Silhouette score $S_c^{target} = 0.55$, and otherwise keep the default settings of the SAC implementation from Stable-Baselines3~\citep{stable-baselines3}.
\begin{table}[!ht]
\centering
\caption{Experimental Specifications.}
\begin{tabular}{||c c||}
\hline
Training size & Validation size \\
480 & 120 \\
\hline
Optimizer & Learning rate \\
Adam & 0.001 \\
\hline
Epoch number & Dropout rate \\
10 & 0.2 \\
\hline
Hidden dimension & Latent dimension\\
200 & 200 \\
\hline
Embedding dimension & Bidirectional \\
150 & True\\
\hline
\end{tabular}
\label{tab:specification}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.32\linewidth]{figures/geo_2.png}
\includegraphics[width=0.32\linewidth]{figures/geo_3.png}
\includegraphics[width=0.32\linewidth]{figures/geo_4.png}
\caption{Latent space clustering of the Geo-query training data. Plots from left to right show the clustering results for two, three, and four clusters, respectively.}
\label{fig:geo}
\end{figure}
Figure~\ref{fig:geo} shows the latent space clustering results for different numbers of clusters after the classifier is optimized by the SAC algorithm. An interesting observation is that every cluster classifier generates two clusters, regardless of the number of filters we assign to the model. This observation indicates that the optimal number of clusters in the latent space is two for this task. The SAC reinforcement learning algorithm optimizes the weights of the cluster classifier; hence, if we set the number of clusters to more than two, the cluster classifier generates empty clusters and ensures that only two clusters are non-empty, because the SAC algorithm has learned that separating the data into two clusters maximizes the Silhouette score. Our 2-filter LMS2S achieves the best performance on both metrics, as shown in Table~\ref{tab:1}.
\begin{table}[!htbp]
\caption{Performance comparison between the baselines \citep{Dong_2016} and the LMS2S model, in terms of token-level accuracy and denotation accuracy. The ordinary encoder-decoder model is denoted Enc-Dec. Note that some of the baselines do not report denotation accuracy.}
\begin{center}
\begin{tabular}{||c c c||}
\hline
Model & Token & Denot \\
\hline
Enc-Dec & 77.4 & 43.8 \\
\thead{SCISSOR \\ \citep{10.5555/1706543.1706546}} & 72.3 & \\
\thead{KRISP \\ \citep{kate-mooney-2006-using}} & 71.7 & \\
\thead{WASP\\ \citep{wong-mooney-2006-learning}} & 74.8 & \\
\thead{LEMS\\ \citep{yang2021training}} & 78.3 & 50.9 \\
\thead{ZC05 \\ \citep{zelle_mooney}} & 79.3 & \\
\textbf{LMS2S} & \textbf{81.7} & \textbf{60.8} \\
\hline
\end{tabular}
\end{center}
\label{tab:1}
\end{table}
\subsection{Performance Examination on Machine Translation}
We also evaluate the performance of the proposed architecture on a machine translation task using the Multi30k English-French dataset~\citep{multi30k}. This dataset is split into 29,000 training records and 1,000 testing records. The experimental setup is identical to that of semantic parsing. We use the BLEU score \citep{papineni-etal-2002-bleu} with $N = 4$ and uniform weights $w_n = 1/4$ to evaluate the translation results.
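This corresponds to standard corpus-level 4-gram BLEU; a minimal sketch using NLTK (our illustrative choice of implementation, with toy data) is:
\begin{verbatim}
# Corpus BLEU with N = 4 and uniform weights w_n = 1/4; NLTK is an
# illustrative choice, and the sentences below are toy examples.
from nltk.translate.bleu_score import corpus_bleu

references = [[["the", "cat", "sat", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "sat", "on", "the", "mat"]]
score = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25))
\end{verbatim}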
As shown in Figure~\ref{fig:trans}, the RL agent learns that the Silhouette score can be maximized when the latent space representations are separated into two clusters. Therefore, even if we increase the number of clusters, the classifier forms empty clusters so that only two non-empty clusters remain. We therefore use the 2-filter LMS2S model in the following experiments.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.32\linewidth]{figures/trans_2.png}
\includegraphics[width=0.32\linewidth]{figures/trans_3.png}
\includegraphics[width=0.32\linewidth]{figures/trans_4.png}
\caption{Latent space clustering of the Multi30k English-French translation training data. The figures from left to right present the latent space clustering results for two, three, and four clusters, respectively.}
\label{fig:trans}
\end{figure}
\begin{table}[!htbp]
\caption{Performance comparison between our LMS2S model and several baselines on the machine translation task.}
\begin{center}
\begin{tabular}{||c c||}
\hline
Model & BLEU\\
\hline
Baseline (text-only NMT) & 44.3\\
\thead{SHEF \_ShefClassProj\_C \\ \citep{Elliott_2017}} & 43.6 \\
\thead{LEMS\\ \citep{yang2021training}} & 46.3 \\
\thead{CUNI Neural Monkey Multimodel MT\_C \\ \citep{NeuralMonkey:2017}}
& 49.9 \\
\thead{LIUMCVC\_NMT\_C \\ \citep{Caglayan_2017}} & 53.3 \\
\thead{DCU-ADAPT MultiMT C \\ \citep{Elliott_2017}} & 54.1 \\
\textbf{LMS2S} & \textbf{55.7} \\
\hline
\end{tabular}
\end{center}
\label{tab:2}
\end{table}
Table~\ref{tab:2} lists the BLEU scores of the baseline models and our LMS2S model. Among these results, our model achieves the best performance. Moreover, the comparison between the ordinary encoder-decoder model and our model again shows the effectiveness of the multi-filter architecture.
\subsection{Latent Space Clustering Enhancement}
In both tasks, we use the soft actor-critic (SAC) algorithm to enhance the clustering quality. The results show that the SAC algorithm is able to improve the Silhouette score, i.e., to generate better clusters.
To explore how the Silhouette score affects the model's performance, we train a set of LMS2S models for each task. Every model is trained under the same hyper-parameter settings stated in Table~\ref{tab:specification}.
By setting the number of learning steps in the SAC algorithm to 10, 20, 30, 50, 100, 200, 300, and 500, respectively, we obtain clusterings of varying quality in terms of Silhouette score. We then compare the models' performance against the Silhouette scores to assess the significance of the latent space clustering.
Figure~\ref{fig:rl} plots the models' performance against the Silhouette scores. We observe that the Silhouette score is positively correlated with the evaluation metrics of both tasks. These results show that we can improve the model's performance by optimizing the latent space clustering, which confirms its significance.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/rl_plot.png}
\caption{The positive correlation between the clustering quality and the model's performance.}
\label{fig:rl}
\end{figure}
\section{Introduction}
A sequence-to-sequence (seq2seq) model takes in a sequence of tokens as input and constructs an output sequence, where both the input and output can have variable length. Machine translation and semantic parsing are two well-known seq2seq problems.
An encoder-decoder model is one approach to solving sequence-to-sequence prediction problems, using recurrent neural networks (RNNs) as the underlying component.
One of the challenges for encoder-decoder models is that training data with heterogeneous features may hinder convergence. The model is not able to concentrate on multiple heterogeneous features simultaneously: fitting one set of features may increase the loss on another set of features.
To address this challenge, we apply a representation learning technique \citep{bengio2013representation} to categorize and cluster the heterogeneous features preserved in the latent space, which is the final hidden space returned by the encoder. The quality of the representations in the latent space is therefore important, and we introduce a reinforcement learning \citep{rlbook} approach to enhance the clustering algorithm, which divides the latent space into subspaces with homogeneous features.
In the experiments, we demonstrate the advantage of the multi-filter structure by showing the performance improvement in contrast to the ordinary encoder-decoder model. We also show a positive correlation between the quality of the latent space clustering (evaluated by the Silhouette score) and the model's performance.
Our first contribution is a multi-filter encoder-decoder structure that addresses the heterogeneity of the training dataset; we empirically demonstrate its effectiveness. Our second contribution is a self-enhancing mechanism that uses reinforcement learning to optimize the clustering algorithm and further improve the performance of the multi-filter encoder-decoder model.
\section{Multi-filter Network Architecture}
The network consists of an encoder and a dummy decoder from an ordinary encoder-decoder model; a latent-space enhancer (a multi-layer perceptron) that takes the final hidden states from the encoder as inputs and projects them onto the latent space; a cluster classifier that determines which cluster a latent space representation belongs to; and a number of decoders, one for each set of features.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.95\linewidth]{figures/architecture.png}
\end{center}
\caption{Latent-enhanced Multi-filter Seq2seq Model Pipeline.
}
\label{fig:architecture}
\end{figure}
\subsection{Latent Enhanced Encoder-Decoder Model}
The latent-enhanced encoder-decoder model consists of an encoder $R$, a dummy decoder $Q_0$, and an enhancer $T$ that connects to the encoder and projects the encoder outputs into the latent space. The encoder takes in a sequence of inputs $x$ and returns a sequence of outputs together with a final hidden state $h$.
\begin{center}
$h = h_{|x|}, \quad \text{where} \quad \overline{x}_i, (h_i, c_i) = R^i(x_i)$
\end{center}
where $R^i$ is the $i^{th}$ recurrent cell, and $\overline{x}_i$, $h_i$, and $c_i$ are the $i^{th}$ output, hidden state, and cell state, respectively. $|x|$ is the length of the input sequence $x$.
The encoder and the decoders each consist of a bi-directional LSTM, and every decoder is equipped with a dot-product attention mechanism \citep{luong2015effective}.
The encoder generates a final hidden state after the entire input sequence has been fed in. The final hidden states can be viewed as representations of the input-output pairs. We denote the space in which the final hidden states lie by $Z$. We then construct an enhancer and apply it to $Z$ to enhance the quality of the representations.
\begin{center}
$r_e = T(h)$
\end{center}
The enhancer is a multi-layer perceptron that transforms the representation $h$ from $Z$ to a new space $Z_T$, which we call the latent space. The enhanced representation, denoted $r_e$, is fed as the input of the first decoder cell. In addition, $r_e$ is also fed as the input of the cluster classifier $C$ for cluster assignment.
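A minimal PyTorch sketch of this encoder-enhancer pipeline is given below; the dimensions follow Table~\ref{tab:specification}, while the two-layer MLP structure of the enhancer is our own illustrative assumption.
\begin{verbatim}
# Hypothetical sketch of the encoder R and the enhancer T; the
# dimensions follow the experimental specifications table, the
# two-layer MLP enhancer is an assumption.
import torch
import torch.nn as nn

class LatentEnhancedEncoder(nn.Module):
    def __init__(self, vocab_size, emb=150, hidden=200, latent=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True,
                               bidirectional=True)
        self.enhancer = nn.Sequential(          # the enhancer T
            nn.Linear(2 * hidden, latent), nn.ReLU(),
            nn.Linear(latent, latent))

    def forward(self, x):
        _, (h, _) = self.encoder(self.embed(x))
        h_final = torch.cat([h[-2], h[-1]], dim=-1)  # final state
        return self.enhancer(h_final)                # r_e in Z_T
\end{verbatim}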
We use a negative log-likelihood (NLL) loss to optimize the trainable weights of the encoder-decoder model and the enhancer. Let $\overline{y}$ be the output vector sequence of the network and $y$ the label sequence; we compute the loss as follows:
\begin{equation}
\label{eq:nll}
\mathcal{L}(\overline{y}, y) = \{ l_1, ..., l_N\}^T, \quad l_n = -w_{y_n} \overline{y}_{n, y_n}.
\end{equation}
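Up to the choice of class weights $w$, this is the standard NLL loss over log-probabilities; a minimal sketch (PyTorch's \texttt{NLLLoss} as an illustrative implementation, with toy tensors) is:
\begin{verbatim}
# Sketch of the NLL loss above, assuming uniform class weights w
# and log-probability outputs from the network.
import torch
import torch.nn as nn

criterion = nn.NLLLoss(reduction="none")    # returns {l_1, ..., l_N}
log_probs = torch.log_softmax(torch.randn(4, 10), dim=-1)
targets = torch.tensor([1, 3, 5, 7])
losses = criterion(log_probs, targets)      # l_n = -log p_n(y_n)
\end{verbatim}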
We preserve and fix the encoder and the enhancer once the model has converged. Then, we remove the dummy decoder.
\subsection{Cluster Assignment}
To assign data to clusters, we construct a cluster classifier $C$. The classifier $C$ takes the enhanced representation as input and outputs a probability vector $v_p$ for cluster assignment. It consists of two linear layers that transform the input from the hidden dimension (space $Z_T$) to $n$ dimensions, where $n$ is the number of clusters, followed by a softmax layer. The output of $C$ is thus a probability vector $v_p$ consisting of the probabilities of assignment to each cluster. Hence, if $v_p$ is the probability vector of a sample representation $h_x$, then
\begin{center}
$v_p[i] = P[h_x \text{ belongs to } i^{th} \text{cluster}]$.
\end{center}
We can take the cluster $c_j$ with the highest probability as the cluster the sample representation $h_x$ belongs to:
\begin{center}
$h_x \in c_j \text{, where } j = \operatorname{argmax}(v_p)$.
\end{center}
Then, the representation $h_x$ can be fed into the filter $Q_j$ corresponding to $c_j$, and the output sequence can be constructed.
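A sketch of the classifier and the routing step is given below; the module names are illustrative, and the filter call is simplified to a single function application.
\begin{verbatim}
# Hypothetical sketch of the cluster classifier C and of routing a
# representation to its filter Q_j; one representation at a time.
import torch
import torch.nn as nn

class ClusterClassifier(nn.Module):
    def __init__(self, latent=200, n_clusters=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, latent),
                                 nn.Linear(latent, n_clusters),
                                 nn.Softmax(dim=-1))

    def forward(self, r_e):
        return self.net(r_e)                  # probability vector v_p

def route(r_e, classifier, filters):
    j = int(classifier(r_e).argmax(dim=-1))   # j = argmax(v_p)
    return filters[j](r_e)                    # decode with filter Q_j
\end{verbatim}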
The parameters of $C$ are not adjusted by any loss computed from the decoders' outputs. Instead, we apply a reinforcement learning algorithm to optimize the parameters of $C$, as discussed in Section~\ref{sec:lea}.
\subsection{Training Decoders}
We construct a set of decoders $Q = \{Q_i, i = 1,\dots,n\}$ to generate output sequences from the latent space representations. We apply the clustering algorithm to divide the enhanced latent space $Z_T$ into subspaces and assign a decoder from the set $Q$ to each subspace. Each decoder concentrates on the data within the subspace it is assigned to. To distinguish them from the dummy decoder $Q_0$, we refer to the decoders in $Q$ as \textit{filters}.
Suppose a data record is assigned to the $j^{th}$ cluster by the cluster classifier. Each RNN cell in the filter $Q_j$ takes in the output, hidden state, and cell state from the previous RNN cell and generates a new output and new states.
\begin{center}
$\overline{x}_i, (\overline{h}_i, \overline{c}_i) = Q_j^i(\overline{x}_{i-1}, (\overline{h}_{i-1}, \overline{c}_{i-1}))$
\end{center}
In the training stage, we apply the cluster classifier $C$ to divide the data into $n$ clusters and set up $n$ filters corresponding to the $n$ clusters. All the filters are initially identical, but we optimize their trainable weights independently: we extract the data records from cluster $c_i$ to train the filter $Q_i$.
We use the NLL loss of Equation~\ref{eq:nll} to update the weights of the filters. Note that the loss for each filter is computed separately; hence, the gradient updates of the filters are independent.
Throughout the entire training procedure, the parameters of the encoder $R$ and the enhancer $T$ are fixed and thus not updated.
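A sketch of one filter-training step under these conventions is shown below; the helper names (\texttt{encode}, \texttt{nll\_loss}) are our own, and the filter call is again simplified.
\begin{verbatim}
# Hypothetical sketch of a filter-training step: each record is
# routed to the filter of its assigned cluster; encoder R, enhancer
# T, and classifier C stay frozen; each filter has its own optimizer.
def train_filters_step(batch, encode, classifier, filters,
                       optimizers, nll_loss):
    for x, y in batch:
        r_e = encode(x)                           # frozen R and T
        j = int(classifier(r_e).argmax(dim=-1))   # cluster assignment
        loss = nll_loss(filters[j](r_e), y)       # per-filter NLL
        optimizers[j].zero_grad()
        loss.backward()
        optimizers[j].step()
\end{verbatim}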
In the evaluation stage, we first encode and obtain the representation $r_e$ from the latent space $Z_T$. Then, we apply the cluster classifier $C$ to determine which cluster the input data belongs to. Once the cluster $c_i$ is determined, we use the corresponding filter $Q_i$ to construct the output sequence.
\section{Latent-Enhancing Algorithm}
\label{sec:lea}
To optimize the latent space clustering, we employ the Soft Actor-Critic (SAC) \citep{haarnoja2018soft} reinforcement learning algorithm to optimize the trainable parameters of the cluster classifier $C$.
The reward function is based on the Silhouette Coefficient $S_c$:
\begin{equation}
R = k \cdot S_c + b
\label{eq:reward}
\end{equation}
where $k$ and $b$ are user-defined constants. To maximize the reward, the RL model learns actions that adjust the trainable weights of the classifier $C$. The action function is defined as:
\begin{equation}
C_i = \Vec{a} \cdot C_i'
\label{eq:action}
\end{equation}
where $\Vec{a}$ is the action, $C_i'$ is the set of old parameters of the $i^{th}$ layer of the classifier $C$, and $C_i$ is the set of updated parameters of that layer.
The reinforcement learning is applied after the encoder and the dummy decoder are fine-tuned, prior to training the filters. A target Silhouette coefficient $S_c^{target}$ is set as the terminal state of the RL model. The learning process stops immediately once the model reaches the maximum number of steps allowed, or once the Silhouette coefficient $S_c$ reaches the target score: $S_c \ge S_c^{target}$. This reinforcement learning model is expected to enhance the final results by improving the quality of the clustering. After the RL algorithm has improved the clustering quality, we can start training the multiple filters.
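A minimal sketch of the reward computation and the stopping criterion is given below; scikit-learn's Silhouette score is an illustrative implementation choice.
\begin{verbatim}
# Sketch of the reward R = k * S_c + b and of the stopping rule;
# Z holds latent representations, labels come from the classifier C.
from sklearn.metrics import silhouette_score

def clustering_reward(Z, labels, k=100, b=25):
    s_c = silhouette_score(Z, labels)  # needs >= 2 non-empty clusters
    return k * s_c + b, s_c

# The RL loop stops once s_c >= S_c_target (0.55 above) or the
# step budget (500 above) is exhausted.
\end{verbatim}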
\section{Related Work}
The encoder-decoder model is commonly used for solving sequence-to-sequence problems. The bidirectional LSTM \citep{lstm} with an attention mechanism \citep{Dong_2016} is one of the state-of-the-art models. We build our LMS2S on top of this model and show how our model outperforms it.
Several existing works apply representation learning to the latent space. \cite{yang2021identifying} has shown that latent space representations can exploit the features and better preserve the key attributes of the raw data.
\cite{bouchacourt2018multi} designs a multi-level latent space structure that generates representations in a hierarchical manner. \cite{yang2017towards} divides the latent space into subspaces using a hard K-means clustering algorithm; \cite{jabi2019deep} introduces a soft K-means algorithm that enhances the results of \cite{yang2017towards}; \cite{dilokthanakul2016deep} develops a Gaussian mixture model to divide the latent space representations into a mixture of Gaussian distributions.
The works above concentrate on image processing rather than language processing. They have shown that dividing the latent space into subspaces is beneficial to performance. Following this idea, \cite{yang2021training} introduces this approach for seq2seq tasks, improving the encoder-decoder model by clustering the latent space and using multiple decoders.
That work also demonstrated a positive relationship between the quality of latent space clustering and the model's overall performance. Our work starts from the architecture introduced in \cite{yang2021training}, modifies the clustering algorithm, and introduces a self-enhancing mechanism using reinforcement learning.
Reinforcement learning can also serve as a technique for improving the quality of latent space clustering: research has suggested that it can achieve more appropriate clusters \citep{Barbakh2007ClusteringWR, Bose2016SemiUnsupervisedCU} when proper reward functions are designed.
The factorization method due to Hull and Infeld \cite{Hull} has been widely exploited in quantum mechanics
to determine the spectra and wave functions of exactly solvable potentials.
This approach has been formalized in supersymmetric quantum mechanics
(SUSY QM) \cite{Cooper} which has been used to find many new isospectral
potentials. The usual procedure is to find a factorization of a quantum
mechanical Hamiltonian and the methods of SUSY QM then guarantee that a
supersymmetric partner potential is isospectral to the original Hamiltonian.
As verified below, this procedure yields a pair of potentials with the same spectra (possibly apart from the ground state) and related wave functions. Throughout this paper we work in $\hbar=2m=1$ units.
Let us consider a one-dimensional Hamiltonian \[H_-^{(0)}=-\partial^2_x+V_-^{(0)}(x)\] where $V_-^{(0)}(x)$ is an arbitrary non-singular potential with at least one bound state and zero ground state energy (given the Hamiltonian $H=-\partial^2_x+V(x)$ one simply subtracts the zero point energy to obtain $H_-^{(0)}$). It is a second order linear operator and it can be factored into a product of first order linear operators as follows: \[H_-^{(0)}=(-\partial_x+W_0(x))(\partial_x+W_0(x))\equiv A_0^\dag A_0\] once the ground state wave function $\psi_0(x)$ is specified. The function $W_0(x)=-\partial_x\ln\psi_0(x)$ is called the superpotential, generating the potential \[V_-^{(0)}(x)=W_0^2(x)-W_0'(x).\]
Fortunately, the factors do not commute, $A_0^\dag A_0\ne A_0A_0^\dag$, unless the superpotential is constant. In other words, the reversed product $A_0A_0^\dag$ defines a new Hamiltonian $H_+^{(0)}=A_0A_0^\dag=-\partial^2_x+V_+^{(0)}(x)$ where \[V_+^{(0)}(x)=W_0^2(x)+W_0'(x)\] is also free of singularities. It turns out that the eigenfunctions and eigenvalues of these partner Hamiltonians are related. Indeed, we have the following first-order intertwining relations \begin{equation}H_-^{(0)}A_0^\dag=A_0^\dag H_+^{(0)} \mbox{ and } H_+^{(0)}A_0=A_0H_-^{(0)} \end{equation} from which one observes that since $A_0\psi_0(x)=0$, the spectra of $H_+^{(0)}$ and $H_-^{(0)}$ are connected by $\tilde E_n=E_{n+1}$ $(n=0,1,\dots)$ where $\tilde E_n$ and $E_n$ denote the eigenvalues of the Hamiltonians $H_+^{(0)}$ and $H_-^{(0)}$ respectively with eigenfunctions $\tilde\psi_n$ and $\psi_n$. Thus, the Hamiltonians have identical energy spectra except for the ground state of $H_-^{(0)}$. The wave functions satisfy $\tilde\psi_n(x)\propto A_0\psi_{n+1}(x)$, $\psi_{n+1}(x)\propto A_0^\dag\tilde\psi_n(x)$ and if $\psi_{n+1}(x)$ is normalizable, then $\tilde\psi_n(x)$ is also normalizable and vice versa, because \begin{eqnarray*}\langle\tilde\psi_n(x), \tilde\psi_n(x)\rangle&=&\langle\psi_{n+1}(x), A_0^\dag A_0\psi_{n+1}(x)\rangle\\ &=&E_{n+1}\langle\psi_{n+1}(x), \psi_{n+1}(x)\rangle.\end{eqnarray*} Note that for singular potentials (for instance, with a $1/x^2$ singularity) some of the wave functions $\tilde\psi_n(x)$ are not acceptable as they may not be normalizable \cite{Berger}. That is, for singular potentials the degeneracy of energy levels is only partially valid, or not valid at all. The upshot of all this is that one can generate new isospectral potentials from existing exactly solvable potentials.
Luckily, the above-discussed factorization is not unique. For example, we have \[(-\partial_x+1)(\partial_x+1)=(-\partial_x+\tanh(x))(\partial_x+\tanh(x)),\] i.e. two different superpotentials can give rise to the same potential (in this particular example with no bound states). One can try to construct new isospectral potentials by exploiting the non-uniqueness of factorization and obtain a one-parameter family of potentials, with the parameter arising as an integration constant \cite{Mielnik, Mitra}.
Suppose the Hamiltonian $H_+^{(0)}$ can be factorized by operators different from $A_0$ and $A_0^\dag$, namely, \[B=\partial_x +f(x)\mbox{ and } B^\dag=-\partial_x +f(x)\] where $f(x)$ is a temporarily undetermined function:
\[H_+^{(0)}=BB^\dag=-\partial^2_x+f^2(x)+f'(x).\] Now demanding that this Hamiltonian involve the potential $V_+^{(0)}(x)$ results in
a differential equation that must be satisfied: \[f'(x)+f^2(x)-V_+^{(0)}(x)=0.\] This is a Riccati equation in its canonical form. An explicit closed-form solution of this equation is typically not known, but one observes that the superpotential $W_0(x)$ is a particular solution. This is enough to construct the general solution $f(x)$, which depends on an arbitrary integration constant that can be considered as a free parameter in the partner Hamiltonian \[H=B^\dag B=-\partial^2_x+ V_+^{(0)}(x)-2f'(x)=-\partial^2_x+ V(x).\]
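To spell out this step: substituting $f(x)=W_0(x)+1/v(x)$ into the Riccati equation reduces it to the linear equation \[v'(x)-2W_0(x)v(x)=1,\] which is solved with the integrating factor $\psi_0^2(x)=e^{-2\int W_0(x)dx}$, giving \[f(x)=W_0(x)+\frac{\psi_0^2(x)}{C+\int_{x_0}^x\psi_0^2(s)ds}\] with $C$ the anticipated integration constant.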
According to SUSY QM the potentials $V_+^{(0)}(x)$ and $V(x)$ are isospectral (except for the lowest state of $V(x)$) provided that $f(x)$ is nonsingular. In addition, since $BB^\dag=A_0A_0^\dag$, it follows that the potentials $V_-^{(0)}(x)$ and $V(x)$ have strictly identical spectra.
In ref.~\cite{Mielnik}, Mielnik performed the factorization of the harmonic oscillator potential in this manner and obtained a one-parameter family of potentials with the oscillator spectrum; as we have just seen, the procedure generalizes straightforwardly to any potential $V_+^{(0)}(x)$.
In the standard (i.e. based on the first-order intertwining relation (1)) unbroken SUSY QM it is impossible to use an excited state of the original potential and at the same time avoid creating singularities in the partner potential \cite{Panigrahi}; there is no guarantee that the resulting wave functions are normalizable and the energy levels degenerate. The purpose of the present article is to modify the operators $B$ and $B^\dag$ so as to obtain new strictly isospectral potentials without being forced to solve Riccati equations, by reducing the Riccati equation (whose appearance in factorization problems is typical) to the solvable Bernoulli equation, and, more importantly, by applying the non-uniqueness of factorization to the superpotentials generated by the excited states of a potential, since these also satisfy the Schr\"odinger equation.
\section{Modified factorization}
In this section we show the consequences of the non-uniqueness of factorization method extended to the excited states of a potential, rather than just the ground state. In the literature the Hamiltonians $H_-^{(0)}$ and $H_+^{(0)}$ are called ``bosonic'' and ``fermionic'' respectively. We show that the degeneracy of energy levels of partner potentials depends on whether the bosonic or the fermionic Hamiltonians admit non-unique factorization.
\subsection{Bosonic Hamiltonian}
Let there be given an analytically solvable non-singular potential $V_-^{(0)}(x)$ whose energy eigenvalues $E_n$ and wave functions $\psi_n(x)$ are known. Without loss of generality, let $E_0$ be zero, so that $V_-^{(0)}(x)=\psi_0''(x)/\psi_0(x)=W_0^2(x)-W_0'(x)$ and also define \[V_-^{(n)}(x)=\psi_n ''(x)/\psi_n(x)=W_n^2(x)-W_n'(x)\] where $W_n(x)=-\partial_x\ln\psi_n(x)$ is taken to be the superpotential corresponding to $\psi_n(x)$. From the Schr\"odinger equation it follows that $V_-^{(n)}(x)=V_-^{(0)}(x)-E_n$, so that the potentials $V_-^{(n)}(x)$ are non-singular, even though the superpotentials $W_n(x)$ are always singular for $n>0$. Adjusting the energy scale seems appropriate: one simply subtracts from the potential the energy of the excited state so that the resulting potential can be factored.
Next we introduce the operators
\[B_n=\partial_x +f(x)+W_n(x)\mbox{ and } B_n^\dag=-\partial_x+f(x)+W_n(x)\] where $f(x)$ will be determined below. Notice that when $n=0$ these definitions reduce to the familiar case of standard unbroken SUSY QM if $f(x)=0$ and to Mielnik's factorization \cite{Mielnik} if $f(x)\ne 0$.
The factorization of the Hamiltonian $\tilde H_-^{(n)}=B_n^\dag B_n$ leads to
\[\tilde H_-^{(n)}=-\partial^2_x+V_-^{(n)}(x)+f^2(x)+2W_n(x) f(x)-f'(x).\] If we require that $f^2(x)+2W_n(x) f(x)-f'(x)=0$ the Hamiltonian becomes trivial because the potential $V_-^{(n)}(x)$ is related to $V_-^{(0)}(x)$ by a constant shift. On the other hand, the partner Hamiltonian $\tilde H_+^{(n)}=B_nB_n^\dag$ is less trivial \[\tilde H_+^{(n)}=-\partial^2_x+V_+^{(n)}+2f'(x)\] where $V_+^{(n)}(x)=W_n^2(x)+W_n'(x)$.
The function $f(x)$ is not arbitrary -- it is a solution of the Bernoulli equation (a specific example of the Riccati equation):
\[f'(x)=f^2(x)+2W_n(x) f(x)\] and reads \[f_n(x)=\frac{\psi^{-2}_n(x)}{C-\int_{x_0}^x\psi^{-2}_n(s)ds}\] where $C$, $x_0$ are constants. It follows that $\psi_n(x)$ must be inverse square integrable; however, in general the wave functions do not possess this property.
There is yet another problem, namely, the singularity of the potentials $V_+^{(n)}(x)$ for $n\ne0$ at the zeros of the wave functions. Consequently, the degeneracy of the energy levels of the Hamiltonians $\tilde H_-^{(n)}$ and $\tilde H_+^{(n)}$ breaks down (in addition to that of $H_-^{(n)}$ and $H_+^{(n)}$).
\subsection{Fermionic Hamiltonian}
The difficulties in establishing the degeneracy theorem for bosonic Hamiltonians suggest reversing the order of the operators $B_n$ and $B_n^\dag$ and starting with the fermionic Hamiltonian $\tilde H_+^{(n)}=B_nB^\dag_n$:
\[\tilde H_+^{(n)}=-\partial^2_x+V_+^{(n)}(x)+f^2(x)+2W_n(x) f(x)+f'(x)\] where $V_\pm^{(n)}(x)$ are defined as usual. We again obtain the Bernoulli equation \[f'(x)+f^2(x)+2f(x)W_n(x)=0\] whose general solution is \begin{equation}f_n(x)=\frac{\psi^2_n(x)}{C+\int_{x_0}^x\psi^2_n(s)ds}\end{equation} where $C$, $x_0$ are constants and $\psi_n(x)$ is assumed to be square-integrable.
If it is possible to restrict the domain of the parameter $C$ and make $f_n(x)$ free of singularities, then the potential $\tilde V_-^{(n)}(x)$ in \[\tilde H_-^{(n)}=B_n^\dag B_n=-\partial^2_x+\tilde V_-^{(n)}=-\partial^2_x+V_-^{(n)}-2f_n'(x)\] constitutes a one-parameter family of potentials isospectral to the potential $V_-^{(n)}(x)$.
To see this note that the Schr\"odinger equation $H_-^{(n)}\psi_k=(E_k-E_n)\psi_k$ implies \begin{eqnarray*}\tilde H_-^{(n)}[B_{n}^\dag A_n\psi_k]&=&B_{n}^\dag B_{n}B_{n}^\dag A_n\psi_k\\ &=& B_{n}^\dag A_nA_n^\dag A_n\psi_k\\&=&(E_k-E_n)[B_{n}^\dag A_n\psi_k]\end{eqnarray*} where we have used the non-uniqueness of factorization of the Hamiltonian $H_+^{(n)}=A_nA_n^\dag=B_nB_n^\dag$. So if $\psi_k(x)$ is an eigenfunction of the Hamiltonian $H_-^{(n)}$ with energy eigenvalue $E_k-E_n$, then $B_{n}^\dag A_n\psi_k$ is an eigenfunction of $\tilde H_-^{(n)}$ with the same energy.
Similarly, from the Schr\"odinger equation $\tilde H_-^{(n)}\tilde\psi_k^{(n)}=\tilde E_k^{(n)}\tilde\psi_k^{(n)}$ (where in $\tilde E_k^{(n)}$, $k$ denotes the energy level and $(n)$ refers to the $n^{th}$ eigenfunction of the Hamiltonian $H_-^{(n)}$) it follows that
\[H_-^{(n)}[A_n^\dag B_{n}\tilde\psi_k^{(n)}]=\tilde E_k^{(n)}[A_n^\dag B_{n}\tilde\psi_k^{(n)}]\]
Hence, the normalized eigenfunctions of the Hamiltonians $H_-^{(n)}$ and $\tilde H_-^{(n)}$ are related by \begin{equation}\tilde\psi_k^{(n)}(x)=(E_k-E_n)^{-1}[ B_{n}^\dag A_n\psi_k(x)]\end{equation} and \[\psi_k(x)=(E_k-E_n)^{-1}[ A_n^\dag B_{n} \tilde\psi_k^{(n)}(x)]\] where $k\ne n$. The operators $A_n$ or $B_{n}$ destroy a node in the eigenfunctions, but they are followed respectively by the operators $B_{n}^\dag$ or $A_n^\dag$ that create an extra node. Thus, the overall number of the nodes does not change. In addition, the normalization does not require positive semi-definiteness of the energy eigenvalues, as in the standard case. This is good because negative energy states appear when $n>0$.
For any $n$ there is always one missing state $k=n$ which can be obtained by solving the first order differential equation $B_{n}\tilde\psi_n^{(n)}=0$ (by construction the state $\tilde\psi_n^{(n)}$ has to be annihilated by the operator $B_{n}$):
\begin{eqnarray*}{d\tilde\psi_n^{(n)}(x)\over dx}&=&-\left(W_n(x)+\frac{\psi_n^2(x)}{C +\int_{x_0}^x{\psi_n^2(s)ds}}\right) \tilde\psi_n^{(n)}(x)\\&=&{d\over dx}\left(\ln\frac{\psi_n}{C +\int_{x_0}^x{\psi_n^2(s)ds}}\right)\tilde\psi_n^{(n)}(x)\end{eqnarray*}
Therefore, \begin{equation}\tilde \psi_n^{(n)}(x)=N(C)\times\frac{\psi_n}{C +\int_{x_0}^x{\psi_n^2(s)ds}}\end{equation}
with the corresponding energy $\tilde E_n^{(n)}=0$. All other energy eigenvalues satisfy $\tilde E_k^{(n)}=E_k-E_n$. The normalization constant $N(C)$ depends on the parameter $C$ and other parameters of the potential such as width, depth etc. It is a constraint that allows one to determine the values of $C$ for which the potentials $\tilde V_-^{(n)}(x)$ are nonsingular and eigenfunctions $\tilde\psi_k^{(n)}(x)$ are well-defined.
One observes that the intertwining relationship between the Hamiltonians $H_-^{(n)}$ and $\tilde H_-^{(n)}$ is of the second order: \[\tilde H_-^{(n)}B_n^\dag A_n=B_n^\dag A_n H_-^{(n)} \mbox{ and } H_-^{(n)}A_n^\dag B_n=A_n^\dag B_n\tilde H_-^{(n)}.\] In the second-order SUSY QM \cite{Hernandez} two different Hamiltonians are intertwined by an operator of the second-order in derivatives, say, $A=\partial_x^2+\eta(x)\partial_x+\gamma(x)$. If $A$ can be written as a product of two first-order differential operators with real superpotentials, then we call it reducible (otherwise one refers to it as irreducible). Thus, our construction is equivalent to the second-order SUSY QM with the reducible operator $A=-B_n^\dag A_n$. Performing an explicit factorization one finds that $-\eta(x)=f_n(x)$ and $-\gamma(x)=V_-^{(n)}(x)+f_n(x)W_n(x)$. Pros and cons of these related approaches are discussed in detail in the concluding section.
From now on we will discuss the degeneracy of energy levels of the Hamiltonians $H_-^{(n)}$ and $\tilde H_-^{(n)}$ only, leaving aside the Hamiltonian $H_+^{(n)}$ which plays an intermediate role in this construction.
\section{Examples}
Here we illustrate the results developed in the preceding section by providing examples that arise from well-known potentials and obtain some previously unreported potentials which might be of interest in various fields of physics and chemistry. One can also consult the ref.~\cite{Berger} where factorizations of the harmonic oscillator potential were performed.
\subsection{Morse potential}
Let us first consider the Morse potential \begin{equation}V_-^{(0)}(x)=A^2-B(2A+\alpha)e^{-\alpha x}+B^2e^{-2\alpha x}\end{equation} where the constants $A, B$ and $\alpha$ are nonnegative. There is a finite number of energy levels $E_k=k\alpha(2A-k\alpha)$ where $k$ takes integer values from zero to the greatest value for which $k\alpha< A$. For concreteness let us take $A=2$ and $\alpha=B=1$. The partner potential $\tilde V_-^{(0)}(x)$ is obtained from the ground state wave function $\psi_0(x)=e^{-2x -e^{-x}}$ of the potential $V_-^{(0)}(x)=4-5e^{-x}+e^{-2 x}$:
\begin{eqnarray*}&&\tilde V_-^{(0)}(x)=4-5e^{-x}+e^{-2 x}\\&-&16{d\over dx}\left(\frac{e^{-4x-2e^{-x}}}{C+e^{-2e^{-x}}(3+6e^{-x}+6e^{-2x}+4e^{-3x})}\right).\end{eqnarray*}
As the potential $V_-^{(0)}(x)$ it has only two bound states with eigenvalues $\tilde E_0^{(0)}=0$ and $\tilde E_1^{(0)}=3$. The normalized ground state wave function is \[\tilde\psi_0^{(0)}(x)=\frac{\sqrt{8C(C+3)\over3} \,e^{-2x -e^{-x}}}{C+e^{-2e^{-x}}(3+6e^{-x}+6e^{-2x}+4e^{-3x})}.\] Hence, the potential $\tilde V_-^{(0)}(x)$ is nonsingular as long as $C\not\in[-3,0]$ (see Fig. 1).
\begin{figure}[h]
\centering
\includegraphics[bb=0 0 240 235,width=3.2in,height=3.2in,keepaspectratio]{fig.1Color.eps}
\caption{A few members of the one-parameter family of potentials $\tilde V_-^{(0)}(x)$ isospectral to the Morse potential $V_-^{(0)}(x)$ with $A=2$ and $\alpha=B=1$ (thick blue line). }
\label{fig:fig.1}
\end{figure}
The normalized wave function $\tilde\psi_1^{(0)}(x)$ is determined by applying the operator $B_0^\dag A_0$ to the first (and only) normalized excited state $\psi_1(x)=2/\sqrt3e^{-x-e^{-x}}(3-2e^{-x})$ of the potential $V_-^{(0)}(x)$:\[\tilde\psi_1^{(0)}(x)=\frac{2e^{-e^{-x}} (6 + 12 e^x + 9 e^{2 x}) +Ce^{e^{-x}}( 3 e^{2x} - 2 e^{x})}{\sqrt3(4 + 6 e^x + 6 e^{2 x} + 3 e^{3 x} + C e^{2 e^{-x} + 3 x})}.\]
We would like to recall the ladder operators for the wave functions of the Morse potential given in (5) and explicitly derive them for the wave functions of the isospectral partner potential. Let us denote $s=A/\alpha$ and $y=(2B/\alpha) e^{-\alpha x}$, which is the common choice in the SUSY QM literature. Then for the creation $K_+$ and annihilation $K_-$ operators we have \cite{Dong}:
\[K_+=\left[\partial_y+{{s-n}\over y}-{s+1/2\over {2s-2n-1}}\right]\] and \[K_-=-\left[\partial_y-{{s-n}\over y}+{s+1/2\over {2s-2n+1}}\right]\] (we note that $K_-\ne K_+^\dag$) with the following effect: $K_+\psi_k(y)\propto\psi_{k+1}(y)$ and $K_-\psi_{k+1}(y)\propto\psi_{k}(y)$. The proportionality factors can be calculated after normalizing the eigenfunctions $\psi_k(y)=y^{s-k}e^{-y/2}L_k^{2s-2k}(y)$ where $L_k^{2s-2k}(y)$ are associated Laguerre polynomials.
Equation (3) enables us to deduce the ladder operators for the eigenvectors $\tilde \psi_k^{(n)}(y)$ of the potential $\tilde V_-^{(n)}(x)$, whose energy spectrum is identical to that of the Morse potential $V_-^{(n)}(x)$. The corresponding raising and lowering operators for $\tilde \psi_k^{(n)}(y)$ with $k\ne n$ are $(B_n^\dag A_n)K_+(A_n^\dag B_n)$ and $(B_n^\dag A_n)K_-(A_n^\dag B_n)$. The appearance of such higher-order ladder operators is a direct consequence of extending the first-order SUSY QM.
\subsection{CPRS potential}
In ref. \cite{Carinena} Cari\~nena, Perelomov, Ra\~nada and Santander (CPRS) have studied the following one-dimensional non-polynomial exactly solvable potential (we define our Hamiltonian to be $H_-^{(0)}=2H_{\text{CPRS}}+3$): \[V_-^{(0)}(x)=x^2+3+8\frac{2x^2-1}{(2x^2+1)^2}.\]
This potential asymptotically behaves like a simple harmonic oscillator, but its minimum at the origin is much deeper than in the case of the harmonic oscillator. Using SUSY QM techniques it was shown by Fellows and Smith \cite{Fellows} that $V_-^{(0)}(x)$ is a partner potential of the harmonic oscillator $x^2+5$ and, therefore, their energy levels are the same. Here we further analyze the CPRS potential and find new potentials with the oscillator spectrum (see also ref. \cite{Berger}).
The ground state energy $E_0=0$ and wave function \[\psi_0(x)=\frac{e^{-x^2/2}}{2x^2+1}\]of the potential $V_-^{(0)}(x)$
allows one to find its isospectral partner \begin{eqnarray*}&&\tilde V_-^{(0)}(x)=x^2+3+8\frac{2x^2-1}{(2x^2+1)^2}\\&-8&{d\over dx}\left(\frac{e^{-x^2}}{2x(2x^2+1)e^{-x^2}+(2x^2+1)^2(C+\sqrt\pi\operatorname{erf} x)}\right)\end{eqnarray*} which has no singularities when $|C|>\sqrt\pi$ (see Fig. 2) as follows from normalizing the ground state wave function $\tilde\psi_0^{(0)}(x)$ (see below).
\begin{figure}[h]
\centering
\includegraphics[bb=0 0 240 235,width=3.2in,height=3.2in,keepaspectratio]{fig.2Color.eps}
\caption{Plot of the potential $\tilde V_-^{(0)}(x)$ with $C=1.8$ (close to $\sqrt\pi$) and the unnormalized probability densities (dashed line at the corresponding level position) for its three lowest energy levels. The limit $C\to\infty$ corresponds to the CPRS potential (thick blue line).}
\label{fig:fig.2}
\end{figure}
Its eigenvalues are the same as those of the potential $V_-^{(0)}(x)$ and are given by $\tilde E_k^{(0)}=2k+4$ for $k=1,2,\hdots$. The normalized ground state wave function \[\tilde\psi_0^{(0)}(x)=\frac{\sqrt{2(C^2-\pi)/\sqrt\pi}\,e^{-x^2/2}}{2xe^{x^2}+(2x^2+1)(C+\sqrt\pi\operatorname{erf}(x))}\] corresponds to the energy eigenvalue $\tilde E_0^{(0)}=0$. The rest of the eigenfunctions can be derived using equation (3).
Neither Cari\~nena et al., nor Fellows and Smith provided the raising and lowering operators for the wave functions $\psi_k(x)$ of the CPRS potential. Here we address the question of finding ladder operators for the CPRS potential and its isospectral partner. Taking into account that the CPRS potential itself is a partner of the harmonic oscillator, we obtain its raising $A^\dag a^\dag A$ and lowering $A^\dag a A$ operators where \[A=\partial_x+x+\frac{4x}{2x^2+1}\] is needed to move between the CPRS potential and harmonic oscillator whose creation and annihilation operators are $a^\dag$ and $a$ respectively. Thus, the ladder operators for the wave functions $\tilde\psi_k^{(n)}(x)$ of the potential $\tilde V_-^{(n)}$ become $(B_n^\dag A_n)A^\dag a^\dag A(A_n^\dag B_n)$ and $(B_n^\dag A_n)A^\dag a A(A_n^\dag B_n)$ for $k\ne n$.
\subsection{Infinite square well potential}
Despite its simplicity, the one-dimensional infinite square well potential with a deformed bottom requires some new techniques for obtaining solutions of the corresponding Schr\"{o}dinger equation and usually one is unable to solve it exactly. In a recent paper \cite{Alhaidari}, exact solution for the problem with sinusoidal bottom has been deduced. In this subsection we explicitly find potentials with undulating bottom and energy spectrum coinciding with that of the infinite square well.
The wave functions and energy eigenvalues of the infinite square well potential $V_-^{(0)}(x)=-\pi^2/L^2$ of width $L$ are given by $\psi_k(x)=\sin({(k+1)\pi x/ L})$ with $0\le x\le L$ and $E_k=k(k+2)\pi^2/L^2$. Using this time, for variety, the first excited state wave function $\psi_1(x)$, we find a pair of partner potentials, namely, the infinite square well potential with flat bottom \[V_-^{(1)}(x)=-4\pi^2/L^2\] and an infinite square well potential with non-flat bottom, also defined in the region $0\le x\le L$ (see Fig. 3):
\begin{figure}[h]
\centering
\includegraphics[bb=0 0 240 235,width=3.2in,height=3.2in,keepaspectratio]{fig.3Color.eps}
\caption{Selected members of the family of one-parameter potentials $\tilde V_-^{(1)}(x)$. The limit $C\to\infty$ corresponds to the infinite square well $V_-^{(0)}(x)=-4$ of width $L=\pi$ (thick red line).}
\label{fig:fig.3}
\end{figure}
\[\tilde V_-^{(1)}(x)=-{4\pi^2\over L^2}-16{d\over dx}\left(\frac{\sin^2{(2\pi x/ L)}}{C+4 x-L/\pi\sin{(4\pi x/ L)}}\right).\] Both of the potentials have identical energy spectra $\tilde E_k^{(1)}=(k-1)(k+3)\pi^2/L^2$. The normalized first excited state of the potential $\tilde V_-^{(1)}(x)$ is calculated from (4) and reads \[\tilde\psi_1^{(1)}(x)=\sqrt{2C(C+4L)\over L}\frac{\sin{(2\pi x/ L)}}{C+4 x-L/\pi\sin{(4\pi x/ L)}}\] provided that $C\not\in[-4L,0]$. The wave functions $\tilde\psi_0^{(1)}(x)$, $\tilde\psi_2^{(1)}(x),\hdots$ can be found from (3). We only calculate the normalized lowest state eigenfunction: \[\tilde\psi_0^{(1)}(x)=\frac{\sin{\pi x\over L}\left(3\pi(C+4x)-8L\sin{2\pi x\over L}+L\sin{4\pi x\over L}\right)}{3\sqrt{L\over 2}\left(L\sin{4\pi x\over L}-\pi(C+4x)\right)}.\] It corresponds to the negative energy $\tilde E_0^{(1)}=-3\pi^2/L^2$ as expected since the potential $\tilde V_-^{(1)}(x)$ is generated by the first excited state of the original potential.
Note that the potential $\tilde V_-^{(1)}(x)$ satisfies \[\tilde V_-^{(1)}(C, x)=\tilde V_-^{(1)}(C+2L, x+L/2).\]
It is known \cite{Dong} that the eigenvectors $\psi_k(x)$ of the Hamiltonian $H_-^{(n)}$ admit the following creation and annihilation operators: \[M_+=\cos{\left(\pi x\over L\right)}\hat k+ {L\over\pi}\sin{\left(\pi x\over L\right)}\partial_x\] and \[M_-=\left[\cos{\left(\pi x\over L\right)}\hat k- {L\over\pi}\sin{\left(\pi x\over L\right)}\partial_x\right]\hat k^{-1}(\hat k-1)\] where one defines the "number" operator $\hat k$ and its inverse $\hat k^{-1}$ such that $\hat k \psi_k(x)=(k+1)\psi_k(x)$ and $\hat k^{-1}\psi_k(x)=(k+1)^{-1}\psi_k(x)$. The ladder operators $M_\pm$ obey \[M_-\psi_k(x)=k\psi_{k-1}(x) \mbox{ and } M_+\psi_k(x)=(k+1)\psi_{k+1}(x).\]
It is not hard to convince oneself that the raising and lowering operators for the wave functions $\tilde \psi_k^{(n)}$ of the partner isospectral Hamiltonian $\tilde H_-^{(n)}$ are given by $(B_n^\dag A_n) M_+(A_n^\dag B_n)$ and $(B_n^\dag A_n) M_-(A_n^\dag B_n)$ respectively for $k\ne n$ (when $k=n$ use equation (4)).
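As an independent check, the claimed isospectrality can be verified numerically; a minimal finite-difference sketch (the grid size and the value of $C$ are our illustrative choices) is:
\begin{verbatim}
# Sketch verifying isospectrality of V_-^{(1)} and its partner for
# L = pi and a sample C, by diagonalizing finite-difference
# Hamiltonians H = -d^2/dx^2 + V on (0, L) with Dirichlet walls.
import numpy as np

L, C, N = np.pi, 20.0, 4000
x = np.linspace(0.0, L, N + 2)[1:-1]    # interior grid points
dx = x[1] - x[0]

V0 = np.full_like(x, -4.0 * np.pi**2 / L**2)
g = (np.sin(2 * np.pi * x / L) ** 2
     / (C + 4 * x - (L / np.pi) * np.sin(4 * np.pi * x / L)))
V1 = V0 - 16.0 * np.gradient(g, dx)     # the partner potential

def spectrum(V, n=4):
    H = (np.diag(2.0 / dx**2 + V)
         - np.diag(np.ones(N - 1) / dx**2, 1)
         - np.diag(np.ones(N - 1) / dx**2, -1))
    return np.linalg.eigvalsh(H)[:n]

print(spectrum(V0))  # approx (k-1)(k+3)*pi^2/L^2 for k = 0, 1, ...
print(spectrum(V1))  # should agree with the spectrum above
\end{verbatim}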
\subsection{Two-parameter set of potentials isospectral to the harmonic oscillator}
Given an eigenfunction $\psi_n(x)$ of the potential $V_-^{(0)}(x)$, one can find the wave function $\tilde\psi_k^{(n)}(x)$ of the one-parameter potential $\tilde V_-^{(n)}$ using equation (3). One can then repeat this procedure and, instead of the eigenfunction $\psi_n(x)$ in (2) and (3), use $\tilde\psi_k^{(n)}(x)$ to obtain a two-parameter potential $\tilde V_-^{(n,k)}(x)$ and its eigenfunctions. One can go on with this construction and obtain well-defined multi-parameter potentials strictly isospectral to the potential $V_-^{(k)}(x)$.
Let us focus on the harmonic oscillator $V_-^{(0)}(x)=x^2-1$ (with $\omega=2$) whose ground-state wave function is $\psi_0(x)=e^{-x^2/2}$. The potential $\tilde V_-^{(0)}(x)$ is carefully discussed in Refs. \cite{Mielnik, Berger, Abraham}, each using a different approach, so in the following we omit unnecessary calculations and only state its normalized first excited state wave function: \[\tilde\psi_1^{(0)}(x)=\sqrt{2\over\sqrt\pi}\frac{e^{-3x^2/2}(1+2C x e^{x^2}+\sqrt\pi xe^{x^2}\operatorname{erf}(x))}{2C+\sqrt\pi\operatorname{erf}(x)}\] where $|C|>\sqrt\pi/2$ guarantees non-singularity of the potential $\tilde V_-^{(0)}(x)$. Applying (2) to the wave function $\tilde\psi_1^{(0)}(x)$ we get the two-parameter potential (see Fig. 4): \begin{eqnarray*}&&\tilde V_-^{(0,1)}(x)=x^2-3\\&-&2{d\over dx}\left(\frac{e^{-x^2}}{C+(\sqrt\pi/2)\operatorname{erf}(x)}+\frac{(\tilde\psi_1^{(0)}(x))^2}{\tilde C+\int_{x_0}^x{(\tilde\psi_1^{(0)}(s))^2}ds}\right)\end{eqnarray*} which is isospectral to the potential $\tilde V_-^{(0)}(x)-2$, which is in turn isospectral to the harmonic oscillator $V_-^{(1)}(x)=x^2-3$; i.e., its energy levels are $\tilde E_k^{(0,1)}=2(k-1)$.
The potential $\tilde V_-^{(0,1)}(x)$ is non-singular for any $C\ne0$ and $|\tilde C +1/(4C)|>\sqrt\pi/4$ as follows from normalizing its ground state wave function. This family includes the oscillator potential $x^2-3$ in the limit $C, \tilde C \to\infty $; the potential $\tilde V_-^{(0)}(x)$ arises when $\tilde C\to\infty$; and finally $\tilde V_-^{(0,1)}(x)$ reduces to the potential $\tilde V_-^{(1)}(x)$ \cite{Berger} in the limit $C\to\infty$.
\begin{figure}[h]
\centering
\includegraphics[bb=0 0 240 235,width=3.2in,height=3.2in,keepaspectratio]{fig.4Color.eps}
\caption{Plot of the potentials $\tilde V_-^{(0,1)}(x)$, $V_-^{(1)}(x)=x^2-3$ and the non-normalized probability densities (dashed line at the corresponding level position) for the three lowest energy levels of $\tilde V_-^{(0,1)}(x)$.}
\label{fig:fig.4}
\end{figure}
Let us briefly mention how to obtain its eigenfunctions. There is an expression similar to (3) for $k=2,3,\dots$: \[\tilde\psi_k^{(0,1)}(x)\propto\tilde B_1^\dag \tilde A_1\tilde \psi_k^{(0)}(x)\propto\tilde B_1^\dag \tilde A_1B_0^\dag A_0\psi_k(x)\] where $\tilde \psi_k^{(0)}(x)$ and $\psi_k(x)$ are the eigenfunctions of the potential $\tilde V_-^{(0)}(x)$ and the harmonic oscillator, respectively. The operators $\tilde B_1^\dag$, $\tilde A_1$ are defined by \[\tilde A_1=\partial_x-\partial_x\ln\tilde \psi_1^{(0)}(x)\] and \[\tilde B_1^\dag=-\partial_x+\partial_x\ln\frac{\left(\tilde C+\int_{x_0}^x{(\tilde\psi_1^{(0)}(s))^2}ds\right)}{\tilde \psi_1^{(0)}(x)}.\] Lastly, the raising and lowering operators for the eigenvectors $\tilde\psi_k^{(0,1)}(x)$ are given by $(\tilde B_1^\dag\tilde A_1B_0^\dag A_0)\,A_0^\dag\,(A_0^\dag B_0\tilde A_1^\dag \tilde B_1)$ and $(\tilde B_1^\dag\tilde A_1B_0^\dag A_0)\,A_0\,(A_0^\dag B_0\tilde A_1^\dag \tilde B_1)$, with $A_0^\dag$, $A_0$ the creation and annihilation operators of the harmonic oscillator.
The two-parameter family of potentials with oscillator spectrum was also derived by the so-called second-order intertwining technique in Ref.~\cite{Fernandezz}. The advantage of the technique presented here is that it yields multi-parameter sets of isospectral potentials, together with their eigenfunctions and ladder operators, by straightforward iteration.
\section{Conclusion}
After the discovery of supersymmetry in string theory and then in field theory, factorization was recognized as the application of supersymmetry to quantum mechanics. The non-uniqueness of factorization provides an avenue for the construction of many isospectral potentials. In this paper we have explored the generality of this method by extending it to the excited states of a potential, and we have presented several nonsingular isospectral potentials that arise from the technique.
These include one-parameter extensions of the well-known infinite square well and Morse potentials, as well as of the less familiar CPRS potential, and a two-parameter extension of the harmonic oscillator. For some potentials the associated wave functions and probability densities have been derived and plotted, and the ladder operators have been determined explicitly. The application of this technique may be of significant interest because it can be applied to any one-dimensional quantum-mechanical potential.
The most general approach in second-order SUSY QM is based on an arbitrary solution of the Schr\"odinger equation for the initial potential, rather than on its ground or excited state wave functions as discussed in the present article. However, there are certain advantages to the present formulation. For example, one can explicitly construct the ladder operators for both isospectral Hamiltonians. It is also possible to avoid some technical complexities of the most general approach by mimicking the traditional first-order SUSY QM. For instance, in second-order SUSY QM neither of the expressions $AA^\dag$ and $A^\dag A$ coincides with any of the isospectral partner Hamiltonians; they are instead quadratic forms in them. By comparison, in our construction, which is based on the non-uniqueness of factorization, the appearance of the atypical Hamiltonian $H_+^{(n)}$ at the intermediate stage does not affect the isospectral partner Hamiltonians $\tilde H_-^{(n)}$ and $H_-^{(n)}$.
\section*{Acknowledgments}
N.U. was supported by the Hutton Honors College Research Grant. M.B.\ was supported in part by the U.S. Department of Energy under Grant No.~DE-FG02-91ER40661.
\begin{thebibliography}{8}
\bibitem{Hull} L. Infeld and T. E. Hull, Rev. Mod. Phys. \textbf{23}, 21 (1951).
\bibitem{Cooper} F. Cooper, A. Khare and U. Sukhatme, {\it Supersymmetry in Quantum Mechanics} (World Scientific, 2001); B. K. Bagchi, {\it Supersymmetry in Quantum and Classical Mechanics} (Chapman \& Hall/CRC, 2001).
\bibitem{Berger} M. S. Berger and N. S. Ussembayev, arXiv:1007.5116.
\bibitem{Mielnik} B. Mielnik, J. Math. Phys. \textbf{25}, 3387 (1984).
\bibitem{Mitra} D. J. Fernandez, Lett. Math. Phys. \textbf{8}, 337 (1984); A. Mitra et al., Int. J. Theor. Phys. \textbf{28}, 911 (1989); H. Rosu, Int. J. Theor. Phys. \textbf{39}, 105 (2000).
\bibitem{Panigrahi} P. K. Panigrahi and U. P. Sukhatme, Phys. Lett. A \textbf{178}, 251 (1993).
\bibitem{Hernandez} D. J. Fernandez and E. Salinas-Hernandez, J. Phys. A: Math. Gen. \textbf{36}, 2537 (2003); D. J. Fernandez and E. Salinas-Hernandez, Phys. Lett. A \textbf{338}, 13 (2005).
\bibitem{Dong} S. Dong, {\it Factorization Method in Quantum Mechanics} (Springer, 2007).
\bibitem{Carinena} J. F. Cari\~{n}ena et al., J. Phys. A: Math. Theor. \textbf{41}, 085301 (2008).
\bibitem{Fellows} J. M. Fellows and R. A. Smith, J. Phys. A: Math. Theor. \textbf{42}, 335303 (2009).
\bibitem{Alhaidari} A. D. Alhaidari and H. Bahlouli, J. Math. Phys. \textbf{49}, 082102 (2008).
\bibitem{Fernandezz} D. J. Fernandez, M. L. Glasser and L. M. Nieto, Phys. Lett. A \textbf{240}, 15 (1998).
\bibitem{Abraham} P. Abraham and H. Moses, Phys. Rev. A \textbf{22}, 1333 (1980).
\end{thebibliography}
\end{document}
\section{Introduction}\label{Introduction}
In this article we review recent progress in optimal control of stochastic thermodynamic systems. We focus on classical isothermal stochastic thermodynamics, describing control through linear-response theory, thermodynamic geometry, and optimal-transport theory.
Historically, modern thermodynamic control began with the study of finite-time thermodynamics of macroscopic systems,~\cite{andresen1977,salamon1977,andresen2011} the natural extension beyond quasistatic (infinitely slow) processes. Fundamentally, any finite-time thermodynamic control will induce some degree of irreversibility, manifesting as energy dissipated into the environment. A goal of finite-time thermodynamics is to quantify and minimize this dissipation through the use of designed control strategies. For example, Ref.~\onlinecite{band1982} studied the optimal cycle for finite-time operation of a heat engine and found that instantaneous jumps in control parameters are necessary to minimize dissipation.
In parallel, a thermodynamic-geometry framework was developed to provide a novel means to describe thermodynamic processes on a smooth (generally Riemannian) manifold.\cite{Weinhold1975,ruppeiner1979,Crooks2007} Ref.~\onlinecite{salamon1983} showed the connections between thermodynamic geometry and minimum-dissipation protocols, opening the door for the development of a geometric description of minimum-dissipation protocols. Although theoretically compelling, the utility of the framework was not fully realized until the development of stochastic thermodynamics.
The aforementioned descriptions focused on macroscopic systems that equilibrate rapidly and whose fluctuations are relatively small. The advent of modern experimental techniques, including single-molecule biophysical experiments, created demand for a theoretical description of the energetics of microscopic systems. Optical tweezers and magnetic traps~\cite{ashkin1970,bustamante2021,polimeno2018,moffitt2008} allow for the precise manipulation of individual polymer strands~\cite{liphardt2002,collin2005,bustamante2000,bustamante2003,woodside2006,neupane2017} and nanoscale molecular machines~\cite{unksov2022} (e.g., ATP synthase,\cite{Toyabe2011,toyabe2012,kawaguchi2014} kinesin,\cite{svoboda1993,svoboda1994,kojima1997,hunt1994} and myosin\cite{greenberg2016,Laakso2008,norstrom2010,nagy2013}). Due to their small scale, these systems are not accurately described by macroscopic thermodynamics since the fluctuations (of order $k_{\rm B}T$) are comparable to the systems' internal energy scales, and the operating speeds of single-molecule experiments and molecular machines are comparable to those systems' natural relaxation times.
To describe these small-scale systems, the field of stochastic thermodynamics was developed, which aims to describe the nonequilibrium energetics of stochastic (fluctuating) microscopic systems~\cite{Seifert2012,Jarzynski2011}. Just like its macroscopic counterpart, a central goal of stochastic thermodynamics is the description of optimal control strategies: methods for performing a given task at minimum energetic cost~\cite{Brown2017,Brown2019}.
There are two distinct but related types of control we consider in this review: full control (section~\ref{Full control}) and parametric control (section~\ref{Parametric control}). Full control assumes we have complete control of the probability distribution (Fig.~\ref{Full_vs_parametric}, top). Parametric control adjusts a finite number of control parameters (Fig.~\ref{Full_vs_parametric}, bottom), and in doing so drives the probability distribution.
\begin{figure}
\includegraphics[width=\linewidth]{OT_and_Parametric.pdf}
\caption{Comparing full control and parametric control. Full control (top) assumes complete control of the probability distribution $p(\boldsymbol{r},t)$ (shaded) which can be optimally driven between the endpoints by a potential $V(\boldsymbol{r},t)$ (red dashed curves). Parametric control (bottom) adjusts a finite number of control parameters $\boldsymbol{\lambda}(t)$ according to a protocol $\boldsymbol{\Lambda}$ between specified endpoints, thereby driving the probability distribution $p(\boldsymbol{r},\boldsymbol{\Lambda})$.}
\label{Full_vs_parametric}
\end{figure}
For either full or parametric control, the exact minimum-dissipation protocol is known if the probability distribution is Gaussian~\cite{Schmiedl2007,abiuso2022,Blaber2022_strong}. These exact solutions provide a glimpse into the properties of optimal control processes. For example, just like the finite-time thermodynamic control described previously, the minimum-dissipation protocol has discontinuous changes in control parameters at the start and end of the protocol but remains continuous between these endpoints~\cite{Schmiedl2007}. These discontinuities are present even for underdamped dynamics~\cite{Gomez2008}. The control-parameter jumps have been observed in a number of different systems~\cite{Gomez2008,Then2008,Esposito2010}, are now well understood, and have been shown to be a general feature~\cite{Blaber2021}.
For more general solutions under full control, the study of minimum-dissipation protocols can be mapped onto a problem of optimal-transport theory, a well-developed branch of mathematics for which there exist numerous algorithms and methods for determining the optimal-transport map~\cite{villani2009,santambrogio2015}. The connection between minimum-dissipation protocols and optimal-transport theory was first shown in Ref.~\onlinecite{Aurell2011} for overdamped dynamics: the protocol that minimizes dissipation when driving a system obeying overdamped Fokker-Planck dynamics between specified initial and final distributions is governed by the Wasserstein distance~\cite{Zhang2019,nakazato2021,dechant2022,miangolarra2022} and the Benamou-Brenier formula~\cite{benamou2000}. This technique eventually led to new fundamental lower bounds on the average work required for finite-time information erasure~\cite{Proesmans2020,proesmans2020optimal}. Initially only applicable to overdamped dynamics, the connections between optimal-transport theory and minimum-dissipation protocols have recently been shown for discrete-state and quantum systems~\cite{dechant2022,dechant2022geometric,yoshimura2022,zhong2022,van2022,van2022Topological}.
General solutions for parametric control are typically difficult to determine, although recent progress has been made towards exact solutions for general systems building off of optimal-transport~\cite{zhong2022} or advanced numerical techniques~\cite{Then2008,gingrich2016,engel2022}. Although exact solutions are convenient where possible, the determination of minimum-dissipation protocols can be considerably simplified through approximate methods. Inspired by a diagram presented in Ref.~\onlinecite{bonanca2018}, we schematically show in Fig.~\ref{Parametric_diagram} the limits where minimum-dissipation protocols are known.
Linear-response theory can be used to determine the minimum-dissipation protocol for weak perturbations and performs relatively well at any driving speed and beyond its strict range of validity~\cite{kamizaki2022,bonanca2018}. For slow control, the thermodynamic-geometry framework has been generalized to stochastic thermodynamic systems~\cite{Crooks2007,Sivak2016} and has been used to explore a diverse set of model systems~\cite{Sivak2016,Blaber2020,deffner2020,zulkowski2012,bonancca2014,zulkowski2015,zulkowski2015Quantum,Large2019,Lucero2019,Rotskoff2015,Rotskoff2017,louwerse2022,frim2022,frim2021}, including DNA-pulling experiments~\cite{Tafoya2019} and free-energy estimation~\cite{Blaber2020Skewed}. In the opposite limit of fast control, minimum-dissipation protocols are described by short-time efficient protocols~\cite{Blaber2021}, which can be combined with the thermodynamic-geometry framework to design interpolated protocols that perform well at any driving speed~\cite{Blaber2022}. Leveraging known solutions from optimal-transport theory, strong control can be described by the strong-trap approximation, yielding explicit solutions for minimum-dissipation protocols~\cite{Blaber2022_strong}.
\begin{figure}
\includegraphics[width=0.75\linewidth]{Parametric_diagram.pdf}
\caption{The space of thermodynamic control. Horizontal axis is the driving speed from slow to fast, and the vertical axis is the strength of driving from weak to strong. Linear-response theory is applicable to weak and slow driving (blue), and can be simplified to a thermodynamic-geometry framework for slow driving. Short-time efficient protocols are valid for fast driving (purple) and can be combined with thermodynamic geometry to bridge the space between slow and fast with interpolated protocols (green). The strong-trap approximation (red) is only valid for overdamped dynamics, with region of applicability schematically indicated by a distinct dotted line. Exact solutions for Gaussian distributions serve as a window into the properties of minimum-dissipation protocols and are valid at any driving speed or strength of driving.}
\label{Parametric_diagram}
\end{figure}
There are a number of related topics which are not covered in this review, such as optimal control of heat engines (including optimal cycles~\cite{ma2018,zhang2020,abiuso2020,frim2021,frim2022,chen2022} and efficiency at maximum power~\cite{curzon1975,van2005,schmiedl2008efficiency,esposito2009,esposito2010efficiency,brandner2015,proesmans2016,shiraishi2016,ma2018universal,ma2020,miller2020,brandner2020,miangolarra2021,miangolarra2022,watanabe2022}) and optimal control in quantum thermodynamics (including thermodynamic geometry~\cite{acconcia2015,zulkowski2015Quantum,scandi2019,deffner2020} and shortcuts to adiabaticity~\cite{takahashi2017,guery2019}).
This review is organized as follows: we begin with examples of both experimental and theoretical model systems in section~\ref{Model Systems}, followed by a brief introduction to stochastic thermodynamics of heat, work, and entropy production in section~\ref{Thermodynamics}. Section~\ref{Full control} reviews recent progress exploiting optimal-transport theory to determine minimum-dissipation protocols under full control, yielding explicit solutions for the minimum-dissipation protocol under a strong-trap approximation in section~\ref{strong trap approximation} and allowing for constrained final control parameters in section~\ref{Constrained final control parameters}. Section~\ref{Parametric control} reviews parametric control, focusing on approximation methods in the fast (section~\ref{Fast control}), weak (section~\ref{Linear response}), and slow (section~\ref{Slow control}) limits. Applications to free-energy estimation are discussed in section~\ref{Free energy estimation} before comparing the performance of designed protocols in section~\ref{Comparison between control strategies} and finally concluding in section~\ref{Perspective and outlook} with a perspective and outlook for the study of optimal control in stochastic thermodynamics.
\section{Model systems}\label{Model Systems}
In this section we provide a brief introduction to a few paradigmatic model systems that motivate and guide the study of optimal control in stochastic thermodynamics. As discussed in section~\ref{Introduction}, the growth of stochastic thermodynamics coincides with the advent of new experimental techniques used to manipulate and measure single-molecule biophysical systems~\cite{Toyabe2011,toyabe2012,kawaguchi2014,svoboda1993,svoboda1994,kojima1997,hunt1994,greenberg2016,Laakso2008,norstrom2010,nagy2013}. First and foremost among these techniques are laser optical tweezers~\cite{bustamante2021,polimeno2018,moffitt2008} which can be used to trap microscopic Brownian systems.
The simplest experimental apparatus for studying stochastic thermodynamics is that of a microscopic bead trapped in an optical potential. From a theoretical perspective, this system is well approximated by continuous overdamped Brownian motion in a quadratic constraining potential. In these experiments, the center and stiffness of the trapping potential can be dynamically controlled to manipulate the system. With the use of feedback control, this experimental apparatus can be augmented to realize a virtual constraining potential of any form~\cite{kumar2018,kumar2019,gavrilov2017} and can, for example, be used to study fundamental bounds on information processing through bit erasure~\cite{jun2014,gavrilov2016}.
Microscopic beads trapped by laser optical tweezers can be attached to biopolymers to probe their properties. For example, dual-trap optical tweezers can be used to fold and unfold DNA or RNA hairpins by modulating the separation between the trapping potentials (Fig.~\ref{fig_Model_Systems} a).\cite{liphardt2002,collin2005,bustamante2000,bustamante2003,woodside2006,neupane2017} Monitoring the position of the probe beads provides insight into the properties of the indirectly observed biopolymers. The simplest model representing this process is that of a driven barrier crossing~\cite{neupane2015}, where a Brownian system is dynamically driven over an energy barrier by a time-varying quadratic trapping potential (Fig.~\ref{fig_Model_Systems} d).\cite{Sivak2016,Blaber2022}
\begin{figure}
\includegraphics[width=\linewidth]{Model_systems.pdf}
\caption{Model systems typical of stochastic thermodynamic control (top): a) DNA hairpin driven between folded and unfolded states by laser optical tweezers, b) ATP synthase driven by a magnetic trapping potential, c) nanomagnetic bit driven by an external magnetic field. Simplified theoretical descriptions (bottom) of the model systems in the top row: d) symmetric barrier-crossing model, e) Brownian rotary motor model, and f) nine-spin Ising model with independent magnetic fields applied to each spin.}
\label{fig_Model_Systems}
\end{figure}
Magnetic traps can be used to probe the F1 component of the rotary motor ATP synthase (Fig.~\ref{fig_Model_Systems} b),\cite{Toyabe2011,toyabe2012,kawaguchi2014} which is driven periodically to synthesize ATP, an essential and portable energy source for the cell. Once again, microscopic beads are used to probe the properties of the molecular machine and can be dynamically driven (Fig.~\ref{fig_Model_Systems} e); however, the control differs from driven barrier crossing in that the driving is periodic.
As a final example, consider a nanomagnetic bit characterized by its spin state or average magnetization (Fig.~\ref{fig_Model_Systems} c). By applying an external magnetic field, the system state can be driven from all spin-down to all spin-up, reversing the magnetization and resulting in a bit flip~\cite{hong2016}. This type of system is typically modeled with a discrete state space, e.g., the Ising model (Fig.~\ref{fig_Model_Systems} f), and optimal control of this system has been investigated~\cite{Rotskoff2015,Rotskoff2017,louwerse2022}. Due to the discrete state space, the properties of optimal control can differ from those for a system with a continuous state space.
Throughout this review we will use the model system of overdamped driven barrier crossing previously studied in Ref.~\onlinecite{Blaber2022} in order to give some intuition and examples for optimal control. The model consists of an overdamped Brownian particle in a double-well potential (symmetric for simplicity) constrained and driven by a quadratic trapping potential (Fig.~\ref{fig_Model_Systems} d). This system represents a simplified model of hairpin pulling experiments and Landauer erasure~\cite{Blaber2022,Proesmans2020,proesmans2020optimal}. The two-state nature of the system is also representative of activated processes such as chemical reactions, and the barrier crossing mimics the main features of experiments performed on ATP synthase, whose dynamics can be approximated as a series of barrier crossings~\cite{kawaguchi2014,Lucero2019,gupta2022}. This model is also typical of steered molecular-dynamics simulations, which use a time-dependent quadratic potential to drive reactions~\cite{Park2003,Park2004,Dellago2014}.
The total potential $V_{\rm tot}[x,x^{\rm c}(t),k(t)] = V_{\rm land}[x] + V_{\rm trap}[x,x^{\rm c}(t),k(t)]$ is the sum of the static hairpin potential $V_{\rm land}[x]$ and time-dependent trap potential $V_{\rm trap}[x,x^{\rm c}(t),k(t)]$ (shown schematically in Fig.~\ref{fig_Model_Systems}d). The hairpin potential is modeled as a static symmetric double well with the two minima at $x = 0$ and $x = \Delta x_{\rm m}$ representing the folded and unfolded states~\cite{neupane2015,neupane2017,woodside2006,Sivak2016},
\begin{align}
V_{\rm land}(x) = E_{\rm B} \left[\left(\frac{2x-\Delta x_{\rm m}}{\Delta x_{\rm m}}\right)^2-1\right]^{2} \ ,
\label{double well}
\end{align}
for barrier height $E_{\rm B}$, distance $x_{\rm m}$ from the minimum to the barrier, and distance $\Delta x_{\rm m}= 2x_{\rm m}$ between the minima. The system is driven by a quadratic trap
\begin{align}
V_{\rm trap}[x,x^{\rm c}(t),k(t)]= \frac{k(t)}{2}\left[x^{\rm c}(t)-x\right]^2 \ ,
\label{trap potential}
\end{align}
with time-dependent stiffness $k(t)$ and center $x^{\rm c}(t)$.
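As a concrete illustration, this model can be simulated with an Euler--Maruyama discretization of the overdamped Langevin equation. The following is a minimal sketch of our own (parameter values are arbitrary, and we set $\beta=D=1$):
\begin{verbatim}
import numpy as np

E_B, dxm = 4.0, 2.0     # barrier height and distance between minima
beta, D = 1.0, 1.0      # inverse temperature and diffusivity

def V_land(x):          # static symmetric double-well landscape
    u = (2*x - dxm)/dxm
    return E_B*(u*u - 1.0)**2

def V_land_prime(x):    # dV_land/dx
    u = (2*x - dxm)/dxm
    return 8.0*E_B*u*(u*u - 1.0)/dxm

def em_step(x, xc, k, dt, rng):
    # Euler-Maruyama step: dx = beta*D*F dt + sqrt(2 D dt) N(0,1)
    F = -V_land_prime(x) + k*(xc - x)   # landscape force + trap force
    return x + beta*D*F*dt + np.sqrt(2.0*D*dt)*rng.standard_normal()

rng = np.random.default_rng(0)
x, k, dt, n = 0.0, 16.0, 1e-4, 200000
for i in range(n):
    xc = dxm*i/n                        # naive protocol: linear in time
    x = em_step(x, xc, k, dt, rng)
\end{verbatim}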
\section{Thermodynamics}\label{Thermodynamics}
In this review we focus on thermodynamics at the distribution level, which is generally described by dynamics of the form
\begin{align}
\frac{\partial p(\boldsymbol{r},t)}{\partial t} = \mathcal{L}[\boldsymbol{r},t] \ p(\boldsymbol{r},t) \ ,
\label{General_dynamics}
\end{align}
governing the time evolution of a nonequilibrium probability distribution $p(\boldsymbol{r},t)$ over position vector $\boldsymbol{r}$ at time $t$ according to the time-evolution operator $\mathcal{L}[\boldsymbol{r},t]$.
Continuous-space stochastic systems are described by the Fokker-Planck equation, which for overdamped dynamics has the time-evolution operator
\begin{align}
\mathcal{L}[\boldsymbol{r},t] = -\nabla\cdot\boldsymbol{v}(\boldsymbol{r},t) - \boldsymbol{v}(\boldsymbol{r},t) \cdot\nabla \ ,
\label{Fokker Planck}
\end{align}
for mean local velocity~\cite{nakazato2021}
\begin{align}
\boldsymbol{v}(\boldsymbol{r},t) \equiv -D\nabla\left[\beta V_{\rm tot}(\boldsymbol{r},t) + \ln p(\boldsymbol{r},t)\right] \ ,
\label{velocity}
\end{align}
total potential $V_{\rm tot}$, diffusivity $D$, and $\beta \equiv (k_{\rm B}T)^{-1}$ for temperature $T$ and Boltzmann's constant $k_{\rm B}$. For a discrete-state system, $\mathcal{L}[\boldsymbol{r},t]$ is the transition rate matrix. For the example of driven barrier crossing, the total potential includes both the hairpin and trapping potential, and the time dependence arises from dynamic changes in the trap center and stiffness which drive the system over the barrier.
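In one dimension, the mean local velocity and the resulting Fokker--Planck evolution can be sketched on a grid (our own illustration, with $\beta=D=1$; \texttt{V} is any total potential evaluated on the grid, e.g.\ built from \texttt{V\_land} above):
\begin{verbatim}
import numpy as np

beta, D = 1.0, 1.0
x = np.linspace(-2.0, 4.0, 1201)   # spatial grid
dx = x[1] - x[0]

def mean_local_velocity(p, V):
    # v = -D d/dx [beta*V + ln p]
    return -D*np.gradient(beta*V + np.log(np.clip(p, 1e-300, None)), dx)

def fp_step(p, V, dt):
    # dp/dt = -d/dx (v p): continuity equation for the probability
    v = mean_local_velocity(p, V)
    p_new = p - dt*np.gradient(v*p, dx)
    return p_new/np.sum(p_new*dx)   # renormalize against drift error
\end{verbatim}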
The average system energy is
\begin{align}
U = \int\mathrm d \boldsymbol{r} ~V(\boldsymbol{r},t)p(\boldsymbol{r},t) \ ,
\end{align}
and the rate of change in energy is
\begin{align}
\frac{\mathrm d U}{\mathrm d t} = \int\mathrm d \boldsymbol{r}\left[ \frac{\partial V(\boldsymbol{r},t)}{\partial t}p(\boldsymbol{r},t) + \frac{\partial p(\boldsymbol{r},t)}{\partial t}V(\boldsymbol{r},t)\right] \label{dU/dt}\ .
\end{align}
The first term quantifies work $W$ done on the system by an external agent controlling the potential $V(\boldsymbol{r},t)$ (e.g., an experimentalist dynamically driving a trapping potential) and the second quantifies heat flow $Q$ into the system from changes in the system distribution $p(\boldsymbol{r},t)$ (e.g., the system responding and relaxing towards a new equilibrium distribution in response to movement of the center and stiffness of the trapping potential):
\begin{align}
\dot{W} &\equiv \int\mathrm d \boldsymbol{r} ~\frac{\partial V(\boldsymbol{r},t)}{\partial t}p(\boldsymbol{r},t) \ , \label{Work def}\\
\dot{Q} &\equiv \int\mathrm d \boldsymbol{r}~\frac{\partial p(\boldsymbol{r},t)}{\partial t}V(\boldsymbol{r},t) \label{heat def}\ .
\end{align}
Throughout, a dot above a variable denotes the rate of change with respect to time. Substituting \eqref{Work def} and \eqref{heat def} into \eqref{dU/dt} gives the first law of thermodynamics: any change in system energy equals work and heat flows into the system.
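At the trajectory level these definitions discretize in the standard way: work is accumulated when the control parameter moves at fixed state, and heat when the state moves at fixed control parameter. A sketch of our own, where \texttt{V(x, lam)} is an assumed callable total potential:
\begin{verbatim}
def work_and_heat(xs, lams, V):
    # xs[i], lams[i]: state and control parameter at time step i
    W = Q = 0.0
    for i in range(len(xs) - 1):
        W += V(xs[i], lams[i+1]) - V(xs[i], lams[i])      # control moves
        Q += V(xs[i+1], lams[i+1]) - V(xs[i], lams[i+1])  # state moves
    return W, Q
\end{verbatim}
The two increments telescope to the total energy change, reproducing the first law along each trajectory.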
Optimal control in thermodynamics is often discussed in terms of minimizing either entropy production or excess work incurred during the protocol. The entropy production as defined in this review will be used for periodic systems (e.g., ATP synthase), and when we have full control over the distribution (section~\ref{Full control}). The excess work is used for parametric control (section~\ref{Parametric control}) and model systems like DNA hairpins and nanomagnetic bits which are driven between control-parameter endpoints rather than periodically.
To understand these two concepts, consider the nonequilibrium free energy
\begin{subequations}
\begin{align}
F_{\rm neq} &\equiv U - \beta^{-1}S \\
&= \int\mathrm d \boldsymbol{r} ~\left[V(\boldsymbol{r},t)\, p(\boldsymbol{r},t) + \beta^{-1} \, p(\boldsymbol{r},t)\ln p(\boldsymbol{r},t)\right] \ , \label{Free Energy Def}
\end{align}
\end{subequations}
for dimensionless entropy
\begin{align}
S \equiv -\int\mathrm d \boldsymbol{r} ~p(\boldsymbol{r},t)\ln p(\boldsymbol{r},t) \ .
\end{align}
If the probability distribution in~\eqref{Free Energy Def} is the equilibrium distribution, then the nonequilibrium free energy reduces to the equilibrium free energy $F_{\rm eq}$. For an isothermal process the rate of change in free energy is
\begin{subequations}
\begin{align}
\frac{\mathrm d F_{\rm neq}}{\mathrm d t} &= \frac{\mathrm d U}{\mathrm d t} - \beta^{-1}\frac{\mathrm d S}{\mathrm d t} \ , \\
&= \dot{W} + \dot{Q} - \beta^{-1}\frac{\mathrm d S}{\mathrm d t} \ , \\
&= \dot{W} - \beta^{-1} \dot{S}_{\rm prod} \ .
\end{align}
\end{subequations}
The dimensionless entropy production is
\begin{subequations}
\begin{align}
\label{entropy_production_definition}
\dot{S}_{\rm prod} &\equiv \frac{\mathrm d S}{\mathrm d t} + \frac{\mathrm d S_{\rm env}}{\mathrm d t} \\
&= \frac{\mathrm d S}{\mathrm d t} - \beta \dot{Q} \\
& \geq 0 \ , \label{Second Law}
\end{align}
\end{subequations}
for environmental entropy $S_{\rm env}$. Equation~\eqref{Second Law} follows from the second law of thermodynamics. Importantly, this definition of entropy production only accounts for dissipation during the protocol, so if the system is not in equilibrium at the end of the protocol, additional energy may be dissipated into the environment during subsequent relaxation.
A measure of energy dissipation which accounts for relaxation to equilibrium even after the protocol terminates is the excess work $W_{\rm ex} \equiv W - \Delta F_{\rm eq}$, the amount of work done in excess of the equilibrium free-energy difference. The excess work and entropy production are related by
\begin{align}
\label{excess_work_and_entropy_production}
W_{\rm ex} = \Delta F_{\rm neq}-\Delta F_{\rm eq} + \beta^{-1}\Delta S_{\rm prod} \ ,
\end{align}
for net entropy production $\Delta S_{\rm prod} \equiv \int_{0}^{\Delta t} \mathrm d t \, \dot{S}_{\rm prod}$, nonequilibrium free-energy difference $\Delta F_{\rm neq}$ between the initial $p(\boldsymbol{r},0)$ and final $p(\boldsymbol{r},\Delta t)$ distributions, and equilibrium free-energy difference $\Delta F_{\rm eq}$ between the initial and final equilibrium distributions.
Equation~\eqref{excess_work_and_entropy_production} clarifies the distinction between entropy production and excess work: if both the initial and final states of the system are at equilibrium, the two quantities are equal, otherwise the difference between the two equals the difference between the nonequilibrium and equilibrium free-energy changes. If the system is allowed to relax to equilibrium, then this excess energy is dissipated into the environment, resulting in additional entropy production not accounted for in the present definition~\eqref{entropy_production_definition}, so that the total entropy production for such a process is~\eqref{entropy_production_definition} plus the entropy production from the subsequent relaxation. In contrast, the excess work always includes the energy dissipated into the environment from the system relaxing towards equilibrium after the protocol terminates. Essentially, quantifying dissipation by the entropy production in~\eqref{entropy_production_definition} assumes one can harness the nonequilibrium free energy at the conclusion of the protocol to perform a useful task (generally true for periodically driven systems like ATP synthase), while excess work quantifies dissipation when all the excess free energy is dissipated into the environment after the protocol terminates (generally true for two-state barrier crossings like hairpin experiments).
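On a grid these distribution-level quantities are immediate to evaluate (continuing the one-dimensional sketch above, with \texttt{p} a distribution on a grid of spacing \texttt{dx}):
\begin{verbatim}
import numpy as np

def F_neq(p, V, dx, beta=1.0):
    U = np.sum(V*p*dx)                        # average energy
    S = -np.sum(p*np.log(p)*dx)               # dimensionless entropy
    return U - S/beta

def F_eq(V, dx, beta=1.0):
    return -np.log(np.sum(np.exp(-beta*V)*dx))/beta
\end{verbatim}
Together with \eqref{excess_work_and_entropy_production}, these give the excess work once the net entropy production is known.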
\section{Full control}\label{Full control}
For continuous-state systems, complete control over the shape of the potential $V(\boldsymbol{r},t)$ grants full control over the probability distribution $p(\boldsymbol{r},t)$, which can considerably simplify the optimization process and allows us to exploit known results from optimal-transport theory~\cite{Aurell2011,nakazato2021,ito2022}. Optimal transport describes the most efficient methods to move mass (e.g., a pile of sand) from one location to another; this is useful for describing methods that minimize dissipation in transporting probability from an initial to a final distribution~\cite{villani2009}.
Since the final distribution is constrained, the energy dissipated into the environment throughout the protocol is determined by the average entropy produced in driving from initial probability distribution $p(\boldsymbol{r},0)$ to final probability distribution $p(\boldsymbol{r},\Delta t)$ as~\cite{nakazato2021}
\begin{align}
\label{OT_entropy}
\Delta S_{\rm prod} = \frac{1}{D} \int_{0}^{\Delta t} \mathrm d t \ \langle \boldsymbol{v}(\boldsymbol{r},t)\cdot \boldsymbol{v}(\boldsymbol{r},t)\rangle \ .
\end{align}
Angle brackets $\langle\cdots \rangle$ denote an average over $p(\boldsymbol{r},t)$.
Expressing the entropy production in this form makes precise the connection between optimal-transport theory and minimum-dissipation protocols. In optimal-transport theory a common measure of the distance between two distributions is the $L_{2}$-Wasserstein distance, defined in the Benamou and Brenier dual representation as~\cite{benamou2000}
\begin{align}
\mathcal{W}(p_{0}, p_{\Delta t})^2 \equiv \Delta t \min_v \int_{0}^{\Delta t} \mathrm d t \ \langle \boldsymbol{v}(\boldsymbol{r},t)\cdot \boldsymbol{v}(\boldsymbol{r},t)\rangle \ ,
\end{align}
where the minimization is over velocity fields transporting $p_0$ to $p_{\Delta t}$; the prefactor $\Delta t$ renders the distance independent of the protocol duration.
Therefore, the entropy production is bounded by the squared $L_{2}$-Wasserstein distance between initial ($p_0$) and final ($p_{\Delta t}$) probability distributions~\cite{nakazato2021}:
\begin{align}
\label{entropy_production_bound}
\Delta S_{\rm prod} \geq \frac{\mathcal{W}\left(p_0, p_{\Delta t}\right)^2}{D \Delta t} \ .
\end{align}
This allows us to exploit existing procedures from optimal-transport theory to determine protocols that minimize entropy production~\cite{villani2009,santambrogio2015}. Notably, the exact solution is known in two situations: one-dimensional systems and Gaussian probability distributions (section~\ref{Exact solutions}).
Extending minimum-dissipation full control and the connections to optimal-transport theory to more general forms of dynamics (e.g., discrete state spaces) is a rapidly advancing area of active research.\cite{dechant2022,dechant2022geometric,yoshimura2022,zhong2022,van2022,van2022Topological}
\subsection{Exact solutions}\label{Exact solutions}
For a one-dimensional system $\boldsymbol{r} = x$, the entropy-production bound~\eqref{entropy_production_bound} simplifies considerably~\cite{Aurell2011,Abreu2011,Zhang2019,zhang2020,Proesmans2020,proesmans2020optimal}:
\begin{align}
\Delta S_{\rm prod} \geq \frac{1}{D\Delta t}\int_{0}^{1}\mathrm d y \left[\mathcal{Q}_{\rm f}(y)-\mathcal{Q}_{\rm i}(y)\right]^2 \ ,
\end{align}
where $\mathcal{Q}_{\rm f}$ and $\mathcal{Q}_{\rm i}$ are the final and initial quantile functions (inverse cumulative distribution functions)~\cite{Blaber2022}. The entropy production is minimized if the quantiles are linearly driven between their fixed initial and final values~\cite{Zhang2019,zhang2020,Proesmans2020,proesmans2020optimal}; from this the probability distribution can be computed, and then the Fokker-Planck equation inverted to determine the potential $V_{\rm tot}(x,t)$ to be applied to achieve the control that minimizes the entropy production:
\begin{align}
\beta V_{\rm tot}(x,t) = -\ln p(x,t) + \frac{1}{D}\int_{\infty}^{x}\mathrm d x' ~\frac{\int_{-\infty}^{x'}\mathrm d x''~\frac{\partial p(x'',t)}{\partial t} }{p(x',t)} \ .
\end{align}
Although this calculation is often analytically intractable, it is straightforward to compute numerically for any probability distribution.
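Because the one-dimensional optimal transport is explicit in the quantile functions, both the minimum entropy production and the optimal interpolation take only a few lines. A sketch of our own (\texttt{p0} and \texttt{p1} are assumed initial and final distributions on the grid \texttt{x} from the earlier snippet, and \texttt{Dt} is the protocol duration):
\begin{verbatim}
import numpy as np

def quantile(p, x, dx, y):
    # inverse cumulative distribution function at quantile levels y
    return np.interp(y, np.cumsum(p)*dx, x)

y = np.linspace(1e-3, 1 - 1e-3, 2000)
dy = y[1] - y[0]
Q0 = quantile(p0, x, dx, y)
Q1 = quantile(p1, x, dx, y)

Dt = 1.0                                  # protocol duration (arbitrary)
S_min = np.sum((Q1 - Q0)**2*dy)/(D*Dt)    # minimum entropy production

def Q_opt(t):
    # minimum-dissipation protocol: linearly driven quantiles
    return (1 - t/Dt)*Q0 + (t/Dt)*Q1
\end{verbatim}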
If the initial and final distributions are Gaussian, $p(\boldsymbol{r},t) = \mathcal{N}(\mu_t,\Sigma_t)$ for time-dependent mean $\mu_t$ and covariance $\Sigma_t$, the entropy-production bound~\eqref{entropy_production_bound} is~\cite{Olkin1982,dechant2019,abiuso2022}
\begin{align}
\label{Gaussian Entropy}
\Delta S_{\rm prod} \geq \frac{1}{D\Delta t}\bigg\{ \Delta\boldsymbol{\mu}^2 + {\rm Tr}\left[\Sigma_{0} + \Sigma_{\Delta t} - 2(\Sigma_{\Delta t}^{\frac{1}{2}}\Sigma_{0}\Sigma_{\Delta t}^{\frac{1}{2}})^{\frac{1}{2}}\right]\bigg\} \ ,
\end{align}
with subscripts $0$, $t$, and $\Delta t$ respectively denoting the initial, time-dependent, and final values of the corresponding variable. Equality is achieved and the entropy production is minimized when following the optimal-transport map between the initial and final distributions, which for Gaussian distributions is completely specified by the mean and covariance:
\begin{subequations}
\begin{align}
\label{optimal mean}
\boldsymbol{\mu}_{t} &= \boldsymbol{\mu}_{0}+\Delta \boldsymbol{\mu}\frac{t}{\Delta t} \\
\Sigma_{t} &= \left[\left(1-\frac{t}{\Delta t}\right)I +\frac{t}{\Delta t}C \right]\Sigma_{0}\left[\left(1-\frac{t}{\Delta t}\right)I+\frac{t}{\Delta t}C\right] \ .
\label{optimal variance}
\end{align}
\end{subequations}
Here $\Delta \boldsymbol{\mu} \equiv \boldsymbol{\mu}_{\Delta t} - \boldsymbol{\mu}_{0}$ is the total change in mean position, $I$ is the identity matrix, and $C \equiv \Sigma_{\Delta t}^{\frac{1}{2}}(\Sigma_{\Delta t}^{\frac{1}{2}}\Sigma_{0}\Sigma_{\Delta t}^{\frac{1}{2}})^{-\frac{1}{2}}\Sigma_{\Delta t}^{\frac{1}{2}}$ reduces in 1D to the ratio of final and initial standard deviations. If the covariance matrix $\Sigma$ is diagonal, then \eqref{optimal variance} implies
\begin{align}
\Sigma_{t}^{\frac{1}{2}} &= \Sigma_{0}^{\frac{1}{2}}+\Delta \Sigma^{\frac{1}{2}}\frac{t}{\Delta t} \ ,
\label{optimal variance diagonal}
\end{align}
with $\Delta \Sigma^{\frac{1}{2}} \equiv \Sigma^{\frac{1}{2}}_{\Delta t}-\Sigma^{\frac{1}{2}}_{0}$. Thus for diagonal covariance the optimal-transport process linearly drives the standard deviation between its endpoint values. For a detailed description of minimum-dissipation protocols for general multidimensional Gaussian distributions see Ref.~\onlinecite{abiuso2022}.
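For Gaussian endpoints the bound \eqref{Gaussian Entropy} and the interpolation \eqref{optimal mean}--\eqref{optimal variance} require only symmetric matrix square roots; a sketch of our own using an eigendecomposition square root (\texttt{mu0}, \texttt{mu1}, \texttt{S0}, \texttt{S1} are assumed endpoint means and covariances):
\begin{verbatim}
import numpy as np

def sqrtm_psd(A):
    # square root of a symmetric positive-semidefinite matrix
    w, U = np.linalg.eigh(A)
    return (U*np.sqrt(np.clip(w, 0.0, None))) @ U.T

def entropy_bound(mu0, mu1, S0, S1, D, Dt):
    r1 = sqrtm_psd(S1)
    cross = sqrtm_psd(r1 @ S0 @ r1)
    W2 = np.dot(mu1 - mu0, mu1 - mu0) + np.trace(S0 + S1 - 2*cross)
    return W2/(D*Dt)

def optimal_moments(t, Dt, mu0, mu1, S0, S1):
    s = t/Dt
    r1 = sqrtm_psd(S1)
    C = r1 @ np.linalg.inv(sqrtm_psd(r1 @ S0 @ r1)) @ r1
    M = (1 - s)*np.eye(len(mu0)) + s*C
    return mu0 + s*(mu1 - mu0), M @ S0 @ M
\end{verbatim}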
In both solvable cases (1D and Gaussian) the general design principle is that the minimum-dissipation protocol linearly drives the quantiles between specified initial and final values. Linearly driving the quantiles of the probability distribution can be used as a guiding principle for designing more general minimum-dissipation protocols and arises independently for parametric control~\cite{Blaber2022}.
\subsection{Strong-trap approximation}\label{strong trap approximation}
In this section we review how to exploit the known full-control solution for Gaussian distributions in order to design minimum-dissipation protocols for strong trapping potentials on any arbitrary underlying energy landscape, initially described in Ref.~\onlinecite{Blaber2022_strong}.
We assume that the total potential $V_{\rm tot}[\boldsymbol{r},\boldsymbol{r}_{\rm c}(t),K(t)] = V_{\rm land}(\boldsymbol{r}) + V_{\rm trap}[\boldsymbol{r},\boldsymbol{r}_{\rm c}(t),K(t)]$ can be separated into a time-independent component $V_{\rm land}(\boldsymbol{r})$ (the underlying energy landscape) and a quadratic trapping potential
\begin{align}
V_{\rm trap}[\boldsymbol{r},\boldsymbol{r}_{\rm c}(t),K(t)]=\frac{1}{2}\left[\boldsymbol{r}-\boldsymbol{r}_{\rm c}(t)\right]^{\top} K(t)\left[\boldsymbol{r}-\boldsymbol{r}_{\rm c}(t)\right] \ .
\end{align}
The position of the system is denoted by the vector $\boldsymbol{r}$, $K$ is the symmetric stiffness matrix, and superscript $\top$ is the vector transpose. For a strong trapping potential, the time-independent component can be expanded up to second order about the mean particle position $\boldsymbol{\mu}$:
\begin{align}
V_{\rm land}(\boldsymbol{r}) \approx V_{\rm land}(\boldsymbol{\mu}) + (\boldsymbol{r}-\boldsymbol{\mu})^{\top}\nabla V_{\rm land}(\boldsymbol{\mu}) +\frac{1}{2}(\boldsymbol{r}-\boldsymbol{\mu})^{\top} \nabla\nabla^{\top} V_{\rm land}(\boldsymbol{\mu}) (\boldsymbol{r}-\boldsymbol{\mu}) \ .
\end{align}
Under these assumptions, the probability distribution can be approximated as Gaussian, $p(\boldsymbol{r},t) \approx \mathcal{N}(\boldsymbol{\mu}_{t},\Sigma_{t})$. Since the distribution is Gaussian we can use the results described by~\eqref{optimal mean} and~\eqref{optimal variance}.
To achieve the mean and covariance of~\eqref{optimal mean} and~\eqref{optimal variance}, the trap center and stiffness must respectively satisfy
\begin{subequations}
\label{eq:Optima}
\begin{align}
\boldsymbol{\lambda}_t =& \boldsymbol{\mu}_{t} + K_{t}^{-1}\left[\frac{\Delta \boldsymbol{\mu}}{\beta D \Delta t}+\nabla V_{\rm land}(\boldsymbol{\mu}_t)\right] \ , \label{optimal center} \\
K_{t}=& \frac{1}{\beta}\Sigma_{t}^{-1} - \frac{1}{\beta D}\int_{0}^{\infty}\mathrm d\nu \ e^{-\nu\Sigma_t}\frac{\mathrm d \Sigma_t}{\mathrm d t}e^{-\nu\Sigma_t} - \nabla\nabla^{\top} V_{\rm land}(\boldsymbol{\mu}_t) \ . \label{optimal stiffness general}
\end{align}
\end{subequations}
If $\Sigma$ is diagonal, then $\Sigma_t$ is given by \eqref{optimal variance diagonal}, the integral in~\eqref{optimal stiffness general} can be evaluated, and the trap stiffness obeys
\begin{align}
K_{t} &= -\nabla\nabla^{\top} V_{\rm land}(\boldsymbol{\mu}_t) + \left(\frac{1}{\beta}I- \frac{\Sigma_{t}^{\frac{1}{2}}\Delta \Sigma^{\frac{1}{2}} }{\beta D\Delta t}\right)\Sigma_{t}^{-1} .
\end{align}
Equations~\eqref{optimal center} and~\eqref{optimal stiffness general} are explicit equations that minimize dissipation at any driving speed provided the trap is sufficiently strong. We emphasize that explicit solutions for the minimum-dissipation protocol are rarities; typically solutions require numerically solving differential equations or inverting the Fokker-Planck equation~\cite{Aurell2011,Proesmans2020,proesmans2020optimal}.
For our example system of driven barrier crossing with equal initial and final covariance, the protocol designed by \eqref{optimal center} slows down movement of the trap center as it crosses the barrier to compensate for the force due to the hairpin potential, and \eqref{optimal stiffness} tightens the trap as it crosses the barrier in order to counteract the negative curvature of the hairpin potential. Slowing down and tightening the trapping potential as the system crosses a barrier appears to be a general feature and results independently from other approximation methods~\cite{Blaber2022,Blaber2022_strong}.
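For the one-dimensional barrier-crossing example with equal initial and final variance (so $\Delta\Sigma^{1/2}=0$), the strong-trap protocol is fully explicit. A sketch reusing the landscape functions from the earlier snippet, where \texttt{V\_land\_pp} is the (assumed) second derivative of the landscape:
\begin{verbatim}
def V_land_pp(x):        # second derivative of the double well
    u = (2*x - dxm)/dxm
    return 16.0*E_B*(3*u*u - 1.0)/dxm**2

def strong_trap_protocol(t, Dt, mu0, mu1, var, beta=1.0, D=1.0):
    mu = mu0 + (mu1 - mu0)*t/Dt          # mean moves at constant velocity
    k = 1.0/(beta*var) - V_land_pp(mu)   # constant-variance stiffness
    lam = mu + ((mu1 - mu0)/(beta*D*Dt) + V_land_prime(mu))/k
    return lam, k
\end{verbatim}
At the barrier top the landscape curvature is negative, so the designed stiffness is largest there, and the trap center leads the mean while the system climbs the barrier.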
To achieve periodic driving we constrain the final covariance matrix after one period (during which the mean completes one rotation) to equal the initial. To minimize dissipation, the standard deviation is linearly changed between the endpoints~\eqref{optimal variance}, implying for a periodic system that the standard deviation (and hence covariance) remains unchanged throughout the protocol. This is achieved when the effective stiffness is constant, i.e.,
\begin{align}
K_{t}= K_{0}+\nabla \nabla^{\top} V_{\rm land}(\boldsymbol{\mu}_{0}) -\nabla \nabla^{\top} V_{\rm land}(\boldsymbol{\mu}_{t}) \ .
\label{optimal stiffness}
\end{align}
If in each rotation the mean travels a distance $\Delta\boldsymbol{\mu}_{\rm rot}$ in time $\Delta t_{\rm rot}$, the resultant entropy production is
\begin{align}
\Delta S_{\rm prod} = \frac{\Delta\boldsymbol{\mu}_{\rm rot}^{2} }{D\Delta t_{\rm rot}} \ ,
\label{entropy production at constant variance}
\end{align}
that of an overdamped system moving at constant velocity against viscous Stokes drag; i.e., the minimum-dissipation protocol has perfect \emph{Stokes efficiency}~\cite{wang2002}.
\subsection{Constrained final control parameters}\label{Constrained final control parameters}
The techniques described in section~\ref{Full control} minimize the entropy production for constrained final distributions. Constraining the final distribution allows us to describe periodically driven systems like models inspired by ATP synthase; however, often the end state is instead constrained by the final control-parameter values. For example, in hairpin experiments the end state is typically defined by the trap separation, and the system is typically allowed to equilibrate between subsequent unfolding/folding protocols. Since the system is allowed to equilibrate between protocols, any excess energy remaining at the end of the protocol is dissipated as heat into the environment, and the entropy production as defined by \eqref{entropy_production_definition} does not equal the total dissipation.
In this case the relevant thermodynamic quantity is the work~\eqref{excess_work_and_entropy_production},
\begin{align}
W = \Delta F_{\rm neq} + \beta^{-1}\Delta S_{\rm prod} \ ,
\end{align}
which accounts for the excess free energy dissipated into the environment after the protocol terminates. Work can be minimized by optimizing over the final distribution subject to constrained final control-parameter values~\cite{Blaber2022_strong,Blaber2022}. In general this minimization must be performed numerically, which is straightforward for one-dimensional systems, but becomes computationally intensive for higher-dimensional systems.
For the special case of Gaussian distributions, the optimization procedure simplifies considerably since optimizing over a distribution reduces to optimizing over means and covariances. The average work for the minimum-dissipation protocol in the strong-trap approximation is~\cite{Blaber2022_strong}
\begin{align}
W = & \frac{1}{2}{\rm Tr}\left\{K\left[\Sigma+(\boldsymbol{\mu}-\boldsymbol{\lambda})(\boldsymbol{\mu}-\boldsymbol{\lambda})^{\top}\right]\right\}_{0}^{\Delta t}+V_{\rm land}(\boldsymbol{\mu})\big|_0^{\Delta t} + \frac{1}{2}{\rm Tr}\left[\nabla\nabla^{\top} V_{\rm land}(\boldsymbol{\mu})\Sigma\right]_{0}^{\Delta t} + \beta^{-1}\Delta S_{\rm prod}^{\rm min} \ ,
\label{Optimal Work}
\end{align}
with ${\rm Tr}$ the trace and $\Delta S_{\rm prod}^{\rm min}$ the lower bound in \eqref{Gaussian Entropy}.
To find the protocol that minimizes work for constrained final control parameters, we minimize \eqref{Optimal Work} with respect to the final mean $\mu_{\Delta t}$ and covariance $\Sigma_{\Delta t}$, for fixed final trap center $\boldsymbol{\lambda}_{\Delta t}$ and stiffness $K_{\Delta t}$. For a flat energy landscape (for which the strong-trap approximation is exact) the optimal mean and covariance can be solved analytically; e.g., for equal initial and final covariance, the final mean is
\begin{align}
\boldsymbol{\mu}_{\Delta t} = \boldsymbol{\mu}_0 + \left(\frac{2K^{-1}}{\beta D\Delta t} + I\right)^{-1}[\boldsymbol{\lambda}_{\Delta t} - \boldsymbol{\mu}_0] \ .
\label{Optimal Final Mean}
\end{align}
In some more general cases (e.g., energy landscapes represented by low-order polynomials), \eqref{Optimal Work} can also be minimized analytically, and in general can be solved numerically with relative ease.
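In one dimension the scalar version of \eqref{Optimal Final Mean} is a one-liner (a sketch, with $\beta=D=1$ defaults):
\begin{verbatim}
def optimal_final_mean(mu0, lam_f, k, Dt, beta=1.0, D=1.0):
    return mu0 + (lam_f - mu0)/(1.0 + 2.0/(beta*D*k*Dt))
\end{verbatim}
For slow protocols ($\Delta t\to\infty$) the mean reaches the final trap center, while for fast protocols it lags behind, quantifying the size of the final control-parameter jump.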
\section{Parametric control}\label{Parametric control}
While full-control solutions are convenient when possible, many applications do not permit sufficient control to fully constrain the probability distribution throughout the entire protocol. In such cases the controller is constrained by a finite set of control parameters $\boldsymbol{\lambda}(t)$ which can be used to drive the system. Since there are insufficient control parameters to fully control the probability distribution, the endpoints of the protocol are constrained by the control-parameter values rather than the distribution. Therefore, we focus on optimizing the excess work, which is an accurate measure of dissipation provided the system equilibrates after the protocol terminates.
In this section we assume the control is the result of time-dependent control parameters $\boldsymbol{\lambda}(t)$, in which case the excess work~\eqref{Work def} is
\begin{align}
\label{parametric work}
\langle W_{\rm ex} \rangle_{\Lambda} ~= -\int_{0}^{\Delta t} {\rm d} t~ \langle \delta {\bf f}^{\top}(t) \rangle_{\Lambda} \dot{\boldsymbol{\lambda}}(t)\ ,
\end{align}
where we have defined the conjugate force ${\bf f} \equiv - \partial V_{\rm tot}/\partial \boldsymbol{\lambda}$ to express the work explicitly in terms of the control parameters, and $\delta$ denotes a difference from the equilibrium average. In this section we denote a nonequilibrium average (dependent on the entire protocol history) as $\langle \cdot \rangle_{\Lambda}$ and an equilibrium average (at the current control parameter value $\boldsymbol{\lambda}(t)$) as $\langle \cdot \rangle_{\boldsymbol{\lambda}(t)}$. Since the control-parameter protocol is externally specified, the only unknown is $\langle \delta {\bf f}^{\top}(t) \rangle_{\Lambda}$. This quantity is particularly difficult to deal with since the nonequilibrium average depends on the entire protocol history.
For example, Ref.~\onlinecite{Schmiedl2007} showed that---even in one dimension---optimization requires solving the nonlocal Euler-Lagrange equation. Therefore, there are very few cases where the minimum-work protocol can be determined analytically. Promising approaches for obtaining general solutions include optimal-transport theory with limited control~\cite{zhong2022} and advanced numerical techniques~\cite{Then2008,gingrich2016,engel2022}.
The minimum-work protocol can be determined exactly for a Brownian particle in a harmonic trap, in which case the optimal protocol, originally derived in one dimension~\cite{Schmiedl2007}, is identical to the full-control solution for Gaussian distributions described in section~\ref{Constrained final control parameters}. This exact solution serves as a window into the properties of minimum-dissipation protocols and gives us considerable insight into what to expect from optimized protocols: e.g., the minimum-dissipation protocols have control-parameter jumps at the start and end but remain continuous in between.
Although exact solutions are nice when possible, since general solutions are intractable we turn to approximate methods to gain insight into the general properties of minimum-dissipation protocols. The first approximation has already been described in section~\ref{strong trap approximation} and is valid for strong trapping potentials applied to overdamped systems~\cite{Blaber2022_strong}. In the following sections, we fill in the remaining limits of fast, weak, and slow control in order to describe optimal control for any system with any given control parameters and design interpolated protocols.
\subsection{Fast control}\label{Fast control}
Until recently, although it was generally suspected, it was not definitively known whether the jumps in minimum-dissipation protocols are a general feature. By taking the fast-driving limit, it has been shown that the minimum-dissipation protocol is a step function, jumping from and to the specified initial and final control-parameter endpoints and spending the entire intervening protocol duration at the control-parameter values that maximize the short-time power savings.\cite{Blaber2021} We refer to the minimum-dissipation protocols in the fast limit as short-time efficient protocols, or STEP.
In the fast limit, the excess work approaches that of an instantaneous protocol, which spends no time relaxing towards equilibrium and requires excess work $\beta^{-1}D(\pi_{\rm i}||\pi_{\rm f})$, proportional to the relative entropy between the respective initial and final equilibrium distributions $\pi_{\rm i}$ and $\pi_{\rm f}$~\cite{Blaber2021}. Spending a short duration $\Delta t$ relaxing towards equilibrium throughout the protocol results in saved work $W_{\rm save} \equiv \beta^{-1} D(\pi_{\rm i}||\pi_{\rm f}) -W_{\rm ex}$ compared to an instantaneous protocol, which can be approximated as~\cite{Blaber2021}
\begin{align}
\langle W_{\rm save}\rangle_{\Lambda} \approx \int_{0}^{\Delta t}\mathrm d t \, {\bf R}_{\boldsymbol{\lambda}_{\rm i}}^{\top}[\boldsymbol{\lambda}(t)] \,
[\boldsymbol{\lambda}_{{\rm f}} - \boldsymbol{\lambda}(t)] \ ,
\label{Excess work approx}
\end{align}
in terms of the initial force-relaxation rate
\begin{align}
{\bf R}_{\boldsymbol{\lambda}_{\rm i}}[\boldsymbol{\lambda}(t)] &\equiv \frac{\mathrm d\langle {\bf f} \rangle_{\boldsymbol{\lambda}_{\rm i}}}{\mathrm d t}\bigg|_{\boldsymbol{\lambda}(t)} \ ,
\label{Rate Function}
\end{align}
the rate of change of the initial mean conjugate forces at the current control-parameter values.
The saved work is maximized by the short-time efficient protocol (STEP) which spends the entire duration at the intermediate control-parameter value that maximizes the short-time power savings
\begin{align}
P_{\rm save}^{\rm st}(\boldsymbol{\lambda}) \equiv {\bf R}_{\boldsymbol{\lambda}_{\rm i}}^{\top}(\boldsymbol{\lambda})(\boldsymbol{\lambda}_{{\rm f}} - \boldsymbol{\lambda}) \ ,
\label{eq:power_savings}
\end{align}
satisfying
\begin{align}
&\frac{\partial P_{\rm save}^{\rm st}(\boldsymbol{\lambda})}{\partial \boldsymbol{\lambda}}\bigg|_{\boldsymbol{\lambda}^{\rm STEP}} = 0 \ .
\label{STEP}
\end{align}
The STEP achieves this by two instantaneous control-parameter jumps: one at the start from the initial value to the optimal value $\boldsymbol{\lambda}^{\rm STEP}$, and one at the end from $\boldsymbol{\lambda}^{\rm STEP}$ to the final value.
The optimal protocol described in this section is general, only assuming the protocol is fast compared to the system's natural relaxation time. Additionally, for quadratic time-dependent potentials (such as the driven barrier-crossing model system) or affine control ($V_{\rm tot}(\boldsymbol{r}, \lambda)=V_0(\boldsymbol{r})+\lambda V_1(\boldsymbol{r})$ where $\lambda$ linearly modulates the strength of an auxiliary potential $V_1(\boldsymbol{r})$ added to the base potential $V_0(\boldsymbol{r})$) the STEP value is always halfway between the control-parameter endpoints~\cite{Blaber2022,zhong2022}. Finally, since the initial force-relaxation rate~\eqref{Rate Function} is an equilibrium average and the STEP value is given by a point in control-parameter space, the STEP value is simple to determine. The STEP and the strong-trap optimal protocol are the simplest protocols to determine and can be calculated analytically in many cases or numerically evaluated with relative ease.
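As an illustration for one-dimensional overdamped dynamics with trap-center control (a sketch of our own, reusing the grid and landscape functions from the earlier snippets): with $V_{\rm trap}=\tfrac{k}{2}(x^{\rm c}-x)^2$ the conjugate force is $f=k(x-x^{\rm c})$, so for these dynamics the initial force-relaxation rate \eqref{Rate Function} reduces to the equilibrium average $R_{\lambda_{\rm i}}(\lambda)=-k\beta D\langle V_{\rm land}'(x)+k(x-\lambda)\rangle_{\lambda_{\rm i}}$, which is simple to evaluate on a grid:
\begin{verbatim}
def pi_eq(lam, k=16.0):
    # equilibrium distribution at trap center lam
    w = np.exp(-beta*(V_land(x) + 0.5*k*(lam - x)**2))
    return w/np.sum(w*dx)

def R(lam, lam_i, k=16.0):
    p_i = pi_eq(lam_i, k)
    return -k*beta*D*np.sum((V_land_prime(x) + k*(x - lam))*p_i*dx)

lam_i, lam_f = 0.0, dxm
lams = np.linspace(lam_i, lam_f, 401)
P_save = np.array([R(l, lam_i)*(lam_f - l) for l in lams])
lam_STEP = lams[np.argmax(P_save)]
\end{verbatim}
On this grid the maximizer indeed lands at the midpoint, consistent with the halfway rule for quadratic driving.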
\subsection{Linear response}\label{Linear response}
Linear-response theory can be used to determine the minimum-dissipation protocol for weak perturbations and performs relatively well at any driving speed, well beyond its strict range of validity~\cite{kamizaki2022,bonanca2018}. For fast driving, the minimum-dissipation protocols determined from linear-response theory have jumps at the start and end of the protocol.
As mentioned in section~\ref{Parametric control}, the central quantity that needs to be determined to perform any optimization of the excess work is the nonequilibrium average force. In linear response, the deviation of the nonequilibrium average force from equilibrium is approximated by the integrated equilibrium force covariance
\begin{align}
\label{LR average force}
\langle \delta f_{j}(t) \rangle_{\Lambda} \approx -\int_0^{t} {\rm d} t' ~ \langle \delta f_{j}(t-t') ~ \delta f_{\ell}(0)\rangle_{\lambda(t)}~\dot{\lambda}_{\ell}(t') \ .
\end{align}
This greatly simplifies the optimization procedure, since the equilibrium average depends only on the current control-parameter value and not on the entire history of control. Substituting this into the excess work~\eqref{parametric work} gives
\begin{align}
\langle W_{\rm ex} \rangle_{\Lambda} ~\approx ~\int_{0}^{\Delta t} {\rm d} t \int_0^{t} {\rm d} t' ~ ~ \dot{\lambda}_{j}(t)~ \langle \delta f_{j}(t-t') ~ \delta f_{\ell}(0)\rangle_{\lambda(t)}~\dot{\lambda}_{\ell}(t') \ ,
\end{align}
which can be optimized directly by numerical methods and can perform well at any driving speed~\cite{bonanca2018,kamizaki2022}.
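Given an estimate of the equilibrium force-autocovariance kernel (from simulation or theory), the double integral discretizes directly. A sketch for a single control parameter, where \texttt{kappa(s, lam)} is an assumed user-supplied estimate of $\langle\delta f(s)\,\delta f(0)\rangle_{\lambda}$:
\begin{verbatim}
import numpy as np

def excess_work_LR(lam, dlam, kappa, Dt, n=400):
    # lam(t), dlam(t): protocol and its time derivative (callables)
    ts = np.linspace(0.0, Dt, n)
    h = ts[1] - ts[0]
    W = 0.0
    for i, t in enumerate(ts):
        for j in range(i + 1):
            W += dlam(t)*kappa(t - ts[j], lam(t))*dlam(ts[j])*h*h
    return W
\end{verbatim}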
\subsection{Slow control}\label{Slow control}
The next approximation we consider is valid for slow near-equilibrium processes. This approach generalizes the paradigm of thermodynamic geometry to stochastic thermodynamics~\cite{Crooks2007,OptimalPaths}. The generalized friction tensor endows the space of thermodynamic states with a Riemannian metric where minimum-dissipation protocols correspond to geodesics of the friction tensor. This method is widely applicable, yields a relatively simple prescription for determining minimum-dissipation protocols, and has been extended to more general settings and different forms of control. The protocols determined from this method are continuous; however, we know from exact solutions that minimum-dissipation protocols can have jump discontinuities~\cite{Schmiedl2007}, which are never optimal within the geometric framework of slow control. This arises from the slow near-equilibrium approximation used; indeed, in the limit of a slow protocol the jumps in the exact optimal protocol become negligible.
In addition to the linear-response approximation, we assume the control parameters are driven slowly compared to the system's natural relaxation timescale, so the approximation for the nonequilibrium average force~\eqref{LR average force} simplifies to
\begin{align}
\langle \delta f_{j}(t) \rangle_{\Lambda} \approx -\beta\dot{\lambda}_{\ell}(t)\int_0^{\infty} {\rm d} t' ~ \langle \delta f_{j}(t') ~ \delta f_{\ell}(0)\rangle_{\lambda(t)} \ .
\end{align}
Substituting into~\eqref{parametric work} yields the leading-order contribution to the excess work~\cite{OptimalPaths}:
\begin{align}
\label{LR excess work}
\langle W_{\rm ex}\rangle_{\Lambda} \approx \int_{0}^{\Delta t}\mathrm d t \ \frac{\mathrm d {\boldsymbol{\lambda}}^{\top}}{\mathrm d t} \ \zeta[\boldsymbol{\lambda}(t)] \ \frac{\mathrm d {\boldsymbol{\lambda}}}{\mathrm d t} \ ,
\end{align}
in terms of the generalized friction tensor with elements
\begin{align}
\zeta_{j \ell}(\lambda) \equiv \beta \int_0^{\infty} \mathrm d t \, \langle \delta f_{j}(t) \delta f_{\ell}(0)\rangle_{\boldsymbol{\lambda}} \ .
\label{friction}
\end{align}
In analogy with fluid dynamics, this rank-two tensor is the \emph{Stokes' friction}, since it produces a drag force that depends linearly on velocity.
$\zeta_{j \ell}$ is the Hadamard product $\beta \langle\delta f_{j} \delta f_{\ell}\rangle_{\boldsymbol{\lambda}} \circ \tau_{j \ell}$ of the conjugate-force covariance (the force fluctuations) and the integral relaxation time
\begin{align}
\label{relax1}
\tau_{j \ell} \equiv \int_0^{\infty} \mathrm d t \, \frac{\langle \delta f_{j}(t) \delta f_{\ell}(0)\rangle_{\boldsymbol{\lambda}}}{\langle \delta f_{j} \delta f_{\ell}\rangle_{\boldsymbol{\lambda}}} \ ,
\end{align}
the characteristic time for these fluctuations to die out.
For overdamped dynamics, the friction can be calculated directly from the total energy as~\cite{zulkowski2015}
\begin{align}
\zeta_{j\ell}(\boldsymbol{\lambda}) = \int_{-\infty}^{\infty} \mathrm d x
\,
\frac{\partial_{\lambda_{j}}\Pi_{\rm eq}(x,\boldsymbol{\lambda})\partial_{\lambda_{\ell}}\Pi_{\rm eq}(x,\boldsymbol{\lambda})}{\pi_{\rm eq}(x,\boldsymbol{\lambda})}
\ ,
\label{CDF friction}
\end{align}
where $\Pi_{\rm eq}(x,\boldsymbol{\lambda}) \equiv \int_{-\infty}^{x}\mathrm d x'\pi_{\rm eq}(x',\boldsymbol{\lambda})$ is the equilibrium cumulative distribution function, $\partial_{\lambda_{j}}$ is the partial derivative with respect to $\lambda_{j}$, and $\pi_{\rm eq}(x,\boldsymbol{\lambda}) = \exp[-\beta V_{\rm tot}(x,\boldsymbol{\lambda})]~/~\int\mathrm d x' \exp[-\beta V_{\rm tot}(x',\boldsymbol{\lambda})]$ is the equilibrium probability distribution.
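For concreteness, \eqref{CDF friction} can be evaluated for a single control parameter by elementary quadrature, finite-differencing the equilibrium CDF with respect to $\lambda$. A minimal sketch (function and variable names are ours):
\begin{verbatim}
import numpy as np

def friction(V, lam, beta=1.0, h=1e-4,
             x=np.linspace(-10.0, 10.0, 4001)):
    # Equilibrium distribution and CDF at control value lam.
    def eq_dist(lam):
        p = np.exp(-beta * V(x, lam))
        p /= np.trapz(p, x)
        P = np.concatenate(([0.0], np.cumsum(
            0.5 * (p[1:] + p[:-1]) * np.diff(x))))
        return p, P
    p, _ = eq_dist(lam)
    _, Pp = eq_dist(lam + h)
    _, Pm = eq_dist(lam - h)
    dP = (Pp - Pm) / (2.0 * h)  # d Pi_eq / d lambda
    return np.trapz(dP**2 / p, x)
\end{verbatim}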
Within the slow-protocol approximation, the excess work is minimized by a protocol with constant excess power~\cite{OptimalPaths}. For a single control parameter, this amounts to proceeding with velocity $\mathrm d \lambda^{\rm LR}/\mathrm d t \propto \zeta(\lambda)^{-1/2}$, which when normalized to complete the protocol in a fixed allotted time $\Delta t$, gives
\begin{align}
\label{lambdaoptdot}
\frac{\mathrm d \lambda^{\rm LR} }{\mathrm d t}= \frac{\Delta \lambda}{\Delta t}\frac{\overline{{\zeta}^{1/2}}}{\sqrt{\zeta(\lambda)}} \ ,
\end{align}
where the overline denotes the spatial average over the naive (linear) path between the control-parameter endpoints.
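In one dimension the designed protocol can thus be tabulated directly from the friction coefficient: the cumulative integral of $\zeta^{1/2}$ along the path is made to grow linearly in time. A minimal sketch (names are ours):
\begin{verbatim}
import numpy as np

def slow_protocol(zeta, lam_i, lam_f, Dt, n=1000):
    # Tabulate lambda(t) with velocity proportional to
    # zeta(lambda)**(-1/2), normalized to duration Dt.
    lam = np.linspace(lam_i, lam_f, n)
    s = np.sqrt(zeta(lam))
    arc = np.concatenate(([0.0], np.cumsum(
        0.5 * (s[1:] + s[:-1]) * np.diff(lam))))
    t = Dt * arc / arc[-1]
    return t, lam  # lambda as a (tabulated) function of t
\end{verbatim}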
For multidimensional control, the minimum-dissipation protocol solves the Euler-Lagrange equation
\begin{equation}
\zeta_{j \ell}\frac{\mathrm d^2\lambda_{\ell}}{\mathrm d t^2}+ \frac{\partial\zeta_{j \ell}}{\partial \lambda_{m}} \frac{\mathrm d\lambda_{\ell}}{\mathrm d t}\frac{\mathrm d\lambda_{m}}{\mathrm d t} = \frac{1}{2}\frac{\partial\zeta_{\ell m}}{\partial \lambda_{j}} \frac{\mathrm d\lambda_{\ell}}{\mathrm d t}\frac{\mathrm d\lambda_{m}}{\mathrm d t} \ ,
\label{Euler-Lagrange}
\end{equation}
where we have adopted the Einstein convention of implied summation over all repeated indices.
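Numerically, \eqref{Euler-Lagrange} is a two-point boundary-value problem, which can be solved by standard collocation once the friction and its parametric derivatives are available. The following sketch of one possible treatment (ours; it assumes a callable \texttt{zeta} returning the $2\times 2$ friction matrix) uses finite-differenced Christoffel symbols:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

def geodesic(zeta, lam_i, lam_f, n=50, h=1e-5):
    lam_i = np.asarray(lam_i, dtype=float)
    lam_f = np.asarray(lam_f, dtype=float)
    def christoffel(lam):
        g = zeta(lam)
        dg = np.empty((2, 2, 2))  # dg[m] = d zeta / d lam_m
        for m in range(2):
            e = np.zeros(2); e[m] = h
            dg[m] = (zeta(lam + e) - zeta(lam - e)) / (2 * h)
        ginv = np.linalg.inv(g)
        G = np.empty((2, 2, 2))   # G[a, b, c] = Gamma^a_{bc}
        for a in range(2):
            for b in range(2):
                for c in range(2):
                    G[a, b, c] = 0.5 * sum(
                        ginv[a, d] * (dg[b][d, c] + dg[c][d, b]
                                      - dg[d][b, c])
                        for d in range(2))
        return G
    def rhs(t, y):  # y = (lam, dlam/dt), shape (4, m)
        out = np.zeros_like(y)
        for k in range(y.shape[1]):
            lam, v = y[:2, k], y[2:, k]
            G = christoffel(lam)
            out[:2, k] = v
            out[2:, k] = -np.einsum('abc,b,c', G, v, v)
        return out
    def bc(ya, yb):
        return np.concatenate((ya[:2] - lam_i, yb[:2] - lam_f))
    t = np.linspace(0.0, 1.0, n)
    y0 = np.zeros((4, n))
    y0[:2] = np.outer(lam_i, 1 - t) + np.outer(lam_f, t)
    y0[2:] = (lam_f - lam_i)[:, None]  # straight-line guess
    return solve_bvp(rhs, bc, t, y0)
\end{verbatim}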
To illustrate the steps for determining the minimum-work protocol within this approximation, Fig.~\ref{Geodesics} presents the friction matrix previously computed for the example model system of driven barrier crossing~\cite{Blaber2022}. For this system the friction matrix can be directly computed from \eqref{CDF friction} and the geodesics found by numerically solving \eqref{Euler-Lagrange} with specified initial and final control parameters, as described in Refs.~\onlinecite{Rotskoff2017,louwerse2022}.
\begin{figure}
\includegraphics[width=\linewidth]{Geodesics_plus_min_protocol.pdf}
\caption{Geodesics and components of the friction matrix used to design two-dimensional linear-response protocols. Grayscale heatmap: components of the friction as a function of the (dimensionless) trap center$^*$ $x^{\rm c}/{\Delta x_{\rm m}}$ and stiffness$^*$ $k x_{\rm m}^{2}/E_{\rm B}$. Colored curves: geodesics of the friction for equal initial and final trap stiffness ($k_{\rm i} = k_{\rm f}$). Color heatmap: absolute product of control-parameter speeds $\dot{\lambda}_{j} = \mathrm d \lambda_{j} / \mathrm d t$. In the friction, the subscript ${\rm c}$ refers to the trap center and ${\rm s}$ to the stiffness components of the matrix. The positive and negative components of the off-diagonal entry $\zeta_{\rm c,s}$ are respectively denoted by $\zeta_{\rm c,s}^{+}\equiv {\rm max}(\zeta_{\rm c,s},0) $ (b) and $\zeta_{\rm c,s}^{-}\equiv {\rm max}(-\zeta_{\rm c,s},0)$ (c). The stars in (a-d) denote the STEP for initial and final stiffness $k x_{\rm m}^{2}/E_{\rm B}=1$. (e,f) STEP (purple dotted) as a function of protocol duration, with the corresponding slow (blue dashed) and interpolated protocols (green dot-dashed). An asterisk denotes a scaled (dimensionless) quantity, with the velocities scaled by the average speed $|\dot{\lambda}_{j}\dot{\lambda}_{\ell}|^* \equiv |\dot{\lambda}_{j}\dot{\lambda}_{\ell}|/(\overline{|\dot{\lambda}_{j}|}~\overline{|\dot{\lambda}_{\ell}|})$, friction as $\zeta_{j\ell}^* \equiv \zeta_{j\ell}\lambda_{j}\lambda_{\ell}/(\lambda_{j}^{*}\lambda_{\ell}^{*}\gamma x_{\rm m}^2)$, and protocol duration as $t/\Delta t$. This figure is adapted from Fig.~S1 of the supplemental material in Ref.~\onlinecite{Blaber2022}.}
\label{Geodesics}
\end{figure}
Some general properties can be observed from Fig.~\ref{Geodesics}: although the friction is positive semidefinite, the off-diagonal component can become negative; geodesics never have any discontinuities (following from the Riemannian geometry); geodesics tend to avoid or slow down in regions of high friction; and for driven barrier crossing the minimum-work protocol slows down and tightens the trap as it crosses the barrier. Slowing down and tightening the trap as the system crosses the barrier is consistent with the results derived within the strong-trap approximation (section~\ref{strong trap approximation}); however, the designed slow-control protocol lacks the jumps at the start and end of the protocol.
\subsubsection{Extensions to the slow-control approximation}\label{Extensions}
The slow-driving approximation has been generalized and extended to transitions between nonequilibrium steady states~\cite{mandal2016,zulkowski2013}, discrete control~\cite{Large2019}, and stochastic control~\cite{large2018}.
Throughout section~\ref{Parametric control} we have assumed that the system starts in equilibrium; however, this is not always the case in applications. Periodically driven machines such as ATP synthase are often driven for a sufficiently long time to reach a nonequilibrium steady state, breaking the initial-equilibrium and near-equilibrium assumptions. Such systems may need to transition between steady states in order to increase or decrease their output in response to variable conditions (e.g., increasing/decreasing the rate of ATP production). Although determining the correct definition of dissipation is more subtle for nonequilibrium steady states (see Refs.~\onlinecite{oono1998,hatano2001,speck2005,ge2009,ge2010,bertini2012,bertini2013,bertini2015} for detailed discussion), the slow-protocol approximation has been generalized to slow transitions between nonequilibrium steady states, making use of a near-steady-state approximation~\cite{mandal2016,zulkowski2013}. This approximation has an analogous form to the friction-tensor approximation and can be used to determine minimum-dissipation protocols for slow transitions between nonequilibrium steady states.
So far we have assumed that the protocol is continuous; however, many biological and chemical systems convert free energy stored in nonequilibrium chemical-potential differences into useful work through a series of reactions involving binding/unbinding or catalysis of small molecules. These chemical reactions typically occur on timescales much faster than the protein conformational rearrangements they couple to. Therefore, these changes are effectively instantaneous, leading to discrete control protocols. Building off of quasistatic results~\cite{nulton1985}, it has been shown that the linear-response approximation can be applied to discretely driven systems to yield an approximation analogous to the friction tensor, which can be used to determine minimum-dissipation protocols~\cite{Large2019}.
The deterministic control considered in all other sections is a reasonable assumption for single-molecule experiments; however, autonomous systems such as ATP synthase \emph{in vivo} are driven not by a deterministic controller, but by other stochastic systems. Towards a description of fully autonomous machines, several advances have been made extending the friction-tensor framework to stochastic control~\cite{large2018} and autonomous thermodynamic cycles~\cite{bryant2020}. A detailed discussion of protocol optimization for stochastic control, as well as relations to other bounds on dissipation~\cite{machta2015} appears in Ref.~\onlinecite{large2018}.
\subsubsection{Higher-order moments and corrections}
\label{Next-order corrections and higher-order moments}
For moderate-to-fast driving, the slow-control approximation is not sufficient to accurately determine the minimum-dissipation protocol. The slow-control approximation can be generalized to higher-order moments of the work distribution and to account for the next-order correction to the excess work.\cite{Blaber2020Skewed} This is particularly useful for nonequilibrium free-energy estimation (section~\ref{Free energy estimation}).
In Ref.~\onlinecite{Blaber2020Skewed}, the higher-order corrections are derived for the first moment
\begin{align}
\langle W_{\rm ex} \rangle_{\Lambda} &\approx \int_{0}^{\Delta t} \mathrm d t ~ \left\{\zeta_{j\ell}[\boldsymbol{\lambda}(t)]+ \frac{1}{4} \zeta^{(2)}_{j\ell m}[\boldsymbol{\lambda}(t)]\dot{\lambda}_{m}(t)\right\}\dot{\lambda}_{j}(t)\dot{\lambda}_{\ell}(t) \label{Mean_work} \ ,
\end{align}
second moment
\begin{align}
\langle W_{\rm ex}^2 \rangle_{\Lambda} &\approx \frac{2}{\beta}\int_{0}^{\Delta t} \mathrm d t ~\left\{\zeta_{j\ell}[\boldsymbol{\lambda}(t)] + \frac{3}{4}\zeta^{(2)}_{j\ell m}[\boldsymbol{\lambda}(t)]\dot{\lambda}_{m}(t)\right\}\dot{\lambda}_{j}(t)\dot{\lambda}_{\ell}(t) \ , \label{Variance friction approximation}
\end{align}
and $n^{\rm th}$ moment
\begin{align}
\label{Higher order moment friction approximation}
\beta^{n-1} \langle W_{\rm ex}^{n} \rangle_{\Lambda} \approx \int_{0}^{\Delta t} \mathrm d t ~\left\{n\mathcal{C}_{\nu_{1}\cdots\nu_{n}}^{(n-1)}[\boldsymbol{\lambda}(t),t] + \frac{n+1}{2}\mathcal{C}_{\nu_{1}\cdots\nu_{n+1}}^{(n)}[\boldsymbol{\lambda}(t),t]\dot{\lambda}_{\nu_{n+1}}(t)\right\}\prod_{j=1}^{n}\dot{\lambda}_{\nu_{j}}(t) \ .
\end{align}
We have defined the rank-three tensor
\begin{align}
\label{rank 3 friction}
\zeta_{j\ell m}^{(2)}[\boldsymbol{\lambda}(t)] \equiv -\beta^2\int_{0}^{\infty}\mathrm d t''\int_{0}^{\infty}\mathrm d t'''\left\langle\delta f_{j}(0)\delta f_{\ell}(t'')\delta f_{m}(t''')\right\rangle_{\boldsymbol{\lambda}(t)} \ ,
\end{align}
and the integral $n$-time covariance functions
\begin{align}
\label{rank n friction}
\mathcal{C}_{\nu_{1}\cdots\nu_{n}}^{(n-1)}&[\boldsymbol{\lambda}(t),t] \equiv (-\beta)^{n}\prod_{j=2}^{n}\int_{0}^{t}\mathrm d t_{j}\left\langle\delta f_{\nu_{1}}(0)\prod_{\ell=2}^{n}\delta f_{\nu_{\ell}}(t_{\ell})\right\rangle_{\boldsymbol{\lambda}(t)} \ .
\end{align}
The rank-three tensor [Eq.~\eqref{rank 3 friction}] is referred to as the \emph{supra-Stokes'} tensor and is indexed by superscript (2), as it corresponds to the next-order correction to dissipation beyond the Stokes' friction~\eqref{Mean_work}. The higher-order moments of the excess work are higher order in control-parameter velocity and are therefore smaller for slow protocols.
The most notable application of this approximation is to free-energy estimation (section~\ref{Free energy estimation}), where the supra-Stokes' tensor determines the leading-order contribution to the bias of bidirectional estimators and can be used to design protocols that minimize bias~\cite{Blaber2020Skewed}.
\subsection{Interpolated protocols}\label{Interpolated protocols}
Given theory describing minimum-dissipation control in both the slow and fast limits, a simple interpolation scheme has been developed to design protocols that reduce dissipation at all driving speeds~\cite{Blaber2021,Blaber2022}. The interpolated protocol has an initial jump $(\boldsymbol{\lambda}^{\rm STEP}-\boldsymbol{\lambda}_{\rm i})/(1+\Delta t/\tau)$ and a final jump $(\boldsymbol{\lambda}_{\rm f}-\boldsymbol{\lambda}^{\rm STEP})/(1+\Delta t/\tau)$, and follows the slow-control path between them,
\begin{align}
\boldsymbol{\lambda}^{\rm interp}(t) = \frac{1}{1+\frac{\Delta t}{\tau}}\boldsymbol{\lambda}^{\rm STEP} + \left(1-\frac{1}{1+\frac{\Delta t}{\tau}}\right)\boldsymbol{\lambda}^{\rm slow}(t)\ ,
\end{align}
with $\tau$ the crossover duration and $\boldsymbol{\lambda}^{\rm slow}(t)$ the solution to~\eqref{Euler-Lagrange}. This guarantees that the protocol approaches the minimum-dissipation protocol in both the fast and slow limits.
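In practice this is a one-line construction once the STEP value and the slow protocol are known; a minimal sketch (names are ours):
\begin{verbatim}
def interpolated_protocol(t, lam_step, lam_slow, Dt, tau):
    # Weighted combination of the STEP value and the
    # slow-control (geodesic) protocol lam_slow(t).
    w = 1.0 / (1.0 + Dt / tau)  # weight of the STEP term
    return w * lam_step + (1.0 - w) * lam_slow(t)
\end{verbatim}
The weight $w$ carries the jumps: at $t=0$ the protocol sits a fraction $w$ of the way from $\boldsymbol{\lambda}^{\rm slow}(0)=\boldsymbol{\lambda}_{\rm i}$ toward $\boldsymbol{\lambda}^{\rm STEP}$, reproducing the stated initial jump.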
\section{Free-Energy Estimation}\label{Free energy estimation}
Computational and experimental measurements of free-energy differences are essential to the determination of stable equilibrium phases of matter, relative reaction rates, and binding affinities of chemical species,\cite{Gapsys2015} and the identification and design of novel protein-binding ligands for drug discovery.\cite{Schindler2020,Kuhn2017,Ciordia2016,Wang2015} Current methods for determining free-energy differences rely on costly experimentation, which can be reduced through screening with efficient computational techniques.\cite{Schindler2020,Aldeghi2018,Kuhn2017,Ciordia2016,Wang2015,Gapsys2015,Chodera2011,Pohorille2010} It has been shown that the precision and accuracy of standard free-energy estimators are reduced when estimated from a protocol inducing large dissipation,\cite{Gore2003,Shenfeld2009,Blaber2020Skewed} and that thermodynamic geometry can be applied to improve free-energy estimates.\cite{Shenfeld2009,Minh2019,Pham2011,Pham2012,Park2014,Blaber2020Skewed}
Free-energy differences are often estimated by measuring the work incurred during a parametric control protocol that drives the system between control-parameter endpoints corresponding to target states. Unidirectional estimators determine the free-energy difference from the work done by a forward protocol driving from initial to final
control-parameter values, while bidirectional estimators additionally use reverse protocols that drive from final to initial control-parameter values. The simplest estimator is the mean-work estimator,\cite{Gore2003} which estimates
the free-energy difference by the average work and for any non-quasistatic (finite-speed) protocol yields a biased estimate. For an unbiased estimate, the Jarzynski estimator (derived from the Jarzynski equality~\cite{Jarzynski1997}) estimates the free-energy difference from the exponentially averaged work. The mean-work and Jarzynski estimators can be used as either uni- or bidirectional estimators; however, if bidirectional data is available the maximum log-likelihood estimator is Bennett's acceptance ratio.\cite{Bennett1976} For a large number of samples, Bennett's acceptance ratio yields the minimum variance of any unbiased estimator.\cite{Shirts2003,Maragakis2006}
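For reference, all three estimators are a few lines each, given forward work samples $W_{\rm F}$ and reverse work samples $W_{\rm R}$ (equal sample sizes; this sketch and its names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def jarzynski(WF, beta=1.0):
    # Unidirectional estimate from exponentially averaged work.
    return -np.log(np.mean(np.exp(-beta * WF))) / beta

def mean_work(WF, WR):
    # Bidirectional mean-work estimate.
    return 0.5 * (np.mean(WF) - np.mean(WR))

def bar(WF, WR, beta=1.0):
    # Bennett acceptance ratio: root of the self-consistent
    # equation (equal forward and reverse sample sizes).
    def g(dF):
        return (np.mean(1.0 / (1.0 + np.exp(beta * (WF - dF))))
                - np.mean(1.0 / (1.0 + np.exp(beta * (WR + dF)))))
    lo = min(WF.min(), -WR.max()) - 10.0
    hi = max(WF.max(), -WR.min()) + 10.0
    return brentq(g, lo, hi)
\end{verbatim}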
Near equilibrium, Taylor expanding all of these estimators for small excess work (small dissipation) gives
the mean-work estimator.~\cite{Blaber2020Skewed} For an equal number of samples in the forward and reverse directions,
near equilibrium the variance of any of these estimators
is~\cite{Blaber2020Skewed}
\begin{subequations}
\begin{align}
\left\langle \left(\delta \widehat{\Delta F}\right)^2 \right\rangle& \approx \frac{\langle W_{\rm ex}^{2}\rangle_{\Lambda} +\langle W_{\rm ex}^{2}\rangle_{{\Lambda^{\dagger}}}}{2N} \label{Variance to second moment} \\
& \approx \frac{\langle W_{\rm ex}\rangle_{\Lambda} +\langle W_{\rm ex}\rangle_{{\Lambda^{\dagger}}}}{N} \ ,
\label{Variance_near_equilibrium}
\end{align}
\end{subequations}
where the second line follows from the fluctuation-dissipation relation for the work distribution.~\cite{Gore2003,Blaber2020Skewed} The bias is given by the asymmetry between forward $\Lambda$ and reverse $\Lambda^{\dagger}$ dissipation, generated from skewed work fluctuations:
\begin{align}
\left\langle \delta \widehat{\Delta F}\right\rangle &\approx \frac{\langle W_{\rm ex}\rangle_{\Lambda}-\langle W_{\rm ex}\rangle_{\Lambda^{\dagger}}}{2N}\label{BAR Bias} \ .
\end{align}
Equations~\eqref{Variance to second moment} and \eqref{BAR Bias} hold not only near equilibrium but also when only a single sample is taken in each of the forward and reverse directions, since for a single sample the empirical average of a function equals the function of the empirical average (e.g., $\overline{e^{x}} = e^{\overline{x}}$ for one draw).
For a slow protocol the variance is approximated by
\begin{align}
\left\langle \left(\delta \widehat{\Delta F}\right)^2 \right\rangle \approx \frac{2}{\beta N}\int_{0}^{\Delta t}\mathrm d t \, \zeta_{j\ell}[\boldsymbol{\lambda}(t)] \dot{\lambda}_{j}(t)\dot{\lambda}_{\ell}(t) \ . \label{BAR Variance Friction}
\end{align}
Thus the protocol designed to reduce the variance follows geodesics of $\zeta_{j\ell}$, and for one-dimensional control proceeds at velocity $\dot{\lambda} \propto \left(\zeta\right)^{-1/2}$. This, or the related force-variance metric~\cite{Shenfeld2009}, has been used to improve the precision of calculated binding potentials of mean force~\cite{Minh2019,Pham2011,Pham2012,Park2014}.
Unlike the variance, the protocol that maximizes the accuracy (minimizes bias) is different for unidirectional and bidirectional estimators. For unidirectional Jarzynski and mean-work estimators, near equilibrium the minimum-bias protocol is simply the minimum-dissipation protocol (the protocol that minimizes \eqref{Mean_work}), and therefore to leading order coincides with the protocol that minimizes \eqref{BAR Variance Friction}. For a slow protocol, the bias from Bennett's acceptance ratio can be approximated as
\begin{align}
\left\langle \delta \widehat{\Delta F}\right\rangle &\approx \frac{1}{4N}\int_{0}^{\Delta t}\mathrm d t \, \zeta^{(2)}_{j\ell m}[\boldsymbol{\lambda}(t)] \,\dot{\lambda}_{j}(t)\dot{\lambda}_{\ell}(t)\dot{\lambda}_{m}(t) \ , \label{BAR Bias Friction}
\end{align}
which follows from \eqref{Mean_work}. The protocol designed to reduce the (magnitude of) bias thus follows geodesics of the cubic Finsler metric $\zeta^{(2)}_{j\ell m}$, simplifying for one-dimensional control to $\dot{\lambda} \propto \left(\zeta^{(2)}\right)^{-1/3}$.
\section{Comparison between control strategies}\label{Comparison between control strategies}
In this section we compare naive and designed protocols based on the methods described in the previous sections: interpolated protocols combining STEP and slow-protocol approximations (section~\ref{Interpolated protocols}), strong-trap approximation (section~\ref{strong trap approximation}), and full control (section~\ref{Exact solutions}). We again turn to the example model system of driven barrier crossing to assess similarities, differences, and performance of the designed protocols. The naive protocol serves as a baseline which the designed protocols should outperform, and full control as a bound on what parametric control could possibly achieve.
Fig.~\ref{Protocol} shows the naive and designed protocols for intermediate driving speed, intermediate trap stiffness, and for fixed final control parameters. Every designed protocol has discontinuous jumps at the start and end of the protocol, and slows down and tightens the trap as it crosses the barrier. The behavior of the designed protocols can be understood in terms of the full-control solution. In one dimension the minimum-dissipation protocol linearly drives the quantiles of the probability distribution between the initial and final distributions. Since the naive protocol has constant trap stiffness, the probability distribution spreads out as it crosses the barrier, due to the negative curvature of the energy landscape at the barrier. To compensate for this, the designed protocols tighten as they cross the barrier; additionally, to compensate for the changes in the gradient of the energy landscape, the designed protocols slow down as they cross the barrier.
\begin{figure}
\includegraphics[width=\linewidth]{Protocol_medium_inter_tau8_protocol.pdf}
\caption{Time-dependent protocols for driven barrier crossing at intermediate protocol duration. Naive (black), interpolated (green), strong-trap (red), and full-control (blue) protocols. Snapshots of the total (solid), static hairpin (dotted), and time-dependent trap (dashed) potential are shown for $t=0$, $\Delta t/2$, and $\Delta t$. The hairpin, initial, and final potentials are the same across protocols (purple). Dash-dotted curves: median positions during corresponding protocol. Shading: 9\%, 25\%, 75\%, and 91\% quantiles, which are approximately evenly spaced for a Gaussian distribution. Barrier height is $E_{\rm B} = 4\beta^{-1}$, initial and final trap stiffnesses are $k_{\rm i} = k_{\rm f} = 4 \beta^{-1}/x_{\rm m}^2$, and protocol duration is $\tau_{\rm D}$ for diffusion time $\tau_{\rm D} = \Delta x_{\rm m}^2/(2D)$.}
\label{Protocol}
\end{figure}
Fig.~\ref{Protocol_work_distance} shows the excess work of the designed protocols compared to the naive protocol. For long duration (slow protocols) all of the designed protocols significantly outperform the naive; in this limit the dissipation of the interpolated protocol is indistinguishable from the minimum possible under full control. While the approximations made in the interpolated protocol become exact in the long-duration limit, the same is not true for the strong-trap approximation. As a result, the strong-control protocol has slightly higher dissipation in the long-duration limit, but would achieve the minimal dissipation in the limit of high trap stiffness. Furthermore, for intermediate protocol duration ($\Delta t \sim \tau_{\rm D}$), the strong control performs the best of the approximations, since the approximation does not explicitly depend on the protocol duration. For short duration (fast protocols), all the designed protocols perform similarly well.
\begin{figure}
\includegraphics[width=\linewidth]{work_work_diff.pdf}
\caption{Performance of the naive (black circles), interpolated (green squares), strong-trap (red triangles), and full-control (blue dashed) protocols. (a) Excess work $\langle W_{\rm ex} \rangle_{\boldsymbol{\Lambda}}$ and (b) work difference $\langle W \rangle_{\rm naive} - \langle W\rangle_{\rm des}$ between designed and naive protocols, all as functions of protocol duration $\Delta t/\tau_{\rm D}$ scaled by diffusion time $\tau_{\rm D}$. Error bars representing bootstrap-resampled 95\% confidence intervals are smaller than the markers.}
\label{Protocol_work_distance}
\end{figure}
In summary, the designed protocols perform similarly well, so it seems reasonable to choose the control strategy which is simplest to determine, provided the approximations/assumptions required are satisfied. Since the strong-trap approximation has explicit solutions for the designed protocol, it will generally be the simplest to determine; however, it is not as widely applicable as the interpolated protocols.
\section{Perspective and outlook}\label{Perspective and outlook}
We have reviewed optimal control in stochastic thermodynamics, from full control (section~\ref{Full control}) to parametric control (section~\ref{Parametric control}). General methods are known for determining minimum-dissipation protocols for parametric control ranging from strong (section~\ref{strong trap approximation}) to weak (section~\ref{Linear response}) and fast (section~\ref{Fast control}) to slow (section~\ref{Slow control}). These approximations fill out the four limits of parametric control (Fig.~\ref{Parametric_diagram}) and can be combined to design protocols that reduce dissipation at any driving speed (section~\ref{Interpolated protocols}). These designed protocols reproduce key features determined by exact solutions for Gaussian distributions and quadratic trapping potentials, such as control-parameter jumps at the start and end of the protocol (section~\ref{Fast control}) and linear driving of the distribution quantiles between specified endpoints (section~\ref{Exact solutions}).
For the model system of driven barrier crossing (section~\ref{Model Systems}), we compare the interpolated, strong-control, and full-control solutions (section~\ref{Comparison between control strategies}). The designed protocols significantly outperform the naive (linear) protocol. Strong control has explicit solutions for the minimum-dissipation protocol, making it the simplest approximation to use; however, it is only applicable to overdamped dynamics with strong trapping potentials.
Figure~\ref{Parametric_diagram} shows the limits in which we have solutions for the minimum-dissipation protocol for parametric control. The linear-response and slow-driving approximations have been applied to several different types of systems and control (section~\ref{Extensions}). A promising area of future study would be to explore if extensions and generalizations can be made to strong and fast control.
Another extension to consider is to allow for position-dependent diffusivity, which generically arises when a high-dimensional system is represented in a lower-dimensional space.\cite{best2010} For example, DNA hairpin experiments and molecular dynamics simulations take place in three spatial dimensions but are often represented as one-dimensional diffusive processes.\cite{hummer2005,neupane2016,neupane2017} This requires averaging out the behavior in the other two dimensions, and any inhomogeneity in these dimensions will result in position-dependent effective diffusivity in the one-dimensional representation. Measured diffusivities in molecular dynamics simulations often vary with position~\cite{hummer2005}, and although accurate measurements remain a technical challenge, some hairpin experiments report a position-dependent diffusivity.\cite{foster2018} Position-dependent diffusivity can alter the kinetic transition state of protein folding~\cite{chahine2007}, which will impact the design of minimum-dissipation protocols. Therefore, position-dependent diffusivity may be an important consideration in some applications.
An important aspect of designing protocols to minimize dissipation in both experiments and theory is the choice and number of control parameters. For the model system of driven barrier crossing, it has been shown that designed protocols with control over both trap center and stiffness (two-parameter control) significantly reduce dissipation compared to designed protocols that can only adjust the trap center (single-parameter control).~\cite{Blaber2022} However, adding even more control parameters does not significantly reduce dissipation any further, because this system is well approximated as a one-dimensional Gaussian, for which the full-control solution only requires two parameters to adjust the mean and variance (section~\ref{Exact solutions}). Although this phenomenon is well explained for one-dimensional overdamped systems, considerably less is known more generally. For example, Ref.~\onlinecite{louwerse2022} compared one-, two-, and four-parameter control of the Ising model and found significant qualitative and quantitative differences between the designed protocols. Since the full-control solution for this system is not yet known, we cannot explain this phenomenon the same way we did for driven barrier crossing. Beyond simply the number of control parameters, it remains an open question which control parameters are the most important when designing protocols to minimize dissipation. The choice and number of control parameters will be important for experimental and computational applications, such as free-energy estimation.
Leveraging optimal-transport theory appears to be a promising approach towards a deeper understanding of optimal control in stochastic thermodynamics. Full-control solutions based on optimal-transport theory were initially only applicable to continuous classical systems with overdamped dynamics~\cite{Aurell2011,nakazato2021}; however, recent studies have begun to expand their range of applicability to discrete-state and quantum systems~\cite{dechant2022,dechant2022geometric,yoshimura2022,zhong2022,van2022,van2022Topological}.
The full-control solutions give an idealized view of optimal control and will always outperform parametric control, but can be used as the ideal solution that parametric control can strive towards and draw insight from. For example, linearly driving the quantiles between the initial and final endpoints is the minimum-dissipation protocol for a one-dimensional system under full control; this is reasonably well reproduced by parametric control of driven barrier crossing (section~\ref{Comparison between control strategies}). Furthermore, it has recently been shown that optimal-transport theory can be leveraged to design minimum-dissipation protocols under parametric control~\cite{zhong2022}, which is a promising new technique for determining exact minimum-dissipation protocols at any driving speed or strength of driving.
In this review we focused on protocols that reduce the average work or entropy production; however, higher-order moments (e.g., variance or skewness) or individual-trajectory optimization~\cite{proesmans2022} may also be of interest for strongly fluctuating systems. Ref.~\onlinecite{Solon2018} compared, for a breathing harmonic trap, the protocols that minimize work variance with those that minimize average work, finding that the two qualitatively differ for intermediate-to-fast driving speeds. In contrast, for slow near-equilibrium control the same protocols minimize average work and work variance (section~\ref{Next-order corrections and higher-order moments}); a more complete description of minimum-variance protocols far from equilibrium is therefore desirable.
This review has focused on systems driven by external control parameters, which is adequate for describing the experimental systems discussed in section~\ref{Model Systems}; however, this does not accurately describe autonomous machines. For example, ATP synthase \emph{in vivo} is driven by a proton gradient across the mitochondrial membrane which drives the coupled ${\rm F}_{\rm o}$ and ${\rm F}_{1}$ components. In this system, coupling between the components means that none of the components can be treated as external, and thus it is not obvious how the present discussion of optimal control applies to such autonomous machines. Some first steps towards the description of autonomous molecular machines are discussed in section~\ref{Extensions}; however, more work is required towards a full description of efficient autonomous machines.\cite{ehrich2022}
The majority of the studies on optimal control in stochastic thermodynamics have focused on theoretical understanding and simple toy models. As we have shown in this review, there is a deep understanding of minimum-dissipation protocols at both slow and fast driving speed and both weak and strong driving strength (Fig.~\ref{Parametric_diagram}). It would be interesting to see how these results apply to real physical systems and if they are able to achieve the promising results predicted by theory. The two most straightforward applications are to relatively simple experimental model systems (section~\ref{Model Systems}) in an analogous fashion to Ref.~\onlinecite{Tafoya2019}, and to free-energy estimation as discussed in section~\ref{Free energy estimation}, in a similar fashion to Refs.~\onlinecite{Minh2019,Pham2011,Pham2012,Park2014}.
\section*{Acknowledgments}
We thank Jannik Ehrich and Matt Leighton (SFU Physics) and Miranda Louwerse (SFU Chemistry) for feedback on the manuscript.
This work was supported by an SFU Graduate Deans Entrance Scholarship (SB), an NSERC Discovery Grant and Discovery Accelerator Supplement (DAS), and a Tier-II Canada Research Chair (DAS), and was enabled in part by support provided by WestGrid (\url{westgrid.ca}) and the Digital Research Alliance of Canada (\url{alliancecan.ca}).
\section{Introduction}
\label{sec:introduction}
For over a decade now, domain-specific languages for numerical partial
differential equations (henceforth PDEs) such as
Sundance~\cite{Long:2003,Long:2010}, FEniCS~\cite{Logg:2012}, and
now Firedrake~\cite{Rathgeber:2016} have enabled users to efficiently generate
algebraic systems from a high-level description of the variational
problems. Both FEniCS and Firedrake make use of the Unified Form
Language~\cite{Alnaes:2014} as a description language for the weak
forms of PDEs, converting it into efficient low-level code for form
evaluation. They also share a Python interface that, for the
intersection of their feature sets, is nearly source-compatible.
These high-level PDE codes succeed by connecting a rich description
language for PDEs to effective lower-level solver libraries
such as PETSc~\cite{Balay:2016,Balay:1997}
or Trilinos~\cite{Heroux:2005} for the requisite, and
performance-critical, numerical (non)linear algebra.
These high-level PDE projects utilise the solver packages in an
essentially \emph{unidirectional} way: the residuals are evaluated,
Jacobians formed, and are then handed off to mainly algebraic
techniques. Hence, the codes work at their best when (compositions
of) existing black-box matrix techniques effectively solve the
algebraic systems. However, in many situations the best
preconditioners require additional structure beyond a purely algebraic
(matrix and vector-level) problem description. Many of the
preconditioners for block systems based on block factorisations
require discretisations of differential operators not contained in the
original problem. These include the pressure-convection-diffusion
(PCD) approximation for Navier-Stokes~\cite{Kay:2002,Elman:2008}, and
preconditioners for models of phase
separation~\cite{Farrell:2016,Bosch:2014}. An alternative approach to
deriving preconditioners for block systems is to use arguments from
functional analysis to arrive at block-diagonal preconditioners. While
these are often representable as the inverse of an assembled operator,
in some cases a mesh- and parameter-independent preconditioner that
arises from such an analysis requires the action of a sum of inverses. An
example is the preconditioner suggested in~\cite[example
4.2]{Mardal:2011} for the time-dependent Stokes problem.
While a high-level PDE engine makes it possible to obtain these new
operators at low user cost, additional care is required to develop a
clean, extensible interface. For example, the PCD preconditioner has
been implemented using Sundance and Playa~\cite{Howle:2012a}, although
the resulting code tightly fused the description of the problem with a
highly specialised specification of the preconditioner. Similarly, in
the FEniCS project, \texttt{cbc.block}~\cite{Mardal:2012} allows the
model developer to write complex block preconditioners as a
composition of high-level ``symbolic'' linear algebra operations;
Trilinos provides similar functionality through Teko~\cite{Cyr:2016}.
However, in these codes one must specify up front how to perform the
block decomposition. Switching to a different preconditioner requires
changing the model code, and there is no high-level manipulation of
variational problems within the blocks. Ideally, one would like a
mechanism to implement the specialised preconditioner as a separate
component, leaving the original application code essentially
unchanged.
\emph{Extensibility} of fundamental types such as solvers,
preconditioners, and matrices has long been a main concern of the
PETSc project. For example, the action of a finite difference stencil
applied to a vector can be wrapped behind a matrix ``shell'' interface
and used interchangeably with explicit sparse matrices for many purposes.
Users can
similarly provide custom types of Krylov methods or preconditioners.
Thanks to petsc4py~\cite{Dalcin:2011}, such extensions can be
implemented in Python as well as C. Moreover, PETSc provides powerful
tools to switch between (compositions of) existing and custom tools
either in the application source code or through command-line options.
In this work, we enable the rapid development of high-performance
preconditioners as PETSc extensions using Firedrake and petsc4py. To
facilitate this, we have developed a custom matrix type that embeds
the complete Firedrake problem description (UFL variational forms,
function spaces, meshes, etc) in a Python context accessible to PETSc.
As a happy byproduct, this enables low-memory matrix-free evaluation
of matrix-vector products. This also allows us to produce PETSc
preconditioners in petsc4py that act on this new matrix type,
accessing the PDE-level information as needed. For example, a PCD
preconditioner can access the meshes and function spaces to create
bilinear forms for, and hence assemble, the needed mass, stiffness,
and convection-diffusion operators on the pressure space along with
PETSc \texttt{KSP} (linear solver) contexts for the inverses.
Moreover, once such preconditioners are available in a globally
importable module, it is now possible to use them instead of existing
algebraic preconditioners by a straightforward runtime modification of
solver configuration options. So, we use our PDE language not only to
generate problems to feed to the solver, but also to extend that
solver's capabilities.
Our discussion and implementation will focus on Firedrake as
the top-level PDE library and PETSc as the solver library.
Firedrake already relies heavily on PETSc through petsc4py and
has a nearly pure Python implementation. Provided one is content with
the Python interface, it should not be terribly difficult to adapt
these techniques for use in FEniCS. Regarding solver libraries, the
idiom and usage of Trilinos and PETSc (if not their
actual capabilities) differ considerably, so we make no speculation as
to the difficulties associated with adapting our techniques in that
direction.
In the rest of the paper, we set up certain model problems in
\cref{sec:model}. After this, in \cref{sec:algs} we survey certain algorithms that go
beyond the current mode of algebraically preconditioning assembled
matrices. These include matrix-free methods, Schwarz-type
preconditioners, and preconditioners that require auxiliary
differential operators. It turns out that a proper implementation of
the first of these, matrix-free methods, provides a very clean way to
communicate PDE-level problem information between PETSc matrices and
custom preconditioners, and we describe the implementation of this and
relevant modifications to Firedrake in \cref{sec:impl}.
Finally, we give examples demonstrating the efficacy of our approach
to the model problems of interest in \cref{sec:examples}.
\section{Some model applications}
\label{sec:model}
\subsection{The Poisson equation}
\label{sec:poisson-eq}
It is helpful to fix some target applications and describe things we
would like to expedite within our top-level code.
A usual starting point is to consider a second-order scalar elliptic
equation. Let $\Omega \subset \mathbb{R}^{d}$,
where $d=1, 2, 3$, be a domain with boundary $\Gamma$.
We let
$\kappa:\Omega \rightarrow \mathbb{R}^{+}$ be some positive-valued coefficient.
On the interior of $\Omega$, we seek a function $u$ satisfying
\begin{equation}
-\nabla \cdot \left( \kappa \nabla u \right) = f
\label{eq:poisson-strong}
\end{equation}
subject to the boundary conditions $u = u_{\Gamma_D}$ on $\Gamma_D$ and
$\nabla u \cdot n = g$ on $\Gamma_N$, where $\Gamma_D$ and $\Gamma_N$
partition $\Gamma$.
After the usual technique of multiplying by a test function and
integrating by parts, we reach the weak form of seeking $u \in
V_\Gamma \subset V$ such that
\begin{equation}
\left(\kappa \nabla u, \nabla v \right) = \left( f, v \right) +
\left\langle \kappa g, v \right\rangle
\label{eq:poisson-weak}
\end{equation}
for all $v \in V_0 \subset V$, where $V$ is the finite element space,
$V_0$ the subspace with vanishing trace on $\Gamma_D$. Here $(\cdot,
\cdot)$ denotes the standard $L^2$ inner product over $\Omega$, and
$\langle \cdot , \cdot \rangle$ that over $\Gamma$.
The finite element method leads to a
linear system:
\begin{equation}
A u = f,
\end{equation}
where $A$ is symmetric and positive-definite (positive semi-definite
if $\Gamma_D = \varnothing$), and the vector $f$
includes both the forcing term and contributions from the boundary
conditions.
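In Firedrake, this problem is transcribed almost directly from the weak
form. A minimal sketch (mesh, data, and solver options are
illustrative):
\begin{verbatim}
from firedrake import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 2)
u = TrialFunction(V)
v = TestFunction(V)
kappa = Constant(1.0)
f = Constant(1.0)
a = inner(kappa*grad(u), grad(v))*dx
L = inner(f, v)*dx
bc = DirichletBC(V, 0, "on_boundary")  # u = 0 on Gamma_D
uh = Function(V)
solve(a == L, uh, bcs=bc,
      solver_parameters={"ksp_type": "cg",
                         "pc_type": "hypre"})
\end{verbatim}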
\subsection{The Navier-Stokes equations}
\label{sec:navier-stokes-eq}
Moving beyond the simple Poisson operator, the incompressible
Navier-Stokes equations provide additional challenges.
\begin{subequations}
\begin{align}
-\frac{1}{\text{Re}} \Delta \vec{u} + \vec{u} \cdot \nabla \vec{u} + \nabla p & = 0, \\
\nabla \cdot \vec{u} & = 0
\end{align}
\end{subequations}
together with suitable boundary conditions.
Among the diverse possible methods, we shall focus here on inf-sup
stable mixed finite element spaces such as
Taylor-Hood~\cite{Brenner:2008}. This is merely for simplicity of
explication and does not represent a limitation of our approach or
implementation. Taking $V_\Gamma$ to be subset of the discrete
velocity space satisfying any strongly imposed boundary conditions and $W$ the
pressure space, we have the weak form of seeking $\vec{u}, p$ in
$V_\Gamma \times W$ such that
\begin{subequations}
\begin{align}
\frac{1}{\text{Re}} \left( \nabla \vec{u} , \nabla \vec{v} \right)
+ \left( \vec{u} \cdot \nabla \vec{u} , \vec{v} \right)
- \left( p, \nabla \cdot \vec{v} \right) & = 0, \\
\left( \nabla \cdot \vec{u} , w \right) & = 0
\end{align}
\end{subequations}
for all $\vec{v}, w \in V_0 \times W$, where $V_0$ is the velocity subspace
with vanishing Dirichlet boundary conditions.
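In UFL, the Taylor-Hood discretisation and nonlinear residual read as
follows (a sketch; the boundary ids and data are illustrative, assuming
a lid-driven cavity configuration):
\begin{verbatim}
from firedrake import *

mesh = UnitSquareMesh(32, 32)
V = VectorFunctionSpace(mesh, "CG", 2)  # velocity
W = FunctionSpace(mesh, "CG", 1)        # pressure
Z = V * W

up = Function(Z)
u, p = split(up)
v, w = TestFunctions(Z)
Re = Constant(100.0)

F = ((1/Re)*inner(grad(u), grad(v))*dx
     + inner(dot(grad(u), u), v)*dx
     - p*div(v)*dx
     + div(u)*w*dx)

bcs = [DirichletBC(Z.sub(0), Constant((1, 0)), (4,)),
       DirichletBC(Z.sub(0), Constant((0, 0)), (1, 2, 3))]
# solve(F == 0, up, bcs=bcs)  # Newton, with UFL-derived Jacobian
\end{verbatim}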
Relative to the Poisson equation, we now have several additional
challenges. The nonlinearity may be addressed by Newton
linearisation, and UFL provides automatic differentiation to produce
the Jacobian. We also have multiple finite element spaces, one of
which is vector-valued. Further, for each nonlinear iteration, the
required linear system is larger and more complicated, a
block-structured saddle point system of the form
\begin{equation}
\label{eq:NSEstiff}
\begin{bmatrix}
F & -B^t \\
B & 0
\end{bmatrix}
\begin{bmatrix}
\vec{u} \\ p
\end{bmatrix}
=
\begin{bmatrix}
f_1 \\
f_2
\end{bmatrix}.
\end{equation}
Black-box algebraic preconditioners tend to perform poorly here, and
we discuss some more effective alternatives
in \cref{sec:algs}.
\subsection{Rayleigh-B\'enard convection}
\label{sec:rayleigh-benard-eq}
Many applications rely on coupling other processes to the
Navier-Stokes equations. For example, Rayleigh-B\'enard
convection~\cite{Carey:1986} includes thermal variation in the fluid, although
we take the Boussinesq approximation that temperature
variations affect the momentum balance only as a buoyant force. We
have, after nondimensionalisation,
\begin{subequations}
\begin{align}
- \Delta \vec{u} + \vec{u} \cdot \nabla \vec{u} + \nabla p & = -\frac{\text{Ra}}{\text{Pr}} T g\hat{\mathbf{z}}, \\
\nabla \cdot \vec{u} & = 0, \\
-\text{Pr} \Delta T + \vec{u} \cdot \nabla T & = 0,
\end{align}
\label{eq:rb-residual}
\end{subequations}
where $\text{Ra}$ is the Rayleigh number, $\text{Pr}$ is the Prandtl
number, $g$ is the acceleration due to gravity, and $\hat{\mathbf{z}}$ the
upward-pointing unit vector.
The problem is usually posed on rectangular domains, with no-slip boundary
conditions on the fluid velocity. The temperature boundary conditions
typically involve imposing a unit temperature difference in one
direction with insulating boundary conditions in the others.
After discretisation and Newton linearisation, one obtains a block
$3\times 3$ system
\begin{equation}
\label{eq:RB3x3}
\begin{bmatrix}
F & -B^t & M_1 \\ B & 0 & 0 \\ M_2 & 0 & K
\end{bmatrix}
\begin{bmatrix} \vec{u} \\ p \\ T \end{bmatrix}
= \begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix}.
\end{equation}
Here, the $F$ and $B$ matrices are as obtained in the Navier-Stokes
equations (with $\text{Re} = 1$). The $M_1$ and $M_2$ terms arise from the
temperature/velocity coupling, and $K$ is the convection-diffusion
operator for temperature.
Alternately, letting
\begin{subequations}
\begin{align}
N & = \begin{bmatrix} F & -B^t \\ B & 0 \end{bmatrix}, \\
\widetilde{M}_1 & = \begin{bmatrix} M_1 \\ 0 \end{bmatrix}, \\
\widetilde{M}_2 & = \begin{bmatrix} M_2 & 0 \end{bmatrix},
\end{align}
\end{subequations}
we can write the stiffness matrix as block $2 \times 2$ matrix
\begin{equation}
\label{eq:rayleighbenard2x2}
\begin{bmatrix} N & \widetilde{M}_1 \\ \widetilde{M}_2 & K \end{bmatrix}.
\end{equation}
Formulating the matrix in this way allows us to consider composing some
(possibly custom) solver technique for Navier-Stokes with other
approaches to handle the temperature equation and coupling.
\section{Solution techniques}
\label{sec:algs}
Via UFL, Firedrake has a succinct, high-level description of these
equations and can readily linearise and assemble discrete operators.
When efficient techniques for the discrete system exist within PETSc,
obtaining the solution is as simple as providing the proper options.
When direct methods are applicable, simple options like
\texttt{-ksp\_type preonly -pc\_type lu} suffice -- possibly augmented
with the selection of a package to perform the factorisation like
MUMPS~\cite{Amestoy:2000} or
UMFPACK~\cite{Davis:2004}. Similarly, when iterative
methods with black-box preconditioners such as incomplete
factorisation or algebraic multigrid fit the bill, options such as
\texttt{-ksp\_type cg -pc\_type hypre} work. PETSc also provides
many block preconditioner mechanisms via \texttt{FieldSplit},
allowing users to specify PETSc solvers for inverting the relevant
blocks~\cite{Brown:2012}. Firedrake automatically enables this by
specifying index sets for each function space, passing the information
to PETSc when it initialises the solver.
A key feature of PETSc is
that these choices can be made at runtime via options, \emph{without}
modifying the user code that specifies the PDE to solve.
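For instance, a Schur-complement fieldsplit for \cref{eq:NSEstiff} can
be requested entirely through options (the values here are
illustrative):
\begin{verbatim}
parameters = {
    "ksp_type": "fgmres",
    "pc_type": "fieldsplit",
    "pc_fieldsplit_type": "schur",
    "pc_fieldsplit_schur_fact_type": "lower",
    "fieldsplit_0_ksp_type": "preonly",
    "fieldsplit_0_pc_type": "hypre",
    "fieldsplit_1_ksp_type": "gmres",
    "fieldsplit_1_pc_type": "none",
}
# solve(F == 0, up, bcs=bcs, solver_parameters=parameters)
\end{verbatim}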
As we stated in the introduction, however, many techniques for
preconditioning require information beyond what can be learned by an
inspection of matrix entries and user-specified options.
It is our goal now to survey some of these techniques in more
detail, after which we describe our implementation of custom PETSc
preconditioners to utilise application-specific problem descriptions
in a clean, efficient, and user-friendly way.
\subsection{Matrix-free methods}
\label{sec:matrix-free}
Switching from a low-order method to a higher-order one simply requires
changing a parameter in the top-level Firedrake application code.
However, such a small change can profoundly affect the overall
performance footprint. Assembly of stiffness matrices becomes more
expensive, both in terms of time and space, as the order increases.
An alternative that does not have the same constraints
is to use a \emph{matrix-free} implementation of the matrix-vector
product. This is sufficient for Krylov methods, although not for
algebraic preconditioners requiring matrix entries.
Rather than producing a sparse matrix $A$, one provides a function
that, given a vector $x$, computes the product $Ax$. Abstractly,
consider a bilinear form $a$ on a discrete space $V$ with basis
$\{ \psi_i \}_{i=1}^N$. The $N\times N$ stiffness matrix
$A_{ij} = a(\psi_j, \psi_i)$ can be applied to a vector $x$ as
follows. Any vector $x$ is isomorphic to some function $u \in V$ via
the identification $ x \leftrightarrow u = \sum_{j=1}^N x_j \psi_j$.
Then, via linearity,
\begin{equation}
\label{eq:action}
\begin{split}
\left( A x \right)_i & = \sum_{j=1}^N A_{ij} x_j = \sum_{j=1}^N a(\psi_j, \psi_i) x_j \\
& = a\Bigl(\sum_{j=1}^N x_j \psi_j, \psi_i\Bigr) = a(u, \psi_i).
\end{split}
\end{equation}
Just like matrices or load vectors, this can be computed by assembling
elementwise contributions in the standard way, considering $u$ to be
just some given member of $V$.
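In Firedrake, UFL's \texttt{action} provides exactly this operation,
replacing the trial function in the bilinear form with a coefficient. A
minimal sketch (boundary conditions are omitted here; they are
discussed next):
\begin{verbatim}
from firedrake import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 4)
u = TrialFunction(V)
v = TestFunction(V)
a = inner(grad(u), grad(v))*dx

w = Function(V)             # the vector x, as a function in V
y = assemble(action(a, w))  # y = A x, with A never assembled
\end{verbatim}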
In the presence of strongly-enforced boundary conditions, the bilinear
form acts on a subspace $V_0 \subset V$. When a matrix is explicitly
assembled, one typically either edits (or removes) rows and columns to
incorporate the boundary conditions. Care must be taken in enforcing
the boundary conditions to ensure that the matrix-free action agrees
with multiplication by the matrix that would have been assembled.
Typically, such an approach has a much lower startup cost than an
explicit sparse matrix since no assembly is required. Forgoing an
assembled matrix also saves considerably on memory usage. Moreover,
the arithmetic intensity ($\operatorname{ai}$) of matrix-free operator
application is almost always higher than that of an assembled matrix
(sparse matrix multiplication has $\operatorname{ai} \approx 1/6 \operatorname{flop}/\operatorname{byte}$
\cite{Gropp:2000}). Matrix-free methods are therefore an increasingly
good match to modern memory bandwidth-starved hardware, where the
balanced arithmetic intensity is $\operatorname{ai} \approx 10$. The
algorithmic complexity is either the same ($\mathcal{O}(p^{2d})$ for
degree $p$ elements in $d$ dimensions), or better
($\mathcal{O}(p^{d+1})$) if a structured basis can be exploited
through sum factorisation. On simplex elements, the latter
optimisation is not currently available through the form compiler in
Firedrake. Thus we will expect our matrix-free operator applications
to have the same algorithmic scaling as assembled matrices (though
with different constant factors). If we can exploit the vector units
in modern processors effectively, we can expect that matrix-free
applications will be at least competitive with, and often faster than,
assembled matrices (for example \cite{May:2014} demonstrate
significant benefits, relative to assembled matrices, for $Q_2$
operator application on hexahedra).
\subsection{Preconditioning high-order discretisations: additive
Schwarz}
\label{sec:additive-schwarz}
Matrix-free methods preclude algebraic preconditioners such as
incomplete factorisation or algebraic multigrid. Depending on the
available smoothers, if a mesh hierarchy is available, geometric
multigrid is a possibility~\cite{Brandt:1977,Brandt:2011}.
Here, we discuss a family of additive Schwarz methods. Originally
proposed by Pavarino in~\cite{Pavarino:1993,Pavarino:1994},
these methods fall within the broad family of subspace correction
methods~\cite{Xu:1992}.
These two-level methods decompose the finite element space
into a low order space on the original mesh and the high-order space
restricted to local pieces of the mesh, such as patches of cells around
each vertex. Any member of the original finite element space can be
written as a combination of items from this collection of
subspaces, although the decomposition in this case is certainly not
orthogonal. One obtains a preconditioner for the original finite element
operator by additively combining the (possibly approximate) inverses
of the restrictions of the original operator to these spaces.
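In operator form, writing $R_0$ for the restriction onto the low-order
space and $R_i$ for the restriction onto the $i$-th patch space, the
additive preconditioner is
\begin{equation}
P^{-1} = R_0^T A_0^{-1} R_0 + \sum_{i} R_i^T A_i^{-1} R_i,
\end{equation}
where $A_0$ and $A_i$ are the Galerkin restrictions of the original
operator to the respective subspaces.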
Sch\"oberl~\cite{Schoeberl:2008} showed for the symmetric coercive
case that the preconditioned system has eigenvalue bounds independent
of both mesh size and polynomial degree and gave computational
examples from elasticity confirming the theory. Although not covered
by Sch\"oberl's analysis, these methods have also been applied with
success to the Navier-Stokes equations~\cite{Pavarino:2000}.
This approach is \emph{generic} in that it can be attempted for any
problem. Given a bilinear form over a function space of degree $k$,
one can programmatically build the lowest-order instance of the
function space and set up the vertex patches for the mesh. Then, one
can easily modify the bilinear form to operate on the new subspaces
and perform the subspace correction. We have developed such a generic
implementation, parametrised over the UFL problem description.
One drawback of this method is the relatively high memory cost of
storing the patch-wise Cholesky or LU factors, especially at high
order and in 3D. One may further decompose the local patch spaces
through ``spider vertices'' to reduce the memory required
and still retain a powerful method~\cite{Schoeberl:2008}. Such
refinements are possible within our software framework, although we have not
pursued them to date.
\subsection{Block preconditioners and Schur complement approximations}
\label{sec:block-preconditioners}
Having motivated matrix-free methods and preconditioners for
higher-order discretisations in the simple case of the Poisson
operator, we now return to the Navier-Stokes equations introduced
earlier. In particular, we are interested in
preconditioners for the Jacobian stiffness matrix \cref{eq:NSEstiff}.
Block factorisation of the system matrix provides a starting point for
many powerful preconditioners~\cite{Benzi:2005,Elman:2008,Elman:2014}. Consider
the block LDU factorisation of the system matrix in \cref{eq:NSEstiff} as
\begin{equation}
\begin{bmatrix} F & -B^t \\ B & 0 \end{bmatrix}
=
\begin{bmatrix} I & 0 \\ B F^{-1} & I \end{bmatrix}
\begin{bmatrix} F & 0 \\ 0 & S \end{bmatrix}
\begin{bmatrix} I & -F^{-1} B^{t} \\ 0 & I \end{bmatrix},
\end{equation}
where $I$ is the identity matrix of the proper size and
$S = B F^{-1} B^t$ is the Schur complement. The inverse of this matrix is then
given by
\begin{equation}
\begin{bmatrix} F & -B^t \\ B & 0 \end{bmatrix}^{-1}
=
\begin{bmatrix} I & F^{-1} B^{t} \\ 0 & I \end{bmatrix}
\begin{bmatrix} F^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -B F^{-1} & I \end{bmatrix}.
\end{equation}
Since this is the exact inverse, applying it in a preconditioning
phase leads to Krylov convergence in a single iteration if all
blocks are inverted exactly.
Note that inverting the Schur complement matrix $S$ either requires
assembling it as a dense matrix or else using a
Krylov method where its action is computed implicitly via two
matrix-vector products and an inner solve with $F$.
Two kinds of approximations lead to more practical
methods. For one, it is possible to neglect either or both of the
triangular factors. This gives a structurally simpler preconditioner,
at the cost (assuming exact inversion of $S$) of a slight increase in
the iteration count. For example, it is common to use only
the lower triangular part of the matrix, giving a preconditioning matrix of
the form
\begin{equation}
\label{eq:NSEtriP}
P =
\begin{bmatrix} F & 0 \\ B & S \end{bmatrix}
\end{equation}
which has inverse
\begin{equation}
P^{-1} =
\begin{bmatrix} F^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -B F^{-1} & I \end{bmatrix}.
\end{equation}
Using $P$ as a left preconditioner, $P^{-1} A$ is readily seen to
give a unit upper triangular matrix, and it is known that GMRES will
converge in two (very expensive) iterations since the resulting
preconditioned matrix system has a quadratic minimal
polynomial~\cite{Murphy:2000}.
Given the cost of inverting $S$, it is also desirable to devise a
suitable approximation. A simple approach is to use a
pressure mass matrix, which gives mesh-independent but rather large
eigenvalue bounds~\cite{Elman:1996}. More sophisticated approximations are
well-documented in the literature~\cite{Elman:2008}. For our
purposes, we will consider one in particular, the
\emph{pressure convection-diffusion} (hence PCD)
preconditioner~\cite{Elman:2006,Kay:2002}. It
is based on the approximation
\begin{equation}
\label{eq:pcddef}
S^{-1} =
\left(B F^{-1} B^{t}\right)^{-1}
\approx K_p^{-1} F_p M^{-1}_p \equiv X^{-1},
\end{equation}
where $K_p$ is the Laplace operator acting on the pressure
space, $M_p$ is the mass matrix, and $F_p$ is a discretisation of the
convection-diffusion operator
\begin{equation}
\mathcal{L} p \equiv
-\frac{1}{Re} \Delta p + \vec{u_0} \cdot \nabla p,
\end{equation}
with $\vec{u_0}$ the velocity at the current Newton iterate. Although
this requires solving linear systems, the mass and stiffness matrices
are far cheaper to invert than $F$.
While one could use this approximation to precondition a Krylov solver
for $S$, it is far more typical to replace $S^{-1}$ with $X^{-1}$.
For example, using the triangular
preconditioner \cref{eq:NSEtriP} gives the further approximation in a
block preconditioner:
\begin{equation}
\label{eq:nspcdpc}
\widetilde{P}^{-1} =
\begin{bmatrix} F^{-1} & 0 \\ 0 & X^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -B F^{-1} & I \end{bmatrix}
= \begin{bmatrix} F^{-1} & 0 \\ 0 & K_p^{-1} F_p M_p^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -B F^{-1} & I \end{bmatrix}.
\end{equation}
Although bypassing the solution of the Schur complement system
increases the outer iteration count, it typically results in a much
more efficient overall method. We note that strong statements about
the exact convergence in the presence of approximate inverses are
rather delicate, and refer the reader to \cite[\S9.2, and
\S{}10]{Benzi:2005} for an overview of convergence results for such
problems. Also, note that only the action of the off-diagonal blocks
is required for the preconditioner so that a matrix-free treatment is
appropriate.
Preconditioning strategies for the Navier-Stokes equations can quickly
find their way into problems coupling other processes to fluids. We
return now to the B\'enard convection stiffness matrix
\cref{eq:rayleighbenard2x2}, where $N$ is itself the Navier-Stokes
stiffness matrix in \cref{eq:NSEstiff}. Block preconditioners based
on this formulation, replacing $N^{-1}$ with a very inexact solve via
PCD-preconditioned GMRES, proved more effective than techniques based
on $3 \times 3$ preconditioners~\cite{Howle:2012}. Here, we present a
lower-triangular block preconditioner rather than the upper-triangular
one in~\cite{Howle:2012} with similar practical results.
A block Gauss-Seidel preconditioner for \cref{eq:rayleighbenard2x2} can be taken as
\begin{equation}
P = \begin{bmatrix} N & 0 \\ \widetilde{M}_2 & K \end{bmatrix},
\end{equation}
the inverse of which requires evaluation of $N^{-1}$ and $K^{-1}$:
\begin{equation}
P^{-1} = \begin{bmatrix} I & 0 \\ 0 & K^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -\widetilde{M}_2 & I \end{bmatrix}
\begin{bmatrix} N^{-1} & 0 \\ 0 & I \end{bmatrix}.
\end{equation}
Replacing these inverses with approximations/preconditioners
$\widetilde{N}^{-1}$ and $\widetilde{K}^{-1}$ gives
\begin{equation}
\widetilde{P}^{-1} = \begin{bmatrix} I & 0 \\ 0 & \widetilde{K}^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -\widetilde{M}_2 & I \end{bmatrix}
\begin{bmatrix} \widetilde{N}^{-1} & 0 \\ 0 & I \end{bmatrix}.
\end{equation}
At this point, replacing $\widetilde{N}^{-1}$ with the
block preconditioner \cref{eq:nspcdpc} recovers a block
lower-triangular $3 \times 3$ preconditioner:
\begin{equation}
\label{eq:rbpc}
\widetilde{P}^{-1} =
\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\
0 & 0 & \widetilde{K}^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ -M_2 & 0 & I \end{bmatrix}
\begin{bmatrix} F^{-1} & 0 & 0 \\
0 & K_p^{-1} F_p M_p^{-1} & 0 \\
0 & 0 & I
\end{bmatrix}
\begin{bmatrix} I & 0 & 0 \\ -B F^{-1} & I & 0 \\ 0 & 0 & I
\end{bmatrix}.
\end{equation}
\section{Implementation}
\label{sec:impl}
The core object in our implementation is an appropriately designed
``implicit'' matrix that provides matrix-vector actions
and also makes PDE-level discretisation
information available to custom preconditioners within PETSc.
Here, we describe this class, how it interacts with
both Firedrake and PETSc, and how it provides the requisite
functionality. Then, we demonstrate how it cleanly provides the
proper information for custom preconditioners.
\subsection{Implicit matrices}
\label{sec:implicit-matrices}
First, we note that Firedrake deals with
matrices at two different levels. A Firedrake-level
\texttt{Matrix} instance maintains symbolic information (the
bilinear form, boundary conditions). It in turn contains a
PETSc \texttt{Mat} (typically in some sparse format), which is used when
creating solvers.
Our implicit matrices mimic this structure, adding an
\texttt{ImplicitMatrix} sibling class to the existing \texttt{Matrix},
lifting shared features into a common \texttt{MatrixBase} class. Where
the \texttt{ImplicitMatrix} differs is that its PETSc \texttt{Mat} now has
type \texttt{python} (rather than a normal sparse format such as \texttt{aij}). To
provide the appropriate matrix-vector actions, the
\texttt{ImplicitMatrix} instance provides an
\texttt{ImplicitMatrixContext} instance to the PETSc
\texttt{Mat}\footnote{Owing to the cross-language issues and lack of
proper inheritance mechanisms in C, this is the standard way of
implementing new types from Python in PETSc.}. This context object
contains the PDE-level information -- the bilinear form and boundary
conditions -- necessary to implement matrix-vector products.
Moreover, this context object enables building custom preconditioners
since it is available from within the ``low-level'' PETSc \texttt{Mat}.
UFL's \texttt{adjoint} function, which reverses the test and trial
functions in a bilinear form, also makes it straightforward to provide
the action of the matrix transpose, needed in some Krylov
methods~\cite[\S 7.1]{Saad:2003}. The implicit matrix constructor
simply stashes the action of the original bilinear form and its
adjoint, and the multiplication and transposed multiplication are
nearly identical using Firedrake's \texttt{assemble} method with boundary
conditions appropriately enforced.
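As an illustration, the essential structure of such a context object
might look as follows (a minimal sketch in the style of petsc4py's
\texttt{python} matrix interface; names, boundary-condition handling and
error checking are simplified relative to the actual Firedrake
implementation):
\begin{lstlisting}
from firedrake import Function, action, adjoint, assemble

class MatrixFreeContext(object):
    """Sketch of the Python context behind an implicit matrix: PETSc
    calls mult/multTranspose here, and each product is a residual-type
    (one-form) assembly of the bilinear form's action."""
    def __init__(self, a):
        test, trial = a.arguments()
        self._u = Function(trial.function_space())  # input for mult
        self._v = Function(test.function_space())   # input for multTranspose
        self._au = action(a, self._u)               # symbolic action a(., u)
        self._atv = action(adjoint(a), self._v)     # transposed action

    def _assemble_into(self, form, xfunc, X, Y):
        with xfunc.dat.vec_wo as vec:
            X.copy(vec)             # PETSc Vec -> Firedrake Function
        result = assemble(form)     # one matrix-free "assembly"
        with result.dat.vec_ro as vec:
            vec.copy(Y)             # assembled result -> PETSc Vec

    def mult(self, mat, X, Y):
        self._assemble_into(self._au, self._u, X, Y)

    def multTranspose(self, mat, X, Y):
        self._assemble_into(self._atv, self._v, X, Y)
\end{lstlisting}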
We enable \texttt{FieldSplit} preconditioners on implicit matrices by
means of overloading submatrix extraction. The PETSc interface to
submatrix extraction does not presuppose any particular block
structure. Instead, the function receives integer index sets
corresponding to rows and columns to be extracted into the submatrix.
Since the PDE-level description operates at the level of fields, we
only support extraction of submatrices that correspond to some subset
of the fields that the matrix contains. Our method determines
whether a provided index set is a concatenation of a subset of the
index sets defining the different fields and returns the list of
integer labels of the fields in the subset. While this implementation
compares index sets by value and therefore increases in expense as the
number of per-process degrees of freedom increases, it must only be
carried out once per solve (be it linear or non-linear), since the
index set structure does not change. We have not found it to be a
measurable fraction of the solution time in our implementation.
Splitting implicit matrices offers an efficient alternative to
splitting assembled sparse matrices.
Currently, splitting a standard assembled matrix into
blocks requires the allocation and copying of the subblocks.
While PETSc also includes a ``nested'' matrix type (essentially an
array of pointers to matrices),
collecting multiple fields into a single block (e.g.~the pressure
and velocity in B\'enard convection) requires that the user code state
up front the order in which nesting occurs. This would mean that
editing/recompilation of the code is necessary to switch
between preconditioning approaches that use different variable
splittings, contrary to our goal of efficient high-level solver
configuration and customisation.
The typical user interface in Firedrake involves interacting with
PETSc via a \texttt{VariationalSolver}, which takes charge of
configuring and calling the PETSc linear and nonlinear
solvers. It allocates matrices and sets the relevant callback
functions for Jacobian and residual evaluation to be used inside \texttt{SNES}
(PETSc's nonlinear solver object). Switching between implicit and
standard sparse matrices is now facilitated through additional PETSc
database options, so that the type of Jacobian matrix is set with
\texttt{-mat\_type} and the, possibly different, preconditioner matrix
type with \texttt{-pmat\_type}. This latter option facilitates using
assembled matrices for the matrix-vector product, while still
providing PDE-level information to the solver. In this way, enabling
matrix-free methods simply requires an options change in Firedrake and
no other user modification.
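For example, a hypothetical script fragment that switches an existing
solve to a matrix-free operator while still assembling a sparse matrix
for the preconditioner reads:
\begin{lstlisting}
# Matrix-free operator, assembled (aij) preconditioning matrix.
solve(a == L, uh, bcs=bcs,
      solver_parameters={"mat_type": "matfree",
                         "pmat_type": "aij",
                         "ksp_type": "cg",
                         "pc_type": "ilu"})
\end{lstlisting}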
\subsection{Preconditioners}
\label{sec:preconditioners}
It is helpful to briefly review certain aspects of the PETSc formalism
for preconditioners.
One can think of (left) preconditioning as converting a linear system
\begin{equation}
b - Ax = 0
\end{equation}
into an equivalent system
\begin{equation}
\hat{P} \left( b - Ax \right) = 0,
\end{equation}
where $\hat{P}(\cdot)$ applies an approximation of the inverse of the
preconditioning matrix $P$ to the residual\footnote{We use this
notation since it is possible that $\hat{P}$ is not a linear operator.}.
Then, given a current iterate $x_i$, we have the residual
\begin{equation}
r_i = b - Ax_i.
\end{equation}
PETSc preconditioners are specified to act on residuals, so that
$\hat{P}(r_i)$ then gives an approximation to the error $e_i = x -
x_i$. This convention allows sparse direct methods to act as
preconditioners: they convert the residual into the exact (up to
roundoff error) error, and so direct solvers conform to
the \texttt{KSP} interface (e.g.~\texttt{-ksp\_type preonly -pc\_type lu}).
PETSc preconditioners are built in terms of both the system matrix $A$
and a possibly different ``preconditioning matrix'' $A_p$ (for
example, preconditioning a convection-diffusion operator with the
Laplace operator). So then, $\hat{P} = \hat{P}(A, A_p)$ is a method for
constructing an (approximation to) the inverse of $A$.
Preconditioner implementations must provide PETSc with an
\texttt{apply} method that computes $y \leftarrow \hat{P} x$. Creation
of the data (for example, an incomplete factorisation) necessary to
apply the preconditioner is carried out in a \texttt{setUp} method.
Firedrake now provides Python-level scaffolding to expedite the
implementation of preconditioners that act on implicit matrices.
Instead of manipulating matrix entries like ILU or algebraic
multigrid, these preconditioners use the UFL problem description from
the Python context contained in the incoming matrix $P$ to do what is
needed. Hence, these preconditioners can be parametrised not over
particular matrices, but over bilinear forms. To demonstrate the
generality of our approach, we have implemented several such examples.
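Schematically, such a preconditioner is a Python class exposing the
\texttt{setUp} and \texttt{apply} methods described above (a skeleton
sketch; the attribute names on the context object are illustrative):
\begin{lstlisting}
class UFLBasedPC(object):
    """Skeleton of a PDE-aware preconditioner for implicit matrices."""
    def setUp(self, pc):
        A, P = pc.getOperators()     # system and preconditioning Mats
        ctx = P.getPythonContext()   # the implicit matrix context
        a = ctx.a                    # UFL bilinear form (illustrative name)
        bcs = ctx.bcs                # boundary conditions (illustrative name)
        # ... build auxiliary operators and sub-solvers from a and bcs ...

    def apply(self, pc, x, y):
        # y <- approximate inverse applied to the residual x,
        # using whatever was built in setUp.
        ...
\end{lstlisting}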
\subsubsection{Assembled preconditioners}
\label{sec:assembled-preconditioners}
While one can readily define block preconditioners using implicit
matrices, the best methods for inverting the diagonal blocks may
in fact be algebraic. This illustrates a critical use case of our
simplest preconditioner acting on implicit matrices.
We have defined a generic preconditioner \texttt{AssembledPC} whose
\texttt{setUp} method simply forces the assembly of an underlying bilinear form and
then sets up a sub-preconditioner (typically an algebraic one) acting
on the sparse matrix. Then, the \texttt{apply} method simply forwards
to that of the sub-preconditioner.
For example, to use an
implicit matrix-vector product but incomplete factorisation on an
assembled matrix for the preconditioner, one might use options like
\begin{lstlisting}
-mat_type matfree
-pc_type python
-pc_python_type firedrake.AssembledPC
-assembled_pc_type ilu
\end{lstlisting}
As mentioned, \texttt{FieldSplit} preconditioners provide a critical use
case, enabling one to leave the overall matrix implicit, and assemble
only those blocks that are required. In particular, the
off-diagonal blocks never require assembly, and this
can result in significant memory savings.
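For instance, the following (hypothetical) options fragment keeps the
coupled operator matrix-free, but assembles just the first field's
block and hands it to an algebraic multigrid method:
\begin{lstlisting}
-mat_type matfree
-pc_type fieldsplit
-pc_fieldsplit_type schur
-fieldsplit_0_pc_type python
-fieldsplit_0_pc_python_type firedrake.AssembledPC
-fieldsplit_0_assembled_pc_type hypre
\end{lstlisting}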
\subsubsection{Schur complement approximations}
\label{sec:schur-complement-approx}
Our next example, Schur complement approximations, is more specialised
but very relevant to the problems in fluid mechanics expressed above.
PETSc provides two pathways to define preconditioners for the Schur
complement, such as \cref{eq:pcddef}. Within the source code, one
may pass to the function \texttt{PCFieldSplitSetSchurPre} a matrix
which will be used by a preconditioner to construct an approximation
to the Schur complement. Alternatively, PETSc can automatically
construct some approximations that may be obtained by algebraic
manipulations of the original operator (such as the SIMPLE or LSC
approximations~\cite{Elman:2008}). While the latter may be
configured using only runtime options, the former requires that the
user pick apart the solver and call \texttt{PCFieldSplitSetSchurPre}
on the appropriate \texttt{PC} object. Enabling this preconditioning
option, or incorporating it into larger coupled systems, requires
modification of the model source code.
Since our implicit matrices and their subblocks contain the UFL
problem specification, a preconditioner acting on the Schur
complement block has complete freedom to utilise the UFL bilinear
form to set up auxiliary operators. We have implemented two Schur
complement approximations suitable for incompressible flow, an inverse
mass matrix and the PCD
preconditioner, both of which follow a similar pattern. The \texttt{setUp}
function extracts the pressure function space from the UFL
bilinear form and defines and assembles bilinear forms for the
auxiliary operators. It also defines user-configurable \texttt{KSP} contexts
as needed (e.g.~for the $K_p$ and $M_p$ operators in \cref{eq:pcddef}).
The PCD preconditioner also requires a subsequent update phase in
which the $F_p$ matrix is updated as the Jacobian evolves.
The \texttt{apply} method simply performs the correct combination of
matrix-vector products and linear solves.
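The core of the resulting \texttt{apply} method is just a composition
of two solves and a matrix-vector product; schematically
(\texttt{massksp}, \texttt{Fp}, \texttt{stiffksp} and the scratch
vectors are built in \texttt{setUp}; the names are illustrative):
\begin{lstlisting}
def apply(self, pc, x, y):
    # y <- Kp^{-1} Fp Mp^{-1} x, i.e. the PCD approximation X^{-1}.
    a, b = self.work              # two scratch PETSc Vecs
    self.massksp.solve(x, a)      # a <- Mp^{-1} x
    self.Fp.mult(a, b)            # b <- Fp a
    self.stiffksp.solve(b, y)     # y <- Kp^{-1} b
\end{lstlisting}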
The high-level Python syntax of petsc4py and Firedrake combines
to allow a very concise implementation in these cases. In the case of
PCD, we specify the initial and subsequent setup methods plus
application method in less than 150 lines of code, including Python
doc strings and hooks into the PETSc viewer system.
\paragraph{User data}
\label{sec:user-data}
The PCD preconditioner requires a very slight modification of the
application code. In particular, UFL does not expose named
parameters. That is, one may not ask the variational problem what the
Reynolds number is. Also, it is not obvious to the preconditioner
which piece of the current Newton state corresponds to the velocity,
which is needed in constructing $F_p$. To address such difficulties,
Firedrake's \texttt{VariationalSolver} classes can take an arbitrary
Python dictionary of user data, which is available inside the implicit
matrix, and hence to the preconditioners. This facility requires
documentation, but fits with the general PETSc idiom of allowing all
callbacks to user code to provide a generic ``application context''.
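In our PCD example this is a one-line change at the call site (the
dictionary keys are a convention between the application and the
preconditioner, not part of UFL):
\begin{lstlisting}
solver = NonlinearVariationalSolver(
    problem, solver_parameters=params,
    appctx={"Re": Re,              # Reynolds number, needed to build F_p
            "velocity_space": 0})  # index of velocity in the mixed space
\end{lstlisting}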
\subsubsection{Additive Schwarz}
\label{sec:additive-schwarz-pc}
Our additive Schwarz implementation requires both more involved UFL
manipulation and low-level implementation details. We have
implemented it as a Python preconditioner that defers to a PETSc
\texttt{PCCOMPOSITE} to perform the composition, but extracts and
manipulates the symbolic description of the problem to create two
Python preconditioners, one for the $P_1$ subproblem and one for the
local, high-degree, patch problems.
The $P_1$ preconditioner requires us to construct the $P_1$
discretisation of the given operator, plus restriction and
prolongation operators between the global $P_k$ and $P_1$ spaces. UFL
provides a utility to make the first of these straightforward -- we
just replace the test and trial functions in the original expression
graph with test and trial functions taken from the $P_1$ space on the
same mesh. The second is a bit more involved. We rely on the fact
that the $P_1$ basis functions on a cell are naturally embedded in the
$P_k$ space, and hence their interpolant in $P_k$ is exact. Using
FIAT~\cite{Kirby:2004} to construct this interpolant on a single cell, we then generate
a cell kernel that is called for every coarse element in the mesh to
produce the prolongation operator as a sparse matrix. Optionally,
this can also occur in a matrix-free fashion.
Setting up and solving the patch problems presents more
complications. During a startup phase, we must query the mesh to
discover and store the cells in each vertex patch. At this time, we
also construct the sets of global degrees of freedom involved in each
patch, setting up indirections between patch-level and processor-level
degrees of freedom.
Our implementation, like the rest of Firedrake, leverages PETSc's \texttt{DMPlex} representation of
computational meshes~\cite{Knepley:2009} to
iterate over and query the mesh to construct this information. Due to
the repeated low-level instructions required for this, we have
implemented this in C as a normal PETSc preconditioner. Our
implementation requires that the high-level ``problem aware''
preconditioner, in Python, initialise the patch preconditioner with
the problem-specific data. This includes the function space
description, identification of any Dirichlet nodes in the space, along
with a callback to construct the patch operator. This callback is
effectively the low-level code created when calling \texttt{assemble}
on a UFL form. As is usual with PETSc objects, all aspects of the
subsolves are configurable at runtime. Application of the patch
inverses can either store and reuse matrices and factorisations (at
the cost of high memory usage) or build, invert, and discard matrices
patch-by-patch. This has much lower memory usage, but is computationally
more expensive without access to either fast patch inverses or fast
patch assembly routines.
\section{Examples and results}
\label{sec:examples}
We now present some examples and weak scaling results using Firedrake,
and the new preconditioning framework we have developed. All results
in this study were conducted on ARCHER, a Cray XC30 hosted at the
University of Edinburgh. Each compute node contains two 2.7 GHz,
12-core E5-2697v2 (Ivy Bridge) processors, for a total of 24 cores per
node, with a guaranteed-not-to-exceed floating point performance of
$518.4 \operatorname{Gflop/s}$. The spec sheet memory bandwidth is
$119.4 \operatorname{GB/s}$ per node, and we measured a STREAM
triad~\cite{McCalpin:1995} bandwidth of $74.1\operatorname{GB/s}$ when
using 24 pinned MPI processes\footnote{The compiler did not generate
non-temporal stores for this code.}. All experiments were performed with 24
MPI ranks per node (i.e.~fully populated) with processes pinned to
cores. For all experiments, we use regular simplicial
meshes\footnote{These meshes are nonetheless treated as unstructured
by Firedrake.} of the
unit $d$-cube with piecewise linear coordinate fields.
\subsection{Operator application}
\label{sec:basic-timing}
Without access to fast, sum-factored algorithms, forming element tensors has
complexity $\mathcal{O}(p^{3d})$ for Jacobian matrices, and
$\mathcal{O}(p^{2d})$ for residual evaluation. Similarly,
matrix-vector products for assembled sparse matrices require
$\mathcal{O}(p^{2d})$ work, as do matrix-free applications (although
the constants can be very different). Since
Firedrake does not currently implement sum-factored algorithms on
simplices, we expect our matrix-free implementation to have the
same time complexity as assembled sparse matrix-vector application.
An advantage is that we have constant memory usage per
degree of freedom (modulo surface-to-volume effects).
\Cref{fig:poisson-matvec} shows performance of our implementation for
a Poisson operator discretised with piecewise polynomial Lagrange
basis functions. We broadly observe the expected
algorithmic behaviour (except in three dimensions at the highest
degree, as explained in the figure caption). Assembled matrix-vector multiplication is faster than
matrix-free application, although not by much for the two-dimensional
case, at the cost of higher memory consumption per degree of freedom
and the need to first assemble the matrix (costing approximately 10
matrix-free actions).
\begin{figure}[htbp]
\centering
\subfloat[Degrees of freedom per second processed for matrix
assembly and matrix-vector products. The performance of matrix-free operator action and
assembly at degree 5 in 3D becomes noticeably worse because the
data for tabulated basis functions spills from the fastest
cache.]{\includegraphics[width=0.48\textwidth]{figures/poisson-matvec-time.pdf}}
\hspace{0.03\textwidth}
\subfloat[Bytes of memory per degree of freedom. For the
matrix-free case, memory usage is not quite constant, since
Firedrake stores the ghosted representation, and so a
surface-to-volume term appears in the memory per dof (more
noticeable in three dimensions).]{\includegraphics[width=0.48\textwidth]{figures/poisson-matvec-memory.pdf}}
\caption{Performance of matrix-vector products for a Poisson
operator discretised on simplices in two and three dimensions (48
MPI processes).}
\label{fig:poisson-matvec}
\end{figure}
The same story appears for more complex problems, and we show one
example, the operator for Rayleigh-B\'enard convection discretised
using $P_{k+1}$-$P_k$-$P_k$ elements, in \Cref{fig:rb-matvec}. In
two dimensions, the matrix-free action is faster than
assembled operator application, and in three dimensions the cost is
less than a factor of 1.5 greater (even at lowest order). Given the high cost
of matrix assembly, any iterative method that requires fewer than 10
matrix-vector products will be better off matrix-free, even before
memory savings are considered.
\begin{figure}[htbp]
\centering
\subfloat[Degrees of freedom per second processed for matrix
assembly and matrix-vector products.]{\includegraphics[width=0.48\textwidth]{figures/rb-matvec-time.pdf}}
\hspace{0.03\textwidth}
\subfloat[Bytes of memory per degree of freedom.]{\includegraphics[width=0.48\textwidth]{figures/rb-matvec-memory.pdf}}
\caption{Performance of matrix-vector products for the
Rayleigh-B\'enard equation discretised on simplices in two and
three dimensions (48 MPI processes).}
\label{fig:rb-matvec}
\end{figure}
To determine if these timings are good in absolute terms, we use a
roofline model \cite{Williams:2009}. The arithmetic intensity for
assembled matrix-vector products is calculated following
\cite{Gropp:2000}. For matrix assembly and matrix-free operator
application, we count effective flops in the element kernel by
traversing the intermediate representation of the generated code; the
required data movement is computed assuming a perfect cache model for any fields
(each degree of freedom is loaded from main memory only once), and
includes the cost of moving the indirection maps. The spec sheet
memory bandwidth per node is $119.4\operatorname{GB/s}$, and we
measure a STREAM triad bandwidth of $74.1\operatorname{GB/s}$ per
node; the guaranteed-not-to-exceed floating point performance is
$518.4\operatorname{Gflop/s}$ per node (one AVX multiplication and one
AVX addition issued per cycle per core). As evidenced in
\cref{fig:roofline}, there is almost no extra performance available
for the application of assembled operators: the matrix-vector product
achieves close to the machine peak in all cases. In contrast, the
matrix-free actions, with significantly higher arithmetic intensity,
are quite a distance from machine peak: this suggests a direction for
future optimisation efforts in Firedrake.
\begin{figure}[htbp]
\centering
\subfloat[Performance of assembly and matrix-vector products for the
Poisson operator. The assembled matrix achieves performance close
to machine peak, while matrix-free products (and matrix assembly)
fall well short of it.]{\includegraphics[width=0.48\textwidth]{figures/poisson-roofline.pdf}}
\hspace{0.03\textwidth}
\subfloat[Performance of assembly and matrix-vector products for the
Rayleigh-B\'enard operator. The \texttt{nest} matrix has higher
arithmetic intensity than the \texttt{aij} matrix due to using a
blocked CSR format for the diagonal velocity block. As with the
Poisson operator, assembled matrices achieve almost machine peak,
whereas the matrix-free operator has room for
improvement.]{\includegraphics[width=0.48\textwidth]{figures/rb-roofline.pdf}}
\caption{Roofline plots for the experiments of
\Cref{fig:poisson-matvec} and \Cref{fig:rb-matvec}.}
\label{fig:roofline}
\end{figure}
\subsection{Runtime solver composition}
\subsubsection{Poisson}
\label{sec:poisson-results}
We now consider solving the Poisson problem \cref{eq:poisson-weak} in
three dimensions. We choose as domain a regularly meshed unit cube,
$\Omega = [0, 1]^3$, and apply homogeneous Dirichlet conditions on
$\partial\Omega$, along with a constant forcing term. For low degree
discretisations, ``black-box'' algebraic multigrid methods are robust
and provide high performance. Their performance, however, degrades
with increasing approximation degree. Here we show how we can plug in
the additive Schwarz approach described in \cref{sec:additive-schwarz}
to provide a preconditioner with mesh and degree independent iteration
counts, although we do not achieve time to solution independent of
these parameters. This increase in time to solution with increasing
problem size is due to a non-scalable coarse grid solve: we use
algebraic multigrid V cycles.
The main cost of this preconditioner is the application of the (dense)
patch inverses; the cost of our implementation is therefore quite
high. We also note that if patch operators are not stored between
iterations, the overall memory footprint of the method is quite small.
Developing fast algorithms to build and invert these patch operators
is the subject of ongoing work.
In \Cref{tab:poisson-iterations} we compare the algorithmic and
runtime performance of hypre's BoomerAMG algebraic multigrid solver
applied directly to a $P_4$ discretisation with the additive Schwarz
approach. The only changes to the application file were in the
specification of the runtime solver options. The provided solver
options are shown in
\cref{sec:poisson-hypre} for the hypre preconditioner and
\cref{sec:poisson-schwarz} for the Schwarz approach.
\begin{table}[htbp]
\caption{Krylov iterations, and time
to solution for $P_4$ Poisson problem using hypre and the Schwarz
preconditioner described in \cref{sec:additive-schwarz} as the problem is
weakly scaled. The required number of Krylov iterations grows
slowly for the hypre preconditioner, but is constant for Schwarz.
However, the overall time to solution is still lower with hypre.}
\label{tab:poisson-iterations}
\centering
\begin{tabular}{c|c|c|c|c|c}
DoFs ($\times 10^{6}$) & MPI processes & \multicolumn{2}{|c|}{Krylov its} & \multicolumn{2}{|c}{Time to solution (s)}\\
& & hypre & schwarz & hypre & schwarz\\
\hline
2.571 & 24 & 19 & 19 & 5.62 & 9.48\\
5.545 & 48 & 20 & 19 & 6.45 & 10.6\\
10.22 & 96 & 20 & 19 & 6.17 & 10.3\\
20.35 & 192 & 21 & 18 & 6.53 & 10.7\\
43.99 & 384 & 22 & 19 & 7.53 & 11.9\\
81.18 & 768 & 22 & 19 & 7.52 & 11.7\\
161.9 & 1536 & 23 & 19 & 8.98 & 13\\
350.4 & 3072 & 24 & 19 & 8.56 & 14\\
647.2 & 6144 & 26 & 19 & 9.32 & 13.9\\
1291 & 12288 & 28 & 19 & 10.2 & 17.3\\
2797 & 24576 & 29 & 19 & 13 & 22.5\\
\end{tabular}
\end{table}
\subsubsection{\texttt{FieldSplit} examples}
\label{sec:fieldsplit-results}
Merely being able to solve the Poisson equation is a relatively
uninteresting proposition. The power in our (and PETSc's) approach is
the ease of composition, \emph{at runtime}, of scalable building
blocks to provide preconditioners for complex problems. To
demonstrate this, we consider solving the Rayleigh-B\'{e}nard
equations for stationary convection \eqref{eq:rb-residual}.
A block preconditioner for this
problem was developed in~\cite{Howle:2012}, but its performance was
only studied in two-dimensional systems, and the implementation of the
preconditioner was tightly coupled with the problem. The components
of this preconditioner are: an inexact inverse of the Navier-Stokes
equations, for which the block preconditioners discussed in~\cite{Elman:2014} provide mesh-independent iteration counts;
an inexact inverse of the scalar (temperature) convection-diffusion
operator. For the Navier-Stokes block we approximate the Schur
complement with the pressure-convection-diffusion approach (which
requires information about the discretisation inside the
preconditioner). The building blocks are an approximate inverse for
the velocity convection-diffusion operator, and approximate inverses
for pressure mass and stiffness matrices. For moderate velocities,
the velocity convection-diffusion operator can be treated
with algebraic multigrid. Similarly, the pressure mass matrix can be
inverted well with only a few iterations of a splitting-based method
(e.g.~point Jacobi), while multigrid is again good for the stiffness
matrix. Finally, the temperature convection-diffusion operator can
again be treated with algebraic multigrid.
Using the notation of \cref{eq:nspcdpc} and \cref{eq:rbpc}, we need
approximate inverses $\widetilde{N}^{-1}$ and $\widetilde{K}^{-1}$,
where $\widetilde{N}^{-1}$ itself needs approximate inverses
$\widetilde{F}^{-1}$, $K_p^{-1}$, and $M_p^{-1}$. We can make
different choices for all of these inverses, the matrix format
(including matrix-free) for
the operators, and convergence tolerance for all approximate
inverses. These options (and others) can all be configured at
runtime, while maintaining a single code base for the specification of
the underlying PDE model, merely by modifying solver options.
Explicitly assembling the Jacobian and inverting with a direct solver
requires a relatively short options list: \cref{sec:rb-direct-solver-parameters}.
Conversely, to implement the preconditioner of \cref{eq:rbpc}, with
algebraic multigrid for all approximate inverses (except the pressure
mass matrix), and the operator applied matrix-free, we need
significantly more options. These are shown in full in
\cref{sec:rb-iterative-solver-parameters}.
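To give their flavour, the outermost levels of that configuration look
schematically like the following (an illustrative, heavily condensed
fragment; see the appendix for the actual options):
\begin{lstlisting}
-mat_type matfree
-ksp_type fgmres
-pc_type fieldsplit
-pc_fieldsplit_type multiplicative
-pc_fieldsplit_0_fields 0,1
-pc_fieldsplit_1_fields 2
-fieldsplit_0_pc_type fieldsplit
-fieldsplit_0_pc_fieldsplit_type schur
-fieldsplit_1_pc_type hypre
\end{lstlisting}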
\subsubsection{Algorithmic and parallel scalability}
\label{sec:parallel-scaling}
Firedrake and PETSc are designed such that the user of the library
need not worry in detail about distributed memory parallelisation,
provided they respect the collective semantics of operations.
Since our implementation of solvers and preconditioners operates at
the level of public APIs, we only need to be careful that we use the
correct communicators when constructing auxiliary objects.
Parallelisation therefore comes ``for free''.
In this section, we show that our approach scales to large problem
sizes, with scalability limited only by the performance of the
building block components of the solver.
We consider the algorithmic performance of the Rayleigh-B\'{e}nard
problem \cref{eq:rb-residual} in a regularly meshed unit cube,
$\Omega = [0,1]^3$. We choose as boundary conditions:
\begin{subequations}
\begin{align}
u &= 0 \quad \text{on $\partial\Omega$}\\
\nabla p \cdot n &= 0 \quad \text{on $\partial\Omega$}\\
T &= 1 \quad \text{on the plane $x = 0$}\\
T &= 0 \quad \text{on the plane $x = 1$}\\
\nabla T \cdot n &= 0 \quad \text{otherwise}
\end{align}
\label{eq:rb-bcs}
\end{subequations}
and take $\text{Ra} = 200$ and $\text{Pr} = 6.18$.
The constant pressure nullspace is projected out in the linear solver.
The solution to this problem is shown in \cref{fig:rb-picture}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{figures/rb-picture.png}
\caption{Solution to the Rayleigh-B\'enard problem of
\cref{eq:rb-residual} with boundary conditions as specified in
\cref{eq:rb-bcs}, and $g$ pointing up. Shown are streamlines of
the velocity field coloured by the magnitude of the velocity,
and isosurfaces of the pressure.}
\label{fig:rb-picture}
\end{figure}
We perform a weak scaling experiment (increasing both the number of
degrees of freedom, and computational resource) to study any mesh
dependence in our solver. For the full set of solver options see
\cref{sec:rb-iterative-solver-parameters}. Newton iterations reduce
the residual by a factor of $10^{8}$ in three iterations, with only a weak
increase in the number of Krylov iterations, as seen in
\Cref{tab:rb-iterations}.
\begin{table}[htbp]
\caption{Newton iteration counts, total Krylov iterations, and time
to solution for Rayleigh-B\'{e}nard convection as the problem is
weakly scaled. The required number of linear iterations grows
slowly as the mesh is refined, however the time to solution grows
much faster.}
\label{tab:rb-iterations}
\centering
\begin{tabular}{c|c|c|c|c}
DoFs ($\times 10^{6}$) & MPI processes & Newton its & Krylov its & Time to solution (s)\\
\hline
0.7405 & 24 & 3 & 16 & 31.7\\
1.488 & 48 & 3 & 16 & 36.3\\
2.973 & 96 & 3 & 17 & 43.9\\
5.769 & 192 & 3 & 17 & 47.3\\
11.66 & 384 & 3 & 17 & 56\\
23.39 & 768 & 3 & 17 & 64.9\\
45.54 & 1536 & 3 & 18 & 85.2\\
92.28 & 3072 & 3 & 18 & 120\\
185.6 & 6144 & 3 & 19 & 167\\
\end{tabular}
\end{table}
The scalability does not look as good as these results would suggest,
with only 20\% parallel efficiency for this weakly scaled problem on
6144 cores. Looking at the inner solves reveals the problem: although
the outer Krylov solve performs well, our approximate
inner preconditioners are not fully mesh independent. \Cref{tab:rb-inner-iterations} shows the total number of iterations
for both the Navier-Stokes solve and the temperature solve as part of
the application of the outer preconditioner.
\begin{table}[htbp]
\caption{Total iterations for Navier-Stokes and temperature solves
(with average iterations per outer linear solve in brackets) for
the nonlinear solution of the Rayleigh-B\'{e}nard problem. We see
weak mesh dependence in the per-solve iteration counts. When
multiplied up by the slight mesh dependence in the outer solve,
this results in a noticeable inefficiency.}
\label{tab:rb-inner-iterations}
\centering
\begin{tabular}{c|c|c}
DoFs ($\times 10^{6}$) & Navier-Stokes iterations & Temperature iterations\\
\hline
0.7405 & 329 (20.6) & 107 (6.7)\\
1.488 & 338 (21.1) & 110 (6.9)\\
2.973 & 365 (21.5) & 132 (7.8)\\
5.769 & 358 (21.1) & 133 (7.8)\\
11.66 & 373 (21.9) & 137 (8.1)\\
23.39 & 378 (22.2) & 139 (8.2)\\
45.54 & 403 (22.4) & 151 (8.4)\\
92.28 & 420 (23.3) & 154 (8.6)\\
185.6 & 463 (24.4) & 174 (9.2)\\
\end{tabular}
\end{table}
In addition to iteration counts increasing, the time to compute a
single iteration also increases. This is seen more clearly in
the previous results for the Poisson operator (\Cref{tab:poisson-iterations}). This is due to
sub-optimal scalability of the algebraic multigrid that is used for
all the building blocks in these solves. Our results for the Poisson
equation using hypre's BoomerAMG appear similar to previously reported
results on weak scalability from the hypre team~\cite{Baker:2012},
and so we do not expect to gain much improvement here without changing
the solver. This can, however, be done without modification to the
existing solver: as soon as a better option is available, we can just
drop it in.
\section{Conclusions and future outlook}
We have presented our approach to extending Firedrake and the existing
solver interface to support matrix-free operators and the necessary
preconditioning infrastructure. Our approach is extensible and
composable with existing algebraic solvers supported through PETSc.
In particular, it removes much of the friction in developing block
preconditioners requiring auxiliary operators. The performance of
such preconditioners for complex problems still relies on having good
approximate inverses for the blocks, but our composable approach can
seamlessly take advantage of any such advances.
\section{Introduction}\label{sec:introduction}
The task of camera relocalization is to estimate the 6-DoF (Degree of Freedom) camera pose from a test frame with respect to a known environment.
It is of great importance for many computer vision and robotics applications, such as Simultaneous Localization and Mapping (SLAM), Augmented Reality (AR), and navigation.
One popular solution to camera relocalization is to make use of advanced hardware, \textit{e.g.}, LIDAR sensors, WiFi, Bluetooth or GPS. However, these approaches may suffer from bad weather in outdoor environments, and from unstable or blocked signals in indoor environments. Another popular solution replaces such hardware with an RGB/RGB-D sensor that feeds only visual observations for camera relocalization; this setting, also known as visual relocalization, is the focus of this paper.
\begin{figure}[t]
\centering
\includegraphics[width=0.94\linewidth]{teaser.pdf}
\caption{Demonstration of our algorithm. We build a hierarchical space partition over the entire scene environment to construct a 3-level 4-way neural tree. For the input static (green) or dynamic (red) points from a visual observation, our neural tree will route them into either inlier (solid line) or outlier (dashed line) categories. Only the points falling into the inlier category will be considered for camera pose estimation.}
\label{figure:teaser}
\vspace{-6mm}
\end{figure}
The problem of visual relocalization has been studied for decades, and recent advances \cite{cavallari2019real,li2019hierarchical} have reached around 100\% camera pose accuracy (5cm / 5$^\circ$) on the popular indoor scene benchmarks 7-scenes \cite{shotton2013scene} and 12-scenes \cite{valentin2016learning}. One type of successful approach in this regard is designed based on decision trees, which was firstly introduced into the camera relocalization field in \cite{shotton2013scene}, with many follow-ups \cite{massiceti2017random,meng2017backtracking,meng2018exploiting,cavallari2017fly,cavallari2019real}. They build a binary regression forest that takes a query image point sampled from the visual observation as input, and routes it into one leaf node via a hierarchical splitting strategy, which is simply implemented as color/depth comparison within the neighbourhood of the query point. The leaf node fits a density distribution over the 3D world coordinates from the training scene. Hence, by evaluating the decision tree with a test image, a 2D/3D-3D correspondence can be easily established between the input sample and regressed world coordinate for camera pose optimization.
Although the aforementioned approaches are good at camera relocalization in static training environments, they tend to fail in dynamic test scenes, which are quite common yet challenging in real life. This is mainly due to the fact that the decision tree is constructed using only the static training image sequence so that, for any image point belonging to dynamic regions captured during evaluation, it is challenging to locate its correct correspondence in the leaf node. Recent studies \cite{wald2020beyond} have demonstrated that the decision tree based approaches achieve around 28\% camera pose accuracy (5cm 5$^\circ$), which is also the best among all the competitors, in their proposed RIO-10 benchmark developed for dynamic indoor scenes. This performance is far from being comparable to the ones in static indoor scenes.
In order to tackle the challenges of camera relocalization in dynamic indoor environments, in this paper, we propose to learn an \textit{outlier-aware neural tree} to help establish point correspondences for accurate camera pose estimation focusing only on the confident static regions of the environment. Our algorithm inherits the general framework of decision trees, but mainly differs in the following aspects in order to obtain better generalization ability in dynamic test scenes.
(a) \textbf{Hierarchical space partition.} We perform an explicit hierarchical spatial partition of the 3D scene in the world space to construct the decision tree. Then each split node in the decision tree not only performs a hard data partition selection, but in fact one which also corresponds to a physically-meaningful 3D geometric region.
(b) \textbf{Neural routing function.} Given an input point sampled from the 2D visual observation, the split node needs to determine which divided sub-region of the world space the point should be routed to. Such a classification task requires more contextual understanding of the 3D environment. Therefore, we propose a neural routing function, implemented as a deep classification network, to learn the splitting strategy.
(c) \textbf{Outlier rejection.} In order to deal with potential dynamic input points, we propose to consider these points as outliers and reject them during the hierarchical routing process in the decision tree. Specifically, the neural routing function learns to classify any input point from the dynamic region into the outlier category, stopping any further routing for that point. Once our proposed neural tree is fully trained, we follow the optimization and refinement steps in existing works \cite{cavallari2017fly,cavallari2019real} to calculate the final pose.
We further train and test our proposed outlier-aware neural tree on the recent camera relocalization benchmark, RIO-10, which aims for dynamic indoor scenes. Experimental results demonstrate that our proposed algorithm outperforms the state-of-the-art localization approaches by at least 30\% on camera pose accuracy. More analysis shows that our algorithm is robust to various types of scene changes and successfully rejects most dynamic input samples during neural routing.
\section{Related Work}
\subsection{Camera Relocalization}
\textbf{Direct pose estimation.}
Approaches of this type aim for predicting the camera pose directly from the input image. One popular solution in this direction is image retrieval \cite{gee20126d,galvez2011real,glocker2014real,arandjelovic2016netvlad,jegou2010aggregating}. They approximate the camera pose of the query image by matching the most similar images in the dataset with known poses using low-level image features.
Instead of matching features, PoseNet \cite{kendall2015posenet} and many follow-ups \cite{kendall2017geometric,wang2019atloc,brahmbhatt2018geometry,walch2017image,sattler2019understanding} propose to use a convolution neural network to directly regress the 6D camera pose from an input image.
However, as mentioned in \cite{sattler2019understanding}, the performance of direct pose regression using CNNs is more similar to the one using image retrieval, and still lags behind the 3D structure-based approaches detailed below.
\textbf{Indirect pose estimation.}
Approaches of this type find correspondences between camera and world coordinate points, and calculate the camera pose through optimization with RANSAC \cite{chum2003locally}. One common direction is to leverage the 2D-3D correspondences between traditional keypoints in the observed image and 3D scene map \cite{sattler2011fast,lim2012real,sattler2016large,sattler2016efficient}, followed by some recent works that deploy deep learning features \cite{sarlin2018leveraging,sarlin2019coarse,taira2018inloc,dusmanu2019d2} to replace the extracted poor descriptors.
Another common direction to seek correspondences is scene coordinate regression.
Shotton \textit{et al.} \cite{shotton2013scene} proposes to regress the 3D points in the world space from a query image point by training a decision tree, followed by many variants \cite{meng2017backtracking,meng2018exploiting,brachmann2016uncertainty,valentin2015exploiting}.
The other related works \cite{brachmann2017dsac,brachmann2018learning,brachmann2019neural,brachmann2019expert,li2018full,yang2019sanet,li2019hierarchical,massiceti2017random} in this direction leverage the deep convolutional neural network to regress the world coordinates from an input image, with a following pose optimization step.
\begin{figure*}[t]
\centering
\includegraphics[width=0.94\linewidth]{figure.pdf}
\caption{Illustration of our algorithm on the simple 2D case. Top: constructing a 3-level 4-way outlier-aware neural tree of a scene environment via hierarchical space partition. The dashed line and circle indicates the outlier category designed to reject the input dynamic points. Bottom: training an outlier-aware neural routing function for each split node in the neural tree.} \label{figure:framework}
\vspace{-5mm}
\end{figure*}
\subsection{Decision Tree and Deep Learning.}
Some recent efforts have been devoted to combining the two families of decision tree and deep learning techniques.
The deep neural decision trees \cite{kontschieder2015deep} propose a principled, joint and global optimization of split and leaf node parameters, and hence enable end-to-end differentiable training of the whole decision tree. Shen \textit{et al.} \cite{shen2017label} presents label distribution learning tree to enable all the decision trees in a forest to be learned jointly. The variants of deep neural decision trees have been successfully applied for the task of human age estimation \cite{shen2018deep} and monocular depth estimation \cite{roy2016monocular}. Most of the aforementioned works formulate the last few fully connected layers in a classification neural network with the decision tree structure, and hence are significantly different from our algorithm.
\section{NeuralRouting}
\subsection{Overview}
The input to our algorithm is a training sequence of <RGB-D image, camera pose> and a test frame for camera relocalization.
Our algorithm can be separated into two steps, \textit{scene coordinate regression} and \textit{camera pose estimation}. The former step is conducted by learning a neural tree that takes a query point along with its neighbor context as input, and regresses its scene coordinate in the world space to build a 3D-3D correspondence, based on which the latter step estimates the camera pose via iterative optimization followed by an optional ICP refinement. The neural tree is constructed by performing an explicit space partition in the scene environment, and learns to reject the dynamic points as outliers during the hierarchical routing process. In this way, our algorithm learns to build the 3D-3D correspondence only within the confident static region for accurate camera pose optimization. We first revisit the decision tree and its adaptation for camera relocalization in Sec. \ref{sec:revisit}, and introduce our outlier-aware neural tree developed for relocalization in dynamic environments in Sec. \ref{sec:tree}. Finally, we describe the camera pose optimization and refinement details in Sec. \ref{sec:pose}.
\vspace{-1mm}
\subsection{Decision Tree for Camera Relocalization} \label{sec:revisit}
\vspace{-1mm}
Depending on whether the target is continuous or discrete, the decision tree can be used for either regression or classification tasks. A decision tree consists of a set of split nodes and leaf nodes. Each split node is assigned with a routing function, which learns the decision rules for the input sample partition, and each leaf node contains a probability density distribution fitted for the partitioned data. Given an input sample, inference starts from the root node and descends level-by-level until reaching the leaf node by evaluating the routing functions. A standard decision tree is binary, and employs greedy algorithms to learn the parameters at each split node to achieve locally-optimal hard data partition.
For the task of camera relocalization, the decision tree \cite{cavallari2019real} is used to build the 3D knowledge of the known environment using the provided training sequence. Each split node takes a query point (RGB-D) sampled from the captured image as input and routes it into one child node. The leaf node fits a distribution over a set of 3D points in the world space that are projected from the training images using the ground truth camera pose and calibration parameters. Therefore, when evaluating a test frame with a decision tree, by routing an input point from root node to leaf node, a 3D-3D point correspondence can be easily established and further used for camera pose optimization.
\vspace{-1mm}
\subsection{Outlier-aware Neural Tree} \label{sec:tree}
\vspace{-1mm}
\subsubsection{Hierarchical Space Partition for Decision Tree} \label{sec:tree_hsp}
\vspace{-1mm}
For the existing decision trees \cite{shotton2013scene,massiceti2017random,meng2017backtracking,meng2018exploiting,cavallari2017fly,cavallari2019real} developed for the camera relocalization problem, as there is no ground truth label for supervised training, the decision tree ultimately becomes a clustering strategy for the training data, as observed in previous works \cite{cavallari2019real,castin2018clustering}. The decision rules at split nodes are learned via the CLUS algorithm \cite{blockeel2000top}, which uses variance reduction as the split criterion and achieves a locally-optimal hard data partition.
In this paper, we propose to perform a hierarchical space partition for the target scene environment to construct our decision tree. We represent the entire scene as the root node, and iteratively partition the scene until reaching predefined depth. Each split node is responsible for a geometric region in the scene, and partitions this region into sub-regions of equal size for its child nodes. Each leaf node contains a set of 3D world coordinates in its covered local geometric region.
The space partition strategy is illustrated in Figure \ref{figure:framework} and detailed below.
Given a 3D scene model constructed in world space using the training sequence of <RGB-D image, camera pose>, we build an $m$-way decision tree, where $m = 2^z$ is a power of two. To perform a hierarchical space partition, we start from the root node, which represents the entire scene environment. We then compute the tight bounding box of the scene in the world coordinate system and perform $z$ successive cuts to divide the bounding box into $m$ sub-boxes of equal volume. In order to avoid corner cases, such as long and narrow sub-boxes that would make the routing function harder to learn, the decision rule is designed to encourage each divided bounding box to be close to a cube. Specifically, to perform one cut on a bounding box of size ($w,h,l$), we find the longest edge among ($w,h,l$) and divide the box into two identical halves at the midpoint of that edge. We repeat this process on the divided boxes until $z$ cuts have been made. Such a top-down partition is performed iteratively for the nodes at all levels.
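A minimal sketch of this cutting procedure (illustrative NumPy code
with hypothetical names, not our released implementation) is:
\begin{lstlisting}
import numpy as np

def split_box(lo, hi, z):
    """Split the axis-aligned box [lo, hi] into m = 2**z equal-volume
    sub-boxes by repeatedly halving the current longest edge."""
    boxes = [(np.asarray(lo, float), np.asarray(hi, float))]
    for _ in range(z):
        refined = []
        for blo, bhi in boxes:
            axis = int(np.argmax(bhi - blo))     # longest edge
            mid = 0.5 * (blo[axis] + bhi[axis])  # cut at its midpoint
            top, bot = bhi.copy(), blo.copy()
            top[axis], bot[axis] = mid, mid
            refined += [(blo, top), (bot, bhi)]
        boxes = refined
    return boxes                                 # the m child regions
\end{lstlisting}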
The decision tree constructed in this way features several properties:
(a) our constructed tree structure relies on an explicit space partition of the \textbf{3D scene environment} in the world space, not on a data partition of the visual observations (RGB+D) in the 2D camera space, and hence requires the routing function to have a stronger 3D understanding ability;
(b) \textbf{each split node is physically meaningful}, and covers a specific geometric region, which is spatially related to its parent and child nodes;
(c) \textbf{the decision rules are predefined} by the $z$-cut space partition strategy introduced above and stay constant for all the nodes, instead of being optimized via greedy algorithms to behave differently at different nodes;
(d) the decision tree naturally admits \textbf{an $m$-way implementation}, rather than being limited to a standard binary decision tree;
(e) \textbf{the constructed tree structure is scene-dependent}, and may contain empty nodes that cover no geometric region in the scene.
Overall, the decision tree constructed via hierarchical space partition is more flexible in structure and more physically meaningful than the standard decision trees of previous works.
\vspace{-2mm}
\subsubsection{Outlier-aware Neural Routing Function} \label{sec:route_func}
\vspace{-2mm}
Given an input sample, the purpose of the routing function at each split node is to send it to one of the child nodes. In our problem setting, the input sample is from the observed 2D RGB-D frame, and its ground truth label is determined by its corresponding location in the 3D world space. For purpose of accurate prediction, the routing function needs to understand the 3D scene context from 2D observations. Therefore, inspired by many previous works regarding point cloud classification \cite{qi2017pointnet,qi2017pointnet++} and point generation from 2D images \cite{fan2017point}, we take advantage of the point cloud processing framework to implement a neural routing function. We introduce the formulation of the input and network structure in detail below.
\textbf{Input representation and sampling. }
The input to the neural routing function is a query point $x_q$ that needs to be localized in the 3D world space, along with a set of context points $\{x_{o_i}\}_{i=1}^N$ in the neighbourhood of the query point. The input point is associated with color and depth, which are however both highly viewpoint dependent. In order to obtain better generalization ability in different viewpoints, given an input RGB-D frame,
we augment its depth channel by transforming it into the rotation-invariant geometric feature following PPF-FoldNet \cite{deng2018ppf}. To be specific, we first compute the oriented point cloud by projecting the full-frame depth into 3D camera space using camera calibration parameters, and calculating the pointwise normals in a 17-point neighbourhood \cite{hoppe1992surface}. Then we encode the query point and its context points into pair features,
\begin{multline}
\{ (x_q^{(p)},x_q^{(n)},x_{o_1}^{(p)},x_{o_1}^{(n)}),(x_q^{(p)},x_q^{(n)},x_{o_2}^{(p)},x_{o_2}^{(n)}),\cdots,\\
(x_q^{(p)},x_q^{(n)},x_{o_N}^{(p)},x_{o_N}^{(n)}) \} \in \mathbb{R}^{12 \times N}
\end{multline}
where $p$ and $n$ denotes the camera coordinate and normal, which form a 12-channel vector for each pair of oriented points ($x_q$, $x_{o_i}$).
Each pair feature ($x_q^{(p)},x_q^{(n)},x_{o_i}^{(p)},x_{o_i}^{(n)}$) is then transformed into the rotation-invariant geometric representation \cite{deng2018ppf} that consists of three angles and one pair distance,
\begin{multline}
r = \{\angle(x_q^{(n)},x_q^{(p)}-x_{o_i}^{(p)}), \angle(x_{o_i}^{(n)},x_q^{(p)}-x_{o_i}^{(p)}),\\ \angle(x_q^{(n)},x_{o_i}^{(n)}), \Vert x_q^{(p)}-x_{o_i}^{(p)} \Vert_2\} \in \mathbb{R}^{4}
\end{multline}
Overall, for each input context point, it consists of both color $c$ and transformed rotation-invariant information $r$, represented as $\{x_{o_i}^{(c)},x_{o_i}^{(r)}\} \in \mathbb{R}^{7}$. Since the rotation-invariant feature for all context points is computed in the local reference frame with query point as origin, we omit the geometric feature and only take the color information as input for query point $x_q^{(c)} \in \mathbb{R}^{3}$.
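For concreteness, the 4D pair feature above can be computed as in the
following sketch (illustrative NumPy code):
\begin{lstlisting}
import numpy as np

def angle(a, b):
    # Unsigned angle between two 3D vectors, robust to normalisation.
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def pair_feature(pq, nq, po, no):
    """Rotation-invariant feature r for a (query, context) point pair:
    pq/po are camera-space positions, nq/no the associated normals."""
    d = pq - po
    return np.array([angle(nq, d), angle(no, d),
                     angle(nq, no), np.linalg.norm(d)])
\end{lstlisting}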
Given an input image, the query point for a split node is randomly sampled among the 2D image pixels whose projected 3D world coordinates belong to the split node. The context points are randomly sampled within the 3D neighbourhood ball of the query point. Ball query defines a radius, which is adaptively changed from level to level due to the varying size of covered 3D geometric region in our problem setting. In the implementation, we calculate the radius as the length of the longest edge of the covered bounding box.
\textbf{Routing function design. }
The routing function consists of two parts, the \textit{feature extraction} module and \textit{classification} module. The \textit{feature extraction} module leverages the pointwise multi-layer perception (MLP) to learn the features from both query point and context points inspired by the recent popular point cloud processing network PointNet \cite{qi2017pointnet}, while the \textit{classification} module combines the deep features from query point and context points to learn which child node the query point should be routed to.
As the query point and context points are different in input channel, point number and impact for the classification task, we use different network parameters to encode their feature, specifically,
\begin{equation}
h_q = f_{featQ}(x_q^{(c)})
\end{equation}
\begin{equation}
h_o = f_{featO}(\{x_{o_i}^{(c)},x_{o_i}^{(r)}\}_{i=1}^N)
\end{equation}
where $f_{featQ}$ and $f_{featO}$ are implemented with a 3-layer pointwise MLP (64-128-32/512), and extract the internal deep features ($h_q \in \mathbb{R}^{32}$, $h_o \in \mathbb{R}^{512}$) for query and context points respectively. $f_{featO}$ is followed with a max pooling layer to extract the global context feature.
Then $h_q$ and $h_o$ are concatenated and inputted into the classification module,
\begin{equation}
p = f_{class}(h_q,h_o)
\end{equation}
where $f_{class}$ is also implemented as a three-layer MLP (2048-1024-$k$) and outputs the probability ($p \in \mathbb{R}^{k}$) over the child nodes. Since the constructed tree structure is scene dependent as mentioned in Sec.~\ref{sec:tree_hsp}, the number of child nodes $k$ changes from node to node and from scene to scene. For supervision, we apply a cross-entropy loss between the predicted probability $p$ and the ground-truth label $y$,
\begin{equation}
\mathcal{L} = -\sum_{i=1}^{k} \mathbbm{1}\{y = i\} \log\frac{\exp(p_i)}{\sum_{j=1}^{k} \exp(p_j)}
\end{equation}
where $y \in \{1,\dots,k\}$ is the index of the ground-truth child node.
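The following simplified PyTorch sketch illustrates the design; the layer widths follow the text above, while the module names and the omitted normalization details are our assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class NeuralRouting(nn.Module):
    def __init__(self, k):
        super().__init__()
        # f_featQ: query colour (3) -> 32-d feature (64-128-32).
        self.feat_q = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
        # f_featO: per context point (3+4 channels) -> 512-d feature.
        self.feat_o = nn.Sequential(
            nn.Linear(7, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 512))
        # f_class: concatenated features -> k child-node logits.
        self.classify = nn.Sequential(
            nn.Linear(32 + 512, 2048), nn.ReLU(),
            nn.Linear(2048, 1024), nn.ReLU(), nn.Linear(1024, k))

    def forward(self, xq, xo):      # xq: (B,3), xo: (B,N,7)
        hq = self.feat_q(xq)
        ho = self.feat_o(xo).max(dim=1).values   # max pool over N
        return self.classify(torch.cat([hq, ho], dim=1))

# Supervision: nn.CrossEntropyLoss() on the logits, 0-indexed labels.
\end{verbatim}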
\textbf{Outlier rejection. }
The aforementioned neural routing function routes the input sample into one of the child nodes, each bound to a 3D geometric region. Given a query point from a dynamic region of the test frame, the hierarchical routing functions would send it into a leaf node containing 3D world coordinates only from the static training scene, which may establish an inaccurate 3D-3D correspondence for camera pose optimization. To address this issue, we propose to reject query points from dynamic regions as outliers, so that the established correspondences are confined to confidently static regions.
To this end, we extend the neural routing function to output a probability vector $p$ of $k+1$ channels, where the additional channel represents the outlier class. To generate training samples for a split node from a given input image, the routing function treats any image pixel belonging to the current split node as an \textit{inlier} query point, which should be routed into a child node. As the dynamic points in test environments are highly unpredictable, irregular, and absent from the training data, we simply treat any image pixel not covered by the current split node as an \textit{outlier} query point, which simulates dynamic points and should be rejected without further routing. For training each split node's routing function, inlier and outlier query points are sampled at a 3:1 ratio. Note that the outlier rejection strategy is incorporated into the neural routing function only from the second level onwards, since at the root node all image pixels are inliers.
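A hypothetical sketch of this sampling scheme (index arrays and the batch size are our assumptions):
\begin{verbatim}
import numpy as np

def sample_queries(inside_idx, outside_idx, n_total=256):
    # Pixels covered by the node are inliers; all remaining pixels
    # simulate dynamic outliers. Inlier-to-outlier ratio is 3:1.
    n_in = 3 * n_total // 4
    inliers = np.random.choice(inside_idx, n_in)
    outliers = np.random.choice(outside_idx, n_total - n_in)
    return inliers, outliers  # outliers get the extra class label
\end{verbatim}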
\vspace{-1mm}
\subsubsection{HyperNetwork for the Routing Functions}
\vspace{-1mm}
To construct a $t$-level $m$-way tree, there are at most $m^{j-1}$ neural routing functions at level $j$ (except for the bottom level, which contains the leaf nodes), hence at most $\frac{m^{t-1}-1}{m-1}$ routing functions in total for the whole tree. Training so many deep networks is both time-consuming and storage-inefficient. To reduce training time and storage, previous works \cite{ha2017hypernetworks,fan2018decouple} unify the learnable parameters of different convolution layers in a network, of time steps in an RNN, or of hyper-parameters in an image filter within a standalone network, commonly known as a HyperNetwork. More recent work \cite{fan2019general} further shows that learning the normalization parameters with a HyperNetwork performs similarly to learning the convolution parameters, while being friendlier in storage and running time since a normalization layer has far fewer learnable parameters than a convolution layer.
Inspired by these works, we leverage a HyperNetwork to learn a single neural routing function for all the split nodes in the same level of the decision tree. Specifically, given the one-hot vector $x_{node}$ indicating the split node index, we predict the learnable scale $\theta_{scale}$ and shift $\theta_{shift}$ parameters of the normalization layer in the \textit{classification} module of the neural routing function,
\begin{equation}
\theta_{scale},\theta_{shift} = f_{hyper}(x_{node})
\end{equation}
where $f_{hyper}$ refers to the HyperNetwork and is implemented as a three-layer MLP. The sizes of $\theta_{scale}$ and $\theta_{shift}$ depend on the channel number of the normalization layer. The normalization parameters of the \textit{classification} module are then replaced with those predicted by the HyperNetwork,
\begin{equation}
p = f_{class}(h_q,h_o;\theta_{scale},\theta_{shift})
\end{equation}
Therefore, for a $t$-level tree, we only need to learn $t$ neural routing functions in total.
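A minimal sketch of this idea is shown below, assuming a LayerNorm-style normalization; the hidden sizes and module names are our own choices rather than the paper's exact implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class HyperNorm(nn.Module):
    def __init__(self, n_nodes, feat_dim, hidden=256):
        super().__init__()
        # Normalization without its own affine parameters ...
        self.norm = nn.LayerNorm(feat_dim, elementwise_affine=False)
        # ... which are instead predicted from the one-hot node index.
        self.hyper = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * feat_dim))

    def forward(self, h, node_onehot):
        scale, shift = self.hyper(node_onehot).chunk(2, dim=-1)
        return self.norm(h) * scale + shift
\end{verbatim}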
\begin{table*}[t]
\centering
\begin{tabular}{lcccccc}
\toprule
\textbf{Method} &
\textbf{Score} $\uparrow$ &
\textbf{DCRE}$\mathbf{(0.05)}$ $\uparrow$&
\textbf{DCRE}$\mathbf{(0.15)}$ $\uparrow$&
\textbf{Pose}$\mathbf{(0.05m, 5^{\circ})}$ $\uparrow$&
\textbf{Outlier}$\mathbf{(0.5)}$ $\downarrow$&
\textbf{N/A}\\
\midrule
HFNet \cite{sarlin2019coarse} & 0.373 & 0.064 & 0.103 & 0.018 & 0.360 & 0.000 \\
HF-Net Trained \cite{sarlin2019coarse} & 0.789 & 0.192 & 0.300 & 0.073 & 0.403 & 0.000 \\
NetVLAD \cite{arandjelovic2016netvlad} & 0.575 & 0.007 & 0.137 & 0.000 & 0.431 & 0.000 \\
DenseVLAD \cite{torii201524} & 0.507 & 0.008 & 0.136 & 0.000 & 0.501 & 0.006\\
Active Search \cite{sattler2016efficient} & 1.166 & 0.185 & 0.250 & 0.070 & \color{red}{0.019} & 0.690\\
Grove \cite{cavallari2017fly} & 1.240 & 0.342 & 0.392 & 0.230 & 0.102 & 0.452 \\
Grove V2 \cite{cavallari2019real} & 1.162 & \color{blue}{0.416} & 0.488 & \color{blue}{0.274} & 0.254 & 0.162 \\
D2Net \cite{dusmanu2019d2} & \color{blue}{1.247} & 0.392 & \color{blue}{0.521} & 0.155 & 0.144 & 0.014 \\
\midrule
NeuralRouting (Ours) & \color{red}{1.441} & \color{red}{0.538} & \color{red}{0.615} & \color{red}{0.358} & \color{blue}{0.097} & 0.227\\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Comparison on the test split of the RIO-10 benchmark w.r.t. the average score (1 + DCRE (0.05) - Outlier (0.5)), DCRE errors, camera pose accuracy and outlier ratio. \textit{N/A} denotes invalid/missing predictions. The red and blue numbers rank the first and second for each metric. }
\label{table:relocalization_RIO}
\vspace{-3mm}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{l|cccccccc}
\toprule
\textbf{Pose}$\mathbf{(0.05m, 5^{\circ})}$ $\uparrow$ & Chess & Fire & Heads & Office & Pumpkin & Kitchen & Stairs & Average \\
\midrule
Shotton et al. \cite{shotton2013scene} & 92.60\% & 82.90\% & 49.40\% & 74.90\% & 73.70\% & 71.80\% & 27.80\% & 67.60\% \\
Guzman-Rivera et al. \cite{guzman2014multi} & 96.00\% & 90.00\% & 56.00\% & 92.00\% & 80.00\% & 86.00\% & 55.00\% & 79.30\% \\
Valentin et al. \cite{valentin2015exploiting} & 99.40\% & 94.60\% & 95.90\% & 97.00\% & 85.10\% & 89.30\% & 63.40\% & 89.50\% \\
Brachmann et al. \cite{brachmann2016uncertainty} & 99.60\% & 94.00\% & 89.30\% & 93.40\% & 77.60\% & \color{blue}{91.10\%} & 71.70\% & 88.10\% \\
Schmidt et al. \cite{schmidt2016self} & 97.75\% & 96.55\% & \color{blue}{99.80\%} & 97.20\% & 81.40\% & \color{red}{93.40\%} & 77.70\% & 92.00\% \\
Grove \cite{cavallari2017fly} & 99.40\% & 99.00\% & \color{red}{100.00\%} & 98.20\% & \color{red}{91.20\%} & 87.00\% & 35.00\% & 87.10\% \\
Grove V2 \cite{cavallari2019real} & \color{red}{99.95\%} & \color{blue}{99.70\%} & \color{red}{100.00\%} & \color{blue}{99.48\%} & \color{blue}{90.85\%} & 90.68\% & \color{red}{94.20\%} & \color{red}{96.41\%} \\
\midrule
NeuralRouting (Ours) & \color{blue}{99.85\%} & \color{red}{100.00\%} & \color{red}{100.00\%} & \color{red}{99.80\%} & 88.80\% & 90.96\% & \color{blue}{84.20\%} & \color{blue}{94.80\%} \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Comparison on the 7-scenes dataset w.r.t. the camera pose accuracy. The red and blue numbers rank the first and second for each scene.}
\label{table:relocalization_7scenes}
\vspace{-4mm}
\end{table*}
\vspace{-1mm}
\subsection{Camera Pose Estimation} \label{sec:pose}
\vspace{-1mm}
The core of our algorithm is a decision tree, as in many previous camera relocalization works \cite{cavallari2017fly,cavallari2019real}. We therefore inherit similar optimization and refinement steps from \cite{cavallari2019real} for camera pose computation, introduced below.
To generate the camera pose in $\mathbf{SE}(3)$, we first fit modes in the leaf nodes and then optimize the pose by leveraging the established 3D-3D correspondences. Each leaf node covers a set of 3D points (XYZ+RGB) in world space, projected from the 2D image pixels captured in the training sequence. Following \cite{valentin2015exploiting}, we detect the modes of the empirical distribution in each leaf node via mean shift \cite{comaniciu2002mean}, and then construct a Gaussian Mixture Model (GMM) by iteratively estimating a 3D Gaussian distribution for each mode.
After mode fitting in the leaf nodes, we leverage preemptive locally-optimized RANSAC \cite{chum2003locally} for camera pose optimization. We start by generating 1024 pose hypotheses, each computed by applying the Kabsch algorithm \cite{kabsch1976solution} to three randomly sampled 3D-3D point correspondences relating camera and world space. Given an observed point in camera space, its corresponding world coordinate is sampled from one random mode of the fitted GMM in the routed leaf node. We filter out hypotheses that do not conform to a rigid body transformation following \cite{cavallari2017fly}, and regenerate alternatives until this requirement is satisfied.
The final camera pose is selected by iteratively re-evaluating and re-ranking the hypotheses using the Mahalanobis distance, discarding the worse half until only one hypothesis is left.
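For reference, a minimal sketch of the Kabsch step for a single hypothesis is given below (our own code, not the authors'); \texttt{p\_cam} and \texttt{p\_world} hold the three sampled correspondences:
\begin{verbatim}
import numpy as np

def kabsch(p_cam, p_world):              # (3,3) arrays, rows = points
    cc, cw = p_cam.mean(0), p_world.mean(0)
    H = (p_cam - cc).T @ (p_world - cw)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation
    t = cw - R @ cc
    return R, t                          # x_world = R @ x_cam + t
\end{verbatim}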
\textbf{Multi-leaves.}
Given an input query point, the aforementioned pose optimization fits modes only from the single routed leaf node, as is common in existing decision tree implementations whose routing functions perform a hard data partition. In contrast, our neural routing function performs a ``soft'' data partition with predicted probability $p$, so the input point can be ``routed'' to every leaf node with an accumulated probability obtained by multiplying the probabilities of the split nodes along the path. Motivated by this observation, and to achieve more robust pose optimization, we fit modes by combining the world coordinates from the multiple routed leaf nodes with the highest accumulated probabilities instead of a single leaf node. In our implementation we use four leaf nodes, which works best experimentally.
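A hypothetical sketch of this accumulation, where the tree node objects and the per-node routing callback are assumed interfaces rather than the paper's code:
\begin{verbatim}
import heapq

def top_leaves(node, route_fn, prob=1.0, k=4):
    # Multiply child probabilities along every root-to-leaf path and
    # keep the k leaves with the highest accumulated probability.
    if not node.children:                       # reached a leaf
        return [(prob, node)]
    pairs = []
    for child, pc in zip(node.children, route_fn(node)):
        pairs += top_leaves(child, route_fn, prob * pc, k)
    return heapq.nlargest(k, pairs, key=lambda t: t[0])
\end{verbatim}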
\textbf{Pose refinement.}
Last but not least, we follow \cite{cavallari2019real} to incorporate our camera relocalizer into a 3D reconstruction pipeline for further camera pose refinement, which mainly consists of ICP \cite{besl1992method} and model-based hypothesis ranking.
\section{Experiments}
\subsection{Implementation Details}
\textbf{Tree structure.} For all experiments in this paper, we use a 5-level 16-way tree for scene partition; a perfect tree of this shape consists of 4369 nodes. In practice, depending on the specific scene geometry, such a tree contains about 2000 to 3000 valid nodes.
\textbf{Training details.} The neural routing functions are implemented in PyTorch. Benefiting from the HyperNetwork design, we only train 5 neural routing functions. Each routing function is trained for 60 epochs with a batch size of 256. The network weights are optimized with Adam \cite{kingma2014adam} with an initial learning rate of 0.001 and betas (0.9, 0.999); the learning rate is halved every 20 epochs. The number of context points is set to 600 throughout.
\subsection{Dataset}
We test our proposed algorithm on two camera relocalization benchmarks, RIO-10 \cite{wald2020beyond} and 7-scenes \cite{shotton2013scene}, developed for dynamic and static indoor scenes respectively. The RIO-10 dataset includes 10 real indoor environments, each scanned several times over different time periods, exhibiting the geometric and illumination changes common in dynamic environments. The dataset is separated into training/validation/test splits, and test results are obtained by submission to the online benchmark. The 7-scenes dataset contains only training and test sets, and has been the most popular camera relocalization benchmark for static indoor environments.
\setlength{\tabcolsep}{6pt}
\begin{table}[t]
\centering
\begin{tabular}{c|c}
\toprule
& \textbf{Pose}$\mathbf{(0.05m, 5^{\circ})}$ $\uparrow$ \\
\midrule
Ours w/o outlier labels & 25.14\%\\
Ours w/o multi-leaves & 25.80\%\\
5-level 8-way Tree & 24.60\%\\
3-level 16-way Tree & 16.75\%\\
4-level 16-way Tree & 25.31\%\\
\midrule
Ours (5-level 16-way Tree) & 27.05\%\\
Ours w. pose refinement & 31.99\%\\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Ablation study on the validation set (10 scenes) of the RIO-10 benchmark. Ours is the full pipeline of our algorithm. }
\label{table:ablation}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.94\linewidth]{curve_600dpi.png}
\caption{The charts show the performance (\textbf{DCRE}$\mathbf{(0.15)}$) of compared approaches with respect to semantic (semantic difference), geometric (depth difference) and visual change (NSSD, Normalized Correlation Coefficient) as introduced in RIO-10 dataset \cite{wald2020beyond}.}
\label{figure:curve}
\end{figure*}
\subsection{Evaluation Metrics}
To evaluate the quality of the estimated camera pose, we adopt the common camera pose accuracy \textbf{Pose}$\mathbf{(0.05m, 5^{\circ})}$, computed as the proportion of test frames whose translation error is within 5 centimeters and whose angular error is within 5 degrees. On the RIO-10 benchmark \cite{wald2020beyond} we further adopt its proposed metric \textbf{DCRE}, short for Dense Correspondence Re-Projection Error, computed as the average magnitude of the 2D correspondence displacement normalized by the image diagonal. The displacement is calculated between the 2D projections of the underlying 3D model under the ground-truth and predicted camera poses, so DCRE captures an error that correlates with visual perception rather than with the absolute camera pose alone. \textbf{DCRE}$\mathbf{(0.05)}$ and \textbf{DCRE}$\mathbf{(0.15)}$ are the percentages of test frames whose DCRE is within 0.05 or 0.15, while \textbf{Outlier}$\mathbf{(0.5)}$ describes the opposite case, the percentage of test frames whose DCRE is above 0.5.
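As an illustration of the metric (not the benchmark's reference code), the DCRE of one frame can be computed roughly as follows, assuming camera-to-world pose matrices and a pinhole intrinsic matrix $K$:
\begin{verbatim}
import numpy as np

def dcre(depth, K, T_gt, T_pred):
    # Back-project valid pixels with the ground-truth pose, then
    # compare 2D projections under both poses (camera-to-world T).
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])
    cam = np.linalg.inv(K) @ (pix * depth[valid])
    world = T_gt[:3, :3] @ cam + T_gt[:3, 3:4]

    def project(T):
        c = T[:3, :3].T @ (world - T[:3, 3:4])
        p = K @ c
        return p[:2] / p[2]

    disp = np.linalg.norm(project(T_gt) - project(T_pred), axis=0)
    return disp.mean() / np.hypot(h, w)  # normalize by image diagonal
\end{verbatim}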
\begin{figure}
\centering
\includegraphics[width=0.94\linewidth]{trajectory.pdf}
\caption{Ground truth (green) and predicted (blue) camera pose trajectory on the validation set of RIO-10 dataset.}
\label{figure:trajectory}
\vspace{-3mm}
\end{figure}
\subsection{Numerical Results}
We compare our algorithm with all other approaches on the test split of the RIO-10 dataset in Table \ref{table:relocalization_RIO}. Among all metrics evaluating the quality of camera pose estimation, our algorithm ranks first except for \textbf{Outlier}$\mathbf{(0.5)}$, where it is second best. Regarding the camera pose accuracy \textbf{Pose}$\mathbf{(0.05m, 5^{\circ})}$, which most directly measures pose quality, our result (0.358) surpasses the previous state of the art (0.274) by about 30\%. This demonstrates the effectiveness and robustness of our outlier-aware neural tree in dynamic indoor environments.
We further test our algorithm on the popular camera relocalization benchmark 7-scenes for static indoor scenes, shown in Table \ref{table:relocalization_7scenes}. Our algorithm ranks second in averaged camera pose accuracy among all existing approaches, trailing the best performance by a very small gap. This further shows the strong generalization ability of our algorithm on static scenes, even though it was developed for dynamic environments. Note that the baseline results on RIO-10 and 7-scenes are taken from the official online benchmark and from the recent state-of-the-art relocalization paper \cite{cavallari2019real}, respectively.
We test the running time of our algorithm on GPU. For a single image, camera pose optimization and refinement take around 100 ms and 150 ms respectively, similar to the previous decision-tree-based approach \cite{cavallari2019real}. The neural routing runs in 480 ms, while its light version, which does not consider multiple leaves during routing, takes only 100 ms yet achieves similar camera pose accuracy, as verified in Table \ref{table:ablation}.
\subsection{Ablation Study}
To justify the effectiveness of our algorithm design, we conduct an ablation study on the validation set of the RIO-10 dataset, shown in Table \ref{table:ablation}. Space partition is important for our neural tree construction, so we first test different partition strategies by varying the hyper-parameters $t$ and $m$ of the $t$-level $m$-way tree. We observe that as the number of levels $t$ decreases, the camera pose accuracy degrades; this is mainly due to the increased box size of the leaf nodes, which makes it harder to fit a good distribution and to sample effective world coordinates.
Notice that the leaf nodes of the 4-level 16-way tree and the 5-level 8-way tree have the same box size, yet the 16-way tree achieves better camera pose accuracy. This is mainly because the 4-level tree has fewer routing functions to train, and hence accumulates less error from the deep networks during hierarchical routing.
Finally, we validate the design of outlier classification and of multi-leaves in camera pose optimization by removing them from the full framework, and observe worse pose accuracy as expected.
\subsection{Analysis}
\textbf{Pose trajectory.} We visualize the pose trajectories of our estimations and of the ground truth on two scenes of the RIO-10 validation split in Figure \ref{figure:trajectory}. We observe a significant overlap between the two trajectories, which confirms the effectiveness of our algorithm in dynamic indoor environments.
\textbf{Performance against various scene changes.} To examine how robust our algorithm is to scene changes compared to other approaches, we plot the performance of each method on images of increasing visual, geometric and semantic change, as defined in the RIO-10 dataset, in Figure \ref{figure:curve}. Our curve is almost always the best across all types of scene changes, further validating our algorithm for camera relocalization in dynamic indoor scenes.
\section{Conclusion}
In this paper, we propose a novel outlier-aware neural tree to achieve accurate camera relocalization in dynamic indoor environments. Our core idea is to construct a decision tree via hierarchical space partition of the scene environment and to learn neural routing functions that reject dynamic input points during the level-wise routing process. Extending our work to RGB-only input and generalizing to novel environments are more realistic yet challenging settings, which we leave as valuable future directions.
\section*{Acknowledgements}
This work was supported in part by National Key R\&D Program of China (2019YFF0302902), National Science Foundation of China General Program grant No. 61772317, NSF grant IIS-1763268, a grant from the Samsung GRO program, a Vannevar Bush Faculty Fellowship, and a gift from Amazon AWS ML program.
\section{Appendix}
The appendix provides supplemental material that could not be included in the main paper due to the page limit:
\begin{itemize}
\item More Space Partition Strategies.
\item Extension to Neural Forest.
\item Unified Neural Routing Function -- PointNet++.
\item Ablation for HyperNetwork.
\end{itemize}
\subsection*{A. More Space Partition Strategies} \label{sec:space}
In the main paper, we evaluate different space partition strategies by varying the hyper-parameters of a $t$-level $m$-way tree. In this section, we conduct further experiments by constructing the covered bounding boxes in different manners, another important factor that influences the tree structure. Specifically, we first follow the axes of the world coordinate system and compute the axis-aligned minimum bounding box (AABB) of the entire scene as the root node for space partition; this is the \textbf{original} implementation in the main paper. To explore more strategies, we rotate the scene model about the x and y axes by \textbf{30$^{\circ}$} and compute the new AABB; similarly, we obtain a version rotated by \textbf{60$^{\circ}$}. The bounding boxes constructed above all follow the world coordinates and may leave large blank 3D spaces inside the box, which wastes capacity of the neural routing functions. To resolve this, we obtain a \textbf{compact} box by recalculating the world coordinate system of the scene with PCA \cite{pearson1901liii} and fitting the tightest bounding box along the new axes. To alleviate the potential influence of the coordinate axes on camera pose optimization, as observed in \cite{brachmann2018learning,brachmann2017dsac}, we further rotate the compact box to align with the default world coordinate axes for a fair comparison with the other boxes.
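A small sketch of the compact-box construction (our own illustration):
\begin{verbatim}
import numpy as np

def compact_box(points):                  # points: (n,3) scene surface
    mean = points.mean(0)
    # Rows of Vt are the principal axes of the scene.
    _, _, Vt = np.linalg.svd(points - mean, full_matrices=False)
    local = (points - mean) @ Vt.T        # coordinates in the PCA frame
    return local.min(0), local.max(0), Vt, mean
\end{verbatim}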
We illustrate the different bounding box constructions in Figure \ref{figure:box}. In terms of the compactness between the bounding box and the 3D scene model, compact box > original box > rotation 60$^{\circ}$ > rotation 30$^{\circ}$.
The corresponding numerical results are shown in Table \ref{table:result}. Consistent with the compactness, the camera pose accuracy follows the same order: compact box > original box > rotation 60$^{\circ}$ > rotation 30$^{\circ}$. This suggests that the more compact the box, the higher the pose accuracy our algorithm achieves, mainly because in a compact box the geometric regions are more uniformly distributed among the split nodes of the decision tree, which improves the utilization of the neural routing functions.
\setlength{\tabcolsep}{6pt}
\begin{table}[t]
\centering
\begin{tabular}{c|c}
\toprule
& \textbf{Pose}$\mathbf{(0.05m, 5^{\circ})}$ $\uparrow$ \\
\midrule
original + rotation 30$^{\circ}$ & 51.05\% \\
original + rotation 60$^{\circ}$ & 52.98\% \\
original box & 54.93\% \\
compact box & 56.68\% \\
\midrule
forest & 58.29\% \\
\midrule
PointNet++ & 1.93\% \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Camera pose accuracy on the scene 01 in the validation set of RIO-10 dataset.}
\label{table:result}
\vspace{-4mm}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.94\linewidth]{supp_bbx.pdf}
\caption{Different space partition strategies via bounding box construction.} \label{figure:box}
\vspace{-5mm}
\end{figure}
\subsection*{B. Extension to Neural Forest}
In existing camera relocalization works based on decision trees \cite{cavallari2017fly,cavallari2019real}, several trees are usually trained on the same scene to obtain a forest for pose optimization; the final prediction of the forest is simply the union of the fitted modes over all trees. In these works, the decision rule of each split node is adaptively learned from the training data as either a color or a depth comparison, so different decision trees can be learned simply by sampling different inputs from the same scene.
Motivated by these works, we extend our neural tree to a neural forest by training multiple trees. In our work, however, the decision rules are predetermined by the space partition strategy. To ensure diversity among the trees, we adopt the four space partition strategies introduced in Section \ref{sec:space} and unify their predictions to form a neural forest. The numerical results in Table \ref{table:result} show that the performance is further improved by using a forest.
\subsection*{C. Unified Neural Routing Function -- PointNet++}
The main paper employs hierarchical node-wise neural routing functions to classify each input query point into one of the leaf nodes. This can be naturally viewed as a point-wise segmentation task, where each segmentation label corresponds to one leaf node. As the input is formulated as a point cloud, a unified neural routing function can be obtained by directly leveraging the popular state-of-the-art point cloud segmentation network PointNet++ \cite{qi2017pointnet++}. In this setting, PointNet++ takes the colored point cloud of a single frame as input and outputs a point-wise segmentation mask; each input point is a query point and also serves as a context point for the other query points. In this unified routing function, outlier rejection is no longer an option and is excluded from the segmentation labels. We adopt the MSG backbone of PointNet++ in the implementation.
The numerical result in Table \ref{table:result} is much worse than that of our neural routing function implementation, which demonstrates the effectiveness of our neural tree design.
\subsection*{D. Ablation for HyperNetwork}
In our algorithm, the HyperNetwork unifies the network parameters of all neural routing functions at the same level into a single network, saving considerable storage space and training time. However, as observed in previous work \cite{fan2019general}, a HyperNetwork may degrade performance compared to training each network separately. To investigate this potential influence on the neural routing functions, we select three split nodes per level of our neural tree and compare their classification accuracy with and without the HyperNetwork, as shown in Table \ref{table:hypernetwork}.
Interestingly, we observe that the HyperNetwork degrades the performance only within a reasonable range, consistent with past experience \cite{fan2019general}.
\setlength{\tabcolsep}{3pt}
\begin{table}[t]
\centering
\begin{tabular}{c|ccc|ccc}
\toprule
& \multicolumn{3}{c|}{w. HyperNet} & \multicolumn{3}{c}{w/o HyperNet} \\
\midrule
& level 2 & level 3 & level 4 & level 2 & level 3 & level 4 \\
\midrule
node 1 & 58.1\% & 75.9\% & 70.5\% & 60.5\% & 79.5\% & 75.0\% \\
node 2 & 57.9\% & 66.5\% & 69.5\% & 56.8\% & 62.6\% & 69.3\% \\
node 3 & 23.6\% & 48.5\% & 49.3\% & 28.9\% & 66.7\% & 50.0\% \\
\midrule
average & 46.5\% & 63.6\% & 63.1\% & 48.7\% & 69.9\% & 64.7\% \\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Ablation study of HyperNetwork on the scene 01 in the validation set of RIO-10 dataset. For each level, we collect three split nodes to evaluate their classification accuracy on the validation set.}
\label{table:hypernetwork}
\vspace{-4mm}
\end{table}
In this paper we address the model selection problem for the Stochastic Block Model (SBM); that is, the estimation of the number of communities given a sample of the adjacency matrix. The SBM was introduced by \cite{holland1983stochastic} and has rapidly become popular in the literature as a model for random networks
exhibiting blocks or communities among their nodes. In this model, each node in the network has an associated latent discrete random variable describing its community label and, given two nodes, the probability of a connection between them depends only on the values of the nodes' latent variables.
From a statistical point of view, some methods have been proposed to address the problem of parameter estimation or label recovering for the SBM. Some examples include maximum likelihood estimation \citep{bickel2009nonparametric, amini2013pseudo}, variational methods \citep{daudin2008mixture, latouche2012variational}, spectral clustering \citep{rohe2011spectral} and Bayesian inference \citep{van2017bayesian}.
The asymptotic properties of these estimators have also been considered in subsequent works such as \cite{bickel2013asymptotic} or \cite{su2017strong}. All these approaches assume the number of communities is known \emph{a priori}.
The model selection problem for the SBM, that is the estimation of the number of communities, has also been addressed before; see for example the recent work of \citet{le2015estimating} and references therein. But to our knowledge it was not until \citet{wang2017likelihood} that a consistency result was obtained for such a penalized estimator.
In the latter, the authors propose a penalized likelihood criterion and show its convergence in probability (weak consistency) to the true number of communities. Their proof only applies to the case where the number of candidate values for the estimator is finite (it is upper bounded by a known constant) and the network average degree grows at least as a polylog function of the number of nodes.
From a practical point of view, the computation of the log-likelihood function
and its supremum is not a simple task due to the hidden nature of the nodes' labels.
\cite{wang2017likelihood}
propose a variational method as described in \cite{bickel2013asymptotic} using the EM algorithm of \cite{daudin2008mixture},
a
profile maximum likelihood criterion as in \cite{bickel2009nonparametric} or the pseudo-likelihood algorithm in \cite{amini2013pseudo}.
The method introduced in \cite{wang2017likelihood} has been subsequently studied in \cite{hu2016consistency}, where the authors propose a modification of the penalty term. However, in practice, the computation of the suggested estimator still remains a demanding task since it depends on the profile maximum likelihood function.
In this paper we take an information-theoretic perspective and introduce the Kri\-chevs\-ky-Tro\-fi\-mov (KT) estimator, see \citet{kt1981}, in order to determine the number of communities of a SBM based on a sample of the adjacency matrix of the network. We prove the strong consistency of this estimator, in the sense that the empirical value equals the correct number of communities in the model with probability one for all sufficiently large numbers of nodes $n$. The strong consistency is proved in the \textit{dense} regime, where the probability of having an edge is constant, and in the \textit{sparse} regime, where this probability goes to zero with $n$ at rate $\rho_n$. The second regime is more interesting in the sense that one must control how much information (in terms of the number of edges in the network) is required to estimate the parameters of the model. We prove that consistency in the sparse case is guaranteed when the expected degree of a randomly selected node grows to infinity, that is when
$n\rho_n \rightarrow \infty$, weakening the assumption in \citet{wang2017likelihood}, which proves consistency in the regime $\frac{n\rho_n}{\log n} \to \infty$. We also consider a penalty function of smaller order and we do not assume a known upper bound on the true number of communities.
To our knowledge this is the first strong consistency result for an estimator of the number of communities, even in the bounded case.
The paper is organized as follows. In Section~\ref{defs} we define the model and the notation used in the paper, in Section~\ref{kt} we introduce the KT estimator for the number of communities and state the main result. The proof of the consistency of the estimator is presented in Section~\ref{proof}.
\section{The Stochastic Block Model}\label{defs}
Consider a non-oriented random network with nodes $\{1,2,\dotsc, n\}$, specified by its adjacency matrix $\bold{X}_{n\times n} \in \{0,1\}^{n\times n}$, which is symmetric and has diagonal entries equal to zero. Each node $i$ has an associated latent (non-observed) variable $Z_i$ on $[k]:=\{1,2,\dotsc, k\}$, the \emph{community} label of node $i$.
The SBM with $k$ communities is a probability model for a random network as above, where the latent variables $\bold{Z}_n=(Z_1,Z_2,\cdots,Z_n)$ are independent and identically distributed random variables over $[k]$ and the law of the adjacency matrix $\bold{X}_{n\times n}$, conditioned on the value of the latent variables
$\bold{Z}_n=\bold{z}_n$, is a product measure of Bernoulli random variables whose parameters depend only on the nodes' labels. More formally, there exists a probability distribution over $[k]$, denoted by $\pi=(\pi_1,\cdots,\pi_{k})$, and a symmetric probability matrix $P \in [0,1]^{k\times k}$ such that the distribution of the pair $(\bold{Z}_n,\bold{X}_{n\times n})$ is given by
\begin{equation}\label{def-prob}
\P(\bold{z}_n,\a) = \prod_{a=1}^{k} \pi_{a}^{n_a} \prod_{a,b=1}^{k} P_{a,b}^{O_{a,b}/2} (1-P_{a,b})^{(n_{a,b}-O_{a,b})/2}\,,
\end{equation}
where the counters $n_a=n_a(\bold{z}_n)$, $n_{a,b}=n_{a,b}(\bold{z}_n)$ and $O_{a,b}=O_{a,b}(\bold{z}_n,\a)$ are given by
\begin{align*}
n_a(\bold{z}_n) &= \sum\limits_{i=1}^n \mathds{1}\{z_i=a\}\, , \qquad\quad\;\, 1 \leq a \leq k\\
n_{a,b}(\bold{z}_n) &=\begin{cases}
n_a(\bold{z}_n)n_b(\bold{z}_n)\, ,& 1 \leq a,b \leq k\,;\, a\neq b\\
n_a(\bold{z}_n)(n_a(\bold{z}_n)-1)\, & 1 \leq a,b \leq k\,;\, a=b \,
\end{cases}
\end{align*}
and
\[
O_{a,b}(\bold{z}_n,\a) = \sum_{i,j=1}^n \mathds{1}\{z_i=a,z_j=b\}x_{ij} \, ,\quad 1 \leq a,b \leq k\,.
\]
As is usual in the definition of likelihood functions, by convention we define $0^0=1$ in \eqref{def-prob} when some of the parameters are 0.
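For concreteness, the complete-data likelihood \eqref{def-prob} can be evaluated directly from a label vector and an adjacency matrix; the following Python sketch (our own illustration, not part of the original paper, with the $0^0=1$ convention handled numerically) computes its logarithm:
\begin{verbatim}
import numpy as np

def loglik(z, x, pi, P):
    # z: labels in {0,...,k-1}; x: symmetric 0/1 adjacency matrix.
    k = len(pi)
    n_a = np.bincount(z, minlength=k)
    O = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            O[a, b] = x[np.ix_(z == a, z == b)].sum()
    n_ab = np.outer(n_a, n_a) - np.diag(n_a)   # ordered pair counts
    with np.errstate(divide='ignore', invalid='ignore'):
        t = O * np.log(P) + (n_ab - O) * np.log(1 - P)
    # nan_to_num enforces the 0 log 0 = 0 convention.
    return (n_a * np.log(pi)).sum() + np.nan_to_num(t).sum() / 2
\end{verbatim}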
We denote by
$\Theta^k$ the parametric space for a model with $k$ communities, given by
\[
\Theta^k= \left\lbrace (\pi,P)\colon \pi \in (0,1]^k, \,\sum_{a=1}^k\pi_a=1,\, P \in [0,1]^{k\times k}, \,P \text{ symmetric}\right\rbrace\,.
\]
The \emph{order} of the SBM is defined as the smallest $k$ for which
the equality \eqref{def-prob} holds for a pair of parameters
$(\pi^0, P^0)\in \Theta^k$ and will be denoted by $k_0$.
If a SBM has order $k_0$ then it cannot be reduced to a model with fewer communities than $k_0$; this specifically means that $P^0$ does not have two identical columns.
When $P^0$ is fixed and does not depend on $n$, the mean degree of a given node grows linearly in $n$, and this regime produces very connected (dense) graphs. For this reason we also consider the regime producing sparse graphs (with fewer edges); that is, we allow $P^0$ to decrease with $n$ to the zero matrix. In this case we write $P^0=\rho_nS^0$, where $S^0 \in [0,1]^{k\times k}$ does not depend on $n$ and $\rho_n$ is a function decreasing to 0 at a rate such that $n\rho_n \rightarrow \infty$.
\section{The KT order estimator}\label{kt}
The Krichevsky-Trofimov order estimator in the context of a SBM is a
regularized estimator based on a mixture distribution for the adjacency matrix $\bold{X}_{n\times n}$.
Given a sample $(\bold{z}_n,\a)$ from the distribution \eqref{def-prob} with parameters $(\pi^0,P^0)$, of which we assume only the network $\a$ is observed, the estimator of the number of communities is defined by
\begin{equation}\label{est_kt}
\hat{k}_{\mbox{\tiny{KT}}}(\a)=\argmax\limits_{k} \{ \, \log \KT{k}{\a} - \pena{k}{n} \,\}\,,
\end{equation}
where $\K{\a}$ is the mixture distribution for a SBM with $k$ communities
and $\pena{k}{n}$ is a penalizing function that will be specified later.
As it is usual for the KT distributions we choose as ``prior'' for the pair
$(\pi,P)$ a product measure obtained by a Dirichlet($1/2,\cdots, 1/2$) distribution (the prior distribution for $\pi$) and a product of $(k^2+k)/2$ Beta($1/2,1/2$) distributions (the prior for the symmetric matrix $P$). In other words, we define the distribution on $\Theta^k$
\begin{equation}\label{mixture_dist}
\nu_k(\pi, P)= \Biggl[ {\textstyle\frac{\Gamma\left(\frac{k}{2}\right)}{\Gamma\left(\frac{1}{2}\right)^k}}\prod_{a=1}^k\pi_{a}^{-\frac12} \Biggr]
\Biggl[ \;\prod_{1\leq a\leq b \leq k} {\textstyle\frac{1}{\Gamma\left(\frac{1}{2}\right)^2}}\;P_{a,b}^{-\frac12}(1-P_{a,b})^{-\frac12}\Biggr]
\end{equation}
and we construct the mixture distribution for $\bold{X}_{n\times n}$, based on
$\nu_k(\pi, P)$, given by
\begin{equation}\label{kt-mix}
\K{\a}=\mathbb{E}_{\nu_k}[\, \P(\a) \,] = \int_{\Theta^k}\P(\a)\nu_k(\pi, P) d\pi dP \,,
\end{equation}
where $\P(\a)$ stands for the marginal distribution obtained from \eqref{def-prob}, and given by
\begin{equation}\label{eq:likelihood_function}
\P(\a)=\sum\limits_{\bold{z}_n \in [k]^n} \P(\bold{z}_n,\a)\,.
\end{equation}
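Since the prior \eqref{mixture_dist} is a product of a Dirichlet density and Beta densities, conjugacy lets the integral in \eqref{kt-mix} be computed in closed form for each fixed $\bold{z}_n$, as a ratio of Gamma and Beta functions. The following brute-force Python sketch (ours; feasible only for very small $n$ since it enumerates all $k^n$ label assignments) evaluates $\log \K{\a}$ this way:
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.special import gammaln, betaln

def log_kt(x, k):
    # Brute-force sum over all label assignments; tiny n only.
    n = x.shape[0]
    total = -np.inf
    for z in product(range(k), repeat=n):
        z = np.array(z)
        n_a = np.bincount(z, minlength=k)
        # Dirichlet(1/2,...,1/2) integral over pi.
        s = (gammaln(k / 2) - k * gammaln(0.5)
             + gammaln(n_a + 0.5).sum() - gammaln(n + k / 2))
        # Beta(1/2,1/2) integral for each unordered block pair.
        for a in range(k):
            for b in range(a, k):
                O = x[np.ix_(z == a, z == b)].sum()
                if a == b:
                    e1, e2 = O / 2, (n_a[a] * (n_a[a] - 1) - O) / 2
                else:
                    e1, e2 = O, n_a[a] * n_a[b] - O
                s += betaln(e1 + 0.5, e2 + 0.5) - betaln(0.5, 0.5)
        total = np.logaddexp(total, s)
    return total
\end{verbatim}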
As in other model selection problems where the KT approach has proved to be very useful, for example Context Tree Models \citep{csiszar-talata-2006} or Hidden Markov Models \citep{gassiat2003optimal}, in the case of the SBM there is a close relationship between the KT mixture distribution and the maximum likelihood function.
The following proposition gives a non-asymptotic uniform upper bound for the log-ratio between these two functions; its proof is postponed to the Appendix.
\begin{proposition}\label{prop:razao_L_KT}
For all $k$ and all $n\geq \max(4,k)$ we have
\begin{equation*}\label{eq:razao_L_KT}
\max\limits_{\a}\;\Bigl\{\,\log\dfrac{ \sup_{(\pi,P) \in \Theta^k} \P(\a) }{\K{\a}}\,\Bigr\}\; \leq\; \left( \textstyle\frac{k(k+2)}{2} -\frac{1}{2}\right) \log n + c_{k,n}\,,
\end{equation*}
where
\begin{equation*}
c_{k,n} =\textstyle\frac{k(k+1)}{2} \log \Gamma\bigl(\frac{1}{2}\bigr) + \frac{k(k-1)}{4n}+ \frac{1}{12n}+ \log\frac{\Gamma(\frac{1}{2})}{\Gamma(\frac{k}{2})} +\frac{7k(k+1)}{12} \,.
\end{equation*}
\end{proposition}
Proposition~\ref{prop:razao_L_KT} is at the core of the proof of the strong consistency of $\hat{k}_{\mbox{\tiny{KT}}}$ defined by \eqref{est_kt}.
By strong consistency we mean that the estimator equals the order $k_0$ of the SBM with probability one, for all sufficiently large $n$ (that may depend on the sample $\a$).
In order to derive the strong consistency result for the KT order estimator, we need a penalty function in \eqref{est_kt} with a suitable growth rate as $n$ goes to infinity. Although there is a range of possibilities for this penalty function, the specific form we use in this paper is
\begin{equation}\label{eq:penalty}
\begin{split}
\pena{k}{n} &= \sum\limits_{i=1}^{k-1}\textstyle\frac{(i(i+2)+3+\epsilon)}{2}\log n \\
&= \left[ \textstyle\frac{k(k-1)(2k-1)}{12} + \textstyle\frac{k(k-1)}{2} + \textstyle\frac{(3+\epsilon)(k-1)}{2} \right] \log n
\end{split}
\end{equation}
for some $\epsilon>0$. The convenience of this expression will be made clear in the proof of the consistency result. Observe that the penalty function defined by \eqref{eq:penalty} is dominated by a term of order $k^3\log n$ and is therefore of smaller order than the function $\frac{k(k+1)}{2} n\log n$ used in \citet{wang2017likelihood}, so our results also apply in that case.
It remains an open question which is the smallest penalty function for a strongly consistent estimator.
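Given such a routine, the estimator \eqref{est_kt} with penalty \eqref{eq:penalty} can be sketched as below, reusing the \texttt{log\_kt} sketch above; the scan range \texttt{k\_max} and the value of $\epsilon$ are illustrative choices only:
\begin{verbatim}
import numpy as np

def pen(k, n, eps=0.1):
    i = np.arange(1, k)
    return ((i * (i + 2) + 3 + eps) / 2).sum() * np.log(n)

def k_hat(x, k_max=5):
    n = x.shape[0]
    scores = [log_kt(x, k) - pen(k, n) for k in range(1, k_max + 1)]
    return 1 + int(np.argmax(scores))
\end{verbatim}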
We finish this section by stating the main theoretical result in this paper.
\begin{thm}[\sc Consistency Theorem]\label{the:estimators_convergence}
Suppose the SBM has order $k_0$ with parameters $(\pi^0,P^0)$. Then, for a penalty function of the form \eqref{eq:penalty} we have that
\[
\hat{k}_{\mbox{\tiny{KT}}}(\a) = k_0
\]
eventually almost surely as $n\to\infty$.
\end{thm}
The proofs of this and other auxiliary results are given in the next section and in the Appendix.
\section{Proof of the Consistency Theorem}\label{proof}
The proof of Theorem~\ref{the:estimators_convergence} is divided into two main parts. The first one, presented in Subsection~\ref{non-over}, proves that $\hat{k}_{\mbox{\tiny{KT}}}(\a)$ does not overestimate the true order $k_0$, eventually almost surely when $n\to\infty$, even without assuming a known upper bound on $k_0$. The second part of the proof, presented in Subsection~\ref{non-under}, shows that $\hat{k}_{\mbox{\tiny{KT}}}(\a)$ does not underestimate $k_0$, eventually almost surely when $n\to \infty$. By combining these two results we prove that $\hat{k}_{\mbox{\tiny{KT}}}(\a) = k_0$ eventually almost surely as $n\to\infty$.
\subsection{Non-overestimation}\label{non-over}
The main result in this subsection is given by the following proposition.
\begin{proposition}\label{prop:no_overestimation}
Let $\a$ be a sample of size $n$ from a SBM of order $k_0$, with parameters $\pi^0$ and $P^0$. Then, the $\hat{k}_{\mbox{\tiny{KT}}}(\a)$ order estimator defined in \eqref{est_kt} with penalty function given by \eqref{eq:penalty} does not
overestimate $k_0$, eventually almost surely when $n\to\infty$.
\end{proposition}
The proof of Proposition~\ref{prop:no_overestimation} follows directly from Lemmas~\ref{lemma:k0_log}, \ref{lemma:log_n} and \ref{lemma:n_inf} presented below.
These lemmas are inspired by the work of \cite{gassiat2003optimal}, which proves consistency for an order estimator of a Hidden Markov Model.
\begin{lemma}\label{lemma:k0_log}
Under the hypotheses of Proposition~\ref{prop:no_overestimation} we have that
$$\hat{k}_{\mbox{\tiny{KT}}}(\a) \not\in (k_0,\log n]$$
eventually almost surely when $n\to\infty$.
\end{lemma}
\begin{proof}
First observe that
\begin{equation}\label{firsteq}
\mathbb{P}_{\pi^0,P^0}( \hat{k}_{\mbox{\tiny{KT}}}(\a) \in (k_0, \log n] ) \;=\; \sum\limits_{k=k_0+1}^{\log n} \mathbb{P}_{\pi^0,P^0}( \hat{k}_{\mbox{\tiny{KT}}}(\a) = k )\,.
\end{equation}
Using Lemma~\ref{prop:ineq_prob_k} we can bound the sum in the right-hand side by
\begin{align*}
\sum\limits_{k=k_0+1}^{\log n}& \exp\left\lbrace \textstyle\frac{(k_0(k_0+2)-1)}{2}\log n + c_{k_0,n} + \pena{k_0}{n} - \pena{k}{n} \right\rbrace\\
\hspace{0.5cm} &\leq e^{ c_{k_0,n}}\,\log n \,\exp\left\lbrace \textstyle\frac{(k_0(k_0+2)-1)}{2}\log n + \pena{k_0}{n} - \pena{k_0+1}{n} \right\rbrace
\end{align*}
where the last inequality follows from the fact that $\pena{k}{n}$ is an increasing
function in $k$. Moreover, a simple calculation using the specific form in \eqref{eq:penalty} gives
\begin{align*}
\textstyle\frac{(k_0(k_0+2)-1)}{2}&\log n + \pena{k_0}{n} - \pena{k_0+1}{n}\\[2mm]
& =\bigl(\textstyle\frac{(k_0(k_0+2)-1)}{2} - \textstyle\frac{(k_0(k_0+2)+3+\epsilon)}{2}\bigr)\log n \\[2mm]
&=\; -(2+{\epsilon}/{2})\log n\,.
\end{align*}
By using this expression in the right-hand side of the last inequality to bound \eqref{firsteq}, we obtain
that
\begin{equation*}
\sum_{n=1}^{\infty}\,\mathbb{P}_{\pi^0,P^0}( \hat{k}_{\mbox{\tiny{KT}}}(\a) \in (k_0, \log n] ) \;\leq\; C_{k_0} \sum\limits_{n=1}^{\infty} \frac{\log n}{n^{2+\epsilon/2}} \;<\; \infty\;,
\end{equation*}
where $C_{k_0}$ denotes an upper bound on $\exp(c_{k_0,n})$. The result now follows from the first Borel--Cantelli lemma.
\end{proof}
\begin{lemma}\label{lemma:log_n}
Under the hypotheses of Proposition~\ref{prop:no_overestimation} we have that
$$\hat{k}_{\mbox{\tiny{KT}}}(\a) \not\in (\log n, n]$$
eventually almost surely when $n\to\infty$.
\end{lemma}
\begin{proof}
As in the proof of Lemma~\ref{lemma:k0_log} we write
\[
\mathbb{P}_{\pi^0,P^0}( \hat{k}_{\mbox{\tiny{KT}}}(\a) \in ( \log n , n] ) = \sum\limits_{k=\log n}^{n} \mathbb{P}_{\pi^0,P^0}( \hat{k}_{\mbox{\tiny{KT}}}(\a) = k )
\]
and we use again Lemma~\ref{prop:ineq_prob_k} to bound the sum in the right-hand side by
\begin{align*}
\sum\limits_{k=\log n}^{ n}& \exp\left\lbrace \textstyle\frac{(k_0(k_0+2)-1)}{2}\log n + c_{k_0,n} + \pena{k_0}{n} - \pena{k}{n} \right\rbrace\\
& \leq\; e^{ c_{k_0,n}}\, n \,\exp\left\lbrace -\log n \left[ -\textstyle \frac{(k_0(k_0+2)-1)}{2} - \frac{\text{pen}(k_0,n)}{\log n} + \frac{\text{pen}(\log n,n)}{\log n} \right] \right\rbrace\,.
\end{align*}
Since $\pena{k}{n}/\log n$ does not depend on $n$ and increases cubically in $k$ we have that
\begin{equation*}
\liminf_{n\rightarrow \infty }\,\, \textstyle\frac{\text{pen}(\log n,n)}{\log n} - \frac{(k_0(k_0+2)-1)}{2} - \frac{\text{pen}(k_0,n)}{\log n} \;>\; 3
\end{equation*}
and thus
\begin{equation*}
\sum\limits_{n=1}^{\infty}\,n\,\exp\left\lbrace -\log n \left[ -\textstyle \frac{(k_0(k_0+2)-1)}{2} - \frac{\text{pen}(k_0,n)}{\log n} + \frac{\text{pen}(\log n,n)}{\log n} \right] \right\rbrace \;<\; \infty\,.
\end{equation*}
Using the fact that $\exp(c_{k_0,n})$ is decreasing in $n$, the result follows from the first Borel--Cantelli lemma.
\end{proof}
\begin{lemma}\label{lemma:n_inf}
Under the hypotheses of Proposition~\ref{prop:no_overestimation} we have that
$$\hat{k}_{\mbox{\tiny{KT}}}(\a) \not\in (n, \infty)$$
eventually almost surely when $n\to\infty$.
\end{lemma}
\begin{proof}
Observe that it is enough to prove that
\[
\log\, \KT{n+m}{\a} - \pena{n+m}{n} \;\leq\; \log \KT{n}{\a} - \pena{n}{n}
\]
for all $m \geq 1$. By using Proposition~\ref{prop:razao_L_KT} we have that
\begin{equation*}
- \log \,\KT{n}{\a} \;\leq\; -\log\sup_{(\pi,P) \in \Theta^n} \P(\a) + \textstyle \left( \frac{n(n+2)}{2} -\frac{1}{2}\right) \log n + c_{n,n}
\end{equation*}
and by \eqref{kt-mix} we obtain
\[
\KT{n+m}{\a} \;\leq\; \sup_{(\pi,P) \in \Theta^{n+m}} \P(\a) \,.
\]
Thus, as
\[
\sup_{(\pi,P) \in \Theta^{n+m}} \P(\a) \;=\; \sup_{(\pi,P) \in \Theta^n} \!\!\P(\a)
\]
we obtain
\begin{align*}
&\log \;\KT{n+m}{\a} - \log \KT{n}{\a} \;\leq\; \textstyle\left( \frac{n(n+2)}{2} -\frac{1}{2}\right) \log n + c_{n,n} \\
&\leq\; \textstyle \frac{(n(n+2) -1)}{2} \log n + n(n+1)\Bigl( \frac{\log \Gamma\left(\frac{1}{2}\right)}{2} + \frac{7}{12} \Bigr) +\frac{n(n-1)}{4n} + \frac{1}{12n} - \log \frac{ \Gamma(\frac{n}{2})}{\Gamma(\frac{1}{2})} \\
&\leq \;\pena{n+m}{n} - \pena{n}{n}
\end{align*}
where the last inequality holds for $n$ big enough.
\end{proof}
\subsection{Non-underestimation}\label{non-under}
\renewcommand{\theta}{\pi,P}
In this subsection we deal with the proof of the non-underestimation of $\hat{k}_{\mbox{\tiny{KT}}}(\a)$. The main result of this section is the following
\begin{proposition}\label{prop:no_underestimation}
Let $\a$ be a sample of size $n$ from a SBM of order $k_0$ with parameters $(\pi^0,P^0)$. Then, the $\hat{k}_{\mbox{\tiny{KT}}}(\a)$ order estimator defined in \eqref{est_kt} with penalty function given by \eqref{eq:penalty} does not underestimate $k_0$, eventually almost surely when $n\to\infty$.
\end{proposition}
In order to prove this result we need Lemmas~\ref{lemma:ratio_underfitting_rho} and \ref{lemma:ratio_underfitting2}
below, which explore limiting properties of the under-fitted model; that is, we handle the problem of fitting a SBM of order $k_0$ in the parameter space $\Theta^{k_0-1}$.
An intuitive construction of a ($k-1$)-block model from a $k$-block model is obtained by merging two given blocks. This merging can be implemented in several ways, but here we consider the construction given in \cite{wang2017likelihood}, with the difference that instead of using the sample block proportions we use the limiting distribution $\pi$ of the original $k$-block model.
Given $(\pi,P)\in \Theta^{k}$
we define the merging operation $M_{a,b}(\pi,P) = (\pi^*,P^*)\in \Theta^{k-1}$ which combines blocks with labels $a$ and $b$. For ease of exposition we only show the explicit definition for the case $a=k-1$ and $b=k$.
In this case, the merged distribution $\pi^*$ is given by
\begin{align}\label{eq:merging1}
\pi^*_i &= \pi_i\, \hspace{2cm} \text{for } 1 \leq i \leq k-2\,,\\
\pi^*_{k-1} &= \pi_{k-1}+\pi_{k}\,.\notag
\end{align}
On the other hand, the merged matrix $P^*$
is obtained as
\begin{align}\label{eq:merging}
P^*_{l,r} &\;=\; P_{l,r} \hspace{4cm} \text{for } 1 \leq l,r \leq k-2\,,\notag\\[2mm]
P^*_{l,k-1} &\;=\; \frac{\pi_l\pi_{k-1}P_{l,k-1}+ \pi_l\pi_{k}P_{l,k}}{\pi_l\pi_{k-1}+ \pi_l\pi_{k}} \hspace{0.6cm} \text{for } 1 \leq l \leq k-2\,,\\[2mm]
P^*_{k-1,k-1} &\;=\; \frac{\pi_{k-1}\pi_{k-1}P_{k-1,k-1}+ 2\pi_{k-1}\pi_{k}P_{k-1,k} + \pi_{k}\pi_{k}P_{k,k}}{\pi_{k-1}\pi_{k-1}+ 2\pi_{k-1}\pi_{k}+ \pi_{k}\pi_{k}} \,.\notag
\end{align}
For arbitrary $a$ and $b$ the definition is obtained by permuting the labels.
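A direct implementation of this merging operation (our own sketch, with $0$-indexed labels and the merged blocks placed in the last two positions) could read:
\begin{verbatim}
import numpy as np

def merge_last_two(pi, P):
    # M_{k-1,k}: merge the last two blocks (0-indexed k-2 and k-1).
    k = len(pi)
    w = np.outer(pi, pi)                      # weights pi_a * pi_b
    pi_new = np.append(pi[:k-2], pi[k-2] + pi[k-1])
    P_new = P[:k-1, :k-1].copy()
    # Merged off-diagonal column: pi-weighted average of two columns.
    num = w[:k-2, k-2] * P[:k-2, k-2] + w[:k-2, k-1] * P[:k-2, k-1]
    P_new[:k-2, k-2] = num / (w[:k-2, k-2] + w[:k-2, k-1])
    P_new[k-2, :k-2] = P_new[:k-2, k-2]
    # Merged diagonal entry.
    num = (w[k-2, k-2] * P[k-2, k-2] + 2 * w[k-2, k-1] * P[k-2, k-1]
           + w[k-1, k-1] * P[k-1, k-1])
    den = w[k-2, k-2] + 2 * w[k-2, k-1] + w[k-1, k-1]
    P_new[k-2, k-2] = num / den
    return pi_new, P_new
\end{verbatim}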
Given $\a$ originated from the SBM of order $k_0$ and parameters $(\pi^0,P^0)$,
we define the profile likelihood estimator of the label assignment under the ($k_0-1$)-block model as
\begin{equation}\label{profileest}
\bold{z}_n^{\star} \; =\; \argmax \limits_{\bold{z}_n \in [k_0-1]^n}\;\sup\limits_{(\theta) \in \Theta^{k_0-1}} \P(\bold{z}_n, \a)\,.
\end{equation}
The next lemmas show that the logarithm of the ratio between the maximum likelihood under the true order $k_0$ and the maximum profile likelihood under the under-fitted model of order $k_0-1$ is bounded from below by a function growing faster than $n\log n$, eventually almost surely when $n\to\infty$. Each lemma considers one of the two possible regimes: $\rho_n=\rho>0$ (dense regime) or $\rho_n \rightarrow 0$ at a rate such that $n\rho_n \rightarrow \infty$ (sparse regime).
\begin{lemma}[dense regime]\label{lemma:ratio_underfitting_rho} Let $(\bold{z}_n,\a)$ be a sample of size $n$ from a SBM of order $k_0$ with parameters $(\pi^0,P^0)$, with $P^0$ not depending on $n$. Then there exist $r,s \in [k_0]$ such that for $(\pi^*,P^*) = M_{r,s}(\pi^0,P^0)$ we have that almost surely
\begin{align}\label{eq:lim_ratio_underfitting_rho}
\liminf\limits_{n\rightarrow \infty} \;\dfrac{1}{n^2}\log&\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)}{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)} \notag\\
&\qquad \geq\;\frac{1}{2} \Biggl[\;\sum_{a,b=1}^{k_0}\pi^0_a\pi^0_b\,\gamma(P^0_{ab}) - \sum_{a,b=1}^{k_0-1}\pi^*_a \pi^*_b\,\gamma(P^*_{a,b})\Biggr]\\
&\qquad >\;0\,,\notag
\end{align}
where $\gamma (x)=x\log x + (1-x)\log (1-x)$.
\end{lemma}
\begin{proof}
Given $k$ and ${\bf\bar z}_n \in [k]^n$ define the empirical probabilities
\begin{equation}\label{empprob}
\begin{split}
\hat{\pi}_a({\bf\bar z}_n) &= \dfrac{n_a({\bf\bar z}_n)}{n}\, , \hspace{2cm} 1\leq a \leq k\\
\hat{P}_{a,b}({\bf\bar z}_n,\a) &= \dfrac{O_{a,b}({\bf\bar z}_n,\a)}{n_{a,b}({\bf\bar z}_n)}\, , \hspace{0.6cm} 1\leq a,b \leq k\,.
\end{split}
\end{equation}
Then the maximum likelihood function is given by
\begin{equation*}
\begin{split}
\log\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)& \;=\; n \,\sum\limits_{a=1}^{k_0}{\hat{\pi}_a(\bold{z}_n)} \log \hat{\pi}_a(\bold{z}_n)\\
&+\dfrac{1}{2} \sum\limits_{a,b=1}^{k_0} n_{a,b}(\bold{z}_n)\gamma( \hat{P}_{a,b}(\bold{z}_n,\a))\,.
\end{split}
\end{equation*}
Using that $n_{a,b}=n_an_b$ for $a\neq b$ and $n_{a,a}=n_a(n_a-1)$ the last expression is equal to
\begin{equation}\label{eq:complete_z0}
\begin{split}
n\, \sum\limits_{a=1}^{k_0}{\hat{\pi}_a(\bold{z}_n)} \log &\,\hat{\pi}_a(\bold{z}_n) - \dfrac{n}{2} \sum\limits_{a=1}^{k_0}{\hat{\pi}_a(\bold{z}_n)} \gamma( \hat{P}_{a,a}(\bold{z}_n,\a) )\\
& + \dfrac{n^2}{2} \sum\limits_{a,b=1}^{k_0} \hat{\pi}_a(\bold{z}_n)\hat{\pi}_b(\bold{z}_n)\gamma( \hat{P}_{a,b}(\bold{z}_n,\a) )\, .
\end{split}
\end{equation}
The first two terms in \eqref{eq:complete_z0} are of smaller order compared to $n^2$, so by the Strong Law of Large Numbers we have that almost surely
\begin{equation}\label{eq:lim_zero}
\begin{split}
\lim\limits_{n \rightarrow \infty}\;\dfrac{1}{n^2}\log\sup_{(\pi,P) \in \Theta^{k_0}}\P(\bold{z}_n,\a) &\;=\;\dfrac{1}{2}\sum\limits_{a,b=1}^{k_0}\pi^0_a\pi^0_b\,\gamma(P^0_{a,b})\, .
\end{split}
\end{equation}
Similarly for $k_0-1$ and $\bold{z}_n^{\star} \in [k_0-1]^n$ we have that almost surely
\begin{equation}\label{eq:lim_tilde}
\limsup\limits_{n \rightarrow \infty}\;\dfrac{1}{n^2}\,\log \sup\limits_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star}, \a) = \dfrac{1}{2} \sum\limits_{a,b=1}^{ k_0-1}\tilde\pi_a\tilde\pi_b\,\gamma(\tilde P_{a,b})\,,
\end{equation}
for some $(\tilde\pi,\tilde P) \in \Theta^{k_0-1}$.
Combining \eqref{eq:lim_zero} and \eqref{eq:lim_tilde} we have that almost surely
\begin{equation}\label{eq:liminf_eq}
\begin{split}
\liminf\limits_{n\rightarrow \infty}\;\dfrac{1}{n^2}&\log\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)}{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)}\\[2mm]
&=\; \dfrac{1}{2}\sum\limits_{a,b=1}^{k_0} \pi^0_a\pi^0_b\,\gamma(P^0_{a,b}) - \dfrac{1}{2} \sum\limits_{a,b=1}^{k_0-1}\tilde\pi_a \tilde\pi_b\, \gamma( \tilde P_{a,b})\,.
\end{split}
\end{equation}
To obtain a lower bound for \eqref{eq:liminf_eq} we need to compute the $(\tilde\pi, \tilde P)$ that minimizes the right-hand side.
This is equivalent to obtaining the $(\tilde\pi, \tilde P) \in \Theta^{k_0-1}$ that maximizes the second term
\begin{equation}\label{eq:function_tilde}
\sum\limits_{a,b=1}^{k_0-1}\tilde\pi_a \tilde\pi_b\, \gamma( \tilde P_{a,b})\,.
\end{equation}
Denote by $(\bold{\widetilde{Z}}_n,\bold{\widetilde{X}}_{n\times n})$ a $(k_0-1)$-order SBM with distribution $(\tilde\pi,\tilde P)$.
By definition
\[
\tilde P_{\tilde a,\tilde b} = \frac{P(\tilde X_{i,j}=1,\tilde Z_i=\tilde a,\tilde Z_j=\tilde b)}{P(\tilde Z_i=\tilde a,\tilde Z_j=\tilde b)}\,.
\]
Observe that when $\bold{\widetilde{X}}_{n\times n}=\bold{X}_{n\times n}$, the numerator equals
\begin{align*}
\sum_{a,b=1}^{k_0}&P(X_{i,j}=1|Z_i=a,Z_j= b)P(Z_i=a,Z_j= b,\tilde Z_i=\tilde a,\tilde Z_j= \tilde b)\\
& =\sum_{a,b=1}^{k_0} P(Z_i=a,\tilde Z_i=\tilde a)\,P^0_{a,b}\, P(Z_j= b,\tilde Z_j= \tilde b) \\[2mm]
&= (QP^0Q^T)_{\tilde{a},\tilde{b}}\,,
\end{align*}
where $Q(a,\tilde a)$ denotes a joint distribution on $[k_0]\times [k_0-1]$ (a coupling) with marginals $\pi^0$ and $\tilde\pi$, respectively.
Similarly, the denominator can be written as
\begin{align*}
\sum_{a,b=1}^{k_0} P(Z_i=a,\tilde Z_i=\tilde a)P(Z_j= b,\tilde Z_j= \tilde b)=(Q(\bold{1}\bold{1}^T)Q^T)_{\tilde{a},\tilde{b}}\,,
\end{align*}
where $\bold{1}$ denotes the vector of ones of dimension $k_0$, so that $\bold{1}\bold{1}^T$ is the $k_0\times k_0$ all-ones matrix. Then we can rewrite \eqref{eq:function_tilde} as
\begin{equation}\label{eq:function_tilde_matrix}
\sum\limits_{a,b=1}^{k_0-1} (Q(\bold{1}\bold{1}^T)Q^T)_{a, b}\,\gamma
\left[ \dfrac{(QP^0Q^T)_{a,b}}{ (Q(\bold{1}\bold{1}^T)Q^T)_{a,b}} \right]\,.
\end{equation}
Therefore, finding a pair $(\tilde \pi, \tilde P)$ maximizing \eqref{eq:function_tilde} is equivalent to finding an optimal coupling $Q$ maximizing \eqref{eq:function_tilde_matrix}. \cite{wang2017likelihood} proved that there exist
$r,s \in [k_0]$ such that \eqref{eq:function_tilde_matrix} achieves its maximum
at $(\pi^*,P^*) = M_{r,s}(\pi^0,P^0)$, see Lemma~A.2 there. This concludes the proof of the first inequality in \eqref{eq:lim_ratio_underfitting_rho}. In order to prove the second, strict inequality in \eqref{eq:lim_ratio_underfitting_rho}, we consider, for convenience and without loss of generality, $r=k_0-1$ and $s=k_0$ (the other cases can be handled by a permutation of the labels).
Notice that in the right-hand side of \eqref{eq:liminf_eq}, with $(\tilde\pi,\tilde P)$
substituted by the optimal value $M_{k_0-1,k_0}(\pi^0,P^0)$ defined by
\eqref{eq:merging1} and \eqref{eq:merging}, all the terms with $1\leq a,b\leq k_0-2$
cancel. Moreover, as $\gamma$ is a convex function, Jensen's inequality implies
that
\begin{equation}\label{jensen1}
\pi^*_a\pi^*_{k_0-1}\gamma(P^*_{a,k_0-1}) \;\leq\; \pi^0_a\pi^0_{k_0-1}\gamma(P^0_{a,k_0-1})+ \pi^0_a\pi^0_{k_0}\gamma(P^0_{a,k_0})
\end{equation}
for all $a=1,\dotsc, k_0-2$ and similarly
\begin{equation}\label{jensen2}
(\pi^*_{k_0-1})^2\gamma(P^*_{k_0-1,k_0-1}) \;\leq\; \sum_{a,b=k_0-1} ^{k_0} \pi^0_a \pi^0_b\gamma(P^0_{a,b})\,.
\end{equation}
The equality holds for all $a$ in \eqref{jensen1} and in \eqref{jensen2} simultaneously if and only if
\[
P^0_{a,k_0} \;=\; P^0_{a,k_0-1} \qquad\text{ for all }a=1,\dotsc, k_0\,,
\]
in which case the matrix $P^0$ would have two identical columns, contradicting the fact that the sample $(\bold{z}_n,\a)$ originated from a SBM with order $k_0$. Therefore the strict inequality must hold in \eqref{jensen1} for at least one $a$ or in \eqref{jensen2}, showing that the second inequality in \eqref{eq:lim_ratio_underfitting_rho} holds.
\end{proof}
\begin{lemma}[sparse regime]\label{lemma:ratio_underfitting2}
Let $(\bold{z}_n,\a)$ be a sample of size $n$ from a SBM of order $k_0$ with parameters
$(\pi^0,\rho_n S^0)$, where $\rho_n \rightarrow 0$ at a rate $n\rho_n \rightarrow \infty $. Then there exist $r,s \in [k_0]$ such that for $(\pi^*,P^*) = M_{r,s}(\pi^0,S^0)$ we have that almost surely
\begin{align}\label{eq:lim_ratio_underfitting2}
\liminf\limits_{n\rightarrow \infty} \;\dfrac{1}{\rho_nn^2}\log&\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)}{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)} \notag\\
&\qquad \geq\;\frac{1}{2} \Biggl[\;\sum_{a,b=1}^{k_0}\pi^0_a\pi^0_b\,\tau(S^0_{a,b}) - \sum_{a,b=1}^{k_0-1}\pi^*_a \pi^*_b\,\tau(P^*_{a,b})\Biggr]\\
&\qquad >\;0\,,\notag
\end{align}
where $\tau(x)=x\log x - x$.
\end{lemma}
\begin{proof}
This proof follows the same arguments used in the proof of Lemma~\ref{lemma:ratio_underfitting_rho}, but since in this case the edge probabilities $\rho_nS^0_{a,b}$ decrease to 0, some limits must be handled differently. As shown in \eqref{eq:complete_z0} we have that
\begin{align}\label{baseq1}
\log\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a) \;= \;&
\,n\, \sum\limits_{a=1}^{k_0}{\hat{\pi}_a(\bold{z}_n)} \log \,\hat{\pi}_a(\bold{z}_n) \notag\\
&- \dfrac{n}{2} \sum\limits_{a=1}^{k_0}{\hat{\pi}_a(\bold{z}_n)} \gamma( \hat{P}_{a,a}(\bold{z}_n,\a) )
\\
&+ \dfrac{n^2}{2}\!\sum\limits_{a,b=1}^{k_0} \hat{\pi}_a(\bold{z}_n)\hat{\pi}_b(\bold{z}_n)\gamma( \hat{P}_{a,b}(\bold{z}_n,\a) )\,.\notag
\end{align}
For $\rho_n \rightarrow 0$, \cite{bickel2009nonparametric} proved that
\begin{align}\label{eq:bickel_2009}
\sum\limits_{a,b=1}^{k_0}& \hat{\pi}_a(\bold{z}_n)\hat{\pi}_b(\bold{z}_n)\gamma( \hat{P}_{a,b}(\bold{z}_n,\a) ) \notag \\
&= \rho_n \sum\limits_{a,b=1}^{k_0} \hat{\pi}_a(\bold{z}_n)\hat{\pi}_b(\bold{z}_n)\,\tau\Bigl( \frac{\hat{P}_{a,b}(\bold{z}_n,\a)}{\rho_n} \Bigr) + \dfrac{E_n}{n^2}\log \rho_n + O(\rho_n^2)\,,
\end{align}
where $E_n=\sum\limits_{a,b=1}^{k_0} O_{ab}(\bold{z}_n,\a) $ (twice the total number of edges in the graph)
and $\tau(x)=x\log x - x$. Thus, since the first two terms in \eqref{baseq1} are of order at most $n\log n$, they become negligible after dividing by $\rho_n n^2$ when $\rho_n n\to\infty$, and we have that
\begin{align}\label{eq:complete_z0_rho2}
\dfrac{1}{\rho_n n^2}\;\log\!\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)\;=\;&
\dfrac{1}{2}\sum\limits_{a,b=1}^{k_0}\hat{\pi}_a(\bold{z}_n)\hat{\pi}_b(\bold{z}_n)\tau\Bigl( \frac{\hat{P}_{a,b}(\bold{z}_n,\a)}{\rho_n} \Bigr)\notag\\
&+\dfrac{ E_n\log \rho_n }{2\rho_n n^2}+ O(\rho_n)
\end{align}
and
\begin{align}\label{eq:complete_z0_rho3}
\dfrac{1}{\rho_n n^2}\;\log\!\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)\;=\;&
\dfrac{1}{2}\sum\limits_{a,b=1}^{k_0-1}\hat{\pi}_a(\bold{z}_n^{\star})\hat{\pi}_b(\bold{z}_n^{\star})\tau\Bigl( \frac{\hat{P}_{a,b}(\bold{z}_n^{\star},\a)}{\rho_n} \Bigr)\notag\\
&+\dfrac{ E_n\log \rho_n }{2\rho_n n^2}+ O(\rho_n)\,.
\end{align}
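As a quick numerical sanity check of the expansion \eqref{eq:bickel_2009} at the level of a single entry, assume $\gamma(x)=x\log x+(1-x)\log(1-x)$ (the form whose leading behavior produces $\tau(x)=x\log x - x$; this explicit choice is our assumption here). Then $\gamma(\rho x)-\rho x\log\rho-\rho\,\tau(x)=O(\rho^2)$, which the following sketch verifies:
\begin{verbatim}
import numpy as np

gamma = lambda p: p*np.log(p) + (1-p)*np.log(1-p)  # assumed form of gamma
tau   = lambda x: x*np.log(x) - x

x = 0.7
for rho in [1e-1, 1e-2, 1e-3, 1e-4]:
    rem = gamma(rho*x) - rho*x*np.log(rho) - rho*tau(x)
    print(f"rho={rho:.0e}  remainder={rem:.3e}  ratio={rem/rho**2:.3f}")
    # the ratio settles near x**2/2, confirming an O(rho^2) remainder
\end{verbatim}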
Now, as in the proof of Lemma \ref{lemma:ratio_underfitting_rho} there must exist some $(\tilde \pi, \tilde S)\in \Theta^{k_0-1}$ such that almost surely
\begin{align}\label{eq:liminf_eq2}
\liminf\limits_{n\rightarrow \infty}\dfrac{1}{\rho_nn^2}\log&\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)}{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)}\notag\\
& =\; \dfrac{1}{2}\sum\limits_{a,b=1}^{k_0} \pi^0_a\pi^0_b\,\tau(S^0_{a,b}) -
\dfrac{1}{2} \sum\limits_{a,b=1}^{k_0-1}\tilde{\pi}_a\tilde{\pi}_b\, \tau( \tilde{S}_{a,b})\,.
\end{align}
As before, we want to obtain $(\tilde{\pi}, \tilde{S})\in \Theta^{k_0-1}$ that maximizes the second term in the right-hand side of the equality above.
The rest of the proof is analogous to that of Lemma~\ref{lemma:ratio_underfitting_rho}: observing that $\tau$ is also a convex function, the same Jensen argument shows that the right-hand side of \eqref{eq:liminf_eq2} is maximized by a merging of the form $M_{r,s}(\pi^0,S^0)$, and that the strict lower bound by $0$ in \eqref{eq:lim_ratio_underfitting2} also holds.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:no_underestimation}]
To prove that $\hat{k}_{\mbox{\tiny{KT}}}(\a)$ does not underestimate $k_0$, it is enough to show that for all $k < k_0$
\[
\log \KT{k_0}{\a} - \pena{k_0}{n} \; >\; \log \KT{k}{\a} - \pena{k}{n}
\]
eventually almost surely when $n\to\infty$.
As
\[
\lim_{n \rightarrow \infty}\;\dfrac{1}{\rho_n n^2} \Bigl[\pena{k_0}{n} - \pena{k}{n}
\Bigr] \;=\; 0
\]
this is equivalent to showing that
\begin{equation*}\label{eq:eq_lim_kt}
\liminf_{n \rightarrow \infty}\;\dfrac{1}{\rho_n n^2}\, \log \dfrac{\KT{k_0}{\a}}{\KT{k}{\a} } \;>\; 0\,.
\end{equation*}
First note that the logarithm above can be written as
\begin{align*}
\log \,\dfrac{\KT{k_0}{\a}}{\KT{k}{\a} }\;= \;&
\log \,\dfrac{\KT{k_0}{\a}}{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)} \\[2mm]
&+ \log \,\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{\KT{k}{\a}} \,.
\end{align*}
Using Proposition~\ref{prop:razao_L_KT} we have that the first term on the right-hand side can be bounded from below as
\begin{equation}\label{proof:overestimation_ineq1}
\log \,\dfrac{\KT{k_0}{\a}}{{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}} \;\geq\; -\left( \textstyle\frac{k_0(k_0+2)}{2} -\frac{1}{2}\right)\log n - c_{k_0,n}\,.
\end{equation}
On the other hand, the second term can be bounded from below by means of
\begin{align}\label{proof:overestimation_ineq2}
\log \,&\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{\KT{k}{\a}}\notag\\
&\qquad = \;\log \dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}} \;+\; \log \dfrac{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}}{\KT{k}{\a}}\\
&\qquad \geq\; \log\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}} \,.\notag
\end{align}
By combining \eqref{proof:overestimation_ineq1} and \eqref{proof:overestimation_ineq2} we obtain
\begin{align*}
\dfrac{1}{\rho_n n^2}\,\log \dfrac{\KT{k_0}{\a}}{\KT{k}{\a} } &\;\geq\; -\left( \textstyle\frac{k_0(k_0+2)}{2} \,-\,\frac{1}{2}\right)\dfrac{\log n}{\rho_n n^2} - \dfrac{c_{k_0,n}}{\rho_n n^2}\\
&\qquad +\dfrac{1}{\rho_n n^2}\,\log\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}} \,.
\end{align*}
Now, as $n \rho_n\rightarrow \infty$ it suffices to show that for $k < k_0$, almost surely we have
\begin{equation}\label{eq:lim_razao_proban}
\liminf\limits_{n \rightarrow \infty}\,\dfrac{1}{\rho_n n^2}\log\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}} \; >\; 0\,.
\end{equation}
We start with $k=k_0-1$. Using $\bold{z}_n^{\star}$ defined by \eqref{profileest}
we have that
\begin{equation*}
\begin{split}
\sup_{(\theta) \in \Theta^{k_0-1}}\P(\a) &\; \leq\; \sum\limits_{\bold{z}_n \in [k_0-1]^n}\sup_{(\theta) \in \Theta^{k_0-1}}\P(\a, \bold{z}_n)\\
& \leq \;(k_0-1)^n\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)
\end{split}
\end{equation*}
and on the other hand
\begin{equation*}
\begin{split}
\sup_{(\theta) \in \Theta^{k_0}}\P(\a) &\;=\; \sup_{(\theta) \in \Theta^{k_0}}\;\sum\limits_{{\bf\bar z}_n \in [k_0]^n}\P({\bf\bar z}_n,\a)\\
& \geq \; \sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)\,.
\end{split}
\end{equation*}
Therefore
\begin{align}\label{lim1}
\log\;\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\a)}}
\;\geq\; &\log\; \dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\bold{z}_n,\a)}{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\bold{z}_n^{\star},\a)} \notag\\
&- n\log\,(k_0-1)\,.
\end{align}
Using that $n \rho_n \rightarrow \infty$ in both regimes $\rho_n=\rho>0$ (dense regime) and $\rho_n \rightarrow 0$ (sparse regime), we obtain, by
Lemmas~\ref{lemma:ratio_underfitting_rho} and \ref{lemma:ratio_underfitting2},
that almost surely \eqref{eq:lim_razao_proban} holds for $k=k_0-1$.
To complete the proof, let $k < k_0-1$. In this case we can write
\begin{align*}
\log\,\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}} \; =\; &\log\,\dfrac{\sup_{(\theta) \in \Theta^{k_0}}\P(\a)}{{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\a)}} \\
&+ \log\,\dfrac{{\sup_{(\theta) \in \Theta^{k_0-1}}\P(\a)}}{{\sup_{(\theta) \in \Theta^{k}}\P(\a)}}\,.
\end{align*}
The first term on the right-hand side can be handled in the same way as in \eqref{lim1}.
On the other hand, the second term is non-negative because the maximum likelihood is a non-decreasing function of the dimension of the model and $k<k_0-1$. This finishes the proof of Proposition~\ref{prop:no_underestimation}.
\end{proof}
\section{Discussion}
In this paper we introduced a model selection procedure based on the Krichevsky-Trofimov mixture distribution for the number of communities in the Stochastic Block Model. We proved the almost sure convergence (strong consistency) of the penalized estimator \eqref{est_kt} to the underlying number of communities, without assuming a known upper bound on that quantity. To our knowledge this is the first strong consistency result for an estimator of the number of communities, even in the bounded case.
The family of penalty functions of the form \eqref{eq:penalty} is of smaller order than the one
used in \cite{wang2017likelihood}; therefore our results also apply to their family of penalty functions.
Moreover, we consider a wider family of sparse models, with edge probabilities of order $\rho_n$, where $\rho_n$ can decrease to 0 at any rate satisfying $n\rho_n \rightarrow \infty$.
It remains open whether consistency can be achieved in the sparse regime with $\rho_n=1/n$, and which are the smallest penalty functions yielding a consistent estimator of the number of communities.
\newpage
\section*{Abstract}
{\bf
We present recent phenomenological studies, tailored to kinematic configurations typical of current and forthcoming analyses at the LHC, for two novel probe channels of the BFKL resummation of energy logarithms. Particular attention is drawn to the behavior of distributions differential in azimuthal angle and rapidity, where significant high-energy effects are expected.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
With more and more data to be collected at the Large Hadron Collider (LHC), the study of semi-hard processes~\cite{Gribov:1983ivg} in the large center-of-mass energy limit gives us an opportunity to further test perturbative QCD (pQCD) in unexplored kinematical configurations, thus contributing to a better understanding of the dynamics of strong interactions. Within pQCD computations, reducing the theoretical uncertainties coming from higher-order corrections is required to have a reliable estimate of the production rate. At high energies, the validity of the perturbative expansion, truncated at a certain order in the strong coupling $\alpha_s$, is spoiled. This is due to the appearance of large logarithms of the center-of-mass energy squared, $s$, in the perturbative calculations, which need to be resummed to all orders in $\alpha_{s}$. The most powerful framework to perform this resummation is the Balitsky--Fadin--Kuraev--Lipatov (BFKL)~\cite{Fadin:1975cb,kuraev1976multi,Kuraev:1977fs,Balitsky:1978ic} approach, initially developed in the so-called leading logarithmic approximation (LLA), which prescribes how to resum all terms proportional to $(\alpha_s \ln s )^n$. To improve on the LLA results, the so-called next-to-leading logarithmic approximation (NLA) was considered~\cite{Fadin:1998py,Ciafaloni:1998gs}, where also all terms proportional to $\alpha_s(\alpha_s \ln s)^n$ are resummed. Clearly, a significant question for collider phenomenology is to identify at which energies the BFKL dynamics becomes significant and cannot be overlooked. Typical BFKL observables that can be studied at the LHC are the azimuthal coefficients of the Fourier expansion, in the relative azimuthal angle, of the cross section differential in the variables of the tagged objects. They take a factorized form, given as the convolution of a universal BFKL Green's function with process-dependent impact factors, the latter describing the transition from each colliding proton to the respective final-state identified object. The BFKL Green's function obeys an integral equation, whose kernel is known at the next-to-leading order (NLO)~\cite{Fadin:1998py,Ciafaloni:1998gs,Fadin:1998jv,Fadin:2004zq,Fadin:2005zj}.
Over the last years, pursuing the goal of identifying observables that fit the data in kinematic regimes where the BFKL approach is needed, a number of reactions have been proposed for different collider environments: the exclusive diffractive leptoproduction of two light vector mesons~\cite{Pire:2005ic,Segond:2007fj,Enberg:2005eq,Ivanov:2005gn,Ivanov:2006gt}, the inclusive hadroproduction of two jets featuring large transverse momenta and well separated in rapidity, the so-called Mueller--Navelet jets~\cite{Mueller:1986ey}, for which several phenomenological studies have appeared during the last years~\cite{Colferai:2010wu,Caporale:2012ih,Ducloue:2013hia,Ducloue:2013bva,Caporale:2013uva,Caporale:2014gpa,Colferai:2015zfa,Caporale:2015uva,Ducloue:2015jba,Celiberto:2015yba,Celiberto:2015mpa,Celiberto:2016ygs,Celiberto:2016vva,Caporale:2018qnm}, the inclusive detection of two light-charged rapidity-separated hadrons~\cite{Celiberto:2016hae,Celiberto:2016zgb,Celiberto:2017ptm}, three- and four-jet hadroproduction~\cite{Caporale:2016zkc,Caporale:2016xku}, $J/\Psi$-plus-jet~\cite{Boussarie:2017oae}, hadron-plus-jet~\cite{Bolognino:2019cac}, heavy-flavor~\cite{Bolognino:2021mrc,Bolognino:2021hxx,Celiberto:2021dzy,Celiberto:2021fdp} and forward Drell--Yan dilepton production with a possible backward-jet tag~\cite{Golec-Biernat:2018kem}. The second class of probes for BFKL is given by single forward emissions in lepton-proton or proton-proton scatterings, which give us the possibility to probe the unintegrated gluon distribution in the proton (UGD), linked to BFKL via the convolution between the BFKL gluon Green's function and the proton impact factor. Proposed channels to study the UGD are the exclusive light vector-meson electroproduction~\cite{Bolognino:2018rhb,Bolognino:2018mlw,Bolognino:2019bko,Bolognino:2019pba,Celiberto:2019slj,Bolognino:2021niq,Bolognino:2021gjm}, the exclusive quarkonium photoproduction~\cite{Bautista:2016xnp,Garcia:2019tne,Hentschinski:2020yfm}, and the inclusive tag of Drell--Yan pairs in forward directions~\cite{Motyka:2014lya,Brzeminski:2016lwh,Motyka:2016lta,Celiberto:2018muu}.
In this work we concentrate on the BFKL $\phi$-summed cross sections of two proposed reactions, the inclusive production of Higgs-plus-jet and of $\Lambda$-plus-jet\footnote{The diffractive production of $\Lambda$-plus-jet was studied by three of us~\cite{Celiberto:2020rxb}, with some predictions tailored to the CMS and CASTOR
typical kinematic ranges.} at the LHC, where the final tagged particles are well separated in rapidity.
\section{Theoretical framework}
\label{sec:theory}
We present a general expression for the inclusive hadroproduction processes under consideration~(depicted in Fig.~\ref{fig:semi-hard_processes}):
\begin{equation}\label{eq:semi_hard_process}
{\rm proton}(p_1) \ + \ {\rm proton}(p_2) \ \to \ {\rm O_i}(\vec k_i, y_i) \ + \ {\rm X} \ + \ {\rm jet}(\vec k_J, y_J),
\end{equation}
where a jet is always detected in association with the Higgs boson or the $\Lambda$-hyperon (${\rm O_i}(\vec k_i, y_i)$, $i \equiv \{{\rm H},\Lambda\}$), both emitted with high transverse momenta $|\vec k_{i,J}|\equiv \kappa_{i,J}\gg \Lambda_{QCD}$ and with a large rapidity separation $\Delta Y = |y_i-y_J|$. The symbol ${\rm X}$ stands for an undetected system of hadrons.
In the BFKL approach the cross section of the hard subprocess can be presented as a Fourier sum over the azimuthal coefficients ${\cal C}_n$:
\begin{equation}\label{eq:BFKL_crssec}
\frac{d\sigma}
{dy_idy_J\, d\kappa_i \, d\kappa_Jd\phi_i d\phi_J}
=\frac{1}{(2\pi)^2}\left[{\cal C}_0+\sum_{n=1}^\infty 2\cos (n\phi )\,
{\cal C}_n\right]\, ,
\end{equation}
where $\phi=\phi_i-\phi_J-\pi$, with $\phi_{i,J}$ representing the Higgs/$\Lambda$ and jet azimuthal angles, while $y_{i,J}$ and $\kappa_{i,J}$ are their
rapidities and transverse momenta, respectively. The ${\cal C}_{0}$ coefficient gives us the $\phi$-summed cross section, while the ${\cal C}_{n\ne 0}$ ones are connected to the so-called azimuthal-correlation coefficients.
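Given the first few coefficients, the azimuthal structure in \eqref{eq:BFKL_crssec} can be reconstructed by a plain truncated Fourier sum. A minimal numpy sketch (the coefficient values below are made up, purely for illustration):
\begin{verbatim}
import numpy as np

def dsigma_dphi(phi, C, n_max):
    # Truncated Fourier sum of eq. (BFKL_crssec):
    # [C_0 + 2 sum_n C_n cos(n phi)] / (2 pi)^2
    series = C[0] + 2*sum(C[n]*np.cos(n*phi) for n in range(1, n_max + 1))
    return series / (2*np.pi)**2

C = {0: 1.0, 1: 0.35, 2: 0.12, 3: 0.05}   # hypothetical values of C_n
phi = np.linspace(-np.pi, np.pi, 201)
print(dsigma_dphi(phi, C, n_max=3)[:3])
\end{verbatim}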
\section{Numerical analysis and discussion}
\label{sec:Numerical}
In order to match the realistic kinematic cuts adopted by current experimental analyses
at the LHC, we integrate the coefficient ${\cal C}_0$ over the phase space of the two
emitted objects, while their rapidity distance $\Delta Y$ is kept fixed:
\begin{equation}\label{Integrated_coefficients}
C_0(\Delta Y,s) =
\int_{\kappa^{\rm min}_{i=H,\Lambda}}^{{\kappa^{\rm max}_{i=H,\Lambda}}}d|\vec \kappa_{i}|
\int_{\kappa^{\rm min}_J}^{{\kappa^{\rm max}_J}}d|\vec \kappa_J|
\int_{y^{\rm min}_{i=H,\Lambda}}^{y^{\rm max}_{i=H,\Lambda}}dy_{i}
\int_{y^{\rm min}_J}^{y^{\rm max}_J}dy_J
\delta \left( y_i - y_J - \Delta Y \right){\cal C}_0.
\end{equation}
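Numerically, the $\delta$-function in \eqref{Integrated_coefficients} simply removes one rapidity integration, setting $y_i = y_J + \Delta Y$ and clipping the $y_J$ range accordingly. A schematic Python sketch of such a fixed-$\Delta Y$ integration is given below; the kernel is a dummy placeholder (the actual ${\cal C}_0$ is what {\tt JETHAD} computes), so only the phase-space logic is illustrated:
\begin{verbatim}
import numpy as np
from scipy import integrate

def C0_kernel(kH, kJ, yH, yJ):
    # Placeholder for the differential C_0; NOT the BFKL result.
    return np.exp(-(kH + kJ) / 50.0)

def C0_at_DeltaY(DY, kH_rng, kJ_rng, yH_rng, yJ_rng):
    # the delta sets y_i = y_J + DY; clip y_J so y_i stays in range
    yJ_lo = max(yJ_rng[0], yH_rng[0] - DY)
    yJ_hi = min(yJ_rng[1], yH_rng[1] - DY)
    if yJ_lo >= yJ_hi:
        return 0.0
    val, _ = integrate.tplquad(
        lambda yJ, kJ, kH: C0_kernel(kH, kJ, yJ + DY, yJ),
        kH_rng[0], kH_rng[1],
        lambda kH: kJ_rng[0], lambda kH: kJ_rng[1],
        lambda kH, kJ: yJ_lo, lambda kH, kJ: yJ_hi)
    return val

print(C0_at_DeltaY(4.0, kH_rng=(10.0, 2*173.0), kJ_rng=(20.0, 60.0),
                   yH_rng=(-2.5, 2.5), yJ_rng=(-4.7, 4.7)))
\end{verbatim}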
We allow for a larger rapidity range of the jet in both considered processes, $| y_J |< 4.7$, and we impose asymmetric kinematic cuts on the final-state transverse momenta: for Higgs-plus-jet, 10 GeV $< \kappa_{H} < 2m_{top}$ and 20 GeV $< \kappa_{J} < $ 60 GeV, with $|y_H | < 2.5$ inside the CMS rapidity acceptance; for the $\Lambda$-plus-jet case, the $\Lambda$-particle is detected in the symmetric rapidity range from $-2.0$ to $2.0$, with transverse momenta (typical of CMS measurements) in the ranges 10 GeV $<\kappa_\Lambda<$ 21.5 GeV and 35 GeV $< \kappa_J <$ 60 GeV. All numerical simulations were done using the hybrid Fortran2008/Python3 modular package {\tt JETHAD}~\cite{Celiberto:2020wpk}. The MMHT~2014 PDF set was used via the Les Houches Accord PDF Interface (LHAPDF) 6.2.1~\cite{Buckley:2014ana}, together with the AKK~2008~\cite{Albino:2008fy} FF sets, which describe the hadronization of the outgoing partons into the detected final-state $\Lambda$ hyperon.
In Fig.~\ref{fig:C0_0J} we present results for the $\Delta Y$-dependence of the $\phi$-summed cross section, in the asymmetric kinematic configurations, for both the Higgs-plus-jet (left plot) and the $\Lambda$-plus-jet (right plot) reactions. Here the usual pattern of BFKL effects comes into play: the growth with energy of the purely hard cross section, an effect of the resummation of high-energy logarithms, is tamed by the convolution with PDFs and FFs. For the Higgs-plus-jet process, we remarkably notice that NLA predictions (red) are almost entirely nested inside the LLA uncertainty bands (blue), since the large energy scales provided by the emission of a Higgs boson suppress the higher-order corrections~\cite{Celiberto:2020tmb,Celiberto:2021tky}.
Cross sections in the $\Lambda$-plus-jet channel are steadily lower when compared to
the previously studied di-hadron~\cite{Celiberto:2016hae,Celiberto:2016zgb,Celiberto:2017ptm} and hadron-jet~\cite{Bolognino:2019cac} reactions. This, together with the fact that the lower transverse-momentum cutoff used to identify the
$\Lambda$ hyperon (10 GeV) is larger than the corresponding one for any light-hadron tagging, gives us the opportunity to quench the experimental minimum-bias effects.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{./figs/semi-hard-process_color.pdf}
\caption{Schematic representation of our considered semi-hard processes.}
\label{fig:semi-hard_processes}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{./figs/C0_0J_sc_2019_kt-a_Tnvv_MSB_CMS14.pdf}
\includegraphics[width=0.45\textwidth]{./figs/c0_h-j.png}
\caption{$\Delta Y$-dependence of $C_0$ in the asymmetric $p_T$-ranges, for the inclusive Higgs-plus-jet (left plot) and $\Lambda$-plus-jet (right plot) processes, with $\kappa^{max}_{\Lambda,CMS}=$ 21.5 GeV and $\kappa^{max}_{J,CMS} =$ 60 GeV. }
\label{fig:C0_0J}
\end{figure}
\section{Conclusion}
We have proposed two inclusive hadroproduction reactions, a Higgs boson plus a jet and a $\Lambda$-particle in association with a jet, as novel
semi-hard channels to test the BFKL resummation. In both cases the final detected particles feature high transverse momenta and are separated by a large rapidity distance. At variance with previously studied reactions, the Higgs-plus-jet channel exhibits quite a fair stability under higher-order corrections, while $\Lambda$-particle emissions in the final state dampen the experimental minimum-bias contamination, thus easing the comparison with forthcoming LHC data.
The next step in our program of investigating semi-hard phenomenology consists in performing the full NLA BFKL analysis of the Higgs-plus-jet channel, via the inclusion of the full NLO jet and Higgs impact factors, and in extending our study to the kinematic configurations of new-generation colliders, such as the HL-LHC~\cite{Chapon:2020heu}, the EIC~\cite{AbdulKhalek:2021gbh}, NICA~\cite{Arbuzov:2020cqg}, and the FPF~\cite{Anchordoqui:2021ghd}.
\section*{Abstract}
Quantum walks are roughly analogous to classical random walks, and like classical walks they have been used to find new (quantum) algorithms. When studying the behavior of large graphs or combinations of graphs it is useful to find the response of a subgraph to signals of different frequencies. In so doing we can replace an entire subgraph by a single vertex with frequency-dependent scattering coefficients.
In this paper a simple technique for quickly finding the scattering coefficients of any quantum graph will be presented. These scattering coefficients can be expressed entirely in terms of the characteristic polynomial of the graph's time step operator. Moreover, with these in hand we can easily derive the ``impulse response", which is the key to predicting the response of a graph to any signal. This gives us a powerful set of tools for rapidly understanding the behavior of graphs, or for reducing a large graph into its constituent subgraphs regardless of how they are connected.
\pagebreak
\tableofcontents
\pagebreak
\section{Introduction}
In classical computer science random walks have proven to be a useful tool for understanding and developing new algorithms and techniques. The same has been true for quantum walks \cite{reitzner} and quantum computers. A number of these quantum walks have already been experimentally implemented, some using trapped ions \cite{ScMaScGlEnHuSc09,KaFoChStAlMeWi09} and others using photons in optical networks \cite{PeLaPoSoMoSi08}-\cite{schreiber}. The goal of this paper is to provide a set of powerful and computationally cheap tools for rapidly understanding the behavior of graphs in terms of scattering signals.
A graph is a set of vertices with edges connecting those vertices. In a classical random walk we imagine a particle that inhabits a vertex and has some probability of moving to connected vertices. In a discrete random walk, time is divided into integer steps and the probability of the particle jumping to an adjacent vertex in a given step is described by a stochastic matrix. As with classical walks, time can either be continuous \cite{farhi} or advance in discrete steps \cite{davidovich,vazirani}. But unlike classical walks, with quantum walks the time step operator is unitary rather than stochastic.
In a classical walk the Hilbert space is composed entirely of the set of vertices, but in a quantum walk that isn't sufficient. In a nutshell, there isn't enough information in a vertex state alone for time-reversibility (an essential characteristic of unitary processes) because at the very least the particle needs to ``know" where it was in the previous time step. In this paper we'll use ``edge states" to more elegantly encode this information \cite{hillery0}. For example, the edge state $|A,B\rangle$ is the state on the edge between vertices $A$ and $B$ that points from $A$ to $B$. This is exactly equivalent to a particle on vertex $B$ that was previously on vertex $A$. In this edge state formalism each vertex hosts a unitary operator that takes all of the incoming states and maps them to outgoing states.
Those already familiar with quantum walks will probably be more familiar with the ``coin space" \cite{reitzner} formalism, which is more directly analogous to classical random walks; the Hilbert space of discrete time coined quantum walks is the tensor product of the position space and the coin space \cite{Kollar}. That is, each vertex is amended with an ancillary ``coin space". The coin keeps track of where the particle previously was, and for this reason the dimension of the coin space associated with a vertex is always greater than or equal to the degree of that vertex.
Finally and most importantly, in a quantum walk the particle is in a superposition of states described by probability amplitudes (as opposed to probabilities). When a position measurement is made the probability of finding a particle in the state $|\psi\rangle$ on the edge $|e\rangle$ is $|\langle e|\psi\rangle|^2$, in adherence to Born's rule.
Several previous studies have investigated scattering on quantum graphs. Similar to the formalism used in this paper, semi-infinite lines (``runways" of attached edges) are attached to a graph. On these runways the time step operator passes the particle from one edge to the next sequentially, either toward or away from a particular vertex in the graph, and in this way the particle enters and exits the graph. In \cite{farhi} scattering theory was used to show that tree graphs could be used to solve some kinds of decision problems using continuous time walks. A discrete time scattering theory approach was fleshed out in \cite{feldman}, where the connection between the number of steps needed to get through a graph and the transmission amplitude was found, as well as some more general results on the reflection and transmission amplitudes for a graph. In \cite{modwalks} it was shown how the scattering matrices for subgraphs could be used to construct the scattering matrix for the overall graph.
In this paper we show that the semi-infinite runways of edges can be replaced with a single edge. This allows us to analyze problems using strictly finite graphs, removing the issue of non-normalizable states. More importantly, we show that the scattering coefficients, as well as the response to any incoming states, can be described entirely and succinctly by the characteristic polynomial of that finite graph's time step operator. The time step operator is dictated by the structure of the graph, so this allows us to immediately see the relationship between the scattering coefficients of a graph and the structure of that graph.
Specifically, we find that the scattering coefficients of a graph are determined by the eigenstates and eigenvalues of the time step operator (which is now a finite matrix). Fortunately, the techniques described in this paper do not at any time require these quantities to be calculated; instead we find that the characteristic polynomial itself is all that is needed.
\vspace{5mm}
In section 2 we consider the case of a single runway. The problem and the exact definition of the effective reflection coefficient are defined rigorously. Having only a single runway leads to some surprisingly compact results that are explored here. We then use the frequency-dependent effective reflection coefficient to derive the graph's impulse response. With this in hand we are able to rapidly calculate (with a convolution) the response to any arbitrary signal.
In section 3 we address the challenges of using a reflected signal to gain information about a graph and a theorem is proven which describes this difficulty explicitly. Here we learn the relationship between the eigenvalues of the time step operator and the length of a signal on the runway necessary to detect the effect of those eigenvalues. This is an important tool for understanding the computational time of algorithms.
In section 4 we explore the case of a graph attached to multiple runways, and the effective scattering coefficient between them. A very powerful theorem is derived that allows for the rapid and simultaneous calculation of each of these coefficients in terms of the resolvent.
Finally, in the appendix (the second half of this paper) there are a series of examples that put all of the theorems and techniques described in this paper to use, demonstrating how simple they are in practice.
\pagebreak
\section{Basic Framework}
The situation in question is an arbitrary graph, $G$, attached to an infinite ``runway" of edges. The vertices on the runway are labeled 0, 1, 2, ... where 0 is a given vertex of $G$. We define the unitary time step operator on the runway as the one that passively moves each edge state one step. I.e., ${\bf U}|j,j+1\rangle = |j+1,j+2\rangle$ and ${\bf U}|j+1, j\rangle = |j,j-1\rangle$. The behavior of ${\bf U}$ in the graph $G$ is not specified here.
Our goal is to replace the graph $G$ with a {\it single} vertex and to encode all of its behavior into that one vertex. A set of constant reflection/scattering coefficients doesn't contain nearly enough information to simulate the behavior of a complicated graph, but if we allow them to be frequency-dependent, then this goal is attainable.
The frequency-dependent scattering coefficient is defined such that it replaces the entire graph with a single reflection coefficient at vertex zero with value $S(\lambda)$. When dealing with multiple runways attached to the same graph we can consider them one at a time, since the time step operator is linear.
\begin{figure}[h!]
\centering
\includegraphics[width=2.8in]{defined.jpg}
\caption{A signal that advances by $\lambda$ every time step produces a ``reflection" that is shifted by some phase. The scattering coefficient is defined such that the graph can be replaced with a single vertex that reflects with $S(\lambda)$.}
\end{figure}
In this section we'll first consider a graph with one connected runway. It will be demonstrated that we can understand the response of a graph connected to an infinite runway by looking at the characteristic polynomial of the graph alone.
We define an operator, ${\bf U}$, on $G$ and the runway such that ${\bf U}|1,0\rangle = |in\rangle$ and ${\bf U}|out\rangle = |0,1\rangle$, and $\forall n>0$, ${\bf U}|n-1,n\rangle = |n,n+1\rangle$ and ${\bf U} |n+1,n\rangle = |n,n-1\rangle$.
\vspace{5mm}
An incoming pure momentum state takes the form $\sum_{j=0}^\infty \lambda^{j+1} |j+1,j\rangle$. If vertex 0 is completely reflective with reflection coefficient $r$, then the response is $\sum_{j=0}^\infty r\lambda^{-j} |j,j+1\rangle$ (see figure 1). Clearly,
\begin{equation}
|\Psi\rangle = \sum_{j=0}^\infty \lambda^{j+1} |j+1,j\rangle + \sum_{j=0}^\infty r\lambda^{-j} |j,j+1\rangle
\end{equation}
is an eigenstate with eigenvalue $\lambda$.
In this form it's easier to see how this is a signal and reflection. After $n$ time steps this will take the form $\lambda^n|\Psi\rangle = \sum_{j=0}^\infty \lambda^{n+j+1} |j+1,j\rangle + \sum_{j=0}^\infty r\lambda^{n-j} |j,j+1\rangle$, and after $n+1$ time steps the state is $\lambda^{n+1}|\Psi\rangle = \sum_{j=0}^\infty \lambda^{n+j+2} |j+1,j\rangle + \sum_{j=0}^\infty r\lambda^{n+1-j} |j,j+1\rangle$. The coefficient of $|0,1\rangle$ is $r$ times the coefficient of $|1,0\rangle$ in the {\it previous} time step, which is exactly as it should be.
If instead of a single simply-reflecting vertex there is a graph attached to vertex 0, then the eigenstate now takes the form
\begin{equation}\label{psi}
|\Psi\rangle = \sum_{j=0}^\infty \lambda^{j+1} |j+1,j\rangle + \sum_{j=0}^\infty S(\lambda)\lambda^{-j} |j,j+1\rangle + |G\rangle
\end{equation}
where $|G\rangle$ is the component of the eigenstate contained in $G$. In this way we can define an ``effective reflection coefficient", $S(\lambda)$. Unless $G$ is a single vertex, $S(\lambda)$ will be a non-constant function of $\lambda$. In either case, equation \ref{psi} is a $\lambda$-eigenstate of the graph and runway.
\begin{figure}[h!]
\centering
\includegraphics[width=3.0in]{attached.jpg}
\caption{${\bf U}$ and ${\bf U}_\alpha$. By selecting the correct value of $\alpha$ we can produce a $\lambda$-eigenstate of ${\bf U}_\alpha$ that is identical to the $\lambda$-eigenstate of ${\bf U}$ on the edges they have in common (all of which are in $G$).}
\end{figure}
In order to find $S(\lambda)$ we create a new operator, ${\bf U}_\alpha$, that reflects back into the graph rather than transmitting into or receiving from the runway. That is:
\begin{equation}
{\bf U}_\alpha |out\rangle = \alpha |in\rangle
\end{equation}
We will find that there is a simple relationship between $\alpha$ and $S(\lambda)$, and that we can determine the correct value of $\alpha$ by tuning it such that ${\bf U}_\alpha$ has $\lambda$ as an eigenvalue.
First we need to derive a few properties of the characteristic polynomial of ${\bf U}_\alpha$.
\vspace{5mm}
\begin{thm}\label{polynomial}
$C(z,\alpha) = \left|{\bf U}_\alpha - z{\bf I}\right| = b(z)(f(z) + \alpha g(z))$, where $f(z)$, $g(z)$, and $b(z)$ are polynomials in $z$. $f(z)$ and $g(z)$ share no common roots, the roots of $b(z)$ sit on the unit circle, and the roots of $f(z)$ sit strictly within the unit circle.
\end{thm}
{\it Proof}
This is easy to verify immediately by inspection of the matrix ${\bf U}_\alpha - z{\bf I}$: $\alpha$ appears in exactly one entry, so every term in the determinant expansion contains $\alpha$ at most to the first power. Hence the characteristic polynomial is affine in $\alpha$.
We can collect the terms with and without $\alpha$ into two polynomials, which can be labeled $b(z)f(z)$ and $\alpha b(z)g(z)$, where $b(z)$ is the collection of all of the factors common to both polynomials.
Since the roots of $b(z)$ are independent of $\alpha$, and since ${\bf U}_\alpha$ can be unitary (when $|\alpha|=1$), the roots of $b(z)$ are eigenvalues of a unitary matrix and therefore have modulus 1.
Clearly, $f(z)=|{\bf U}_0-z{\bf I}|$, where ${\bf U}_0 := {\bf U}_\alpha\big|_{\alpha=0}$. Define $|\Psi_0\rangle = a|out\rangle+|G\rangle$ to be a normalized eigenstate of ${\bf U}_0$ and $\eta$ to be a root of $f(z)$. Since ${\bf U}_0|out\rangle = 0|in\rangle$, we know that $\langle in|\Psi_0\rangle = 0$. When $|\alpha|=1$ we know that ${\bf U}_\alpha$ is unitary and therefore
$\begin{array}{ll}
|\eta|^2 = \langle\Psi_0|{\bf U}_0^\dagger {\bf U}_0|\Psi_0\rangle \\[2mm]
= \langle\Psi_0|\left({\bf U}_\alpha^\dagger - \alpha^*|out\rangle\langle in|\right)\left({\bf U}_\alpha - \alpha|in\rangle\langle out|\right)|\Psi_0\rangle \\[2mm]
= \langle\Psi_0|{\bf U}_\alpha^\dagger{\bf U}_\alpha|\Psi_0\rangle - 2Re\left[ \alpha^*\langle\Psi_0|out\rangle\langle in|{\bf U}_\alpha|\Psi_0\rangle \right] + |\alpha|^2\langle\Psi_0|out\rangle\langle in|in\rangle\langle out|\Psi_0\rangle \\[2mm]
= \langle\Psi_0|\Psi_0\rangle - 2Re\left[ a^*\alpha^*\langle in|{\bf U}_\alpha|\Psi_0\rangle \right] + |\alpha|^2|a|^2 \\[2mm]
= 1 - 2Re\left[ a^*\alpha^*\left(a\alpha\right) \right] + |\alpha|^2|a|^2 \\[2mm]
= 1 - |\alpha|^2|a|^2 \\[2mm]
=1 - |a|^2 \\[2mm]
<1
\end{array}$
If $a=0$, then $|\Psi_0\rangle$ is a bound eigenstate, and $\eta$ would actually be a root of $b(z)$. Therefore all of the roots of $f(z)$ are inside the unit circle.
$\square$
\vspace{5mm}
In everything that follows $b(z)$ either doesn't play a role, isn't relevant, or factors out, so it will be suppressed. So far we've replaced one unknown variable, $S(\lambda)$, with another, $\alpha$; this is nonetheless a step forward because we can quickly find a closed-form solution for $\alpha$.
\vspace{5mm}
\begin{thm}\label{important1}
$S(\lambda) = \frac{1}{\alpha} = -\frac{g(\lambda)}{f(\lambda)}$.
\end{thm}
{\it Proof}
This is the essential trick of this paper.
The $\lambda$-eigenstate of ${\bf U}$ takes the form $|\Psi\rangle=\sum_{k=0}^\infty \lambda^{k+1}|k+1,k\rangle + S(\lambda)\sum_{k=0}^\infty \lambda^{-k}|k,k+1\rangle + \lambda S(\lambda)|out\rangle + |in\rangle + |G\rangle$.
The $\lambda$-eigenstate of ${\bf U}_\alpha$ takes the form $|\Psi_\alpha\rangle = \frac{\lambda}{\alpha}|out\rangle + |in\rangle + |G\rangle$.
Restricted to $G$, these two states are the same. But whereas the coefficient of $|in\rangle$ in $|\Psi\rangle$ is dictated by the incoming signal on the runway, in $|\Psi_\alpha\rangle$ it's dictated by the coefficient of $|out\rangle$ and the value of $\alpha$. By tuning $\alpha$ to the correct value we're ``feeding the output to the input" so that $\langle in|{\bf U}^n|\Psi\rangle = \langle in|{\bf U}_\alpha^n|\Psi_\alpha\rangle = \lambda^n$, $\forall n$.
Equating the coefficients of the $|out\rangle$ states in these two eigenstates we see immediately that $S(\lambda)=\frac{1}{\alpha}$. We can then solve for $\alpha$ using the characteristic equation. When $\lambda$ is an eigenvalue we have that $0=f(\lambda)+\alpha g(\lambda)$ and therefore $\alpha = -\frac{f(\lambda)}{g(\lambda)}$. It follows that $S(\lambda)=-\frac{g(\lambda)}{f(\lambda)}$. Note that $S(z)$ is a meromorphic function of $z$.
$\square$
\vspace{5mm}
To reiterate and make clear what $S(\lambda)$ is: so long as the runway and $G$ are in the $\lambda$-eigenstate, we can replace $G$ with a reflection coefficient, $S(\lambda)$, at vertex 0. In this way we can describe the graph's reaction to an infinite signal with frequency $\lambda=e^{i\theta}$.
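As an illustration of theorem \ref{important1}, consider a toy gadget of our own making: $G$ contains only the two edge states $|in\rangle$ and $|out\rangle$, with internal dynamics ${\bf U}|in\rangle = e^{i\beta}|out\rangle$. In the basis $(|in\rangle,|out\rangle)$ one finds $C(z,\alpha)=z^2-\alpha e^{i\beta}$, so $f(z)=z^2$, $g(z)=-e^{i\beta}$, and $S(\lambda)=e^{i\beta}\lambda^{-2}$: a two-step delay with phase $\beta$, exactly what a signal traversing $|in\rangle$ and then $|out\rangle$ should produce. Since $C(z,\alpha)$ is affine in $\alpha$ (theorem \ref{polynomial}), the determinants at $\alpha=0$ and $\alpha=1$ determine $b(z)f(z)$ and $b(z)g(z)$, and $b(\lambda)$ cancels in the ratio (for $\lambda$ away from the roots of $b$); the sketch below uses this to compute $S(\lambda)$ for any gadget matrix:
\begin{verbatim}
import numpy as np

def S(U_of_alpha, lam):
    # S(lam) = -g(lam)/f(lam).  C(z, alpha) = det(U_alpha - z I) is
    # affine in alpha, so b*f is the alpha=0 determinant and b*g is
    # the alpha=1 determinant minus the alpha=0 one; b(lam) cancels.
    d = U_of_alpha(0.0).shape[0]
    bf = np.linalg.det(U_of_alpha(0.0) - lam*np.eye(d))
    bg = np.linalg.det(U_of_alpha(1.0) - lam*np.eye(d)) - bf
    return -bg / bf

beta = 0.4                                 # internal phase of the toy gadget
U = lambda a: np.array([[0, a], [np.exp(1j*beta), 0]], dtype=complex)
lam = np.exp(0.3j)                         # a point on the unit circle
print(S(U, lam), np.exp(1j*beta)/lam**2)   # the two values agree
\end{verbatim}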
\subsection{Particulars for a Single Runway}
In the case of a single input and output ${\bf U}_\alpha$ is unitary for $|\alpha|=1$. This unitarity has a lot of consequences, but in particular $S(z)$ takes the following form:
\begin{equation}\label{compliment}
S(z) = \frac{1}{\alpha} = -\frac{g(z)}{f(z)} = -\frac{g_0}{z^{s} } \prod_j \frac{1-z\eta_j^*}{z - \eta_j}
\end{equation}
where $|\eta_j|<1$, $\forall j$.
\vspace{5mm}
In this section we'll explore the special case of a single runway.
\vspace{5mm}
\begin{thm}
$C(z,\alpha) = g_0\alpha z^dC^*\left(\frac{1}{z}, \frac{1}{\alpha}\right), \quad \forall z,\forall\alpha \ne 0$ where $C^*$ indicates the coefficients are conjugated, $d$ is the degree of $C(z,\alpha)$, and $g_0$ is the constant term of $g(z)$. Equivalently, $f(z) = z^s\prod_{j=1}^{d^\prime} \left(z-\eta_j\right), \quad g(z) = g_0z^df^*\left(\frac{1}{z}\right) = g_0\prod_{j=1}^{d^\prime} \left(1-z\eta_j^*\right)$, where $d^\prime+s=d$.
\end{thm}
{\it Proof}
In what follows assume that $|\alpha| = 1$. This means that ${\bf U}_\alpha$ is unitary, and the roots of the associated characteristic polynomial, $C(z,\alpha)$, all have modulus 1. While this proof will only consider $f(z)$ and $g(z)$, it works in exactly the same way for $b(z)f(z)$ and $b(z)g(z)$.
Let $C(z,\alpha) = \prod_{k=1}^d \left(z - \lambda_k\right) = \sum_{k=0}^d f_k z^k + \alpha\sum_{k=0}^d g_k z^k$. Note that $f_d=1$ and $g_d = g_{d-1}=0$, since when the determinant was taken any term with $\alpha$ necessarily did not include 2 diagonal elements (2 powers of $z$).
\vspace{5mm}
$\begin{array}{ll}
f(\lambda_k) + \alpha g(\lambda_k) = 0 \\[2mm]
\Leftrightarrow C(\lambda_k,\alpha) = 0 \\[2mm]
\Leftrightarrow \left(C(\lambda_k, \alpha)\right)^* = 0 \\[2mm]
\Leftrightarrow C^*(\lambda_k^*, \alpha^*) = 0 \\[2mm]
\Leftrightarrow C^*\left(\frac{1}{\lambda_k}, \frac{1}{\alpha}\right) = 0 \\[2mm]
\Leftrightarrow 0 = f^*\left(\frac{1}{\lambda_k}\right) + \frac{1}{\alpha}g^*\left(\frac{1}{\lambda_k}\right) \\[2mm]
\end{array}$
Therefore, $C(z, \alpha)$ and $C^*\left(\frac{1}{z}, \frac{1}{\alpha}\right)$ have the same set of zeros. It also follows that $\alpha z^d C^*\left(\frac{1}{z}, \frac{1}{\alpha}\right) = \alpha z^df^*\left(\frac{1}{z}\right) + z^dg^*\left(\frac{1}{z}\right)$ is a polynomial in $z$ and $\alpha$ which, again, has the same set of zeros. Therefore, $\alpha z^d C^*\left(\frac{1}{z}, \frac{1}{\alpha}\right)$ and $C(z,\alpha)$ are proportional to each other.
$\begin{array}{ll}
bC(z,\alpha) \\[2mm]
=\alpha z^dC^*\left(\frac{1}{z},\frac{1}{\alpha}\right) \\[2mm]
= \alpha z^d\left[\sum_{k=0}^d f_k^* \frac{1}{z^k} + \frac{1}{\alpha}\sum_{k=0}^d g_k^* \frac{1}{z^k}\right] \\[2mm]
= \alpha\sum_{k=0}^d f_k^*z^{d-k} + \sum_{k=0}^d g_k^* z^{d-k} \\[2mm]
= \alpha\sum_{k=0}^d f_{d-k}^*z^{k} + \sum_{k=0}^d g_{d-k}^* z^{k} \\[2mm]
\Rightarrow \left\{\begin{array}{ll}
bf_k = g_{d-k}^* \\
bg_k = f_{d-k}^* \\
\end{array}\right.
\end{array}$
Now,
$f_d=1 \Rightarrow b=g_0^*$
We now have that $C(z,\alpha) = g_0\alpha z^d C^*\left(\frac{1}{z}, \frac{1}{\alpha}\right)$.
Because $g_d=g_{d-1}=0$, the relations above give $f_0=f_1=0$, which implies that $s\ge 2$. Moreover, since the constant term of a characteristic polynomial equals the determinant up to a sign, $1=|\alpha g_0|=|g_0|$, which means that $0$ is not a root of $g(z)$.
Keep in mind that the result above is merely a statement about the polynomial $C(z,\alpha)$. It is true regardless of the value of $\alpha$.
\begin{equation}
C(z,\alpha) = g_0\alpha z^d C^*\left(\frac{1}{z},\frac{1}{\alpha}\right)
\end{equation}
in general, $\forall \alpha\ne0$. Or equivalently,
\begin{eqnarray}
f(z) = g_0z^d g^*\left(\frac{1}{z}\right)\\
g(z) = g_0z^d f^*\left(\frac{1}{z}\right)
\end{eqnarray}
$f(z)$ and $g(z)$ are said to be ``reciprocal polynomials" of each other.
$\square$
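This reciprocal structure is easy to check numerically. In the sketch below (our own construction, not from the paper's appendix), ${\bf U}_\alpha$ is built as $\alpha|in\rangle\langle out|$ plus a random unitary mapping $\{|out\rangle\}^\perp$ onto $\{|in\rangle\}^\perp$; the coefficients of $b(z)f(z)$ and $b(z)g(z)$ are read off from the characteristic polynomials at $\alpha=0,1$, and $g$ is confirmed to be the reversed, conjugated copy of $f$ up to a unimodular constant:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 6                                    # total number of edge states
A = rng.normal(size=(d-1, d-1)) + 1j*rng.normal(size=(d-1, d-1))
V, _ = np.linalg.qr(A)                   # random unitary on the complement

def U(alpha):
    M = np.zeros((d, d), dtype=complex)
    M[0, d-1] = alpha                    # |out> -> alpha |in>
    M[1:, :d-1] = V                      # {out}^perp -> {in}^perp
    return M

pf = np.poly(U(0.0))                     # prop. to b(z)f(z), monic in z
pg = np.poly(U(1.0)) - pf                # prop. to b(z)g(z)
rc = np.conj(pf)[::-1]                   # coefficients of z^d (bf)^*(1/z)
k = np.argmax(np.abs(rc))
c = pg[k] / rc[k]                        # the constant of proportionality
print(np.allclose(pg, c*rc), abs(c))     # True 1.0: reciprocal, |c| = 1
\end{verbatim}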
\vspace{5mm}
We can say even more about the characteristic polynomial. The zeros of $f(z)$ and $g(z)$ have a very particular behavior and relationship.
\vspace{5mm}
\begin{thm}
$C(z,\alpha) = \underbrace{z^s\prod_j \left(z-\eta_j\right)}_{f(z)} + \alpha\underbrace{g_0\prod_j \left(1-z\eta_j^*\right)}_{g(z)}$
where $0<|\eta_j|<1$, $\forall j$.
\end{thm}
{\it Proof}
We already know that $|\eta_j|<1$ from theorem \ref{polynomial} and that $f(z) = g_0z^d g^*\left(\frac{1}{z}\right)$ from the last theorem. It follows that for $\eta_j\ne0$, $f(\eta_j)=0 \Leftrightarrow g\left(\frac{1}{\eta_j^*}\right) = 0$. Therefore, if $f(z) = z^s\prod_j \left(z - \eta_j\right)$, then $g(z)\propto\prod_j \left(z - \frac{1}{\eta_j^*}\right)\propto\prod_j \left(1 - z\eta_j^*\right)$. With $g_0$ the constant term in $g(z)$, we can write $g(z) = g_0 \prod_j \left(1 - z\eta_j^*\right)$.
Notice that $f(z)$ and $g(z)$ share a root if and only if $\eta_j = \frac{1}{\eta_j^*}$ or $|\eta_j| = 1$. But this is exactly what we expect for the roots of $b(z)$.
$\square$
\vspace{5mm}
The above statement about the roots of $f(z)$ applies more generally; that is, it continues to apply when there are multiple inputs and outputs.
\vspace{5mm}
\begin{thm}\label{distinct}
When $|\alpha|=1$, the roots of $C(z, \alpha)$ are distinct.
\end{thm}
{\it Proof}
When $|\alpha|=1$ we know that ${\bf U}_\alpha$ is unitary. An immediate consequence of which is the fact that ${\bf U}_\alpha$ is diagonalizable and expressible as ${\bf U}_\alpha = \sum_\lambda \lambda{\bf P}_\lambda$, where ${\bf P}_\lambda$ is a projection operator onto the $\lambda$-eigenspace. Each of these projections can be expressed in terms of the resolvent, which in turn can be written as a power series in $\alpha-\alpha_0$ near $\alpha_0$, where $\alpha_0$ is any arbitrary point on the unit circle.
This implies that the projection operators can likewise be expressed as a power series in $\alpha-\alpha_0$, and since ${\bf P}_\lambda = |V_\lambda\rangle\langle V_\lambda|$, it follows that the eigenvectors share the same property. Finally, since ${\bf U}_\alpha$ and $|V_\lambda\rangle$ are power series in $\alpha-\alpha_0$, and ${\bf U}_\alpha|V_\lambda\rangle = \lambda|V_\lambda\rangle$, we can see that the eigenvalues themselves, $\lambda$, are power series in $\alpha-\alpha_0$.
Now define $c_0(z)(z-\lambda_0)^t = f(z) + \alpha_0g(z)$. Note that according to the last theorem $g(z)\ne0$ when $|z|=1$.
\vspace{5mm}
$\begin{array}{ll}
0 = f(\lambda) + \alpha g(\lambda) \\[2mm]
= f(\lambda) + \alpha_0 g(\lambda) + (\alpha-\alpha_0) g(\lambda) \\[2mm]
= c_0(\lambda)(\lambda-\lambda_0)^t + (\alpha-\alpha_0) g(\lambda) \\[2mm]
\Rightarrow (\lambda-\lambda_0)^t = - \frac{g(\lambda)}{c_0(\lambda)}(\alpha-\alpha_0) \\[2mm]
\Rightarrow \lambda = \lambda_0 + O\left(\sqrt[t]{\alpha-\alpha_0}\right) \\[2mm]
\end{array}$
However, since the eigenvalues are expressible as a power series in $\alpha-\alpha_0$, $t=1$. Therefore, because $\alpha_0$ is arbitrary, the multiplicity of any zero of $f(z) + \alpha g(z)$ is one when $|\alpha|=1$.
$\square$
\vspace{5mm}
In this proof it was important that $|\alpha|=1$ because it ensures that ${\bf U}_\alpha$ is unitary. For a finite set of values of $\alpha$ (off of the unit circle) we find that $f(z)+\alpha g(z)$ can have higher-multiplicity roots; however, at those points ${\bf U}_\alpha$ is no longer diagonalizable and the degenerate eigenvalues correspond to {\it generalized} eigenvectors.
\vspace{5mm}
\begin{thm}\label{permute}
When $\alpha$ loops once around the unit circle the eigenvalues cyclically permute one step. That is, looping $\alpha$ changes $\lambda_j\to\lambda_{j+1}$ and $\lambda_d\to\lambda_1$, where $arg\left(\lambda_1\right) < arg\left(\lambda_2\right) < \cdots < arg\left(\lambda_d\right)$.
\end{thm}
{\it Proof}

We know that looping $\alpha$ once (returning it to its original value) can't change the spectrum of the eigenvalues, so the effect must be a permutation. In addition, since the eigenvalues are always distinct for every value of $|\alpha|=1$, this permutation must be cyclic (the eigenvalues can't ``slide past each other" on the unit circle).
So we know that looping $\alpha$ produces a permutation of the eigenvalues of the form $\lambda_j\to\lambda_{j+t}$ (where $\lambda_d\equiv\lambda_0$). The only question that remains is the value of $t$.
Define $\lambda = e^{i\theta}$. The eigenvalues satisfy
$\begin{array}{ll}
0 = f(\lambda) + \alpha g(\lambda) \\[2mm]
\Rightarrow -f\left(e^{i\theta}\right) = \alpha g\left(e^{i\theta}\right) \\[2mm]
\Rightarrow -e^{is\theta}\prod_{j=1}^{d^\prime} \left(e^{i\theta} - \eta_j\right) = \alpha g_0\prod_{j=1}^{d^\prime} \left(1 - e^{i\theta}\eta_j^*\right) \\[2mm]
\Rightarrow -e^{is\theta}\prod_{j=1}^{d^\prime} \left(e^{i\theta} - \eta_j\right) = \alpha g_0\prod_{j=1}^{d^\prime} e^{i\theta} \left(e^{i\theta} - \eta_j\right)^* \\[2mm]
\Rightarrow -e^{is\theta}\prod_{j=1}^{d^\prime} \left(e^{i\theta} - \eta_j\right) = \alpha g_0 e^{id^\prime \theta} \prod_{j=1}^{d^\prime} \left(e^{i\theta} - \eta_j\right)^* \\[2mm]
\Rightarrow \alpha = -g_0^*e^{i(s-d^\prime)\theta}\prod_{j=1}^{d^\prime} \frac{\left(e^{i\theta} - \eta_j\right)}{\left(e^{i\theta} - \eta_j\right)^*} \\[2mm]
\Rightarrow \log(\alpha) = i\pi + \log(g_0) + i(s-d^\prime)\theta + \sum_{j=1}^{d^\prime} \log\left( \frac{\left(e^{i\theta} - \eta_j\right)}{\left(e^{i\theta} - \eta_j\right)^*}\right) \\[2mm]
\Rightarrow i\arg(\alpha) = i\pi + i\arg(g_0) + i(s-d^\prime)\theta + \sum_{j=1}^{d^\prime} i2\arg\left(e^{i\theta} - \eta_j\right) \\[2mm]
\end{array}$
We now have a relation between the zeros of $f(z)$ and the phase of $\alpha$.
\begin{equation}\label{phase}
\arg(\alpha) = \pi + \arg(g_0) + (s-d^\prime)\theta + 2\sum_{j=1}^{d^\prime} \arg\left(e^{i\theta} - \eta_j\right)
\end{equation}
At this point we allow $\theta$ to smoothly increase by $2\pi$, then take the difference. Since $|\eta_j|<1$, the angle between $e^{i\theta}$ and $\eta_j$ sweeps from $0$ to $2\pi$ monotonically.
$\begin{array}{ll}
\Rightarrow \Delta arg\left(\alpha\right) = (s-{d^\prime})2\pi + 2\sum_{j=1}^{d^\prime} 2\pi \\[2mm]
\Rightarrow \Delta arg\left(\alpha\right) = (s+{d^\prime})2\pi \\[2mm]
\Rightarrow \Delta arg\left(\alpha\right) = 2\pi d \\[2mm]
\end{array}$
Looping a given eigenvalue once around the unit circle causes $\alpha$ to loop $s+d^\prime = d$ times. Looping an eigenvalue once is a permutation of the form $\lambda_{j}\to\lambda_{j+d} = \lambda_j$. It follows that if looping $\alpha$ once produces a permutation of the form $\lambda_j\to\lambda_{j+t}$, then looping $\lambda_j$ means that $\alpha$ loops $\frac{d}{t}$ times. But we know that looping an eigenvalue once requires $\alpha$ to loop $d$ times, and therefore $t=1$.
$\square$
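Theorem \ref{permute} can be watched directly in a numerical experiment (our own illustration): track each eigenvalue branch of ${\bf U}_\alpha$ while $\alpha=e^{i\varphi}$ sweeps once around the unit circle, and compare the final position of each branch with the initial spectrum.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d = 5
A = rng.normal(size=(d-1, d-1)) + 1j*rng.normal(size=(d-1, d-1))
V, _ = np.linalg.qr(A)

def U(alpha):
    M = np.zeros((d, d), dtype=complex)
    M[0, d-1] = alpha                 # |out> -> alpha |in>
    M[1:, :d-1] = V
    return M

def follow(prev, new):
    # greedy nearest-neighbour continuation of eigenvalue branches
    new, out = list(new), []
    for p in prev:
        j = int(np.argmin([abs(p - w) for w in new]))
        out.append(new.pop(j))
    return np.array(out)

start = np.linalg.eigvals(U(1.0))
start = start[np.argsort(np.angle(start))]   # order by angle
track = start.copy()
for phi in np.linspace(0, 2*np.pi, 2001)[1:]:
    track = follow(track, np.linalg.eigvals(U(np.exp(1j*phi))))

print(np.round(np.angle(start), 3))
print(np.round(np.angle(track), 3))  # same spectrum, cyclically shifted by one
\end{verbatim}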
\vspace{5mm}
\begin{thm}\label{unique}
Any value $\lambda$ with $|\lambda|=1$ can be made an eigenvalue of ${\bf U}_\alpha$ by choosing the correct value of $\alpha$. Moreover, this value of $\alpha$ is unique.
\end{thm}
{\it Proof}

In the last theorem it was shown that the eigenvalues, which are functions of $\alpha$, cyclically permute when $\alpha$ loops around the unit circle. These functions are continuous, so every value between these eigenvalues exists for some value of $\alpha$ as well. Moreover, $arg(\lambda)$ is a strictly monotonic function of $arg(\alpha)$, and therefore the value of $\alpha$ is unique for a given eigenvalue.
This can be seen by first showing that $\frac{\partial}{\partial \theta} arg\left(e^{i\theta} - \eta\right) > \frac{1}{2}$ when $|\eta|<1$. This can be proven by either using the inscribed angle theorem to establish a lower bound or by direct calculation. It follows that
$\begin{array}{ll}
arg\left(\alpha\right) = \pi + \arg(g_0) + (s-d^\prime)\theta + 2\sum_{j=1}^{d^\prime} arg\left(e^{i\theta} - \eta_j\right) \\[2mm]
\Rightarrow \frac{\partial}{\partial \theta} arg\left(\alpha\right) = (s-d^\prime) + 2\sum_{j=1}^{d^\prime} \frac{\partial}{\partial \theta} arg\left(e^{i\theta} - \eta_j\right) > (s-d^\prime) + 2d^\prime\left(\frac{1}{2}\right) = s \ge 0 \\[2mm]
\Rightarrow \frac{\partial}{\partial \theta} arg\left(\alpha\right) > 0 \\[2mm]
\end{array}$
Therefore $\arg(\alpha)$ and $\arg(\lambda)$ are strictly monotonic functions of each other. This monotonicity ensures the uniqueness of $\alpha$ for a given value of $\lambda$ by making sure that $\arg(\alpha(\lambda))$ doesn't double back on itself.
$\square$
\subsection{Arbitrary Inputs for a Single Runway}
In the language of signal analysis, the last section was a derivation of the ``frequency response" of the graph $G$. We can define the input $x[n]$ (output $y[n]$) as the amplitude on the state $|1,0\rangle$ (state $|0,1\rangle$) at time step $n$ (time step $n+1$). The input can be encoded onto the runway in an initial state of the form $\sum_{k=0}^\infty x[k]|k+1,k\rangle$.
At time step $n$, the overall state of the graph and runway will be:
\begin{equation}
\sum_{k=0}^\infty x[k+n]|k+1,k\rangle + \sum_{k=0}^\infty y[n-k-1]|k,k+1\rangle + |G\rangle
\end{equation}
That is, we can describe the coefficients of the edge states on the runway with the ``signal function" $x[n]$ and the ``response function" $y[n]$ which are series of complex numbers, one for each integer $n$.
At the $n$th time step, $x[n]$ is the coefficient of $|1,0\rangle$ and $y[n-1]$ is the coefficient of $|0,1\rangle$. This is defined so that if $S(z)\equiv r$, where $r$ is constant, then $y[n]=rx[n]$. When $S(z)$ is constant this holds true for any $x[n]$, but for non-trivial $S(z)$ we have
\begin{equation}\label{ref1}
x[n]=z^n, \forall n \quad \Rightarrow \quad y[n]=S(z)x[n], \forall n
\end{equation}
This is nothing more than a restatement of the definition of $S(z)$, as described in eq. \ref{psi}. Using $S(z)$ we can derive an expression for the ``impulse response", $h[n]$, which is the response, $y[n]$, produced from the input $x[n] = \delta[n]$, where $\delta[n]=\left\{\begin{array}{ll}1, n=0\\0,n\ne0\end{array}\right.$ is the Kronecker delta function. From the definitions of $x[n]$, $y[n]$, and the impulse response itself we can say that
\begin{equation}\label{ref2}
h[n]=\langle 0,1|{\bf U}^{n+1}|1,0\rangle
\end{equation}
That is; the impulse is just the state $|1,0\rangle$ at time zero and the impulse response is just the coefficient of $|0,1\rangle$ read at each sequential time step.
We already know that for a simple reflection (such that ${\bf U}|1,0\rangle = r|0,1\rangle$) $y[n]=rx[n]$ and therefore $h[n]=r\delta[n]$. This can be seen from either of equations \ref{ref1} or \ref{ref2}. The response of {\it any} signal can be found using the fact that $y[n] = (x*h)[n] = \sum_{k}h[k]x[n-k]$. So rather than being a single example, the impulse response is the key to finding the response to any input \cite{dspfirst}.
\vspace{5mm}
\begin{thm}
If $h[n]$ is the impulse response (that is, $y[n]=h[n]$ when $x[n]=\delta[n]$), then for any signal function $x[n]$, $y[n] = (x*h)[n]$.
\end{thm}
{\it Proof}
A graph's response to signal functions is a map, $\mathcal{T}$, from the space of sequences of complex numbers to itself. When we say that $y[n]$ is the response to $x[n]$, we can write this more succinctly as $\mathcal{T}\left(x[n]\right) = y[n]$. $\mathcal{T}$ inherits linearity from the linearity of ${\bf U}$.
Define $x_k = x[k]$, $\forall k$. We do this to distinguish between the function $x[n]$ and the value of the function evaluated at $k$.
We can use the Kronecker delta function to break $x[n]$ apart into a sum of simple signals. Trivially, for any fixed value of $k$, $x_k\delta[n-k] = x_k\delta[k-n] = \left\{\begin{array}{ll}x_k&, n=k\\0&,n\ne k\end{array}\right.$.
Therefore, if we sum over $k$ we reconstruct the full function:
$$x[n] = \sum_k x_k\delta[n-k]$$
From this and the fact that $\mathcal{T}\left(\delta[n]\right) = h[n]$ it follows that
$\begin{array}{ll}
y[n] \\[2mm]
= \mathcal{T}\left(x[n]\right) \\[2mm]
= \mathcal{T}\left(\sum_k x_k\delta[n-k]\right) \\[2mm]
= \sum_k x_k \mathcal{T}\left(\delta[n-k]\right) \\[2mm]
= \sum_k x_k h[n-k] \\[2mm]
= \left(x*h\right)[n]
\end{array}$
$\square$
\vspace{5mm}
\begin{thm}\label{impulse}
The impulse response, $h[n]$, is given by $h[n] = \frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}S(z)\,dz = -\frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}\frac{g(z)}{f(z)}\,dz$
\end{thm}
{\it Proof}
For a single frequency, $x[n]=\lambda^n$, we find that
$\begin{array}{ll}
y[n] = \sum_{k}h[k]x[n-k] \\[2mm]
= \sum_{k}h[k]\lambda^{n-k} \\[2mm]
= \sum_{k}h[k]\lambda^{-k}\lambda^n \\[2mm]
= \left(\sum_{k}h[k]\lambda^{-k}\right)x[n] \\[2mm]
\end{array}$
For any fixed value of $\lambda$ we already have a way of writing this (thm. \ref{important1}). Therefore
\begin{equation}\label{simpulse}
\sum_{k}h[k]\lambda^{-k} = S(\lambda) = -\frac{g(\lambda)}{f(\lambda)}
\end{equation}
Rewriting $\lambda = e^{i\omega}$:
$\begin{array}{ll}
S(\lambda) = \sum_{k}h[k]\lambda^{-k} \\[2mm]
\Rightarrow \sum_{k}h[k]e^{-i\omega k} = S\left(e^{i\omega}\right) \\[2mm]
\Rightarrow \sum_{k}h[k]e^{i\omega (n-k)} = e^{in\omega}S\left(e^{i\omega}\right) \\[2mm]
\Rightarrow \int_0^{2\pi} \sum_{k}h[k]e^{i\omega (n-k)}\,d\omega = \int_0^{2\pi} e^{in\omega}S\left(e^{i\omega}\right) \,d\omega \\[2mm]
\Rightarrow 2\pi h[n] = \int_0^{2\pi} e^{in\omega}S\left(e^{i\omega}\right) \,d\omega \\[2mm]
\Rightarrow h[n] = \frac{1}{2\pi}\int_0^{2\pi} e^{in\omega} S\left(e^{i\omega}\right) \,d\omega \\[2mm]
\Rightarrow h[n] = \frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}S(z)\,dz & (z=e^{i\omega}, dz = ie^{i\omega}d\omega) \\[2mm]
\Rightarrow h[n] = -\frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}\frac{g(z)}{f(z)}\,dz \\[2mm]
\end{array}$
This is a perfectly nice integral, since the zeros of $f(z)$ are all on the interior of the unit disk.
$\square$
\vspace{5mm}
\begin{thm}\label{impulse2}
For a single input and output, $h[n]= \Omega_0\delta[n-s] + \sum_j \Omega_j\eta_j^n$
where $\Omega_j = \left\{\begin{array}{ll}
-g_0\prod_k \left(\frac{1}{-\eta_k}\right) &, j=0 \\[2mm]
-g_0\frac{1-|\eta_j|^2}{\eta_j^{s+1}}\prod_{k\ne j}\left(\frac{1-\eta_j\eta_k^*}{\eta_j-\eta_k}\right) &, j\ne0
\end{array}\right.$
\end{thm}
{\it Proof}
From the previous theorem the impulse response is:
\begin{equation}
h[n] = -\frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}\frac{g(z)}{f(z)}\,dz = -\frac{g_0}{2\pi i}\oint_{|z|=1} z^{n-s-1}\prod_k\frac{1-z\eta_k^*}{z-\eta_k}\,dz
\end{equation}
\vspace{5mm}
Using residue calculus we can solve this directly
$\begin{array}{ll}
h[n] \\[2mm]
= -\frac{g_0}{2\pi i}\oint_{|z|=1} z^{n-s-1}\prod_k\frac{1-z\eta_k^*}{z-\eta_k}\,dz \\[2mm]
= -g_0\prod_k \left(\frac{1}{-\eta_k}\right)\delta[n-s] - g_0\sum_j (1-|\eta_j|^2)\prod_{k\ne j}\left(\frac{1-\eta_j\eta_k^*}{\eta_j-\eta_k}\right) \eta_j^{n-s-1} \\[2mm]
= -g_0\prod_k \left(\frac{1}{-\eta_k}\right)\delta[n-s] + \sum_j \left[-g_0\frac{1-|\eta_j|^2}{\eta_j^{s+1}}\prod_{k\ne j}\left(\frac{1-\eta_j\eta_k^*}{\eta_j-\eta_k}\right)\right] \eta_j^n
\end{array}$
$\square$
\vspace{5mm}
So the impulse response is a set of exponentially decaying signals corresponding to the zeros of $f(z)$.
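Both of these results are easy to exercise numerically (a sketch on the two-edge-state toy gadget introduced above, for which $S(z)=e^{i\beta}z^{-2}$ and hence the exact impulse response is $h[n]=e^{i\beta}\delta[n-2]$): the contour integral of theorem \ref{impulse} becomes a Riemann sum over a frequency grid, i.e.\ an inverse discrete Fourier transform of $S$ on the unit circle, and the response to an arbitrary input follows by convolution.
\begin{verbatim}
import numpy as np

beta = 0.4
S = lambda z: np.exp(1j*beta) / z**2     # toy gadget: f = z^2, g = -e^{i beta}

N = 64                                   # frequency grid size
w = 2*np.pi*np.arange(N)/N
h = np.array([np.mean(np.exp(1j*n*w)*S(np.exp(1j*w))) for n in range(N)])
print(np.round(np.abs(h[:5]), 6))        # ~[0, 0, 1, 0, 0]: h[n] = delta[n-2]

x = np.array([1.0, 0.5, -0.25])          # an arbitrary short input signal
y = np.convolve(x, h)[:7]
print(np.round(y, 6))                    # input re-emitted 2 steps later,
                                         # multiplied by the phase e^{i beta}
\end{verbatim}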
\pagebreak
\section{Quantum Sounding}
Since the response to any signal has a closed form depending only on the eigenvalues of ${\bf U}_0$, the most we can hope to extract from scattering data is the spectrum of those eigenvalues. We are now equipped to ask the question ``What can be learned about a graph attached to a runway by means of a signal on that runway?" and even ask the more practical question ``How difficult is it to do so?".
The challenge we face is that the closer $\eta$ is to the unit circle, the more difficult it is to detect. From eq. \ref{compliment} we know that (for a single runway) $S(z) = -\frac{g(z)}{f(z)} = -\frac{g_0}{z^s} \prod_k \frac{1 - z\eta_k^*}{z-\eta_k}$. When $|z|=1$ we can easily verify that $\left| S(z) \right| = 1$. More specifically, for each $k$, $\left| \frac{1 - z\eta_k^*}{z-\eta_k} \right| = 1$. We'll look at each of these individually and since each has modulus 1, we can concern ourselves entirely with their phase.
Here we assume that $0\ll|\eta_k|<1$. When $z\not\approx \eta_k$ we can see that $\frac{1 - z\eta_k^*}{z-\eta_k} = -\eta_k^* \left( \frac{z - \frac{1}{\eta_k^*}}{z-\eta_k} \right) \approx -\eta_k^*$.
Importantly, $\frac{1 - z\eta_k^*}{z-\eta_k}$ is approximately constant for most values of $z$ on the unit circle. This is because $\frac{1}{\eta_k^*} = \frac{\eta_k}{\left|\eta_k\right|^2}\approx\eta_k$, and therefore $\left( \frac{z - \frac{1}{\eta_k^*}}{z-\eta_k} \right)\approx 1$.
$\frac{1 - z\eta_k^*}{z-\eta_k}$ has one pole inside of the unit circle and one zero outside of it. As a result, when we apply the argument principle we find that if $z$ runs around the unit circle in a positively oriented loop, then $\Delta\arg\left( \frac{1 - z\eta_k^*}{z-\eta_k} \right) = -2\pi$.
\vspace{5mm}
It follows then that a zero of $f(z)$ has very little impact on $S(z)$ except when $z$ is within a small neighborhood of that zero; within that neighborhood the phase suddenly jumps by $-2\pi$. We'll now make this a little more rigorous to find the extent of this neighborhood.
\vspace{5mm}
\begin{thm}\label{window}
If $\eta=(1-\delta)e^{i\tau}$ is a root of $f(z)$ and $x[n]=e^{in\theta}$, we find that $\eta$ can only be detected when $|\theta-\tau|=O(\delta)$. Moreover, the phase of the reflection coefficient decreases by $2\pi$ in this neighborhood of $\eta$.
\end{thm}
{\it Proof}
Without loss of generality, we can assume that $\eta=1-\delta$. We define $e^{i\phi} = \frac{1 - z\eta_k^*}{z-\eta_k} = \frac{1 - e^{i\theta}(1-\delta)}{e^{i\theta}-(1-\delta)}$ and quickly find that
$$e^{i\phi} = \left[-1+\frac{\delta^2(1+\cos{(\theta)})}{2(1-\delta)(1-\cos{(\theta)})+\delta^2}\right] + i\frac{(-2\delta+\delta^2)\sin{(\theta)}}{2(1-\delta)(1-\cos{(\theta)})+\delta^2}$$
Clearly, for $\theta\ne0$, $\lim_{\delta\to0}e^{i\phi}=-1$. Define $c=\frac{\theta}{\delta}=O(1)$. The imaginary part of this last expression, for small values of $\delta$ and $\theta$, is
$$\sin{(\phi)} = \frac{(-2\delta+\delta^2)\sin{(c\delta)}}{2(1-\delta)(1-\cos{(c\delta)})+\delta^2} = \frac{-2c\delta^2+O(\delta^3)}{c^2\delta^2+\delta^2 +O(\delta^3)} = -\frac{2c}{c^2+1} + O(\delta)$$
So the window for which $|\phi|<\frac{\pi}{2}$ is approximately $-\delta<\theta<\delta$, and the window for which $|\phi-\pi|>d$ (for which $e^{i\phi}$ differs appreciably from $-1$) is approximately $-\frac{2}{d}\delta<\theta<\frac{2}{d}\delta$.
This is the statement of the theorem.
$\square$
\vspace{5mm}
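The behavior established in the theorem is easy to visualize numerically. The sketch below tracks the unwrapped phase $\phi(\theta)$ of $\frac{1 - z\eta^*}{z-\eta}$ for $\eta = 1-\delta$ and $z = e^{i\theta}$; the value of $\delta$ is an arbitrary example. The total phase change is $-2\pi$, and the window where $|\phi| < \frac{\pi}{2}$ comes out as approximately $(-\delta, \delta)$, as claimed.
\begin{verbatim}
import numpy as np

delta = 1e-3                                   # assumed example value
eta = 1 - delta                                # real, so eta* = eta
theta = np.linspace(-50 * delta, 50 * delta, 4001)
z = np.exp(1j * theta)
phi = np.unwrap(np.angle((1 - z * eta) / (z - eta)))

print((phi[-1] - phi[0]) / (2 * np.pi))        # ~ -1: the full -2 pi jump
window = theta[np.abs(phi) < np.pi / 2]        # where e^{i phi} is far from -1
print(window.min(), window.max())              # ~ (-delta, +delta)
\end{verbatim}
\vspace{5mm}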
In appendix \ref{valve} there is an example of how a graph structure can be used to detect the phase of a reflection coefficient.
\vspace{5mm}
From theorem \ref{polynomial} we know that $b(z)f(z) = \left| {\bf U}_0 -z{\bf I} \right|$, which means that the degree of the polynomial $f(z)$ is less than or equal to the number of edge states in $G$. By theorem \ref{important1}, $S(z) = -\frac{g(z)}{f(z)}$. Since the zeros of $f(z)$ are all within the unit circle and the zeros of $g(z)$ are all outside, we can apply the argument principle to $S(z)$ around the unit circle to conclude that
$$\left|\Delta\arg\left(S(z)\right)\right|\le 2\pi|G|$$
The equality is achieved if there are no bound states. So, with very little effort, we have an algorithm that provides a lower bound for the dimension of a graph's Hilbert space (the number of edge states).
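A minimal version of this counting algorithm is sketched below: sample $S(z)$ around the unit circle and accumulate its winding number. The sketch reuses the toy $S(z)$ defined earlier; in an actual sounding experiment the samples would instead be measured reflection coefficients.
\begin{verbatim}
# Sketch: lower bound on the dimension of the Hilbert space via the
# winding number of S(z) around the unit circle.
import numpy as np

M = 8192
w = 2 * np.pi * np.arange(M) / M
samples = S(np.exp(1j * w))           # toy S from the earlier sketch

# sum of phase increments between consecutive samples, including the
# wrap-around step; each increment lies in (-pi, pi]
winding = np.sum(np.angle(np.roll(samples, -1) / samples)) / (2 * np.pi)
print(abs(int(round(winding))))       # here s + (number of etas) = 3
\end{verbatim}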
\pagebreak
\section{Scattering: Multiple Inputs and Outputs}
Define ${\bf U}_0$ to be the time step operator of a {\it finite} graph, $G$, with some states prepared as either loose inputs or outputs. An input state has no pre-image and an output state has no image. That is, ${\bf U}_0|out\rangle = 0$, and there is no state that ${\bf U}_0$ maps to $|in\rangle$.
If we wish to ``splice" a runway onto the graph we first choose one input state, $|in_j\rangle$, and one output state, $|out_k\rangle$, and define ${\bf U}^{(jk)}$ as ${\bf U}^{(jk)}|1,0\rangle = |in_j\rangle$ and ${\bf U}^{(jk)}|out_k\rangle = |0,1\rangle$. The behavior on the runway is the same regardless of which states on $G$ it connects with, so $\forall j,k$ and $n>0$, ${\bf U}^{(jk)} |n-1,n\rangle = |n,n+1\rangle$ and ${\bf U}^{(jk)} |n+1,n\rangle = |n,n-1\rangle$.
\vspace{5mm}
A $\lambda$-eigenstate of ${\bf U}^{(jk)}$ necessarily takes the form:
\begin{equation}
|\Psi\rangle = \sum_{n=0}^\infty \Big[ \lambda^{n+1} |n+1,n\rangle + S_{jk}(\lambda) \lambda^{-n} |n,n+1\rangle \Big] + |in_j\rangle + \lambda S_{jk}(\lambda) |out_k\rangle + \cdots
\end{equation}
As before we introduce another operator, ${\bf U}^{(jk)}_\alpha \equiv {\bf U}_0 + \alpha |in_j\rangle\langle out_k|$, that reflects back into the graph rather than communicating with the runway. For both of these operators ${\bf U}^{(jk)}_\alpha |out_i\rangle = {\bf U}^{(jk)} |out_i\rangle = 0$, $\forall i\ne k$.
A $\lambda$-eigenstate of ${\bf U}^{(jk)}_\alpha$ necessarily takes the form:
\begin{equation}
|\Psi_\alpha\rangle = |in_j\rangle + \frac{\lambda}{\alpha} |out_k\rangle + \cdots
\end{equation}
If the $\lambda$-eigenstates are identical on $G$, then clearly $\frac{\lambda}{\alpha} = \lambda S_{jk}(\lambda)$ and therefore $S_{jk}(\lambda) = \frac{1}{\alpha}$.
\vspace{5mm}
\begin{thm}
$S_{jk}(\lambda) = -\frac{g_{jk}(\lambda)}{f(\lambda)}$, where $C_{jk}(z) = \left| {\bf U}^{(jk)}_\alpha - z{\bf I} \right| = f(z) + \alpha g_{jk}(z)$ is the characteristic polynomial of ${\bf U}^{(jk)}_\alpha$.
\end{thm}
{\it Proof} If $\lambda$ is an eigenvalue of ${\bf U}^{(jk)}_\alpha$, then $0 = f(\lambda) + \alpha g_{jk}(\lambda)$. By matching the coefficient of the $|out_k\rangle$ state in $|\Psi_\alpha\rangle$ and $|\Psi\rangle$ we find that $S_{jk}(\lambda) = \frac{1}{\alpha} = -\frac{g_{jk}(\lambda)}{f(\lambda)}$.
$\square$
\vspace{5mm}
This is essentially the same as thm. \ref{important1} with one unimportant difference. In this case ${\bf U}^{(jk)}_\alpha$ is no longer unitary, since ${\bf U}^{(jk)}_\alpha|out_\ell\rangle=0$ for $\ell\ne k$. As a result we can no longer say that $|\alpha|=1$; however, this has no impact on the proof. In fact, it is to be expected that for multiple runways $|\alpha|\ge1$. From thm. \ref{ssum} we see that for $|z|=1$, $\sum_k |S_{jk}(z)|^2=1$, so we can conclude that $\left|S_{jk}(z)\right|\le1$ and $|\alpha|=\frac{1}{\left|S_{jk}(z)\right|}\ge1$.
\vspace{5mm}
For any operator ${\bf M}$ we call $\left({\bf M} - z{\bf I}\right)^{-1}$ the ``resolvent" of ${\bf M}$. The resolvent has many fascinating properties \cite{kato}, and here we introduce one more.
\vspace{5mm}
\begin{thm}[The Resolvent Theorem]\label{important2}
$S_{jk}(z) = -\langle out_k| \left({\bf U}_0 - z{\bf I}\right)^{-1} |in_j\rangle$ or stated differently ${\bf S}^{T}(z) = -\left({\bf U}_0 - z{\bf I}\right)^{-1}$.
\end{thm}
{\it Proof} Expanding the determinant in the $\alpha$ element of ${\bf U}^{(jk)}_\alpha$, we have that $\left|{\bf U}^{(jk)}_\alpha-z{\bf I}\right| = \left|{\bf U}_0 - z{\bf I}\right| + \alpha\left|{\bf U}_\alpha^{(jk)} - z{\bf I}\right|_{<j,k>} = \left|{\bf U}_0 - z{\bf I}\right| + \alpha\left|{\bf U}_0-z{\bf I}\right|_{<j,k>}$, where the sub-index indicates the cofactor of the $\alpha$ element (the $|in_j\rangle\langle out_k|$ element) in ${\bf U}_\alpha^{(jk)}$.
So, $\left\{\begin{array}{ll}
f(z) = \left|{\bf U}_0 - z{\bf I}\right| \\
g_{jk}(z) = \left|{\bf U}_0 - z{\bf I}\right|_{<j,k>} \end{array}\right.$
$\left|{\bf U}^{(jk)}_\alpha-z{\bf I}\right|_{<j,k>} = \left|{\bf U}_0-z{\bf I}\right|_{<j,k>}$, since this cofactor is not a function of $\alpha$ (that row and column are removed). It is a known property of cofactors that if ${\bf B}$ is the cofactor matrix of ${\bf A}$ (that is, every element of ${\bf B}$ is the corresponding cofactor of ${\bf A}$), then $\left|{\bf A}\right|{\bf A}^{-1} = {\bf B}^{T}$. It follows that:
\begin{equation}\label{transpose}
S_{jk}(z) = -\frac{g_{jk}(\lambda)}{f(\lambda)} = -\frac{\left|{\bf U}_0-z{\bf I}\right| \left({\bf U}_0-z{\bf I}\right)^{-1}_{kj}}{\left|{\bf U}_0-z{\bf I}\right|} = -\left({\bf U}_0-z{\bf I}\right)^{-1}_{kj}\end{equation}
This applies only to those edges that are ``prepared'' to be inputs and outputs as described at the beginning of this section; that is, $\alpha$ needs to be the only nonzero element in both its row and its column. Replacing an arbitrary element of ${\bf U}_0$ with $\alpha$ destroys the unitarity of the time step operator, ${\bf U}$.
\vspace{5mm}
Equation \ref{transpose} says that when you want $S_{jk}(\lambda)$, the scattering coefficient between $|in_j\rangle$ and $|out_k\rangle$, you can find it in the $|out_k\rangle\langle in_j|$ element of $\left({\bf U}_0-z{\bf I}\right)^{-1}$. In other words:
\begin{equation}
S_{jk}(z) = -\langle out_k| \left({\bf U}_0 - z{\bf I}\right)^{-1} |in_j\rangle
\end{equation}
$\square$
\vspace{5mm}
The above theorem applies to the single runway case as well.
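Since eq. \ref{transpose} is, at bottom, an instance of the cofactor (Cramer's rule) identity, it can be checked on any matrix standing in for ${\bf U}_0$. The sketch below does exactly that for a random complex matrix; the size $n$, the indices $j,k$ and the evaluation point $z$ are arbitrary choices.
\begin{verbatim}
# Sketch: -cofactor_{jk}/det equals the (k, j) entry of the negative
# resolvent, for an arbitrary matrix standing in for U_0.
import numpy as np

rng = np.random.default_rng(0)
n, j, k = 5, 1, 3
U0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
z = 0.7 + 0.2j

A = U0 - z * np.eye(n)
resolvent_entry = np.linalg.inv(A)[k, j]

minor = np.delete(np.delete(A, j, axis=0), k, axis=1)  # drop row j, col k
cofactor = (-1) ** (j + k) * np.linalg.det(minor)

print(np.isclose(-cofactor / np.linalg.det(A), -resolvent_entry))  # True
\end{verbatim}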
\subsection{Particulars for Multiple Runways}
Once again, define ${\bf U}_\alpha^{(jk)} \equiv {\bf U}_0 + \alpha |in_j\rangle\langle out_k|$ and $C_{jk}(z) = \left|{\bf U}_\alpha^{(jk)} - z{\bf I}\right| = b(z)\left(f(z) + \alpha g_{jk}(z)\right)$.
The zeros of $g_{jk}(z)$ are {\it not} fixed by the zeros of $f(z)$, the way they are in the single-runway case, and are not necessarily outside of the unit circle. We can see an example of this in appendix \ref{square}.
As in thm. \ref{polynomial}, those eigenvalues with modulus 1 correspond to bound eigenstates, and are factored out as $b(z)$. However, in the general case we find that $g_{jk}(z)$ and $f(z)$ may have zeros in common. If $|\psi\rangle$ is an eigenstate such that $\langle out_k|\psi\rangle = \langle in_j|\psi\rangle = 0$, then it is independent of $\alpha$; however, it may not necessarily be a bound state, since there are other runways that it can be ``leaking'' out of. If an eigenvalue is independent of $\alpha$, then it must correspond to a common factor of $f(z)$ and $g_{jk}(z)$, and if the associated eigenstate has a component on any of the $|out\rangle$ states, then the modulus of its eigenvalue must be less than 1. In fact, in a derivation nearly identical to that found in theorem \ref{polynomial} we find that the eigenvalue, $\eta$, of an eigenstate of ${\bf U}_0$, $|\psi\rangle$, satisfies $|\eta|^2 = 1 - \sum_{k=1}^M \left|\langle out_k|\psi\rangle\right|^2 < 1$.
These are the zeros of $f(z)$. If $\langle out_k|\psi\rangle=0$, then $\eta$ is also a zero of $g_{jk}(z)$ and $|\psi\rangle$ is an eigenstate of ${\bf U}_\alpha^{(jk)}$, $\forall j$.
\vspace{5mm}
\begin{thm}\label{ssum}
If $|z|=1$, then $\sum_{k}\left|S_{jk}(z)\right|^2 = 1$.
\end{thm}
{\it Proof} The $\lambda$-eigenstate for a graph $G$ attached to $M$ runways with a signal coming in from the $j$th runway takes the form
$$|\Psi\rangle=\sum_{n=0}^\infty \lambda^{n+1} |n+1,n\rangle_j + \sum_{k=1}^M\left[S_{jk}(\lambda)\sum_{n=0}^\infty \lambda^{-n} |n,n+1\rangle_k\right] + |G\rangle + |in_j\rangle + \sum_{k=1}^M \lambda S_{jk}(\lambda)|out_k\rangle$$
The subscript on the runway states indicates to which runway they correspond. This is an eigenstate, so ${\bf U}|\Psi\rangle = \lambda|\Psi\rangle$, and it follows that ${\bf U}\left(|G\rangle + |in_j\rangle\right) = \lambda\left(|G\rangle + \sum_{k=1}^M \lambda S_{jk}(\lambda)|out_k\rangle\right)$. Being unitary, ${\bf U}$ is an isometry, and therefore
$\begin{array}{ll}
\left||G\rangle + |in_j\rangle\right| =\left|\lambda|G\rangle + \lambda^2\sum_{k=1}^M S_{jk}(\lambda)|out_k\rangle\right| \\[2mm]
\Rightarrow 1 + \langle G|G\rangle = |\lambda|^2 \langle G|G\rangle + |\lambda|^4 \sum_{k=1}^M \left|S_{jk}(\lambda)\right|^2
\end{array}$
Clearly if $|\lambda|=1$, then the statement of the theorem follows immediately.
\vspace{5mm}
It may seem worrisome that $\lambda$ isn't assumed to have modulus 1 despite being the eigenvalue of a unitary operator, but keep in mind that $|\Psi\rangle$ isn't normalizable and ${\bf U}$ is operating on an infinite dimensional Hilbert space. However, on any {\it finite} state we can still make use of the fact that unitary operations are isometries.
$\square$
\subsection{Arbitrary Signals for Multiple Runways}
As before, define the input $x[n]$ (output $y[n]$) as the amplitude on the state $|1,0\rangle$ (state $|0,1\rangle$) at time step $n$ (time step $n+1$). The input can be encoded onto the runway in an initial state of the form $\sum_{n=0}^\infty x[n] |n+1,n\rangle$. If $x[n]=\lambda^n$ for all $n$, then $y[n] = S_{jk}(\lambda)x[n]$.
Applying exactly the same proof used in the single input/output case (thm. \ref{impulse}) we find that
\begin{equation}
h_{jk}[n] = \frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}S_{jk}(z)\,dz = -\frac{1}{2\pi i}\oint_{|z|=1} z^{n-1}\frac{g_{jk}(z)}{f(z)}\,dz
\end{equation}
where $h_{jk}[n]$ is the response produced by the $k$th output to an impulse received from the $j$th input.
\vspace{5mm}
As in the single-runway case, we find that $S_{jk}(z) = \sum_{n=0}^{\infty} h_{jk}[n]z^{-n}$ (see eq. \ref{simpulse}). This gives us a second proof of, and a little insight into, theorem \ref{important2}.
In general, the impulse response is $h_{jk}[n] = \langle 0,1|\left({\bf U}^{(jk)}\right)^{n+1}|1,0\rangle$ (see eq. \ref{ref2}), where ${\bf U}^{(jk)}|1,0\rangle = |in_j\rangle$ and ${\bf U}^{(jk)}|out_k\rangle = |0,1\rangle$. The way we have defined the graph (such that it includes ``in" and ``out" states) implies that $h_{jk}[0]=h_{jk}[1]=0$, and therefore $S_{jk}(z) = \sum_{n=2}^{\infty} h_{jk}[n]z^{-n}$. In what follows we'll keep $h_{jk}[1]$ in the sum; this changes nothing but makes the derivation a little smoother.
\vspace{5mm}
$\begin{array}{ll}
S_{jk}(z) \\[2mm]
= \sum_{n=1}^{\infty} h_{jk}[n]z^{-n} \\[2mm]
= \sum_{n=1}^{\infty} z^{-n}\langle 0,1|\left({\bf U}^{(jk)}\right)^{n+1}|1,0\rangle \\[2mm]
= \sum_{n=1}^{\infty} z^{-n}\langle out_k|\left({\bf U}^{(jk)}\right)^{n-1}|in_j\rangle \\[2mm]
= \sum_{n=1}^{\infty} z^{-n}\langle out_k|{\bf U}_0^{n-1}|in_j\rangle \\[2mm]
= \sum_{n=0}^{\infty} z^{-n-1}\langle out_k| {\bf U}_0^n |in_j\rangle \\[2mm]
= \langle out_k| \left[ \sum_{n=0}^{\infty} z^{-n-1} {\bf U}_0^n \right] |in_j\rangle \\[2mm]
= \langle out_k| \left[ \frac{1}{z} \sum_{n=0}^{\infty} \left( \frac{1}{z} {\bf U}_0 \right)^n \right] |in_j\rangle \\[2mm]
= \langle out_k| \left[ \frac{1}{z} \left( {\bf I} - \frac{1}{z} {\bf U}_0 \right)^{-1} \right] |in_j\rangle \\[2mm]
= \langle out_k| \left[ - \left( {\bf U}_0 - z{\bf I} \right)^{-1} \right] |in_j\rangle \\[2mm]
\end{array}$
\vspace{5mm}
So, we can either think of theorem \ref{important2} as a result of the nature of the characteristic polynomial of ${\bf U}_\alpha$, or as a symptom of the fact that the power series of the negative resolvent, $-\left( {\bf U}_0 - z{\bf I} \right)^{-1}$, is a sum of impulse responses multiplied by $z^{-n}$, which is equal to the frequency response, $S(z)$.
\pagebreak
\section*{Acknowledgements}
This research was supported by a grant from the John Templeton Foundation and would not have been possible without many enlightening conversations with Professor Mark Hillery.
\pagebreak
\section{Introduction}
Collective spatiotemporal patterns emerging in dynamical networks are determined by the interplay of the dynamics of the nodes and the nature of the interactions among the nodes. So it is of utmost relevance to ascertain what properties of the nodes of the network impact collective dynamics. This also helps to address the important reverse question: perturbation of what class of nodes in the network has the most significant effect on the resilience of the network? Understanding this will allow us to determine which nodes render the network most susceptible to external influences. Alternately, it will suggest which nodes to protect more stringently from perturbations in order to protect the dynamical robustness of the entire network. As a test-bed for understanding this we consider the collective dynamics of a group of coupled bi-stable elements. Bi-stable systems are relevant in a variety of fields, ranging from relaxation oscillators\cite{relax_ocs} and multivibrators\cite{multiVib}, to light switches\cite{optical_bistable} and Schmitt triggers. Further, bi-stability is of utmost importance in digital electronics, where binary data is stored using bi-stable elements.
Specifically, in this work we will explore bi-stable elements, connected in different network topologies, ranging from regular rings to random scale-free and star networks. We focus on the response of this network to localized perturbations on a sub-set of nodes. The central question we will investigate here is the following: {\em what characteristics of the nodes (if any) significantly affect the global stability?} So we will search for discernible patterns amongst the nodes that aid the maintenance of the stability of the collective dynamics of the network on one hand, and the nodes that rapidly destroy it on the other. In particular, we consider three properties of the nodes: degree, betweenness centrality and closeness centrality. Since these features of a node determine the efficiency of information transfer originating from it, or through it, they are expected to influence the propagation of perturbations emanating from the node.
The normalized degree of a node $i$ in an undirected network is given by the number of neighbors that are directly connected to the node, scaled by the total number of nodes $N$, and is denoted by $k_i$. So a high-degree node indicates that there is direct contact with a larger set of nodes. The normalized betweenness centrality of a node $i$ \cite{chen,pre_hong} is given as:
$$b_i= \frac{2}{(N-1)(N-2)} \sum_{s,t\in I}\frac{\sigma(s,t|i)}{\sigma(s,t)}$$
where $I$ is the set of all nodes, $\sigma(s,t)$ is the number of shortest paths between nodes $s$ and $t$ and $\sigma(s,t|i)$ is the number of shortest paths passing through the node $i$. So if node $i$ has high betweenness centrality, it lies on many shortest paths, and thus there is a high probability that a communication from $s$ to $t$ will go through it. The normalized closeness centrality is defined as:
$$c_i=\frac{N-1}{\sum_{j} d(j,i)}$$
where $d(j,i)$ is the shortest-path distance between node $i$ and node $j$ in the graph. Namely, it is the inverse of the average length of the shortest paths between the node and all other nodes in the network\cite{bavelas}. So high closeness centrality indicates short communication paths to other nodes in the network, as there is a minimal number of steps to reach other nodes.
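For concreteness, these three normalized measures can be computed with standard tools; the sketch below uses the networkx library on an example network (the values of $N$ and $m$ are illustrative and not tied to the results that follow).
\begin{verbatim}
# Sketch: the three node measures used in this work, via networkx.
import networkx as nx

N, m = 100, 2
G = nx.barabasi_albert_graph(N, m, seed=1)

k = {i: d / N for i, d in G.degree()}   # normalized degree, as defined above
b = nx.betweenness_centrality(G)        # normalized as in b_i above
c = nx.closeness_centrality(G)          # (N-1) / sum_j d(j,i), as in c_i
\end{verbatim}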
In this work we will explore the extent to which the features of the nodes given above influence the recovery of a network from large localized perturbations. In order to gauge the global stability and robustness of the collective state of this network, we will {\em introduce a variant of the recent framework of multi-node basin stability} \cite{mitra}. In general, the basin stability of a particular attractor of a multi-stable dynamical system is given by the fraction of perturbed states that return to the basin of attraction of the dynamical state under consideration. In our variant of this measure, we consider an initial state where all the bi-stable elements in the network are in the same well, and we will refer to this as a {\em synchronized state}. So a synchronized state here does not imply complete synchronization. Rather it implies a collective state where the states of the nodes are confined to the neighbourhood of the same attracting stable state, i.e. lie within the basin of attraction of one of the two attracting states. We then perturb a specific number of nodes of a prescribed type, with the perturbations chosen randomly from a given subset of the state space. The multi-node basin stability (BS) is then defined as the fraction of such perturbed states that manage to revert back to the original state from these localized perturbations. Namely, multi-node BS reflects the fraction of the volume of the state space of a sub-set of nodes that belongs to the basin of attraction of the synchronized state. So the importance of multi-node BS stems from the fact that it determines the probability that the system remains in the basin of attraction of the synchronized state when random perturbations affect a specific number of nodes. This allows us to extract the contributions of individual nodes to the overall stability of the collective behaviour of the dynamical network. Further, since one perturbs subsets of nodes with certain specified features, our variant of the concept of multi-node BS will suggest which nodal properties make the network more vulnerable to attack.
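A bare-bones Monte Carlo estimator of this measure might look as follows; the perturbation range, the number of trials, and the convention that the unperturbed network sits near the $x=-1$ well are illustrative choices, and the integrator \texttt{evolve} is sketched after Eqn.~\ref{main} below.
\begin{verbatim}
# Sketch: multi-node basin stability as a Monte Carlo fraction.
import numpy as np

def multi_node_bs(evolve, nodes, N, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x0 = -np.ones(N) + 0.01 * rng.normal(size=N)        # all in one well
        x0[nodes] = rng.uniform(0.0, 2.0, size=len(nodes))  # kick chosen nodes
        xf = evolve(x0)                  # integrate to a late time
        hits += np.all(xf < 0)           # did every node return to the well?
    return hits / trials
\end{verbatim}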
Specifically we consider the system of $N$ diffusively coupled bi-stable elements, whose dynamics is given as:
\begin{equation}
\dot{x_i} = F(x_i) + C \frac{1}{K_i} \sum_j (x_j - x_i) = F(x_i) + C (\langle x_i^{nbhd} \rangle - x_i)
\label{main}
\end{equation}
where $i$ is the node index ($i = 1, \dots N$) and $C$ is the coupling constant reflecting the strength of coupling. The set of $K_i$ neighbours of node $i$ depends on the topology of the underlying connectivity, and this form of coupling is equivalent to each site evolving diffusively under the influence of a ``local mean field'' generated by the coupling neighbourhood of each site $i$, $\langle x_i^{nbhd} \rangle = \frac{1}{K_i} \sum_j x_j$, where $j$ is the node index of the neighbours of the $i^{th}$ node, with $K_i$ being the total number of neighbours of the node.
The function $F(x)$ gives rise to a double well potential, with two stable states $x^*_-$ and $x^*_+$. For instance one can choose
$$F(x) = x - x^3$$
yielding two stable steady states $x^*_{\pm}$ at $+1$ and $-1$, separated by an unstable steady state at $0$. Note that the synchronized state here is a fixed point, either $x^*_-$ and $x^*_+$, for all the nodes, i.e. $x_i$ is equal to $x^*_-$ or $x^*_+$, for all $i$.
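For completeness, a minimal integrator for Eqn.~\ref{main} with $F(x)=x-x^3$ is sketched below, using simple Euler stepping (adequate for this smooth, strongly damped flow); the adjacency matrix $A$, the integration time and the step size are illustrative choices. Combined with the estimator above, \texttt{multi\_node\_bs(make\_evolve(A), nodes, N)} gives a rough numerical analogue of the basin-stability computations reported below.
\begin{verbatim}
import numpy as np

def make_evolve(A, C=1.0, T=200.0, dt=0.01):
    """A: adjacency matrix (N x N, 0/1); returns an integrator x0 -> x(T)."""
    K = A.sum(axis=1)                    # number of neighbours of each node
    def evolve(x0):
        x = x0.astype(float)
        for _ in range(int(T / dt)):
            mean_field = (A @ x) / K     # local mean field <x_i^nbhd>
            x = x + dt * (x - x**3 + C * (mean_field - x))
        return x
    return evolve

# Example: a ring of N = 100 elements, each coupled to its two neighbours.
N = 100
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1
evolve = make_evolve(A, C=1.0)
\end{verbatim}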
We first investigate the two limiting network cases: (i) {\em Ring}, where all nodes have the same degree, closeness and betweenness centrality, and (ii) {\em Star network}, where the central (hub) node has the maximum normalized degree ($k_{hub} = 1$), betweenness centrality ($b_{hub} \sim 1$), and closeness centrality ($c_{hub} = 1$), while the rest of the nodes, namely the peripheral nodes (``leaves'') have very low degree ($k_{peri} \sim 0$ for large networks), betweenness centrality ($b_{peri}=0$) and closeness centrality $c_{peri} \sim 0.5$. So on one hand we have the Ring which is completely homogeneous, and on the other hand we have the Star network where the difference in degree, closeness and betweenness centrality of the hub and the peripheral nodes is extremely large. Exploring these limiting cases allows us to gain understanding of the robustness of the network to large perturbations affecting nodes with different properties.
As indicated earlier, to gauge the effect of different nodal features on the robustness of the dynamical state of the network, we do the following: we first consider a network close to a stable synchronized state, namely one where the states $x_i$ of all the nodes $i$ have a small spread in values centered around $x^*_-$ or $x^*_+$, i.e. all elements are confined to the same well. We then give a {\em large perturbation} to a small fraction of nodes, denoted by $f$. This strong perturbation typically kicks the state of the perturbed nodes to the basin of attraction of the other well. We then ascertain whether all the elements return to their original wells after this perturbation, i.e. if the perturbed system recovers completely to the initial state. We repeat this ``experiment'' over a large sample of perturbed nodes and perturbation strengths, and find the fraction of times the system manages to revert to the original state. This measure of global stability is then a variant of multi-node Basin Stability and it is indicative of the robustness of the collective state to perturbations localized at particular nodes of a certain type in the network.\\
{\bf Dynamics of a Ring of Bistable Systems}\\
We first investigate the spatiotemporal evolution of a ring of bistable elements, all of whose states are confined to the same well, other than a few nodes that experience a large perturbation which pushes their state to the basin of the other well.
We find that even when the fraction of perturbed nodes is very small, these perturbed nodes are unable to return to the original well. That is, the elements in the Ring are unable to drag the few perturbed nodes back to the well of the majority of the elements, suggesting that the Ring is not robust against such localized perturbations.
Next we attempt to discern the effect of coupling on the robustness of the dynamics. Fig.~\ref{bsvsc_ring}(a) shows the multi-node basin stability for this system, as the coupling strength is increased in the range $0$ to $2$, for clusters of perturbed nodes with $f$ ranging from $0.01$ to $0.08$. It is evident from the basin stability of the system, that there is a {\em sharp transition} from zero basin stability, namely the situation where {\em no} perturbed state returns to the original state, to basin stability close to one, namely where {\em all} sampled perturbed states return to the original state. This indicates that the {\em system recovers from large localized perturbations more readily if it is strongly coupled}. Further, the figure also demonstrates the extreme sensitivity of basin stability to the number of nodes being perturbed. We find that the system fails to return to the original state, even at very high coupling strengths, when more than 5\% of the nodes experience perturbations. For instance, Fig.~\ref{bsvsc_ring}(a) shows the case of a single perturbed node (i.e. $f=0.01$), where the entire network recovers for coupling strengths stronger than approximately $0.2$. In contrast, for $f=0.08$, where a cluster of $8$ nodes are perturbed in the Ring of $100$ elements, there is zero basin stability in the entire coupling range. So a Ring loses its ability to return to the original state rapidly with increasing number of perturbed nodes.
For very small $f$, the entire network returns to its original state, and basin stability is close to $1$. On increasing $f$ one observes that there exists a minimum fraction, which we denote as $f_{crit}$, after which the basin stability sharply declines from $1$. So $f_{crit}$ indicates the minimum fraction of nodes one typically needs to perturb in order to destroy the collective state where all elements are in the same well. We find that high coupling strengths increase $f_{crit}$. For instance, $f_{crit} \approx 0.02$ for $C=0.5$ and $f_{crit} \approx 0.04$ for $C=1$. So for stronger coupling, the bulk of the elements are capable of pulling the perturbed nodes back to original well, increasing the resilience of the network.
Due to the structure of the ring, the stability of the system with respect to localized perturbations depends on whether the perturbed nodes are contiguous and occur in a cluster (cf. the case in Fig.~\ref{bsvsc_ring}(a)) or randomly spread over the ring, where the locations of the perturbed nodes are uncorrelated. Fig.~\ref{bsvsc_ring}(b) shows the multi-node basin stability when the perturbed nodes are chosen randomly, for different values of coupling. We observe that the system is more stable here, as compared to the case when nodes are perturbed in a cluster, namely {\em perturbations at random locations in a ring allow the system to recover its original dynamical state more readily than perturbations on a cluster of contiguous nodes}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\linewidth]{Ring_n=100_k=2_pc=1-b,2-g,8-r_c=1}
(a)
\includegraphics[angle=270, width=0.9\linewidth]{Ring_n=100_k=2_f=0.08_randvsClus}
(b)
\caption{Dependence of Multi-Node Basin Stability on coupling strength, for a ring of bistable elements given by Eqn.~\ref{main}, with the fraction of perturbed nodes $f$ equal to $0.01$ (blue), $0.02$ (green) and $0.08$ (red). Here the size of the ring is $N=100$, and the size of the coupling neighbourhood is $k=2$, namely each site couples to its two nearest neighbours. Panel (a) shows the case where the perturbed nodes occur in clusters, with the inset in (a) showing the dependence of multi-node basin stability on the fraction of perturbed nodes $f$ for different system sizes $N$, for $C=1$. Panel (b) shows the multi-node basin stability for the case of perturbations on randomly located nodes, for $f=0.08$. The case of perturbation in clusters (orange) is also shown for reference, for the same fraction of perturbed nodes.}
\label{bsvsc_ring}
\end{figure}
Further, the inset in Fig.~\ref{bsvsc_ring}(a) shows the dependence of multi-node basin stability on the fraction of perturbed nodes $f$ for different system sizes $N$. We find that $f_{crit} \sim \frac{C}{N}$, implying that $f_{crit} \rightarrow 0$ as system size $N \rightarrow \infty$. This indicates that in a very large Ring, even the smallest finite fraction of perturbed nodes can disturb the Ring from its original steady state. So one can conclude that the synchronized state in the Ring is very susceptible to destruction, as only very few nodes in the system need to be perturbed in order to push the system out of the original state. Namely, in a ring of bi-stable elements, {\em the collective state where all elements are in the same well is a very fragile state.} \\
{\bf Dynamics of a Star Network of Bistable Systems}\\
Now we study the spatiotemporal evolution of bistable elements connected in a star configuration. Here the central hub node has the maximum degree, betweenness and closeness centrality, while the rest of the nodes, namely the peripheral leaf nodes, have very low degree, betweenness and closeness centrality. Namely, in this network the difference in degree, closeness and betweenness centrality of the hub and the peripheral nodes is extremely large. So this network offers a good test-bed to investigate the correlation between specific properties of a node and the resilience of the network to large localized perturbations at such nodes.
Figs.~\ref{Starspt}a-b display the dynamics for two illustrative cases. In Fig.~\ref{Starspt}(a), {\em only the hub node is perturbed} in the star network consisting of $100$ elements. We notice that this {\em single} perturbed node pulls {\em all} the other nodes of the network away from their original state. So the star network is {\em extremely vulnerable to perturbations at the hub}, and cannot typically recover from disturbances to the state of the hub, even if all the other nodes are unperturbed. On the other hand, Fig.~\ref{Starspt}(b) shows what ensues when a {\em large number of peripheral nodes are perturbed}. Now, even when as many as $90$ nodes are perturbed, namely 90\% of the network experiences a disturbance in its state, the entire network still manages to recover to its original state. This dramatic difference in the outcome of perturbations clearly illustrates how sensitively the robustness of a dynamical state depends on the degree, closeness and betweenness centrality of the perturbed node.
\begin{figure}[htb]
\centering
\includegraphics[width=\plotSize]{Star_n=100_h_pc=1}
\includegraphics[width=\plotSize]{Star_n=100_l_pc=90}
\hspace{5cm} (a) \hfill (b) \hspace{5cm}
\caption{Time evolution of $100$ bistable elements coupled in star configuration, given by Eqn.~\ref{main}, with coupling strength $C = 1$. In (a) only the hub node is perturbed; in (b) $90$ peripheral nodes are perturbed.}
\label{Starspt}
\end{figure}
Next we examine the multi-node basin stability of the network, for the fraction $f$ of perturbed nodes ranging from $1/N$ (namely a single node in Fig.~\ref{bsvsc_star}(a)) to $f \sim 1$, namely the case where nearly all nodes in the system are perturbed. As evident from Fig.~\ref{bsvsc_star}(b), when only the peripheral nodes are perturbed, even for values of $f$ as high as $0.7$, there is no discernible difference in the basin stability, which remains close to $1$. This implies that even when more than half the nodes in the network are perturbed, the entire system almost always recovers to the original state. In contrast, Fig.~\ref{bsvsc_star}(a) shows the single-node basin stability for the case of the hub node being perturbed, where the basin stability is clearly drastically reduced and approaches zero very quickly. It is clear that just a {\em single} node is enough to destroy the stability of the network, if that node has very high degree, closeness and betweenness centrality, such as the hub node. These quantitative results are consistent with the qualitative spatiotemporal patterns observed in Fig.~\ref{Starspt}.
\begin{figure}[htb]
\centering
\includegraphics[angle=270, width=0.9\linewidth]{Star_n=100}\\(a)\\
\includegraphics[width=0.9\linewidth]{Star_n=100_c=1}\\(b)
\caption{Multi-node Basin Stability vs coupling strength for a Star network of size $N=100$: (a) the hub node is perturbed (blue) and a single peripheral node is perturbed (green); (b) Multi-node Basin Stability vs number of perturbed nodes in the Star network of bistable elements. Here the size of the network is $N=100$ and the coupling strength is $C=1$. The blue curve represents the case where only peripheral nodes are perturbed, while green represents the case where the hub is perturbed along with peripheral nodes.}
\label{bsvsc_star}
\end{figure}
Further, Fig.~\ref{bsvsc_star}(b) shows the decline in Multinode Basin Stability with increasing fraction of perturbed nodes $f$. Interestingly, when only the peripheral nodes are perturbed, Basin Stability is close to one even when a very large fraction of nodes in the system are perturbed, and $f_{crit} \approx 0.7$. So when only peripheral nodes are perturbed, the perturbed nodes manage to return to their original state, {\em even when a majority of nodes in the system have been pushed to the basin of attraction of the other state}. However, when the perturbed nodes include the central hub node, the basin stability declines rapidly with increasing $f$, and $f_{crit} \approx 1/N $. So our analysis reveals how significant the degree, closeness and betweenness of nodes are in determining the resilience of the network. In fact, very clearly, the {\em hub node holds the key to the maintenance of the collective state}.
We can rationalize the above results from the dynamics of the coupled system as follows: the influence of the neighbours on a particular node is through the local mean field generated by the neighbouring nodes. Now it is clear that if a node has few neighbours, the influence of a perturbation on one of its neighbours will be very large. On the other hand, if a node is connected to many other nodes, as is typically true of nodes with high degree or betweenness centrality, such as a hub in a star network, the influence of perturbations on a node in its neighbourhood is scaled by a factor of $1/k$, where $k$ is the number of neighbours of the node. This implies that the effect of the peripheral nodes on the hub is much smaller than that of the hub on the periphery. Namely, the hub affects all peripheral nodes strongly, with the coupling term being of $O(1)$. However, a perturbation on a peripheral node affects the hub only through a coupling of $O(1/N)$, which is vanishingly small for large networks. Further, the peripheral nodes do not affect each other directly, but only through perturbations propagating to the hub, while the hub simultaneously affects all peripheral nodes.\\
{\bf Dynamics of a Random Scale-Free Network of Bistable Systems}\\
We will now go on to explore Random Scale-Free (RSF) Networks of bi-stable dynamical elements. In particular, we construct these networks via the Barabasi-Albert preferential attachment algorithm, with the number of links of each new node denoted by the parameter $m$ \cite{scalefree}. The network is characterised by a fat-tailed degree distribution. Specifically we will display results for networks of size $N=100$, with $m=1$ and $m=2$. Figs.~\ref{RSFspt}a-b show two contrasting representative cases where (a) twenty nodes with the highest betweenness centrality are perturbed, and (b) twenty nodes with the lowest betweenness centrality are perturbed. It is observed that perturbation on nodes with high betweenness centrality destabilises the entire network, and the perturbed nodes rapidly drag all the other nodes to a different well. This is evident from the switched colors of the asymptotic state in Fig.~\ref{RSFspt}a. On the other hand, when the perturbed nodes have low betweenness centrality, the network recovers quickly from the perturbation and reverts to the original well, as clearly seen in Fig.~\ref{RSFspt}b. These completely different outcomes occur even though the {\em number of perturbed nodes is the same} in both cases, thereby clearly illustrating that nodes with high betweenness centrality have much stronger influence on the global stability of the system.
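The construction of these networks and the selection of perturbation targets can be sketched as follows; the seed and the fraction $f$ are illustrative, and the resulting adjacency matrix can be fed to the integrator given earlier.
\begin{verbatim}
# Sketch: Barabasi-Albert network and the f*N highest-betweenness targets.
import networkx as nx
import numpy as np

N, m, f = 100, 2, 0.2
G = nx.barabasi_albert_graph(N, m, seed=2)
b = nx.betweenness_centrality(G)
targets = sorted(G.nodes, key=lambda i: b[i], reverse=True)[:int(f * N)]
A = nx.to_numpy_array(G)          # adjacency matrix for make_evolve above
\end{verbatim}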
\begin{figure}[htb]
\centering
\includegraphics[width=\plotSize]{RSF_n=100_k=2_h_pc=20}
\includegraphics[width=\plotSize]{RSF_n=100_k=2_l_pc=20}
\hspace{5cm} (a) \hfill (b) \hspace{5cm}
\caption{Time evolution of $100$ bistable elements coupled in a Random Scale-Free network with $m=2$, given by Eqn.~\ref{main}, with coupling strength $C = 1$. In (a) $20$ nodes of highest betweenness centrality are perturbed; in (b) $20$ nodes of lowest betweenness centrality are perturbed. Here the original state of the networks has all nodes in the negative well, i.e. all $x_i < 0$.}
\label{RSFspt}
\end{figure}
We will now present the dependence of the global stability of the collective dynamics on different centrality measures in this heterogeneous network, quantitatively, through multi-node basin stability measures. In particular, in order to explore the correlation between a given centrality measure of the nodes and the resilience of the system, we will estimate the multi-node basin stability under perturbations on sub-sets of nodes with increasing (or decreasing) values of the centrality under consideration. That is, we order the nodes according to the centrality we are probing, and consider the effect of perturbations on fraction $f$ of nodes with the highest (or lowest) centrality.
The influence of perturbations on nodes with the highest and lowest betweenness, closeness and degree centrality in a Random Scale-Free network is displayed in Figs.~\ref{rsf1}a-c. The broad trends are similar for all three centrality measures, and it is clearly evident that when nodes with the highest betweenness, closeness and degree centrality are perturbed, multi-node basin stability falls drastically. On the other hand, perturbing the same number of nodes of low centrality leaves the basin stability virtually unchanged. Further, when nodes of low centrality are perturbed, for sufficiently high coupling strengths, the network almost always recovers to its original state, yielding a basin stability of $1$. So one can conclude that perturbing nodes with high betweenness, closeness and degree centrality destroys the synchronized state readily, while perturbing nodes of low centrality allows the perturbed nodes to return to the original state, thereby restoring the synchronized state. For reference, Fig.~\ref{rsf1} also shows the basin stability of a network where the perturbed nodes are randomly chosen, corresponding to {\em random attacks} on a subset of nodes. Clearly, a {\em targeted attack} on nodes with high centrality can destroy the collective dynamics much more efficiently than random attacks.
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\linewidth]{RSF_n=100_m=2_pc=20_btc}
\includegraphics[width=0.3\linewidth]{RSF_n=100_m=2_pc=20_clc}
\includegraphics[width=0.3\linewidth]{RSF_n=100_m=2_pc=20_deg}
\includegraphics[width=0.3\linewidth]{RSF_n=100_m=2_c=1_btc}
\includegraphics[width=0.3\linewidth]{RSF_n=100_m=2_c=1_clc}
\includegraphics[width=0.3\linewidth]{RSF_n=100_m=2_c=1_deg}
\hspace{2cm} (a) \hfill (b) \hfill (c) \hspace{2cm}
\caption{(a) Dependence of the Multinode Basin Stability of Random Scale-Free networks of size $N=100$, with $m=2$, on coupling strength $C$, with $f=0.2$ (top panels) and on the fraction $f$ of perturbed nodes, with $C=1$ (bottom panels). In the panels, three cases are shown. In the first case, the perturbed nodes are chosen at random (green curves). In the second case (red curves) the perturbed nodes are chosen in descending order of (a) betweenness centrality, (b) closeness centrality and (c) degree (i.e. the perturbed nodes are the ones with the highest $b$, $c$ or $k$ centrality measures). In the third case (blue curves) the perturbed nodes are chosen in ascending order of (a) betweenness centrality, (b) closeness centrality and (c) degree (i.e. the perturbed nodes are the ones with the lowest $b$, $c$ or $k$ centrality measures).}
\label{rsf1}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\linewidth]{btc}
\includegraphics[width=0.3\linewidth]{clc}
\includegraphics[width=0.3\linewidth]{deg}
\includegraphics[width=0.3\linewidth]{RSF_n=100_btch_c=1}
\includegraphics[width=0.3\linewidth]{RSF_n=100_clch_c=1}
\includegraphics[width=0.3\linewidth]{RSF_n=100_degh_c=1}
\hspace{2cm} (a) \hfill (b) \hfill (c) \hspace{2cm}
\caption{Top panel: Probability distribution of the (a) betweenness centrality, (b) closeness centrality and (c) degree of the nodes in a Random Scale-Free network of size $N=100$, with $m=1$ (blue) and $m=2$ (green). Bottom panel: Multi-node Basin Stability vs fraction $f$ of perturbed nodes, for Random Scale-Free networks of size $N=100$, coupling strength $C=1$, with $m=1$ and $m=2$, where the perturbed nodes are chosen in descending order of (a) betweenness centrality, (b) closeness centrality and (c) degree (i.e. the perturbed nodes are the ones with the highest $b$, $c$ or $k$ centrality measures).}
\label{m12_prob}
\end{figure}
Now we investigate which centrality measure is most crucial in determining the global robustness of collective behaviour in the network. We do this through the following numerical experiment: we compare the basin stability of the collective dynamics of Random Scale-Free networks with $m=1$ and $m=2$. Interestingly, the distributions of the betweenness centrality, closeness centrality and degree of the nodes in RSF networks with $m=1$ and $m=2$ are significantly different, as evident in Fig.~\ref{m12_prob} (top panels). It is clear that for $m=1$ the distribution of the degree and closeness centrality of the nodes in the network is shifted towards lower $k$ and $c$ values as compared to RSF networks with $m=2$, while the distribution of betweenness centrality shifts towards higher values in RSF networks with $m=1$ vis-a-vis the distribution of the betweenness centrality in RSF networks with $m=2$. So in RSF networks with $m=1$ the nodes with the highest betweenness centrality typically have significantly higher $b$ than in RSF networks with $m=2$ of the same size. On the other hand, since the tail of the probability distribution of degree and closeness centrality of the nodes in an RSF network with $m=2$ extends further than that in an RSF network with $m=1$, the nodes with the highest degree and closeness centrality typically have lower $k$ and $c$ in RSF networks with $m=1$ compared to RSF networks with $m=2$. So these networks can potentially {\em provide a test-bed for determining which of the centrality properties most crucially influence dynamical robustness}. Note that it was not possible to use the Ring to probe this issue, as all nodes there have identical centrality properties. Nor did the Star network offer a system where one could distinguish between the effects of different centrality measures on dynamical robustness, as the nodes there split into two classes, the single hub and the periphery, with all the peripheral nodes having identical betweenness, closeness and degree. However, one can compare the response of Random Scale-Free networks with different $m$ to probe which nodal property renders a heterogeneous network most vulnerable to large localized perturbations.
Fig.~\ref{m12_prob} (bottom panels) displays the dependence of the multi-node Basin stability on the fraction of perturbed nodes $f$ in the Random Scale-Free network with $m=1$ and $m=2$. As the number of perturbed nodes increases, the multi-node basin stability falls significantly for RSF networks with $m=1$, while RSF networks with $m=2$ remain robust up to a critical fraction $f_{crit}$ of perturbed nodes, with $f_{crit} \sim 0.2$. One can rationalize this by noting the difference in the typical values of betweenness centrality at the highest end in the RSF network with $m=1$ and $m=2$. For instance, if one considers $10 \%$ of nodes with the highest betweenness centrality in these networks of size $N=100$, typically $b$ lies between $0.1$ and $0.9$ for $m=1$ and between $0.01$ and $0.5$ for $m=2$. So the marked difference in the sensitivity of the global stability to perturbations in Random Scale-Free networks with $m=1$ and $m=2$ stems from the higher betweenness centrality of the nodes in the former network.
Now when nodes of the highest closeness centrality and degree are perturbed, we observe the same trend as above. This occurs in spite of the tail of the distribution of closeness centrality and degree extending to higher values for RSF networks with $m=2$ as compared to RSF networks with $m=1$, implying that the nodes with the highest degree and closeness centrality for the $m=2$ case will have a larger value of $k$ and $c$, as compared to the $m=1$ case. So one may have expected that the RSF network with $m=2$ would be less stable than the RSF network with $m=1$. However, the observations are contrary to this expectation, and this surprising result stems from the following: the sets of nodes with the highest betweenness centrality, closeness centrality and degree overlap to a very large extent. So for instance, for $f=0.1$ in a network of size $N=100$, the set of $10$ nodes with the highest betweenness centralities is practically the same as the set of nodes with the highest closeness centralities and highest degrees. However, in the RSF network with $m=1$ these nodes have higher betweenness centrality, while having lower closeness centrality and degree, than the corresponding set in the RSF network with $m=2$. Now higher betweenness centrality should inhibit stability, while lower closeness and degree should aid the stability of the collective state. So the comparative influence of these two opposing trends will determine the comparative global stability of these two classes of networks. If the betweenness centrality of the perturbed nodes is more crucial for stability, the multi-node Basin Stability of the network with $m=1$ will go to zero faster than that of the network with $m=2$. On the other hand, if the closeness centrality (and/or degree) of the perturbed nodes dictates global stability rather than betweenness centrality, the network with $m=2$ will lose global stability faster than the one with $m=1$. Now, since we find that the network with $m=1$ always loses stability faster than the network with $m=2$, we can conclude that the {\em effect of betweenness centrality on the global stability is more dominant than the effect of the closeness centrality and degree of the perturbed nodes}.
\begin{figure}[htb]
\centering
\includegraphics[width=\plotSize]{RSF_m=1,2_btch_c=1}
\includegraphics[width=\plotSize]{RSF_m=1,2_btch_c=1_scaled}
\hspace{5cm} (a) \hfill (b) \hspace{5cm}
\caption{(a) Multi-node Basin Stability vs fraction $f$ of perturbed nodes, for Random Scale-Free networks of size $N=50$ (blue), $100$ (green), $200$ (red) for $m=2$ and $m=1$ (inset). (b) Scaling resulting in data collapse, for the case of $m=2$ and $m=1$ (inset). The perturbed nodes are the ones with the highest value of betweenness centrality.}
\label{bsvsf_rsf_varyN}
\end{figure}
Lastly, we study the effect of system size on multi-node basin stability, perturbing nodes in decreasing order of betweenness centrality. Figs.~\ref{bsvsf_rsf_varyN}a-b show the results for network sizes ranging from $50$ to $200$. We have found an appropriate finite-size scaling that allows data collapse (cf. Fig.~\ref{bsvsf_rsf_varyN} insets), and this indicates the value of $f_{crit}$ in the limit of large network size. We observe that a Random Scale-Free network with $m=1$ yields $f_{crit} \rightarrow 0$ (i.e. the smallest fraction of perturbed nodes destroys the collective state), while $f_{crit} \sim 0.2$ for the case of $m=2$. So an RSF network with $m=2$ is more robust to localized perturbations than an RSF network with $m=1$, as in the $m=2$ case, even when nearly $20 \%$ of the nodes of the highest betweenness centrality are perturbed, the entire network still manages to return to the original state. This compelling difference again arises due to the fact that the highest betweenness centrality found in the RSF network with $m=1$ is significantly higher on average than that in RSF networks of the same size with $m=2$. This again corroborates the results in Fig.~\ref{m12_prob}, and highlights the profound influence of betweenness centrality on global stability.\\
{\bf Robustness of the phenomena:} \\
In order to ascertain the generality of our observations, we have considered different nonlinear functions $F(x)$ in Eq.(1). For example, we explored a system of considerable biological interest, namely, a system of coupled synthetic gene networks. We used the quantitative model, developed in \cite{hasty}, describing the regulation of the operator region of $\lambda$ phage, whose promoter region consists of three operator sites. The chemical reactions describing this network, after suitable re-scaling, yield \cite{hasty}
$$F_{gene}(x) = \frac{m (1 + x^2 + \alpha \sigma_1 x^4)}
{1 + x^2 + \sigma_1 x^4 + \sigma_1 \sigma_2 x^6} - \gamma_x x$$
where $x$ is the concentration of the repressor. The nonlinearity in this $F_{gene}(x)$ leads to a double well potential, and different values of $\gamma_x$ introduce varying degrees of asymmetry in the potential. We studied a system of coupled genetic oscillators given by: $\dot{x_i} = F_{gene}(x_i) + C (\langle x_i^{nbhd} \rangle -x_i)$, where $C$ is the coupling strength and $\langle x_i^{nbhd} \rangle$ is the local mean field generated by the set of neighbours of site $i$.
Further we studied different networks of a piece-wise linear bi-stable system, that can be realised efficiently in electronic circuits \cite{circ}, given by:
\begin{equation}
F(x) = - \alpha x + \beta \ g(x)
\end{equation}
with the piecewise-linear function
$$g(x) = \left\{\begin{array}{ll}
x^*_{l} &, x < x^*_{l} \\
x &, x^*_{l} \le x \le x^*_{u} \\
x^*_{u} &, x > x^*_{u}
\end{array}\right.$$
where $x^*_{u}$ and $x^*_{l}$ are the upper and lower thresholds respectively.
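A direct implementation of this nonlinearity is straightforward; in the sketch below, the values of $\alpha$, $\beta$ and the thresholds are illustrative choices for which $F(x)$ has two stable fixed points, at $\pm\beta x^*_{u}/\alpha$ here.
\begin{verbatim}
# Sketch: the piecewise-linear bistable nonlinearity (example values).
import numpy as np

alpha, beta = 1.0, 2.0
x_l, x_u = -1.0, 1.0                  # lower and upper thresholds

def F_pwl(x):
    g = np.clip(x, x_l, x_u)          # g(x) saturates at the thresholds
    return -alpha * x + beta * g      # stable fixed points at x = -2 and +2
\end{verbatim}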
We simulated the coupled dynamics of these two bi-stable systems for different network topologies as well. We find that the qualitative trends in both these bi-stable systems is similar to that described above, indicating the generality of the central results presented here.\\
{\bf Conclusions:}\\
In summary, we have investigated the collective dynamics of bi-stable elements connected in different network topologies, ranging from rings and small-world networks, to scale-free networks and stars. We estimated the dynamical robustness of such networks by introducing a variant of the concept of multi-node basin stability, which allowed us to gauge the global stability of the dynamics of the network in response to local perturbations affecting particular nodes of a system. We show that perturbing nodes with high closeness and betweenness centrality significantly reduces the capacity of the system to return to the desired stable state. This effect is very pronounced for a star network which has one hub node with significantly different closeness/betweenness centrality than the peripheral nodes. Considering such a network with all nodes in one well, if one perturbs the hub to another well, this {\em single} perturbed node drags the entire system to its well, thereby preventing the network from recovering its dynamical state. In contrast, even when {\em all} peripheral nodes are kicked to the other well, the hub manages to restore the entire system back to the original well. Lastly, we explore Random Scale-Free Networks of bi-stable dynamical elements. Since the distributions of betweenness centralities, closeness centralities and degrees of the nodes are significantly different for Random Scale-Free Networks with $m=1$ and $m=2$, these networks have the potential to provide a test-bed for determining which of these centrality properties most influences the robustness of the collective dynamics. The comparison between the global stability of these two classes of networks provides clear indications that the {\em betweenness centrality of the perturbed node is more crucial for dynamical robustness, than closeness centrality or degree of the node}. This result is important in deciding which nodes to safeguard in order to maintain the collective state of this network against targeted localized attacks.
\section{Introduction}
\label{sect:intro}
The data collected in the heavy-ion experiments at the Relativistic Heavy-Ion Collider (RHIC) are most commonly interpreted as evidence that the matter produced in relativistic heavy-ion collisions equilibrates very fast (presumably within a fraction of 1~fm/c) and that its behavior is very well described by the perfect-fluid hydrodynamics \cite{Kolb:2003dz,Huovinen:2003fa,Shuryak:2004cy,Teaney:2001av,Hama:2005dz,Hirano:2007xd,Nonaka:2006yn}. Such features are naturally explained by the assumption that the produced matter is a strongly coupled quark-gluon plasma (sQGP) \cite{Shuryak:2004kh}. Another explanation assumes that the plasma is weakly interacting; however, the plasma instabilities lead to the fast isotropization of matter, which in turn helps to achieve equilibration \cite{Mrowczynski:2005ki}. Recently, it has also been shown that the model assuming thermalization of the transverse degrees of freedom only \cite{Bialas:2007gn} is consistent with the data describing the transverse-momentum spectra and the elliptic flow coefficient $v_2$. This result indicates that the assumption of fast thermalization/isotropization might be relaxed.
In view of the problems related to the thermalization and isotropization of the plasma, it is useful to develop and analyze the models which can describe anisotropic systems. Recently, an effective model describing the anisotropic fluid/plasma dynamics in the early stages of relativistic heavy-ion collisions has been introduced \cite{Florkowski:2008ag}. The model has the structure similar to the perfect-fluid relativistic hydrodynamics -- the equations of motion follow from the conservation laws. However, it admits the possibility that the longitudinal and transverse pressures are different (as usual, the longitudinal direction is defined here by the beam axis). The main characteristic feature of the proposed model is the use of the pressure relaxation function $R$ which determines the time changes of the ratio of the transverse and longitudinal pressures and, possibly, defines the way how the system becomes isotropic, i.e., how the transverse and longitudinal pressures become equilibrated. The role of the pressure relaxation function is similar and complementary to the role played by the equation of state. It characterizes the material properties of the medium whose spacetime dynamics is otherwise governed by the conservation laws.
In this paper we develop the formulation of Ref. \cite{Florkowski:2008ag} in two ways: i) we introduce a microscopic interpretation of the relaxation function in the case where the considered system consists of particles whose behavior is described by a momentum-anisotropic phase-space distribution function, ii) we include the effects of the fields by considering the case of anisotropic magnetohydrodynamics -- this approach may be regarded as a very crude attempt to include the effects of color fields on the particle dynamics.
Our first finding is that the use of the anisotropic phase-space distribution functions leads inevitably to the pressure relaxation functions $R$ which imply that the ratio of the longitudinal and transverse pressures tends asymptotically to zero, $P_L/P_T \to 0$. This behavior is complementary to the recent results obtained from the analyses of the early-stage partonic free-streaming \cite{Jas:2007rw,Broniowski:2008qk}. In fact, our approach includes the free-streaming as the special case, however, it may be also applied in the cases where the collisions are present but their effect does not change the assumed generic form of the phase-space distribution function. The time asymptotics $P_L/P_T \to 0$ means that the systems with the initial prolate momentum shape, i.e., the systems that are initially elongated along the beam axis in the momentum space with \mbox{$P_L > P_T$} (see for example Refs. \cite{Jas:2007rw,Randrup:2003cw}), naturally pass through the transient isotropic stage where the transverse and longitudinal pressures are equal. Our second finding follows from the study of the magnetohydrodynamic model. We show that the inclusion of the fields lowers the longitudinal pressure and increases the transverse pressure, hence, for the initially prolate systems the stage when the {\it total} longitudinal and transverse pressures become equal may be reached earlier, depending on the strength of the field.
The two models discussed by us cannot explain the phenomenon of reaching a {\it stable} isotropic stage. However, we indicate that the presence of the fields may have an impact on the process of isotropization, presumably restored by the effects of those collisions and/or field instabilities that are not taken into account in the present formalism. From the practical point of view, our formalism allows for the determination of the space-time evolution of the color-neutral anisotropic distributions, which may be used, for example, as the background distributions in the analysis of plasma instabilities. Interestingly, in the case where we have initially $P_L > P_T$, the dynamics of the background and the unstable plasma behavior evolve in the same direction, i.e., the two processes restore the equality of pressures. On the other hand, if the initial configuration has $P_T > P_L$, the plasma instabilities must compete with the growing asymmetry of the background (see Refs. \cite{Rebhan:2008uj,Rebhan:2008ry}, where the growth of the instabilities was studied in the Bjorken longitudinal expansion and substantially large times of about 20 fm/c were found for RHIC). The interplay of such competing processes may be directly related to the problem of very fast thermalization/isotropization taking place in relativistic heavy-ion collisions. In addition, the transformation of the longitudinal pressure into the transverse one discussed here is an interesting phenomenon in the context of the RHIC HBT puzzle -- we note that the recently proposed explanations of this puzzle suggest a very fast formation of the transverse flow \cite{Gyulassy:2007zz,Broniowski:2008vp,Pratt:2008bc}.
The paper is organized as follows: In Sect. II we consider the anisotropic system of partons described by the momentum-anisotropic phase-space distribution function. We calculate the moments of the distribution function, determine the pressure relaxation function $R$, and argue that the general form of $R$ implies that the ratio of the longitudinal and transverse pressures tends asymptotically to zero. In Sect. III we consider an example of boost-invariant magnetohydrodynamics. We analyze in detail the consistency of this approach and show that it may be treated as a special case of the formalism developed in \cite{Florkowski:2008ag} with the appropriate relaxation function $\hat R$. We also show how the presence of the local magnetic fields affects the dynamics of particles. We conclude in Sect. IV.
Below we assume that particles are massless and we use the following definitions for rapidity and spacetime rapidity,
\begin{eqnarray}
y = \frac{1}{2} \ln \frac{E_p+p_\parallel}{E_p-p_\parallel}, \quad
\eta = \frac{1}{2} \ln \frac{t+z}{t-z}, \label{yandeta}
\end{eqnarray}
which come from the standard parameterization of the four-momentum and spacetime coordinate of a particle,
\begin{eqnarray}
p^\mu &=& \left(E_p, {\vec p}_\perp, p_\parallel \right) =
\left(p_\perp \cosh y, {\vec p}_\perp, p_\perp \sinh y \right), \nonumber \\
x^\mu &=& \left( t, {\vec x}_\perp, z \right) =
\left(\tau \cosh \eta, {\vec x}_\perp, \tau \sinh \eta \right). \label{pandx}
\end{eqnarray}
Here the quantity $p_\perp$ is the transverse momentum
\begin{equation}
p_\perp = \sqrt{p_x^2 + p_y^2},
\label{energy}
\end{equation}
and $\tau$ is the (longitudinal) proper time
\begin{equation}
\tau = \sqrt{t^2 - z^2}.
\label{tau}
\end{equation}
Throughout the paper we use the natural units where $c=1$ and $\hbar=1$.
\section{Anisotropic system of particles}
\label{sect:aniso-system}
In this Section we consider a system of particles/partons described by a distribution function which is asymmetric in momentum space, i.e., its dependence on the longitudinal and transverse momentum is different. We calculate the particle current, the energy-momentum tensor, and the entropy current of such a system. As a special case we consider the exponential Boltzmann-like distributions frequently used in other studies. This Section is also used to introduce general concepts of anisotropic plasma dynamics, which will be applied to the system consisting of particles and fields in the next Section.
\subsection{Anisotropic momentum distribution}
\label{sect:aniso-distribution}
We consider a phase-space distribution function whose dependence on the transverse and longitudinal momentum is determined by two space-time dependent scales, $\lambda_\perp$ and $\lambda_\parallel$, namely
\begin{equation}
f = f\left( \frac{p_\perp}{\lambda_\perp},\frac{|p_\parallel|}{\lambda_\parallel}\right).
\label{Fxp1}
\end{equation}
The form (\ref{Fxp1}) is valid in the local rest frame of the plasma element. For boost-invariant systems, the explicitly covariant form of the distribution function has the structure
\begin{equation}
f = f\left( \frac{\sqrt{(p \cdot U)^2 - (p \cdot V)^2 }}{\lambda _\perp },
\frac{|p \cdot V|}{\lambda _\parallel }\right),
\label{Fxp2}
\end{equation}
where
\begin{equation}
U^{\mu} = ( u_0 \cosh\eta,u_x,u_y, u_0 \sinh\eta),
\label{U}
\end{equation}
\begin{equation}
V^{\mu} = (\sinh\eta,0,0,\cosh\eta),
\label{V}
\end{equation}
and $u^0, u_x, u_y$ are the components of the four-vector
\begin{equation}
u^\mu = \left(u^0, {\vec u}_\perp, 0 \right) = \left(u^0, u_x, u_y, 0 \right).
\label{smallu}
\end{equation}
The four-velocity $u^\mu$ is normalized to unity
\begin{equation}
u^\mu u_\mu = u_0^2 - u_x^2 - u_y^2 = 1.
\label{normsmallu}
\end{equation}
The four-vector $U^\mu$ describes the four-velocity of the plasma element. It may be obtained from $u^\mu$ by the Lorentz boost along the $z$ axis with rapidity $\eta$. The appearance of the four-vector $V^\mu$ is a new feature related to the anisotropy -- in the rest frame of the plasma element we have $V^\mu = (0,0,0,1)$. We note that the four-vectors $U^\mu$ and $V^\mu$ satisfy the following normalization conditions:
\begin{equation}
U^\mu U_\mu = 1, \quad V^\mu V_\mu = -1, \quad U^\mu V_\mu = 0.
\label{normort}
\end{equation}
The boost-invariant character of Eq. (\ref{Fxp2}) is immediately seen if we write the explicit expressions for $p \cdot U$ and $p \cdot V$, which both depend only on $y-\eta$ and the transverse coordinates, namely
\begin{eqnarray}
p \cdot U &=& p_\perp u_0 \cosh(y-\eta) - {\vec p}_\perp \cdot {\vec u}_\perp , \nonumber \\
p \cdot V &=& p_\perp u_0 \sinh(y-\eta) .
\label{pdotUV}
\end{eqnarray}
From Eqs. (\ref{pdotUV}) one can also infer that in the rest frame of the plasma element, where $\eta = 0$ and $ {\vec u}_\perp = 0$, we have $p \cdot U = p_\perp \cosh y$ and $p \cdot V = p_\perp \sinh y$. Thus, in the local rest frame of the plasma element Eq. (\ref{Fxp2}) reduces to Eq. (\ref{Fxp1}).
\subsection{Moments of anisotropic distribution}
\label{sect:aniso-moments}
Using the standard definitions of $N^\mu$ and $T^{\mu \nu}$ as the first and the second moment of the distribution function (\ref{Fxp2}), namely
\begin{eqnarray}
N^\mu &=& \int \frac{d^3p}{(2\pi)^3 \, E_p} \, p^{\mu} f,
\label{Nmu}
\end{eqnarray}
\begin{eqnarray}
T^{\mu \nu} &=& \int \frac{d^3p}{(2\pi)^3 \, E_p} \, p^{\mu} p^\nu f,
\label{Tmunu}
\end{eqnarray}
we obtain the following decompositions:
\begin{eqnarray}
N^\mu &=& n \, U^\mu,
\label{Nmudec}
\end{eqnarray}
\begin{eqnarray}
T^{\mu \nu} &=& \left( \varepsilon + P_T\right) U^{\mu}U^{\nu}
- P_T \, g^{\mu\nu} - (P_T - P_L) V^{\mu}V^{\nu}. \nonumber \\
\label{Tmunudec}
\end{eqnarray}
We note that $N^\mu$ does not contain a contribution proportional to the four-vector $V^\mu$, since such a term would be proportional to the scalar product $V^\mu N_\mu$ that vanishes in the local rest frame. Similarly, the energy-momentum tensor does not contain terms proportional to the symmetric combination $V^\mu U^\nu + U^\mu V^\nu$; see Ref. \cite{Ryblewski:2008fx} for a more explicit presentation of the analogous decompositions.
Equation (\ref{Nmudec}) defines the particle density $n$, which may be calculated from the formula
\begin{eqnarray}
n &=& \int \frac{d^3p}{(2\pi)^3} \, f\left( \frac{p_\perp}{\lambda _\perp},
\frac{| p_\parallel |}{\lambda _\parallel }\right) \label{rho1} \\
&=& \frac{\lambda_\perp^2 \,\lambda_\parallel}{2 \pi^2} \int\limits_0^\infty
d\xi_\perp\,\xi_\perp\, \int\limits_{0}^\infty d\xi_\parallel \, f\left(\xi_\perp,\xi_\parallel \right), \nonumber
\end{eqnarray}
where we have introduced the dimensionless variables
\begin{equation}
\xi_\perp = \frac{p_\perp}{\lambda_\perp}, \quad \xi_\parallel = \frac{p_\parallel}{\lambda_\parallel}.
\label{xis}
\end{equation}
In a similar way we calculate the energy density,
\begin{eqnarray}
\varepsilon &=& \int \frac{d^3p}{ (2\pi)^3} \, E_p \, f\left( \frac{p_\perp}{\lambda _\perp}, \frac{| p_\parallel |}{\lambda _\parallel }\right) \label{epsilon1} \\
&=& \frac{\lambda_\perp^2 \,\lambda_\parallel^2}{2\pi^2} \int\limits_0^\infty
d\xi_\perp\,\xi_\perp \int\limits_0^\infty \,d\xi_\parallel\,
\sqrt{\xi_\parallel^2 + x\, \xi_\perp^2 }
\, f\left(\xi_\perp,\xi_\parallel \right),
\nonumber
\end{eqnarray}
where the variable $x$ is defined by the expression
\begin{equation}
x = \left( \frac{\lambda_\perp}{\lambda_\parallel} \right)^2.
\label{iks}
\end{equation}
Finally, the transverse and longitudinal pressures are obtained from the equations
\begin{eqnarray}
P_T &=& \int \frac{d^3p}{ (2\pi)^3} \, \frac{p_\perp^2}{2 E_p} \, f\left( \frac{p_\perp}{\lambda _\perp}, \frac{| p_\parallel |}{\lambda _\parallel }\right) \label{PT1} \\
&=& \frac{\lambda_\perp^4}{2\pi^2} \int
\frac{d\xi_\perp\,\xi_\perp^3 \,d\xi_\parallel}
{2 \sqrt{ \xi_\parallel^2 + x\, \xi_\perp^2 }}
\, f\left(\xi_\perp,\xi_\parallel \right),
\nonumber
\end{eqnarray}
\begin{eqnarray}
P_L &=& \int \frac{d^3p}{ (2\pi)^3} \, \frac{p_\parallel^2}{E_p} \, f\left( \frac{p_\perp}{\lambda _\perp}, \frac{| p_\parallel |}{\lambda _\parallel }\right) \label{PL1} \\
&=& \frac{\lambda_\perp^2 \,\lambda_\parallel^2}{2\pi^2} \int
\frac{d\xi_\perp\,\xi_\perp \,d\xi_\parallel \,\xi_\parallel^2}
{\sqrt{\xi_\parallel^2 + x\, \xi_\perp^2 }}
\, f\left(\xi_\perp,\xi_\parallel \right).
\nonumber
\end{eqnarray}
From now on the limits of the integrations over $\xi_\perp$ and $\xi_\parallel$ are always from 0 to infinity. In the local rest frame of the fluid element, where we have $U^\mu = (1,0,0,0)$ and $V^\mu = (0,0,0,1)$, one finds
\begin{equation}
T^{\mu \nu} = \left(
\begin{array}{cccc}
\varepsilon & 0 & 0 & 0 \\
0 & P_T & 0 & 0 \\
0 & 0 & P_T & 0 \\
0 & 0 & 0 & P_L
\end{array} \right),
\label{Tmunuarray}
\end{equation}
hence, as expected, the structure (\ref{Fxp2}) allows for different pressures in the longitudinal and transverse directions.
One may also calculate the entropy current using the Boltzmann definition,\footnote{The formula (\ref{S}) assumes the classical Boltzmann statistics. It may be generalized to the case of bosons or fermions in the standard way.}
\begin{equation}
S^{\mu} = g_0\int \frac{d^3p}{(2 \pi)^3} \frac{p^\mu}{ E_p} \left(\frac{f}{g_0}\right)
\, \left[1 - \ln \left(\frac{f}{g_0}\right) \right],
\label{S}
\end{equation}
where $g_0$ is the degeneracy factor related to internal quantum numbers such as spin or color. The entropy current has the structure
\begin{equation}
S^{\mu} = \sigma \, U^\mu,
\label{Sstr}
\end{equation}
where
\begin{eqnarray}
\sigma = \frac{\lambda_\perp^2 \,\lambda_\parallel}{2\pi^2} \int
d\xi_\perp\,\xi_\perp\,d\xi_\parallel \, f\left(\xi_\perp,\xi_\parallel \right) \left[1 - \ln \frac{f\left(\xi_\perp,\xi_\parallel \right)}{g_0} \right]. \nonumber \\
\label{sigma}
\end{eqnarray}
Comparison of Eqs. (\ref{rho1}) and (\ref{sigma}) indicates that the particle density and the entropy density are proportional, with the proportionality constant depending on the specific choice of the parton distribution function $f$.
\subsection{Pressure relaxation function}
\label{sect:rel-funct}
With the help of the variables $x = (\lambda_\perp/\lambda_\parallel)^2$ and $n$ we may rewrite our expressions (\ref{epsilon1}), (\ref{PT1}), and (\ref{PL1}) in the concise form
\begin{eqnarray}
\varepsilon &=& \left(\frac{n}{g} \right)^{4/3} R(x),
\label{epsilon2}
\end{eqnarray}
\begin{eqnarray}
P_T &=& \left(\frac{n}{g} \right)^{4/3}
\left[\frac{R(x)}{3} + x R^\prime(x) \right],
\label{PT2}
\end{eqnarray}
\begin{eqnarray}
P_L &=& \left(\frac{n}{g} \right)^{4/3}
\left[\frac{R(x)}{3} - 2 x R^\prime(x) \right],
\label{PL2}
\end{eqnarray}
where the function $R(x)$ is defined by the integral
\begin{equation}
R(x) = x^{-1/3} \int \frac{d\xi_\perp\,\xi_\perp\,d\xi_\parallel}{2\pi^2}
\sqrt{\xi_\parallel^2 + x \xi_\perp^2} f(\xi_\perp,\xi_\parallel),
\label{Rofiks}
\end{equation}
$R^\prime(x) = dR(x)/dx$, and $g$ is a constant defined by the expression
\begin{equation}
g = \int \frac{d\xi_\perp\,\xi_\perp\,d\xi_\parallel}{2\pi^2}
f(\xi_\perp,\xi_\parallel).
\label{gconst}
\end{equation}
It is quite interesting to observe that the structure of Eqs. (\ref{epsilon2})--(\ref{PL2}) agrees with the structure derived in \cite{Florkowski:2008ag}, where no reference to the underlying microscopic picture was made but only the general consistency of the approach based on the anisotropic energy-momentum tensor (\ref{Tmunudec}) and the conservation laws was studied. The only difference is that the entropy density $\sigma$ used in Ref. \cite{Florkowski:2008ag} is now replaced by the particle density $n$.
In fact, one may repeat the arguments presented in \cite{Florkowski:2008ag} replacing the assumption of the conservation of entropy by the assumption of particle-number conservation (note that we have shown above that $n$ and $\sigma$ are proportional if one uses the ansatz (\ref{Fxp1})). In such a case we end up with a structure that exactly matches Eqs. (\ref{epsilon2})--(\ref{PL2}), and $R$ may be identified with the pressure relaxation function. Moreover, the results of Ref. \cite{Florkowski:2008ag} allow us to relate the variable $x = \lambda_\perp^2/\lambda_\parallel^2$ to the quantity $n \tau^3$ -- we shall come back to the discussion of this point below, after the analysis of some special cases of the anisotropic distribution functions.
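As a quick consistency check (ours, not spelled out in the text), note that Eqs. (\ref{epsilon2})--(\ref{PL2}) satisfy the tracelessness condition appropriate for massless particles, $T^{\mu}_{\,\,\mu} = \varepsilon - 2P_T - P_L = 0$, for an arbitrary relaxation function, since
\begin{equation}
2 P_T + P_L = \left(\frac{n}{g} \right)^{4/3}
\left[\frac{2R(x)}{3} + 2 x R^\prime(x) + \frac{R(x)}{3} - 2 x R^\prime(x) \right]
= \left(\frac{n}{g} \right)^{4/3} R(x) = \varepsilon.
\end{equation}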
\subsection{Boltzmann-like anisotropic distribution}
\label{sect:aBoltz}
As a special case of the anisotropic distribution function we may consider the exponential distribution of the form
\begin{equation}
f_1 = g_0 \exp \left( -\sqrt{\frac{p_\perp ^2}{\lambda_\perp ^2} +
\frac{p_\parallel^2}{\lambda_\parallel^2} } \, \right),
\label{aBoltz1}
\end{equation}
which may be regarded as a generalization of the Boltzmann equilibrium distribution, to which it reduces for $\lambda_\perp = \lambda_\parallel = T$ (as explained above, $g_0$ is the degeneracy factor connected with internal quantum numbers). In this case we recover the structure (\ref{epsilon2})--(\ref{PL2}) with the relaxation function of the form \footnote{Note that for $x < 1$ the function $(\arctan\sqrt{x-1})/\sqrt{x-1}$ should be replaced by $(\hbox{arctanh}\sqrt{1-x})/\sqrt{1-x}$.}
\begin{equation}
R_1(x) = \frac{3\, g_0\, x^{-\frac{1}{3}}}{2 \pi^2} \left[ 1 + \frac{x \arctan\sqrt{x-1}}{\sqrt{x-1}}\right]
\end{equation}
and the constant (\ref{gconst}) is simply
\begin{equation}
g_1 = \frac{g_0}{\pi^2}.
\label{g1const}
\end{equation}
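For example, combining Eqs. (\ref{rho1}) and (\ref{gconst}) gives the general relation $n = g\, \lambda_\perp^2 \lambda_\parallel$; for $f = f_1$ this is $n = g_0 \lambda_\perp^2 \lambda_\parallel/\pi^2$, which reduces to the familiar massless Boltzmann result $n = g_0 T^3/\pi^2$ in the isotropic limit $\lambda_\perp = \lambda_\parallel = T$.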
Another interesting anisotropic distribution function has the factorized form
\begin{equation}
f_2 = g_0 \exp\left( -\frac{p_\perp}{\lambda_\perp} \right)
\exp\left( - \frac{|p_\parallel|}{\lambda_\parallel} \right).
\label{aBoltz2}
\end{equation}
In this case we obtain
\begin{eqnarray}
R_2(x) &=& \frac{g_0 x^{-1/3}}{2 \pi^2 (1+x)^2} \left[\,
1 + 5 x \sqrt{x} + 2 x^2 \sqrt{x} - 2 x
\vphantom{\frac{1+\sqrt{x}+\sqrt{1+x}}{1+\sqrt{x}-\sqrt{1+x}}}
\right. \nonumber \\
& & \left. \quad + \frac{3 x}{\sqrt{x+1}}
\ln \frac{1+\sqrt{x}+\sqrt{1+x}}{1+\sqrt{x}-\sqrt{1+x}} \,\,
\right]
\end{eqnarray}
and
\begin{equation}
g_2 = \frac{g_0}{2 \pi^2}.
\label{g2const}
\end{equation}
The calculation of the entropy density gives $\sigma = 4 n$ and $\sigma = 8 n$, for the cases $f=f_1$ and $f=f_2$, respectively.
The structure of Eq. (\ref{Rofiks}) implies that for $x \ll 1$ \mbox{($\lambda_\perp \ll \lambda_\parallel$)} the function $R(x)$ behaves like $x^{-1/3}$. In this limit $P_T = 0$ and $\varepsilon = P_L$. Similarly, for $x \gg 1$ \mbox{($\lambda_\perp \gg \lambda_\parallel$)} the function $R(x)$ behaves like $x^{1/6}$, implying that $P_L = 0$ and $\varepsilon = 2 P_T$. This behavior is expected if we interpret the parameters $\lambda_\perp$ and $\lambda_\parallel$ as the transverse and longitudinal temperatures, respectively. In agreement with those general properties we find
\begin{eqnarray}
R_1(x) &\approx & \frac{3 g_0 }{2 \pi^2}
\left[ x^{-1/3} + \frac{1}{2}(\ln 4 - \ln x) x^{2/3} \right] ,
\nonumber \\
R_2(x) &\approx & \frac{g_0 }{2 \pi^2}
\left[ x^{-1/3} +\frac{1 }{2} (6 \ln 2 - 8 - 3 \ln x) x^{2/3}\right] ,
\nonumber \\
\label{smalliks}
\end{eqnarray}
for $x \ll 1$, and
\begin{eqnarray}
R_1(x) &\approx & \frac{3 g_0 }{4 \pi}
\left( x^{1/6} + \frac{1}{2} x^{-5/6}\right),
\nonumber \\
R_2(x) &\approx & \frac{g_0 }{\pi^2}
\left( x^{1/6} + \frac{1}{2} x^{-5/6}\right),
\label{bigiks}
\end{eqnarray}
for $x \gg 1$.
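These limiting forms can be checked directly against Eqs. (\ref{PT2}) and (\ref{PL2}). Keeping only the leading term $R(x) = a\, x^{-1/3}$ for $x \ll 1$, one has $x R^\prime(x) = -\frac{1}{3}\, a\, x^{-1/3}$, so that
\begin{equation}
P_T = \left(\frac{n}{g} \right)^{4/3} a\, x^{-1/3}\left(\frac{1}{3}-\frac{1}{3}\right) = 0, \qquad
P_L = \left(\frac{n}{g} \right)^{4/3} a\, x^{-1/3}\left(\frac{1}{3}+\frac{2}{3}\right) = \varepsilon,
\end{equation}
while the leading term $R(x) = a\, x^{1/6}$ for $x \gg 1$ gives $x R^\prime(x) = \frac{1}{6}\, a\, x^{1/6}$, hence $P_L = 0$ and $P_T = \varepsilon/2$.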
In Fig. \ref{fig:ratios} we plot the ratios $P_L/P_T$ (solid line), $P_L/\varepsilon$ (decreasing dashed line), and $P_T/\varepsilon$ (increasing dashed line) for the two cases: $f = f_1$ (a) and $f=f_2$ (b). The considered ratios are functions of the $x$ parameter only. In agreement with the remarks given above we see that $\varepsilon = P_L$ for $x = 0$, and $\varepsilon = 2 P_T$ in the limit $x \to \infty$. For $f = f_1$ the two pressures become equal if $x=1$, since in this case the distribution function $f_1$ becomes exactly isotropic. For $f = f_2$ the equality of pressures is reached for $x \approx 0.7$. Except for such small quantitative differences, the behavior of the pressures is very similar in the two cases, as can be seen from the comparison of the upper and lower parts of Fig. \ref{fig:ratios}.
\begin{figure}[t]
\begin{center}
\subfigure{\includegraphics[angle=0,width=0.45\textwidth]{RATIOS1.eps}} \\
\subfigure{\includegraphics[angle=0,width=0.45\textwidth]{RATIOS2.eps}}
\end{center}
\caption{(Color online) The ratios $P_L/P_T$ (solid red lines), $P_L/\varepsilon$ (decreasing blue dashed lines), and $P_T/\varepsilon$ (increasing blue dashed lines) shown as functions of the variable $x$, {\bf a)} the results for the distribution function (\ref{aBoltz1}), {\bf b)} the same for the distribution function (\ref{aBoltz2}). }
\label{fig:ratios}
\end{figure}
We note that the choice $R(x) = x^{1/6}$ corresponds to the case of transverse hydrodynamics, see Ref. \cite{Florkowski:2008ag}. In the transverse-hydrodynamic approach the matter forms transverse clusters which do not interact with each other, yielding $P_L=0$. The concept of transverse hydrodynamics was initiated in Refs. \cite{Heinz:2002rs,Heinz:2002xf} and recently reformulated in Refs. \cite{Bialas:2007gn,Chojnacki:2007fi,Ryblewski:2008fx}.
\subsection{Time dependence of pressure anisotropy}
\label{sect:PToverPL}
In this Section we briefly recall the arguments of Ref. \cite{Florkowski:2008ag} concerning the consistency of the anisotropic plasma dynamics. The basic assumptions are the particle number conservation,
\begin{equation}
\partial_\mu N^\mu = \partial_\mu \left( n U^\mu \right) = 0,
\label{partcons}
\end{equation}
and the energy-momentum conservation law,
\begin{equation}
\partial_\mu T^{\mu \nu} = 0,
\label{enmomcon}
\end{equation}
with the energy-momentum tensor of the form (\ref{Tmunudec}). As shown in the previous Sections, the entropy conservation used in \cite{Florkowski:2008ag} may typically be identified with the particle number conservation. In view of the further development of the model discussed in the next Section, we shall turn to the particle number conservation as the basic input. We note that the assumption (\ref{partcons}) means that our description may be valid only after the time when most of the particles are produced.
The projection of the energy-momentum conservation law (\ref{enmomcon}) on the four-velocity $U_\nu$ indicates that the energy density is generally a function of two variables, \mbox{$\varepsilon = \varepsilon(n,\tau)$}. The mathematical consistency of this approach, i.e., the requirement that $d\varepsilon$ is a total differential, implies directly that the functions $\varepsilon(n,\tau)$, $P_T(n,\tau)$, and $P_L(n,\tau)$ must be of the form (\ref{epsilon2})--(\ref{PL2}), where
\begin{equation}
x = x_0 \frac{n \tau^3}{n_0 \tau_0^3}
\label{oldiks}
\end{equation}
with $x_0, n_0$ and $\tau_0$ being constants that may be used to fix the initial conditions. In particular, it is convenient to regard $\tau_0$ as the initial time, and $n_0$ as the maximal initial density (at the very center of the system). Then, $x_0$ is the maximal initial value of $x$. Note that the particle density $n$ is very small at the edge of the system, hence the initial transverse pressure is always zero in this region. On the other hand, at the center of the system at the initial time $\tau=\tau_0$ we may have $P_T < P_L$ or $P_T > P_L$ depending on the value of $x_0$.
Combining Eqs. (\ref{iks}) and (\ref{oldiks}) we come to the main conclusion reached so far: For the microscopic phase-space distribution function of the form (\ref{Fxp1}), the pressure relaxation function is completely determined by Eq. (\ref{Rofiks}), where
\begin{equation}
x = \frac{\lambda_\perp^2}{\lambda_\parallel^2} = x_0 \frac{n \tau^3}{n_0 \tau_0^3}.
\label{newiks}
\end{equation}
In the region where the matter is initially formed we have $0 < n \leq n_0$, and the right-hand side of Eq. (\ref{newiks}) grows with time -- the particle density $n$ cannot decrease faster than $1/\tau^3$, since this would require a three-dimensional expansion at the speed of light. We thus conclude that the ratio of the parameters $\lambda_\parallel/\lambda_\perp$ tends asymptotically in time to zero. Consequently, for sufficiently large evolution times the ratio of the longitudinal and transverse pressures becomes negligible. As mentioned above, even if the initial conditions require that $P_T$ is larger than $P_L$ at the center of the system, at the edges we have very small density, which means that $P_L \gg P_T$ in this region. In the case where the longitudinal expansion dominates, $n = n_0 \tau_0/\tau$ and $x = x_0 \tau^2/\tau_0^2$, hence $x$ and $\tau$ are simply related.
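As a purely numerical illustration of this conclusion (a sketch of ours, not part of the original analysis; the values of $x_0$ and $\tau_0$ are arbitrary), the short script below evaluates $P_L/P_T$ for the dominant longitudinal expansion, $n = n_0\tau_0/\tau$ and $x = x_0\,\tau^2/\tau_0^2$, using the relaxation function $R_1$:
\begin{verbatim}
import math

def R1(x, g0=1.0):
    # Relaxation function R_1(x) of the distribution (aBoltz1); for
    # x < 1 the arctan branch is continued to arctanh (see footnote).
    if x > 1.0:
        s = math.sqrt(x - 1.0)
        bracket = 1.0 + x * math.atan(s) / s
    elif x < 1.0:
        s = math.sqrt(1.0 - x)
        bracket = 1.0 + x * math.atanh(s) / s
    else:
        bracket = 2.0
    return 3.0 * g0 * x**(-1.0/3.0) / (2.0 * math.pi**2) * bracket

def PL_over_PT(x, h=1.0e-6):
    # P_L/P_T = (R/3 - 2xR')/(R/3 + xR'), cf. Eqs. (PT2)-(PL2);
    # R'(x) is estimated by a central finite difference.
    Rp = (R1(x + h) - R1(x - h)) / (2.0 * h)
    return (R1(x)/3.0 - 2.0*x*Rp) / (R1(x)/3.0 + x*Rp)

x0, tau0 = 0.1, 0.2  # illustrative initial condition with P_L > P_T
for tau in (0.2, 0.4, 0.8, 1.6, 3.2):  # proper time in fm/c
    x = x0 * (tau/tau0)**2
    print(tau, x, round(PL_over_PT(x), 3))
\end{verbatim}
For this initial condition the ratio starts above unity, passes through unity at $x = 1$, and then falls monotonically toward zero, in accordance with Fig. \ref{fig:ratios}.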
\subsection{Longitudinal free-streaming}
\label{sect:freestream}
The anisotropic distribution functions considered in the previous Sections should satisfy the Boltzmann kinetic equation (in some reasonable approximation). In this respect we assume that the effects of both the free-streaming and the parton collisions do not change the generic structure (\ref{Fxp2}), while the time changes of the parameters $\lambda_\perp$, $\lambda_\parallel$, and $u^\mu$ are determined by the conservation laws. The spirit of this approach is very similar to that used in perfect-fluid hydrodynamics, where the collisions maintain the equilibrium shape of the distribution function, whereas the conservation laws determine the time changes of the parameters such as temperature or the fluid velocity.
Clearly, the relation of our framework to the underlying kinetic theory should be elaborated in more detail in further investigations, which may possibly determine the microscopic conditions that validate our approximations. Here, we may easily analyze the case of pure free-streaming, where the distribution function satisfies the collisionless kinetic equation
\begin{equation}
p^\mu \partial_\mu f(x,p) = 0.
\label{kineq}
\end{equation}
For the pure longitudinal expansion (with vanishing transverse flow, ${\vec u}_\perp=0$, and the parameters $\lambda_\perp,\lambda_\parallel$ depending only on the proper time $\tau$) we rewrite Eq. (\ref{kineq}) in the form
\begin{equation}
\left[ \cosh(y-\eta) \frac{\partial}{\partial\tau}
+ \frac{\sinh(y-\eta)}{\tau} \frac{\partial}{\partial \eta} \right] f\left(w,v\right) = 0,
\label{kineq1}
\end{equation}
where $w = p_\perp/\lambda_\perp(\tau)$ and $v = p_\perp \sinh(y-\eta)/\lambda_\parallel(\tau)$, see Eqs. (\ref{Fxp2}) and (\ref{pdotUV}). By direct differentiation we obtain \footnote{For simplicity we consider here functions depending on $v^2$ and disregard the absolute value sign.}
\begin{equation}
\frac{\partial f}{\partial w} \frac{d\lambda_\perp}{\lambda_\perp^2 d\tau}
+ \frac{\sinh(y-\eta)}{\lambda_\parallel^2} \frac{\partial f}{\partial v}
\left[\frac{d\lambda_\parallel}{d\tau} + \frac{\lambda_\parallel}{\tau} \right] =0.
\end{equation}
The solution to this equation exists for any form of the function $f$ provided $\lambda_\perp$ is a constant and $\lambda_\parallel \sim 1/\tau$. Thus, we may write
\begin{equation}
f = f\left( \frac{p_\perp}{\lambda_\perp^0}, \frac{\tau p_\perp \sinh(y-\eta)}{\tau_0 \lambda_\parallel^0} \right),
\label{freestreamsol}
\end{equation}
where $\tau_0$, $\lambda_\perp^0$ and $\lambda_\parallel^0$ are constants.
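Indeed, with $\lambda_\perp = \lambda_\perp^0 = \hbox{const}$ the first term vanishes, while for $\lambda_\parallel(\tau) = \lambda_\parallel^0 \tau_0/\tau$ the bracket multiplying the second term gives
\begin{equation}
\frac{d\lambda_\parallel}{d\tau} + \frac{\lambda_\parallel}{\tau}
= -\frac{\lambda_\parallel^0 \tau_0}{\tau^2} + \frac{\lambda_\parallel^0 \tau_0}{\tau^2} = 0,
\end{equation}
so the collisionless equation is satisfied for an arbitrary profile $f$.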
In the considered case the variable $x$ equals
\begin{equation}
x = \left( \frac{\lambda_\perp}{\lambda_\parallel} \right)^2 =
\left( \frac{\lambda_\perp^0 }{\lambda_\parallel^0 \tau_0} \right)^2 \, \tau^2,
\end{equation}
hence it is consistent with Eq. (\ref{newiks}), where for the boost-invariant longitudinal expansion we may substitute $n = n_0 \tau_0/\tau$. We thus see that our approach includes the boost-invariant free-streaming as a special case. In particular, Eq. (\ref{freestreamsol}) agrees with the form of the color-neutral background used in Refs. \cite{Rebhan:2008uj,Rebhan:2008ry,Martinez:2008di}. In the next Section we show that our framework also includes the case where the partons interact with local magnetic fields.
\section{Locally anisotropic magnetohydrodynamics}
\label{sect:AMHD}
In this Section we generalize the formulation discussed in Sect. \ref{sect:aniso-system}. We analyze in detail magnetohydrodynamics as an example of a physical system consisting of particles and fields, which is also known to exhibit strongly anisotropic behavior. Of course, magnetohydrodynamics by itself cannot be directly applied to the modeling of the early stages of heavy-ion collisions. However, several phenomena analyzed in its framework show similarities with the color field dynamics discussed in the context of the Color Glass Condensate \cite{McLerran:1993ni,Kharzeev:2001yq} and the Glasma \cite{Lappi:2006fp}; hence we think that the elaboration of this example may shed light on the more complicated color-hydrodynamics which may be the right description of the early stages of heavy-ion collisions.
Our analysis of boost-invariant magnetohydrodynamics, where the initial magnetic field is parallel to the collision axis, shows that in the considered system phenomena similar to those in the system consisting of particles only take place. The presence of the fields lowers the longitudinal pressure (which eventually may be negative) and increases the transverse pressure; see a related analysis in \cite{Vredevoogd:2008id}.
\subsection{General formulation}
\label{sect:binv}
Let us first recapitulate the main physical assumptions of locally anisotropic magnetohydrodynamics (for a recent formulation see, for example, \cite{PhysRevE.47.4354,PhysRevE.51.4901}). Let $U^\mu$ be the plasma four-velocity and $F^{\mu \nu}$ be the electromagnetic-field tensor. We define the rest-frame electric and magnetic fields by the following equations
\begin{equation}
E^\mu = F^{\mu \nu} U_\nu,
\label{Emu}
\end{equation}
\begin{equation}
B^\mu = \frac{1}{2} \epsilon^{\mu \alpha \beta \gamma} U_\alpha F_{\beta \gamma},
\label{Bmu}
\end{equation}
where $\epsilon^{\alpha \beta \gamma \delta}$ is a completely antisymmetric tensor with $\epsilon^{0123} = 1$. Eqs. (\ref{Emu}) and (\ref{Bmu}) yield
\begin{equation}
F^{\mu \nu} = E^\mu U^\nu - E^\nu U^\mu + \frac{1}{2} \epsilon^{\mu \nu \alpha \beta} \left(B_\alpha U_\beta - B_\beta U_\alpha \right).
\label{Fmunu1}
\end{equation}
We note that both $E^\mu$ and $B^\mu$ are spacelike and orthogonal to $U^\mu$,
\begin{eqnarray}
E_\mu E^\mu & \leq & 0, \quad E_\mu U^\mu = 0, \label{EU} \\
B_\mu B^\mu & \leq & 0, \quad B_\mu U^\mu = 0.
\label{BU}
\end{eqnarray}
The picture of anisotropic magnetohydrodynamics requires that $U^\mu$ corresponds to the frame where the electric field is absent,
\begin{equation}
E^\mu = 0.
\label{Emu0}
\end{equation}
In this case the Maxwell equations may be written in the form
\begin{equation}
\partial_\mu F^{\mu \nu} = 4 \pi j^\nu,
\label{maxwell1}
\end{equation}
\begin{equation}
\partial_\mu {}^* F^{\mu \nu} = 0,
\label{maxwell2}
\end{equation}
where
\begin{equation}
F^{\mu \nu} = \frac{1}{2} \epsilon^{\mu \nu \alpha \beta} (B_\alpha U_\beta - B_\beta U_\alpha)
\label{Fmunu2}
\end{equation}
and ${}^* F^{\mu \nu}$ is the dual electromagnetic tensor
\begin{equation}
{}^*F^{\mu \nu} = B^\mu U^\nu - B^\nu U^\mu.
\label{Fdual}
\end{equation}
Besides Eqs. (\ref{maxwell1}) -- (\ref{Fdual}) the plasma dynamics is determined by the particle conservation law, see Eq. (\ref{partcons}), the electromagnetic current conservation (the consequence of Eq. (\ref{maxwell1})), and the energy-momentum conservation law for matter and fields. Before we analyze the role of the conservation laws we shall discuss, however, the constraints coming from the boost-invariance.
\subsection{Imposing longitudinal boost-invariance}
\label{sect:boostinv}
Our aim is to construct the boost-invariant field tensors $F^{\mu \nu}(x)$ and ${}^*F^{\mu \nu}(x)$. The condition of boost-invariance requires that the transformed fields at the new spacetime positions are equal to the original fields at those positions. Formally, this condition may be written in the form
\begin{equation}
F^{\mu \nu \, \prime}(x^\prime) = L^\mu_{\,\,\,\alpha} L^\nu_{\,\,\,\beta }
F^{\alpha \beta}(x) = F^{\mu \nu}(x^\prime),
\end{equation}
where $L$ describes the longitudinal Lorentz boost. Similarly, for a boost-invariant four-vector field $A^\mu(x)$ we have
\begin{equation}
A^{\mu \,\prime}(x^\prime) = L^\mu_{\,\,\,\alpha}
A^{\alpha}(x) = A^{\mu}(x^\prime).
\end{equation}
It is easy to check that the four-vectors $U^\mu$ and $V^\mu$ defined by Eqs. (\ref{U}) and (\ref{V}) are invariant under Lorentz boosts with rapidity $\alpha$ along the longitudinal axis, defined by the matrix
\begin{equation}
L^{\mu}_{\,\, \nu}(\alpha) =
\left(
\begin{array}{cccc}
\cosh \alpha & 0 & 0 & \sinh \alpha \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\sinh \alpha & 0 & 0 & \cosh \alpha
\end{array}
\right).
\label{Lmunu}
\end{equation}
Since $U^\mu$ and $V^\mu$ are boost-invariant, the structure of Eqs. (\ref{Fmunu2}) and (\ref{Fdual}) suggests that the boost-invariant formalism follows from the ansatz
\begin{equation}
B^\mu = B V^\mu,
\label{BmuVmu}
\end{equation}
where $B$ is a scalar function depending on $\tau$ and transverse coordinates ${\vec x}_\perp$. Equation (\ref{BmuVmu}) defines the field tensors as the tensor products of the boost-invariant four-vectors, hence, by construction the field tensors are boost-invariant. In addition, we observe that in this case Eq. (\ref{BU}) is automatically fulfilled.
\subsection{Homogeneous dual field equations}
\label{sect:dualeq}
Projection of the homogeneous dual field equations (\ref{maxwell2}) on the four-velocity $U_\nu$ gives
\begin{equation}
V^\mu \partial_\mu B + B \partial_\mu V^\mu - B U_\nu U^\mu \partial_\mu V^\nu = 0.
\label{dualeq1}
\end{equation}
For the boost-invariant systems all terms in (\ref{dualeq1}) are identically zero, hence it is automatically fulfilled. On the other hand, the projection of (\ref{maxwell2}) on the four-vector $V_\nu$ gives
\begin{equation}
U^\mu \partial_\mu \ln \left( \frac{n \tau}{B} \right) = 0,
\label{dualeq2}
\end{equation}
hence $B$ is related to the particle density $n$ and the proper time $\tau$ by the expression
\begin{equation}
B = B_0 \frac{n \tau}{n_0 \tau_0}.
\label{Bntau}
\end{equation}
One may check that with the ansatz (\ref{Bntau}) all four equations in (\ref{maxwell2}) are automatically satisfied, hence Eq. (\ref{Bntau}) is the main piece of information delivered by the homogeneous dual field equations. In particular, the equation ${\vec \nabla} \cdot {\vec B} = 0$ turns out to be equivalent to the continuity equation for the particle number.
Collecting now Eqs. (\ref{BmuVmu}) and (\ref{Bntau}) we find the explicit form of the dual field tensor ${}^* F^{\mu \nu} $,
\begin{widetext}
\begin{eqnarray}
{}^* F^{\mu \nu} = \frac{B_0 n \tau}{n_0 \tau_0} \left(
\begin{array}{cccc}
0 & u_x \sinh\eta & u_y \sinh\eta & -u^0 \\
-u_x \sinh\eta & 0 & 0 & -u_x \cosh\eta \\
-u_y \sinh\eta & 0 & 0 & -u_y \cosh\eta \\
u^0 & u_x \cosh\eta & u_y \cosh\eta & 0
\end{array} \right). \nonumber \\
\end{eqnarray}
\end{widetext}
The structure of the dual tensor allows us to infer the form of the electric and magnetic fields,
\begin{eqnarray}
{\vec B} &=& \frac{B_0 n \tau}{n_0 \tau_0}\left(- u_x \sinh\eta , -u_y \sinh\eta , u^0 \right),
\label{vecB1} \\
{\vec E} &=& \frac{B_0 n \tau}{n_0 \tau_0}\left(- u_y \cosh\eta , u_x \cosh\eta , 0 \right).
\label{vecE1}
\end{eqnarray}
The above structure directly implies that with no transverse expansion, i.e., for $u_x=u_y=0$, only the longitudinal magnetic field is present in the system and, in view of the relation $n=n_0 \tau_0/\tau$ valid in this case, Eq. (\ref{Bntau}) gives a constant field, $B=B_0$.
In our general approach the situation $u_x=u_y=0$ corresponds to the initial condition for the evolution of the system. It resembles the case of the Glasma \cite{Lappi:2006fp}, where a longitudinal chromo-magnetic field is also present; however, in the case of the Glasma the direction of the field is random, with the transverse coherence length set by the saturation scale (another difference is the presence of the longitudinal chromo-electric field in the Glasma). When the transverse expansion starts, due to the presence of the transverse pressure, it initiates the formation of the transverse magnetic and electric fields, which are always perpendicular to each other, ${\vec B} \cdot {\vec E} = 0$. We note, however, that in the local rest frame of the plasma element the only non-zero component is $B_z$.
A more compact form representing the fields ${\vec B}$ and ${\vec E}$ may be achieved if we use the following parameterization of the particle current
\begin{eqnarray}
N^\mu &=& n \left(u^0 \cosh\eta, u_x, u_y, u^0 \sinh\eta \right) \nonumber \\
&=& \left(n \, u^0 \cosh\eta, n_x, n_y, n u^0 \sinh\eta \right).
\label{Nmun}
\end{eqnarray}
Using the quantities $n_x$ and $n_y$ we write
\begin{eqnarray}
{\vec B} &=& \frac{B_0}{n_0 \tau_0}\left(- z\, n_x , - z\, n_y , n\, \tau\, u^0 \right),
\label{vecB2} \\
{\vec E} &=& \frac{B_0}{n_0 \tau_0}\left(- t\, n_y , t\, n_x , 0 \right).
\label{vecE2}
\end{eqnarray}
\subsection{Inhomogeneous field equations}
\label{sect:inhomeq}
We turn now to the inhomogeneous field equations (\ref{maxwell1}). In our approach those equations may be used to determine the electromagnetic current of the system, $j^\mu = (\rho, j_x, j_y, j_z)$. A straightforward calculation, using the form of the magnetic and electric fields given by Eqs. (\ref{vecB2}) and (\ref{vecE2}), leads to the expressions
\begin{eqnarray}
j^0 = \rho &=& \frac{B_0\,t}{n_0 \tau_0} \left(\partial_y n_x - \partial_x n_y \right), \nonumber \\
j^1 = j_x &=& \frac{B_0}{n_0 \tau_0} \left[\tau \partial_y (n u^0) + 2 n_y + \tau \partial_\tau n_y \right], \nonumber \\
j^2 = j_y &=& \frac{B_0}{n_0 \tau_0} \left[-\tau \partial_x (n u^0) - 2 n_x - \tau \partial_\tau n_x \right], \nonumber \\
j^3 = j_z &=& \frac{B_0\,z}{n_0 \tau_0} \left(\partial_y n_x - \partial_x n_y \right).
\label{jmu}
\end{eqnarray}
One may check by an explicit calculation that the electromagnetic four-current $j^\mu$ defined by Eq. (\ref{jmu}) is conserved, as required by Eq. (\ref{maxwell1}).
In the magnetohydrodynamic approach one usually assumes that matter is neutral. In our case, the neutrality condition $\rho = 0$ implies that the flow must be irrotational, i.e., the following equation should be satisfied
\begin{equation}
\partial_y n_x - \partial_x n_y = 0.
\label{rotless}
\end{equation}
In this case also the longitudinal component of the electromagnetic current vanishes, which means that the non-zero current circulates around the $z$-axis.
An explicit calculation with the magnetic and electric fields given by Eqs. (\ref{vecB2}) and (\ref{vecE2}) also shows that
\begin{equation}
{\vec E} + {\vec v} \times {\vec B} = 0.
\label{Ohm1}
\end{equation}
This is nothing else but the non-covariant version of the condition (\ref{Emu0}).
\subsection{Conservation laws}
\label{sect:conlaw}
Besides the Maxwell equations, the equations of magnetohydrodynamics include the conservation laws for the particle number, the electromagnetic current (following directly from Eq. (\ref{maxwell1})), and the energy-momentum of the combined system consisting of matter and fields. The total energy-momentum conservation law may be written in the form
\begin{equation}
\partial_\mu {\hat T}^{\mu \nu} = 0,
\label{enmomconhat}
\end{equation}
where the energy-momentum tensor, ${\hat T}^{\mu \nu}$, including the contributions from matter and fields has the structure
\begin{eqnarray}
{\hat T}^{\mu \nu} &=& \left(\varepsilon + P_T + \frac{B^2}{4\pi} \right) U^\mu U^\nu
-\left(P_T + \frac{B^2}{8\pi} \right) g^{\mu \nu} \nonumber \\
&+& \left(P_L - P_T - \frac{B^2}{4\pi} \right) V^\mu V^\nu.
\label{Thatmunu}
\end{eqnarray}
The tensor (\ref{Thatmunu}) may be reduced to the form (\ref{Tmunudec}) if we introduce the following variables:
\begin{eqnarray}
{\hat \varepsilon} &=& \varepsilon + \frac{B^2}{8 \pi}
= \varepsilon + {\bar \varepsilon}, \nonumber \\
{\hat P_T} &=& P_T + \frac{B^2}{8 \pi} = P_T + {\bar P}_T, \nonumber \\
{\hat P_L} &=& P_L - \frac{B^2}{8 \pi} = P_L + {\bar P}_L.
\label{hatvariables}
\end{eqnarray}
Clearly, the variables with a hat describe the sum of the matter and field contributions to the total energy density and transverse/longitudinal pressures (the field contributions are marked with a bar).
\begin{figure}[t]
\begin{center}
\subfigure{\includegraphics[angle=0,width=0.45\textwidth]{RATIOS1c10.eps}} \\
\subfigure{\includegraphics[angle=0,width=0.45\textwidth]{RATIOS1c50.eps}}
\end{center}
\caption{(Color online) The ratios: ${\hat P}_L/{\hat P}_T$ (solid red lines), ${\hat P}_L/{\hat \varepsilon}$ (decreasing blue dashed lines), and ${\hat P}_T/{\hat \varepsilon}$ (increasing blue dashed lines) shown as functions of the variable $x$, \mbox{{\bf a)} the results} for the distribution function (\ref{aBoltz1}) and $c=0.1$, {\bf b)} the same for $c=0.5$.}
\label{fig:ratiosc}
\end{figure}
Following the same method as that introduced in Ref. \cite{Florkowski:2008ag} we find that the energy-momentum conservation leads to the differential equation
\begin{equation}
d{\hat \varepsilon} = \frac{{\hat \varepsilon}+{\hat P_T} }{n} dn +
\frac{{\hat P_T} - {\hat P_L} }{\tau} d\tau.
\label{dhateps}
\end{equation}
This is exactly the structure that implies that the energy density and pressures are of the form (\ref{epsilon2})--(\ref{PL2}). Hence we may immediately write
\begin{eqnarray}
{\hat \varepsilon} &=& \left(\frac{n}{g} \right)^{4/3} {\hat R}(x),
\label{epsilon3}
\end{eqnarray}
\begin{eqnarray}
{\hat P}_T &=& \left(\frac{n}{g} \right)^{4/3}
\left[\frac{{\hat R}(x)}{3} + x {\hat R}^\prime(x) \right],
\label{PT3}
\end{eqnarray}
\begin{eqnarray}
{\hat P}_L &=& \left(\frac{n}{g} \right)^{4/3}
\left[\frac{{\hat R}(x)}{3} - 2 x {\hat R}^\prime(x) \right],
\label{PL3}
\end{eqnarray}
where the complete relaxation function for matter and fields equals
\begin{equation}
{\hat R}(x) = R(x) + c_0 x^{2/3},
\label{hatR}
\end{equation}
with the parameter $c_0$ defined by the equation
\begin{equation}
c_0 = \frac{B_0^2}{8\pi} \left( \frac{g}{n_0}\right)^{4/3} x_0^{-2/3}.
\label{c0}
\end{equation}
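A direct substitution (ours) shows that the extra term in Eq. (\ref{hatR}) indeed represents the field contribution of Eq. (\ref{hatvariables}): using $B = B_0 n\tau/(n_0\tau_0)$ and $x = x_0\, n\tau^3/(n_0\tau_0^3)$ one finds
\begin{equation}
\left(\frac{n}{g}\right)^{4/3} c_0\, x^{2/3}
= \frac{B_0^2}{8\pi} \frac{n^2 \tau^2}{n_0^2 \tau_0^2} = \frac{B^2}{8\pi} = {\bar \varepsilon},
\end{equation}
while inserting ${\bar R}(x) = c_0 x^{2/3}$ into Eqs. (\ref{PT3}) and (\ref{PL3}) yields ${\bar P}_T = B^2/(8\pi)$ and ${\bar P}_L = -B^2/(8\pi)$, as required.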
In Fig. \ref{fig:ratiosc} we show the ratios ${\hat P}_L/{\hat P}_T$ (solid red lines), ${\hat P}_L/{\hat \varepsilon}$ (decreasing blue dashed lines), and ${\hat P}_T/{\hat \varepsilon}$ (increasing blue dashed lines) as functions of the variable $x$. Similarly to the case without the magnetic field, we observe that the ratio of the longitudinal and transverse pressures decreases with $x$. A new feature of the case with the field is, however, that this ratio may become negative. This behavior reflects the negative contribution of the field pressure $ {\bar P}_L = -B^2/(8\pi)$ to the total pressure ${\hat P}_L$. It becomes dominant for large values of $x$, where the matter contribution, growing as $x^{1/6}$, may be neglected compared with the field contribution, growing as $x^{2/3}$.
In view of our discussion in Sect. II, the variable $x$ depends monotonically on time; hence the $x$ dependence to a large extent reflects the time evolution of the studied ratios. If the initial conditions assume a very small value of $x_0$ (and consequently of the initial $x$), the system initially has a total longitudinal pressure larger than the transverse pressure ${\hat P}_T$. The time evolution tends to equilibrate and then to invert the ratio of the two pressures. The time scale for this process is determined by the initial value of the field, $B_0$, as can be noticed by comparing the upper and lower parts of Fig. \ref{fig:ratiosc}.
We close this Section with the following remark. Since $B$ is a function of $n$ and $\tau$ (Eq. (\ref{Bntau}) implies $dB/B = dn/n + d\tau/\tau$), we may rewrite Eq. (\ref{dhateps}) in the equivalent form
\begin{equation}
d{\varepsilon} = \frac{{\varepsilon}+{P_L} }{n} dn +
\frac{{P_T} -{P_L} }{B} dB.
\label{dhatepsnew}
\end{equation}
This equation displays the dependence of the energy density $\varepsilon$ on the particle density $n$ and the magnetic field $B$. An equation of the form $\varepsilon = \varepsilon(n,B)$ plays the role of an equation of state. For boost-invariant systems the functional dependence $\varepsilon(n,B)$ may be changed to the non-trivial dependence of $\varepsilon$ on $n$ and $\tau$, as introduced in Ref. \cite{Florkowski:2008ag}.
\section{Conclusions}
In this paper we have developed the formalism introduced in Ref. \cite{Florkowski:2008ag} by discussing i) the system described by the anisotropic distribution function and ii) the system of partons interacting with local magnetic fields. The presented results may be used to analyze anisotropic systems formed in relativistic heavy-ion collisions. In particular, they may be used to find anisotropic neutral distribution functions which form the background for the field instabilities possibly responsible for the genuine thermalization/isotropization. In addition, our analysis indicates that the process of stable isotropization may require that the assumptions of boost-invariance and/or entropy conservation be relaxed.
\medskip
Acknowledgements: We thank W. Broniowski and \mbox{St. Mr\'owczy\'nski} for helpful discussions and critical comments.
\section{Introduction}
No test of Bell's inequalities \cite{Bell:1964, Bell:1971} to date has been free
of ``loopholes''. This means that, despite the high levels of statistical
significance frequently achieved, violations of the inequalities could be the
effects of experimental bias of one kind or another, not evidence for the presence of
quantum entanglement. Recent proposed experiments by Garc\'{\i}a-Patr\'{o}n
\etal~\cite{Garc:2004} and Nha and Carmichael \cite{Nha:2004} show
promise of being genuinely free from such problems. If the world in fact
obeys local realism, they should \textit{not}, therefore, infringe any Bell inequality.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.6in,height=2.6in]{ThompsonFig1.eps}}
\caption{Proposed experimental set-up.
In the current text, phase shifts $\theta $ and $\phi $ are renamed $\theta
_{A}$ and $\theta _{B}$. For further explanation, see main text.
(\textit{Reprinted with permission from Garc\'{\i}a-Patr\'{o}n et al.,
Phys.~Rev.~Lett.~}\textbf{\textit{93}}\textit{, 130409 (2004).
Copyright (2004) by the American Physical Society.
})}
\label{fig1}
\end{figure}
The current article discusses a classical model that should be able, once all
relevant details are known, to explain the results, accounting not only for the failure to
infringe the selected Bell inequality but also for other discrepancies in the detailed predictions.
It depends on the classical theory for homodyne detection (re-derived here from first
principles) and the known behaviour of (degenerate-mode) optical parametric
amplifiers (OPA) \cite{Walls:1994}.
As far as the ``loophole-free'' status of the proposed experiments is
concerned, there would appear to be no problem. A difficulty that seems
likely to arise, though, is that theorists may not agree that the test beams
used were in fact ``non-classical'', so the failure to infringe a Bell
inequality will not in itself be interpreted as showing a failure of quantum
mechanics\footnote{
The predicted violation is in any case small, so failure may be put
down to other ``experimental imperfections''.}.
The criterion to be used to establish the non-classical nature of the light is
the observation of negative values of the Wigner density, and there is reason
to think that, even if the standard method of estimation seems to show that
these are achieved, the method may be in error. Wigner density is,
in any event, irrelevant to our model. Far from being, as suggested by
Garc\'{\i}a-Patr\'{o}n and others, the ``hidden variable'' needed, it plays no
part whatsoever.
Regardless of the outcome of the Bell tests, and whether or not the light is
declared to be non-classical, there are features of the experiments that can
usefully be exploited to compare the strengths of quantum mechanics versus
(updated) classical theory as viable models. The two theories approach the
situation from very different angles. Classical theory traces the causal
relationships between phenomena, starting with the simplest assumption and
building in random factors later where necessary. Quantum mechanics starts
with models of complete ensembles, all random factors included. This, it is
argued, is inappropriate, since two features of the proposed experiments
demand that we consider the behaviour of individual events, not whole
ensembles: the process of homodyne detection itself, and the Bell test.
\section{The proposed experiments}
The experimental set-up proposed by Garc\'{\i}a-Patr\'{o}n \textit{et al.} is shown
in Fig.~1, that of Nha and Carmichael being similar. In the words of the
Garc\'{\i}a-Patr\'{o}n \textit{et al.}~proposal:
\begin{quotation}
\noindent
The source (controlled by Sophie) is based on a master laser beam, which
is converted into second harmonic in a nonlinear crystal (SHG). After
spectral filtering (F), the second harmonic beam pumps an optical parametric
amplifier (OPA) which generates two-mode squeezed vacuum in modes A and B.
Single photons are conditionally subtracted from modes A and B with the use
of the beam splitters BS$_{A}$ and BS$_{B}$ and single-photon detectors
PD$_{A}$ and PD$_{B}$. Alice (Bob) measures a quadrature of mode A (B) using
a balanced homodyne detector that consists of a balanced beam splitter
BS$_{3}$ (BS$_{4 })$ and a pair of highly-efficient photodiodes. The local
oscillators LO$_{A}$ and LO$_{B}$ are extracted from the laser beam by means
of two additional beam splitters BS$_{1}$ and BS$_{2}$.
\end{quotation}
The classical description, working from the same figure, is just a little
different. Quantum theoretical terms such as ``squeezed vacuum'' and
``quadrature''\footnote{The usual model for the electric field as the sum of two
orthogonal quadratures is appropriate where there is no base-line for the phase
but not, as here, where all phases concerned are defined and measured relative to
a definite base, namely that of the master laser. As will be seen, it it phase differences
of $180^{\circ}$, not $90^{\circ}$, that feature in the classical model.}
are not used since they are not appropriate to the model and would cause confusion.
The description might run as follows:
The master laser beam (which is, incidentally, pulsed) is frequency-doubled
in the crystal SHG. After filtering to remove the original frequency, the
beam is used to pump the OPA, a resonant cavity containing a nonlinear crystal
cut so as to produce degenerate parametric down-conversion of the input. The
output comprises pairs of classical wave pulses at half the input frequency,
i.e.~at the original laser frequency.
The selection of pairs for analysis is done by splitting each output at an
unbalanced beamsplitter (BS$_{A}$ or BS$_{B})$, the smaller parts going to
sensitive detectors PD$_{A}$ or PD$_{B}$. Only if there are detections at
both PD$_{A}$ and PD$_{B}$ is the corresponding homodyne detection included
in the analysis. The larger parts proceed to balanced homodyne detectors,
i.e.~ones in which the intensities of local oscillator and test inputs are
approximately equal. The source of the local oscillators LO$_{A}$ and
LO$_{B}$ is the same laser that stimulated, after frequency doubling, the
production of the test beams.
\section{Homodyne detection}
In (balanced) homodyne detection, the test beam is mixed at a beamsplitter
with a local oscillator beam of the same frequency and the two outputs sent
to photodetectors that produce voltage readings for every input pulse. In
the proposed ``loophole-free'' Bell tests the difference between the two
voltages will be converted into a digital signal by counting all positive
values as +1, all negative as --1.
\begin{figure}[htbp]
\centerline{\includegraphics[width=1.5in,height=1.5in]{ThompsonFig2.eps}}
\caption{Inputs and outputs at the beamsplitter in a homodyne detector.
E$_{L }$ is the local oscillator beam, E the test beam, E$_{t }$ and E$_{r}$
the transmitted and reflected beams respectively.}
\label{fig2}
\end{figure}
Assuming the inputs are all classical waves of the same frequency and there
are no losses, it can be shown (see below) that the difference between the
intensities of the two output beams is proportional to the product of the
two input amplitudes multiplied by $\sin \theta$, where $\theta$ is the phase
difference between the test beam and local oscillator. If voltages are
proportional to intensities then it follows that the voltage difference will
be proportional to $\sin \theta $. When digitised, this transforms to a step function,
taking the value $-1$ for $-\pi < \theta < 0$ and $+1$ for $0 < \theta < \pi $.
(The function is not well defined for integral multiples of $\pi$.)
\subsection{Classical derivation of the predicted voltage difference}
Assume the test and local oscillator signals have the same frequency,
\textit{$\omega $}, the time-dependent part of the test signal being modelled by $e^{i\phi }$,
where (ignoring a constant phase offset\footnote{The phase offset depends on the difference in optical path lengths,
which will not in practice be exactly constant due to thermal oscillations. If a
complete model is ever constructed, this should therefore be a parameter.})
\textit{$\phi =\omega t$} is the phase angle,
and the local oscillator and test beam phases differ by \textit{$\theta $}. [Note that
although complex notation is used here, only the real part has meaning: this
is an ordinary wave equation, not a quantum-mechanical ``wave function''. To
allay any doubts on this score, the derivation is partially repeated with no
complex notation in the Appendix.]
Let the electric fields of the test signal, local oscillator and reflected and transmitted signals
from the beamsplitter have amplitudes $E$, $E_{L}$, $E_{r}$ and $E_{t}$ respectively, as
shown in Fig.~\ref{fig2}. Then, after allowance for phase delays of $\pi/2$ at each
reflection and assuming no losses, we have
\begin{equation}
\label{eq1}
E_r = \frac{1}{\sqrt 2} (Ee^{i(\phi +\pi /2)}+E_L e^{i(\phi +\theta )})
\end{equation}
and
\begin{equation}
\label{eq2}
E_t = \frac{1}{\sqrt 2} (Ee^{i\phi }+E_L e^{i(\phi +\theta +\pi /2)}).
\end{equation}
The intensity of the reflected beam is therefore
\begin{eqnarray}
\label{eq3}
E_r E_r^\ast & = & \frac{1}{2} (Ee^{i(\phi +\pi /2)}+E_L e^{i(\phi +\theta )})(Ee^{-i(\phi
+\pi /2)}
\nonumber \\
& + & E_L e^{-i(\phi +\theta )})
\nonumber \\
& = & \frac{1}{2}(E^2+E_L ^2+EE_L e^{i(\pi /2-\theta )}+EE_L e^{-i(\pi /2-\theta )})
\nonumber \\
& = & \frac{1}{2} (E^2+E_L ^2+2EE_L \cos (\pi /2-\theta ))
\nonumber \\
& = & \frac{1}{2} (E^2+E_L ^2+2EE_L \sin \theta ).
\end{eqnarray}
Similarly, it can be shown that the intensity of the transmitted beam is
\begin{equation}
\label{eq4}
E_t E_t^\ast = \frac{1}{2} (E^2+E_L ^2-2EE_L \sin \theta ).
\end{equation}
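Note that adding Eqs. (\ref{eq3}) and (\ref{eq4}) gives $E_r E_r^\ast + E_t E_t^\ast = E^2 + E_L^2$, confirming that the lossless beamsplitter conserves the total intensity; the $\sin \theta$ terms merely redistribute it between the two outputs.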
If the voltages registered by the photodetectors are proportional to the
intensities, it follows that the difference in voltage is proportional to
$2EE_L \sin \theta$. When digitised, this translates to the
step function mentioned above. The probabilities for the two possible
outcomes are, as shown in Fig.~\ref{fig3},
\begin{equation}
\label{eq5}
p_{-} = \left\{
\begin{array}{ll}
1 & \mbox{for $-\pi < \theta < 0$} \\
0 & \mbox{for $0 < \theta < \pi$}
\end{array}
\right.
\end{equation}
and
\begin{equation}
\label{eq6}
p_{+} = \left\{
\begin{array}{ll}
0 & \mbox{for $-\pi < \theta < 0$} \\
1 & \mbox{for $0 < \theta < \pi$}
\end{array}
\right.
\end{equation}
Note that the probabilities are undefined for integral multiples of $\pi$.
In practice it would be reasonable to assume that, due to the presence of noise,
all the values were 0.5, but for the present purposes the integral values will
simply be ignored.
\begin{figure}[htbp]
\centerline{\includegraphics[width=1.8in,height=1.8in]{ThompsonFig3.eps}}
\caption{Probabilities of `+' and `--' outcomes versus phase difference,
using digitised difference voltages from a perfect, noise-free, balanced
homodyne detector.}
\label{fig3}
\end{figure}
\section{Application to the proposed Bell tests}
If the frequencies and phases of both test beams and both local oscillators
were all identical apart from the applied phase shifts, the experiments would
be expected to produce step function relationships between counts and
applied shifts both for the individual (singles) counts and for the
coincidences.
It may safely be assumed that this is not what is observed. It would have
shown up in the preliminary trials on the singles counts (see ref.~\cite{Wenger:2004}),
which would have followed something suggestive of the basic predicted step
function as the local oscillator phase shift was varied. What is observed in
practice is more likely to be similar to the results obtained by Breitenbach
\textit{et al.} \cite{Breitenbach:1995}. Their Fig.~6a, reproduced here as
Fig.~\ref{fig4}, shows a distribution of photocurrents that is clustered around zero,
for $\theta$ taking integer multiples of $\pi$, but is scattered fairly equally among
positive and negative values in between.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.7in,height=1.8in]{ThompsonFig4.eps}}
\caption{A typical scatter of ``noise amplitude'' (related to voltage difference)
as phase $\theta$ is scanned over time. Minimum scatter occurs for $\theta$ taking
integral multiples of $\pi$. (\textit{Reprinted from G. Breitenbach \etal,
{\it J.~Opt.~Soc.~Am. B}~\textbf{12} 2304 (1995).})}
\label{fig4}
\end{figure}
When digitised, the distribution would reduce to two straight horizontal
lines, showing that for each choice of $\theta $ there is an equal chance of
a `$+$' or a `$-$' result. As in any other Bell test setup, though, the absence
of variations in the singles counts does not necessarily mean there is no variation in
the coincidence rates. As explained in the next section, however, the
coincidence curves are not the zig-zag ones of standard classical
theory. These would be expected if we had \textit{full}
``rotational invariance''\footnote{``Rotational invariance'' means the hidden variable
takes all possible values with equal probability. In the current context, if the
experiment does indeed produce high visibility coincidence curves, the hidden variable
responsible will be the common phase difference between test beams and local
oscillators before application of the phase shifts (``detector settings'')
$\theta_A$ and $\theta_B$. It is argued that in the proposed experiment there will
be at best \textit{approximate} rotational invariance, if there is appreciable
variation in the (again common) frequency.}. If the ideas presented here are
correct, we have instead, in the language of an article by the
present author \cite{Thompson:1999}, only \textit{binary} rotational invariance.
Breitenbach's scatter of photocurrent differences is
seen as evidence that the relative phase can (under perfect conditions) take
just one of two values, 0 or $\pi$. The scatter is formed from a superposition
of two sets of points, corresponding to two sine curves that are out of phase,
together with a considerable amount of noise.
This interpretation accords well with more comprehensive results of the experiment as
reported elsewhere \cite{Breitenbach:1997}. When part of the initial laser beam is used
as ``seed'' to the OPA, judicious adjustments of the phase can produce ``bright squeezed
light'' and a scatter with alternately positive and negative points. The presence of the
seed causes selection of one particular phase set.
The two ``phase sets'' arise from the way in which the pulses are produced,
which involves, after the frequency doubling, the \textit{degenerate} case of
parametric down-conversion, the latter producing pulses that are
(in contrast to the general case of conjugate frequencies) of
\textit{exactly equal frequency}. Consider an initial pump laser of frequency
$\omega$. In the proposed experiment, this will be doubled in the crystal SHG
to 2$\omega$ then halved in the OPA back to $\omega$. At the frequency doubling
stage, one laser input wave peak gives rise to two output ones. Assuming that there
are causal mechanisms involved, it seems inevitable that every other wave peak
of the output will be exactly in phase with the input. When we now use this
input to produce a down-conversion, the outputs will be in phase either with
the even or with the odd peaks of the input, which will make them either in
phase or exactly out of phase with the original laser. [The matter can alternatively
be approached mathematically, as per ref.~\cite{Walls:1994}, where it is treated as
resonance in a nonlinear situation in which there are two solutions.]
We thus have two classes of output, differing in phase by $\pi$. If we
define the random variable $\alpha$ to be 0
for one class, $\pi$ for the other, this will clearly be an important
``hidden variable'' of the system.
The existence of two classes of output of exactly equal frequency and exactly
opposite phase may well be a feature common to a number of different experiments
employing degenerate parametric down-conversion sources.
One example is discussed in ref.~\cite{Thompson:1999},
namely the Bell test experiment conducted by Weihs \etal~\cite{Weihs:1998},
but there are many more, and further examples, not all concerned
with Bell tests, will be given in later papers. Accepted theory is handicapped by
various pre-conceptions. In some cases, for example ref.~\cite{Tittel:1998}, the
assumption is made that even in the degenerate case the output pair have conjugate, not identical,
frequencies. (If this is the case in the proposed experiment, though, it will severely
reduce the visibility of any coincidence curve observed when the experimental beam is
mixed back with the source laser in the homodyne detector.)
In other cases the use of the standard model in terms of orthogonal
quadratures leads to neglect of more appropriate models.
As regards the possibility of any difference in frequency between the test beam
and the master laser, the preliminary experiments for the Garc\'{\i}a-Patr\'{o}n proposal
\cite{Wenger:2004}, using just one output beam, may already be sufficient to show that
the interference is stronger than a frequency difference would allow. It is known that
the source laser has quite a broad bandwidth, i.e.~that $\omega _{0}$ is not constant.
Though it is likely that only part of the pump spectrum induces a down-conversion, so
that the bandwidth of the test beam may be considerably narrower than that of the pump,
it too is non-zero. It follows that any agreement of frequency between the master laser
and the test beam must be because we are always dealing, in the degenerate case, with
\textit{exact} frequency doubling and
halving.
\section{A classical model of the proposed Bell test}
In the proposed Bell test of Garc\'{\i}a-Patr\'{o}n \textit{et al.}, positive voltage
differences will be treated as $+1$, negative as $-1$. Applying this version of
homodyne detection
to both branches of the experiment, the CHSH test ($ -2 \leq S \leq 2$) will
then be applied to the coincidence counts. Under quantum theory it is
expected that, so long as ``non-classical'' beams are employed, the test
will be violated. However, since there are no obvious loopholes in the
actual Bell test (see later), there is no apparent reason in our model
why local realism should not win: the test should
\textit{not} be violated. In the classical view, this prediction is unrelated to any
supposed non-classical nature of the light.
\subsection{The basic local realist model}
If we take the simplest possible case, in which to all intents and purposes
all the frequencies involved are the same, the hidden variable in the local
realist model is clearly going to be the phase difference ($\alpha = 0$ or $\pi$)
between the test signal and the local oscillator. If high visibility
coincidence curves are seen, it must be because the values of $\alpha$ are identical for
the A and B beams. Assuming no noise, the basic model is easily written down.
From equation (\ref{eq5}), the probability of a $-1$ outcome on side A is
\begin{equation}
\label{eq7}
p_{-} (\theta _{A}, \alpha )= \left\{
\begin{array}{ll}
1 & \mbox{for $-\pi < \theta _{A} - \alpha < 0$} \\
0 & \mbox{for $ 0 < \theta _{A} - \alpha < \pi $},
\end{array}
\right.
\end{equation}
\noindent
where $\theta _{A}$ is the phase shift applied to the local oscillator A,
$\alpha$ is the hidden variable and all angles are reduced modulo $2\pi$.
Similarly, the probability of a +1 outcome is
\begin{equation}
\label{eq8}
p_{+} (\theta _{A}, \alpha )= \left\{
\begin{array}{ll}
0 & \mbox{for $-\pi < \theta _{A} - \alpha < 0$} \\
1 & \mbox{for $ 0 < \theta _{A} - \alpha < \pi $}.
\end{array}
\right.
\end{equation}
Assuming equal probability $\frac{1}{2}$ for each of the two possible values of
$\alpha$, the standard ``local realist'' assumption that independent
probabilities can be multiplied to give coincidence ones leads to a
predicted coincidence rate of
\begin{eqnarray}
\label{eq9}
P_{++} (\theta _A ,\theta _B)& = & \frac{1}{2} p_+ (\theta _A ,0)p_+ (\theta _B
,0)
\nonumber \\
& + & \frac{1}{2} p_+ (\theta _A ,\pi )p_+ (\theta _B ,\pi ),
\end{eqnarray}
with similar expressions for $P_{+-}$, $P_{-+ }$ and $P_{- -}$.
\noindent The result for $\theta _{A}=\pi/2$, for example, is
\begin{equation}
\label{eq10}
P_{++} (\pi /2, \theta _{B}) = \left\{
\begin{array}{ll}
0 & \mbox{for $-\pi < \theta _{B} < 0$}\\
1/2 & \mbox{for $0 < \theta _{B} < \pi$}.
\end{array}
\right.
\end{equation}
For $\theta _{A}=-\pi /2$ it is
\begin{equation}
\label{eq11}
P_{++} (-\pi /2, \theta _{B}) = \left\{
\begin{array}{ll}
1/2 & \mbox{for $-\pi < \theta _{B} < 0$}\\
0 & \mbox{for $0 < \theta _{B} < \pi$}.
\end{array}
\right.
\end{equation}
Note that, as illustrated in Fig.~\ref{fig5}, the coincidence probabilities \textit{cannot}, in this
basic model, be expressed as functions of the difference in detector
settings, $\theta _{B}- \theta _{A}$. This failure, marking a significant deviation
from the quantum mechanical prediction, is an inevitable consequence of the fact that
we have (as mentioned earlier) only binary, not full, rotational invariance.
\begin{figure}[htbp]
\centerline{\includegraphics[width=1.8in,height=2.7in]{ThompsonFig5.eps}}
\caption{Predicted coincidence curves for the ideal experiment.
(a) and (b) illustrate the settings most likely to be chosen in practice,
giving the strongest correlations. $\theta _{A}$ is fixed at $\pi/2$ or
$-\pi/2$ while $\theta _{B }$ varies. In theory, any value of $\theta
_{A}$ between 0 and $\pi$ would give the same curve as (a), any between
$-\pi$ and 0 the same as (b). An example is shown in (c), where $\theta
_{A }$ is $\pi/4$ but the curve is identical to (a). We do not have
rotational invariance: the curve is not a function of $\theta _{B }-\theta _{A}$.
}
\label{fig5}
\end{figure}
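For concreteness, the basic model is easily evaluated numerically. The following
short script (a minimal illustrative sketch, not part of the proposal itself; Python
is used purely for illustration) implements eqns. (\ref{eq8}) and (\ref{eq9}) and
scans random detector settings for the CHSH statistic $S$:
\begin{verbatim}
import math, random

def p_plus(theta, alpha):
    # Eq. (8): the outcome is deterministic given the hidden variable;
    # '+' occurs exactly when 0 < theta - alpha < pi (mod 2*pi).
    d = (theta - alpha) % (2.0 * math.pi)
    return 1.0 if 0.0 < d < math.pi else 0.0

def corr(theta_a, theta_b):
    # Expectation of the product of +/-1 outcomes, averaging over the
    # hidden variable alpha in {0, pi} with probability 1/2 each (eq. (9)).
    e = 0.0
    for alpha in (0.0, math.pi):
        sa = 2.0 * p_plus(theta_a, alpha) - 1.0
        sb = 2.0 * p_plus(theta_b, alpha) - 1.0
        e += 0.5 * sa * sb
    return e

rng = random.Random(0)
s_max = 0.0
for _ in range(10000):  # scan random choices of the four settings
    a1, a2, b1, b2 = (rng.uniform(-math.pi, math.pi) for _ in range(4))
    s = corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)
    s_max = max(s_max, abs(s))
print(s_max)  # prints 2.0
\end{verbatim}
The scan confirms that $|S|$ reaches, but never exceeds, the classical bound of 2,
in accordance with the prediction that the test should not be violated.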
\subsection{Fine-tuning the model}
Many practical considerations mean that the final local realist prediction
will probably not look much like the above step function. It may not even be
quite periodic. The main logical difference is that, despite all that has
been said so far, the actual variable that is set for the local oscillators
is not directly the phase shift but the path length, and, since the
frequency is likely to vary slightly from one signal to the next (though
always keeping the same as that of the pump laser), the actual phase
difference between test and local oscillator beams will depend on the path
length difference \textit{and} on the frequency. In a complete model, therefore, the
important parameters will be path length and frequency, with phase derived
from these.
If frequency variations are sufficiently large, the situation may approach
one of rotational invariance (RI), but it seems on the face of it unlikely
that this can be achieved without loss of correlation. If we do have RI,
perhaps produced artificially by introducing random phase variations into the
OPA pump beam, the model becomes the standard realist one in which the predicted quantum
correlation varies linearly with difference in phase settings, but it is
more likely that what will be found is curves that are \textit{not} independent of the
choice of individual phase setting. They will be basically the predicted
step functions but converted to curves as the result of the addition of
noise.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.0in,height=1.0in]{ThompsonFig6.eps}}
\caption{Likely appearance of coincidence curves in a real experiment with
moderate noise.}
\label{fig6}
\end{figure}
It is essential to know the actual experimental conditions. Several relevant
factors can be discovered by careful analysis of the variations in the raw
voltages in each homodyne detection system. If noise is low, the presence of
the two phase sets, and whether or not they are equally represented, should
become apparent.
All this complexity, though, has no bearing on the basic fact of the existence
of a hidden variable model and the consequent prediction that the CHSH Bell test
will not be violated.
\subsection{The role of the ``event-ready'' detectors}
In the quantum-mechanical theory, the expectation of violation of the Bell
test all hinges on the production of ``non-classical'' light. The light
directly output from the crystal OPA is assumed to be Gaussian, i.e.~it
takes the form of pulses of light that have a Gaussian intensity profile and
also, as a result of Fourier theory, a Gaussian spectrum. When this is
passed through an unbalanced beamsplitter (BS$_{A}$ or BS$_{B}$) and a
``single photon'' detected by an ``event-ready'' detector, the theory says
that the subtraction of one photon leaves the remaining beam
``non-Gaussian''. Although there is mention here of single photons, the
theory is concerned with the ensemble properties of the complete beams, not
with the individual properties of its constituent pulses.
In the local realist (classical) model considered here, the shapes of the spectra
are not relevant except insofar as a narrow band width is desirable for the
demonstration of dramatic correlations. The ``event-ready detectors'' (PD$_{A}$
and PD$_{B}$ in Fig.~\ref{fig1}) play, instead, the important role of selecting
for analysis only the strongest down-converted output pairs, it being assumed that
the properties of the transmitted and reflected light at the unbalanced beamsplitters
are identical apart from their amplitudes. It is likely that those detected signals
that are coincident with each other will be genuine ``degenerate'' ones,
i.e.~of exactly equal frequency, quasi-resonant with the pump laser. The unbalanced
beamsplitters and the detectors PD$_{A}$ and PD$_{B}$ need to be set so that
the intensity of the detected part is sufficient to be above the minimum for
detection but low enough to ensure that all but the strongest pulses are
ignored.
In neither theory are the event-ready detectors really needed in their
``Bell test'' role of ensuring a fair test (see below), since the homodyne
detectors are almost 100{\%} efficient.
\section{Validity of the proposed Bell test}
Coincidences between the digitised voltage differences will be used in the
CHSH Bell test \cite{Clauser:1969, Thompson:1996}, but avoiding the ``post-selection''
that has, since
Aspect's second experiment \cite{Aspect:1982}, become customary.
The Garc\'{\i}a-Patr\'{o}n \textit{et al.}~proposal is to use event-ready detectors,
as recommended by Bell himself for use in real experiments \cite{Clauser:1978}.
None of the usual loopholes \cite{Thompson:2003} are expected to be applicable:
\begin{enumerate}
\item With the use of the event-ready detectors, non-detections are of little concern.
The detectors (PD$_{A}$ and PD$_{B}$ in Fig.~\ref{fig1}) act to define the sample to be analysed,
and the fact that they do so quite independently of whether or not any member of the sample
is then also detected in coincidence at the homodyne detectors ensures that no bias is
introduced here. The estimate of ``quantum correlation''\footnote{In the derivation of Bell inequalities such as the CHSH
inequality, the statistic required is simply the expectation value of the product of the
outcomes. This is commonly referred to in this context as the ``quantum correlation''.
Only when there are no null outcomes, possible outcomes being restricted to just $+1$
or $-1$, does it coincide with ordinary statistical correlation.}
to be used in calculating the
CHSH test statistic is $E = (N_{++AB} + N_{--AB} - N_{+-AB} - N_{-+AB}) / N_{AB}$, where
the $N$'s are coincidence
counts and the subscripts are self-explanatory. This contrasts with the usual method, in
which the denominator used is not $N_{AB}$ but the sum of observed coincidences,
$N_{++AB} + N_{--AB} + N_{+-AB} + N_{-+AB}$. The use of the latter can readily be shown
to introduce bias unless it can be assumed that the sample of detected pairs is a fair one.
That such an assumption fails in some plausible local realist models has been known
since 1970 or earlier \cite{Thompson:1996, Pearle:1970}.
\item There is no possibility of synchronisation problems \cite{Fine:1982}, since a pulsed source is used.
\item No ``accidentals'' will be subtracted \cite{Thompson:2003}.
\item The ``locality'' loophole can be closed by using long paths and a random system for
choosing the ``detector settings'' (local oscillator phase shifts) during the propagation of
the signals.
\end{enumerate}
The system is almost certainly not going to be ``rotationally invariant''
(not all phase differences will be equally likely), but this will not
invalidate the Bell test. It may, however, be important in another way. It
is likely that high visibilities will be observed in the coincidence curves
(i.e.~high values of $(\max-\min)/(\max+\min)$ in plots of coincidence rate
against difference in phase shift), leading to the impression that the Bell
test ought to be violated. These visibilities, though, will depend on
the absolute values of the individual settings. High ones will be balanced by low,
with the net effect that violation does not in fact happen.
\section{Validity and significance of negative estimates for Wigner densities}
In the quantum mechanical theory discussed in the loophole-free Bell test
proposals and in other recent papers \cite{Lvovsky:2001,Wenger:2004},
part of the evidence that is put forward as indicating that negative Wigner
densities are likely to be obtained consists in the observation that, when
$\theta$ is varied randomly, the distribution of observed voltage differences
shows a double peak (see Fig.~\ref{fig7}). There is a tendency to observe
roughly equal numbers of `$+$' and `$-$' results but relatively few near zero. The
fact that the relationship depends on the sine of $\theta$ is, however,
sufficient to explain why this should be so.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.7in,height=2.00in]{ThompsonFig7.eps}}
\caption{Observed distribution of voltage differences, X, using homodyne
detection in an experiment similar to the proposed Bell test and averaging
over a large number of applied phase shifts. (\textit{Based on Fig.~4a of
A.~I.~Lvovsky et al., Phys.~Rev.~Lett.~} \textbf{\textit{87}}\textit{, 050402 (2001).})}
\label{fig7}
\end{figure}
To illustrate, let us consider the following. The sine of an angle is
between 0 and 0.5 whenever the angle is between 0 and $\pi/6$. It is
between 0.5 and 1.0 when the angle is between $\pi/6$ and $\pi/2$. Since
the second range of angles is twice the first yet the range of the values of
the sine is the same, it follows that if all angles in the range 0 to $\pi/2$
are selected equally often there will be twice as many sine values seen
above 0.5 as below. The same holds when random angles between $\pi/2$ and $\pi$
are chosen, whilst for values between $-\pi$ and 0 we find a symmetrical result for
negative values.
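The counting argument can be verified with a short simulation (an illustrative
sketch only; the Gaussian noise level of 0.15 is an arbitrary assumption):
\begin{verbatim}
import math, random

rng = random.Random(0)
bins = [0] * 15                      # histogram of X over [-1.5, 1.5]
for _ in range(200000):
    theta = rng.uniform(-math.pi, math.pi)       # uniformly scanned phase
    x = math.sin(theta) + rng.gauss(0.0, 0.15)   # sine law plus noise
    i = int((x + 1.5) / 3.0 * len(bins))
    if 0 <= i < len(bins):
        bins[i] += 1
for i, c in enumerate(bins):
    centre = -1.5 + 3.0 * (i + 0.5) / len(bins)
    print(f"{centre:+.1f} {'#' * (c // 1500)}")
\end{verbatim}
The printed histogram shows two peaks near $X=\pm 1$ and a dip near zero,
reproducing the qualitative shape of Fig.~\ref{fig7} from a purely classical
sine dependence plus noise.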
When allowance is made for the addition of noise, the production of a
distribution such as that of Fig.~\ref{fig7} for the average when angles are sampled
uniformly comes as no surprise. Clearly, as the experimenters themselves
recognise, the dip is not in itself sufficient to prove the non-classical
state of the light. For this, direct measurement of the Wigner density is
required, but there is a problem here. No actual direct measurement is
possible, so it has to be estimated, and the method proposed is the Radon
transformation \cite{Leonhardt:1997}. It is claimed that in other experiments
Wigner densities calculated either by this procedure or by a ``more precise
maximum-likelihood reconstruction technique'' \cite{Babichev:2004}
have shown negative regions, but perhaps the methods should be checked for
validity?
In any event, as already explained, the natural hidden variable relevant to
the proposed experiment is the phase of the individual pulse, not any
statistical property of the whole ensemble. Indeed, the use of Wigner density as
a substitute for hidden variables was never originally intended and has no
theoretical basis. In Bell's much-quoted paper on the subject (Ch.~21 of
ref.~\cite{Bell:1987}), the ``hidden variables'' remain, as ever, parameters such
as position and momentum that are specific to individual particles. The role
of the negative Wigner density is merely to provide, in certain rather special
circumstances, an alternative test for nonlocality in that, if negative values
are found, then real local hidden variables cannot exist.
\section{Suggestions for extending the experiment}
The basic set-up would seem to present an ideal opportunity for investigation of some key
aspects of the quantum and classical models, as well as the operation of the
Bell test ``detection loophole''.
\begin{enumerate}
\item \textbf{The operation of the detection loophole} could be illustrated if,
instead of using the digitised difference voltages of the homodyne
detectors, the two separate voltages are passed through discriminators. The
latter operate by applying a threshold voltage that can be set by the
experimenter and counting those pulses that exceed it. These can be used in
a conventional CHSH Bell test, i.e.~using total observed coincidence count as
denominator in the estimated quantum correlations $E$.
The model that has been known since Pearle's time (1970) predicts that, as
the threshold voltage used in the discriminators is increased and hence the
number of registered events decreased (interpreted in quantum theory as the
detection efficiency being decreased), the CHSH test statistic $S$, if calculated
using estimates $E = (N_{++} + N_{--} - N_{+-} - N_{-+}) /
(N_{++} + N_{--} + N_{+-} + N_{-+})$, will increase.
If noise levels are low, it may well exceed the Bell limit of 2 (a toy
simulation illustrating this effect is sketched after this list).
Such an experiment has, in a sense, already been performed by Babichev \etal
\cite{Babichev:2004} and yielded the expected results. If it is accepted that their
source (two outputs from a beamsplitter) would have been in an entangled state, then
their Fig.~4b clearly demonstrates that high detector thresholds (i.e.~low detector
efficiencies) can lead to violations of the standard form of the CHSH test.
\item \textbf{The existence of the two phase sets} is, in point of fact, well known when
OPA's are operated above threshold \cite{Walls:1994}. The resonance into one or
other of the sets then becomes stable. The fact that the two sets are also
responsible for many of the observations below threshold could be further investigated if
either the raw voltages or the undigitised difference voltages are analysed.
So long as the noise level is low, the existence of the two superposed
curves, one for $\alpha = 0$ and the other for $\alpha =\pi$, should
be apparent. It would be interesting to investigate how the pattern changed
as optical path lengths were varied. Breitenbach's pattern might be hard to
reproduce using long path lengths, where exact equality is needed unless the
light is monochromatic.
\item \textbf{Comparison of overall performance:} If the primary goal of the
experimenter is clearly set out to be the comparison of the performance of the
two rival models, rather than merely the conduct of a Bell test, further ideas
for modifying the set-up will doubtless emerge when the first experiments have been
done. Many of the predictions of the quantum-mechanical model have already appeared
in print \cite{Garc:2004,Nha:2004}. The first stage in comparing models should
probably be, therefore, to conduct supplementary experiments so as to
establish the relevant parameters of the full classical model and hence make
equivalent empirical predictions. It is possible, though, that qualitative predictions
alone will be sufficient to demonstrate superiority one way or the other.
\end{enumerate}
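As an illustration of point 1, the following toy simulation (a sketch in the
spirit of Pearle's model, with an assumed sinusoidal outcome rule and ``detection''
whenever the signal magnitude exceeds the discriminator threshold; it makes no claim
to model the optics in detail) shows the conventional CHSH estimate rising above 2
as the threshold is raised:
\begin{verbatim}
import math, random

def chsh(threshold, n=200000, rng=random.Random(1)):
    # Outcome on each side: sign(sin(theta - lam)); the pulse is counted
    # only if |sin(theta - lam)| exceeds the threshold.  E is estimated
    # with the observed-coincidence denominator, as in the usual method.
    settings = [(0.0, math.pi/4), (0.0, 3*math.pi/4),
                (math.pi/2, math.pi/4), (math.pi/2, 3*math.pi/4)]
    E = []
    for ta, tb in settings:
        num = den = 0
        for _ in range(n):
            lam = rng.uniform(0.0, 2.0 * math.pi)   # hidden variable
            sa, sb = math.sin(ta - lam), math.sin(tb - lam)
            if abs(sa) > threshold and abs(sb) > threshold:
                num += (1 if sa > 0 else -1) * (1 if sb > 0 else -1)
                den += 1
        E.append(num / den)
    return E[0] - E[1] + E[2] + E[3]

for t in (0.0, 0.1, 0.2, 0.3):
    print(t, round(chsh(t), 2))   # roughly 2.0, 2.3, 2.7, 3.3
\end{verbatim}
Raising the threshold (lowering the effective ``detection efficiency'') biases the
detected sample and pushes the estimate past the Bell limit, exactly the behaviour
attributed above to the detection loophole.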
\section{Conclusion}
The proposed experiments would, \textit{if the ``non-classicality'' of the
light could be demonstrated satisfactorily}, provide a definite answer one way or the
other regarding the reality of quantum entanglement. They could usefully be
extended to include empirical investigations into the operation of the Bell
test detection loophole. Perhaps more importantly, though, they present
valuable opportunities to compare the performance of the two theories in
both their total predictive power and their comprehensibility. Are
parameters such as ``Wigner density'' and ``degree of squeezing'' really the
relevant ones, or would we gain more insight into the situation by talking
only of frequencies, phases and intensities? Parameters such as the
detection efficiency and the transmittance of the beamsplitters will
undoubtedly affect the results, but do they do this in the way the quantum
theoretical model suggests? It will take considerably more than just the
minimum runs needed for the Bell test if we are to find the answers.
The detailed predictions of the classical model cannot be given until
the full facts of the experimental set-up and the performance of the various
parts are known, but it gives, in any event, a simple explanation of the
double-peaked nature of the distribution of voltage differences. The peaks
arise naturally from the way in which homodyne detection works, and the
quantum theoretical idea that they are one of the indications of a non-classical beam
or of negative Wigner density would not appear to be justifiable. The idea
that a classical beam can become non-classical by the act of ``subtracting
a photon'' is, equally, of doubtful validity. The experimental role of the
subtraction and detection of part of each beam is to aid the selection for
coincidence analysis of those pulses that are likely to be most strongly correlated.
\ack{Acknowledgements}
I am grateful to Ph.~Grangier for drawing my attention to his team's
proposed experiment \cite{Garc:2004}, and for helpful discussions. I should also
like to thank G.~Breitenbach and H.~Carmichael for pointing me in
the direction of related work.
\section{Introduction}
In the modern scenario of the creation of matter in the Universe
there is a stage called preheating, when the matter (usually
modelled by
a massless scalar field) is generated from vacuum fluctuations
due to the effect of parametric resonance [1,2,3].
Namely it is assumed that
the matter field $\chi$ is coupled with inflaton field $\phi$, and the
coupling constant $g$ is large. During inflation and right after
that period the matter field has a large effective ``mass''
determined by this coupling, and each mode of the matter
field with sufficiently small wavenumber
evolves in a standard manner, oscillating with
a frequency proportional to this ``mass'' and adiabatically
decreasing amplitude. The inflaton starts to oscillate itself
after inflation,
and near the moments of time when $\phi(t)\approx 0$ the matter field
effectively loses its ``mass'', the adiabatic approximation breaks and
a possibility of resonant growth of the matter field amplitude
and the corresponding occupation numbers of $\chi$
``particles'' appears. In fact, this effect may lead to exponential growth
of the
occupation
numbers, and the resulting distribution of $\chi$ ``particles'' is
strongly nonthermal. After some moment of time $t_{*}$ (end of the first
stage of preheating) the energy density of the $\chi$ ``particles'' becomes
comparable with the energy density of the inflaton, and back reaction
processes become influence the dynamics of inflaton field $\phi$ and
the Universe. It is believed that during some short
period of time $\delta
t$ after $t_{*}$ the
$\chi$ particles are thermalized by rescattering effects, and
after thermalization the energy density of
these ``particles'' evolves according to the standard picture of the Hot
Big Bang model. Such scenario of the first stage of preheating is
called the scenario of broad resonance [2,3].
As was mentioned by a number of authors (e.g [4]),
the characteristic value of the growth
rate of the matter field modes does not depend significantly on the
value of wavenumber $k$ for the modes with sufficiently small $k$.
At first glance this
fact gives a very interesting possibility of amplification of the
matter field modes
and metric perturbations at exponentially large scales,
say, corresponding to the
scale of the present horizon.
In this paper we consider the simplest model of
preheating with two scalar fields, the inflaton field
$\phi$ and the matter field
$\chi$ and the potential
$V(\phi, \chi)={m^{2}\phi^{2}\over 2}+{g^{2}\phi^{2}\chi^{2}\over 2}$,
and show that in this model the
value of metric perturbations at the scale of present horizon
is suppressed by an extremely small factor,
and therefore taking into account the stage of preheating
does not lead to
any modifications of the standard picture of
generation of cosmological perturbations in inflationary scenario.
We have two basic arguments supporting this conclusion. First,
the field $\chi$ has a large effective mass before the stage of
preheating
(much larger than the Hubble parameter at the end of inflation),
and as a consequence the spectrum of initial field fluctuations
$\delta \chi_{k}$
is strongly
suppressed at small $k$ at the time $t_{in}$ of the beginning of the
preheating stage, the rms value of the field fluctuations
$\delta \chi_{rms}\propto k^{3/2}$ (see also [5]).
During preheating the field
modes may grow exponentially fast, but the characteristic growth rate
$G={{\dot {\delta \chi_{k}}} \over {\delta \chi_{k}}}$
is constrained by some maximal value $\mu_{max}m$, where numerical
constant
$\mu_{max}={1\over \pi}\ln{(1+\sqrt 2)}\approx 0.28$ [3].
The first stage of preheating ends at the time
$t_{*}\approx (50$--$100)\,m^{-1}$,
and this estimate does not depend strongly on the
parameters of the theory
(see [3], and also the eqns. $(22)$, $(23)$ below).
Thus, at the end of the first stage of preheating,
the r.m.s value of the
field fluctuations contains a factor
$$\delta \chi_{rms}(t_{*})< e^{\mu_{max}m t_{*}}\delta \chi_{rms}(t_{in})
\sim e^{-3N/2+\mu_{max}m t_{*}}
= e^{-47}e^{-1.5(N-50)+0.28(mt_{*}-100)}, \eqno 1$$
where $N$ is the number of e-folds, and we choose the standard
value $N=50$ to represent the scale of the present horizon.
Here we express the amplitude of the field perturbation in terms
of natural Planck units.
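As a quick numerical cross-check of the exponent in eq. $(1)$ (a sketch added
for illustration only):
\begin{verbatim}
import math

# Suppression factor exp(-3N/2 + mu_max * m * t_star) from eq. (1)
mu_max = math.log(1.0 + math.sqrt(2.0)) / math.pi   # ~ 0.28
for N in (50, 60):
    for mt_star in (55, 107):
        print(N, mt_star, f"{math.exp(-1.5*N + mu_max*mt_star):.1e}")
\end{verbatim}
Even for the longest first stage ($mt_{*}\approx 107$) and $N=50$ the factor is of
order $10^{-20}$, so the suppression is robust against the uncertainties in $t_{*}$.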
Secondly,
the suppression
factor for the metric perturbations
may be even much smaller than the estimate $(1)$ if one uses
the standard theory.
In this theory the contribution of the second field
$\chi$ in the scalar mode of metric perturbations
is determined by the terms containing products of the
homogeneous ``background'' part of the field $\chi_{0}(t)$ and the
perturbed
part $\delta \chi$, and their derivatives (see eqns. (25,26) below).
The amplitude of homogeneous part of the field behaves like the mode
$\delta \chi_{k}$ with wavenumber $k=0$, and
is constrained by the
fact that the field $\chi_{0}$ cannot contribute to the total
energy density during the last stage of inflation. Assuming that
during the last $N$ e-folds the dynamics of the Universe
has been controlled by
the inflaton field $\phi$, after the end of the first stage of
preheating the amplitude $\chi_{0}$ should contain the same factor
$(1)$. Therefore the rms amplitude of the metric perturbations
$\delta_{rms}$ contains the factor $(1)$ squared:
$$\delta_{rms}(t_{*})
< e^{-94}e^{-3(N-50)+0.56(mt_{*}-100)}, \eqno 2$$
and is suppressed by the enormously small factor $e^{-94}$ for
the typical parameters of the theory. This estimate is however
changed in a more self-consistent
approach to the evolution of the perturbations
during preheating. In the theory of preheating the role of background
is effectively played by a condensate of $\chi$ particles with relatively
large
wavenumbers (a typical wavenumber of such particles $k_{crit}$
is always
larger than the characteristic wavenumber corresponding to the
scale of cosmological horizon during the
preheating stage, see [3] and also
the eq. $(13)$). The fluctuations of energy-momentum tensor
of such condensate can give rise to additional metric perturbations,
which are second order with respect to the perturbations of the
matter field, and therefore cannot be obtained in the frameworks of
the standard perturbation theory. The estimate shows that
the fluctuations of the energy-momentum tensor decrease with scale
proportional to $k^{3/2}$ (similar to the rms value of $\delta \chi$
field, see eq. $(47)$),
and is of order of unity at the scale corresponding to the
critical wavenumber $k_{crit}$ at the time $t_{*}$. Since in
the large scale limit the metric fluctuations are of the order
of the energy density fluctuations, the estimate
$\delta_{rms}(t_{*})\sim \delta \chi_{rms}(t_{*})$
is a more reliable estimate in the more realistic approach to
the dynamics of preheating, and this estimate still contains a very
small number $e^{-47}$ for the scale corresponding to the present horizon.
In fact, it can be shown (see Section 3) that
the metric perturbations of this kind take their maximal value at the scale
corresponding to the horizon size at the time of the
end of preheating, and this value is smaller
than $\sim 10^{-3}$ for the broad resonance case
\footnote{Note, that when estimating the metric perturbations
induced by the non-linear terms, we do not take into account the
contribution of such fluctuations in the dynamics of the background
model. This contribution is of the order of the leading term at the end of
preheating, and can change our estimates by a numerical factor.
We believe that this factor is of order of unity, and cannot change
our results significantly.}.
Therefore in the simplest models
the transition from inflation to
the hot Big Bang proceeds smoothly, without significant
generation of the metric perturbations.
We present our arguments in a more rigorous form
in the next two Sections. In Section $2$ we introduce the
basic ideas
of preheating theory, and estimate the time $t_{*}$ of duration
of the first stage of preheating.
In Section $3$ we discuss the application of the theory of cosmological
perturbations to our case of two interacting fields, and estimate the
upper limit on the metric perturbations.
We use below the natural system of units, and set the Planck mass
$M_{pl}=\sqrt{8\pi}$.
\section{Preheating in the regime of broad resonance}
The theory of the initial stage of preheating has been developed in the
paper [3], and we will closely follow this paper. For our purposes
we need to know the evolution of both fields $\phi$ and $\chi$,
and also the evolution of the scale factor $a(t)$. As usual we divide
both fields into background parts and perturbations
$\phi=\phi_{0}(t)+\delta \phi(t, \vec x),
\chi=\chi_{0}(t)+\delta \chi(t, \vec x)$, and apply a Fourier
transform to the perturbed parts. We temporarily neglect the influence
of the metric perturbations in this Section, and consider
this effect later on.
Assuming that the contribution of
the field $\chi$ in the energy density and the pressure
is negligible, we can use
the standard expression describing the evolution of the scale factor and
the field $\phi_{0}$ in the theory of massive scalar field (see
e.g. [6]):
$$a(t)\approx a_{0}\tau^{2/3}, \eqno 3$$
$$\phi_{0}(t)\approx \phi_{in}{\sin \tau \over \tau}, \eqno 4$$
where $\phi_{in}=2\sqrt{{2\over 3}}$ is a characteristic value of the
field in the beginning of preheating stage, $a_{0}$ is an
``initial'' value
of the scale factor, dimensionless time $\tau =mt$, and the preheating
stage begins when approximately $\tau =\tau_{in}\approx 1$.
The evolution equation for the perturbed part of the field $\chi$ can
be conveniently written in the form:
$$\ddot X_{k}+\omega_{k}^{2}X_{k}=0, \eqno 5$$
where $X_{k}$ is the rescaled field amplitude:
$X_{k}=a^{3/2}\delta \chi_{k}$, $\omega_{k}$ is the effective frequency:
$$\omega_{k}^{2}={k^{2}\over a^{2}}+g^{2}\phi_{in}^{2}{({\sin \tau
\over \tau})}^{2} +\Delta, \eqno 6$$
and the correction $\Delta=-({3\over 4}H^{2}+{3\over 2}{\ddot a\over a})$.
Hereafter $H={\dot a \over a}$ is the expansion rate.
The positive frequency solutions of this equation $X_{+}$
determine a vacuum state,
and must have a form:
$$X_{+}={1\over {(2\pi)}^{3/2}\sqrt{2\omega_{k}}}e^{-i\theta} \eqno 7$$
at the moments of time sufficiently close to $\tau_{in}$.
\footnote{Care should be taken when specifying the positive frequency
solution for the modes with wavelengths larger than the
horizon scale.
Strictly speaking the vacuum state should be the standard Bunch-Davies
vacuum state for a massive field, but a more accurate expression gives
essentially the same result.}
Here
$\theta=\int^{t} dt^{'} \omega $
\footnote{Wherever it is possible we will drop
the index $k$ in our expressions.}.
The solution
of the equation $(5)$
can also be represented
in another form by introducing two complex functions
$\alpha, \beta$, which are constrained by the normalization condition
${|\alpha|}^{2}-{|\beta|}^{2}=1$:
$$X_{+}={1\over {(2\pi)}^{3/2}}
({\alpha (t)\over \sqrt{2\omega}}e^{-i\theta}+
{\beta (t)\over \sqrt{2\omega}}e^{i\theta}). \eqno 8$$
To reconcile this representation with $(7)$, we should set
$\alpha(\tau\sim \tau_{in})= 1$, and
$\beta(\tau\sim \tau_{in})= 0$.
Obviously, the representation $(8)$
is specially convenient if the solution of eq. $(5)$ is close to its
adiabatic approximation, and $\alpha, \beta \approx const$.
For the general case, the evolution of the functions $\alpha$,
$\beta$ follows from the eq. $(5)$:
$$\dot \alpha ={\dot \omega \over 2\omega}e^{2i\theta}\beta,\quad
\dot \beta ={\dot \omega \over 2\omega}e^{-2i\theta}\alpha. \eqno 9$$
If the adiabatic condition ${\dot \omega \over \omega^{2}}\ll 1$
is satisfied, the functions $\alpha$, $\beta$ are approximately constant,
and no additional ``particles'' of the field $\chi$ are produced, the field
oscillates with the frequency $\approx \omega$, and the field
amplitude decays
as $a^{-3/2}\sim \tau^{-1}$. The adiabatic approximation
breaks when the field $\phi$ is close to zero, and the time $\tau$
is close to $\tau_{j}=\pi j$ (the integer index $j$ must be much larger
than unity for the validity of the approximate equations $(3,4)$). Rewriting
the adiabatic condition near the points $\tau=\tau_{j}$, we have
$${\dot \omega \over \omega^{2}}\approx {m\tau_{j} \over g\phi_{in}
{\Delta \tau}^{2}}={1\over 2q{(\tau_{j})}^{1/2}{\Delta \tau }^{2}}, \eqno 10$$
where $\Delta \tau =\tau -\tau_{j}$, and the parameter $q(\tau)$
characterizes the strength of the resonance [3]:
$$q={2\over 3}{({g\over m})}^{2}{1\over \tau^{2}}. \eqno 11$$
We will also use $q_{0}=q(\tau =1)$ to parameterize our expressions.
The regime of broad resonance corresponds to very large values of
$q_{0}$, see below.
Thus
when the parameter $q$ is very large, the functions $\alpha$ and $\beta$
are approximately constant between the moments of time $\tau_{j}$, but
when the time is very close to $\tau_{j}$,
the functions $\alpha$ and $\beta$ can change. The rule of
change can be written as an iterative
mapping between the functions $\alpha_{j-1}$, $\beta_{j-1}$ and
$\alpha_{j}$, $\beta_{j}$ corresponding to the time periods $\tau_{j-1} <
\tau < \tau_{j}$ and $\tau_{j} < \tau < \tau_{j+1}$ [3]:
$$\left( \matrix{\alpha^{j} \cr \beta^{j}} \right )=\left ( \matrix {
a & b \cr b^{*} & a^{*}}\right )\left (\matrix {\alpha^{j-1}\cr \beta^{j-1}}
\right ), \eqno 12$$
where
$a=\sqrt{1+e^{-\pi \kappa^{2}}}e^{i\phi},$
and
$b=ie^{-{\pi \over 2}\kappa^{2}+2i\theta^{j}},$
and $\theta^{j}=\int^{t_{j}}dt \omega$, $t_{j}=\tau_{j}/m$,
$\phi=arg \Gamma({1+i \kappa^{2}\over 2})+
{\kappa^{2}\over 2}(1+2\ln{2\over\kappa^{2}})$, and $\kappa=k/k_{*}$,
$k_{*}$ is a characteristic value of comoving momentum:
$k_{*}=2^{1/2}a(t)q^{1/4}m$. If the wavenumber is much larger than
a critical cutoff wavenumber:
$$k_{crit}={k_{*}\over \sqrt{\pi}}=\sqrt{2\over \pi}a(t)q^{1/4}m=
\sqrt{{2\over \pi}}a_{0}q_{0}^{1/4}\tau^{1/6}m, \eqno 13$$
the effect of particle creation is exponentially damped.
It is important to note that the bulk of the produced particles has
wavenumbers close to the cutoff value $(13)$;
the corresponding wavelengths of these particles are well inside
the cosmological horizon. The particle occupation numbers
$n_{j}={|\beta_{j}|}^{2}$ can also be expressed in an iterative way:
$$n_{j}\approx (1+2e^{-\pi\kappa^{2}}-2\sin {\theta_{tot}}
e^{-{\pi \over 2}\kappa^{2}}\sqrt {1+e^{-\pi \kappa^{2}}})n_{j-1}, \eqno 14$$
where $\theta_{tot}=2\theta^{j}-\phi +arg (\beta^{j-1})-arg (\alpha^{j-1})$,
and the limit of large occupation numbers $n_{j-1} \gg 1$ is assumed
hereafter.
In particular, from the eq. $(14)$
it follows that the number of ``particles'' can
either increase or decrease depending on the value of $\theta_{tot}$.
The evolution of $\theta_{tot}$ is rather complicated in the limit
of large $g$, but for our purposes
it is sufficient to note that the occupation number
cannot increase more than $3+2\sqrt 2$ times, and the
amplitude of the field $\sim \sqrt {n_{j}}$ cannot increase more than
$1+\sqrt 2$ times. It is convenient to characterize the change of the
amplitude by the growth rate:
$$\mu_{j}={1\over 2\pi}\ln{{n^{j}\over n^{j-1}}}, \eqno 15$$
and also average the growth rate over the time:
$$\mu^{eff}={\pi \over \tau} \sum_{j} \mu_{j}. \eqno 16$$
From the eqns. $(14-16)$ it follows that neither $\mu_{j}$ nor
$\mu^{eff}$ can exceed the maximal value $\mu_{max}={1\over \pi}
\ln {(1+\sqrt 2)}\approx 0.28$. In fact, the effective growth rate
$\mu^{eff}$ is about two times smaller than its maximal value. The
numerical calculations give $\mu^{eff}$ in the interval $0.1-0.18$
with an average value of order of $0.14$
for the coupling constant $g$ in the interval from $0.9\cdot 10^{-4}$
to $10^{-3}$ [3].
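The mapping $(12)$ is straightforward to iterate numerically. In the following
sketch (illustrative only) the phases entering $\theta_{tot}$ are treated as
independent uniform random variables, a crude assumption standing in for the
complicated actual phase dynamics:
\begin{verbatim}
import cmath, math, random

def growth_rate(kappa, steps=200, rng=random.Random(0)):
    # Iterate the transfer matrix of eq. (12), starting from vacuum,
    # and return the effective rate mu = ln(n) / (2*pi*steps).
    alpha, beta = 1.0 + 0j, 0.0 + 0j
    e = math.exp(-math.pi * kappa**2)
    for _ in range(steps):
        phi = rng.uniform(0.0, 2.0 * math.pi)    # random phase of a
        theta = rng.uniform(0.0, 2.0 * math.pi)  # random theta^j
        a = math.sqrt(1.0 + e) * cmath.exp(1j * phi)
        b = 1j * math.sqrt(e) * cmath.exp(2j * theta)
        alpha, beta = (a * alpha + b * beta,
                       b.conjugate() * alpha + a.conjugate() * beta)
    return math.log(abs(beta)**2) / (2.0 * math.pi * steps)

for kappa in (0.0, 0.5, 1.0):
    print(kappa, round(growth_rate(kappa), 3))
# typical output: mu ~ 0.11, 0.06, ~0 for kappa = 0, 0.5, 1
\end{verbatim}
With uniformly random phases the average growth rate at $\kappa=0$ is
$\ln 2/2\pi \approx 0.11$, consistent with the quoted range $0.1$--$0.18$ and
well below $\mu_{max}\approx 0.28$.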
The background field $\chi_{0}(t)$ obeys the same equation $(5)$ provided
the wavenumber $k$ is set to zero. Therefore we can obtain the expression
for the evolution of the background field by combining the positive and
negative frequency solutions $\delta \chi_{+}$, $\delta \chi_{-}=
\delta \chi_{+}^{*}$.
Let us denote
the value of the field $\chi_{0}$ at the beginning of the preheating stage
as $\chi_{in}$, and its time derivative as $\dot \chi_{in}$. The values of
these quantities are determined by their evolution at the previous stage of
inflation. Assuming that the contribution of the field $\chi$ in
the potential has been negligible during last $N$ e-folds, we estimate the
initial amplitude as
$\chi_{in}\approx {m\over g}e^{-3N/2}$, and its time derivative as
$\dot \chi_{in}\approx \omega (\tau \sim \tau_{in})\chi_{in}
\approx m e^{-3N/2}$. Let us introduce a characteristic field amplitude
and angle:
$$\chi_{*}=\sqrt{\chi_{in}^{2}+g^{-2}\dot \chi_{in}^{2}},\quad
\phi_{0}=\arctan {{g\chi_{in}\over \dot \chi_{in}}}. \eqno 17$$
Then
$$\chi_{0}(t)={{(2\pi a_{0})}^{3/2}\omega_{0}^{1/2}\chi_{*}\over i\sqrt 2}
(e^{i\phi_{0}}\delta \chi_{-} -e^{-i\phi_{0}}\delta \chi_{+})=$$
$$
{\chi_{*}\over 2i}{({a_{0}\over a})}^{3/2}
{({\omega_{0}\over \omega})}^{1/2}
(e^{-i\theta}(\beta^{*}e^{i\phi_{0}}-\alpha e^{-i\phi_{0}})+
e^{i\theta}(\alpha^{*}e^{i\phi_{0}}-\beta e^{-i\phi_{0}})), \eqno 18$$
where the wavenumber $k$ is set to zero in the expressions for
$\alpha$ and $\beta$,
and $\omega_{0}=\omega (\tau=\tau_{in})\sim g$.
We also need the evolution of the field $\delta \phi$, but we discuss it in
the next Section.
Since our order-of-magnitude estimates $(1), (2)$ are exponentially
sensitive to the value of the moment of time $t_{*}$ of the end of the
first stage of
preheating, it is very important to obtain a reliable estimate of
$t_{*}$. Following [3] we assume that the first stage of preheating
ends when
the vacuum expectation value for $\chi^{2}$ gives a contribution to the
potential of order of the leading classical term ${m^{2}\phi^{2}\over 2}$:
$$\left <\chi^{2} \right >(t_{*})={m\over g}. \eqno 19$$
Let us estimate the dependence of $\left <\chi^{2}\right >$ on time.
Using the definition
of the function $\beta$, we have
$$\left <\chi^{2}\right >(t)={1\over 2\pi^{2}a^{3}\omega}\int^{k_{crit}}_{0}dk
k^{2}{|\beta|}^{2}, \eqno 20$$
where $k_{crit}$ is given by the eq. $(13)$. In order to calculate
the integral in the eq. $(20)$ we assume that each mode with a given $k$
grows with its average growth rate $\mu^{eff}\approx 0.14$ when
$k< k_{crit}(t)$, and the growth
rate for the modes with $k > k_{crit}(t)$ is zero. We have:
$$\left <\chi^{2} \right >=
{1\over 12\pi^{3}}{q_{0}^{1/4}m^{2}\over {\mu^{eff}}^{1/2}\tau}
e^{2\mu^{eff}\tau}, \eqno 21$$
where we approximate the value of $\omega$ as $\omega \approx {g\phi_{0}\over
\tau}$. Note that our estimate is slightly different from the estimate
[3] due to a different approximation in the calculation of the integral
in $(20)$. Substituting the eq. $(21)$ in $(19)$, we obtain:
$$\tau_{*}=mt_{*}={1\over 2\mu^{eff}}\ln{(\sqrt{{2\over 3}}{12\pi^{3}
{\mu^{eff}}^{1/2}\tau_{*}\over m^{2}q_{0}^{3/4}})}. \eqno 22$$
An approximate solution of the eq. $(22)$ can be written as:
$$\tau_{*}\approx \mu^{-1}_{0.14}(107+3.57\ln F), \eqno 23$$
where $F=m^{-2}_{-6}q_{4}^{-3/4}\mu_{0.14}^{1/2}$, and
$m_{-6}={m\over 10^{-6}}$, $q_{4}={q_{0}\over 10^{4}}$,
$\mu_{0.14}={\mu^{eff}\over 0.14}$. The time $\tau_{*}$ decreases with
increasing $q_{0}$, and is bounded by some minimal and
maximal values $\tau_{min}$, $\tau_{max}$.
For example, if the mass $m$ has its ``canonical'' value
$\sim 10^{-6}$, the parameter $q_{0}$ cannot exceed the maximal value
$q_{max}\sim m^{-2}=10^{12}$, and we have $\tau_{min}\approx 55$. On the
other hand, the parameter $q(t_{*})$ cannot be smaller than unity for
applicability of our theory.
\footnote{Of course preheating can go on when
$q(\tau) \ll 1$.
However, in that case the growth rate $\sim q$ is small, and
preheating is inefficient. In the two fields model
the metric perturbations in the limit of
small $q$ have been discussed in the paper [7]. The similar one field model
has been considered in the paper [8].}
Since $q\sim \tau^{-2}$, we have $q_{min}\sim
10^{4}$ as a minimal possible value of $q_{0}$.
Therefore, the time of the end of the
first stage of preheating is constrained by a condition: $55 < \tau_{*}
< 107$ for $\mu^{eff}\approx 0.14$ and $m \sim 10^{-6}$.
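The transcendental equation $(22)$ is conveniently solved by fixed-point
iteration; a short sketch (illustrative only) reproduces the quoted bounds:
\begin{verbatim}
import math

def tau_star(m=1e-6, q0=1e4, mu=0.14):
    # Fixed-point iteration of eq. (22); converges since the
    # derivative of the right-hand side is ~ 1/(2*mu*tau) << 1.
    tau = 100.0
    for _ in range(100):
        tau = (1.0 / (2.0 * mu)) * math.log(
            math.sqrt(2.0 / 3.0) * 12.0 * math.pi**3 * math.sqrt(mu)
            * tau / (m**2 * q0**0.75))
    return tau

for q0 in (1e4, 1e8, 1e12):
    print(f"q0 = {q0:.0e}   tau_star = {tau_star(q0=q0):.0f}")
# q0 = 1e4 gives ~107, q0 = 1e12 gives ~56: i.e. 55 < tau_* < 107
\end{verbatim}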
\section{The metric perturbations during preheating}
Now let us take into account the metric perturbations and consider the
self-consistent theory of perturbations. The shape of the metric
perturbations depends on gauge. In many applications the gauge-independent
formalism is used, where all perturbed quantities are expressed in terms
of their gauge-independent combinations. These combinations are naturally
linked to some preferable
coordinate system. The most commonly used preferable coordinate systems
are so-called comoving and Newtonian coordinate systems. However, both these
systems are not convenient in our case because the dynamical equations
written in these systems have a singularity when $\dot \phi_{0}=0$.
Therefore we use a synchronous coordinate system, and to specify
the gauge we assume that our synchronous system is comoving
at the moment of time $\tau=\tau_{in}$. For simplicity we also
neglect the metric perturbations generated during inflation. In a
synchronous coordinate system the metric has the following form:
$$ds^{2}=dt^{2}-a^{2}(t)(A\delta^{\alpha}_{\beta}+B^{,\alpha}_{,\beta})
dx_{\alpha}dx^{\beta}, \eqno 24$$
where the Greek indices run from $1$ to $3$, and the summation rule is
used hereafter. The comma stands for differentiation with respect to
$x^{\alpha}$. The perturbed equations of motion are (e.g. [9,10]):
$$H\dot h=\dot \phi_{0}\dot {\delta \phi} +\dot \chi_{0}\dot {\delta \chi}
+{\partial V \over \partial \phi}\delta \phi +{\partial V \over
\partial \chi}\delta \chi,
\eqno 25$$
$$\dot A=-(\dot \phi_{0}\delta \phi +\dot \chi_{0} \delta \chi), \eqno 26$$
$$\ddot {\delta \phi_{i}} +3H\dot {\delta \phi_{i}}+
{\dot \phi_{i} \dot h \over 2}+{\partial^{2}V\over \partial \phi_{i}
\partial \phi_{j}}\delta \phi_{j}=0, \eqno 27$$
where $h=3A+\Delta B$, $i,j=1,2$, for definiteness index $1$ stands for
the fields $\phi_{0}$, $\delta \phi$, and $2$ for the fields
$\chi_{0}$, $\delta \chi$. We neglect the terms proportional to
$k^{2}$ in the eqns $(25-27)$, assuming that the wavenumbers of the
perturbations are very small.
The quantities $A$ and $B$ are related
as (e.g. [10])
$$ B=\int^{t}{dt^{'}\over a^{3}}\int dt^{''}(aA) \eqno 28$$
As we discussed above
the background field $\chi_{0}$ is very small, and does not
contribute to the expansion law, and in that case
the equations of motion of the field perturbations $(27)$ can be
rewritten in a simplified manner
\footnote{Obviously, when approaching the time $t_{*}$
the back reaction of the produced
$\chi$ ``particles'' influences the motion of our system much more
significantly than the field $\chi_{0}$, and becomes comparable with the contribution
of the field $\phi_{0}(t)$.
We cannot take into account the back reaction without a very
significant complication of our consideration, but we hope that this
cannot change order-of-magnitude estimates. Therefore, our results should
be treated as semi-qualitative only.}.
The equation for the field $\delta \chi$
is reduced to the eq. $(5)$, and the equation for the field $\delta \phi$
has a form
$${\delta \ddot \phi}+(3H+{{\dot \phi_{0}}^{2}\over 2H}){\delta \dot \phi}
+({\partial^{2}V\over \partial \phi^{2}}+{\dot {\phi_{0}}\over 2H}{\partial V
\over \partial \phi})\delta \phi =J(\phi, \chi_{0}, \delta \chi), \eqno 29$$
$$J(\phi, \chi_{0}, \delta \chi)=
-({\partial^{2}V\over \partial \phi \partial \chi}\delta \chi
+{\dot {\phi_{0}}\over 2H}{\partial V
\over \partial \chi}\delta \chi+{\dot{\phi_{0}}\dot {\chi_{0}}{\delta
\dot \chi}\over 2H}). \eqno 30$$
In this approximation the fields $\chi_{0}$, $\delta \chi$ enter
only in the source term $J(\phi, \chi_{0}, \delta \chi)$, and the formal
solution of the eq. $(29)$ can easily be obtained if the source term is
known as a function of time $J(\phi(t), \chi_{0}(t), \delta \chi (t))= J(t)$:
$$\delta \phi \sim \int^{t}d\xi J(\xi) {\dot \phi_{0}(t)\dot \phi_{0}(\xi)H
\over a^{3}}\int^{t}_{\xi}d\xi^{'}{H\over a^{3}\dot H}, \eqno 31$$
Then one can substitute the solution $(31)$ and the expressions for
$\chi_{0}(t)$ and $\delta \chi (t)$ into the eqns. $(25)$, $(26)$, and find
the metric perturbations by integration. Unfortunately, this program is
analytically very complicated. Even the numerical solution is not trivial
for all possible parameters of the theory. Still, not everything is lost.
First, we need to estimate only the upper limit on
the metric perturbations, and therefore we can assume that both fields:
the background field $\chi_{0}$ and the perturbation $\delta \chi$ grow
with the maximal possible rate $\mu_{max}m$. In this
approximation at the time close to $\tau_{*}$
the growth rate is much larger than the expansion rate
$H={2m\over 3\tau_{*}}\sim
{m\over 100}$, and we can take into account the source term $(30)$ only
during the last Hubble epoch before the end of preheating ${\tau_{*}
-\tau \over \tau_{*}}\ll 1, \tau_{*}\gg 1$.
Secondly, we should take into account only
the leading terms in the expansion in powers of
the small parameter $q^{-1}(\tau)$.
Using the first approximation, we neglect the terms caused by the
expansion of the Universe in the equation of motion of the field $\delta
\phi$, and write:
$$\delta \ddot \phi +m^{2}\delta \phi =-{\partial^{2} V\over \partial \phi
\partial \chi}\delta \chi
=-2g^{2}\phi_{in}{\sin \tau \over \tau}\chi \delta \chi,
\eqno 32$$
where the term $\chi \delta \chi$ can be written as:
$$\chi \delta \chi ={\chi_{*}\over 2\sqrt 2 i{(2\pi a)}^{3/2}}
{({a_{0}\over a})}^{3/2}{\omega_{0}^{1/2}\over \omega}(s^{j}_{0}+
s^{j}_{-}e^{-2i\theta}+s^{j}_{+}e^{2i\theta}), \eqno 33$$
where
$$s_{0}^{j}=(2n_{j}+1)e^{i\phi_{0}}-2\alpha_{j}\beta_{j}e^{-i\phi_{0}},
\quad
s_{+}^{j}=\beta_{j}(\alpha_{j}^{*}e^{i\phi_{0}}-\beta_{j}e^{-i\phi_{0}}),
\quad
s_{-}^{j}=\alpha_{j}(\beta_{j}^{*}e^{i\phi_{0}}-\alpha_{j}e^{-i\phi_{0}}),$$
and
the functions $\alpha_{j}$, $\beta_{j}$ correspond to the period of time
$\tau_{j} < \tau < \tau_{j+1}$.
In the same approximation we can set $a=a(t_{*})=a_{*}$.
In this case during the
time period $\tau_{j-1}< \tau < \tau_{j}$ the equation $(32)$ is just
an oscillator equation with a source, represented by a sum of
a constant and an oscillating parts. Assuming $q(t)\gg 1$,
it can be easily seen that
a partial solution determined by the oscillating parts is small, and
the partial solution
of the eq. $(32)$ $\delta \phi_{p}$ is close to a constant:
$$\delta \phi_{p}\approx K(-1)^{j}s_{0}^{j}, \eqno 34$$
where
$$K={i{(g/m)}^{2}g^{-1/2}\over \sqrt 2}{({a_{0}\over a_{*}})}^{3/2}{\chi_{*}
\over {(2\pi a_{*})}^{3/2}}, \eqno 35$$
and we approximate $\omega_{0}\sim g$. The full solution can be found from
the continuity conditions at the time moments $\tau_{j}$, and has a form:
$$\delta \phi =K({(-1)}^{j}s_{0}^{j}+\sum^{l=j}_{l=0}
c_{l}\cos \tau), \eqno 36$$
where
$$c_{l}=-(s_{0}^{l}+s_{0}^{l-1}).$$
Now we substitute the solutions $(4)$, $(36)$ into
the eq. $(26)$, and by integrating the result
obtain the perturbation
of the scale factor $A$. Note that the term $\dot \chi_{0}\delta \chi <
O(\tau_{*}
{m\over g}\dot \phi_{0}\delta \phi)$ is not taken into account. It can
be said that the perturbation of the matter field $\delta \chi$ induces
the metric perturbation not directly, but through
the excitation of the inflaton perturbation $\delta \phi$. We have:
$$A\approx -{\phi_{0}K\over \tau_{*}}e^{2\mu_{max}\tau}R(\tau), \eqno 37$$
where the function of time
$$R(\tau)=\lbrace {(-1)}^{j}s_{0}^{j}\sin \tau +{1\over 2}\sum_{l=0}^{l=j}
c_{l}(\tau -\tau_{l}+{\sin {2\tau} \over 2})\rbrace e^{-2\mu_{max}\tau},
\eqno 38$$
is constrained by the condition $|R(\tau)|< O(1)$.
The coefficient $B$ is additionally damped with respect to $A$ in the long wave
limit, and is not calculated here. The vacuum
expectation value for $A^{2}$ has a form:
$$\left <A^{2} \right >
\approx {4\pi \phi_{0}^{2}|K|^{2}|R(\tau)|^{2}\over \tau_{*}^{2}}e^{4\mu_{max}\tau_{*}}
\int k^{3} d\ln k \approx {2\over 3\pi^{2}}{g\over m}{|R(\tau)|^{2}\over
\tau_{*}^{6}}m^{2}e^{-3N+4\mu_{max}\tau_{*}}\int e^{-3n}dn, \eqno 39$$
where $n=\ln {({a_{0}m\over k})}$ is the e-folds number corresponding to the wavenumber $k$,
and we assume that $\chi_{*}\sim {m\over g}e^{-3N/2}$.
Let us recall that for the modes with wavelengths of order of the present horizon scale
$n\approx N\approx 50$. The r.m.s value of the metric perturbation
$$A_{r.m.s.}(n)\approx \sqrt {{2\over 3\pi^{2}}}{({g\over m})}^{1/2}{|R(\tau)|\over \tau_{*}^{3}}m
e^{-3/2(N+n)+2\mu_{max}\tau_{*}}, \eqno 40$$
is even much smaller than the estimate $(2)$ due to the very small factor $< 10^{-9}$ in
front of the exponent in the eq. $(40)$.
Thus we conclude that in our case
the large-scale metric perturbation induced due to the coupling between
the matter field perturbations $\delta \chi$ and the background part
of the field $\chi_{0}$ is absolutely negligible.
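For orientation, the numerical prefactor in front of the exponent in eq. $(40)$
can be evaluated directly (a sketch; $g=10^{-3}$, $m=10^{-6}$ and $|R|=1$ are the
fiducial values used in the text):
\begin{verbatim}
import math

g, m, tau_star, R = 1e-3, 1e-6, 100.0, 1.0   # |R| = O(1) taken as 1
prefactor = (math.sqrt(2.0 / (3.0 * math.pi**2)) * math.sqrt(g / m)
             * R / tau_star**3) * m
print(f"{prefactor:.1e}")   # ~ 8e-12, indeed well below 1e-9
\end{verbatim}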
There is another source of unavoidable
metric perturbations induced by fluctuations of the
energy density and the
pressure of the condensate of the $\chi$ ``particles''. These fluctuations
are non-linear, and therefore give rise to the metric
perturbations even in the absence of the classical field $\chi_{0}$.
The calculation of this effect
is an even more complicated problem than the calculations in the linear theory, but it is
again possible to estimate roughly the characteristic order of magnitude. Similar to the linear case,
there is no hope of finding imprints of this effect at super-large scales, but
it might play an important role right before the end of preheating, at the
horizon scale $\lambda_{*}=H^{-1}(t_{*})={3\over 2}\tau_{*}m^{-1}$.
The relatively
large metric perturbations at this scale (say, with r.m.s amplitude
$\sim 10^{-2}\div 10^{-1}$) might lead to the copious production of primordial black holes,
and modify the evolution of the Universe right after the end of the preheating stage
\footnote{ When calculating the abundance of primordial black holes,
one should take into account that the metric perturbations are non-Gaussian
in that case. This can lead to a modification of the standard estimates, based
on the assumption of Gaussian statistics for the perturbations.}.
An overproduction of the
primordial black holes might constrain
the parameters of our model.
To characterize the energy density fluctuations at some comoving scale $\sigma$,
we introduce the coarse-grained energy density operator:
$$\hat \epsilon_{c.g}=\int d^{3}yG_{\sigma}(x-y)\hat \epsilon (y), \eqno 41$$
where
$$\hat \epsilon ={1\over 2}({\dot {\hat \chi}}^{2}+g^{2}\phi_{0}^{2}{\hat \chi}^{2})$$
is the operator of the energy density of the $\chi$ field, and the Gaussian window function
$$G_{\sigma}(\vec r)={(2\pi)}^{-3/2}\sigma^{-3}e^{-{r^{2}\over 2\sigma^{2}}}.$$
Consider the relative standard deviation
$$\delta_{\epsilon}(\sigma)=\sqrt{{\left <{\hat \epsilon_{c.g}}^{2}\right >
(\sigma)\over {\left <\hat \epsilon \right >}^{2}}-1}.
\eqno 42$$
The calculation of this quantity can be easily done in two limiting cases. Let us introduce
the wavenumber $k_{\sigma}=\sigma^{-1}$ corresponding to the scale $\sigma$, and also
the wavenumber
$$k_{0}={k_{crit}\over {(2\mu^{eff} \tau )}^{1/6}}\
=\sqrt {{2\over \pi}}{q_{0}^{1/4}a_{0}m\over {(2\mu^{eff})}^{1/6}},$$
corresponding to the modes which have been amplified during all first stage of preheating:
$k_{0}\sim k_{crit}(\tau_{in})$, and the wavenumber corresponding to the horizon scale:
$$k_{h}(\tau)=a(\tau)H(\tau)={2\over 3}{a_{0}m\over \tau^{1/3}}.$$ Let us also average the
quantity $(42)$ over many oscillations of the field $\chi$.
Then,
if $k_{0} < k_{\sigma} < k_{crit}$, the deviation $\delta_{\epsilon}$ is close to $1$:
$\delta_{\epsilon}\approx 1$. If $ k_{\sigma} < k_{0}$, we have
$$\delta_{\epsilon}^{2}\approx {3\over 2\sqrt 2}{({k_{\sigma}\over k_{0}})}^{3}. \eqno 43$$
The eq. $(43)$ can also be rewritten in terms of ratio $k_{\sigma}/k_{h}$:
$$\delta_{\epsilon}\approx D{({k_{\sigma}\over k_{h}})}^{3/2}, \eqno 44$$
where
$$D={\pi^{3/4}\over 3}{{(2\mu^{eff})}^{1/4}\over q_{0}^{3/8}\sqrt \tau}\approx 2\cdot 10^{-3}
{\mu_{0.14}^{1/4}\over q_{4}^{3/8}\sqrt \tau_{2}}, \eqno 45$$
and $\tau_{2}={\tau \over 10^{2}}$. For the perturbations with wavelengths smaller than
the horizon wavelength, the metric perturbation $A$ is related to the energy density
perturbation via the Poisson equation: ${\Delta A \over a^{2}}\approx -\delta \epsilon$. Assuming
that $\delta \epsilon \sim \delta_{\epsilon}\left < \hat \epsilon \right >$,
and also that at the end of the
first stage of preheating the energy density of $\chi$ ``particles'' influences
the expansion law: $H^{2}\sim \left <\hat \epsilon \right >$, we have:
$$A(k_{\sigma})\sim {({k_{h}(\tau_{*})\over k_{\sigma}})}^{2}\delta_{\epsilon}=
D{({k_{h}(\tau_{*})\over k_{\sigma}})}^{1/2}, \eqno 46$$
for $k_{h}(\tau_{*})< k_{\sigma} < k_{0}$. For super-horizon perturbations the dependence
of the perturbations on $k$ should be the same as the $k$--dependence of the source of the
perturbations, and
$$A(k_{\sigma})\sim \delta_{\epsilon}= D{({k_{\sigma}\over {k_{h}(\tau_{*})}})}^{3/2}.
\eqno 47$$
Thus, the characteristic
metric perturbation induced by the fluctuations of the energy density
takes its maximal value at the horizon scale, and this value is smaller than $\sim 10^{-3}$
if $q_{0} > 10^{4}$.
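To illustrate these estimates numerically, one can evaluate $D$ from eq. $(45)$ and the metric perturbation $A(k_{\sigma})$ from eqs. $(46)$ and $(47)$. The following minimal Python sketch is ours and assumes the fiducial values $q_{0}=10^{4}$, $\mu^{eff}=0.14$ and $\tau=10^{2}$ suggested by the notation $q_{4}$, $\mu_{0.14}$ and $\tau_{2}$ in eq. $(45)$:
\begin{verbatim}
import numpy as np

# fiducial parameter values (an assumption, suggested by eq. (45))
q0, mu_eff, tau = 1.0e4, 0.14, 1.0e2

# eq. (45): D = pi^(3/4)/3 * (2 mu_eff)^(1/4) / (q0^(3/8) sqrt(tau))
D = np.pi**0.75 / 3 * (2*mu_eff)**0.25 / (q0**0.375 * np.sqrt(tau))
print("D =", D)  # ~ 2e-3, matching the estimate quoted in eq. (45)

# eqs. (46)-(47): the induced metric perturbation peaks at the horizon scale
for ratio in [0.1, 1.0, 10.0]:       # ratio = k_sigma / k_h(tau_*)
    A_metric = D * ratio**1.5 if ratio <= 1.0 else D / np.sqrt(ratio)
    print("k_sigma/k_h =", ratio, "  A ~", A_metric)
\end{verbatim}
The printed maximum at $k_{\sigma}=k_{h}$ reproduces the estimate $A\sim 10^{-3}$ quoted above.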
As in the case of the linear perturbations, we did not take into account the contribution
of the $\chi$ ``particles'' to the dynamics of the background model when calculating the amplitude of
the metric perturbations induced by the non-linear terms. However, this contribution can
change the parameters of the background model (such as, e.g., the expansion rate) only by a
factor of order two, and we believe this modification cannot significantly change
our estimates
\footnote{Recently, this question, as well as the generation of the linear perturbations
in the same model,
has been considered numerically in the paper [11]. The authors came to a similar
conclusion, namely that the perturbations of both types are very small at large scales.
I am very grateful to Dr. K. Jedamzik for drawing my attention to that paper.}.
\bigskip
{\large\bf Acknowledgments}
\bigskip
The author thanks Lev Kofman and especially
Vladimir Lukash for many valuable discussions, and also
Alexander Polnarev for useful comments. Queen Mary and Westfield College, University of
London, is acknowledged for its hospitality during the preparation of this paper. This work was
supported in part by the Danish Research Foundation through its establishment of the Theoretical
Astrophysics Center.
\section{Introduction}
\label{sec.Introduction}
The volume conjecture proposes a relation between the colored Jones invariants of a knot and the simplicial volume of its complement. In the formulation of \cite{MurakamiMurakami} the precise statement is as follows. Note that we use the variable $A$ from skein theory instead of the $q$ used in \cite{MurakamiMurakami} (the variables are related by $A^4 = q$).
\begin{conj}
\label{conj.Volconj} \textbf{Volume conjecture} \cite{Kashaev}, \cite{MurakamiMurakami}.
For any knot $K$ we have:
$$\lim_{N\to \infty}\frac{2\pi}{N}\log|J_N(K)(e^{\frac{\pi i}{2N}})| = \mathrm{Vol}(\mathbb{S}^3 - K)$$
\noindent where $J_N$ denotes the $N$--colored Jones invariant of $K$ and $\mathrm{Vol}$ is the simplicial volume.
\end{conj}
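\noindent As an informal numerical illustration (ours, and not used in what follows), the conjecture can be tested on the figure-eight knot: by \cite{MurakamiMurakami} the value of $J_N$ at $e^{\frac{\pi i}{2N}}$ coincides with Kashaev's invariant \cite{Kashaev}, which for this knot is the well known sum $\sum_{j=0}^{N-1}\prod_{k=1}^{j}|1-e^{2\pi i k/N}|^{2}$, and the limit should equal the hyperbolic volume $\approx 2.0299$ of the figure-eight complement. Convergence is slow, with corrections of order $(\log N)/N$:
\begin{verbatim}
import numpy as np

def kashaev_fig8(N):
    # <4_1>_N = sum_{j=0}^{N-1} prod_{k=1}^{j} |1 - exp(2 pi i k/N)|^2
    q = np.exp(2j * np.pi / N)
    total, prod = 0.0, 1.0
    for j in range(N):
        total += prod                    # the j-th summand
        prod *= abs(1 - q**(j + 1))**2   # extend the product to k = j+1
    return total

VOL_FIG8 = 2.029883212819307  # hyperbolic volume of the figure-eight complement
for N in [51, 201, 1001]:
    print(N, 2*np.pi/N * np.log(kashaev_fig8(N)), "vs", VOL_FIG8)
\end{verbatim}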
\noindent To gain more insight into this conjecture and to find simple examples where it holds true we seek to generalize the conjecture to the class of knotted trivalent graphs (KTGs) as defined in \cite{DThurston}. Roughly speaking a KTG is a thickened embedded graph that is allowed to have multiple edges and also edges without vertices, so that KTGs generalize framed knots and links.
Before turning to general KTGs we first discuss the generalization of the volume conjecture to links. For links the above version of the volume conjecture does not hold, because it fails for many split links \cite{MMOTY} and it also fails in a more serious way for the Whitehead chains defined in \cite{vanderVeen}.
For a split link (a link some of whose components can be separated from each other by a sphere in the complement) the normalization of the colored Jones invariant has to be adjusted slightly. For knots the colored Jones invariant was normalized by dividing by the unnormalized invariant of the unknot. If we use this normalization for a split link then the colored Jones invariant vanishes at the root of unity as was noted in \cite{MMOTY}. To avoid this problem we propose the following normalization. For a split link with $s$ split components we normalize by dividing by the unnormalized invariant of the $s$--fold unlink. With this normalization the normalized colored Jones invariant becomes multiplicative under distant union, see section \ref{sec.ColoredJones}. Since the simplicial volume is additive with respect to distant union it follows that using this normalization the volume conjecture is true for a split link if it holds for all its split components.
The above conjecture fails in a more serious way in the case of the Whitehead chains. For these links it was shown \cite{vanderVeen} that $J_{N}(e^{\frac{\pi i}{2N}}) = 0$ for all even $N$ but that the limit proposed in the volume conjecture is still valid when one restricts to odd colors $N$. In section \ref{sec.ColoredJones} we will argue that the sequence of even colors is special and that the same failure is not as likely to occur in any other subsequence.
The above motivates the following modification of the volume conjecture that we propose to call the $\mathrm{so(3)}$ volume conjecture. To the best knowledge of the author it still stands a chance to hold for all knots and links.
\begin{conj} $\mathbf{\mathrm{so(3)}}$ \textbf{volume conjecture}
\label{conj.So3volconj}
\newline The following form of the volume conjecture holds for all knots and links $L$:
$$\lim_{N\to \infty}\frac{2\pi}{N}\log|J_N(L)(e^{\frac{\pi i}{2N}})| = \mathrm{Vol}(L)$$
\noindent where $N$ runs over the odd numbers only and $J_N$ is normalized as described above.
\end{conj}
\noindent The name $\mathrm{so(3)}$ is chosen because we restrict ourselves to odd colors $N$, i.e. representations of the Lie algebra $\mathrm{so(3)}$ instead of $\mathrm{sl}(2)$. The restriction to odd $N$ is natural because Kashaev's original invariant for triangulated links in $\mathbb{S}^3$ was also defined for odd $N$ only, see condition (3.12) in \cite{Kash1}. One might also argue more generally that the odd colors correspond to the spherical representations of $\mathrm{sl}(2)$.
Now we would like to generalize the volume conjecture even further to the class of knotted trivalent graphs (KTGs). A motivation for this generalization is that such graphs show up naturally in the computation of the colored Jones invariant when one applies fusion. Another motivation is that very simple graphs such as planar graphs will have relatively simple Jones invariants and a complement that is easy to triangulate. Considering graphs in their own right will furthermore clarify the role of six-j symbols, since they are the $\mathrm{sl}(2)$-invariants of the tetrahedral graph. In order to obtain a volume conjecture in the case of a KTG we need to define both the colored Jones invariant of a KTG and the volume of a KTG.
The generalization of the colored Jones invariant to KTGs is fairly straightforward and is based on the Kauffman bracket, see section \ref{sec.ColoredJones}. The idea is to connect the three incoming Jones--Wenzl idempotents in a trivalent vertex in the only possible planar way. Alternatively one can think of a trivalent vertex as a Clebsch--Gordan injector of the representation on the incoming strand into the tensor product of the representations of the two outgoing strands. We need a slight extension of the usual formalism to deal with half twisted edges such as a M\"{o}bius band. It is well known that this procedure yields a Laurent polynomial when the KTG is a knot or a link. For general KTGs this will not be the case and we obtain an invariant that is a quotient of Laurent polynomials.
The definition of the volume of a KTG is more complicated and will be treated in detail in section \ref{sec.GeometryComplement}. Here we give a brief overview of the ideas involved. The boundary of the exterior of a graph is a handlebody, so if the exterior is to be hyperbolic then the boundary cannot be a cusp, but we can require it to be a totally geodesic boundary as in \cite{Frigerio2}. However, very different graphs can have homeomorphic exteriors because the structure of edges and vertices is lost. To fix this we exclude annuli and tori around the edges from the boundary, so that they become cusps and the remaining punctured spheres become a geodesic boundary. This version of the exterior will be called the outside of the graph. It is shown in \cite{Frigerio} that rigidity still holds for such structures provided that we use a system of closed curves on the boundary to keep track of where the cusps should be.
To deal with non-hyperbolic graphs we can no longer use the simplicial volume as was done for knots and links. This is because it was shown in \cite{Jungreis} that the simplicial volume of a hyperbolic manifold with geodesic boundary does not agree with its hyperbolic volume when the boundary is non-empty. To get around this we use the JSJ--decomposition and define the volume as the sum of the volumes of the hyperbolic pieces in the decomposition. For links this definition is known to agree with the simplicial volume.
Having defined the colored Jones invariant and the volume of a KTG, the above statement of the $\mathrm{so(3)}$ volume conjecture also makes sense for KTGs. Indeed, we propose that with this interpretation of volume and Jones invariant Conjecture \ref{conj.So3volconj} should be true for all KTGs.
\begin{conj} The $\mathrm{so(3)}$ volume conjecture holds for all knotted trivalent graphs.
\label{conj.KTG}
\end{conj}
\noindent To provide some evidence for this claim we will prove the $\mathrm{so(3)}$ volume conjecture for the class of augmented KTGs defined below. This will be the main purpose of the paper.
To describe the construction of augmented KTGs and to organize the calculations it is convenient to have a way to generate all KTGs by simple operations that we define now. For now let us think of a KTG as a thickened embedding of a graph whose edges are ribbons and whose vertices are disks. A more detailed treatment can be found in section \ref{sec.KTGs}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width = 12 cm]{Fig1.eps}
\caption{First row: the four KTG moves, namely the triangle move $A$, the positive and negative half twists $H_\pm$ and the unzip $U$. Second row: the standard tetrahedron and the $n$--unzip $U_n$ (we have drawn the case $n = 2$).}
\label{fig.KTGmoves}
\end{center}
\end{figure}
\begin{df}
\label{df.KTGmoves}
The following four operations on KTGs will be called the KTG moves, see figure \ref{fig.KTGmoves}. The triangle move $A$ replaces a vertex by a triangle, the positive half twist move $H_+$ inserts a positive half twist into an edge, the negative half twist $H_-$ inserts a negative half twist and finally the unzip move $U$ takes an edge and splices it into two parallel edges.
We also define the following variations on the unzip move called the $n$--unzip $U_n$. This is the unzip together with the addition of $n$
parallel rings encircling the two unzipped strands.
\end{df}
\noindent The four KTG moves defined above are sufficient to generate all KTGs starting from the standard tetrahedron graph shown in figure \ref{fig.KTGmoves}.
\begin{st}
\label{st.KTGgen}
Any KTG can be obtained from the standard tetrahedron using the KTG moves only.
\end{st}
\noindent According to this theorem we can work with KTGs by studying sequences of KTG moves. Of course there are many inequivalent ways to produce the same KTG using the KTG moves, see section \ref{sec.KTGs}.
Now we can define the notion of an augmented KTG.
\begin{df}
\label{df.Augmented}
Let $S$ be a sequence of KTG moves. Define the singly augmented KTG corresponding to $S$ to be the KTG obtained from the standard
tetrahedron by the moves of $S$ except that all unzip moves are to be
replaced by $1$--unzip moves. We will denote the singly augmented KTG corresponding to $S$ by $\Gamma'_S$.
Likewise the $n$-augmented KTGs corresponding to $S$ are defined
to be all the KTGs that can be produced from the standard
tetrahedron by the moves of $S$ except that every unzip
move is to be replaced by an $m$--unzip move, where $m \geq n$. Note that one may choose a different $m$ for each unzip move in $S$.
\end{df}
\noindent Let $\Gamma_S$ be the KTG obtained from a sequence of KTG moves
$S$ and let $\Theta$ be an $n$--augmented KTG corresponding to $S$. Then $\Gamma_S$ is
contained in $\Theta$ and $\Theta - \Gamma_S$ is an $r$--fold unlink. Here $r$ is
the number of rings that were added to $\Gamma_S$ to obtain the augmented KTG $\Theta$. The number
$r$ is called the number of augmentation rings of $\Theta$.
With all definitions in place we can now formulate the main theorem of this paper.
\newpage
\begin{st}
\label{st.Main} $\mathrm{\mathbf{(Main\ Theorem)}}$\newline
Let $S$ be a sequence of KTG moves. There exists an $n\in\mathbb{N}$ such that all
$n$-augmented KTGs $\Gamma$ corresponding to $S$ satisfy the following.
\begin{enumerate}
\item[1)]{Let $t$ be the number of
triangle moves in $S$ and let $r$ be the number of augmentation rings of $\Gamma$. Let $\theta$ be the number of half twists counted with sign and define the following numbers.
$$\phi_N = (-1)^{\frac{N-1}{2}}e^{\frac{N^2-1}{4N}\pi i} \quad \mathrm{and} \quad
\mathrm{sixj}_{N}=\sum_{k=0}^{\frac{N-1}{2}}\qbinom{\frac{N-1}{2}}{k}^4(e^{\frac{\pi i}{2N}})$$ The normalized $N$--colored Jones invariant of $\Gamma$ satisfies:
\[J_N(\Gamma)(e^{\frac{\pi i}{2N}}) = \left\{
\begin{array}{ll}
\phi_N^\theta N^{r}\mathrm{sixj}_N^{t+1} & \mbox{if $N$ is odd}\\
0 & \mbox{if N is even}
\end{array}
\right.\]}
\item[2)]{The JSJ--decomposition of the outside of $\Gamma$ consists of the outside of $\Gamma'_S$ and a Seifert
fibered piece for every $n$--unzip used in the construction of $\Gamma$ such that $n \geq 2$. It follows that $\mathrm{Vol}(\Gamma) = \mathrm{Vol}(\Gamma'_S)$.
Moreover the outside of $\Gamma'_S$ is hyperbolic with geodesic boundary and can be
obtained explicitly by gluing $2t+2$ regular ideal octahedra.}
\item[3)]{$\Gamma$ satisfies the $\mathrm{so(3)}$ volume conjecture, but not the original volume conjecture.}
\end{enumerate}
\end{st}
\noindent The quantum binomial coefficients used in the above definition of $\mathrm{sixj}_N$ are defined in section \ref{sec.ColoredJones}. For a definition of the colored Jones invariant of a KTG, see section \ref{sec.ColoredJones}. The outside of a graph is defined in section \ref{sec.GeometryComplement}; it plays the role of the complement, but it is a manifold with boundary pattern \cite{Matveev}. We will also define the volume of such manifolds. In section \ref{sub.GeometryAugmented} we will show how to obtain the explicit glueing of octahedra mentioned above.
The proof of parts 1) and 2) of the main theorem will be given in sections \ref{sec.ColoredJones} and \ref{sec.GeometryComplement}, but it is easy to see how part 3) follows from the first two parts. The key ingredient is the following observation about the numbers $\mathrm{sixj}_N$. It was shown in \cite{Costantino} that $\lim_{N\to \infty}\frac{2\pi}{N}\log|\mathrm{sixj}_N|=2\mathrm{Vol(Oct)}$, where $\mathrm{Vol(Oct)}$ means the hyperbolic volume of the regular ideal octahedron. Plugging in the formula for the colored Jones from part 1) gives: $$\lim_{N\to \infty}\frac{2\pi}{N}\log|J_N(\Gamma)(e^{\frac{\pi i}{2N}})| = 2(t+1)\mathrm{Vol(Oct)}$$ as a limit over all the odd numbers $N$. According to part 2) of the main theorem this is exactly the volume of $\Gamma$ since $\mathrm{Vol}(\Gamma) = \mathrm{Vol}(\Gamma'_S)$ and $\mathrm{Vol}(\Gamma'_S)$ equals $(2t+2)\mathrm{Vol(Oct)}$. The original volume conjecture does not hold because the even values of $N$ give a colored Jones invariant equal to $0$. This concludes the proof of part 3) assuming the first two parts of the main theorem.
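\noindent As a numerical sanity check of this limit (ours, not needed for the proof), note that at $A = e^{\pi i/2N}$ every quantum integer is the positive real number $[m] = \sin(m\pi/N)/\sin(\pi/N)$ for $1 \leq m \leq N-1$, so $\mathrm{sixj}_N$ can be evaluated directly; working with logarithms avoids overflow for large $N$:
\begin{verbatim}
import numpy as np

def qint(m, N):
    # [m] at A = exp(pi i/2N) equals sin(m pi/N)/sin(pi/N) > 0
    return np.sin(m*np.pi/N) / np.sin(np.pi/N)

def log_qbinom(n, k, N):
    # log of the quantum binomial [n]!/([k]![n-k]!) at this root of unity
    return sum(np.log(qint(n - k + j, N)) - np.log(qint(j, N))
               for j in range(1, k + 1))

def log_sixj(N):
    # log of sixj_N = sum_k qbinom((N-1)/2, k)^4, via log-sum-exp
    n = (N - 1)//2
    logs = [4*log_qbinom(n, k, N) for k in range(n + 1)]
    m = max(logs)
    return m + np.log(sum(np.exp(l - m) for l in logs))

VOL_OCT = 3.663862376708876  # volume of the regular ideal octahedron
for N in [101, 1001, 10001]:
    print(N, 2*np.pi/N * log_sixj(N), "vs", 2*VOL_OCT)
\end{verbatim}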
Now let us note some immediate corollaries.
\begin{cor}
\label{cor.Sublink}
For every KTG $\Gamma$ there is a KTG $\Theta$ containing $\Gamma$ such that $\Theta-\Gamma$ is an
unlink and $\Theta$ satisfies the $\mathrm{so(3)}$ volume conjecture. If $\Gamma$ happens to be a link
then so is $\Theta$.
\end{cor}
\begin{cor}
\label{cor.Triangular}
The $\mathrm{so(3)}$ volume conjecture holds for all KTGs that can be constructed from the standard tetrahedron using the triangle move and the half twist move only. The original volume conjecture fails for such KTGs.
\end{cor}
\noindent The final corollary has nothing to do with the volume conjecture, but gives an alternative proof of a result by Baker \cite{Baker}.
\begin{cor}
\label{cor.Arithmetic}
Every link is a sublink of an arithmetic link.
\end{cor}
\begin{proof}
The complement of the singly augmented link corresponding to the given link is an arithmetic hyperbolic 3--manifold, since it is obtained by glueing regular ideal octahedra via symmetries of the tiling of hyperbolic space by regular ideal octahedra, see \cite{Thurston}.
\end{proof}
\noindent The organization of the paper is as follows. In section \ref{sec.KTGs} we discuss KTGs, KTG diagrams and KTG moves. The subject of section \ref{sec.ColoredJones} is skein theory. Here we define the colored Jones invariant of a KTG and show how it can be expressed in terms of six-j symbols. Specializing to the $N$--th root of unity and making use of the special properties of augmentation yields part 1) of the main theorem. In section \ref{sec.GeometryComplement} we give a definition of the volume of a 3--manifold with boundary and we study the geometry of the outside of an augmented KTG. Here we prove part 2) of the main theorem. Section \ref{sec.Conclusion} is a short summary and a conclusion. \newline
\noindent \textbf{Acknowledgement.} I would like to thank Dave Futer, Rinat Kashaev, Jessica
Purcell, Nicolai Reshetikhin and Dylan Thurston for enlightening
conversations and the organizers of the conferences, workshops and seminars in Hanoi, Strasbourg, Basel, Aarhus and Geneva for giving me the
opportunity to present parts of this work there.
\section{Knotted Trivalent Graphs}
\label{sec.KTGs}
In this section we state some general facts about knotted trivalent graphs (KTGs). We discuss the extra Reidemeister moves that are necessary to relate isotopic KTG diagrams and describe how every KTG can be generated from the standard tetrahedron by the KTG moves.
\begin{df}
A fat graph is a 1--dimensional simplicial complex together with an embedding into a surface as a spine.
A knotted trivalent graph (KTG) is a trivalent fat graph embedded
as a surface into $\mathbb{S}^3$ and considered up to isotopy.
\end{df}
\noindent By a diagram of a KTG we will mean a regular projection of its spine onto the plane, together with the usual crossing information and small diagonal lines indicating where an edge of the KTG makes a half twist. Except for the locations in the diagram where there is a half twist, the surface of the KTG is assumed to be parallel to the projection plane as in the blackboard framing. The half twist pictures are necessary because a KTG such as the M\"{o}bius band cannot be given the blackboard framing. See Figure \ref{fig.ExampleKTG} for an example of a KTG together with its diagram.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12cm]{Fig2.eps}
\caption{A KTG and its diagram.}
\label{fig.ExampleKTG}
\end{center}
\end{figure}
Next we consider the moves that relate diagrams of isotopic KTGs. We will call these moves the trivalent isotopy moves. In addition to the usual Reidemeister moves for framed links we have moves related to the trivalent vertex and the
half-integral framing. These additional moves are called the fork slide, trivalent twist, twist slide and addition of twists, see figure \ref{fig.TrivalentIsotopy}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12cm]{Fig3.eps}
\caption{The additional trivalent isotopy moves on a KTG diagram. First row: the fork slide and the twist slide. Second row: the trivalent twist and the addition of twists (multiple cases).}
\label{fig.TrivalentIsotopy}
\end{center}
\end{figure}
\begin{df}
\label{df.TrivalentIsotopy}
The trivalent isotopy moves are the Reidemeister moves for framed links and the following four moves on KTG diagrams:
\begin{enumerate}
\item{Let the fork slide be the move where a strand is slid over or
under a trivalent vertex (first picture of figure \ref{fig.TrivalentIsotopy}).}
\item{One can slide a half twist past a crossing (second picture of figure \ref{fig.TrivalentIsotopy}). This is called the twist slide.}
\item{The trivalent twist is the move where a single half twist is moved past a trivalent vertex. It starts on one edge, passes the vertex, and creates a crossing and a half twist on each of the other two edges (third picture of figure \ref{fig.TrivalentIsotopy}). The sign of the initial half twist equals the sign of the crossing and of the two ensuing half twists.}
\item{One may cancel or create two half twists of opposite sign on the same edge. Two half twists of equal sign on the same edge may be replaced by a curl of the same sign on that edge (last pictures of figure \ref{fig.TrivalentIsotopy}). This is called addition of twists.}
\end{enumerate}
\end{df}
\noindent The same arguments that are used in the proof of Reidemeister's theorem can be employed to prove the following theorem, see also \cite{Turaev}.
\begin{st}
\label{fig.TrivalentReidemeister}
Two KTG diagrams define isotopic KTGs if and only if the diagrams are related by trivalent isotopy moves.
\end{st}
\subsection{KTG moves}
\label{sub.KTGmoves}
We now take a closer look at the KTG moves defined in the introduction (Definition \ref{df.KTGmoves}). We will give a proof of Theorem \ref{st.KTGgen} that states that any KTG can be generated from the standard tetrahedron (see figure \ref{fig.KTGmoves}) using the KTG moves.
It is important to note that the result of an unzip move is determined by the number of half twists present on the edge. Technically such half twists have to be pushed off the edge before one can perform the unzip. In practice it is however much easier to remember that $n$ half twists on an edge give rise to $n$ crossings between the two parallel edges produced by the unzip. This follows from the trivalent isotopy moves defined above. Alternatively it can be checked physically by cutting a twisted band into two pieces along its core.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12 cm]{Fig4.eps}
\caption{Generating an arbitrary diagram $D$ from the tetrahedron by sweep-out.}
\label{fig.Sweepout}
\end{center}
\end{figure}
\begin{proof} (of Theorem \ref{st.KTGgen}).
We start with the diagram $D$ of the KTG that we want to generate drawn hatched in the first picture of figure \ref{fig.Sweepout}. Below it we draw a standard tetrahedron in black. The hatched part of the picture still needs to be generated and the black part is already done.
We generate the diagram $D$ from the topmost edge of the tetrahedron step by step using the elementary steps depicted in figure \ref{fig.ElementarySteps}. The edge of the tetrahedron moves upwards over the hatched diagram $D$ and at every step we delete the hatched part of $D$ that is covered and regenerate it by one of the moves indicated in figure \ref{fig.ElementarySteps}.
The elementary moves $A$, $H_{\pm}$ and $U$ in figure \ref{fig.ElementarySteps} are the KTG moves, and the moves $B$ and $C$ are compositions of KTG moves, see figure \ref{fig.Dewanagri} for a proof. The last step in the derivation of the move $C$ consists of unzipping the half twisted edge. To do this one can either cut the edge along its core or first isotope the half twist up to get a crossing.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12 cm]{Fig5.eps}
\caption{The elementary steps encountered by the edge of the tetrahedron. The hatched parts are only meant to indicate the course of action, these parts are not actually there. With this in mind one recognizes the $U$ in the first picture as the unzip move.}
\label{fig.ElementarySteps}
\end{center}
\end{figure}
\noindent We stop the sweep-out process right before reaching the last hatched maximum of $D$, as indicated in the middle picture in figure \ref{fig.Sweepout}. To close the diagram we remove this maximum and unzip the three vertical edges of the tetrahedron to obtain the required diagram, see the last picture in figure \ref{fig.Sweepout}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12 cm]{Fig6.eps}
\caption{A derivation of the move $B$ from the KTG moves (first row) and a derivation of $C$ from the KTG moves (second row).}
\label{fig.Dewanagri}
\end{center}
\end{figure}
\end{proof}
\noindent There are many ways to produce the same KTG using the KTG moves. For example if one starts with a single trivalent vertex and applies the triangle move then one can proceed in two ways to produce the same diagram. Either perform a single triangle move on the top vertex,
or do two triangle moves on the two lower vertices followed by an unzip on the middle edge at the bottom.
\section{The colored Jones invariant of a KTG}
\label{sec.ColoredJones}
Our definition and calculation of the colored Jones invariant will be based on the Kauffman bracket and its skein relation. We have chosen this language over the more general representation theoretic language because its formulas do not require a preferred direction in the projection plane. Throughout we will make use of the variable $A$ from skein theory. It is related to the $q$ from the introduction by $A^4 = q$.
\subsection{M\"{o}bius Skein Theory}
\label{sub.SkeinTheory}
To be able to include diagrams with half twisted edges we need to extend the usual skein theory a little. We propose to introduce the following extra relations called the half twist relations. A single edge with a positive half twist is equal to $(-A^3)^{1/2}$ times an untwisted edge. A single edge with a negative half twist is equal to $(-A^3)^{-1/2}$ times an untwisted edge. This definition is consistent with the value of the curl in ordinary skein theory and also with the trivalent isotopy move addition of twists from section \ref{sec.KTGs}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12cm]{Fig7.eps}
\caption{The Kauffman relations and the additional twist relations together make up M\"{o}bius Skein Theory.}
\label{fig.Kauffman}
\end{center}
\end{figure}
\begin{df}
\label{df.Skein}
Let $\mathcal{R}$ be the quotient field of the ring of rational Laurent polynomials in $A^{1/2}$. Define the M\"{o}bius skein of a surface $\Sigma$ to be the $\mathcal{R}$-vector space of KTG diagrams without vertices in $\Sigma$ modulo the Kauffman bracket relations and the half twist relations shown in figure \ref{fig.Kauffman}.
\end{df}
\noindent The surface is allowed to have marked points on its boundary but in this case we only allow diagrams that have edges ending at all the boundary points.
Note that the above definition coincides with the usual definition of a skein space except for the half twist relations. A KTG diagram without vertices or half twists can be given the blackboard framing and its value in the M\"{o}bius skein will be exactly its value in the ordinary skein space.
We can now define the colored Jones invariant of a KTG using the notion of a Jones--Wenzl idempotent and a trivalent skein vertex, see \cite{MasbaumVogel}.
\begin{df}
\label{df.ColoredJones}
Define the unnormalized $N$--colored Jones invariant $\langle \Gamma \rangle_N(A)$ of a KTG $\Gamma$ to be the Kauffman
bracket of the M\"{o}bius skein element obtained from a diagram of $\Gamma$ in the plane by
replacing every edge by $N-1$ parallel edges joined by an $(N-1)$--th Jones--Wenzl idempotent and every vertex by a trivalent skein vertex.
More generally we also define the bracket of a KTG diagram with integer labels on the edges to be the bracket of the skein element obtained by replacing an edge labeled $B$ by a $(B-1)$--th Jones--Wenzl idempotent and the vertices by the appropriate trivalent skein vertices.
\end{df}
\noindent In this definition $\langle \Gamma \rangle_2$ coincides with the usual Kauffman bracket. As an example we note that if $M$ is the positive M\"{o}bius band then $\langle M \rangle_3 = -(A^8+A^4+1)$.
Note that replacing an $N$--colored edge with a half twist by parallel strands will cause the $N-1$ parallel edges to be intertwined and individually half twisted so that we get additional crossings and half twists.
Since there is no planar way to connect an odd number of incoming edges, the trivalent vertex is defined to be zero when all edges have even colors. Therefore the colored Jones invariant of any KTG with at least one vertex is also zero for even $N$. In the next section we will see that at the $4N$--th root of unity this is the case for all augmented KTGs.
For the above definition to make sense we still have to prove that the value of $\langle \Gamma \rangle_N$ does not depend on the particular KTG diagram we choose for $\Gamma$. For this we first need a fairly standard lemma on the half twist, see also the last diagram in figure \ref{fig.Recoupling}.
\begin{lem}
\label{lem.HalfTwist}
A positive half twist on $n$ bands on top of an $n$--th Jones--Wenzl idempotent is equal to $(-1)^\frac{n}{2}A^{\frac{n(n+2)}{2}}$ times the untwisted bands with the same idempotent at the bottom. For the negative half twist we get $(-1)^{-n/2}A^{-n(n+2)/2}$ in the same way.
\end{lem}
\begin{proof}
Because of the Jones--Wenzl idempotent there is only one way to resolve the crossings in the diagram that will give a non-zero contribution. A half twist on $n$ parallel bands produces $n(n-1)/2$ positive crossings yielding a contribution $A^{n(n-1)/2}$. Furthermore every strand contains a positive half twist, so the half twist relation gives another contribution of $(-1)^{n/2}A^{3n/2}$. Together this is exactly $(-1)^{n/2}A^{n(n+2)/2}$ as required. For the negative half twist the proof is the same.
\end{proof}
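\noindent As a small symbolic sanity check of the lemma (ours, using sympy), the M\"{o}bius band value quoted after Definition \ref{df.ColoredJones} can be recovered: for $N = 3$ the edge carries $n = 2$ bands, so $\langle M \rangle_3$ is the $n = 2$ eigenvalue times $\langle U \rangle_3 = A^4 + 1 + A^{-4}$.
\begin{verbatim}
import sympy as sp

A = sp.symbols('A')

def half_twist_factor(n):
    # eigenvalue (-1)^(n/2) * A^(n(n+2)/2) of a positive half twist on n bands
    return (-1)**sp.Rational(n, 2) * A**sp.Rational(n*(n + 2), 2)

M3 = sp.expand(half_twist_factor(2) * (A**4 + 1 + A**(-4)))
print(M3)  # -> -A**8 - A**4 - 1, i.e. <M>_3 = -(A^8 + A^4 + 1)
\end{verbatim}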
\begin{prop}
\label{prop.WellDefined}
The unnormalized $N$--colored Jones invariant $\langle \Gamma \rangle_N(A)$ of a KTG $\Gamma$ is a well defined invariant of KTGs.
\end{prop}
\begin{proof}
We need to check that the value of the unnormalized colored Jones invariant is unchanged under the trivalent isotopy moves of KTG diagrams defined in definition \ref{df.TrivalentIsotopy}. For the Reidemeister moves this is clear. Because a trivalent vertex is turned into a skein element without trivalent vertices or half twists, invariance under the fork slide move follows from invariance under Reidemeister II and III.
Lemma \ref{lem.HalfTwist} proves the invariance of the colored Jones invariant under the twist slide move and the addition of half twists. Invariance under the trivalent twist move now follows from this lemma in combination with Theorem 3 of \cite{MasbaumVogel}.
\end{proof}
\noindent Note that the above proof also shows that the bracket of a KTG whose edges are colored by any integers is an invariant. This invariant is multiplicative under distant union.
To relate our definition of the unnormalized colored Jones invariant to the ones that can be found in the literature we note that when $\Gamma$ is a link it coincides with $(-1)^{N-1}$ times the value of the unnormalized Jones invariant defined in \cite{Masbaum}. This follows from the remark that the bracket of a KTG diagram without half twists or vertices equals the bracket of the framed link in the usual skein theory.
The normalization of the Jones invariant that is used in the $\mathrm{so}(3)$ volume conjecture (Conjecture \ref{conj.So3volconj}) is defined as follows.
\begin{df}
\label{df.ColoredJonesNormalized}
Define the normalized colored Jones invariant of a KTG $\Gamma$ with $s$ split components to be $J_N(\Gamma) = \langle \Gamma \rangle_N/\langle U^s \rangle_N$, where $U^s$ is the $s$-component unlink.
\end{df}
\noindent For the volume conjecture we need to specialize to $A = \exp(\pi i/2N)$, but $\langle U^s \rangle_N = (-1)^{s(N-1)}[N]^s$, where $[N] = (A^{2N}-A^{-2N})/(A^{2}-A^{-2})$. At this value of $A$ we have $[N] = 0$, so we have to check that we can divide out this pole and still get a well defined answer.
\begin{prop}
\label{prop.WellDefinedNormalized}
The normalized $N$--colored Jones invariant has a well defined value at $A = \exp(\pi i/2N)$.
\end{prop}
\begin{proof}
Since the unnormalized colored Jones invariant is multiplicative under distant union, the normalized colored Jones invariant also has this property. Therefore we can assume that the number of split components of our KTG $\Gamma$ is $1$. Let $\Gamma$ be the closure of a 1--1 tangle $\Theta$. We label the edges of $\Theta$ with $N$ and interpret $\Theta$ as an element of the M\"{o}bius skein of a square with $2N-2$ marked boundary points. As in the Temperley--Lieb algebra we can now write $\Theta$ as a scalar $f_N(A)$ times the $(N-1)$--th Jones--Wenzl idempotent. Closing the tangle $\Theta$ we find that $\langle \Gamma \rangle_N = \langle U \rangle_N f_N(A)$.
It now remains to show that $f_N(A)$ is a quotient of Laurent polynomials in $A^{1/2}$ whose denominator is not zero at $A = \exp(\pi i/2N)$. To calculate $f_N(A)$ we expand all crossings and half twists in $\Theta$ so as to obtain an element of the Temperley--Lieb algebra and the component of the identity in this expression is $f_N(A)$. The calculation of $f_N(A)$ will involve the Jones--Wenzl idempotents, the skein relation and the half twist relations. From the recursive definition of the Jones--Wenzl idempotent it is clear that $f_N(A)$ is a quotient of Laurent polynomials in $A^{1/2}$ whose denominator does not vanish at $A = \exp(\pi i/2N)$.
\end{proof}
\noindent It follows from our discussion that the normalized colored Jones invariant is multiplicative under both connected sum and distant union of KTG diagrams. To see the multiplicativity with respect to connected sum we observe that it corresponds to concatenation of 1--1 tangles and hence to multiplication of scalars.
We now move on to the problem of calculating the unnormalized colored Jones invariant of a general KTG. Theorem \ref{st.KTGgen} tells us that all KTGs can be constructed from the standard tetrahedron by applying the KTG moves. It turns out that in skein theory the KTG moves correspond to the well known formulas shown in figure \ref{fig.Recoupling}, see also \cite{MasbaumVogel}. We will show below that these formulas can be used to calculate the colored Jones polynomial of any KTG from a sequence of KTG moves generating it.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 12 cm]{Fig8.eps}
\caption{The value of the skein of the labeled standard tetrahedron is the six-j symbol defined below. The fusion formula reverses the unzip move, the half twist formula undoes the half twist move and the triangle formula undoes the triangle move.}
\label{fig.Recoupling}
\end{center}
\end{figure}
To be able to write down the formulas for the six-j symbols shown in figure \ref{fig.Recoupling} we first recall the definition of a quantum integer $[n]= \frac{A^{2n}-A^{-2n}}{A^2-A^{-2}}$. The value of the unknot is $\langle U \rangle_N = \langle N \rangle = (-1)^{N-1}[N]$. Quantum factorials and binomial coefficients are defined in the usual way in terms of the quantum integers.
Given six integer labels $j_1,...,j_6$ on the edges of a tetrahedron as in figure \ref{fig.Recoupling} such that all trivalent vertices are non-zero, define $V_1,V_2,V_3,V_4$ to be half the sums of the three labels around each of the four vertices. For example $V_1 = (j_1+j_2+j_3)/2$. Also define $\Box_1,\Box_2,\Box_3$ to be half the sums of the four labels in the three squares (unions of two pairs of opposite edges). According to \cite{MasbaumVogel} the value of the tetrahedron is: $$\left\langle\begin{array}{ccc} j_1 & j_2 & j_3\\ j_4 & j_5 & j_6\end{array}\right\rangle \quad \mathrm{where} \quad \left\langle\begin{array}{ccc} j_1+1 & j_2+1 & j_3+1\\ j_4+1 & j_5+1 & j_6+1\end{array}\right\rangle =$$ $$ \frac{\prod_{m,n}[\Box_m-V_n]!}{\prod_{k=1}^6 [j_k]!}\sum_{z=\max V_i}^{\min\Box_j}\frac{(-1)^z[z+1]!}{\prod_{r}[\Box_r-z]!\prod_{s}[z-V_s]!}$$
\noindent The value of the labeled theta graph is given by $\langle a\ b\ c \rangle$, where $$\langle a+1\ b+1\ c+1 \rangle = (-1)^{s}\frac{[s+1]![s-a]![s-b]![s-c]!}{[a]![b]![c]!} \quad \mathrm{where} \ s = \frac{a+b+c}{2}$$
\noindent The sum in the fusion formula (upper right) in figure \ref{fig.Recoupling} ranges over all possible triples for which the trivalent vertex is nonzero, that is all $c$ such that $|a-b|\leq c \leq a+b$ and $a+b+c$ is odd. It should be remarked that since we replace an edge labeled $b$ by a $(b-1)$--th Jones--Wenzl idempotent, while in \cite{MasbaumVogel} it is replaced by a $b$--th Jones--Wenzl idempotent, there is a slight shift of indices.
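\noindent For concreteness, these formulas are straightforward to evaluate numerically. The sketch below is ours; the pairing of opposite edges as $(j_1,j_4)$, $(j_2,j_5)$, $(j_3,j_6)$ is an assumption about the labeling in figure \ref{fig.Recoupling}, and it is irrelevant for the symmetric labelings used later. Only admissible labelings (all vertex and square sums of the shifted labels even) are meaningful inputs.
\begin{verbatim}
import numpy as np

def qint(n, A):
    # quantum integer [n] = (A^(2n) - A^(-2n)) / (A^2 - A^(-2))
    return (A**(2*n) - A**(-2*n)) / (A**2 - A**(-2))

def qfact(n, A):
    # quantum factorial [n]! = [1][2]...[n], with [0]! = 1
    out = 1.0
    for m in range(2, n + 1):
        out *= qint(m, A)
    return out

def theta(a, b, c, A):
    # <a b c>: shift the labels down by one, then apply the closed formula
    a, b, c = a - 1, b - 1, c - 1
    s = (a + b + c)//2
    return (-1)**s * qfact(s + 1, A) * qfact(s - a, A) * qfact(s - b, A) \
        * qfact(s - c, A) / (qfact(a, A) * qfact(b, A) * qfact(c, A))

def tet(j1, j2, j3, j4, j5, j6, A):
    # the six-j symbol <j1 ... j6>, with shifted labels as in the text
    j1, j2, j3, j4, j5, j6 = (j - 1 for j in (j1, j2, j3, j4, j5, j6))
    V = [(j1+j2+j3)//2, (j1+j5+j6)//2, (j2+j4+j6)//2, (j3+j4+j5)//2]
    B = [(j1+j4+j2+j5)//2, (j1+j4+j3+j6)//2, (j2+j5+j3+j6)//2]
    pre = np.prod([qfact(b - v, A) for b in B for v in V]) \
        / np.prod([qfact(j, A) for j in (j1, j2, j3, j4, j5, j6)])
    tot = sum((-1)**z * qfact(z + 1, A)
              / (np.prod([qfact(b - z, A) for b in B])
                 * np.prod([qfact(z - v, A) for v in V]))
              for z in range(max(V), min(B) + 1))
    return pre * tot

A0 = 0.9  # a generic real value of A
print(theta(3, 3, 3, A0), tet(3, 3, 3, 3, 3, 3, A0))
\end{verbatim}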
The formulas in figure \ref{fig.Recoupling} suffice to give a formula for the colored Jones invariant of any KTG in terms of the six-j symbols. By Theorem \ref{st.KTGgen} we know that any KTG $\Gamma$ can be constructed from the tetrahedron by a sequence $S$ of KTG moves. To calculate $\langle \Gamma \rangle_N$ we start with the diagram corresponding to $S$ and label all edges by $N$. Now we reverse the KTG moves in $S$ move by move. At every step we keep track of the newly produced edge labels in the diagrams that we get. The formulas in figure \ref{fig.Recoupling} tell us that we get a six-j symbol when we reverse the triangle move $A$, a summation with so-called fusion coefficients from the unzip move $U$ and a factor from the half twist moves $H_{\pm}$. In the next subsection we will use this knowledge to calculate the colored Jones of an augmented KTG at the relevant root of unity.
Finally note that it is well known that the colored Jones invariant of knots and links is a Laurent polynomial. For KTGs this is generally not the case. The colored Jones invariant (normalized or not) of a KTG is merely a quotient of Laurent polynomials in $A^{1/2}$. As an example let us calculate the normalized colored Jones invariant of the theta graph $\theta$. From the formula in figure \ref{fig.Recoupling} we get: $$J_N(\theta) = (-1)^{3k} \frac{[3k+1]![k]!^3}{[2k]!^3[2k+1]} \quad \quad N = 2k+1$$
\noindent By considering the zeros of the numerator and the denominator it is clear that this is not a Laurent polynomial for odd $N$ greater than $3$. For example, at $A = \exp(i\pi/4k)$ one has $[m] = 0$ exactly when $2k$ divides $m$; hence for $k \geq 2$ the numerator acquires a single zero (from the factor $[2k]$ in $[3k+1]!$) while the denominator acquires three (one factor $[2k]$ in each $[2k]!$), so that $J_N(\theta)$ has a pole at this point.
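\noindent The pole can also be observed numerically. The sketch below (ours) evaluates the closed formula for $J_N(\theta)$ with $k = 3$ at points approaching $A = \exp(i\pi/4k)$; the absolute value grows like the inverse square of the distance to the root, as predicted by the zero count above.
\begin{verbatim}
import numpy as np

def qint(m, A):
    return (A**(2*m) - A**(-2*m)) / (A**2 - A**(-2))

def qfact(m, A):
    out = 1.0 + 0j
    for j in range(2, m + 1):
        out *= qint(j, A)
    return out

k = 3
for eps in [1e-2, 1e-3, 1e-4]:
    A = np.exp(1j*np.pi/(4*k)) * (1 + eps)
    J = (-1)**(3*k) * qfact(3*k + 1, A) * qfact(k, A)**3 \
        / (qfact(2*k, A)**3 * qint(2*k + 1, A))
    print(eps, abs(J))  # |J| grows like 1/eps^2: a pole of order two
\end{verbatim}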
\subsection{Asymptotics and augmentation}
\label{sub.AsymptoticsAugmentation}
We have seen that the unnormalized $N$--colored Jones invariant takes the form of a multi-sum of products and quotients of
quantum integers. Every unzip contributes a summation with fusion
coefficients, every triangle move produces a six-j symbol and every
half twist move contributes a power of $A$.
It is not trivial to determine the asymptotics of such a multisum formula. To circumvent this difficulty we augment the KTG. Adding extra unknotted ring-like components actually simplifies the sum at the relevant root of unity because of the following formula from skein theory \cite{Lickorish}, see figure \ref{fig.Ring}. The value of a $(k-1)$--th Jones--Wenzl idempotent encircled by a closed $(N-1)$--th idempotent is $(-1)^{N-1}[k N]/[k]$ times the idempotent.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12cm]{Fig9.eps}
\caption{The effect of adding a ring to a labeled edge. Note that every edge is replaced by a Jones--Wenzl idempotent.}
\label{fig.Ring}
\end{center}
\end{figure}
\noindent The following lemma gives a calculation of the above value at our root of unity.
\begin{lem}
\label{lem.Ring}
In skein theory adding a ring labeled $N$ encircling an edge labeled $k$ is the same as multiplying the edge by $(-1)^{N-1}[k N]/[k]$. The value of this constant is
\[\lim_{A \to e^{\pi i/2N}}(-1)^{N-1}\frac{[k N]}{[k]} = \left\{
\begin{array}{ll}
(-1)^{N-1+k-k/N}N & \mbox{$\mathrm{if}$ $N \mid k$}\\
0 & \mbox{$\mathrm{if}$ $N \nmid k$}
\end{array}\right. \]
\end{lem}
\begin{proof}
The value of $(-1)^{N-1}[k N]/[k] = (-1)^{N-1} \frac{A^{2kN}-A^{-2kN}}{A^{2k}-A^{-2k}}$ at $A=e^{\pi i/2N}$ depends on whether or not the denominator vanishes. The numerator is always zero, but the denominator is zero if and only if $N \mid k$; therefore the value is $0$ if $N$ does not divide $k$. Using l'H\^{o}pital's rule we calculate the value in case $N \mid k$.
$$\lim_{A \to e^{\pi i/2N}}(-1)^{N-1}\frac{[k N]}{[k]} = \lim_{A \to e^{\pi i/2N}}(-1)^{N-1}\frac{2kNA^{-1}}{2kA^{-1}}\frac{A^{2kN}+A^{-2kN}}{A^{2k}+A^{-2k}} =$$ $$(-1)^{N-1}\frac{2(-1)^{k}}{2(-1)^{k/N}} N = (-1)^{N-1+k-k/N}N$$
\end{proof}
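\noindent A quick numerical check of the lemma (ours): since both the numerator and, for $N \mid k$, the denominator vanish at the root of unity, we approach $A = e^{\pi i/2N}$ radially and read off the limit.
\begin{verbatim}
import numpy as np

def qint(n, A):
    return (A**(2*n) - A**(-2*n)) / (A**2 - A**(-2))

N = 7
A = np.exp(1j*np.pi/(2*N)) * (1 + 1e-8)  # slightly off the root of unity
for k in [3, 5, 7, 14]:
    val = (-1)**(N - 1) * qint(k*N, A) / qint(k, A)
    print(k, np.round(val, 4))
# expected: ~0 for k = 3, 5 (N does not divide k); ~7 for k = 7 and k = 14
\end{verbatim}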
\noindent Lemma \ref{lem.Ring} suggests that we can use an edge with a ring as a kind of delta function. In other words we can try to pick only the term $k = N$ from a sum over edges labeled $k$ by adding a ring to the edge. This will turn the expression of the colored Jones invariant into a single term, thus making an asymptotic analysis possible. To make this idea precise we need to be careful because of poles in the six-j symbols and the possibility that a summation variable equals a higher multiple of $N$. This is done in the proof of part 1) of the main theorem that we will now present.
\begin{proof} (of part 1) of the main theorem (theorem \ref{st.Main}))
Let us fix a sequence $S$ of KTG moves and let $\Theta$ be the KTG generated by $S$ starting from the tetrahedron. In the previous subsection we have seen that it is possible to express the colored Jones invariant of $\Theta$ in terms of the sequence $S$ and the formulas from figure \ref{fig.Recoupling} by reversing the KTG moves one by one until one reaches the tetrahedron. From the formulas in figure \ref{fig.Recoupling} one sees that the unnormalized colored Jones invariant can be written as a multisum of products and quotients of quantum integers.
Let $n$ be a fixed integer that is at least one more than the maximum number of poles at $A=e^{\pi i/2N}$ in the summands of the expression of the unnormalized $N$--colored Jones invariant of our KTG $\Theta$. It is very important to note that one can choose such an $n$ to be independent of $N$. To see this we write out all the six-j symbols in the expression for the colored Jones invariant to see that it is a multisum of quotients of quantum factorials. Moreover there is a number $a$ depending only on $S$ such that if $[r]$ occurs in a summand of the expression for the colored Jones then $r\leq aN$. Since the number of zeros of $[r]!$ at $A = \exp(\pi i /2N)$ is $\left\lfloor r/N \right\rfloor$ we know that all terms $[r]!$ that occur have less than $a$ zeros. It follows that the number of poles in a summand of the multi-sum is less than $a$ times the number of quantum factorials present in the denominator. Suppose that the number of quantum factorials is at most $f$; then we can set $n = af+1$. Note that $f$ is independent of $N$ as well.
Now let $\Gamma$ be an $n$--augmented KTG. If we calculate the unnormalized colored Jones invariant then we get the same multisum as we did for $\Theta$ except that according to lemma \ref{lem.Ring} we have at least $n$ factors $(-1)^{N-1}(\frac{[k N]}{[k]})$ for every unzip move, where $k$ is the summation variable created by the formula for reversing the unzip in skein theory, see figure \ref{fig.Recoupling}. By lemma \ref{lem.Ring} and the construction of $n$ only those summands of the multisum for $\Gamma$ for which the summation variables are multiples of $N$ are non-zero at $A = \exp(\pi i /2N)$.
Actually only the term where all summation variables are equal to $N$ is non-zero at the root of unity. To see this suppose that we have a term where one summation index equals $uN$ for some integer $u>1$. We may assume that the index whose value is $uN$ is the first in the order of appearance of the summations in the calculation. This means that the index was created at a stage of the calculation when multiples of $N$ other than $N$ itself did not occur. Since labels that are not multiples of $N$ will not contribute, the only possibility is that the label came from fusing two edges labeled $N$. But this implies that the new summation ranges over the odd integers between $0$ and $2N$, so it can never equal $uN$. Therefore only the summand where all labels are $N$ contributes.
Now that we know that in the multi-sum expression for the unnormalized colored Jones invariant of $\Gamma$ only the term where all indices are $N$ contributes at this root of unity, we can easily write down a closed form expression for its value. Reversing the KTG moves in $S$ now becomes a matter of multiplying by a particular factor. For the triangle move this factor is $\frac{\left\langle\begin{array}{ccc} N & N & N\\ N & N & N\end{array}\right\rangle}{\langle N N N \rangle}$, for the unzip it is $\frac{\langle N \rangle}{\langle N N N \rangle}$, for the half twist $H_\pm$ it is $(-1)^{\pm(N-1)/2}A^{\pm(N^2-1)/2}$ and finally one factor $\left\langle\begin{array}{ccc} N & N & N\\ N & N & N\end{array}\right\rangle$ for the tetrahedron.
Taking into account the normalization and the powers of $N$ from the augmentation we get the following formula for the normalized $N$--colored Jones invariant at $A = \exp(\pi i/2N)$. Note that $\Gamma$ has only one split component so that we divide by $\langle U \rangle_N$ only once. $$J_N(\Gamma)(e^{\frac{\pi i}{2N}}) = \left((-1)^{(N-1)/2}A^{\frac{N^2-1}{2}}\right)^\theta N^r\left(\frac{\langle N \rangle}{\langle N N N \rangle}\right)^u \times $$ $$ \frac{\left\langle\begin{array}{ccc} N & N & N\\ N & N & N\end{array}\right\rangle ^t}{\langle N N N \rangle^t}\frac{\left\langle\begin{array}{ccc} N & N & N\\ N & N & N\end{array}\right\rangle}{\langle N\rangle}$$
where $\theta$ is the number of half twists counted with sign, $u$ is the number of unzips in the sequence, $t$ the number of triangle moves and $r$ the number of augmentation rings.
Note that this formula is zero when $N$ is even, because then $$\left\langle\begin{array}{ccc} N & N & N\\ N & N & N\end{array}\right\rangle$$ vanishes for generic $A$, since the trivalent vertices do not exist.
For odd $N = 2k+1$ we actually have $\frac{\langle N N N \rangle}{\langle N \rangle} = 1$ at $A = \exp(\pi i/2N)$. To see this, first observe that at this value of $A$ we have $[N+j] = -[j] = -[N-j]$. For generic values of $A$ we write: $$\frac{\langle N N N \rangle}{\langle N \rangle} = (-1)^{3k}\frac{[3k+1]![k]!^3}{[2k+1][2k]!^3} = $$
$$(-1)^k \frac{[1]\cdots[k][k+1]\cdots[2k][2k+1][2k+2]\cdots[3k+1]}{([k+1]\cdots[2k])^3[2k+1]}$$ $$ = (-1)^k \frac{[1]\cdots[k][2k+2]\cdots[3k+1]}{([k+1]\cdots[2k])^2}$$
At $A = \exp(\pi i/2(2k+1))$ this becomes equal to $1$ since $[1]\cdots[k] = [2k][2k-1]\cdots[k+1]$ and $[2k+2]\cdots[3k+1] = (-1)^k[1][2]\cdots[k]$.
The same type of calculation shows that: $$\frac{\left\langle\begin{array}{ccc} N & N & N\\ N & N & N\end{array}\right\rangle}{\langle N \rangle}(e^{\pi i/2N}) = \mathrm{sixj}_{N}$$
where $$\mathrm{sixj}_{N}=\sum_{k=0}^{(N-1)/2}\qbinom{(N-1)/2}{k}^4(e^{\pi i/2N})$$
\noindent Therefore the formula for the colored Jones invariant of the augmented KTG $\Gamma$ at the root of unity simplifies to:
$$J_N(\Gamma)(e^{\frac{\pi i}{2N}}) = \left((-1)^{(N-1)/2}A^{\frac{N^2-1}{2}}\right)^\theta N^r \mathrm{sixj}_N^{t+1}$$
\noindent as claimed in part 1) of the main theorem.
\end{proof}
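\noindent The identity $\langle N N N \rangle/\langle N \rangle = 1$ used above is easy to confirm numerically; the following sketch (ours) evaluates the reduced expression from the proof using $[m] = \sin(m\pi/N)/\sin(\pi/N)$ at $A = e^{\pi i/2N}$.
\begin{verbatim}
import numpy as np

def qint(m, N):
    # [m] at A = exp(pi i/2N): sin(m pi/N)/sin(pi/N)
    return np.sin(m*np.pi/N) / np.sin(np.pi/N)

def theta_over_unknot(N):
    # (-1)^k [1]...[k][2k+2]...[3k+1] / ([k+1]...[2k])^2, with N = 2k+1
    k = (N - 1)//2
    num = np.prod([qint(j, N) for j in range(1, k + 1)]) \
        * np.prod([qint(j, N) for j in range(2*k + 2, 3*k + 2)])
    den = np.prod([qint(j, N) for j in range(k + 1, 2*k + 1)])**2
    return (-1)**k * num / den

for N in [3, 5, 7, 11, 101]:
    print(N, theta_over_unknot(N))  # each value equals 1 up to rounding
\end{verbatim}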
\section{The geometry of the complement of an augmented KTG}
\label{sec.GeometryComplement}
In this section we are concerned with the definition and the calculation of the volume of a KTG. The generalization to graphs is not straightforward because of the following problem. A knot is determined by its complement but a graph is not. The homeomorphism type of the complement of a graph does not say much about the graph itself. For example the standard tetrahedron and the connected sum of two theta graphs, shown in figure \ref{fig.HomeoKTGs}, have homeomorphic complements. From the point of view of the volume conjecture this is very inconvenient because the colored Jones invariant at the root of unity does distinguish these graphs. If the volume conjecture is to hold for KTGs then we need to add a little structure to the complement so that we can recover the adjacency matrix of the graph from its complement.
In the first subsection we will show how to assign a 3--manifold with boundary to any embedded graph such that the graph can be recovered from the 3--manifold and we still have the possibility of rigid hyperbolic structures. In the second subsection we apply these ideas to augmented KTGs very explicitly and we give a proof of the second part of the main theorem (theorem \ref{st.Main}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 12cm]{Fig10.eps}
\caption{Two KTGs with homeomorphic complements.}
\label{fig.HomeoKTGs}
\end{center}
\end{figure}
\subsection{The volume of a 3--manifold with boundary}
In this section we lay down the necessary foundations that allow us to define the hyperbolic volume of a graph in $\mathbb{S}^3$. We start with some general notions about hyperbolic structures on 3--manifolds with boundary following \cite{Frigerio}.
\begin{df}
\label{df.Hyp}
A 3--manifold $M$ is called a hyperbolic manifold with geodesic boundary if it is locally modeled on the right upper half space \newline $\{(x,y,z)\in \mathbb{H}^3 | x \geq 0 \}$.
\end{df}
\noindent In the next subsection we will construct many hyperbolic manifolds with geodesic boundary by glueing ideal polyhedra along some of their faces. The remaining faces will make up the boundary.
Mostow rigidity holds for finite volume hyperbolic 3--manifolds with geodesic boundary provided that the boundary is compact \cite{Frigerio}, but when the boundary is non-compact it may fail. However, even in the case of non-compact boundary, one can save the rigidity result by considering annular cusp loops. In order to define this notion we first sketch the construction of the natural compactification of a hyperbolic 3--manifold with geodesic boundary.
Let $M$ be an orientable, finite volume, hyperbolic 3--manifold with geodesic boundary. The double $D(M)$ of $M$ is hyperbolic without boundary. Therefore it consists of a compact portion together with some cusps based on Euclidean surfaces. It follows that $M$ also consists of a compact portion together with some cusps of the form $T\times[0,\infty)$, where $T$ is a Euclidean surface with geodesic boundary such that $(T\times[0,\infty))\cap \partial M = \partial T \times [0,\infty)$. $M$ now admits a natural compactification $\bar{M}$ by adding such a surface $T$ for each cusp. Note that the compactification $\bar{M}$ of $M$ is obtained by adding tori and closed annuli. The set of these annuli will be called $\mathcal{A}_M$.
\begin{df}
\label{df.CuspLoop}
A loop $\gamma$ in a hyperbolic 3--manifold with geodesic boundary $M$ is called an annular cusp loop if in $\bar{M}$ it is freely homotopic to the core of an annulus of $\mathcal{A}_M$.
\end{df}
\noindent With this notion in place we can state the rigidity theorem for hyperbolic 3--manifolds with boundary proven in \cite{Frigerio}.
\begin{st}
\label{st.Rigidity}
Let $M$ and $M'$ be two orientable finite volume hyperbolic 3--manifolds with geodesic boundary and let $\phi:\pi_1(M)\to \pi_1(M')$ be an isomorphism. Suppose that $\phi$ satisfies the additional requirement that $\phi(\gamma)$ is an annular cusp loop in $M'$ if and only if $\gamma$ is an annular cusp loop in $M$. Then $\phi$ is induced by an isometry between $M$ and $M'$.
\end{st}
\noindent The additional requirement is necessary only in the case of 3--manifolds with non-compact geodesic boundary. In the compact case the set $\mathcal{A}_M$ is empty.
In order to save the rigidity we need to include the annular cusp loops into the structure of the manifold itself. This will be done in the context of 3--manifolds with boundary pattern that were introduced in \cite{Johannson}.
\begin{df}
\label{df.ManifoldBoundaryPattern}
A 3--manifold with boundary pattern is a pair $(M,P)$ where $M$ is a 3--manifold with boundary and $P$ is a one dimensional polyhedron $P \subset \partial M$. A homeomorphism of manifolds with boundary patterns is required to restrict to a homeomorphism between the boundary patterns.
\end{df}
\noindent If $M$ is a hyperbolic 3--manifold with geodesic boundary then we would like to include the boundary circles of the annuli $\mathcal{A}_M$ in the natural compactification of $M$ as a boundary pattern, but of course they are not part of $\partial M$. Since the annuli connect in $\bar{M}$ to $\partial M$, we can push them inside a little to become part of $\partial M$.
\begin{df}
\label{df.BoundaryPattern}
The boundary pattern corresponding to the hyperbolic structure with geodesic boundary on $M$ is defined to be the set of boundary curves of the annuli in $\mathcal{A}_M$, pushed inside of $\partial M$.
\end{df}
\noindent A corollary of the above rigidity theorem is now the following:
\begin{st}
\label{st.Rigidity2}
Let $(M,P)$ and $(M',P')$ be two orientable finite volume hyperbolic 3--manifolds with geodesic boundary and let $P$ and $P'$ be their corresponding boundary patterns.
If $f:(M,P)\to (M',P')$ is a homeomorphism of 3--manifolds with boundary pattern then $f$ is induced by an isometry between $M$ and $M'$.
\end{st}
\noindent Thus the hyperbolic structure is still rigid if one takes into account the boundary patterns. Therefore we should only allow hyperbolic structures that agree with the given boundary pattern.
\begin{df}
\label{df.HypBoundary}
A 3--manifold with boundary pattern $(M,P)$ is said to allow a hyperbolic structure with geodesic boundary if it can be given a finite volume hyperbolic structure with geodesic boundary that turns the components of $P$ into annular cusp loops.
\end{df}
\noindent To define the volume for more general manifolds with boundary pattern we use the JSJ--decomposition and add up the volumes of the pieces allowing a hyperbolic structure with geodesic boundary. We state a version of the JSJ--decomposition for 3--manifolds with boundary pattern taken from \cite{Matveev}.
\begin{st}
\label{st.JSJ}
Let $(M,P)$ be an orientable, irreducible and boundary irreducible 3--manifold with boundary pattern. There exists a JSJ-system of annuli and tori that is unique up to admissible isotopy.
The system decomposes $(M,P)$ into three types of JSJ-chambers: simple 3--manifolds, Seifert manifolds and I--bundles.
\end{st}
\noindent Note that the JSJ--chambers are also 3--manifolds with boundary pattern. In addition to the original boundary pattern of $(M,P)$ they also inherit the adjacent boundary curves of the annuli in the JSJ--system \cite{Matveev}.
\begin{df}
\label{df.HypVol}
Let $(M,P)$ be an orientable, irreducible and boundary irreducible 3--manifold with boundary pattern. We define the hyperbolic volume $ \mathrm{Vol}(M,P)$ of $(M,P)$ to be the sum of the hyperbolic volumes of the JSJ--chambers that allow a hyperbolic structure with geodesic boundary.
\end{df}
\noindent The rigidity theorem (Theorem \ref{st.Rigidity2}) above and the uniqueness of the JSJ--decomposition show that the volume is a well defined invariant of orientable, irreducible and boundary irreducible 3--manifolds with boundary pattern. The definition of volume can be extended further by demanding it to be additive under connected sums.
As a motivation for this definition of the hyperbolic volume of a 3--manifold with boundary pattern we note that it coincides with the simplicial volume in the case of an empty boundary \cite{Ratcliffe}. However for manifolds with boundary this notion seems to be more appropriate. Indeed the Gromov norm no longer agrees with the volume of a hyperbolic manifold as soon as the boundary is non-empty \cite{Jungreis}.
The most important example for our purposes is the so called outside of a graph. This is the version of the complement of a graph that is suitable for carrying a rigid hyperbolic structure.
\begin{df}
\label{df.GraphComplement}
Let $\Gamma$ be an embedded graph in $\mathbb{S}^3$, where edges without vertices and multiple edges are allowed. We define the outside $O_\Gamma$ of $\Gamma$ to be the 3--manifold with boundary pattern constructed as follows.
Let $N(\Gamma)$ be the neighborhood of $\Gamma$ made up from small open balls around the vertices, closed solid tori around the edges of $\Gamma$ without vertices and small closed solid cylinders around the edges that intersect the closure of the balls around the adjacent vertices in disjoint disks. Define the outside $O_{\Gamma}$ to be $\mathbb{S}^3-N(\Gamma)$. Also define the exterior $E_\Gamma$ to be the closure of $O_{\Gamma}$ as a subspace of $\mathbb{S}^3$.
We will endow $O_\Gamma$ with the boundary pattern $P_\Gamma$ consisting of a circle around every hole on every holed sphere in its boundary.
\end{df}
\noindent The outside of a graph may not be irreducible because the graph might be the distant union of a number of split components. If this is the case we cut the outside along spheres and cap the spheres off with balls. The resulting pieces are outsides of non-splittable graphs. For such graphs the outside is an orientable, irreducible and boundary irreducible 3--manifold whose boundary consists of spheres from which closed disks have been removed. We have one sphere for every vertex, and the number of holes in it equals the valency of the vertex.
The outside of a graph is not compact and neither is its boundary. The corresponding exterior is compact and will play the role of the natural compactification mentioned above. In the next section we will investigate the geometry and decomposition of 1--augmented KTGs in greater detail.
\subsection{The geometry of augmented KTGs}
\label{sub.GeometryAugmented}
In this subsection we prove part 2) of the main theorem. Let $\Gamma$ be an $n$--augmented KTG. The JSJ--system of the outside $O_\Gamma$ consists of tori only: one for every $k$--unzip move with $k \geq 2$ used to produce $\Gamma$. The tori encircle the augmentation rings produced by the $k$--unzip move. Cutting along such a torus splits off a Seifert fibered JSJ--chamber of the form $(D_k \times \mathbb{S}^1, \emptyset)$, where $D_k$ is a $k$--times punctured disk and $k$ is the number of augmentation rings produced in the $k$--unzip move. After removing all such Seifert pieces we are left with a JSJ--chamber that is exactly the outside of the singly augmented KTG $\Gamma'$ corresponding to $\Gamma$. Note that by definition the hyperbolic volume of $\Gamma$ is equal to the volume of $\Gamma'$, since we neglect Seifert fibered chambers in the JSJ--decomposition.
We aim to show that the outside of any singly augmented KTG $\Gamma'$ admits a hyperbolic structure with geodesic boundary by decomposing it into regular ideal octahedra. The method of decomposition is similar to the construction for links in \cite{FuterPurcell}.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 12cm]{Fig11.eps}
\caption{A truncated octahedron with colored faces (left). The truncated octahedron as the half space beyond the paper plus infinity (right).}
\label{fig.Octahedron}
\end{center}
\end{figure}
\noindent The first step is to use truncated octahedra to create the exterior of the augmented KTG. The truncated octahedra we use are combinatorial closed polyhedra with eight hexagonal faces and six square truncation faces. Half of the hexagonal faces are colored blue, the other half white in an alternating fashion. The truncation faces are painted red, see figure \ref{fig.Octahedron} (left).
\begin{lem}
\label{lem.Triangulation}
Every sequence of KTG moves $S$ has the following properties:
\begin{enumerate}
\item{The exterior $E_S$ of the singly augmented KTG $\Gamma_S'$ is homeomorphic to the space obtained by gluing together $2t+2$ truncated octahedra, where $t$ is the number of triangle moves in $S$.}
\item{If $\beta$ is a sufficiently small ball around an interior point of an edge in $E_S$, the intersection of $\beta$ and the union of the interiors of the octahedra making up $E_S$ has either two or four components.}
\item{For each vertex of $\Gamma_S'$ there is a pair of blue faces that is sent by the homeomorphism from part 1) onto the three-holed sphere in the boundary of the exterior of $\Gamma_S'$ corresponding to that vertex. The boundary circles of every hole are glued together from pairs of red edges of the two blue faces.}
\end{enumerate}
\end{lem}
\begin{proof}
The proof proceeds by induction on the number of KTG moves in the sequence $S$.
\textbf{Induction basis.} Let us suppose first that $S$ is empty so that $\Gamma_S'$ is the standard tetrahedron. Now take two truncated octahedra and glue their white faces together in pairs via the identity. To see how this produces the exterior of the tetrahedron graph let us first look at a single truncated combinatorial octahedron, see figure \ref{fig.Octahedron} (right).
By a homeomorphism we can present the truncated octahedron as the upper half space (thought of as lying behind the paper) plus infinity with the colored faces on its boundary. The blue faces are now small blue disks in the plane, while one white face is stretched out so as to contain infinity. Now we bend the blue and red faces up as in figure \ref{fig.Base}. In this figure the interior of the octahedron is located directly above the blue dome-like faces in the upper half space. The horizontal plane on which the blue domes rest contains the white faces.
We can place the second octahedron in the lower half space with the blue faces pushed downwards so that it looks like the reflection of the upper octahedron in the horizontal plane. Gluing the octahedra together along the white faces thus produces a 3--manifold homeomorphic to the exterior of the tetrahedral graph. Since we used exactly two octahedra, part 1) is proven.
For part 2) note that all edges of the exterior are alike so that we can concentrate on one of them. A small ball around an interior point of such an edge intersected with the interiors of the octahedra has two components: one in the upper half space and one in the lower half space.
The third part is also clear since the exterior can be arranged in such a way that the horizontal plane cuts it into mirror symmetrically arranged truncated octahedra.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 12cm]{Fig12.eps}
\caption{The exterior of the standard tetrahedron can be obtained by glueing two truncated octahedra. Only the upper one is shown.}
\label{fig.Base}
\end{center}
\end{figure}
\textbf{Induction step.} Suppose $S$ is a sequence of KTG moves that has the properties 1,2,3 in the lemma. Let $T$ be a sequence of KTG moves obtained by performing one of the four KTG moves directly after $S$. In order to show that $T$ also has properties 1,2,3 we need to consider four cases depending on which KTG move was made: negative or positive half twist, triangle or unzip.
Half twist. If the last move was a half twist then the exteriors of $\Gamma_S$ and $\Gamma_T$ are homeomorphic and the number of triangle moves in producing them is equal. We can therefore use the gluing of truncated octahedra that worked for $S$.
Triangle move. Since $T$ contains one more triangle move than $S$ we need two more truncated combinatorial octahedra to glue the exterior $E_T$ than we needed to glue $E_S$. We will call the two new truncated octahedra $O_1$ and $O_2$. Let $v$ be the vertex of $\Gamma_S$ where the triangle move was performed. By the induction hypothesis 3) we know there are two blue faces $B_1$ and $B_2$ in $E_S$ that make up the three-holed sphere corresponding to $v$. The new gluing is produced from the old by decreeing that one blue face of $O_i$ is to be identified with the face $B_i$. The corresponding pairs of white faces of $O_1$ and $O_2$ should be identified also.
To see that the exterior of $\Gamma_T$ is homeomorphic to the above gluing we start by bringing the truncated octahedron $O_1$ into the dome-like form seen in figure \ref{fig.Triangle} (right). The chosen blue face (drawn slightly transparent) is a hemisphere and the rest of $O_1$ is below it. The other blue faces are small domes and the white faces are horizontal. The red faces are half tubes. Now bring $O_2$ into mirror symmetric position below the horizontal plane and glue them along the white faces. The result is a closed ball with three tubular entrances connecting to a triangular tunnel in the middle. It is now clear that once we glue this ball inside the three-holed sphere corresponding to the vertex $v$ we get the exterior of $\Gamma_T$.
To check property 2) we only need to check the edges of $O_1$ and $O_2$. For them it is clear from the mirror symmetric arrangement of $O_1$ and $O_2$. This also proves property 3).
\begin{figure}[h]
\begin{center}
\includegraphics[width = 12cm]{Fig13.eps}
\caption{The operation corresponding to the triangle move. Only the boundary of the upper half has been depicted. The smaller pictures are a top view and should remind one of the KTG move.}
\label{fig.Triangle}
\end{center}
\end{figure}
Unzip. We will show that the gluing of octahedra that produces the exterior $E_S$ also produces $E_T$ after adding an extra identification of faces. Suppose that $\Gamma_T$ is obtained from $\Gamma_S$ by performing a 1--unzip move on the edge $e$. Take a small open ball neighborhood $B$ in $\mathbb{S}^3$ of the tube around $e$ that contains the two three-holed spheres around the endpoints of $e$ but does not meet any
other parts of the boundary of $E_S$. This ball is depicted as a cylinder in figure \ref{fig.Unzip} (upper left). By the induction hypothesis we know that under the homeomorphism from part 1) the three-holed spheres both split up into two blue faces each in such a way that the boundary circles are glued from pairs of edges. One can thus arrange the ball $B$ in $\mathbb{R}^3$ such that it is mirror symmetric with respect to the horizontal plane. Cutting along the horizontal plane produces two balls $B_1$ and $B_2$. The boundary of one of the balls can then be flattened to look like the second picture of figure \ref{fig.Unzip} (upper right).
Now let us glue together the two blue faces in $B_1$. This produces the next picture in figure \ref{fig.Unzip} (lower right). The red face in the middle of the second picture becomes a tube and the opposite red faces are joined. Now glue the blue faces of $B_2$ in the same way and then glue $B_1$ and $B_2$ back together. We get the ball $B'$ seen in the last picture of figure \ref{fig.Unzip} (lower left).
Note that when there are $h$ half twists present on the edge $e$, performing an unzip produces $h$ half twists between the resulting strands. To accommodate this feature in our gluing we cut $B'$ open again along the two pairs of blue faces. They form a punctured disk whose boundary circle is a longitude of the newly produced ring. The disk is pierced twice by the two horizontal components of the graph that go through the ring. A half twist in these two components is produced by regluing the disks with a half twist. This last correction gives the homeomorphism between the exterior $E_T$ and the gluing of octahedra. If $h$ is even, the correction has not changed the gluing, but if $h$ is odd, we have identified the blue faces of $B_1$ to the diagonally opposite ones of $B_2$.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 12cm]{Fig14.eps}
\caption{The homeomorphism corresponding to the 1--unzip.}
\label{fig.Unzip}
\end{center}
\end{figure}
The extra identification of faces has doubled the number of parts of octahedra coming together at some of the edges of the blue faces involved, but these were previously unglued so this settles part 2) of the lemma. Part 3) is still true because we simply deleted two vertices and left the exterior unchanged around the other ones.
\end{proof}
\noindent Now that we have constructed the exterior of the singly augmented KTG $\Gamma'$, the next step is to go back to its outside.
\begin{lem}
\label{lem.Glueing}
The outside $O_{\Gamma'}$ is homeomorphic as a 3--manifold with boundary pattern to the gluing of truncated octahedra that we constructed for the exterior in lemma \ref{lem.Triangulation}, except that we remove all closed red square faces and endow it with the boundary pattern formed by lines on the unglued blue faces that are parallel to the removed red edges.
\end{lem}
\begin{proof}
The proof of the previous lemma goes through step by step if we replace the exterior by the outside and remove the red truncation faces. It is easy to see that the homeomorphism can be made to identify the boundary patterns.
\end{proof}
\noindent Finally we turn to hyperbolic geometry. The above gluing of truncated octahedra has
the property that at each edge either two or four solid angles meet. This means that if we declare
all the truncated octahedra to be regular ideal hyperbolic
octahedra, then we obtain a hyperbolic manifold with geodesic boundary and with cusps based on the tori and annuli that used to be truncation faces \cite{Ratcliffe}.
The circles in the boundary pattern of the outside $O_{\Gamma'}$ now become annular cusp loops because in the exterior they are freely homotopic to the boundary circles of the annuli in the closure of $O_{\Gamma'}$. The exterior is exactly the natural compactification of $O_{\Gamma'}$. This finishes the construction of the hyperbolic structure on the outside of a singly augmented KTG and also the proof of part 2) of the main theorem.
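\noindent For concreteness we note the volume that this construction produces (a remark stated under the assumptions above, where every piece is declared a regular ideal octahedron): the regular ideal hyperbolic octahedron has volume $v_8 = 8\Lambda(\pi/4) \approx 3.66386$, with $\Lambda$ the Lobachevsky function, so that
\begin{equation*}
\mathrm{Vol}(O_{\Gamma'},P_{\Gamma'}) = (2t+2)\, v_8,
\end{equation*}
where $t$ is the number of triangle moves in the sequence producing $\Gamma'$ (lemma \ref{lem.Triangulation}).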
\section{Conclusion}
\label{sec.Conclusion}
The purpose of this paper was to generalize the volume conjecture to KTGs and to prove it for augmented KTGs. In order to generalize to links and KTGs it was necessary to restrict to odd colors. For knots this seems unnecessary and in general one may ask which KTGs will satisfy the original volume conjecture.
The generalization of the volume of the complement to KTGs involved considering a specific 3--manifold with boundary pattern called the outside of a graph. This notion also makes sense for arbitrary graphs so that one may try to apply geometric techniques to questions in graph embedding. One may also hope to generalize the volume conjecture to arbitrary graphs, but then one must first be able to define the colored Jones invariant of any vertex. For trivalent vertices the colored Jones invariant has a natural meaning as a Clebsch--Gordan projector but for arbitrary vertices there is more choice.
In this paper we have proven the volume conjecture for augmented KTGs provided they have sufficiently many augmentation rings. It would be very natural to try to remove this restriction on the number of rings, but this will require a more detailed analysis of the colored Jones invariant of such KTGs.
Looking back we can summarize our proof as follows. We have seen three different meanings of the KTG moves. Firstly, they can be used to generate all KTGs from the tetrahedron. Secondly, reading them backwards yields an expression for the colored Jones invariant in terms of six-j symbols. Thirdly, the augmented moves encode the combinatorics of the triangulation by octahedra of the corresponding singly augmented KTG. The second and the third viewpoint come together once one notices that augmenting kills the summations in the expression for the Jones invariant (at least at the root of unity). Using the known asymptotics of the regular six-j symbol that remains, this gives a natural proof of the volume conjecture for augmented KTGs.
It seems that the augmented KTGs form a tractable class of KTGs that makes a good testing ground for further extensions of the volume conjecture, for example the complexified volume conjecture \cite{MMOTY}. It is to be hoped that with the right definition of the Chern--Simons invariant for manifolds with boundary pattern this conjecture also holds for KTGs.
A reason for the tractability of the augmented KTGs might be that they are of arithmetic type, see corollary \ref{cor.Arithmetic}. So far, all knots, links and KTGs for which the volume conjecture has been proven were of arithmetic type or not hyperbolic at all (or a combination of the two).
\newpage
\section{Introduction}
Percolation, which was introduced by Broadbent and Hammersley \cite{BroadbentHammersley1957} in 1957,
is one of the fundamental models in statistical physics \cite{StaufferAharony1994,Grimmett1999}. In percolation systems, sites or bonds on a lattice are either occupied with probability $p$, or not with probability $1-p$. When increasing $p$ from below, a cluster large enough to span the entire system from one side to the other will first appear at a value $p_{c}$. This point is called the percolation threshold.
The percolation threshold is an important physical quantity, because many interesting phenomena, such as phase transitions, occur at that point. Consequently, finding percolation thresholds for a variety of lattices has been a long-standing subject of research in this field. In two dimensions, percolation thresholds of many lattices can be found analytically \cite{SykesEssam1964,Scullard2006,Ziff2006,ZiffScullard2006}, while others must be found numerically. In three and higher dimensions, there are no exact results, and all thresholds must be determined by approximation schemes or numerical methods. Many effective numerical simulation algorithms \cite{HoshenKopelman1976,Leath1976,NewmanZiff2000,NewmanZiff2001} have been developed. For example, the ``cluster multiple labeling technique" was proposed by Hoshen and Kopelman \cite{HoshenKopelman1976} to determine the critical percolation concentration, percolation probabilities, and cluster-size distributions for percolation problems. Newman and Ziff \cite{NewmanZiff2000,NewmanZiff2001} developed a Monte Carlo algorithm which allows one to calculate quantities such as the cluster-size distribution or spanning probability over the entire range of site or bond occupation probabilities from zero to one in a single run, and takes an amount of time that scales roughly linearly with the number of sites on the lattice.
Much work in finding thresholds has been done with these and other techniques. Series estimates of the critical percolation probabilities for the bond problem and the site problem were presented by Sykes and Essam \cite{SykesEssam1964-2}, in work that can be traced back to the 1960s. Lorenz and Ziff \cite{LorenzZiff1998} performed extensive Monte Carlo simulations to study bond percolation on three-dimensional lattices using an epidemic cluster-growth approach. Determining the crossing probability \cite{ReynoldsStanleyKlein80,StaufferAharony1994,FumikoShoichiMotoo1989} $R(p)$ as a function of $p$ for different size systems, and using scaling to analyze the results is also a common way to find $p_{c}$. \red{Binder ratios have also been used to determine the threshold \cite{WangZhouZhangGaroniDeng2013,Norrenbrock16,SampaioFilhoCesarAndradeHerrmannMoreira18}.} By examining wrapping probabilities, Wang et al.\ \cite{WangZhouZhangGaroniDeng2013} and Xu et al.\ \cite{XuWangLvDeng2014} simulated the bond and site percolation models on several three-dimensional lattices, including simple cubic (SC), the diamond, body-centered cubic (BCC), and face-centered cubic (FCC) lattices. Other recent work on percolation includes Refs.\ \cite{MitraSahaSensharma19,KryvenZiffBianconi19,RamirezCentresRamirezPastor19,GschwendHerrmann19,Koza19,MertensMoore2018,MertensMoore18s,HassanAlamJituRahman17,KennaBerche17,MertensJensenZiff17}.
Percolation has been investigated on many kinds of lattices. In three and higher dimensions, the most common of these lattices are the SC, the BCC, and the FCC lattices. Thanks to the techniques mentioned above, precise estimates are known for the critical thresholds for site and bond percolation and related exponents in three dimensions. However, in four dimensions (4D), the estimates of bond percolation thresholds that have been determined for the BCC and FCC lattices are much less precise \cite{vanderMarck98} than the values that have been found for some other lattices \red{(that is, two vs.\ five or six significant digits).} In addition, to the best of our knowledge, the bond percolation threshold on SC lattice with the combinations of nearest neighbors (NN) and second nearest neighbors (2NN), namely (SC-NN+2NN), has not been reported so far. We note that the notation 2N+3N is also used for NN and 2NN \cite{MalarzGalam05}.
In this paper, we employ the single-cluster growth method \cite{LorenzZiff1998} to study bond percolation on several lattices in 4D. While confirming previous results for the SC lattice, we obtain more precise estimates of the percolation thresholds for the BCC and FCC lattices. We also find a new value for the bond threshold of the complex-neighborhood lattice SC-NN+2NN. \red{Note that percolation on lattices with complex neighborhoods can also be interpreted as the percolation of extended objects on a lattice that touch or overlap each other \cite{KozaKondratSuszczynski14}.}
With regard to the latter system, Malarz and co-workers \cite{MalarzGalam05,MajewskiMalarz2007,KurzawskiMalarz2012,Malarz2015,KotwicaGronekMalarz19} have carried out several studies on lattices with various complex neighborhoods, that is, lattices with combinations of two or more types of neighbor connections, in two, three and four dimensions. Their results have all concerned site percolation, and are generally given to only three significant digits. Here we show that the single-cluster growth method can be efficiently applied to one of these lattices also. Our goal was to find results to at least five significant digits, which was not difficult to achieve using the methods given here. \red{Note that in general, for Monte Carlo work, increasing the precision by one digit requires at least 100 times more work in terms of the number of simulations, not to mention the additional work studying corrections to scaling and other necessary quantities.}
Precise percolation thresholds are needed in order to study the critical behavior, including critical exponents, critical crossing probabilities, critical and excess cluster numbers, etc. Four dimensions is interesting because it is close enough to six dimensions for $6-\epsilon$ series analysis to have a hope of yielding good results \cite{Gracey2015}, and in general there is interest in how thresholds depend upon dimensionality \cite{GauntRuskin78,vanderMarck98,vanderMarck98k,Grassberger03,TorquatoJiao13,MertensMoore2018}. The study of how thresholds depend upon lattice structure, especially the coordination number $z$, has also had a long history \cite{ScherZallen70,GalamMauger96,vanderMarck97,Wierman02,WiermanNaor05}. Having thresholds of more lattices is useful for extending those correlations.
In the following sections, we present the underlying theory, and discuss the simulation process. Then we present and briefly discuss the results that we obtained from our simulations.
\section{Theory}\label{sec:model}
The central property describing the cluster statistics in percolation is $n_{s}$, defined as the number of clusters (per site) containing $s$ occupied sites or bonds, as a function of the occupation probability $p$. At the percolation threshold $p_{c}$, $n_{s}$ is expected to behave as
\begin{equation}
n_{s} \sim A_0 s^{-\tau} (1+B_0 s^{-\Omega}+\dots),
\label{eq:ns}
\end{equation}
where $\tau$ is the Fisher exponent, and $\Omega$ is the leading correction-to-scaling exponent. Both $\tau$ and $\Omega$ are expected to be universal, namely the same for all lattices of a given dimensionality. The constants $A_0$ and $B_0$ depend upon the system (they are non-universal). The probability that a vertex belongs to a cluster of size greater than or equal to $s$ will then be
\begin{equation}
P_{\geq s} = \sum_{s'=s}^\infty s' n_{s'} \sim A_1s^{2-\tau} (1+B_1s^{-\Omega}+\dots),
\label{ps}
\end{equation}
where $A_1 = A_0/(\tau-2)$ and $B_1 = (\tau-2)B_0/(\tau+\Omega-2)$. Multiplying both sides of Eq.\ (\ref{ps}) by $s^{\tau-2}$, we have
\begin{equation}
s^{\tau-2}P_{\geq s} \sim A_1 (1+B_1 s^{-\Omega}+\dots).
\label{staup}
\end{equation}
It can be seen that there will be a linear relationship between $s^{\tau-2}P_{\geq s}$ and $s^{-\Omega}$ for large $s$, if we choose the correct value of $\Omega$. This linear relationship can be used to determine the value of the percolation threshold, because for $p \ne p_c$ the behavior will be nonlinear.
Taking the logarithm of Eq.\ (\ref{ps}), we find
\begin{equation}
\begin{aligned}
\ln P_{\geq s} & \sim \ln A_1 + (2-\tau)\ln s + \ln(1+B_1 s^{-\Omega}) \\
& \sim \ln A_1 + (2-\tau)\ln s + B_1 s^{-\Omega},
\end{aligned}
\end{equation}
for large $s$. Similarly,
\begin{equation}
\ln P_{\geq 2s} \sim \ln A_1 + (2-\tau)\ln 2 s + B_1(2s)^{-\Omega}.
\end{equation}
Then it follows that
\begin{equation}
\begin{aligned}
\frac{\ln P_{\geq 2s} - \ln P_{\geq s}}{\ln 2} &\sim \frac{(2 - \tau)(\ln 2s - \ln s)}{\ln 2} - \frac{B_1 s^{-\Omega}(2^{-\Omega}-1)}{\ln 2} \\
&\sim (2 - \tau) + B_2 s^{-\Omega} ,
\end{aligned}
\label{localslope}
\end{equation}
where $(\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2$ is the local slope of a plot of $\ln P_{\geq 2s}$ vs.\ $\ln s$, and $B_2 = B_1(2^{-\Omega}-1)/\ln 2$.
Eq.\ (\ref{localslope}) implies that if we make of plot of the local slope vs.\ $s^{-\Omega}$ at $p_c$, linear behavior will be found for large $s$, and the intercept of the straight line will give the value of $(2-\tau)$. \red{Of course, there will be higher-order corrections to Eqs.\ (\ref{eq:ns}) and (\ref{localslope}) related
to an (unknown) exponent $\Omega_1$, but for large $s$ linear behavior in this plot should be found. We did not attempt to characterize the higher-order corrections to scaling.}
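As a concrete instance of Eq.\ (\ref{localslope}), inserting for illustration the values $\tau = 2.3135$ and $\Omega = 0.40$ that we find below gives
\begin{equation*}
\frac{\ln P_{\geq 2s} - \ln P_{\geq s}}{\ln 2} \sim -0.3135 + B_2\, s^{-0.40},
\end{equation*}
so that, at $p = p_c$, a plot of the local slope against $s^{-0.40}$ should extrapolate linearly to the intercept $2-\tau = -0.3135$.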
\section{Simulation results and discussions}
The basic algorithm of the single-cluster growth method is as follows. An individual cluster starts to grow at a seeded site located on the lattice. We choose the origin of coordinates for the seeded site, though any site on the lattice can be chosen under periodic \red{or helical} boundary conditions. From this site, a cluster is grown to neighboring sites by occupying the connecting bonds with a certain probability $p$ or leaving them unoccupied with probability $1-p$. Each cluster is allowed to grow until it terminates in a complete cluster; if it reaches the upper size cutoff, its growth is halted.
To grow the clusters, we check all neighbors of a growth site for unvisited sites, which we occupy with probability $p$, and put the newly occupied growth site on a first-in, first-out queue. \red{Growth sites are those occupied sites whose neighbors have yet to be checked for further growth, and unvisited sites are those sites whose occupation has not yet been determined, for one particular run.} To simulate bond percolation, we simply leave the sites in the unvisited state when we do not occupy them through an occupied bond. (For site percolation, unoccupied visited sites are blocked from ever being occupied in the future.) The single-cluster growth method is similar to the Leath algorithm \cite{Leath1976}.
We utilize a simple programming procedure to avoid clearing out the lattice after each cluster is formed: the lattice values are started out at $0$, and for cluster (run) $n$, any site whose value is less than $n$ is considered unoccupied. When a site is occupied in the growth of a new cluster, it is assigned the value $n$. This procedure saves a great deal of time because we can use a very large lattice, and do not have to clear out the whole lattice after every cluster, many of which are small.
\red{Following is a pseudo-code of this basic algorithm:}
\medskip
\red{\tt Set all lat[x] = 0
for runs = 1 to runsmax
\quad Put origin on queue
\quad set lat[0]=runs
\quad do
\quad \quad get x = oldest member of queue
\quad \quad for dir = 0 to directionmax-1
\quad \quad \quad set xp = x + deltax[dir]
\quad \quad \quad if (lat[xp \& W] < runs)
\quad \quad \quad \quad if (rnd < prob)
\quad \quad \quad \quad \quad set lat[xp \& W] = runs
\quad \quad \quad \quad \quad put xp on queue
\quad while ((queue != empty) \&\& (size < max))
}
\medskip
\red{The actual code is not too many lines longer than this. {\tt rnd} is a random number in the range $(0,1)$. {\tt max} is the maximum cluster size, $2^{15}$ to $2^{17}$ here. We use a one-dimensional array {\tt lat} of length $L^4 = 2^{28} = 268435456$, and use the
bit-wise ``and'' function {\tt \&} to carry out the helical wraparound by writing {\tt lat[xp \& W]} with $W = L^4 - 1$. (This works only for $L$ that are powers of two.) The {\tt deltax} array contains the eight values $1, -1, L, -L, L^2, - L^2, L^3, -L^3$ for the SC lattice, and is generalized accordingly for the other lattices. The size of the cluster is just the value of the queue insert pointer.}
\red{For site percolation, one simply replaces the last five lines by
{\tt \quad \quad \quad if (lat[xp \& W] < runs)
\quad \quad \quad \quad set lat[xp \& W] = runs
\quad \quad \quad \quad if (rnd < prob)
\quad \quad \quad \quad \quad put xp on queue
\quad while ((queue != empty) \&\& (size < max))}
}
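\red{To make the procedure fully concrete, the following minimal Python sketch (our illustration only, not the production code used for the results below; the name {\tt grow\_cluster} and the small values of $L$ and the run count are ours) implements the bond version on the 4D SC lattice with helical boundary conditions:}
\begin{verbatim}
import random
from collections import deque

L = 32                    # linear size (a power of two)
W = L**4 - 1              # bit mask giving helical wraparound
deltas = [1, -1, L, -L, L**2, -L**2, L**3, -L**3]

def grow_cluster(p, lat, run, max_size=2**15):
    # Grow one bond cluster from the origin; return its size.
    queue = deque([0])
    lat[0] = run
    size = 1
    while queue and size < max_size:
        x = queue.popleft()
        for d in deltas:
            xp = (x + d) & W
            if lat[xp] < run and random.random() < p:
                lat[xp] = run     # occupied via an occupied bond
                queue.append(xp)
                size += 1
    return size

lat = [0] * L**4
bins = [0] * 16
for run in range(1, 10001):
    s = grow_cluster(0.1601312, lat, run)
    bins[min(s.bit_length() - 1, 15)] += 1
\end{verbatim}
\red{The last line bins each cluster by the power-of-two size ranges described next.}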
The size of a cluster is identified by the number of occupied sites it contains. The number of clusters whose size (number of sites) falls in the range $[2^n, 2^{n+1}-1]$ for $n=0,1,2,\ldots$ is recorded in the $n$th bin. If a cluster is still growing when it reaches the upper cutoff, it is counted in the last bin. The cutoff was $2^{17}$ occupied sites for the SC lattice, $2^{16}$ for FCC and SC-NN+2NN, and $2^{15}$ for the BCC lattice. The cutoff had to be lower in the latter case because of the expanded nature of the BCC lattice represented on the SC lattice.
While the single-cluster growth method requires separate runs to be made for different values of $p$, it is not difficult to quickly zero in on the threshold to four or five digits, and then reserve the longer runs for finding the sixth digit. It is also simple to analyze the results as shown here --- one does not need to study things like the intersections of crossing probabilities for different size systems or create large output files of intermediate microcanonical results to find estimates of the threshold. The output files here are simply the 15 to 17 values of the bins for each value of $p$ described above.
The simulations on the SC lattice, SC-NN+2NN lattice, BCC lattice, and FCC lattice were carried out for system size $L\times L \times L \times L$ with $L=128$, and with periodic boundary conditions. For each lattice, we produced $10^9$ independent samples. Then the number of clusters greater than or equal to size $s$ could be found based on the data from our simulation, and the corresponding quantities, such as the local slope $((\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2)$, and $s^{\tau-2}P_{\geq s}$, could be easily calculated.
Figs.\ \ref{fig:sc-localslope-vs-s-omega} and \ref{fig:sc-s-tau-2-Ps-vs-s-omega}, respectively, show the plots of the local slope and $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$ for the SC lattice under different values of $p$. When $p$ is away from $p_{c}$, no matter if it is larger or smaller than $p_{c}$, the curves show a deviation from linearity. When $p$ is very near to $p_{c}$, we can see better linear behavior for large $s$. The linear behavior here is in good agreement with the theoretical predictions of Eqs.\ (\ref{staup}) and (\ref{localslope}).
Based on these simulation results, for bond percolation on the SC lattice in 4D, we conclude
~\\
SC:
$p_c = 0.1601312(2)$, $\tau = 2.3135(7)$, and $\Omega = 0.40(3)$.
~\\
Here numbers in parentheses represent errors in the last digit(s), determined from the observed statistical errors.
The simulation results for the other three lattices, i.e., the plots of the local slope and $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$ for the SC-NN+2NN, BCC, and FCC lattices under different values of $p$, are shown in Figs.\ \ref{fig:sc-NN2NN-localslope-vs-s-omega}, \ref{fig:sc-NN2NN-s-tau-2-Ps-vs-s-omega}, \ref{fig:bcc-localslope-vs-s-omega}, \ref{fig:bcc-s-tau-2-Ps-vs-s-omega}, \ref{fig:fcc-localslope-vs-s-omega}, and \ref{fig:fcc-s-tau-2-Ps-vs-s-omega}. From these figures, we can see behavior similar to that of the SC lattice. To avoid unnecessary repetition, we do not discuss the data one by one, and directly show the deduced values of $p_{c}$ and the two exponents below.
~\\
SC-NN+2NN:
$p_{c} = 0.035827(1)$, $\tau = 2.3138(12)$, and $\Omega = 0.40(3)$.
~\\
BCC:
$p_{c} = 0.074212(1)$, $\tau = 2.3133(9)$, and $\Omega = 0.41(3)$.
~\\
FCC:
$p_{c} = 0.049517(1)$, $\tau = 2.3135(9)$, and $\Omega = 0.41(3)$.
~\\
From these values, we have obtained precise estimates of the percolation threshold, and also confirmed the universality of the Fisher exponent $\tau$.
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{sc-localslope-vs-s-omega.pdf}
\caption{Plot of the local slope $(\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.40$} for the SC lattice under different values of $p$. \red{The solid line in the figure is a guideline through the data points for $p = 0.1601312 \approx p_c$. The intercept -0.3135 is an estimate for $2 - \tau$ by Eq.\ (\ref{localslope}).}}
\label{fig:sc-localslope-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{sc-s-tau-2-Ps-vs-s-omega.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.40$} for the SC lattice under different values of $p$. \red{The solid line in the figure is a guideline following the points for $p = 0.1601312 \approx p_c$.}}
\label{fig:sc-s-tau-2-Ps-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{sc-nn2nn-localslope-vs-s-omega.pdf}
\caption{Plot of the local slope $((\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2)$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.40$} for the SC-NN+2NN lattice under different values of $p$. \red{The solid line in the figure is a guideline through the data points for $p = 0.035827 \approx p_c$. The intercept -0.3137 is an estimate for $2 - \tau$ by Eq.\ (\ref{localslope}).}}
\label{fig:sc-NN2NN-localslope-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{sc-nn2nn-s-tau-2-Ps-vs-s-omega.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.40$} for the SC-NN+2NN lattice under different values of $p$. \red{The solid line in the figure is a guideline following the points for $p = 0.035827 \approx p_c$.}}
\label{fig:sc-NN2NN-s-tau-2-Ps-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{bcc-localslope-vs-s-omega.pdf}
\caption{Plot of the local slope $((\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2)$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.41$} for the BCC lattice under different values of $p$. \red{The solid line in the figure is a guideline through the data points for $p = 0.074212 \approx p_c$. The intercept -0.3134 is an estimate for $2 - \tau$ by Eq.\ (\ref{localslope}).} }
\label{fig:bcc-localslope-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{bcc-s-tau-2-Ps-vs-s-omega.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.41$} for the BCC lattice under different values of $p$. \red{The solid line in the figure is a guideline following the points for $p = 0.074212 \approx p_c$.}}
\label{fig:bcc-s-tau-2-Ps-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{fcc-localslope-vs-s-omega.pdf}
\caption{Plot of the local slope $((\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2)$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.41$} for the FCC lattice under different values of $p$. \red{The solid line in the figure is a guideline through the data points for $p = 0.049517 \approx p_c$. The intercept -0.3135 is an estimate for $2 - \tau$ by Eq.\ (\ref{localslope}).}}
\label{fig:fcc-localslope-vs-s-omega}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{fcc-s-tau-2-Ps-vs-s-omega.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$ \red{with $\Omega = 0.41$} for the FCC lattice under different values of $p$. \red{The solid line in the figure is a guideline following the points for $p = 0.049517 \approx p_c$.}}
\label{fig:fcc-s-tau-2-Ps-vs-s-omega}
\end{figure}
When the probability $p$ is away from $p_{c}$, a scaling function needs to be included. Then the behavior can be represented as
\begin{equation}
P_{\geq s} \sim A_2 s^{2-\tau} f(B_2(p-p_{c})s^{\sigma}),
\label{ps2}
\end{equation}
in the scaling limit of $s \rightarrow \infty$ and $p \rightarrow p_{c}$. The scaling function $f(x)$ can be expanded as a Taylor series,
\begin{equation}
f(B_2(p-p_{c})s^{\sigma}) \sim 1+C_2(p-p_{c})s^{\sigma}+ \cdots,
\label{scaling}
\end{equation}
where $C_2 = B_2 f'(0)$. We assume $f(0)=1$, so that $A_2 = A_1$.
Combining Eqs.\ (\ref{ps2}) and (\ref{scaling}) leads to
\begin{equation}
s^{\tau-2}P_{\geq s} \sim A_2+D_2(p-p_{c})s^{\sigma},
\label{vssigma}
\end{equation}
where $D_2=A_2 C_2$.
Eq.\ (\ref{vssigma}) predicts that $s^{\tau-2}P_{\geq s}$ will converge to a constant value at $p_{c}$ for large $s$, while it deviates from a constant value when $p$ is away from $p_{c}$. This provides another way to determine the percolation threshold. Figs.\ \ref{fig:sc-s-tau-2-Ps-vs-s-sigma}, \ref{fig:sc-NN2NN-s-tau-2-Ps-vs-s-sigma}, \ref{fig:bcc-s-tau-2-Ps-vs-s-sigma}, and \ref{fig:fcc-s-tau-2-Ps-vs-s-sigma} show the plots of $s^{\tau-2}P_{\geq s}$ versus $s^{\sigma}$ for the SC, SC-NN+2NN, BCC and FCC lattices, respectively. For these plots, we use the value $\sigma=0.4742$, which is provided in Ref.\ \cite{Gracey2015}. The estimates of the percolation thresholds are shown below, and they are consistent with the values obtained above.
~\\
SC: $p_{c} = 0.1601314(2)$.
~\\
SC-NN+2NN: $p_{c} = 0.035827(1)$.
~\\
BCC: $p_{c} = 0.074212(1)$.
~\\
FCC: $p_{c} = 0.049517(1)$.
~\\
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{sc-s-tau-2-Ps-vs-s-sigma.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{\sigma}$ \red{with $\sigma=0.4742$ and $\tau = 2.3135$} for the SC lattice under different values of $p$. \red{The dashed line in the figure is a guideline through the points for $p = 0.1601314 \approx p_c$}.}
\label{fig:sc-s-tau-2-Ps-vs-s-sigma}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{sc-nn2nn-s-tau-2-Ps-vs-s-sigma.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{\sigma}$ \red{with $\sigma=0.4742$ and $\tau = 2.3137$} for the SC-NN+2NN lattice under different values of $p$. \red{ The dashed line in the figure is a guideline through the points for $p = 0.035827 \approx p_c$}.}
\label{fig:sc-NN2NN-s-tau-2-Ps-vs-s-sigma}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{bcc-s-tau-2-Ps-vs-s-sigma.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{\sigma}$ \red{with $\sigma=0.4742$ and $\tau = 2.3134$} for the BCC lattice under different values of $p$. \red{The dashed line in the figure is a guideline through the points for $p = 0.074212 \approx p_c$}.}
\label{fig:bcc-s-tau-2-Ps-vs-s-sigma}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{fcc-s-tau-2-Ps-vs-s-sigma.pdf}
\caption{Plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{\sigma}$ \red{with $\sigma=0.4742$ and $\tau = 2.3135$} for the FCC lattice under different values of $p$. \red{The dashed line in the figure is a guideline through the points for $p = 0.049517 \approx p_c$.}}
\label{fig:fcc-s-tau-2-Ps-vs-s-sigma}
\end{figure}
Our final estimates of percolation thresholds for all the lattices calculated in this paper are summarized in Table \ref{tab:ept}, where we also make a comparison with those of previous studies where available. It can be seen that for the SC lattice, our result is completely consistent with the existing ones within the error range, including the recent more precise result of Mertens and Moore \cite{MertensMoore2018}.
\red{To find our results for $p_c$, $\tau$ and $\Omega$, we basically adjusted these values to get the best linear behavior on the two plots
of $(\ln P_{\geq 2s} - \ln P_{\geq s})/\ln 2$ vs.\ $s^{-\Omega}$ and $s^{\tau-2}P_{\geq s}$ vs.\ $s^{-\Omega}$, and horizontal asymptotic behavior on the plot of $s^{\tau-2}P_{\geq s}$ vs.\ $s^{\sigma}$, for each of the four lattices. With the incorrect value of $\Omega$, for example, we would not get linear behavior over several orders of magnitude of $s$ for any value of $p$. The curves in the latter plots
were not overly sensitive to $\sigma$ so we used the recent value
$\sigma=0.4742$ \cite{Gracey2015}.}
For the BCC and FCC lattices, we find significantly more precise values of $p_{c}$ than van der Marck \cite{vanderMarck98}, who gave only two digits of accuracy. In addition, we give for the first time a value of $p_{c}$ for the SC-NN+2NN lattice, which was not studied before for bond percolation.
\begin{table}[htb]
\caption{Estimations of bond percolation thresholds for the 4D percolation models.}
\begin{tabular}{c|c|c|c}
\hline\hline
lattice & $z$ & $p_{c}$ (present) & $p_{c}$ (previous) \\ \hline
SC & 8 & 0.1601312(2) & 0.16005(15) \cite{AdlerMeirAharonyHarrisKlein90}\\
& & & 0.160130(3) \cite{PaulZiffStanley2001}\\
& & & 0.1601314(13) \cite{Grassberger03}\\
& & & 0.1601310(10) \cite{DammerHinrichsen2004}\\
& & & 0.16013122(6) \cite{MertensMoore2018}\\
BCC & 16 & 0.074212(1) & 0.074(1) \cite{vanderMarck98} \\
FCC & 24 & 0.049517(1) & 0.049(1) \cite{vanderMarck98} \\
SC-NN+2NN & 32 & 0.035827(1) & ----- \\
\hline\hline
\end{tabular}
\label{tab:ept}
\end{table}
Table \ref{tab:ept} also shows the coordination number $z$ for each lattice. The values of $p_{c}$ decrease with the coordination number $z$ as one would expect.
Finding correlations between percolation thresholds and lattice properties has a long history in percolation studies \cite{ScherZallen70,vanderMarck97,Wierman02,WiermanNaor05}. In Ref.\ \cite{KurzawskiMalarz2012} it was found that the site thresholds for several 3D lattices can be fitted by a simple power-law in the coordination number $z$
\begin{equation}
p_{c}(z) \sim z^{-a},
\label{eq:scaling}
\end{equation}
with $a = 0.790(26)$ in 3D. Similar power-law relations for various systems were studied by Galam and Mauger \cite{GalamMauger96}, van der Marck \cite{vanderMarck98}, and others, usually in terms of $(z-1)^{-a}$ rather than $z^{-a}$. Making a log-log plot of the 4D data of Table \ref{tab:ept}, along with the bond threshold $p_c = 0.2715(3)$ for the 4D diamond lattice \cite{vanderMarck98}, which has coordination number $z=5$, in Fig.\ \ref{fig:ln-pc-z-gamma}, we find $a = 1.087$. Deviations of the thresholds from this line are within about 2\%. We note that the data for site percolation thresholds of these lattices, taken from \cite{vanderMarck98,KotwicaGronekMalarz19,MertensMoore2018}, do not show such a nice linear behavior as do the bond thresholds, as shown in Fig.\ \ref{fig:ln-pc-z-gamma}. \red{We do not know any reason for this excellent power-law dependence of the bond thresholds, nor why the exponent has the value of approximately 1.087.}
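\red{This fit is straightforward to reproduce. The short Python sketch below (our illustration, using the bond thresholds of Table \ref{tab:ept} together with the diamond-lattice value of Ref.\ \cite{vanderMarck98}) performs the least-squares fit of $\ln p_c$ against $\ln z$ and prints $a \approx 1.087$ and the intercept $\approx 0.435$:}
\begin{verbatim}
import math
z  = [5, 8, 16, 24, 32]   # diamond, SC, BCC, FCC, SC-NN+2NN
pc = [0.2715, 0.1601312, 0.074212, 0.049517, 0.035827]
X = [math.log(v) for v in z]
Y = [math.log(v) for v in pc]
n = len(X)
mx, my = sum(X) / n, sum(Y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(X, Y))
         / sum((x - mx) ** 2 for x in X))
print(-slope, my - slope * mx)   # a ~ 1.087, intercept ~ 0.435
\end{verbatim}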
\begin{figure}[htbp]
\centering
\includegraphics[width=3.8in]{ln-pc-z-gamma.pdf}
\caption{A log-log plot of percolation thresholds $p_{c}$ vs.\ coordination number $z$ for the lattices simulated in this paper (square symbols) and the diamond lattice (circle) provided in Ref.\ \cite{vanderMarck98}. The slope gives an exponent of $a = 1.087$ in Eq.\ (\ref{eq:scaling}), and the intercept of the line is at $\ln p_c = 0.435$. Also shown on the plot are the site thresholds for the same five
lattices (triangles) \cite{vanderMarck98,KotwicaGronekMalarz19,MertensMoore2018}, in which case the linearity of the data is not nearly as good.}
\label{fig:ln-pc-z-gamma}
\end{figure}
\section{Conclusions}
In this paper, by employing the single-cluster growth algorithm, bond percolation on SC, SC-NN+2NN, BCC, and FCC lattices in 4D was investigated. The algorithm allowed us to estimate the percolation thresholds with high precision with a moderate amount of calculation. For the BCC and FCC lattices, our results are about three orders of magnitude more precise than previous values, and for SC-NN+2NN lattice, we find a value of the bond percolation threshold for the first time. In addition, the results indicate that the percolation thresholds $p_{c}$ decrease monotonically with the coordination number $z$, quite accurately according to a power law of $p_{c} \sim z^{-a}$, with the exponent $a = 1.087$.
There remain many lattices where thresholds are not known, or where they are known only to \red{low significance, such as two or three digits}, and the methods described here can be used to find them with high accuracy in a straightforward manner. For example, the bond thresholds on the many complex neighborhood lattices of Malarz and co-workers have not been determined before, and knowing these thresholds may be useful for various applications.
Another result of this paper was a precise measurement of the exponent $\tau$, which we were able to do using the finite-size scaling behavior of Eq.\ (\ref{localslope}), which requires the knowledge of $\Omega$ although the results for $\tau$ are not very sensitive to the precise value of $\Omega$. Averaging the results over the four lattices, we find $\tau = 2.3135(5)$. This is consistent with previous Monte Carlo values of $2.3127(6)$ \cite{BallesterosEtAl97}, 2.313(3) \cite{PaulZiffStanley2001}, 2.313(2) \cite{Tiggemann01}, the recent Monte Carlo result of Mertens and Moore, 2.3142(5) \cite{MertensMoore2018}, and also close to the recent four-loop series result 2.3124 of Gracey \cite{Gracey2015}. In concurrent work, Deng et al.\ find that the fractal dimension in 4D equals $d_f = 3.0446(7)$, which implies by the scaling relation $\tau = 1 + d/d_f = 2.3138(3)$ \cite{DengEtAl2019}. Our value 2.3135(5) is a good average of all these measurements.
We have also found a fairly accurate value of the corrections-to-scaling exponent $\Omega$, with the result $0.40(3)$, which also gives a value of $\omega = \Omega d_f = 1.22(9)$. We determined $\Omega$ by adjusting its value until we found a straight line in plots like Figs.\ \ref{fig:sc-localslope-vs-s-omega} and \ref{fig:sc-s-tau-2-Ps-vs-s-omega} --- while simultaneously trying to find $p_c$ and $\tau$. Having three different kinds of plots for each lattice helped in this simultaneous determination of these three parameters. Previous Monte-Carlo values of $\Omega$ were 0.31(5) \cite{AdlerMeirAharonyHarris90}, 0.37(4) \cite{BallesterosEtAl97}, and 0.5(1) \cite{Tiggemann01}. In Ref.\ \cite{Gracey2015}, Gracey gives the series extrapolation of $\Omega = 0.4008$ \cite{Gracey2015}, which was based upon a Pad\'e approximation assuming the value of $\Omega = 2$ for 2D. Redoing that calculation using $\Omega = 72/91$ (2D) from Refs.\ \cite{AharonyAsikainen03,Ziff11b}, Gracey finds $\Omega = 0.3773$ \cite{Gracey19}. Both of these values (0.4008 and 0.3773) are consistent with our result of $\Omega = 0.40(3)$.
In forthcoming papers \cite{XunZiff20,XunZiff20b} the authors will report on a study of many 3D lattices with complex neighborhoods for both site and bond percolation.
\section{Acknowledgments}
We are grateful to the Advanced Analysis and Computation Center of the China University of Mining and Technology for the award of CPU hours to accomplish this work. This work is supported by the China Scholarship Council Project (Grant No. 201806425025) and the National Natural Science Foundation of China (Grant No.\ 51704293).
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
Porous structures are widely found in natural objects, such as trabecular
bones, wood, and cork,
which have many appealing properties,
such as low weight and large internal surface area.
In the field of tissue engineering,
which aims to repair damaged tissues and organs,
porous scaffolds play a critical role in the formation of new functional tissues for medical purposes, i.e.,
they provide an optimum biological microenvironment for cell attachment, migration, nutrient delivery and product expression~\cite{Hutmacher2000Scaffolds}.
To facilitate cell growth and diffusion of both cells and nutrients
throughout the whole structure,
high porosity, adequate pore size and connectivity are key requirements
in the design of porous scaffolds~\cite{Starly2003Computer}.
Therefore, it is important to be able to control the pore size
and porosity when designing heterogeneous tissue scaffolds.
Recently, triply periodic minimal surfaces (TPMSs) have been widely employed in the design of porous scaffolds~\cite{Y2018Gyroid}.
A TPMS is a type of minimal surface with periodicity in three independent
directions in three-dimensional Euclidean space and is represented by an implicit equation~\cite{Schnering1991Nodal}.
Generally speaking, porous scaffold design methods based on TPMS can be classified into two categories.
In the first class of methods,
a volume mesh model is embedded in an ambient TPMS,
and the intersection of them is taken as the porous scaffold~\cite{Yoo2011Porous,Yoo2012Heterogeneous,Yang2014Effective,Feng2018Porous}.
In the second class of methods,
a regular TPMS unit is transformed into each hexahedron of a hexahedron mesh model,
thus generating a porous scaffold~\cite{Yoo2011Computer,Chen2018Porous,Shi2018A}.
However, the first class of methods can generate incomplete TPMS units near
the boundary of a volume mesh model,
and the second class of methods may cause discontinuities between two adjacent TPMS units.
Moreover, in both method classes,
porosity and pore size are difficult to control.
In this study, we developed a method for generating heterogeneous porous
scaffolds in a trivariate B-spline solid (TBSS) with TPMS designed in the parametric domain of the TBSS.
We also developed a porous scaffold storage format that saves significant storage space.
Specifically, given a TBSS,
a threshold distribution field (TDF) is first constructed in the cubic parameter domain of the TBSS.
Based on the TDF,
a TPMS is generated in the parameter domain.
Finally, by mapping the TPMS in the parameter domain to the TBSS,
a porous scaffold is produced in the TBSS.
In addition, the TDF can be modified locally in the parameter domain to
improve the engineering performance of the porous scaffold.
All of the TPMS units generated in the porous scaffold are complete,
and adjacent TPMS units are continuously stitched.
To summarize, the main contributions of this study are as follows:
\begin{itemize}
\item A TBSS is employed to generate porous scaffolds,
which ensures completeness of TPMS units,
and continuity between adjacent TPMS units.
\item Porosity is easy to control using the TDF defined in the parametric domain of the TBSS.
\item A storage format for porous scaffolds is designed based on the TDF in the parametric domain,
which saves significant storage space.
\end{itemize}
The remainder of this paper is organized as follows.
In Section~\ref{sec:related_work}, we review related work on
porous scaffold design and TBSS generation methods.
In Section~\ref{sec:theories},
preliminaries on TBSS and TPMS are introduced.
Moreover,
the heterogeneous porous scaffold generation method using a TBSS and TDF in its parameter domain is presented in detail in Section~\ref{sec:method}.
In Section~\ref{sec:implementations},
some experimental examples are presented to demonstrate the effectiveness and efficiency of the developed method.
Finally, Section~\ref{sec:conclusion} concludes the paper.
\subsection{Related work}
\label{sec:related_work}
In this section, we review some related work on porous scaffold design and TBSS
generation methods.
\textbf{Porous scaffold design:}
In recent years,
TPMS has been of special interest to the porous scaffold design community owing to its excellent properties,
and many scaffold design methods have been developed based on TPMS.
Rajagopalan and Robb~\cite{Rajagopalan2006Schwarz} made the first attempt to design tissue scaffolds based on Schwarz's primitive minimal surface, which is a type of TPMS.
Moreover, two other typical TPMSs (Schwarz's diamond surface and Schoen's gyroid surface) were constructed by employing the K3DSurf software to design tissue scaffolds~\cite{Melchels2010Mathematically}; a gradient change of the gyroid structure in terms of pore size was achieved by adding a term linear in $z$ to the TPMS function.
Yoo~\cite{Yoo2011Computer} developed a method for generating porous
scaffolds in a hexahedral mesh model,
where the coordinate interpolation algorithm and shape function method are employed to map TPMS units to hexahedron elements to generate tissue scaffolds.
To reduce the time consumed in the trimming and re-meshing processes of Boolean operations, a tissue scaffold design method based on a hybrid of distance fields and TPMS was further proposed in~\cite{Yoo2011Porous}.
Moreover, to make the porosity easier to control in the design of a heterogeneous porous scaffold,
Yoo~\cite{Yoo2012Heterogeneous} introduced a method based on an implicit interpolation algorithm that uses the thin-plate radial basis function.
Similar to the method of Yoo~\cite{Yoo2012Heterogeneous},
Yang et al.~\cite{Yang2014Effective} introduced the sigmoid function and Gaussian radial basis function to design tissue scaffolds.
However, hexahedral-mesh-based porous scaffold generation methods cannot
ensure continuity between adjacent TPMS units.
Recently, in consideration of the increasing attention towards gradient porous scaffolds,
Shi et al.~\cite{Shi2018A} utilized the TPMS and sigmoid function to generate functional gradient bionic porous scaffolds from Micro-CT data reconstruction.
Feng et al.~\cite{Feng2018Porous} proposed a method to design porous
scaffold based on solid T-splines and TPMS,
and analyzed the parameter influences on the volume specific surface area and porosity.
In addition, a heterogenous methodology for modeling porous scaffolds using a parameterized hexahedral mesh and TPMS was developed by Chen et al.~\cite{Chen2018Porous}.
\textbf{TBSS generation:}
TBSS modeling methods have been developed mainly for producing three-dimensional physical domains in isogeometric analysis~\cite{Hughes2005Isogeometric}.
Specifically, to analyze arterial blood flow through isogeometric analysis,
Zhang et al.~\cite{zhang2007patient} introduced a skeleton-based method of generating trivariate non-uniform rational B-spline (NURBS) solids.
In~\cite{martin2009volumetric}, a tetrahedral mesh model is parameterized
through discrete volumetric harmonic functions and a cylinder-like TBSS is generated.
Aigner et al.~\cite{Aigner2009Swept} proposed a variational framework for
generating NURBS parameterizations of swept volumes using the given boundary conditions and guiding curves.
Optimization approaches have been developed for filling
boundary-represented models to produce TBSSs with positive Jacobian values~\cite{Wang2014An}.
Moreover, a discrete volume parameterization method for tetrahedral mesh
models and an iterative fitting algorithm have been presented for TBSS generation~\cite{Lin2015Constructing}.
\section{Preliminaries }
\label{sec:theories}
Preliminaries on TBSS and TPMS are introduced in this section.
\subsection{TBSS}
\label{subsec:tbss}
A B-spline curve of order $p+1$ is formed by several piecewise polynomial curves of degree $p$, and has $C^{p-1}$ continuity at a breakpoint (knot) of multiplicity one~\cite{Piegl1997The}.
A knot vector $U=\{u_0,u_1,\ldots,u_{m+p+1}\}$ is defined by a set of breakpoints $u_0 \leq u_1 \leq \cdots \leq u_{m+p+1}$.
The associated B-spline basis functions $N_{i,p}(u)$ of degree $p$ are
defined as follows:
\begin{equation}
\label{eq:basisfunction}
\begin{aligned}
&N_{i,0}(u) =
\begin{cases}
1, & \text{for } u_i \leq u < u_{i+1}, \\
0, & \text{otherwise},
\end{cases}\\
&N_{i,p}(u)=\frac{u-u_i}{u_{i+p}-u_i}N_{i,p-1}(u)+\frac{u_{i+p+1}-u}{u_{i+p+1}-u_{i+1}}N_{i+1,p-1}(u),
\end{aligned}
\end{equation}
where the convention $0/0=0$ is adopted whenever a knot interval degenerates.
A TBSS of degree $(p,q,r)$ is a tensor product volume defined as
\begin{equation}
\label{eq:BsplineSolid}
P(u,v,w)=\sum_{i=0}^{m}\sum_{j=0}^{n}\sum_{k=0}^{l}N_{i,p}(u)N_{j,q}(v)N_{k,r}(w)P_{ijk}
\end{equation}
where $P_{ijk},\ i=0,1,\cdots,m,\ j=0,1,\cdots,n,\ k=0,1,\cdots,l$ are control points in the $u$, $v$ and $w$ directions and
$$N_{i,p}(u),N_{j,q}(v),N_{k,r}(w)$$
are the B-spline basis functions of degree $p$ in the $u$ direction, degree $q$ in the $v$ direction, and degree $r$ in the $w$ direction.
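For illustration, Eqs.~\pref{eq:basisfunction} and \pref{eq:BsplineSolid} translate directly into code. The following minimal Python sketch (added for clarity; the function names are ours, and no efficiency device such as the local support of the basis functions is exploited) evaluates a point of a TBSS:
\begin{verbatim}
def N(i, p, u, U):
    # Basis function N_{i,p}(u) over knot vector U (0/0 := 0).
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    a = 0.0 if U[i + p] == U[i] else \
        (u - U[i]) / (U[i + p] - U[i]) * N(i, p - 1, u, U)
    b = 0.0 if U[i + p + 1] == U[i + 1] else \
        (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) \
        * N(i + 1, p - 1, u, U)
    return a + b

def tbss_point(u, v, w, P, U, V, Wk, p, q, r):
    # P[i][j][k] is a control point (x, y, z); returns P(u, v, w).
    x, y, z = 0.0, 0.0, 0.0
    for i in range(len(P)):
        for j in range(len(P[0])):
            for k in range(len(P[0][0])):
                b = N(i, p, u, U) * N(j, q, v, V) * N(k, r, w, Wk)
                x += b * P[i][j][k][0]
                y += b * P[i][j][k][1]
                z += b * P[i][j][k][2]
    return (x, y, z)
\end{verbatim}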
In this study,
the input to our porous scaffold generation algorithm is a TBSS
that represents geometry at a macro-structural scale.
The TBSS can be generated either by fitting the mesh vertices of a
tetrahedral mesh model~\cite{Lin2015Constructing},
or by filling a closed triangular mesh model~\cite{Wang2014An}.
\subsection{TPMS}
\label{subsec:tpms}
A TPMS is an implicit surface that is infinite and periodic in three independent directions.
There are several ways to evaluate a TPMS,
and the most frequently employed approach is to approximate the TPMS using a periodic nodal surface defined by a Fourier series~\cite{Gandy2001Nodal},
\begin{equation}
\label{eq:TPMS}
\psi(r)=\sum_kA_kcos[2\pi (\bm{h}_k\cdot \bm{r})/\lambda_k - P_k]=C,
\end{equation}
where $\bm{r}$ is the location vector in Euclidean space,
$A_k$ is amplitude,
$\bm{h}_k$ is the $k^{th}$ lattice vector in reciprocal space,
$\lambda_k$ is the wavelength of the period,
$P_k$ is phase shift,
and $C$ is a threshold constant.
Please refer to~\cite{Yan2007Periodic} for more details on the abovementioned parameters.
The nodal approximations of P, D, G, and I-WP types of TPMSs,
which were presented in ~\cite{Schnering1991Nodal},
are listed in Table~\ref{tbl:nodal}.
The valid range of $C$ guarantees that the implicit surface is complete.
\begin{table*}[!htb]
\centering
\caption{Nodal approximations of typical TPMS units.}
\label{tbl:nodal}
\begin{tabular}{ l|l|c }
\hline
TPMS & Nodal approximations & Valid range of $C$\\
\hline
Schwarz's P Surface & $\psi_P(x,y,z)=\cos(\omega_x x)+\cos(\omega_y y)+\cos(\omega_z z)=C$ & $[-0.8,0.8]$\\
Schwarz's D Surface & $\psi_D(x,y,z)=\cos(\omega_x x)\cos(\omega_y y)\cos(\omega_z z)-\sin(\omega_x x)\sin(\omega_y y)\sin(\omega_z z)=C$ & $[-0.6,0.6]$\\
Schoen's G Surface & $\psi_G(x,y,z)=\sin(\omega_x x)\cos(\omega_y y)+\sin(\omega_y y)\cos(\omega_z z)+\sin(\omega_z z)\cos(\omega_x x)=C$ & $[-0.8,0.8]$\\
Schoen's I-WP Surface & \tabincell{l}{$\psi_{I-WP}(x,y,z)=2[\cos(\omega_x x)\cos(\omega_y y)+\cos(\omega_y y)\cos(\omega_z z)+\cos(\omega_z z)\cos(\omega_x x)]$ \\$-[\cos(2\omega_x x)+\cos(2\omega_y y)+\cos(2\omega_z z)]=C$} & $[-2.0,2.0]$\\
\hline
\end{tabular}
\end{table*}
In TPMS-based porous scaffold design methods,
the threshold value $C$~\pref{eq:TPMS} controls the porosity,
and the coefficients $\omega_x$, $\omega_y$, and $\omega_z$ (refer to Table~\ref{tbl:nodal}), which affect the period of the TPMS,
are called \emph{period coefficients}.
The effects of the two types of parameters in porous scaffold design
have been discussed in detail in the literature~\cite{Feng2018Porous}.
Additionally, in this study, the marching tetrahedra (MT) algorithm~\cite{Doi1991An} is
employed to extract the TPMS (shown in Fig.~\ref{fig:TPMSunits}).
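For illustration, the nodal approximations in Table~\ref{tbl:nodal} are inexpensive to evaluate at the sampling points.
A C++ sketch for the P-type and G-type surfaces is given below (the other two types follow the same pattern):
\begin{verbatim}
#include <cmath>

// Nodal approximation of Schwarz's P surface; wx, wy, wz are
// the period coefficients.
double PsiP(double x, double y, double z,
            double wx, double wy, double wz)
{
    return std::cos(wx * x) + std::cos(wy * y) + std::cos(wz * z);
}

// Nodal approximation of Schoen's G surface.
double PsiG(double x, double y, double z,
            double wx, double wy, double wz)
{
    return std::sin(wx * x) * std::cos(wy * y)
         + std::sin(wy * y) * std::cos(wz * z)
         + std::sin(wz * z) * std::cos(wx * x);
}
\end{verbatim}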
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:Psurface}
\includegraphics[width=0.18\textwidth]{Psurface-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Dsurface}
\includegraphics[width=0.18\textwidth]{Dsurface-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Gsurface}
\includegraphics[width=0.18\textwidth]{Gsurface-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:IWPsurface}
\includegraphics[width=0.18\textwidth]{IWPsurface-eps-converted-to.pdf}}
\caption
{
\small
Four types of TPMS units.
(a) P-type. (b) D-type. (c) G-type. (d) I-WP-type.
}
\label{fig:TPMSunits}
\end{center}
\end{figure*}
\section{Methodology of porous scaffold design}
\label{sec:method}
The whole procedure of the heterogeneous porous scaffold generation method
based on TBSS and TPMS in its parametric domain is illustrated in Fig.~\ref{fig:procedure}.
Specifically,
given a TBSS as input,
we design a method for constructing the TDF in its cubic parameter domain.
Based on the TDF, a TPMS is generated in the parameter domain using
the MT algorithm~\cite{Doi1991An}.
Moreover, by mapping the TPMS in the parameter domain to the TBSS by
the TBSS function~\pref{eq:BsplineSolid},
a porous scaffold with completeness and continuity is produced in the TBSS.
If the porous scaffold in the TBSS does not meet the engineering
requirements,
the TDF can be locally modified in the parameter domain,
and the porous scaffold in the TBSS can be rebuilt.
Finally, we develop a storage format for the porous scaffold based on the TDF
in the parametric domain,
which saves significant storage space.
The details of the porous scaffold design method are elucidated in
the following sections.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{porous_scaffold-eps-converted-to.pdf}\\
\caption{\small The procedure of the heterogeneous porous scaffold generation method.}
\label{fig:procedure}
\end{figure}
\subsection{Generation of TPMS and volume TPMS structures}
\label{subsec:tpms_generation}
After the TDF has been constructed in the parameter domain
and the period coefficients have been assigned,
the TPMS $\psi = C$ in the parameter domain is defined and can be polygonized into a triangular mesh.
As a type of iso-surface, a TPMS can be polygonized by many algorithms
such as iterative refinement~\cite{Rajagopalan2006Schwarz}, Delaunay triangulation~\cite{George1998Delaunay},
marching cubes~\cite{Lorensen1987Marching} and MT~\cite{Doi1991An}.
To avoid ambiguous connection problems~\cite{Newman2006A}
and simplify the intersection types,
the MT algorithm is adopted to polygonize the TPMS in the parametric domain of a TBSS.
For this purpose, the cubic parameter domain is uniformly divided into
a grid.
In our implementation,
to balance the accuracy and storage cost of the porous scaffold,
the parametric domain is divided into a $100\times100\times100$ grid.
Moreover, we define three types of \emph{volume TPMS structures}
(refer to Fig.~\ref{fig:Pstructure}):
\begin{itemize}
\item {\it pore structure} represented by $\psi \geq C$,
\item {\it rod structure} represented by $\psi \leq C$,
\item {\it sheet structure} represented by $C - \epsilon \leq \psi \leq C$.
\end{itemize}
However, the triangular meshes of the three types of volume TPMS structures,
generated by the polygonization,
are open on the six boundary faces of the parameter domain,
but they should be closed to form a solid.
Take the pore structure ($\psi \geq C$) as an example.
In the polygonization procedure of the TPMS $\psi=C$ by the MT algorithm,
the triangles on the boundary faces of the parameter domain are categorized into two classes
by the iso-value curve $\psi = C$ on the boundary faces:
outside triangles, where the values of $\psi$ at the vertices of these triangles are larger than or equal to $C$,
and inside triangles, where the values of $\psi$ at the vertices of these triangles are smaller than or equal to $C$.
Therefore, the pore structure can be closed by adding the outside
triangles into the triangular mesh generated by polygonizing $\psi=C$.
The other two types of volume TPMS structures can be closed in
similar ways.
In Fig.~\ref{fig:Pstructure}, the three types of volume TPMS structures
are illustrated.
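The classification of the boundary-face triangles can be sketched as follows (a minimal C++ fragment with illustrative types; the MT polygonization itself is omitted):
\begin{verbatim}
// A triangle on a boundary face of the parameter domain is an
// "outside" triangle if psi >= C at all three vertices; such
// triangles are appended to the mesh to close the pore structure.
struct BoundaryTriangle { double psi[3]; };

bool IsOutsideTriangle(const BoundaryTriangle& t, double C)
{
    return t.psi[0] >= C && t.psi[1] >= C && t.psi[2] >= C;
}
\end{verbatim}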
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:Ppore_units}
\includegraphics[width=0.18\textwidth]{Ppore_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Dpore_units}
\includegraphics[width=0.18\textwidth]{Dpore_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Gpore_units}
\includegraphics[width=0.18\textwidth]{Gpore_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:IWPpore_units}
\includegraphics[width=0.18\textwidth]{IWPpore_units-eps-converted-to.pdf}}
\\
\subfigure[]{
\label{subfig:Prod_units}
\includegraphics[width=0.18\textwidth]{Prod_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Drod_units}
\includegraphics[width=0.18\textwidth]{Drod_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Grod_units}
\includegraphics[width=0.18\textwidth]{Grod_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:IWProd_units}
\includegraphics[width=0.18\textwidth]{IWProd_units-eps-converted-to.pdf}}
\\
\subfigure[]{
\label{subfig:Psheet_units}
\includegraphics[width=0.18\textwidth]{Psheet_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Dsheet_units}
\includegraphics[width=0.18\textwidth]{Dsheet_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:Gsheeet_units}
\includegraphics[width=0.18\textwidth]{Gsheet_units-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:IWPsheet_units}
\includegraphics[width=0.18\textwidth]{IWPsheet_units-eps-converted-to.pdf}}
\caption
{\small
Three types of volume TPMS structures for the four types of TPMS units.
(a)(b)(c)(d) Pore structures for the P-type, D-type, G-type and I-WP-type TPMSs.
(e)(f)(g)(h) Rod structures for the P-type, D-type, G-type and I-WP-type TPMSs.
(i)(j)(k)(l) Sheet structures for the P-type, D-type, G-type and I-WP-type TPMSs.
}
\label{fig:Pstructure}
\end{center}
\end{figure*}
\subsection{TDF construction}
\label{subsec:tdf_construction}
The porosity is an important parameter in porous scaffold design because it directly influences the transport of nutrients and waste.
The porosity and pore size of porous scaffolds designed by TPMS units can be
controlled by adjusting the threshold $C$ (see Table~\ref{tbl:nodal}).
Moreover, the relationship between the porosity and the threshold $C$ is
illustrated in Figs.~\ref{fig:pore_relationship}-\ref{fig:sheet_relationship}.
We can see that each of the three types of volume TPMS structures,
i.e., pore, rod, and sheet,
has a \emph{valid threshold range}; these ranges are listed in Table~\ref{tbl:nodal}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{pore_relationship-eps-converted-to.pdf}\\
\caption{\small Relationship between the threshold $C$ and the porosity of the four types of TPMSs based on pore structures.}
\label{fig:pore_relationship}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{rod_relationship-eps-converted-to.pdf}\\
\caption{\small Relationship between the threshold $C$ and the porosity of the four types of TPMSs based on rod structures.}
\label{fig:rod_relationship}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{sheet_relationship-eps-converted-to.pdf}\\
\caption{\small Relationship between the threshold $C$ and the porosity of the four types of TPMSs based on sheet structures.}
\label{fig:sheet_relationship}
\end{figure}
To design heterogeneous porous scaffolds,
we change the threshold $C$ to a trivariate function $C(u,v,w)$ defined on the parametric domain of a TBSS,
which is called the \emph{threshold distribution field} (TDF).
Then, the \emph{TPMS in the parameter domain} is represented by the
zero-level surface of
\begin{equation}
\label{eq:combination}
f(u,v,w)=\psi(u,v,w)-C(u,v,w)=0.
\end{equation}
Therefore, the TDF plays a critical role in the heterogeneous
porous scaffold generation,
and designing the TDF becomes a key problem in porous scaffold design.
In this study, we developed some convenient techniques for generating a TDF
in the cubic parameter domain of a TBSS.
In brief, the cubic parameter domain of a TBSS is first discretized
into a dense grid (in our implementation, it is discretized into a grid with a resolution of $50 \times 50 \times 50$),
called a \emph{parametric grid}.
Then, the threshold values at the grid vertices are assigned using the techniques presented later in this paper,
which constitute a \emph{discrete TDF}.
Finally, to save storage space,
the discrete TDF is fitted by a trivariate B-spline function, i.e.,
\begin{equation}
\label{eq:distribution}
C(u,v,w)=\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\sum_{k=0}^{n_w}
N_{i,p}(u)N_{j,q}(v)N_{k,r}(w)C_{ijk},
\end{equation}
where the scalars $C_{ijk}$ are the control points of the trivariate
B-spline function.
Now, we elucidate the techniques for generating the discrete TDF.
\emph{Filling method.}
Initially, all of the values at the parameter grid vertices are set to $0$.
Then, the geometric quantities at points of the boundary surface of the TBSS,
such as the mean curvature and Gaussian curvature,
are calculated and mapped to the boundary vertices of the parametric grid.
Furthermore, the quantities at the boundary vertices are diffused into
the inner parametric grid vertices by the Laplace smoothing operation~\cite{Field1988Laplacian}.
Thus, the entire parametric grid is filled,
and a discrete TDF is constructed.
In Fig.~\ref{fig:curvature_distribution},
the mean curvature distribution of the TBSS boundary surface is first calculated (Fig.~\ref{subfig:curvature_model}) and then is mapped to the boundary vertices of the parametric grid (Fig.~\ref{subfig:curvature_on_grid}).
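The diffusion step can be sketched as a Jacobi-style Laplace smoothing on the parametric grid, where the boundary values are held fixed and each interior value is repeatedly replaced by the average of its six neighbors (a minimal C++ sketch with illustrative names):
\begin{verbatim}
#include <vector>

// f stores the grid values in row-major order for a res^3 grid;
// boundary vertices keep their values, interior vertices are
// replaced by the average of their six neighbors.
void LaplaceSmooth(std::vector<double>& f, int res, int iterations)
{
    auto id = [res](int i, int j, int k)
    { return (i * res + j) * res + k; };
    std::vector<double> g = f;
    for (int it = 0; it < iterations; ++it)
    {
        for (int i = 1; i < res - 1; ++i)
          for (int j = 1; j < res - 1; ++j)
            for (int k = 1; k < res - 1; ++k)
                g[id(i, j, k)] =
                    (f[id(i - 1, j, k)] + f[id(i + 1, j, k)] +
                     f[id(i, j - 1, k)] + f[id(i, j + 1, k)] +
                     f[id(i, j, k - 1)] + f[id(i, j, k + 1)]) / 6.0;
        f.swap(g);
    }
}
\end{verbatim}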
\begin{figure}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:curvature_model}
\includegraphics[width=0.2\textwidth]{curvature_model-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:curvature_on_grid}
\includegraphics[width=0.22\textwidth]{porosity_distribution_balljoint-eps-converted-to.pdf}}
\caption
{\small
Filling method.
(a) Mean curvature distribution on the TBSS boundary surface.
(b) Mean curvature distribution on the boundary vertices of the parametric grid.
}
\label{fig:curvature_distribution}
\end{center}
\end{figure}
\emph{Layer method.} The parametric grid vertices are classified into layers
according to their coordinates $(u_i,v_j,w_k)$.
For example, the vertices with the same $w_k$ coordinates can be classified
into the same layer.
Vertices of the same layer are assigned the same threshold values.
The TDFs in Figs.~\ref{subfig:radial_distribution} and~\ref{subfig:layer_distribution} were generated using the layer method.
In Fig.~\ref{subfig:radial_distribution},
the vertices of the four side surfaces are taken as the first layer,
the vertices adjacent to the first layer are taken as the second layer,
and so on.
In Fig.~\ref{subfig:layer_distribution},
the vertices with the same $w_k$ coordinates are taken as the same layer.
\emph{Prescribed function method.}
The threshold values at the parametric grid vertices can be assigned by a
function prescribed by users.
For example, in Fig.~\ref{subfig:porosity_distribution_isis},
the threshold value at the vertex with coordinates $(u_i,v_j,w_k)$ is assigned by the function,
\begin{equation*}
f(u_i,v_j,w_k) = \abs{u_i-v_j} + \abs{v_j-w_k} + \abs{u_i-w_k}.
\end{equation*}
After the discrete TDF is generated,
the values at the grid vertices are linearly transformed into the valid threshold range according to the type of volume TPMS structure being produced.
Then, we fit the discrete TDF with a trivariate
B-spline function~\pref{eq:distribution},
using the least squares progressive-iteration approximation (LSPIA) method~\cite{Deng2014Progressive}.
The subscripts of the grid vertices, i.e., $(i,j,k)$,
are the natural parametrization of the vertices.
For the purpose of B-spline fitting,
they are normalized into the interval $[0,1] \times [0,1] \times [0,1]$.
The knot vectors of the B-spline function~\pref{eq:distribution} are
uniformly defined under the B\'{e}zier end condition.
In our implementation, the resolution of the control grid of the trivariate
B-spline function is taken as $20 \times 20 \times 20$.
Additionally, the initial values for LSPIA iteration at the control grid
of the B-spline function~\pref{eq:distribution} are produced by linear interpolation of the discrete TDF.
Suppose the LSPIA iteration has been performed for $l$ steps,
and the $l^{th}$ B-spline function $C^{(l)}(u,v,w)$ is constructed:
\begin{equation}
\label{eq:k_iteration}
C^{(l)}(u,v,w)=\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\sum_{k=0}^{n_w}
N_{i,p}(u)N_{j,q}(v)N_{k,r}(w)C^{(l)}_{ijk}.
\end{equation}
To generate the $(l+1)^{th}$ B-spline function $C^{(l+1)}(u,v,w)$,
the difference vector for each parametric grid vertex is calculated,
\begin{equation}
\label{eq:diff_data}
\delta^{(l)}_{\alpha,\beta,\gamma} = T_{\alpha,\beta,\gamma}-C^{(l)}(u_{\alpha},v_{\beta},w_{\gamma}),
\end{equation}
where $T_{\alpha,\beta,\gamma}$ is the threshold value at the vertex
$(\alpha,\beta,\gamma)$,
and $(u_{\alpha},v_{\beta},w_{\gamma})$ are its parameters.
Each difference vector $\delta^{(l)}_{\alpha,\beta,\gamma}$ is
distributed to the control points $C^{(l)}_{ijk}$
if the corresponding basis functions $N_{i,p}(u_{\alpha})N_{j,q}(v_{\beta})N_{k,r}(w_{\gamma})$ are non-zero.
Moreover, a weighted average of all the difference vectors distributed to a control point is taken,
leading to the \emph{difference vector for the control point},
\begin{equation}
\label{eq:diff_cont}
\Delta^{(l)}_{ijk}=\frac{\sum_{I\in I_{ijk}}{N_{i,p}(u_I)N_{j,q}(v_I)N_{k,r}(w_I)\delta^{(l)}_I}}{\sum_{I\in I_{ijk}}{N_{i,p}(u_I)N_{j,q}(v_I)N_{k,r}(w_I)}},
\end{equation}
where $I_{ijk}$ is the set of indices $I=(\alpha,\beta,\gamma)$ of the grid vertices such that $$N_{i,p}(u_{\alpha})N_{j,q}(v_{\beta})N_{k,r}(w_{\gamma}) \neq 0.$$
Next, the $(l+1)^{th}$ control points $C^{(l+1)}_{ijk}$ are formed by adding the difference vectors $\Delta^{(l)}_{ijk}$ to the $l^{th}$ control points,
\begin{equation}
\label{eq:controlpoint}
C^{(l+1)}_{ijk}=C^{(l)}_{ijk}+\Delta^{(l)}_{ijk}.
\end{equation}
Thus, the $(l+1)^{th}$ B-spline function $C^{(l+1)}(u,v,w)$ is produced:
\begin{equation}
\label{eq:t_next_iteration}
C^{(l+1)}(u,v,w)=\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\sum_{k=0}^{n_w}
N_{i,p}(u)N_{j,q}(v)N_{k,r}(w)C^{(l+1)}_{ijk}.
\end{equation}
The convergence of the LSPIA iteration has been proved in~\cite{Deng2014Progressive}.
After the LSPIA iterations stop,
the iteration result $C(u,v,w)$ is taken as the TDF in the parametric domain.
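For clarity, one LSPIA update step can be sketched as follows, written here for a univariate B-spline (the trivariate case~\pref{eq:diff_cont} applies the same update with products of basis functions); the sketch reuses the basis routine from Section~\ref{subsec:tbss}:
\begin{verbatim}
#include <vector>

// One LSPIA step: compute the residual at every data point,
// distribute it to the control coefficients with basis-function
// weights, and update the coefficients by the weighted averages.
void LspiaStep(std::vector<double>& C,          // coefficients
               const std::vector<double>& T,    // data values
               const std::vector<double>& prm,  // data parameters
               const std::vector<double>& U, int p)
{
    const int n = (int)C.size(), m = (int)T.size();
    std::vector<double> num(n, 0.0), den(n, 0.0);
    for (int a = 0; a < m; ++a)
    {
        double fit = 0.0;
        std::vector<double> N(n);
        for (int i = 0; i < n; ++i)
        {
            N[i] = BasisFunction(i, p, prm[a], U);
            fit += N[i] * C[i];
        }
        const double delta = T[a] - fit;   // difference vector
        for (int i = 0; i < n; ++i)
        {
            num[i] += N[i] * delta;
            den[i] += N[i];
        }
    }
    for (int i = 0; i < n; ++i)
        if (den[i] > 0.0)
            C[i] += num[i] / den[i];       // updated coefficient
}
\end{verbatim}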
\emph{Local modification.}
With the TDF $C(u,v,w)$ in the parametric domain,
the TPMS in the parametric domain~\pref{eq:combination} can be generated.
By mapping the TPMS into the TBSS,
a porous scaffold is produced in the TBSS.
However, if the generated porous scaffold does not satisfy the practical
engineering requirements,
the TDF $C(u,v,w)$ can be locally modified in the parametric domain,
and then the porous scaffold can be rebuilt to meet the practical requirements.
To locally modify the TDF,
users first choose some vertices of the parameter grid,
and change their threshold values to the desired values.
Then, a \emph{local} LSPIA iteration is invoked to fit the changed values at
the chosen vertices.
In the local LSPIA iteration,
the difference vector $\delta$~\pref{eq:diff_data} is calculated only at the chosen vertices,
and just the control points to which the difference vectors $\delta$ are distributed are adjusted.
The other control points without distributed difference vectors remain
unchanged.
By locally modifying the TDF in Fig.~\ref{subfig:radial_distribution}
using the method presented above,
the TDF is changed as illustrated in Fig.~\ref{subfig:radial_distribution_modify}.
\begin{figure}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:radial_distribution}
\includegraphics[width=0.22\textwidth]{radial_distribution-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:radial_distribution_modify}
\includegraphics[width=0.22\textwidth]{radial_distribution_modify-eps-converted-to.pdf}}
\caption
{\small
Local modification of TDF.
(a) TDF generated by the layer method.
(b) TDF after local modification.
}
\label{fig:porosity_distribution}
\end{center}
\end{figure}
\subsection{Generation of heterogeneous porous scaffold in TBSS}
\label{subsec:porous_scaffold_generation}
At this point, only one group of unknowns remains in the heterogeneous porous scaffold generation:
the period coefficients $\omega_x, \omega_y, \omega_z$ (Table~\ref{tbl:nodal}).
It is worth noting that the internal connectivity of the scaffold
is crucial to the transport performance of the scaffold.
For a TPMS unit,
the smaller the unit volume,
the worse the internal connectivity of the micro-holes,
and the larger the unit volume,
the better the internal connectivity of the micro-holes.
The period coefficients $\omega_x, \omega_y, \omega_z$ (Table~\ref{tbl:nodal}) can
be employed to adjust the number of TPMS units,
as well as the size of TPMS units in the three parametric directions.
The values of the period coefficients $\omega_x, \omega_y, \omega_z$ used in the examples
in this paper are listed in Table~\ref{tbl:stat}.
After the TDF in the parametric domain and the period coefficients are
both determined (refer to Table~\ref{tbl:nodal}),
the TPMS in the parametric domain (Eq.~\pref{eq:combination}), i.e.,
\begin{equation*}
f(u,v,w)=\psi(u,v,w)-C(u,v,w)=0,
\end{equation*}
can be calculated.
For example,
the TPMSs (sheet structure) calculated based on the TDFs in Figs.~\ref{subfig:layer_distribution}-\ref{subfig:distribution_curvature} are illustrated in Figs.~\ref{subfig:axial_P_sheet_solid}-\ref{subfig:curvature_P_sheet_solid}.
It can be seen clearly from Figs.~\ref{subfig:axial_P_sheet_solid}-\ref{subfig:curvature_P_sheet_solid}
that the porosity is controlled by the TDFs in Figs.~\ref{subfig:layer_distribution}-\ref{subfig:distribution_curvature}.
Finally, the heterogeneous porous scaffold
(Fig.~\ref{subfig:pore_balljoint_scaffold}) in the TBSS can be generated by mapping the TPMS in the parametric domain (volume TPMS structures) (Fig.~\ref{subfig:pore_volume_tpms}) into the TBSS, using the TBSS function.
It should be noted that,
to avoid fold-up, the Jacobian value of the TBSS should be positive.
Because the TPMS in the parametric domain is unitary
(Fig.~\ref{subfig:pore_volume_tpms}),
it has completeness and continuity between adjacent TPMS units.
Therefore, the heterogeneous porous scaffold in the TBSS is ensured to
be complete and continuous
(Fig.~\ref{subfig:pore_balljoint_scaffold}).
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:axial_P_sheet_solid}
\includegraphics[width=0.22\textwidth]{axial_P_sheet_solid-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:radial_P_sheet_solid}
\includegraphics[width=0.22\textwidth]{radial_P_sheet_solid-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:radial_modify_P_sheet_solid}
\includegraphics[width=0.22\textwidth]{radial_modify_P_sheet_solid-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:curvature_P_sheet_solid}
\includegraphics[width=0.22\textwidth]{curvature_P_sheet_solid-eps-converted-to.pdf}}
\\
\subfigure[]{
\label{subfig:layer_distribution}
\includegraphics[width=0.10\textwidth]{axial_distribution-eps-converted-to.pdf}}
\hspace{0.1\textwidth}
\subfigure[]{
\label{subfig:distribution_radial}
\includegraphics[width=0.10\textwidth]{radial_distribution-eps-converted-to.pdf}}
\hspace{0.1\textwidth}
\subfigure[]{
\label{subfig:distribution_radial_modify}
\includegraphics[width=0.10\textwidth]{radial_distribution_modify-eps-converted-to.pdf}}
\hspace{0.1\textwidth}
\subfigure[]{
\label{subfig:distribution_curvature}
\includegraphics[width=0.10\textwidth]{porosity_distribution_balljoint-eps-converted-to.pdf}}
\caption
{\small
Generation of the TPMSs (with their three-view drawing) (a,b,c,d) based on the corresponding TDF in the parametric domain (e,f,g,h).
Note that the porosity of the TPMS is controlled by the TDF.
}
\label{fig:distribution_sheetsolid}
\end{center}
\end{figure*}
\begin{figure}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:pore_volume_tpms}
\includegraphics[width=0.23\textwidth]{pore_volume_tpms-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:pore_balljoint_scaffold}
\includegraphics[width=0.2\textwidth]{pore_balljoint_scaffold-eps-converted-to.pdf}}
\caption
{\small
Generation of heterogeneous porous scaffold.
(a) TPMS in the parametric domain.
(b) Heterogeneous porous scaffold in the TBSS.
}
\label{fig:porous_scaffold}
\end{center}
\end{figure}
\subsection{Storage format based on TDF}
\label{subsec:storage_format}
Owing to their complicated geometric and topological structures,
porous scaffolds have very large storage costs,
usually requiring hundreds of megabytes (MB) (refer to Table~\ref{tbl:stat}).
Therefore, the large storage cost becomes the bottleneck in porous scaffold
generation and processing.
In this study, we developed a new porous scaffold
storage format that reduces the storage cost of porous scaffolds significantly.
Using the new storage format,
the space required to store the porous scaffold models presented in this paper ranges from $0.567$ MB to $1.355$ MB,
while the storage space using the traditional STL file format ranges from $394.402$ MB to $1449.71$ MB.
Thus, the new storage format reduces the storage requirement by at least $98\%$ compared with the traditional STL file format.
Moreover, generating a heterogeneous porous scaffold from the
new file format takes between a few seconds and roughly a dozen seconds (Table~\ref{tbl:stat}).
Specifically, because the TDF in the parametric domain and the
period coefficients (Table~\ref{tbl:nodal}) entirely determine the heterogeneous porous scaffold in a TBSS,
the new storage format needs to store only the following information:
\begin{itemize}
\item period coefficients $\omega_x, \omega_y, \omega_z$,
\item control points of the TDF $C(u,v,w)$,
\item knot vectors of the TDF $C(u,v,w)$,
\item control points of the TBSS $P(u,v,w)$,
\item knot vectors of the TBSS $P(u,v,w)$.
\end{itemize}
Therefore, the new storage format is called the \emph{TDF format},
and is summarized in the Appendix for clarity.
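As an illustration, the contents of a TDF file can be summarized by the following C++ structure (field names are ours; the exact layout is given in the Appendix):
\begin{verbatim}
#include <vector>

// Data stored by the TDF format; everything needed to rebuild
// the heterogeneous porous scaffold in the TBSS.
struct TdfFile
{
    double omega[3];                   // period coefficients
    std::vector<double> tdfKnotsU, tdfKnotsV, tdfKnotsW;
    std::vector<double> tdfControlPoints;    // of C(u,v,w)
    std::vector<double> tbssKnotsU, tbssKnotsV, tbssKnotsW;
    std::vector<double> tbssControlPoints;   // of P(u,v,w), xyz
};
\end{verbatim}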
\section{Implementation, results and discussions}
\label{sec:implementations}
The developed heterogeneous porous scaffold generation method is implemented
in the C++ programming language and tested on a PC with a 3.60 GHz i7-4790 CPU and 16 GB RAM.
In this section, some examples are presented,
and some implementation details are discussed.
Moreover, the developed method is compared with classical
scaffold generation methods.
\subsection{Influence of threshold on heterogeneity of scaffold}
\label{subsec:threshold_influence}
The heterogeneity of a porous scaffold,
i.e., its pore size and shape,
is controlled by the TDF $C(u,v,w)$~\pref{eq:combination}.
The larger the value of $C(u,v,w)$,
the larger the pore size.
In Fig.~\ref{subfig:influence_distribution},
the TDF is generated by the layer method,
which takes the parametric grid vertices with the same $w$ coordinates as the same layer.
TDF values from small to large are visualized using colors ranging from blue to red.
Fig.~\ref{subfig:influence_porous_scaffold} illustrates the heterogeneous porous
scaffold (P-type, pore structure) generated based on the TDF in Fig.~\ref{subfig:influence_distribution}.
We can see that, with the TDF values varying from large to small
(Fig.~\ref{subfig:influence_distribution}),
the pore size of the porous scaffold also changes from large to small
(Fig.~\ref{subfig:influence_porous_scaffold}).
\begin{figure}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:influence_distribution}
\includegraphics[width=0.18\textwidth]{influence_distribution-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:influence_porous_scaffold}
\includegraphics[width=0.18\textwidth]{influence_porous_scaffold-eps-converted-to.pdf}}
\caption
{ \small
Influence of threshold on the heterogeneity of scaffold.
(a) TDF generated by the layer method.
(b) Heterogeneous porous scaffold (P-type, pore structure),
produced based on the TDF in (a).
}
\label{fig:influence_threshold}
\end{center}
\end{figure}
\subsection{Comparison with classical porous scaffold generation methods}
\label{subsec:comparison}
The heterogeneous porous scaffold generation method developed in this study
is compared here with two classical methods presented in Refs.~\cite{Yoo2011Computer,Feng2018Porous}.
First, because the method proposed in~\cite{Yoo2011Computer}
generates a porous scaffold by mapping a regular TPMS unit to each hexahedron of a hexahedral mesh model,
it has the following shortcomings:
(1) Continuity cannot be ensured between two adjacent TPMS units in the porous scaffold.
(2) The threshold values $C$ of all TPMS units are the same.
(3) The geometric quality of the porous scaffold is greatly influenced by
the mesh quality of the hexahedral mesh model.
As illustrated in Fig.~\ref{subfig:porous_using_eightnodes_method},
the boundary mesh quality is poor,
with many slender triangles.
Second, the method presented in~\cite{Feng2018Porous} produces
a porous scaffold by first immersing a trivariate T-spline model in an ambient TPMS
and then taking the intersection of them as the porous scaffold.
Therefore, this method cannot guarantee completeness of the TPMS units.
As demonstrated in Fig.~\ref{subfig:porous_using_tspline_method},
many boundary TPMS units are broken.
However, because our method generates a heterogeneous porous scaffold by
mapping a unitary TPMS in the parametric domain to a TBSS, it avoids the shortcomings of the other two methods.
The heterogeneous porous scaffold generated by our method has the following
properties (Fig.~\ref{subfig:porous_using_our_method}):
\begin{itemize}
\item Completeness of TPMS units and continuity between adjacent TPMS units are guaranteed.
\item The TDF can be designed by users to easily control the porosity.
\item Because the degree of a TBSS is relatively high,
and a TBSS has high smoothness,
the heterogeneous porous scaffold generated by TBSS mapping is highly smooth.
\item The TDF file format saves significant storage space.
\end{itemize}
\begin{figure}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:porous_using_eightnodes_method}
\includegraphics[width=0.45\textwidth]{balljoint_eightnode-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:porous_using_tspline_method}
\includegraphics[width=0.45\textwidth]{balljoint_tspline-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:porous_using_our_method}
\includegraphics[width=0.45\textwidth]{balljoint_bspline1-eps-converted-to.pdf}}
\caption
{\small
Comparison with classical porous scaffold generating methods.
The left-most column shows the models input to the corresponding methods.
(a) Method in~\cite{Yoo2011Computer} with a hexahedral mesh as input.
(b) Method in~\cite{Feng2018Porous} with a T-spline solid as input.
(c) Our method with a TBSS as input.
}
\label{fig:comaprison}
\end{center}
\end{figure}
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:balljoint_bspline_solid}
\includegraphics[width=0.18\textwidth]{balljoint_bspline_solid-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:porosity_distribution_balljoint}
\includegraphics[width=0.2\textwidth]{porosity_distribution_balljoint-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:balljoint_p_pore_scaffold}
\includegraphics[width=0.18\textwidth]{balljoint_p_pore_scaffold-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:balljoint_p_rod_scaffold}
\includegraphics[width=0.18\textwidth]{balljoint_p_rod_scaffold-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:balljoint_p_sheet_scaffold}
\includegraphics[width=0.18\textwidth]{balljoint_p_sheet_scaffold-eps-converted-to.pdf}}
\caption
{\small
Heterogeneous porous scaffold of \emph{Ball-joint}.
(a) TBSS.
(b) TDF in the parametric domain.
(c) P-type pore structure.
(d) P-type rod structure.
(e) P-type sheet structure.
}
\label{fig:balljoint_porous_scaffold}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:venus_bspline_solid}
\includegraphics[width=0.18\textwidth]{venus_bspline_solid-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:porosity_distribution_venus}
\includegraphics[width=0.2\textwidth]{porosity_distribution_venus-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:venus_d_pore_scaffold}
\includegraphics[width=0.18\textwidth]{venus_d_pore_scaffold-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:venus_d_rod_scaffold}
\includegraphics[width=0.18\textwidth]{venus_d_rod_scaffold-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:venus_d_sheet_scaffold}
\includegraphics[width=0.18\textwidth]{venus_d_sheet_scaffold-eps-converted-to.pdf}}
\caption
{\small
Heterogeneous porous scaffold of \emph{Venus}.
(a) TBSS.
(b) TDF in the parametric domain.
(c) D-type pore structure.
(d) D-type rod structure.
(e) D-type sheet structure.
}
\label{fig:venus_porous_scaffold}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:moai_bspline_solid}
\includegraphics[width=0.12\textwidth]{moai_bspline_solid-eps-converted-to.pdf}}
\hspace{0.055\textwidth}
\subfigure[]{
\label{subfig:porosity_distribution_moai}
\includegraphics[width=0.2\textwidth]{porosity_distribution_moai-eps-converted-to.pdf}}
\hspace{0.055\textwidth}
\subfigure[]{
\label{subfig:moai_iwp_pore_scaffold}
\includegraphics[width=0.12\textwidth]{moai_iwp_pore_scaffold-eps-converted-to.pdf}}
\hspace{0.055\textwidth}
\subfigure[]{
\label{subfig:moai_iwp_rod_scaffold}
\includegraphics[width=0.12\textwidth]{moai_iwp_rod_scaffold-eps-converted-to.pdf}}
\hspace{0.055\textwidth}
\subfigure[]{
\label{subfig:moai_iwp_sheet_scaffold}
\includegraphics[width=0.12\textwidth]{moai_iwp_sheet_scaffold-eps-converted-to.pdf}}
\caption
{
\small
Heterogeneous porous scaffold of \emph{Moai}.
(a) TBSS.
(b) TDF in the parametric domain.
(c) I-WP-type pore structure.
(d) I-WP-type rod structure.
(e) I-WP-type sheet structure.
}
\label{fig:moai_porous_scaffold}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:tooth_bspline_solid}
\includegraphics[width=0.18\textwidth]{tooth_bspline_solid-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:porosity_distribution_tooth}
\includegraphics[width=0.2\textwidth]{porosity_distribution_tooth-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:tooth_g_pore_scaffold}
\includegraphics[width=0.18\textwidth]{tooth_g_pore_scaffold-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:tooth_g_rod_scaffold}
\includegraphics[width=0.18\textwidth]{tooth_g_rod_scaffold-eps-converted-to.pdf}}
\subfigure[]{
\label{subfig:tooth_g_sheet_scaffold}
\includegraphics[width=0.18\textwidth]{tooth_g_sheet_scaffold-eps-converted-to.pdf}}
\caption
{
\small
Heterogeneous porous scaffold of \emph{Tooth}.
(a) TBSS.
(b) TDF in the parametric domain.
(c) G-type pore structure.
(d) G-type rod structure.
(e) G-type sheet structure.
}
\label{fig:tooth_porous_scaffold}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\subfigure[]{
\label{subfig:isis_bspline_solid}
\includegraphics[width=0.1\textwidth]{isis_bspline_solid-eps-converted-to.pdf}}
\hspace{0.06\textwidth}
\subfigure[]{
\label{subfig:porosity_distribution_isis}
\includegraphics[width=0.2\textwidth]{porosity_distribution_isis-eps-converted-to.pdf}}
\hspace{0.06\textwidth}
\subfigure[]{
\label{subfig:isis_p_pore_scaffold}
\includegraphics[width=0.1\textwidth]{isis_p_pore_scaffold-eps-converted-to.pdf}}
\hspace{0.06\textwidth}
\subfigure[]{
\label{subfig:isis_p_rod_scaffold}
\includegraphics[width=0.1\textwidth]{isis_p_rod_scaffold-eps-converted-to.pdf}}
\hspace{0.06\textwidth}
\subfigure[]{
\label{subfig:isis_p_sheet_scaffold}
\includegraphics[width=0.1\textwidth]{isis_p_sheet_scaffold-eps-converted-to.pdf}}
\caption
{
\small
Heterogeneous porous scaffold of \emph{Isis}.
(a) TBSS.
(b) TDF in the parametric domain.
(c) P-type pore structure.
(d) P-type rod structure.
(e) P-type sheet structure.
}
\label{fig:isis_porous_scaffold}
\end{center}
\end{figure*}
\begin{table*}[!htb]
\centering
\footnotesize
\caption{Statistical data of the heterogeneous porous scaffold generation method developed in this paper.}
\label{tbl:stat}
\begin{threeparttable}
\begin{tabular}{| c | c | c | c | c | c | c | c | c |}
\hline
\multirow{2}{*}{Model} & \multirow{2}{*}{Type} & \multirow{2}{*}{Structure}& \multirow{2}{*}{Period coefficients} & \multicolumn{3}{|c|}{Run time(s)\tnote{1}} & \multicolumn{2}{|c|}{Storage space(MB)\tnote{2}}\\
\cline{5-7} \cline{8-9}
& & & & {TDF} & {TPMS} &{Porous scaffold} & {STL format} & {TDF format}\\
\hline
\multirow{3}{*}{Ball joint} & \multirow{3}{*}{P} & pore & \multirow{3}{*}{$ (16,14,18)$}& \multirow{3}{*}{2.745} &0.326 & 3.194 & 741.367 & \multirow{3}{*}{0.810}\\
\cline{3-3} \cline{6-8}
& & rod & & &0.319 & 3.109 & 721.599 & \\
\cline{3-3} \cline{6-8}
& & sheet & & &0.658 & 6.651 & 1449.71 & \\
\hline
\multirow{3}{*}{Venus} & \multirow{3}{*}{D}& pore & \multirow{3}{*}{$ (10,10,10)$} & \multirow{3}{*}{2.728}& 0.293 & 3.024 & 701.682 & \multirow{3}{*}{0.947}\\
\cline{3-3} \cline{6-8}
& & rod & & &0.286 & 3.007 & 695.564 & \\
\cline{3-3} \cline{6-8}
& & sheet & & &0.528 & 6.581 & 1399.55 & \\
\hline
\multirow{3}{*}{Moai} & \multirow{3}{*}{I-WP}& pore & \multirow{3}{*}{$ (6,6,16)$} & \multirow{3}{*}{2.736} & 0.290 & 2.554 & 557.324& \multirow{3}{*}{0.824}\\
\cline{3-3} \cline{6-8}
& & rod & & &0.284 & 2.555 & 554.906 & \\
\cline{3-3} \cline{6-8}
& & sheet & & &0.583 & 5.530 & 1105.41 & \\
\hline
\multirow{3}{*}{Tooth} & \multirow{3}{*}{G}& pore & \multirow{3}{*}{$ (8,6,8)$} & \multirow{3}{*}{2.732} &0.238 & 1.974 & 394.402 & \multirow{3}{*}{0.567}\\
\cline{3-3} \cline{6-8}
& & rod & & &0.237 & 1.956 & 394.422 & \\
\cline{3-3} \cline{6-8}
& & sheet & & &0.499 & 4.156 & 773.741 & \\
\hline
\multirow{3}{*}{Isis} & \multirow{3}{*}{P}& pore & \multirow{3}{*}{$(6,6,16)$} & \multirow{3}{*}{2.754} &0.238 & 2.089 & 473.401 & \multirow{3}{*}{1.355}\\
\cline{3-3} \cline{6-8}
& & rod & & &0.234 & 1.994 & 454.359 & \\
\cline{3-3} \cline{6-8}
& & sheet & & &0.521 & 4.236 & 908.205 & \\
\hline
\end{tabular}
\begin{tablenotes}
\item[1] Run time (in seconds) for TDF construction, generation of the volume TPMS in the parametric domain and generation of the heterogeneous porous scaffold.
\item[2] Storage space (in megabytes) of heterogeneous porous scaffolds using the traditional STL file format and the TDF file format developed in this paper.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Results}
\label{subsec:results}
In this section,
some heterogeneous porous scaffold results generated by our method are presented (Figs.~\ref{fig:balljoint_porous_scaffold}-\ref{fig:isis_porous_scaffold}) to demonstrate the effectiveness and efficiency of the
method.
In Figs.~\ref{fig:balljoint_porous_scaffold}-\ref{fig:isis_porous_scaffold},
(a) is the input TBSS,
(b) is the TDF in the parametric domain,
and (c), (d), and (e) are the heterogeneous porous scaffolds of pore structure, rod structure, and sheet structure with different TPMS types.
Moreover, statistical data are listed in Table~\ref{tbl:stat},
including the period coefficients $(\omega_x, \omega_y, \omega_z)$,
the run times for generating the TDF,
the TPMS in the parametric domain,
and the heterogeneous porous scaffold in the TBSS,
and the storage costs of porous scaffolds with the traditional STL file format and the new TDF file format.
\subsection{TDF file format}
\label{subsec:tdf_file_format}
In Table~\ref{tbl:stat},
the storage spaces required to store the porous scaffold using the traditional STL file format and the new TDF file format are listed.
Using the TDF file format,
storing the porous scaffolds requires $0.567$ to $1.355$ MB,
whereas using the STL file format requires $394.402$ to $1449.71$ MB.
Therefore, at least $98\%$ of storage space is saved by using the new
TDF file format.
Moreover, in Table~\ref{tbl:stat},
the time cost for generating the heterogeneous porous scaffold from the TDF file format is also listed,
including the run time for generating volume TPMS structures and porous scaffolds.
We can see that the time costs range from $2$ to $7$ seconds,
which is acceptable for user interaction.
Finally, the TDF file format brings some extra benefits.
Traditionally, heterogeneous porous scaffolds have been stored as linear
mesh models.
However, the TDF file format stores a trivariate B-spline function.
Therefore, in theory, a porous scaffold can be generated to any
prescribed precision using the TDF file format.
In addition, the period coefficients and control points of the
trivariate B-spline function,
stored in the TDF file format,
can be regarded as \emph{parameters}.
Therefore, a heterogeneous porous scaffold can be modified by altering
these parameters,
just as in parametric modeling techniques.
\section{Conclusion}
\label{sec:conclusion}
In this study,
we developed a method for generating a heterogeneous porous scaffold in a TBSS by the TDF designed in the parametric domain of the TBSS.
First, the TDF is easy to design in the cubic parameter domain,
and is represented as a trivariate B-spline function.
The TDF can be employed to control the porosity of the porous scaffold.
Then, a TPMS can be generated in the parameter domain based on the TDF and
the period coefficients.
Finally, by mapping the TPMS into the TBSS,
a heterogeneous porous scaffold is produced.
Moreover, we presented a new file format (TDF) for storing the porous scaffold that saves significant storage space.
By the method developed in this study,
both completeness of the TPMS units and continuity between adjacent TPMS units can be guaranteed.
Moreover, the porosity of the porous scaffold can be controlled easily by
designing a suitable TDF.
More importantly, the TDF file format not only saves significant storage space,
but it can also be used to generate a porous scaffold to any prescribed precision.
In terms of future work, determining how to change the shape of a porous scaffold using the
parameters stored in the TDF file format is a promising research direction.
\section*{Acknowledgements}
This work is supported by the National Natural Science Foundation of China
under Grant No.~61872316.
\bibliographystyle{elsart-num}
\subsection{Setup}
\begin{table}[tbp]
\centering
\vspace{0.25cm}
\caption{Parameters of our FG-3DMOT algorithm}
\label{tab:params_vbi}
\resizebox{0.48\textwidth}{!}
{%
\begin{tabular}{@{}llc@{}}
\toprule
\textbf{Parameter Name} & \textbf{Symbol} & \textbf{Value} \\ \midrule
Detection Covariance & $\vect{\Sigma}^{det}$ & $\diag \begin{pmatrix} \SI{0.2}{\meter} \\ \SI{0.2}{\meter}\\ \SI{0.2}{\meter}\end{pmatrix}^2$ \\ \midrule
Detection Null-Hypothesis Covariance& $\vect{\Sigma}^{det}_{0}$ & $\diag \begin{pmatrix} \SI{100}{\meter} \\ \SI{100}{\meter}\\ \SI{1}{\meter}\end{pmatrix}^2$ \\ \midrule
Constant Velocity Covariance & $\vect{\Sigma}^{cv}$ & $\diag \begin{pmatrix} \SI{0.25}{\meter} \\ \SI{0.25}{\meter}\\ \SI{0.25}{\meter} \\ \SI{0.25}{\meter\per\second} \\ \SI{0.25}{\meter\per\second} \\ \SI{0.25}{\meter\per\second}\end{pmatrix}^2$ \\ \midrule
Repelling Covariance & $\vect{\Sigma}^{rep}$ & $\SI{0.5}{\meter^2}$ \\ \midrule
Repelling Distance Threshold & $d_{min}$ & $\SI{4}{\meter}$ \\ \midrule
Track Confidence Threshold (offline)& $c_{min}$ & $3.9$ \\ \midrule
Track Confidence Threshold (online) & $c_{min}$ & $3.5$ \\ \midrule
Number Consecutive Detections & $n_{det}$ & $2$ \\ \midrule
Num. Con. Null-Hypothesis Detections & $n_{lost}$ & $5$ \\ \midrule
Object Permanence (online) & $n_{perm}$ & $1$ \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table*}[tbph]
\centering
\vspace{0.2cm}
\caption{Results on the KITTI 2D MOT Testing Set for Class Car}
\tiny
\resizebox{0.83\textwidth}{!}
{%
\def1.02{1.02}
\begin{tabular}{@{} l c c c c c c c @{}}
\toprule
{\bf Method} & {\bf MOTA $\uparrow$} & {\bf MOTP $\uparrow$} & {\bf MT $\uparrow$} & {\bf ML $\downarrow$} & {\bf IDS $\downarrow$} & {\bf FRAG $\downarrow$} & {\bf FPS $\uparrow$}\\
\midrule
TuSimple \cite{NMOT} & 86.62 \% & 83.97 \% & 72.46 \% & 6.77 \% & 293 & 501 & 1.7 \\
MASS \cite{MASS} & 85.04 \% & 85.53 \% & 74.31 \% & \bestResult{2.77} \% & 301 & 744 & 100.0 \\
MOTSFusion \cite{MOTSFusion} & 84.83 \% & 85.21 \% & 73.08 \% & \bestResult{2.77} \% & 275 & 759 & 2.3 \\
mmMOT \cite{Zhang} & 84.77 \% & 85.21 \% & 73.23 \% & \bestResult{2.77} \% & 284 & 753 & 50.0 \\
mono3DT \cite{mono3DT} & 84.52 \% & 85.64 \% & 73.38 \% & \bestResult{2.77} \% & 377 & 847 & 33.3 \\
MOTBeyondPixels \cite{BeyondPixels} & 84.24 \% & \bestResult{85.73} \% & 73.23 \% & \bestResult{2.77} \% & 468 & 944 & 3.3 \\
AB3DMOT \cite{Baseline} & 83.84 \% & 85.24 \% & 66.92 \% & 11.38 \% & \bestResult{9} & 224 & \bestResult{212.8} \\
IMMDP \cite{IMMDP} & 83.04 \% & 82.74 \% & 60.62 \% & 11.38 \% & 172 & 365 & 5.3 \\
aUToTrack \cite{autotrack} & 82.25 \% & 80.52 \% & 72.62 \% & 3.54 \% & 1025 & 1402 & 100.0 \\
JCSTD \cite{JCSTD} & 80.57 \% & 81.81 \% & 56.77 \% & 7.38 \% & 61 & 643 & 14.3 \\
\midrule
{\bf FG-3DMOT (offline)} & \bestResult{88.01} \% & 85.04 \% & \bestResult{75.54} \% & 11.85 \% & 20 & \bestResult{117} & 23.8 \\
{\bf FG-3DMOT (online)} & 83.74 \% & 84.64 \% & 68.00 \% & 9.85 \% & \bestResult{9} & 375 & 27.1 \\
\bottomrule
\end{tabular}
}
\label{tab:result}
\end{table*}
In order to evaluate our proposed algorithm, we use the KITTI 2D MOT benchmark \cite{KITTI}. It is composed of 21 training and 29 testing sequences, with total lengths of 8008 and 11095 frames, respectively.
For each sequence, 3D point clouds, RGB images of the left and right camera and ego motion data are given at a rate of 10 FPS.
The testing split does not provide any annotations, since it is used for evaluation on the KITTI benchmark server.
The training split contains annotations for 30601 objects and 636 trajectories of 8 classes.
Since annotations are scarce for most classes, we evaluate our algorithm only on the car subset.
As previously mentioned, we use PointRCNN \cite{PointRCNN} as the 3D object detector.
Since \cite{Baseline} uses the same detector and made the obtained 3D bounding boxes publicly available, we use them for comparability reasons.
We normalize the provided confidence to positive values by adding a constant offset.
Since our algorithm is robust against false positive detections, we achieve the best results by using all available detections.
All other parameters are shown in \autoref{tab:params_vbi}.
We construct the graph using the libRSF framework \cite{Pfeifer} and solve the optimization problem with Ceres \cite{Agarwal}, using the dogleg optimizer.
Since the KITTI 2D MOT benchmark is evaluated in the image space through 2D bounding boxes, we need to transform our 3D bounding boxes into the image space and flatten them into 2D bounding boxes.
Furthermore, we only output 2D bounding boxes which overlap at least 25\% with the image in order to suppress detections that are not visible in the image, but in the laser scan.
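A minimal sketch of this flattening step is given below, assuming the eight box corners are already available in camera coordinates and \texttt{P2} denotes the $3\times4$ projection matrix of the left camera (types and names are illustrative):
\begin{verbatim}
#include <algorithm>
#include <array>
#include <limits>

struct Box2D { double x1, y1, x2, y2; };

// Project the eight 3D box corners with the 3x4 camera matrix
// and take the axis-aligned extent of the projected points.
Box2D FlattenBox(const std::array<std::array<double, 3>, 8>& corner,
                 const double P2[3][4])
{
    const double inf = std::numeric_limits<double>::max();
    Box2D b{inf, inf, -inf, -inf};
    for (const auto& c : corner)
    {
        double h[3];
        for (int r = 0; r < 3; ++r)
            h[r] = P2[r][0] * c[0] + P2[r][1] * c[1]
                 + P2[r][2] * c[2] + P2[r][3];
        const double u = h[0] / h[2], v = h[1] / h[2];
        b.x1 = std::min(b.x1, u); b.y1 = std::min(b.y1, v);
        b.x2 = std::max(b.x2, u); b.y2 = std::max(b.y2, v);
    }
    return b;
}
\end{verbatim}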
\subsection{Results}\label{sec:results}
We evaluated our tracking algorithm in the online and offline use cases on the KITTI 2D MOT testing set.
Since offline results are rare and not among the best-performing entries on the leaderboard, we compare our online and offline results against the 10 best methods on the leaderboard (accessed February 2020), which are summarized in \autoref{tab:result}.
The metrics used, namely multi object tracking accuracy (MOTA), multi object tracking precision (MOTP), mostly tracked objects (MT), mostly lost objects (ML), id switches (IDS) and track fragmentation (FRAG), are defined in \cite{Li09learningto,Bernardin}.
The proposed algorithm achieves accurate, robust and reliable tracking results in both online and offline applications, performing better than many state-of-the-art algorithms.
In the offline use case, we improve the state of the art in accuracy, mostly tracked objects and the fragmentation of tracks.
In particular, the fragmentation of our tracks is significantly lower than that of the previous state of the art, since our algorithm can propagate tracks without any measurements far into the future and truncate them afterwards if no measurements were associated.
Even for the online solution, we achieve low track fragmentation and state-of-the-art id switches.
Furthermore, the results demonstrate that our approach benefits from the optimization of past states and the ability to re-assign past detections based on future information, since it achieves considerably higher accuracy and lower fragmentation in the offline application.
Although we did not optimize run time in any way, our algorithm is real-time feasible in both the online and offline use cases.
The computation time is similar in both cases and could be improved significantly, especially for the online use case.
We provide a visualization of the results of our offline tracker in the accompanying video, which features KITTI testing sequence \textit{0014}\footnote[1]{https://youtu.be/mvZmli4jrZQ}.
\textbf{Limitations of the KITTI Ground Truth:}
During the evaluation of our algorithm on the training split, we discovered the limits of KITTI's 2D bounding box ground truth.
As an example, we visualize a failure case of the ground truth in \autoref{fig:kitti_gt}.
The ground truth uses \textit{Don't Care} 2D bounding boxes to label regions with occluded, poorly visible or far away objects.
Bounding boxes which overlap at least 50\% with a \textit{Don't Care} area are not evaluated.
As seen in \autoref{fig:kitti_gt}, the ground truth is not labeled consistently and objects (here cars) are neither labeled \textit{Don't Care} nor \textit{Car}.
Therefore, we get a lot of false positives, since our algorithm can track objects from their first occurrence and at large distances.
As a result, we assume that we could achieve significantly higher accuracy (MOTA) with consistently labeled ground truth.
\textbf{Limitations of our Algorithm:}
After the evaluation on the testing set of the KITTI benchmark, we discovered one main failure case of our algorithm.
In testing sequences \textit{0005} and \textit{0006}, the ego-vehicle captures a highway scene; hence, the ego-vehicle itself and all other cars are moving at high velocities.
Since our algorithm works in a static 3D coordinate system and, as a general assumption, initializes new objects with zero velocity, these cases are hard for our algorithm to capture.
Furthermore, such scenes with fast-moving cars are not present in the training split, so we could not adapt our algorithm to handle these scenes.
This issue will be addressed in future work.
\subsection{Factor Graphs for State Estimation}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.9\linewidth]{images/FG_Tracking.pdf}
\caption{Example of the factor graph representation of the proposed 3D multi-object tracking algorithm. Small dots represent error functions (factors) that define the least squares problem and big circles are the corresponding state variables. The set of tracked object varies over time due to the appearance and disappearance of objects from the field of view.}
\label{fig:FactorGraph}
\end{figure}
The factor graph, as a graphical representation of non-linear least squares optimization, is a powerful tool to solve complex state estimation problems and dominates today's progress in state estimation for autonomous systems \cite{Dellaert2017}.
\begin{equation}
\vect{\opt{X}} =\argmax_{\vect{X}} \vect{P}(\vect{X}|\vect{Z})
\label{eqn:argmax}
\end{equation}
To estimate the most likely set of states $\vect{\opt{X}}$ based on a set of measurements $\vect{Z}$, \autoref{eqn:argmax} is solved using the factorized conditional probabilities:
\begin{equation}
\probOf{\vect{X}}{\vect{Z}} \propto \prod_i \probOf{\measurementOf{}{i}}{\stateOf{}{i}} \cdot \prod_j \vect{P}(\stateOf{}{j})
\label{eqn:posteriori}
\end{equation}
Please note that we omit the priors $\vect{P}(\stateOf{}{j})$ in further equations for simplicity.
By assuming a Gaussian distributed conditional $\probOf{\measurementOf{}{i}}{\stateOf{}{i}}$, the maximum-likelihood problem can be transformed to a minimization of the negative log-likelihood:
\begin{equation}
\vect{\est{X}} = \argmin_{\vect{X}} \sum_{i} \frac{1}{2} \left\| \info^{\frac{1}{2}} \left( \errorOf{}{i} - {\vect{\mu}} \right) \right\| ^2
\label{eqn:arg_min}
\end{equation}
Therefore, the estimated set of states $\vect{\est{X}}$ can be obtained by applying non-linear least squares optimization of the measurement function $\errorOf{}{i} = f(\measurementOf{}{i}, \stateOf{}{i})$.
Additionally, the mean ${\vect{\mu}}$ and square root information $\info^{\frac{1}{2}}$ of a Gaussian distribution are used to represent the sensor's error characteristic.
State-of-the-art frameworks like GTSAM \cite{Dellaert} or Ceres \cite{Agarwal} allow an efficient solution for online as well as offline estimation problems.
The major advantage over traditional filter-based solutions is the flexibility to re-estimate past states using current information.
This enables the algorithm to correct past estimation errors and even improves the estimation of current states.
Our algorithmic goal is to apply this capability to overcome the limitation of fixed assignments that filters have to obey.
\begin{algorithm}[tbp]
\SetAlgoLined
generate detections $\vect{Z}$ using PointRCNN \cite{PointRCNN}
\ForEach{time step $t$}
{
create GMM based on $\measurementOf{}{t,j}$ and null-hypothesis
\eIf{$t == 0$}
{
init $\stateOf{pos}{t,i}$ at $\measurementOf{pos}{t,j}$ and $\stateOf{vel}{t,i}=[0,0,0]$
}{
propagate $\stateOf{pos}{t-1,i}$ to $t$
get correspondence between $\stateOf{}{t,i}$ and $\measurementOf{}{t,j}$ according to $-\log \left( \probOf{\measurementOf{}{t,j}}{\stateOf{}{t,i}} \right)$
\If{$\stateOf{}{t,i}$ does not correspond to any $\measurementOf{}{t,j}$}
{
keep $\stateOf{}{t,i}$ marked as lost or delete it
}
\If{$\measurementOf{}{t,j}$ does not correspond to any $\stateOf{}{t,i}$}
{
init $\stateOf{pos}{t,i}$ at $\measurementOf{pos}{t,j}$ and $\stateOf{vel}{t,i}=[0,0,0]$
}
}
add factors \autoref{eqn:arg_min_mm}, \autoref{eqn:error_cv} and \autoref{eqn:error_rep}
optimize factor graph
}
get association between $\vect{X}$ and $\vect{Z}$
\eIf{$\stateOf{}{t,i}$ has associated $\measurementOf{}{t,j}$}
{
$\stateOf{dim}{t,i} = \measurementOf{dim}{t,j}$ and $x^{conf}_{t,i} = z^{conf}_{t,j}$
}{
$\stateOf{dim}{t,i} = \stateOf{dim}{t-1,i}$ and $x^{conf}_{t,i}=0$
}
\If{$\meanOp\left( x^{conf}_{t,i} \text{ } \forall \text{ } t \right) < c_{min}$}
{
delete $\stateOf{}{i}$
}
\caption{Tracking Algorithm for the Offline Case}
\label{algo:tracking}
\end{algorithm}
\subsection{Solving the Assignment Problem}
Despite their capabilities, factor graphs face the same challenges as filters when it comes to unknown assignments between measurements and states.
The assignment can be represented by a categorical variable $\vect{\theta} = \left\{ \theta_{i,j} \right\}$ which describes the probability that the $j\text{th}$ detection belongs to the $i\text{th}$ object.
To solve the assignment problem exactly, the following integral over $\vect{\theta}$ has to be solved:
\begin{equation}
\probOf{\vect{X}}{\vect{Z}} = \int \probOf{\vect{X}, \vect{\theta}}{\vect{Z}} \mathrm{d}\boldsymbol{\vect{\theta}}
\label{eqn:hidden}
\end{equation}
Common filter-based solutions estimate $\vect{\theta}$ once, based on the predicted states, and assume it to be fixed in the following inference process.
This leads to a decreased performance in the case of wrong assignments.
Instead of including wrong assumptions, we formulate the state estimation problem without any assumptions regarding the assignment.
Therefore, we assume that $\vect{\theta}$ follows a discrete uniform distribution.
This can be done by describing the whole set of measurements with an equally weighted Gaussian mixture model (GMM):
\begin{equation}
\begin{split}
\probOf{\measurementOf{}{i}}{\stateOf{}{i}} \propto \sum_{j=1}^n&{c_j \cdot \exp \left( -\frac{1}{2} \left\| \sqrtinfoOf{j} \left( \errorOf{}{i} - {\vect{\mu}}_j \right) \right\| ^2 \right)}\\
\text{with } &c_j = w_j \cdot \det\left( \sqrtinfoOf{j} \right)
\end{split}
\label{eqn:gmm}
\end{equation}
The GMM encodes that each measurement $\vect{z}_j$ with mean ${\vect{\mu}}_j$ and uncertainty $\sqrtinfoOf{j}$ is assigned to each state $\stateOf{}{i}$ with the same probability.
In our case, the error function is identical to the corresponding state ($\errorOf{}{i} = \stateOf{}{i}$).
By assigning all measurements to all states, the assignment problem has to be solved during inference by combining all available information.
It also allows measurements to be re-assigned and wrong matches to be corrected with future evidence, which relaxes the requirement of an optimal initial assignment.
Using a GMM as the probabilistic model breaks the least squares formulation derived in \autoref{eqn:arg_min}, which is limited to simple single Gaussian models.
The authors of \cite{Olson2012} proposed an approach to maintain this relationship by approximating the sum inside \autoref{eqn:gmm} with a maximum operator.
This allows the weighted error function to be reformulated as follows:
\begin{equation}
\begin{split}
\left\| \errorOf{det}{t,i} \right\|^2
&=
\min_{j}
\begin{Vmatrix}
\sqrt{- 2 \cdot \ln \frac{c_j}{\gamma_m}}\\
\sqrtinfoOf{j} \left( \errorOf{}{i} - {\vect{\mu}}_j \right)
\end{Vmatrix}
^2\\
\text{with } \gamma_m &= \max_{j} c_j
\end{split}
\label{eqn:arg_min_mm}
\end{equation}
For a detailed explanation of this equation, we refer the reader to our previous work \cite{Pfeifer2019} and the original publication \cite{Olson2012}.
Due to the flexibility of factor graphs, additional information like different sensors or motion models can be added to the optimization problem.
This combination of factor graphs and multimodal probabilistic models allows to formulate a novel inference algorithm as robust back-end for different multi-object tracking applications, that can be applied online as well as offline.
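For illustration, the error in \autoref{eqn:arg_min_mm} can be evaluated by testing every component of the mixture and keeping the minimum; a simplified C++ sketch (using Eigen for brevity; not the actual libRSF implementation) is:
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>
#include <Eigen/Dense>

// Max-mixture squared error: for each component j, add the
// Mahalanobis part and the normalization part -2*ln(c_j/gamma),
// then keep the minimum over all components.
double MaxMixtureSquaredError(
    const Eigen::Vector3d& e,
    const std::vector<Eigen::Vector3d>& mu,
    const std::vector<Eigen::Matrix3d>& sqrtInfo,
    const std::vector<double>& w)
{
    std::vector<double> c(mu.size());
    double gamma = 0.0;
    for (size_t j = 0; j < mu.size(); ++j)
    {
        c[j] = w[j] * sqrtInfo[j].determinant();
        gamma = std::max(gamma, c[j]);
    }
    double best = std::numeric_limits<double>::max();
    for (size_t j = 0; j < mu.size(); ++j)
    {
        const double mahal2 =
            (sqrtInfo[j] * (e - mu[j])).squaredNorm();
        best = std::min(best, mahal2 - 2.0 * std::log(c[j] / gamma));
    }
    return best;
}
\end{verbatim}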
\subsection{Detection and Preprocessing}
We apply PointRCNN \cite{PointRCNN} to obtain the 3D bounding boxes $\vect{Z}$, which are composed of the 3D coordinates of the object's center, its dimensions, the rotation of the bounding box and its confidence, and are defined at time step $t$ as:
\begin{equation}
\begin{split}
\measurementOf{}{t,j} &= \begin{bmatrix}
\measurementOf{pos}{t,j}\\
\measurementOf{dim}{t,j}\\
z^{conf}_{t,j}
\end{bmatrix}\\
\text{with }
\measurementOf{pos}{t,j} = &\begin{bmatrix}
z^x_{t,j}\\
z^y_{t,j}\\
z^z_{t,j}
\end{bmatrix}
\measurementOf{dim}{t,j} = \begin{bmatrix}
z^h_{t,j}\\
z^w_{t,j}\\
z^l_{t,j}\\
z^\theta_{t,j}
\end{bmatrix} \\
\end{split}
\label{eqn:detections}
\end{equation}
Subsequently, all detections $\vect{Z}$ are transformed into a global coordinate system, which is defined relative to the ego vehicle's pose at the first frame of the sequence.
\subsection{State Estimation}
The estimated states within the factor graph are composed of the 3D position $\stateOf{pos}{t,i}$ of the $i = 1 \dots M$ objects and their corresponding velocities $\stateOf{vel}{t,i}$, both defined at time step $t$ as:
\begin{equation}
\stateOf{pos}{t,i} = \begin{bmatrix}
p^x_{t,i}\\
p^y_{t,i}\\
p^z_{t,i}
\end{bmatrix}
\text{ }
\stateOf{vel}{t,i} = \begin{bmatrix}
v^x_{t,i}\\
v^y_{t,i}\\
v^z_{t,i}
\end{bmatrix}
\label{eqn:states}
\end{equation}
\begin{algorithm}[tbp]
\SetAlgoLined
generate detections $\vect{Z}$ using PointRCNN \cite{PointRCNN}
\ForEach{time step $t$}
{
create GMM based on $\measurementOf{}{t,j}$ and null-hypothesis
\eIf{$t == 0$}
{
init $\stateOf{pos}{t,i}$ at $\measurementOf{pos}{t,j}$ and $\stateOf{vel}{t,i}=[0,0,0]$
}{
propagate $\stateOf{pos}{t-1,i}$ to $t$
get correspondence between $\stateOf{}{t,i}$ and $\measurementOf{}{t,j}$ according to $-\log \left( \probOf{\measurementOf{}{t,j}}{\stateOf{}{t,i}} \right)$
\If{$\stateOf{}{t,i}$ does not correspond to any $\measurementOf{}{t,j}$}
{
keep $\stateOf{}{t,i}$ marked as lost or delete it
}
\If{$\measurementOf{}{t,j}$ does not correspond to any $\stateOf{}{t,i}$}
{
init $\stateOf{pos}{t,i}$ at $\measurementOf{pos}{t,j}$ and $\stateOf{vel}{t,i}=[0,0,0]$
}
}
add factors \autoref{eqn:arg_min_mm}, \autoref{eqn:error_cv} and \autoref{eqn:error_rep}
optimize factor graph
get association between $\stateOf{}{t,i}$ and $\measurementOf{}{t,j}$
\eIf{$\stateOf{}{t,i}$ has associated $\measurementOf{}{t,j}$}
{
$\stateOf{dim}{t,i} = \measurementOf{dim}{t,j}$ and $x^{conf}_{t,i} = z^{conf}_{t,j}$
}{
$\stateOf{dim}{t,i} = \stateOf{dim}{t-1,i}$ and $x^{conf}_{t,i}=0$
}
\If{$\meanOp\left( x^{conf}_{t,i} \text{ } \forall \text{ } t \right) < c_{min}$}
{
do not output $\stateOf{}{t,i}$
}
}
\caption{Tracking Algorithm for the Online Case}
\label{algo:tracking_online}
\end{algorithm}
The detection factor is defined at each time step using \autoref{eqn:arg_min_mm} with the component means ${\vect{\mu}}_j = \measurementOf{pos}{t,j}$ and a fixed uncertainty $\sqrtinfoOf{j} = (\vect{\Sigma}^{det})^{-\frac{1}{2}}$, which corresponds to the detector's accuracy.
A generic null-hypothesis with a broad uncertainty $\vect{\Sigma}^{det}_{0}$ and mean ${\vect{\mu}}_{0} = \meanOp\left( \measurementOf{pos}{t,j} \text{ } \forall \text{ } j \right)$ is added to provide robustness against missing or wrong detections.
We use a simple constant velocity factor to describe the vehicle's motion:
\begin{equation}
\left\| \errorOf{cv}{t,i} \right\|^2_{\vect{\Sigma}^{cv}} =
\left\|
\begin{matrix}
\left( \stateOf{pos}{t,i} - \stateOf{pos}{t+1,i} \right) - \stateOf{vel}{t,i} \cdot \Delta t\\
\stateOf{vel}{t,i} - \stateOf{vel}{t+1,i}
\end{matrix}
\right\|^2_{\vect{\Sigma}^{cv}}
\label{eqn:error_cv}
\end{equation}
Please note that $ \left\| \cdot \right\|^2_{\vect{\Sigma}}$ denotes the squared Mahalanobis distance with the covariance matrix $\vect{\Sigma}$.
To prevent two objects from occupying the same space, we add another simple constraint, which penalizes the proximity of two objects. We call this factor the repelling factor:
\begin{equation}
\left\| \errorOf{rep}{t,n,m} \right\|^2_{\vect{\Sigma}^{rep}} =
\left\|
\frac{1}{\left\|\stateOf{pos}{t,n} - \stateOf{pos}{t,m}\right\|}
\right\|^2_{\vect{\Sigma}^{rep}}
\label{eqn:error_rep}
\end{equation}
The overall factor graph consists of one detection factor \autoref{eqn:arg_min_mm} per estimated object and corresponding constant velocity factors \autoref{eqn:error_cv} that connect the states of one object over time.
Repelling factors \autoref{eqn:error_rep} are added between object pairs if their Euclidean distance is below a defined threshold $d_{min}$.
A visualization of the constructed factor graph is shown in \autoref{fig:FactorGraph}.
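The two factors above reduce to simple whitened residuals. The following Python sketch, a minimal illustration assuming diagonal covariances and the symbols defined above, shows how the constant velocity and repelling residuals could be evaluated, together with the $d_{min}$ gating used when building the graph.
\begin{verbatim}
import numpy as np

def whiten(residual, cov_diag):
    # Mahalanobis whitening for a diagonal covariance: Sigma^(-1/2) r
    return residual / np.sqrt(cov_diag)

def cv_residual(p_t, p_t1, v_t, v_t1, dt, cov_cv_diag):
    """Constant velocity factor of Eq. (error_cv), sketched."""
    r = np.concatenate([(p_t - p_t1) - v_t * dt,  # position consistency
                        v_t - v_t1])              # velocity consistency
    return whiten(r, cov_cv_diag)

def repelling_residual(p_n, p_m, var_rep):
    """Repelling factor of Eq. (error_rep): penalises proximity."""
    d = np.linalg.norm(p_n - p_m)  # d > 0 assumed for distinct objects
    return (1.0 / d) / np.sqrt(var_rep)

def repelling_pairs(positions, d_min):
    """Index pairs whose Euclidean distance is below d_min."""
    pairs = []
    for n in range(len(positions)):
        for m in range(n + 1, len(positions)):
            if np.linalg.norm(positions[n] - positions[m]) < d_min:
                pairs.append((n, m))
    return pairs
\end{verbatim}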
\begin{figure}[tpb]
\centering
\vspace{0.2cm}
\includegraphics[width=0.5\linewidth]{images/kitti_fail_109_crop.png}
\hspace*{-0.7em}
\includegraphics[width=0.5\linewidth]{images/kitti_fail_110_crop.png}
\caption{Limitations of the KITTI Ground Truth: Visualized are \textit{Don't Care} areas (red), \textit{Cars} (blue) and the results of our algorithm for class \textit{Car} (green). Due to the inconsistent labels we erroneously get 4 false positives (frames 109 and 110 from KITTI training sequence 0000).}
\label{fig:kitti_gt}
\end{figure}
Our proposed algorithm is identical for offline and online tracking, except for the postprocessing step (compare \autoref{algo:tracking} and \autoref{algo:tracking_online}).
In the online use case, postprocessing has to be done at each time step $t$, while for the offline solution it can be done once at the end.
At each time step $t$ new states can be added (creation of tracks) and existing states can be deleted (death of tracks) or carried over to the next time step (see \autoref{fig:FactorGraph}).
In order to create new tracks, we need to find detections $\measurementOf{}{t,j}$ which do not correspond to any $\stateOf{}{t,i}$.
Therefore, we create a similarity matrix between all $\stateOf{}{t,i}$ (columns) and $\measurementOf{}{t,j}$ (rows), including the null-hypothesis $\measurementOf{}{0}$, by calculating $-\log \left( \probOf{\measurementOf{}{t,j}}{\stateOf{}{t,i}} \right)$.
Subsequently, we find the minimum (best similarity) and delete its state (column) from the matrix.
If the related measurement (row) is $\measurementOf{}{0}$ we mark the state as lost, since it does not correspond to any real measurement $\measurementOf{}{t,j}$.
Otherwise, we also delete the measurement from the matrix, since it has a corresponding state.
These steps are repeated until all states are deleted from the matrix.
In that way, we can simultaneously find $\stateOf{}{t,i}$ which correspond to the null-hypothesis $\measurementOf{}{0}$ and $\measurementOf{}{t,j}$ which do not correspond to any $\stateOf{}{t,i}$.
A new $\stateOf{}{t,i}$ is created for each unrelated $\measurementOf{}{t,j}$ in order to track it.
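A minimal sketch of this greedy association is given below, under the assumption that the negative log-likelihoods are already available as a matrix with states as columns and measurements as rows, row 0 holding the null-hypothesis; the data layout is illustrative.
\begin{verbatim}
import numpy as np

def greedy_association(neg_log_lik):
    """Greedy state/measurement association (sketch).

    neg_log_lik : (n_meas + 1, n_states) matrix of -log P(z | x),
                  where row 0 corresponds to the null-hypothesis z_0.
    Returns (matches, lost_states, unmatched_measurements).
    """
    cost = neg_log_lik.astype(float).copy()
    matches, lost = {}, []
    while np.isfinite(cost).any():
        j, i = np.unravel_index(np.argmin(cost), cost.shape)
        cost[:, i] = np.inf        # state i is resolved: drop its column
        if j == 0:
            lost.append(i)         # best match is the null-hypothesis
        else:
            matches[i] = j
            cost[j, :] = np.inf    # measurement j is taken: drop its row
    unmatched = [j for j in range(1, neg_log_lik.shape[0])
                 if j not in matches.values()]
    return matches, lost, unmatched
\end{verbatim}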
To suppress the tracking of false positive detections, all $\stateOf{}{i}$ which have fewer consecutive detections than the defined threshold $n_{det}$ are deleted from the factor graph.
Thereby, the algorithm can simultaneously track objects from the first occurrence and suppress false positives in the offline use case.
This is not possible for the online solution.
Instead, states are only handed to the postprocessing step if they have $n_{det}$ or more consecutive detections.
In order to handle missing detections or occlusion of objects, a track $\stateOf{}{i}$ has to be marked as lost (i.e., corresponding to $\measurementOf{}{0}$) for more than $n_{lost}$ consecutive time steps before it is terminated.
In this case, the last $n_{lost}$ states $\stateOf{}{t,i}$ are deleted, since they do not correspond to real measurements.
If a track has a corresponding $\measurementOf{}{t,j}$ before hitting $n_{lost}$, all $\stateOf{}{i}$ are kept in the factor graph and the track lives on.
Again, this is not possible in the online use case.
Instead, tracks $\stateOf{}{i}$ marked as lost are only handed to the postprocessing step for a short duration of $n_{perm}$ time steps $t$.
\subsection{Postprocessing}
After optimization, the factor graph returns the implicitly associated detection for each $\stateOf{}{t,i}$, which is either $\measurementOf{}{t,j}$ or the null-hypothesis $\measurementOf{}{0}$.
Based on this, the optimized state positions $\stateOf{pos}{t,i}$ are combined with the matched bounding box $\stateOf{dim}{t,i} = \measurementOf{dim}{t,j}$ and confidence $x^{conf}_{t,i} = z^{conf}_{t,j}$ or, in the case of a matched null-hypothesis, with the last bounding box of the same track $\stateOf{dim}{t,i} = \stateOf{dim}{t-1,i}$ and $x^{conf}_{t,i}=0$.
Subsequently, we delete tracks $\stateOf{}{i}$ with mean confidence below threshold $c_{min}$ for the offline solution (see \autoref{algo:tracking}).
In case of online tracking, the data association, bounding box fitting and track management has to be done at each time step $t$ (see \autoref{algo:tracking_online}).
Therefore, we do not delete $\stateOf{}{i}$ from the factor graph if the mean confidence is below threshold $c_{min}$, since the track can get above $c_{min}$ in the future.
Instead, $\stateOf{}{t,i}$ is not handed to the postprocessing step and thus not part of the output.
By filtering the online and offline results based on their confidence we can effectively throw away tracks with a high number of matched detections with low confidence, which are most likely false positives, and tracks which are matched primarily to the null-hypotheses.
As a result, our algorithm is robust against false positives from the object detector.
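As an illustrative sketch of this confidence-based filtering, assuming each track stores its per-step confidences with zero entries for null-hypothesis matches:
\begin{verbatim}
import numpy as np

def confident_tracks(track_confidences, c_min):
    """Keep only tracks whose mean confidence reaches c_min (sketch).

    track_confidences : dict mapping track id -> list of x_conf values,
                        where null-hypothesis matches contribute 0.
    """
    return [tid for tid, conf in track_confidences.items()
            if np.mean(conf) >= c_min]
\end{verbatim}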
\subsection{Setup}
\begin{table}[tbp]
\centering
\vspace{0.25cm}
\caption{Parameters of our FG-3DMOT algorithm}
\label{tab:params_vbi}
\resizebox{0.48\textwidth}{!}
{%
\begin{tabular}{@{}llc@{}}
\toprule
\textbf{Parameter Name} & \textbf{Symbol} & \textbf{Value} \\ \midrule
Detection Covariance & $\vect{\Sigma}^{det}$ & $\diag \begin{pmatrix} \SI{0.2}{\meter} \\ \SI{0.2}{\meter}\\ \SI{0.2}{\meter}\end{pmatrix}^2$ \\ \midrule
Detection Null-Hypothesis Covariance& $\vect{\Sigma}^{det}_{0}$ & $\diag \begin{pmatrix} \SI{100}{\meter} \\ \SI{100}{\meter}\\ \SI{1}{\meter}\end{pmatrix}^2$ \\ \midrule
Constant Velocity Covariance & $\vect{\Sigma}^{cv}$ & $\diag \begin{pmatrix} \SI{0.25}{\meter} \\ \SI{0.25}{\meter}\\ \SI{0.25}{\meter} \\ \SI{0.25}{\meter\per\second} \\ \SI{0.25}{\meter\per\second} \\ \SI{0.25}{\meter\per\second}\end{pmatrix}^2$ \\ \midrule
Repelling Covariance & $\vect{\Sigma}^{rep}$ & $\SI{0.5}{\meter^2}$ \\ \midrule
Repelling Distance Threshold & $d_{min}$ & $\SI{4}{\meter}$ \\ \midrule
Track Confidence Threshold (offline)& $c_{min}$ & $3.9$ \\ \midrule
Track Confidence Threshold (online) & $c_{min}$ & $3.5$ \\ \midrule
Number Consecutive Detections & $n_{det}$ & $2$ \\ \midrule
Num. Con. Null-Hypothesis Detections & $n_{lost}$ & $5$ \\ \midrule
Object Permanence (online) & $n_{perm}$ & $1$ \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table*}[tbph]
\centering
\vspace{0.2cm}
\caption{Results on the KITTI 2D MOT Testing Set for Class Car}
\tiny
\resizebox{0.83\textwidth}{!}
{%
\def\arraystretch{1.02}
\begin{tabular}{@{} l c c c c c c c @{}}
\toprule
{\bf Method} & {\bf MOTA $\uparrow$} & {\bf MOTP $\uparrow$} & {\bf MT $\uparrow$} & {\bf ML $\downarrow$} & {\bf IDS $\downarrow$} & {\bf FRAG $\downarrow$} & {\bf FPS $\uparrow$}\\
\midrule
TuSimple \cite{NMOT} & 86.62 \% & 83.97 \% & 72.46 \% & 6.77 \% & 293 & 501 & 1.7 \\
MASS \cite{MASS} & 85.04 \% & 85.53 \% & 74.31 \% & \bestResult{2.77} \% & 301 & 744 & 100.0 \\
MOTSFusion \cite{MOTSFusion} & 84.83 \% & 85.21 \% & 73.08 \% & \bestResult{2.77} \% & 275 & 759 & 2.3 \\
mmMOT \cite{Zhang} & 84.77 \% & 85.21 \% & 73.23 \% & \bestResult{2.77} \% & 284 & 753 & 50.0 \\
mono3DT \cite{mono3DT} & 84.52 \% & 85.64 \% & 73.38 \% & \bestResult{2.77} \% & 377 & 847 & 33.3 \\
MOTBeyondPixels \cite{BeyondPixels} & 84.24 \% & \bestResult{85.73} \% & 73.23 \% & \bestResult{2.77} \% & 468 & 944 & 3.3 \\
AB3DMOT \cite{Baseline} & 83.84 \% & 85.24 \% & 66.92 \% & 11.38 \% & \bestResult{9} & 224 & \bestResult{212.8} \\
IMMDP \cite{IMMDP} & 83.04 \% & 82.74 \% & 60.62 \% & 11.38 \% & 172 & 365 & 5.3 \\
aUToTrack \cite{autotrack} & 82.25 \% & 80.52 \% & 72.62 \% & 3.54 \% & 1025 & 1402 & 100.0 \\
JCSTD \cite{JCSTD} & 80.57 \% & 81.81 \% & 56.77 \% & 7.38 \% & 61 & 643 & 14.3 \\
\midrule
{\bf FG-3DMOT (offline)} & \bestResult{88.01} \% & 85.04 \% & \bestResult{75.54} \% & 11.85 \% & 20 & \bestResult{117} & 23.8 \\
{\bf FG-3DMOT (online)} & 83.74 \% & 84.64 \% & 68.00 \% & 9.85 \% & \bestResult{9} & 375 & 27.1 \\
\bottomrule
\end{tabular}
}
\label{tab:result}
\end{table*}
In order to evaluate our proposed algorithm, we use the KITTI 2D MOT benchmark \cite{KITTI}. It is composed of 21 training and 29 testing sequences, with total lengths of 8008 and 11095 frames, respectively.
For each sequence, 3D point clouds, RGB images of the left and right camera and ego motion data are given at a rate of 10 FPS.
The testing split does not provide any annotations, since it is used for evaluation on the KITTI benchmark server.
The training split contains annotations for 30601 objects and 636 trajectories of 8 classes.
Since annotations are scarce for most of these classes, we only evaluate our algorithm on the car subset.
As previously mentioned, we use PointRCNN \cite{PointRCNN} as 3D object detector.
Since \cite{Baseline} uses the same detector and made the obtained 3D bounding boxes publicly available, we use them for comparability reasons.
We normalize the provided confidence to positive values by adding a constant offset.
Since our algorithm is robust against false positive detections, we achieve the best results by using all available detections.
All other parameters are shown in \autoref{tab:params_vbi}.
We construct the graph using the libRSF framework \cite{Pfeifer} and solve the optimization problem with Ceres \cite{Agarwal}, using the dogleg optimizer.
Since the KITTI 2D MOT benchmark is evaluated in the image space through 2D bounding boxes, we need to transform our 3D bounding boxes into the image space and flatten them into 2D bounding boxes.
Furthermore, we only output 2D bounding boxes which overlap at least 25\% with the image, in order to suppress detections that are visible in the laser scan but not in the image.
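A hedged sketch of this visibility filter, assuming the flattened 2D boxes are given as pixel-coordinate corners, is shown below; the exact clipping convention of our implementation may differ.
\begin{verbatim}
def image_overlap_fraction(box, img_w, img_h):
    """Fraction of a 2D box [x1, y1, x2, y2] inside the image."""
    x1, y1, x2, y2 = box
    cx1, cy1 = max(x1, 0.0), max(y1, 0.0)
    cx2, cy2 = min(x2, float(img_w)), min(y2, float(img_h))
    inter = max(cx2 - cx1, 0.0) * max(cy2 - cy1, 0.0)
    area = (x2 - x1) * (y2 - y1)
    return inter / area if area > 0 else 0.0

def keep_box(box, img_w, img_h, min_overlap=0.25):
    # Suppress boxes visible in the laser scan but not in the image
    return image_overlap_fraction(box, img_w, img_h) >= min_overlap
\end{verbatim}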
\subsection{Results}\label{sec:results}
We evaluated our tracking algorithm in the online and offline use cases on the KITTI 2D MOT testing set.
Since offline methods are rare on the leaderboard and not among its best-performing entries, we compare both our online and offline results against the 10 best methods on the leaderboard (accessed February 2020), which are summarized in \autoref{tab:result}.
The metrics used, namely multi object tracking accuracy (MOTA), multi object tracking precision (MOTP), mostly tracked objects (MT), mostly lost objects (ML), ID switches (IDS) and track fragmentation (FRAG), are defined in \cite{Li09learningto,Bernardin}.
The proposed algorithm achieves accurate, robust and reliable tracking results in online as well as offline application, performing better than many state-of-the-art algorithms.
In the offline use case, we improve the state of the art in accuracy, mostly tracked objects and the fragmentation of tracks.
Especially the fragmentation of our tracks is significantly lower than the previous state-of-the-art, since our algorithm can propagate tracks without any measurements far into the future and truncate them afterwards, if no measurements were associated.
Even for the online solution, we achieve low track fragmentation and state-of-the-art ID switches.
Furthermore, the results demonstrate that our approach benefits from the optimization of past states and the ability to re-assign past detections based on future information, since it achieves considerably higher accuracy and lower fragmentation in the offline application.
Although we did not optimize run time in any way, our algorithm is real-time feasible in the online as well as the offline use case.
The computation time is similar in both cases and could be improved significantly, especially for the online use case.
We provide a visualization of the results of our offline tracker in the accompanying video, which features KITTI testing sequence \textit{0014}\footnote[1]{https://youtu.be/mvZmli4jrZQ}.
\textbf{Limitations of the KITTI Ground Truth:}
During the evaluation of our algorithm on the training split we discovered the limits of KITTI's 2D bounding box ground truth.
As an example we visualized a failure case of the ground truth in \autoref{fig:kitti_gt}.
The ground truth uses \textit{Don't Care} 2D bounding boxes to label regions with occluded, poorly visible or far away objects.
Bounding boxes which overlap at least 50\% with a \textit{Don't Care} area are not evaluated.
As seen in \autoref{fig:kitti_gt}, the ground truth is not labeled consistently and objects (here cars) are neither labeled \textit{Don't Care} nor \textit{Car}.
Therefore, we get a lot of false positives, since our algorithm can track objects from the first occurrence and from high distances.
As a result, we assume that we could achieve a significantly higher accuracy (MOTA) with a consistently labeled ground truth.
\textbf{Limitations of our Algorithm:}
After the evaluation on the testing set of the KITTI benchmark we discovered one main failure case of our algorithm.
In testing sequences \textit{0005} and \textit{0006} the ego-vehicle captures a highway scene, hence the ego-vehicle itself and all other cars are moving at high velocities.
Since our algorithm works in a static 3D coordinate system and, as a general assumption, initializes new objects with zero velocity, these cases are hard for our algorithm to capture.
Furthermore, such scenes with fast moving cars are not present in the training split, so we could not adapt our algorithm to handle them.
This issue will be addressed in future work.
\subsection{3D Object Detection}
Reliable and accurate object detection is a crucial component of tracking algorithms that follow the tracking-by-detection paradigm.
Prior work in the domain of 3D object detection can be roughly categorized into three classes.
The algorithms proposed in \cite{Ku2019} and \cite{Xu2018} use only 2D images to directly predict 3D object proposals using neural networks.
Another common approach is the combination of 2D images and 3D point clouds through neural networks to obtain 3D bounding boxes \cite{Ku2018,Chen2017}.
Methods of the third category solely rely on 3D point clouds and either project them to a 2D bird's-eye view \cite{Yang2018}, represent them as voxels \cite{VoxelNet} or directly extract 3D bounding boxes from raw point clouds \cite{PointRCNN}.
\subsection{Multi-Object Tracking}
Following the common tracking-by-detection approach, offline trackers try to find the globally optimal solution for the data association task over whole sequences. Typical methods are min-cost flow algorithms \cite{GlobalAssociation}, Markov Chain Monte Carlo \cite{Choi} or linear programming \cite{Berclaz}.
In contrast, online tracking algorithms only rely on past and current information and need to be real-time feasible. Filter-based approaches such as the Kalman filter \cite{Baseline} or the particle filter \cite{Breitenstein} are common.
The data association step is often formulated as a bipartite graph matching problem and solved with the Hungarian algorithm \cite{Baseline,SimpleOnline}.
In the domain of 3D multi-object tracking a lot of work focuses on neural network based approaches, especially end-to-end learned models like \cite{Zhang,Luo}.
Both jointly learn 3D object detection and tracking.
The authors of \cite{Luo} rely solely on point clouds as input, while \cite{Zhang} utilizes camera images and point clouds.
Another approach is the incorporation of neural networks into filter-based solutions.
The authors of \cite{Scheidegger} use 2D images as input for a deep neural network and combine it with a Poisson multi-Bernoulli mixture filter to obtain 3D tracks.
\subsection{Factor Graphs}
Although factor graphs are widely used in the field of robotics for state estimation \cite{Dellaert2017, Pfeifer2019}, they are not common in the tracking community.
The authors of \cite{Schiegg} focus on solving the data association for 2D cell tracking, but do not optimize the cell positions.
In \cite{Wang} the data association for multi-object tracking is solved using a factor graph in a 2D simulation, but an extended Kalman filter is applied for track filtering and prediction.
Therefore, the approach is not able to change the initial data association in past states.
Other work focuses on tracking with multiple sensors \cite{Meyer2017} or between multiple agents \cite{Meyer2016} and solves it via particle-based belief propagation in 2D space.
Their evaluation, however, is limited to simulations, and the applicability to real-world use cases is unclear.
In contrast to prior work, we want to introduce online/offline capable, factor-graph-based multi-object tracking in 3D space.
Furthermore, our approach is able to jointly describe data association and state positions for current and past states in a single factor graph and solve it effectively via non-linear least squares optimization.
\section{Introduction}
\input{introduction.tex}
\setlength{\headheight}{0pt}
\section{Related Work}
\input{related_work.tex}
\section{Multimodality and Factor Graphs}
\input{factorgraph.tex}
\section{Factor Graph based Tracking}
\input{factorgraph_tracking.tex}
\section{Experiments}
\input{experiments.tex}
\section{Conclusion}
\input{conclusion.tex}
\bibliographystyle{IEEEtran}
\balance
\subsection{Factor Graphs for State Estimation}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.9\linewidth]{images/FG_Tracking.pdf}
\caption{Example of the factor graph representation of the proposed 3D multi-object tracking algorithm. Small dots represent error functions (factors) that define the least squares problem and big circles are the corresponding state variables. The set of tracked object varies over time due to the appearance and disappearance of objects from the field of view.}
\label{fig:FactorGraph}
\end{figure}
The factor graph, as a graphical representation of non-linear least squares optimization, is a powerful tool to solve complex state estimation problems and dominates today's progress in state estimation for autonomous systems \cite{Dellaert2017}.
\begin{equation}
\vect{\opt{X}} =\argmax_{\vect{X}} \vect{P}(\vect{X}|\vect{Z})
\label{eqn:argmax}
\end{equation}
To estimate the most likely set of states $\vect{\opt{X}}$ based on a set of measurements $\vect{Z}$, \autoref{eqn:argmax} is solved using the factorized conditional probabilities:
\begin{equation}
\probOf{\vect{X}}{\vect{Z}} \propto \prod_i \probOf{\measurementOf{}{i}}{\stateOf{}{i}} \cdot \prod_j \vect{P}(\stateOf{}{j})
\label{eqn:posteriori}
\end{equation}
Please note that we omit the priors $\vect{P}(\stateOf{}{j})$ in further equations for simplicity.
By assuming a Gaussian distributed conditional $\probOf{\measurementOf{}{i}}{\stateOf{}{i}}$, the maximum-likelihood problem can be transformed to a minimization of the negative log-likelihood:
\begin{equation}
\vect{\est{X}} = \argmin_{\vect{X}} \sum_{i} \frac{1}{2} \left\| \info^{\frac{1}{2}} \left( \errorOf{}{i} - {\vect{\mu}} \right) \right\| ^2
\label{eqn:arg_min}
\end{equation}
Therefore, the estimated set of states $\vect{\est{X}}$ can be obtained by applying non-linear least squares optimization of the measurement function $\errorOf{}{i} = f(\measurementOf{}{i}, \stateOf{}{i})$.
Additionally, the mean ${\vect{\mu}}$ and square root information $\info^{\frac{1}{2}}$ of a Gaussian distribution are used to represent the sensor's error characteristic.
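As a minimal numerical illustration of \autoref{eqn:arg_min}, the following sketch evaluates the single-Gaussian factor cost and shows, for a 1D toy example, that the minimiser of the summed cost is the information-weighted mean of the measurements; all names are illustrative.
\begin{verbatim}
import numpy as np

def gaussian_cost(e, mu, sqrt_info):
    """0.5 * || I^(1/2) (e - mu) ||^2, the single-Gaussian cost."""
    r = sqrt_info @ (e - mu)
    return 0.5 * r @ r

# Toy example: one 1D state observed twice. The minimiser of the
# summed cost is the information-weighted mean of the measurements.
mus = np.array([1.0, 2.0])
infos = np.array([4.0, 1.0])          # inverse variances
x_hat = (infos * mus).sum() / infos.sum()
print(x_hat)                          # 1.2
\end{verbatim}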
State-of-the-art frameworks like GTSAM \cite{Dellaert} or Ceres \cite{Agarwal} allow an efficient solution for online as well as offline estimation problems.
The major advantage over traditional filter-based solutions is the flexibility to re-estimate past states with the usage of current information.
This enables the algorithm to correct past estimation errors and improves even the estimation of current states.
Our algorithmic goal is the application of this capability to overcome the limitation of fixed assignments that filters have to obey.
\begin{algorithm}[tbp]
\SetAlgoLined
generate detections $\vect{Z}$ using PointRCNN \cite{PointRCNN}
\ForEach{time step $t$}
{
create GMM based on $\measurementOf{}{t,j}$ and null-hypothesis
\eIf{$t == 0$}
{
init $\stateOf{pos}{t,i}$ at $\measurementOf{pos}{t,j}$ and $\stateOf{vel}{t,i}=[0,0,0]$
}{
propagate $\stateOf{pos}{t-1,i}$ to $t$
get correspondence between $\stateOf{}{t,i}$ and $\measurementOf{}{t,j}$ according to $-\log \left( \probOf{\measurementOf{}{t,j}}{\stateOf{}{t,i}} \right)$
\If{$\stateOf{}{t,i}$ does not correspond to any $\measurementOf{}{t,j}$}
{
keep $\stateOf{}{t,i}$ marked as lost or delete it
}
\If{$\measurementOf{}{t,j}$ does not correspond to any $\stateOf{}{t,i}$}
{
init $\stateOf{pos}{t,i}$ at $\measurementOf{pos}{t,j}$ and $\stateOf{vel}{t,i}=[0,0,0]$
}
}
add factors \autoref{eqn:arg_min_mm}, \autoref{eqn:error_cv} and \autoref{eqn:error_rep}
optimize factor graph
}
get association between $\vect{X}$ and $\vect{Z}$
\eIf{$\stateOf{}{t,i}$ has associated $\measurementOf{}{t,j}$}
{
$\stateOf{dim}{t,i} = \measurementOf{dim}{t,j}$ and $x^{conf}_{t,i} = z^{conf}_{t,j}$
}{
$\stateOf{dim}{t,i} = \stateOf{dim}{t-1,i}$ and $x^{conf}_{t,i}=0$
}
\If{$\meanOp\left( x^{conf}_{t,i} \text{ } \forall \text{ } t \right) < c_{min}$}
{
delete $\stateOf{}{i}$
}
\caption{Tracking Algorithm for the Offline Case}
\label{algo:tracking}
\end{algorithm}
\subsection{Solving the Assignment Problem}
Despite their capabilities, factor graphs are facing the same challenges as filters when it comes to unknown assignments between measurements and states.
The assignment can be represented by a categorical variable $\vect{\theta} = \left\{ \theta_{i,j} \right\}$ which describes the probability that the $j$th detection belongs to the $i$th object.
To solve the assignment problem exactly, the following integral over $\vect{\theta}$ has to be solved:
\begin{equation}
\probOf{\vect{X}}{\vect{Z}} = \int \probOf{\vect{X}, \vect{\theta}}{\vect{Z}} \mathrm{d}\vect{\theta}
\label{eqn:hidden}
\end{equation}
Common filter-based solutions estimate $\vect{\theta}$ once, based on the predicted states, and assume it to be fixed in the following inference process.
This leads to a decreased performance in the case of wrong assignments.
Instead of including wrong assumptions, we formulate the state estimation problem without any assumptions regarding the assignment.
Therefore, we assume that $\vect{\theta}$ follows a discrete uniform distribution.
This can be done by describing the whole set of measurements with an equally weighted Gaussian mixture model (GMM), as formalized in \autoref{eqn:gmm} and approximated for least squares optimization by \autoref{eqn:arg_min_mm}.
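For illustration, the negative log-likelihood that is later used to build the association similarity matrix can be evaluated from this GMM as in the following sketch; the use of \texttt{logsumexp} for numerical stability is an implementation assumption.
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def neg_log_gmm(e, mus, sqrt_infos, weights):
    """-log P(z | x) under the equally weighted GMM (sketch)."""
    log_terms = []
    for w, mu, S in zip(weights, mus, sqrt_infos):
        r = S @ (e - mu)                         # whitened residual
        log_c = np.log(w) + np.log(np.linalg.det(S))
        log_terms.append(log_c - 0.5 * r @ r)
    return -logsumexp(log_terms)
\end{verbatim}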
\section{Introduction} \label{sec:intro}
Low mass X-ray binaries (LMXBs) are binary systems hosting a compact object, that can be a neutron star (NS) or a stellar-mass black hole (BH), and a low-mass companion star (with mass $\lesssim 1M_{\odot}$). The latter is typically a main-sequence star, filling its Roche lobe and transferring matter and angular momentum to the compact object through the formation of an accretion disc.
LMXBs can be transient, displaying short and sudden outbursts, with X-ray luminosities that can reach $L_{X}\sim 10^{36}-10^{38}\rm \, erg/s$ and high accretion rates, and longer, quieter intervals of quiescence, with a drop of the X-ray luminosity by up to seven orders of magnitude. At X-ray frequencies, outbursts are typically characterised by a sharp increase of the flux, lasting days--months, and a longer, slower decay, that can take place over weeks or months, until reaching its former quiescent level \citep{Frank1987}.
X-ray radiation typically comes from the internal part of the accretion disc, close to the compact object \citep{Lasota2001}, from the corona (which is a region of hot electron plasma that is thought to surround the compact object and, according to some models, the accretion disc) and, in case of NSs, from the compact object itself; optical radiation, on the other hand, is thought to primarily come from the companion star and the external part of the disc, the latter being dominant during outbursts, plus a contribution in some systems from synchrotron radiation from compact, collimated jets (see e.g. \citealt{Homan2005}; \citealt{Russell2007}; \citealt{Buxton2012}; \citealt{Kalemci2013}; \citealt{Baglio2018}; \citealt{Baglio2020}). A rise in the optical flux is expected to occur as the temperature in the disc increases, triggering the ionization of hydrogen which may start the outburst (see \citealt{Lasota2001} for a review).
The mechanism that triggers such outbursts is still uncertain. The most widely accepted scenario is the \textit{disc-instability model} (DIM; see \citealt{Lasota2001} and \citealt{Hameury2020} for reviews).
The DIM was first suggested to explain the outbursts in dwarf novae (a subclass of cataclysmic variables, that display recurrent outbursts; see \citealt{Cannizzo1982}), and then extended to LMXBs due to the analogy that was observed between the two classes of systems during outbursts, in particular regarding their fast-rise and exponential decay (\citealt{vanparadijs1984}; \citealt{Cannizzo1985}). According to the DIM, the instability is driven by the ionization state of hydrogen in the disc. If all the hydrogen in the disc is ionized, the system is considered as stable, as it happens e.g. in persistent LMXBs or in nova-like systems (i.e. the class of cataclysmic variables that show a persistent behaviour). However, if the mass accretion rate or the temperature becomes low enough to allow for the recombination of hydrogen, then a thermal-viscous instability can occur in the disc, that oscillates between a hot, ionized state, that we call outbursts, and a cold, recombined state, that is quiescence.
When the system is in quiescence, the cold accretion disc accumulates mass until a critical density is reached, while the temperature rises until the hydrogen ionization temperature is attained at a certain radius (the ignition point).
At the ignition point, two heating fronts are generated (\citealt{Smak1984}; \citealt{Menou1999}), one propagating inwards, and the other outwards.
Two different types of outbursts can be observed, depending, above all, on how fast the two fronts propagate. ``Inside-out'' outbursts start at small radii, and the inward heating front quickly reaches the inner accretion disc; ``outside-in'' outbursts are instead ignited further out in the disc, so the propagation towards the inner disc takes longer.
In addition, in inside-out outbursts the outward heating front propagates towards regions of higher densities, while outside-in fronts will always encounter regions with decreasing surface density \citep{Dubus2001}. Therefore, it is easier for an inside-out outburst to stall, and to develop a cooling front that switches off the outburst. Inside-out outbursts therefore typically propagate slowly, leading to long rise times of the outburst.
Once the outburst is triggered, accretion continues at high rates, giving rise to the observed high X-ray luminosity. Then the outburst starts to decay, and the disc is depleted, bringing the system back to its quiescent state \citep{Lasota2001}.
This picture is very simplified, and many studies have shown that the effects of direct and indirect irradiation from the compact object, the evaporation of the accretion disc in a region close to the compact object (for example, the hot inner flow, or corona), and geometrical effects must be taken into account for the DIM to work for LMXBs (see \citealt{Dubus1999}, \citealt{Dubus2001}). In particular, irradiation has been found to ease the propagation of the outward heating front in inside-out outbursts by reducing the critical density needed for a certain ring of the disc to become thermally unstable \citep[][]{Dubus2001}. Moreover, some variations are observed for different systems; for example, the time delay between the occurrence of the disc instability (beginning with the heating front propagation in the disc) and the actual start of the outburst (when accretion onto the compact object is detected as an increase in X-ray luminosity) can differ from system to system.
Observations of the optical rise to outburst are crucial to probe the DIM (in particular, the measurement of the optical to X-ray delay of the rise to outburst, and the gradual optical long-term increase that is sometimes observed before an outburst is triggered). Unfortunately, such observations are often difficult, since outbursts typically rise in a few days, and are frequently detected only when the X-ray flux rises above the all-sky monitors' detection threshold, therefore missing the initial stages of the optical rise. Such optical to X-ray delays during the rise of an outburst have been measured using optical monitoring and X-ray all-sky monitors in a few systems, such as V404 Cyg ($< $7 days; \citealt{Bernardini2016_precursor}), GRO J1655-40 ($<6$ days; \citealt{Orosz1997}; \citealt{Hameury1997}), XTE J1550-564 ($<9$ days; \citealt{Jain2001}), XTE J1118+480 ($<$10 days; \citealt{Wren2001}; \citealt{Zurita2006}), 4U 1543-47 ($<5$ days), ASASSN-18ey (MAXI J1820+070; $<7$ days; \citealt{Tucker2018}), Aql X-1 (3-8 days; \citealt{Shahbaz1998}; \citealt{Russell2019}), etc. Recently, a delay of 12 days was measured for the NS LMXB SAX J1808.4-3658 \citep[][]{Goodwin2020} using an X-ray instrument more sensitive than an all-sky monitor ($NICER$), giving an important confirmation to the optical to X-ray delay during the onset of outbursts in LMXBs.
It is clear that the continuous optical monitoring of LMXBs is essential in order to obtain such measurements, together with many other possible achievements (like, e.g., the study of the quiescent behaviour of the sources, or the monitoring of the different stages of a LMXB outburst; \citealt{Russell2019}). As part of this effort, we have been monitoring $\sim 50$ LMXBs with the Las Cumbres Observatory (LCO) and Faulkes 2-m and 1-m robotic telescopes since 2008 \citep[][]{Lewis2008}, and recently we developed a pipeline, the X-ray Binary New Early Warning System (XB-NEWS), that is able to process all the collected data as soon as they are acquired and produces real-time light curves of all the monitored objects \citep[for more details on the project, see][]{Russell2019}. Since the monitoring was started and the pipeline has been routinely running, we have been able to detect the onset of outbursts in a few cases before the X-ray all-sky monitors could, as in the case of SAX J1808.4-3658 \citep[][]{Goodwin2020} and the one presented in this work.
\section{Centaurus X-4}
Cen X-4 (short for Centaurus X-4) is a NS LMXB, discovered in July 1969 during an outburst by the X-ray satellite Vela 5B \citep{Conner1969}. The source had a second outburst ten years later, in 1979, as detected by the All-Sky Monitor experiment on the Ariel 5 satellite \citep{Kaluzienski1980}, and radio detections were reported \citep{Hjellming1979}. The optical counterpart was identified with a bright, blue object, which brightened from $V\sim 18.7$ mag to $V\sim 12.8$ mag \citep{Canizares1980}. Later, the companion star was classified as a $0.35\,M_{\odot}$ K5--7 V star, filling a $0.6\, R_{\odot}$ Roche lobe (\citealt{Shahbaz1993}; \citealt{Torres2002}; \citealt{Davanzo2005}; \citealt{Shahbaz2014}). The ratio between the masses of the two stars has also been carefully evaluated by \citet{Shahbaz2014}, thanks to which a relatively accurate estimate of the neutron star mass has been derived ($M_{\rm NS}=1.94^{+0.37}_{-0.85}\,M_{\odot}$).
The orbital period has been measured with different techniques, leading to a period of $\sim 15.1$ hr (see \citealt{McClintock1990}; \citealt{Torres2002}; \citealt{Casares2007}).
Cen X-4 is one of the brightest quiescent NS-LMXBs in the optical, with $V\sim 18.7$ mag, and a non-negligible accretion disc contribution at optical frequencies also in quiescence
(\citealt{Shahbaz1993}, \citealt{Torres2002}, \citealt{Davanzo2005}).
The interstellar absorption is low, $A_{\rm V}=0.31\pm0.16$ mag \citep{Russell2006}, and the distance to the system is $1.2\pm0.2$ kpc \citep{Chevalier1989}, that is reasonably consistent with the most recent estimate obtained with Gaia ($2.1^{+1.2}_{-0.6}$ kpc; \citealt{Bailer2018}).
Cen X-4 has been in quiescence since the end of its second outburst in 1979. In December 2020, signs of a possible gradual brightening over the previous $\sim 3$ years were reported thanks to an optical monitoring of the source performed with the LCO 2-m and 1-m robotic telescopes \citep{Waterval2020}. After 2020 August 31 (MJD 59092), the source was Sun-constrained until 2020 December 30 (MJD 59213); the first LCO observation after the Sun constraint ended showed a significant brightening in all optical bands \citep{Saikia2021}, which then resulted in prominent flaring activity that lasted for $\sim 2$ weeks. By mid-January, the source was back to its quiescent levels at all wavelengths \citep{vandenEijnde2021}.
In this paper, we present long-term optical monitoring of Cen X-4, which led to the prediction of a possible new outburst, and we report on the subsequent observed flaring activity using optical and X-ray observations.
For the whole study presented in this work, the following Python packages have been used for coding purposes: Matplotlib \citep[][]{Hunter2007} and NumPy \citep[][]{Vanderwalt2011}. Additional data analyses were done using IDL version 8.7.3.
\section{Observations and data analysis} \label{sec:obs}
\subsection{Optical monitoring with LCO}
Cen X-4 has been regularly monitored in the optical during the last $\sim 13.5$ years with the LCO 2-m and 1-m robotic telescopes, from February 14, 2008 (MJD 54510) to June 30, 2021 (MJD 59395), mostly using $V$, $R$ and $i'$ filters (Tab. \ref{tab:wavelengths}). In total, the monitoring campaign until 2021 June 30 had acquired 316, 183 and 315 images in $V$, $R$ and $i'$, respectively, plus 110 and 36 images in the $g'$ and $r'$ filters, respectively. The images have been processed and analysed by the recently developed XB-NEWS pipeline, which downloads the reduced images (i.e. bias, dark, and flat-field corrected images) from the LCO archive\footnote{\url{https://archive.lco.global}}, automatically rejects poor quality reduced images, performs astrometry using Gaia DR2 positions\footnote{\url{https://www.cosmos.esa.int/web/gaia/dr2}}, carries out multi-aperture photometry (MAP; \citealt{Stetson1990}), solves for photometric zero-point offsets between epochs \citep[][]{Bramich2012}, and flux-calibrates the photometry using the ATLAS-REFCAT2 catalog \citep[][]{Tonry2018}. If the target is not detected in an image above the detection threshold, then XB-NEWS performs forced MAP at the target coordinates. In this case, we reject all forced MAP magnitudes with an uncertainty $> 0.25$ mag, as these are very uncertain photometric measurements. The pipeline produces near real-time calibrated light curves. For further details on XB-NEWS, see \citet{Russell2019} and \citet{Goodwin2020}.
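As a minimal, hedged illustration of the last quality cut (the rejection of forced-photometry magnitudes with uncertainty $> 0.25$ mag), assuming the photometry is held in simple arrays:
\begin{verbatim}
import numpy as np

def reject_uncertain_forced_map(mag, mag_err, is_forced, max_err=0.25):
    """Drop forced MAP points with uncertainty > max_err mag (sketch)."""
    keep = ~(is_forced & (mag_err > max_err))
    return mag[keep], mag_err[keep]
\end{verbatim}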
\begin{table}[]
\centering
\begin{tabular}{cc|cc}
\hline
Filter & $\nu_c$ (Hz) & Filter & $\nu_c$ (Hz) \\
\hline
$uvw2$ & $1.556\times10^{15}$ & $R$ & $4.680\times10^{14}$ \\
$uvm2$ & $1.334\times10^{15}$ & $i'$ & $3.979\times10^{14}$ \\
$uvw1$ & $1.154\times10^{15}$ & $z'$ & $3.286\times10^{14}$ \\
$u$ & $8.658\times 10^{14}$ & $J $ & $2.419\times10^{14}$ \\
$g'$ & $6.289\times10^{14}$ & $H$ & $1.807\times10^{14}$ \\
$V$ & $5.505\times10^{14}$ & $K$& $1.389\times10^{14}$\\
$r'$ & $4.831\times10^{14}$ & & \\
\hline
\end{tabular}
\caption{Central frequency $\nu_c$ of the UV/optical/NIR filters that are relevant for this work.}
\label{tab:wavelengths}
\end{table}
By visual inspection of the light curves, the presence of a number of outliers was evident. We therefore performed a systematic search for outliers in the light curves by plotting each band against the others, using observations taken a maximum of 0.5 days apart. We then selected all points lying outside the 2-sigma interval and investigated the corresponding images. The majority of these images (a total of 9 images in the $V$, $R$ and $i'$ bands) were found to be of poor quality for various reasons (i.e. background issues) and were therefore rejected.
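The outlier search can be sketched as follows; the pairing tolerance and the 2-sigma clipping follow the procedure described above, while the array layout and the linear band-vs-band fit are assumptions made for illustration.
\begin{verbatim}
import numpy as np

def flag_outliers(t1, m1, t2, m2, max_dt=0.5, n_sigma=2.0):
    """Pair two bands within max_dt days and flag n_sigma outliers.

    Returns indices into band 1 whose magnitude deviates by more
    than n_sigma from the band-vs-band relation.
    """
    # Match each band-1 epoch to the nearest band-2 epoch
    idx2 = np.abs(t1[:, None] - t2[None, :]).argmin(axis=1)
    ok = np.abs(t1 - t2[idx2]) <= max_dt
    x, y = m2[idx2][ok], m1[ok]
    # Linear fit of band 1 vs band 2, then sigma clipping of residuals
    coeff = np.polyfit(x, y, 1)
    res = y - np.polyval(coeff, x)
    bad = np.abs(res) > n_sigma * res.std()
    return np.where(ok)[0][bad]
\end{verbatim}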
\begin{figure*}
\centering
\includegraphics[width=19cm]{lc_long.png}
\caption{13.5 years of optical monitoring of Cen X-4 performed with LCO in $g'$, $V$, $r'$, $R$ and $i'$ bands. All magnitudes are calibrated; error bars represent $1\sigma$ uncertainties.}
\label{fig:long_monitoring}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{Fig2_prova3.png}
\caption{\textit{Top panel}: zoom of the LCO light curve between 2020, December 30 (MJD 59213) and 2021, January 26 (MJD 59240); \textit{Middle panel}: \textit{Swift}/UVOT observations of the 2021 flare. Upper limits are not plotted, for clarity. All UVOT magnitudes are AB magnitudes. The solid, dashed, dotted and dash-dotted lines mark the quiescent levels of the source in the uvw1, uvw2, uvm2 and u bands, respectively. The quiescent levels have been estimated by averaging UVOT archival data, starting January 2012. \textit{Bottom panel}: \textit{Swift}/XRT and {\it NICER} observations of the 2021 flare.}
\label{fig:flare_lc}
\end{figure}
In the end, a total of 109, 292, 36, 163 and 294 reliable magnitudes in the $g'$, $V$, $r'$, $R$ and $i'$ bands (Tab. \ref{tab:wavelengths}), respectively, were obtained during our long-term optical monitoring of Cen X-4 with LCO (Fig. \ref{fig:long_monitoring}).
\subsection{Optical and near infrared observations with REM}
Cen X-4 was observed on January 5, 2021 (MJD 59219) with the 60cm Rapid Eye Mount (REM; \citealt{Zerbi2001}; \citealt{Covino2004}) telescope (La Silla, Chile). Strictly simultaneous, 300s integration time observations have been obtained using the optical SDSS $g'r'i'z'$ filters (Tab. \ref{tab:wavelengths}), for a total of 9 observations per filter. Images were reduced using standard procedures (bias subtraction and flat-field correction), and aperture photometry was performed on the stars in the field using {\tt PHOT} in {\tt IRAF}\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which
is operated by the Association of Universities for Research in Astronomy, Inc.,
under cooperative agreement with the National Science Foundation.}. Photometry was then flux-calibrated using APASS\footnote{\url{http://www.aavso.org/download-apass-data}} stars in the field \citep[][]{Henden2019}.
The system was then observed again on May 22, 2021 (MJD 59356) with REM. Observations were acquired in the optical SDSS $g'$, $r'$, $i'$, $z'$ bands, strictly simultaneously (90s integration, for a total of 26 images/filter). Reduction and analysis of the optical data was performed as described above.
At the same time, NIR (2MASS $JHK$ bands) observations were acquired with the REMIR camera mounted on REM, alternating the filters and performing 15\,s integration exposures. A total of 90 images per filter were acquired. The images were dithered in order to evaluate the variable contribution of the sky, which was then subtracted from each image. Images were then combined 5 by 5 to increase the signal-to-noise ratio. Flux calibration of the NIR images was performed against a group of 2MASS stars in the field.
\subsection{Swift X-ray and optical/UV monitoring}
The Neil Gehrels Swift Observatory \citep[hereafter \emph{Swift};][]{buhino2005} observed Cen X--4 16 times between 2020 December 28 and 2021 January 23 with the X-Ray Telescope (XRT; \citealt{Burrows2005}) and Ultraviolet and Optical Telescope (UVOT; \citealt{Roming2005}) instruments. For the XRT we only analyzed data obtained when the instrument was in Photon Counting mode, as the source was too faint to be detected in the short Window Timing mode exposures. For each XRT observation, we extracted source spectra from a circular aperture with a radius of 20 pixels ($\sim$47\arcsec) centred on the source. Background-only spectra were extracted from an annulus with inner and outer radii of 40 and 60 pixels ($\sim$94\arcsec\ and $\sim$141\arcsec), respectively, also centred on the source. From the background-subtracted spectra, we created a 0.5--10 keV light curve (with one data point per observation), correcting for changes in the effective area between observations that resulted from differences in how bad columns affected the source counts.
The UVOT instrument observed Cen X-4 during the 2020/2021 flare, using all available filters ($v$, $b$, $u$, $uvw1$, $uvm2$, $uvw2$), for a total of 14 epochs between MJD 59211 (2020 December 28) and MJD 59237 (2021 January 23). The data were analysed using the {\tt uvotsource} HEASOFT routine, with a circular source extraction region of radius 3 arcsec centred on the source, and a circular background region of radius 10 arcsec away from the source.
Several detections were obtained, in addition to some upper limits, in all bands. The light curves are shown in the middle panel of Fig. \ref{fig:flare_lc}.
\subsection{{\it NICER} monitoring}
The Neutron Star Interior Composition Explorer ({\it NICER}; \citealt{Gendreau2012}; \citealt{gearad2016}) observed Cen X-4 extensively in early 2021. We analyzed all observations made between January 1 and February 19. The observations, each comprising one or more good time intervals (GTIs), were reprocessed using the {\tt nicerl2} script that is part of the {\tt NICERDAS} package in {\tt HEASOFT v6.28}, using calibration version 20200722.
Spectra were extracted for each GTI using the tool {\tt nibackgen3C50}, which also creates background spectra \citep{remillard2021}. For some GTIs, the parameters used to calculate background spectra could not be matched with the pre-calculated library of background spectra used by {\tt nibackgen3C50}. In those cases the GTI was excluded from our analysis, leaving a total of 186 spectra, with exposure times ranging from 51 s to 2627 s.
Background-subtracted light curves in the 0.5-10 keV band were extracted from the spectra, with each data point representing the average count rate of a single GTI. Inspection of the resulting light curve revealed strong flaring during the interval MJD 59240--59250 (January 26 to February 5, 2021), likely due to residual background. By filtering out GTIs for which the background count rate in the 0.5-10 keV band was $>$0.5 counts\,s$^{-1}$, these ``flaring'' episodes were almost completely removed. Several suspicious outliers on MJD 59241-59242 and after MJD 59260 were removed manually.
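A minimal sketch of this background cut (the file name and column layout are assumptions made for illustration) is:
\begin{verbatim}
import numpy as np

# Hypothetical per-GTI table: time (MJD), source and background count
# rates in the 0.5-10 keV band, derived from the 3C50 products.
mjd, src_rate, bkg_rate = np.loadtxt("nicer_gti_rates.txt", unpack=True)

# Keep only GTIs whose residual background is below 0.5 counts/s.
keep = bkg_rate < 0.5
mjd, src_rate = mjd[keep], src_rate[keep]
\end{verbatim}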
\section{Results}\label{Results_Section}
\subsection{The long-term optical monitoring}
The long-term optical light curves obtained with LCO are shown in Fig. \ref{fig:long_monitoring}. Strong variability is observed, characterized by optical flares or dips of up to $\sim 0.5$ mag on timescales of 1-2 months. A similar level of activity was previously reported in the optical by \citet{Zurita2003} and in the X-rays/UV by \citet{Campana2004} and \citet{Bernardini2013}. Moreover, a decreasing trend in the average optical flux is observed in the long-term monitoring, up to $\sim$ MJD 58016 (September 20, 2017). After that date, the average flux gradually increased until right before the start of the 2020 period of Sun constraint (i.e. September 2020).
Long-term variability in the optical light curves of LMXBs is typically related to an evolution of the accretion disc (see e.g. the case of V404 Cyg; \citealt{Bernardini2016_precursor}), or, more rarely, of the jet (as in the case of Swift J1357.2-0933; \citealt{Russell2018}; Caruso et al. in preparation). The companion star contribution is instead expected to exhibit a double-humped ellipsoidal modulation at the orbital period of the source \citep{Orosz1997}.
We know from previous studies that jets are unlikely to contribute to the quiescent optical emission of Cen X-4 \citep{Baglio2014}, and therefore we will principally focus on the accretion disc emission.
To estimate the level of variability of the stable accretion disc, we first determined a flux threshold for the emission of flares. Flares likely originate in the accretion disc, and are probably due to variability in the accretion rate, which happens on the viscous time scale (days--weeks), or could be related to irradiation and have timescales of seconds-minutes.
Following the work by \citet{Jonker2008} on the accreting millisecond X-ray pulsar IGR J00291+5934, we first folded the light curves on the known orbital period of the source (0.6290630 days; \citealt{McClintock1990}); we then established a trial magnitude threshold, brighter than which points are assumed to be flares from the disc, and removed all magnitudes brighter than the threshold in each band. We then binned the remaining points into 20 equal-width bins of orbital phase. To approximate the double-humped ellipsoidal modulation of the companion star emission, we performed a non-linear weighted least-squares fit to the binned magnitudes ($m$) vs. phase ($x$) with a double sinusoidal function plus a constant: $m = C + A_1\,\sin(2\pi(x-\Phi)/0.5 - \pi/2) + A_2\,\sin(2\pi(x-\Phi) - \pi/2)$, where $C$ is a constant magnitude, $\Phi$ is the phase corresponding to the inferior conjunction of the companion star, and $A_1$ and $A_2$ are the semi-amplitudes of the two oscillations; one oscillation has a fixed double periodicity with respect to the other, and the free parameters of the fit are $C$, $A_1$, $A_2$ and $\Phi$. We computed the $\chi^2$ and the degrees of freedom (dof) of the fit, then changed the threshold value and repeated the above steps. Finally, we plotted the resulting $\chi^2$ against the number of dof for all bands (see Fig. \ref{fig:chi2} for the $R$-band plot); as in \citet{Jonker2008}, the relation is linear up to a certain level, beyond which it deviates from the linear correlation. We therefore took the point where the deviation occurs as the threshold level for the flares: $V=17.98$ mag, $R=17.27$ mag and $i'=17.15$ mag. \\
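A minimal Python sketch of this procedure (array names are hypothetical; scanning the threshold and plotting the returned $\chi^2$ against the dof reproduces the diagnostic of Fig. \ref{fig:chi2}) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

P_ORB = 0.6290630   # orbital period [days] (McClintock & Remillard 1990)

def ellipsoidal(x, C, A1, A2, phi):
    # Double sinusoid plus constant; the first term has half the period.
    return (C + A1 * np.sin(4 * np.pi * (x - phi) - np.pi / 2)
              + A2 * np.sin(2 * np.pi * (x - phi) - np.pi / 2))

def chi2_for_threshold(mjd, mag, err, thr, nbins=20):
    sel = mag > thr                  # points brighter than thr are flares
    phase = (mjd[sel] / P_ORB) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    xb, yb, eb = [], [], []
    for k in range(nbins):
        m = idx == k
        if m.any():
            w = 1.0 / err[sel][m] ** 2
            xb.append(0.5 * (edges[k] + edges[k + 1]))
            yb.append(np.average(mag[sel][m], weights=w))
            eb.append(1.0 / np.sqrt(w.sum()))
    xb, yb, eb = map(np.array, (xb, yb, eb))
    popt, _ = curve_fit(ellipsoidal, xb, yb, sigma=eb,
                        p0=[yb.mean(), 0.05, 0.05, 0.0])
    chi2 = np.sum(((yb - ellipsoidal(xb, *popt)) / eb) ** 2)
    return chi2, yb.size - 4         # chi^2 and degrees of freedom
\end{verbatim}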
\begin{figure}
\centering
\includegraphics[scale=0.4]{chi2_R.png}
\caption{$\chi^2$ of the fit of a double sinusoidal function plus a constant to the $R$-band light curve of Cen X-4, against the number of dof of the fit, for the different magnitude thresholds considered. The dashed line shows the linear fit to the lowest-dof points, before the transition to a steeper correlation occurs.}
\label{fig:chi2}
\end{figure}
Once all flares were excluded, the folded light curves show a modulation, which is expected from the companion star, and some scatter (the errors from the photometry are much smaller than the observed scatter; Fig. \ref{fig:orbital_modulation}). In all filters, the semi-amplitude of the modulations is low ($\sim0.1$ mag).
We first performed a fit with the double sinusoidal model, in order to evaluate the parameters of the modulation. However, the light curves still contain a significant contribution from the accretion disc. To isolate the modulation of the companion star, we estimated the lower envelope of the modulation following \citet{Pavlenko1996} and \citet{Zurita2004}: we divided the $V$, $R$, $i'$ light curves into 10 identical phase bins; for each bin, we found the minimum brightness; and we defined the lower envelope emission as all observations that differ from this minimum by at most twice the average uncertainty of the 10 faintest observations in the bin. We then fitted the lower envelope with the double sinusoidal model, fixing the parameters of the modulation to those obtained for the whole light curves after flare removal (solid line in Fig. \ref{fig:orbital_modulation}). The constant magnitude of the modulation corresponds to $V=18.48 \pm 0.01$, $R=17.66 \pm 0.01$, $i'=17.51 \pm 0.01$.
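A sketch of the lower-envelope selection (with hypothetical array names) is:
\begin{verbatim}
import numpy as np

def lower_envelope(phase, mag, err, nbins=10):
    # Per phase bin, keep the observations within twice the average
    # uncertainty of the 10 faintest points of that bin.
    edges = np.linspace(0.0, 1.0, nbins + 1)
    keep = np.zeros(mag.size, dtype=bool)
    for k in range(nbins):
        m = (phase >= edges[k]) & (phase < edges[k + 1])
        if not m.any():
            continue
        faintest = np.argsort(mag[m])[::-1][:10]   # largest magnitudes
        tol = 2.0 * err[m][faintest].mean()
        keep[np.flatnonzero(m)] = mag[m] >= mag[m].max() - tol
    return keep
\end{verbatim}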
\begin{figure}
\centering
\includegraphics[width=8cm]{fig_V_le.png}
\includegraphics[width=8cm]{fig_R_le.png}
\includegraphics[width=8cm]{fig_i_le.png}
\caption{From top to bottom, $V$, $R$ and $i'$-band light curves folded on the $\sim 15.1$ hr orbital period \citep{McClintock1990}. Orbital phases are calculated according to the ephemeris by \citet{McClintock1990}. The black solid curve shows the best fit to the lower envelope in each case, considering a simple model for the expected ellipsoidal modulation.
In each panel, a horizontal black dashed line indicates the magnitude threshold for the flares (see text). All points lying above the dashed line are therefore considered flares. All the observations during the 2020/2021 misfired outburst are plotted with an 'x' symbol for comparison. }
\label{fig:orbital_modulation}
\end{figure}
We then subtracted the contribution of the lower envelope (constant+modulation) from every data point, with the aim of isolating the emission from the accretion disc.
We converted the resulting magnitudes into flux densities (mJy), and built a light curve of the non-stellar flux densities over the last 13 years of observations. The result is presented in Fig. \ref{fig:residuals}, and clearly shows a downward trend of the flux emitted by the disc before $\sim$ September 20, 2017 (MJD 58016), followed by an upward trend after this date. We fitted the two trends separately for each band with a weighted least-squares fit of a constant plus a linear function ($C+A\,t$, where $C$ is a constant flux, $t$ is time expressed in MJD, and $A$ is the gradient of the line); no fit of the upward trend was possible for the $R$ band, due to the lack of data after MJD 58016. We note that the inclusion of the linear function improves the fit in all cases with $> 10 \sigma$ significance, according to an $F$-test.
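The trend fits and the associated $F$-test can be sketched as follows (a minimal illustration; the conversion of the $F$ statistic into a Gaussian-equivalent significance is omitted):
\begin{verbatim}
import numpy as np
from scipy import stats

def linear_trend_ftest(t, flux, err):
    # Weighted fits of a constant, and of a constant plus linear trend,
    # with an F-test for the significance of the extra (slope) parameter.
    w = 1.0 / err ** 2
    c0 = np.average(flux, weights=w)
    chi2_const = np.sum(w * (flux - c0) ** 2)
    A = np.vstack([np.ones_like(t), t]).T * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, flux * np.sqrt(w), rcond=None)
    chi2_lin = np.sum(w * (flux - coef[0] - coef[1] * t) ** 2)
    F = (chi2_const - chi2_lin) / (chi2_lin / (t.size - 2))
    p = stats.f.sf(F, 1, t.size - 2)
    return coef, F, p
\end{verbatim}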
The results of the fit show a decrease of flux of $(0.83\pm 0.02)\times 10^{-5}$ mJy/day, $(1.22\pm 0.02)\times 10^{-5}$ mJy/day and $(1.28\pm 0.03)\times 10^{-5}$ mJy/day in $V$, $R$ and $i'$ band, respectively, before MJD 58016, and an increase of flux of $(5.85\pm0.08) \times 10^{-5}$ and $(8.67\pm0.11) \times 10^{-5}$mJy/day in $V$ and $i'$ band, respectively, after MJD 58016.
The upward trend is therefore $\sim7$ times steeper than the downward one, in both the $i'$ and $V$ bands.
\begin{figure}
\centering
\includegraphics[scale=0.5]{plot_residuals_V.png}
\includegraphics[scale=0.5]{plot_residuals_R.png}
\includegraphics[scale=0.5]{plot_residuals_ip.png}
\caption{From top to bottom, $V$, $R$ and $i'$-band light curves of the residuals, obtained after the subtraction of the sinusoidal modulation from the original light curves. Only the points from before the beginning of the 2020 Sun constraint are shown. Superimposed with dashed lines, the linear fits of the long-term trends are shown, where possible. The observations acquired during the 2020/2021 misfired outburst are plotted with an 'x' symbol for comparison.}
\label{fig:residuals}
\end{figure}
\subsection{The 2020/2021 flare}
After the Sun constraint ended, on 2020 December 30 (MJD 59213), Cen X-4 was found to be significantly brighter at optical wavelengths than before \citep{Saikia2021}, with a brightening of $0.57\pm0.12$ and $0.42\pm0.09$ mag in $V$ and $i'$ band, respectively, compared to the previous point.
During the first days of activity, the rise in the optical emission was steep, with a brightening of $\sim0.3$ mag and $\sim0.8$ mag in $\sim6$ days (i.e. until 2021 January 5; MJD 59219) in the $V$ and $i'$ bands, respectively.
From our long-term monitoring of Cen X-4 with LCO, the ellipsoidal modulation of the source has a $\sim 0.1$ mag semi-amplitude, which is much smaller than the amplitude of the observed variability.
However, instead of undergoing a full outburst, the flare peaked on MJD $\sim 59219-20$ (2021 January 5-6) in all optical bands, and then started to fade rapidly, at a steep rate similar to that of the rise (losing $\sim 1-1.2$ mag in $\sim 8-9$ days in all bands), reaching quiescent levels again on MJD $\sim 59228$ (2021 January 14), $\sim 8$ days after the peak. Since a proper outburst did not have the chance to start, we classify this peculiar activity as a ``misfired outburst''.
Looking at Fig. \ref{fig:residuals}, we note that shortly before the beginning of the Sun constraint, a few detections lay above the quiescent level indicated by the linear fit at all wavelengths. However, this flux increase is comparable to the amount of activity typically observed during quiescence for Cen X-4 (see Fig. \ref{fig:residuals}). We therefore consider it unlikely that these points mark the beginning of the misfired outburst, which likely started during the Sun constraint, or at its end.
\subsubsection{Short term optical variability}
The REM observations performed during the misfired outburst on 2021 January 5 (MJD 59219) resulted in optical light curves showing variability (Fig. \ref{fig:short_ts_lc}, left panel).
Following \citet{Vaughan2003}, we evaluated the fractional root-mean-square (rms) of the light curves in order to quantify the variability in the $i'$, $r'$, $g'$ bands, and we measured a fractional rms of $(12.9\pm2.9)\%$, $(12.1\pm2.3)\%$, $(15.1\pm 3.4)\%$ in $i'$, $r'$, $g'$-band, respectively. The intrinsic variability is therefore comparable in all bands.
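The estimator we used is, in a minimal form (following eqs. 10 and B2 of \citealt{Vaughan2003}, and assuming a positive excess variance):
\begin{verbatim}
import numpy as np

def fractional_rms(flux, err):
    # Fractional rms variability amplitude and its uncertainty.
    mean = flux.mean()
    s2 = flux.var(ddof=1)        # sample variance of the light curve
    mse = np.mean(err ** 2)      # mean square measurement error
    fvar = np.sqrt(s2 - mse) / mean
    n = flux.size
    t1 = np.sqrt(1.0 / (2.0 * n)) * mse / (mean ** 2 * fvar)
    t2 = np.sqrt(mse / n) / mean
    return fvar, np.sqrt(t1 ** 2 + t2 ** 2)
\end{verbatim}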
In the $z'$ band, dramatic variability is observed, with a $\sim 1$ mag difference between the lowest and highest points of the light curve (and a fractional rms of $(28.6\pm2.8)\%$). However, very similar variability is also observed for a comparison star of similar brightness in the $z'$ band, so we tend to attribute it to the strong fringing of the $z'$-band images.
Similar variability is detected at higher significance in the 22.5 min long light curve obtained on the same day (MJD 59219; Jan 5) with LCO in the $g'$ band, which has a fractional rms of $(15.5\pm0.5)\%$ at a time resolution of $\sim 56$\,s. A $g'$-band light curve with the same time resolution, obtained at the end of the flaring episode with LCO on MJD 59234 (2021 January 20), shows significantly lower short-timescale variability (fractional rms of $(2.8\pm0.9)\%$; Fig. \ref{fig:short_ts_lc}, right panel).
Although the fractional rms values are comparable, this variability is observed on much longer timescales (minutes; Fig. \ref{fig:short_ts_lc}) than in sources like the BH XRBs GX 339-4 or MAXI J1535-571 (seconds, or less), for which the variability was attributed to the presence of a flickering jet (e.g. \citealt{Gandhi2010}; \citealt{Baglio2018}). It is therefore unlikely that the observed optical variability can be attributed to the emission of jets in the system.
\begin{figure*}
\centering
\includegraphics[width=9cm, angle=0]{plot.png}
\includegraphics[width=8.9cm, angle=0]{lc_both_days_xbnews3.png}
\caption{\textit{Left}: $g'$, $r'$ (top panel), $i'$, $z'$ (bottom panel) light curves obtained with REM on 2021, Jan 05 (MJD 59219), showing minute-timescale variability. \textit{Right}: $g'$-band light curves obtained with LCO on 2021, Jan 05 and 20 (MJD 59219 and 59234, respectively), showing minute-timescale variability.}
\label{fig:short_ts_lc}
\end{figure*}
\subsubsection{X-rays}\label{X-ray_sec}
The X-ray coverage of the 2020/2021 flare started on MJD 59211 (December 28 2020). The {\it Swift}/XRT and {\it NICER} light curves in Figure \ref{fig:flare_lc} show that the X-ray flare peaked around the same time as the optical, between MJD 59218 and MJD 59221 (Jan 4 and Jan 7 2021). The {\it NICER} light curve (which has the higher time resolution) reveals strong variability (by factors of 2--3) on a time scale of hours near the peak of the flare. Analysis of archival data shows that the X-ray peak count rates observed during the 2020/2021 flare were a factor $\sim$2 ({\it Swift}/XRT) and $\sim$2.5 ({\it NICER}) higher than the maximum count rates of Cen X-4 in observations made prior to December 2020, when the source was in quiescence.
Spectra obtained from most of the {\it Swift} observations and {\it NICER} GTIs were not of sufficient quality to perform detailed spectral fits. Using XSPEC V12.11.1 \citep{ar1996}, we performed a fit to the {\it NICER} spectrum with the highest count rate (MJD 59220.373038, the first GTI of observation 3652010501, with an exposure time $\sim$1250 s). The main goal of this spectral fit was to obtain a reliable count-rate-to-flux conversion factor that can be used to estimate the outburst flux (see Sec. \ref{sec:dim}), under the assumption that the spectral shape did not change significantly during the outburst. The {\it NICER} spectrum was rebinned to a minimum of 30 counts per spectral bin so that $\chi^2$ fitting could be employed. Following \citet{cackett2010} and \citet{chakrabarty2014}, who studied the variable quiescent spectra of Cen X-4, we fit the 0.5--10 keV spectrum with a continuum model comprising a thermal and a non-thermal component. For the thermal component we used the neutron-star atmosphere model of \citet{heinke2006} ({\tt nsatmos} in XSPEC) and for the non-thermal component we used a power-law; the band pass of {\it NICER} did not extend high enough to test more sophisticated models for the non-thermal component (as was done in \citealt{chakrabarty2014}, for example). Interstellar absorption was modelled with the {\tt tbabs} model in XSPEC, with the abundances set to {\tt WILM} and cross sections to {\tt VERN}. For the {\tt nsatmos} component we fixed the neutron-star mass to 1.9 $M_\odot$ \citep{shwadh2014} and the distance to 1.2 kpc. The model fits well ($\chi^2$=145 for 145 degrees of freedom); we obtain an $n_{\rm H}$ of 6.52(1)$\times10^{20}$ cm$^{-2}$, a neutron-star temperature $\log(T_{\rm nsa}/{\rm K})=6.24\pm0.05$, a neutron-star radius of 9.6$\pm$1.3 km, and a power-law index of 0.73$\pm$0.18. The unabsorbed 0.5--10 keV flux was (1.95$\pm$0.10)$\times10^{-11}$ erg\,cm$^{-2}$\,s$^{-1}$ (corresponding to a luminosity of $\sim3.4\times10^{33}$ erg\,s$^{-1}$ at 1.2 kpc), with the power-law contributing $\sim$50\% in the 0.5--10 keV band. This gives a count-rate-to-flux conversion factor of $\sim$2.6$\times10^{-12}$ erg\,cm$^{-2}$\,cts$^{-1}$. The power-law index of 0.73 is very low compared to NS LMXBs in a slightly higher luminosity range \citep[$>10^{34} \rm erg\,s^{-1}$; see, e.g.,][]{wijnands2015,stoop2021}, where the index is around 2.5, but it is consistent with the lowest values found by \citet{cackett2010} for Cen X-4. We note that a fit with a single power-law does not perform well, yielding $\chi^2$=246 for 147 degrees of freedom (power-law index of 3.37$\pm$0.07 and $n_{\rm H}$ of 2.6(2)$\times10^{20}$ cm$^{-2}$).
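As a sanity check of the quoted luminosity, the flux-to-luminosity conversion at the adopted distance reads:
\begin{verbatim}
import numpy as np
from astropy import units as u

flux = 1.95e-11 * u.erg / u.cm ** 2 / u.s   # unabsorbed 0.5-10 keV flux
L = (4 * np.pi * (1.2 * u.kpc) ** 2 * flux).to(u.erg / u.s)
print(L)    # ~3.4e33 erg/s, as quoted above
\end{verbatim}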
\section{Discussion}
\subsection{Long-term optical monitoring}
We have been monitoring the long-term quiescent optical behaviour of Cen X-4 for almost 13.5 years, since February 14, 2008. After taking into account the modulation due to the companion star, we isolated the accretion activity of the source and observed a linear downward trend followed by a steeper upward trend during quiescence. From the gradual optical brightening detected in the long-term light curve of Cen X-4, \citet{Waterval2020} predicted that Cen X-4 might enter an outburst in the near future. Subsequently, flaring activity of the source was detected both at optical (\citealt{Saikia2021}, \citealt{Baglio2021}) and X-ray wavelengths \citep{Eijnden2021_1}.
The disc instability model (DIM) predicts a continuously increasing optical flux during quiescence \citep[][]{Lasota2001}, but observations of both LMXBs and dwarf novae typically show a constant or decreasing flux with time, as we detect for Cen X-4 in our optical monitoring. A very similar behaviour was reported for the BH XRB V404 Cyg \citep[][]{Bernardini2016_precursor}, where a 0.1 mag decrease in brightness over $\sim 2000$ days was observed and linked to changes in the accretion rate from year to year (as is likely the case for Cen X-4, too).
This decay was then followed by a low-amplitude, relatively fast enhancement of the optical emission (0.1 mag increase over $\sim 1000$ days), which was an indication of an increase in the mass accretion rate, and eventually culminated in the 2015 outburst of the source. Other X-ray transient sources where a slow and significant optical rise has been seen together with an outburst are the BH XRBs GS 1354-64 (BW Cir; \citealt{Koljonen2016}) and Swift J1357.2-0933 \citep{Russell2018}. Similarly, a slow optical rise during quiescence was observed for the BH XRBs H1705-250 and GRS 1124-68 (see \citealt{Yang2012} and \citealt{Wu2016}, respectively; see also Tab. 1 of \citealt{Russell2018} for a summary), although no new outburst has yet been detected for these sources.
Although on different timescales, an optical precursor to an outburst has recently been observed also for the NS LMXB SAX J1808.4-3658 \citep{Goodwin2020}, which underwent a complete outburst in August 2019. The optical magnitude was observed to fluctuate by $\sim 1$ magnitude for $\sim$ 8 days before the proper outburst rise was initiated in the optical. This optical precursor can have several possible origins: an enhanced mass transfer from the companion star, which would then help trigger the outburst; instabilities in the outer disc, which could lead to heating fronts propagating through the entire disc and contribute to igniting the outburst; or changes in the radiation pressure of the pulsar, the compact object being a millisecond pulsar.
Similarly, signatures of enhanced optical activity shortly before the onset of an outburst have also been suggested for the NS LMXB IGR J00291+5934, whose optical light curve is dominated by flaring and flickering activity prior to the start of an outburst, completely hiding the sinusoidal modulation of the companion star \citep{Baglio2017}.
\begin{figure}
\centering
\includegraphics[width=9.5cm]{Figure_1_old_method.png}
\includegraphics[width=9.5cm]{Figure_2_old_method.png}
\caption{\textit{Top}: Quiescent de-reddened SED of Cen X-4 obtained in May 2021 using REM and LCO data (red dots). The quiescent curve published in \citet{Baglio2014} is plotted with green `x' symbols, for comparison. Blue stars represent the fluxes obtained as the average of the lower envelope emission in the long-term monitoring of Cen X-4 during quiescence (Fig. \ref{fig:orbital_modulation}). Superimposed is the fit of the lower envelope emission with a non-irradiated blackbody. \textit{Bottom}: Average de-reddened SED of Cen X-4 during the recent misfired outburst, based on strictly simultaneous REM observations acquired on January 5, 2021 (orange squares) and \textit{Swift}/UVOT (same date, uvm2 filter; green square). Superimposed is the fit of the REM points with an irradiated stellar blackbody (blue solid line). The dashed grey line shows the blackbody fit of the quiescent lower envelope (from the top panel), for comparison purposes.}
\label{fig:comparison_SED}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm]{SED_subtraction_le2.png}
\caption{Residual fluxes of Cen X-4 during the misfired outburst after the subtraction of the companion star emission, obtained as the black body fit to the lower envelope emission (see Fig. \ref{fig:comparison_SED}, upper panel).}
\label{fig:subtracted_SED}
\end{figure}
The optical flux enhancement leading to the flaring activity observed for Cen X-4 supports the DIM with irradiation and disc evaporation/condensation \citep{Dubus2001}, which explains the outburst--quiescence evolution at all wavelengths in an X-ray binary. The DIM predicts that during quiescence the cold disc accumulates mass from the companion star via Roche lobe overflow, which causes the gradual brightening of the disc at optical wavelengths \citep{Lasota2001}. Generally, an outburst is expected to occur when the accretion disc reaches a critical density. The disc temperature then increases, causing hydrogen in the disc to ionize. This heating front propagates through the disc towards the inner accretion flow, causing enhanced activity in higher-energy wavebands such as X-rays, and the outburst starts.
The gradual brightening of Cen X-4 in quiescence can therefore be explained by matter slowly accumulating in the accretion disc, which becomes optically brighter. The amount of matter in the disc, increasing year after year, could account for the observed increase in optical flux (similarly to what happened for V404 Cyg; \citealt{Bernardini2016_precursor}). However, for Cen X-4 the optical and X-ray flaring activity did not lead to the ignition of a proper outburst, for reasons that we discuss in the next sections.
\subsection{The misfired outburst}
\begin{figure*}
\centering
\includegraphics[width=9.1cm]{cmd_bb.png}
\includegraphics[width=8.8cm]{cmd_bb_old_outburst.png}
\caption{\textit{Left}: Optical CMD ($g'$ vs. $g'-i'$) during the misfired outburst of Cen X-4. Bluer colors, corresponding to higher spectral indices, are to the left, and redder colors (i.e. lower spectral indices) to the right. The blackbody model is plotted with a red solid curve. Different temperature values are also highlighted close to the blackbody line. \textit{Right}: Optical CMD ($B$ vs. $B-V$) during the 1979 outburst of Cen X-4. The red solid line represents the blackbody model that best describes the data. Errors on the $B$-band points are not available in the literature, and are therefore not plotted. }
\label{cmd_figure}
\end{figure*}
\subsubsection{Spectral Energy Distribution}\label{Sec:SED}
In order to shed light on the nature of the misfired outburst, we built SEDs during the period of activity (using \textit{Swift}/UVOT and REM data obtained on January 4-5 2021) and during quiescence (using REM and LCO data acquired on May 22-23 2021). To do so, fluxes were de-reddened using the absorption coefficient $A_V=0.31\pm 0.16$ mag as reported in \citet{Russell2006}, and considering the relations of \citet{Cardelli1989} to evaluate the absorption coefficients at all wavelengths. Although from the light curves it is already clear that both disc and companion star contribute to the quiescent and flare emission of Cen X-4, we tried to model the two SEDs with a simple irradiated blackbody function, using the known parameters of the companion star (a $0.6\,R_{\odot}$ radius and a $0.35\, M_{\odot}$ mass; \citealt{Shahbaz1993}; \citealt{Torres2002}; \citealt{Shahbaz2014}). This of course constitutes a caveat, considering that no multi-temperature blackbody of the disc is added to the model; for a more precise determination of the accretion disc contribution, we refer the reader to Sec. \ref{Sec_CMD}. Since the least-squares fit is insensitive to the irradiation luminosity parameter, we fixed it to the measured X-ray luminosity of $L_{\rm X}=4.5\times 10^{32}\, \rm erg\,s^{-1}$ during quiescence \citep{Campana2004}, and to $L_{\rm X}=2.4\times 10^{33}\, \rm erg\,s^{-1}$ during the flare, as estimated from \textit{Swift} observations performed on 2021 Jan 4 (this work). The results are shown in Fig. \ref{fig:comparison_SED}.
The fit of the quiescent SED obtained with REM and LCO in May 2021 (top panel of Fig. \ref{fig:comparison_SED}) gives results comparable to those reported in \citet{Baglio2014}, with a blackbody temperature of $(4.43\pm0.01)\times 10^3\, \rm K$, consistent with a K5V-type star, as expected for Cen X-4 (\citealt{Shahbaz1993}; \citealt{Torres2002}). We note however that the NIR fluxes measured with our REM observations are lower than the catalogued 2MASS fluxes published in \citet{Baglio2014} (plotted as `x' symbols in Fig. \ref{fig:comparison_SED}, upper panel). Considering that the 2MASS data were acquired in 2001, however, it is highly probable that the contribution from the accretion disc at optical and NIR frequencies was different from that in 2021 (also given the long-term trend observed in Fig. \ref{fig:long_monitoring}), thus explaining the discrepancy.
In addition, we also plotted in Fig. \ref{fig:comparison_SED} (upper panel) the $V$, $R$, $i'$ fluxes obtained as the average emission of the lower envelope of the LCO long-term monitoring of Cen X-4 (Fig. \ref{fig:orbital_modulation}). These fluxes are the most constraining upper limits on the companion star contribution. They are lower than the fluxes measured in quiescence with LCO by factors of 1.7, 1.3 and 1.2 in the $V$, $R$ and $i'$ bands, respectively. The fit of the three points with a blackbody gives a temperature of $(4.13\pm0.05)\times 10^3$ K, still consistent with a late-type star.
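A sketch of such a single-blackbody fit is given below (physical constants in cgs; the flux values are placeholders, and the $i'$ central frequency is an assumed value of $\sim4.0\times10^{14}$ Hz):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

H_P, K_B, C = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants

def bb_mjy(nu, T, lognorm):
    # Blackbody flux density in mJy; the normalization (essentially the
    # solid angle subtended by the star) is left free via lognorm.
    b_nu = 2 * H_P * nu ** 3 / C ** 2 / np.expm1(H_P * nu / (K_B * T))
    return 10.0 ** lognorm * b_nu / 1.0e-26

nu = np.array([5.505e14, 4.831e14, 4.01e14])   # V, R, i' frequencies [Hz]
f = np.array([0.14, 0.19, 0.23])               # placeholder fluxes [mJy]
ferr = 0.05 * f                                # placeholder uncertainties
popt, pcov = curve_fit(bb_mjy, nu, f, sigma=ferr, p0=[4500.0, -21.0])
print(popt[0])                                 # best-fitting temperature [K]
\end{verbatim}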
The fit of the flare SED (Fig. \ref{fig:comparison_SED}, bottom panel) with the irradiated star model gives a blackbody with a higher temperature, $T=(4.92\pm0.03)\times 10^3$ K.
The UV point during the flare cannot be described by this simplified irradiated star model, suggesting an origin in the inner regions of the multi-temperature disc as it heats up, or, as reported in \citet{Bernardini2016} for observations during quiescence, a hot spot on the disc edge. Unfortunately, our data do not allow us to be conclusive on this point.
We then subtracted the blackbody obtained by fitting the lower envelope fluxes from the flare SED. The result is shown in Fig. \ref{fig:subtracted_SED}.
The residual SED peaks below the $r'$ band; this suggests a residual component with temperature $< 5\times 10^3\, \rm K$ (according to the Wien displacement law, $T=b/\lambda$, where $b\sim 2897\, \rm \mu m\, K$ and $\lambda$ is the wavelength of the peak). It is therefore likely that we are observing the emission from a cold accretion disc, in the build-up for the start of an outburst (we note that according to the color-magnitude diagram shown in Fig. \ref{cmd_figure} the temperature of the disc at the beginning of the outburst is indeed $\sim 5\times 10^3 \, \rm K$). In Fig. \ref{fig:subtracted_SED}, the UV excess is also visible.
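As a quick numerical check of this statement, using the $r'$ central frequency from Tab. \ref{tab:wavelengths}:
\begin{verbatim}
C_LIGHT = 2.998e8      # speed of light [m/s]
B_WIEN = 2.897e-3      # Wien displacement constant [m K]

lam_r = C_LIGHT / 4.831e14   # r'-band central wavelength [m], ~620 nm
print(B_WIEN / lam_r)        # ~4.7e3 K, below the 5e3 K neutral-H limit
\end{verbatim}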
\subsubsection{Color-Magnitude diagram}\label{Sec_CMD}
We studied the color-magnitude diagram (CMD), $g'$ versus $g'-i'$, of Cen X-4 using LCO and REM data (Fig. \ref{cmd_figure}, left panel) obtained during the misfired outburst. Superimposed, we plot the blackbody model for an accretion disk, which depicts the evolution of a single-temperature, constant-area blackbody that heats up and cools down (for details: \citealt{Maitra2008}; \citealt{Russell2011}; \citealt{Zhang2019}). In the model, the color changes are determined by the different origins of the emission at optical frequencies: for low temperatures, the Rayleigh-Jeans blackbody tail; for high temperatures, the blackbody curved peak.
We note that this model assumes that the flux emitted by the source is all coming from the accretion disk, without any contribution from other sources (like the companion star), whereas the model depicted in Sec. \ref{Sec:SED} is assuming that the irradiated companion star is producing all the flux. Even though it is clear that both star and disk are contributing to the emission of Cen X-4, we consider these tests useful in order to shed light on the different contributions to the emission processes.
We applied the disk model to Cen X-4 following \citet{Russell2011}, assuming an optical extinction of $A_{\rm V}=(0.31\pm0.16)$ mag \citep{Russell2006}, which is used to convert the color $g'-i'$ into an intrinsic spectral index (indicated on the top axis of Fig. \ref{cmd_figure}). The blackbody temperature depends on this color, while the normalization of the model depends on several different parameters, among them the size of the blackbody and the distance to the source. Some of these parameters are uncertain, so we varied the normalization of the model until a satisfactory approximation of the data was reached (see methods in \citealt{Maitra2008}, \citealt{Russell2011}, \citealt{Zhang2019}, \citealt{Baglio2020}).
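The color-to-spectral-index conversion can be sketched as follows; the $A_{\rm band}/A_{\rm V}$ ratios and the $i'$ central frequency are approximate values adopted here for illustration, and AB zero points are assumed for both bands:
\begin{verbatim}
import numpy as np

NU_G = 6.289e14        # g' central frequency [Hz] (Tab. 1)
NU_I = 4.01e14         # i' central frequency [Hz] (assumed)
A_G, A_I = 1.19, 0.60  # approximate A_band/A_V ratios (Cardelli et al.)

def spectral_index(g_minus_i, A_V=0.31):
    # Intrinsic spectral index alpha (F_nu ~ nu^alpha) from the
    # dereddened g'-i' color.
    col0 = g_minus_i - (A_G - A_I) * A_V
    return -0.4 * col0 / np.log10(NU_G / NU_I)
\end{verbatim}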
The model approximates the data well, showing a trend that is consistent with a thermal blackbody. We therefore interpret this blackbody as that of the outer accretion disc (the surface area of the star is much smaller than that of the disc). Interestingly, the temperature of the disk remains low during the whole flare,
never exceeding $\sim 5400 \rm \,K$. This is also in agreement with the residual SED during the misfired outburst after the subtraction of the companion star contribution (Fig. \ref{fig:subtracted_SED}), where we observed a peak below the $r'$ band frequency, suggestive of a cold accretion disc ($T< 5\times 10^3 \, \rm K$).
We note that hydrogen is expected to be completely neutral below $5\times 10^3$ K (and completely ionised above $10^4$ K; \citealt{Lasota2001}).
It is therefore likely that the temperature required to start the heating wave in the disc, and therefore to kick-start a full outburst, was never reached during the 2020/2021 activity. This condition is specific to this ``misfired'' outburst, as can be appreciated from the right panel of Fig. \ref{cmd_figure}, where the CMD ($B$ versus $B-V$ color) of Cen X-4 during the 1979 outburst (plus a few quiescent points) is shown, superimposed on a blackbody model with the same normalization as during the 2020/2021 activity. In that outburst, the accretion disk reached and exceeded a temperature of $10^4$ K, thus ensuring the complete ionization of hydrogen in the disk, as required by the DIM for a complete outburst. In particular, the brightest point in the CMD is found near the peak of the 1979 outburst. The data near the peak of the 1979 outburst therefore lie very close to the same model used to describe the 2020/2021 activity, reinforcing the idea that we witnessed a misfired outburst of Cen X-4 in 2021.
\subsubsection{Multi-wavelength correlation}
Another tool for disentangling the emission processes and for understanding the nature of the recent misfired outburst is the study of multi-wavelength correlations.
We studied the optical/X-ray correlation of the source during its flaring phase, using our LCO detections in the $i'$-band and quasi-simultaneous X-ray observations from {\it NICER} (taken within 1 hour).
For the conversion of X-ray count rate to flux, we use a power law index of 1.7$\pm$0.3 \citep{Bernardini2013} and the same energy range (0.5-10 keV).
\begin{figure}
\centering
\includegraphics[width=8.9cm]{00.correlation_iband_nicer_log_1hr.pdf}
\caption{Optical/X-ray correlation during the recent flaring activity with quasi-simultaneous (within 1 hour) LCO $i$'-band optical data and {\it NICER} (0.5-10 keV) X-ray data.}
\label{fig:correlation}
\end{figure}
We find a significant correlation between the optical and X-ray emission of the source during the flaring activity (with Pearson correlation coefficient = 0.89 and p-value = $8.2\times 10^{-7}$; see Fig. \ref{fig:correlation}). Previously, when the source was in quiescence, \citet{Cackett2013} found no significant correlation between the X-ray and simultaneous optical fluxes, while a positive correlation was observed between the X-ray flux and the simultaneous near-UV flux. Later, \citet{Bernardini2016} found evidence of optical ($V$-band), UV and X-ray correlations in quiescence on various timescales. The correlation found for outburst and quiescence had a slope of $\sim0.44$, showing that irradiation became important at high luminosities; the slope is however shallower than expected for irradiation near quiescence.
We fit the data during the misfired outburst using the orthogonal distance regression (ODR) method of least squares, and find the slope of the optical/X-ray correlation to be 0.25$\pm$0.03, implying that irradiation is not playing a dominant role \citep[the expected slope for an irradiated disc is $\sim$ 0.5,][]{vanparadijs1994}. The observed slope would instead be more consistent with a viscously-heated accretion disc \citep[which can result in a slope of $\sim$ 0.3, depending on the wavelength and on the nature of the compact object;][]{Russell2006}, or a combination of both \citep[e.g. $\sim$0.4 in GRS 1716-249,][]{Saikia1716}. For a viscously heated disc, a wavelength dependence of the optical/X-ray correlation slope has been observed for XRBs \citep{Russell2006}. To check for this, we studied the slope of the correlation using the four optical bands available during the misfired outburst ($i'$, $r'$, $V$, $g'$; Tab. \ref{tab:wavelengths}). Although the different values agree within their 1-sigma errors, we find a slight trend of increasing slope with increasing frequency (0.25$\pm$0.03 for $i'$, 0.24$\pm$0.04 for $r'$, 0.30$\pm$0.04 for $V$ and 0.35$\pm$0.06 for $g'$). This finding strengthens the argument that the optical emission originates from a viscously-heated accretion disc. From the slope of the optical/X-ray correlation, we can also rule out a synchrotron jet origin for the optical emission during the flaring activity \citep[for which the expected slope is much steeper, $\geq$ 0.7,][]{Russell2006}.
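A minimal sketch of the correlation test and ODR fit (the input file is hypothetical) is:
\begin{verbatim}
import numpy as np
from scipy import odr, stats

# Hypothetical quasi-simultaneous fluxes with 1-sigma uncertainties.
fx, fx_err, fo, fo_err = np.loadtxt("quasi_simultaneous.txt", unpack=True)
logfx, logfo = np.log10(fx), np.log10(fo)
slogfx = fx_err / (fx * np.log(10))
slogfo = fo_err / (fo * np.log(10))

print(stats.pearsonr(logfx, logfo))   # correlation coefficient, p-value

linear = odr.Model(lambda B, x: B[0] * x + B[1])
data = odr.RealData(logfx, logfo, sx=slogfx, sy=slogfo)
fit = odr.ODR(data, linear, beta0=[0.5, 0.0]).run()
print(fit.beta[0], fit.sd_beta[0])    # slope and 1-sigma uncertainty
\end{verbatim}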
\subsection{DIM and inside-out outbursts}\label{sec:dim}
According to the DIM, when an XRB is in quiescence its accretion disc is cold and depleted. The mass transfer from the companion star, however, happening at low rates, replenishes the disc until the surface density at a certain annulus becomes sufficiently high to reach the critical density for which thermal equilibrium cannot be maintained. This makes the temperature of the ring increase over the hydrogen ionization temperature, and two different heating fronts begin to propagate inwards and outwards.
Inside-out outbursts are most commonly observed in XRBs \citep[even though the heating fronts still propagate both ways;][]{Menou2000}, because they typically possess low accretion rates ($< 10^{16}$g/s; \citealt{vanparadijs1996}; \citealt{Smak1984}; \citealt{Menou1999}). Under these conditions, the accumulation time will be longer than the viscous time for diffusion, and matter will not be able to accumulate at the outer edge of the disc, resulting in an inward diffusion.
Since the accretion rate decreases with radius, matter will then accumulate at a certain point, until it reaches the critical surface density for the thermal instability, triggering the inside-out outburst \citep{Lasota2001}.
Inside-out outbursts typically propagate slowly \citep[][]{Menou2000}; in fact, the outward front encounters regions of higher density while propagating (and the critical density is also higher at larger radii). If the front does not transport enough matter to raise the density above the critical value at a given radius, the propagation stalls and an inward-propagating cooling wave is generated, which prevents the outburst from developing. Interestingly, a similar interpretation has also been given for the so-called failed-transition outbursts, i.e., outbursts that do not reach the high/soft state \citep[][]{Alabarta2021}.
LMXBs are typically subject to strong irradiation, which is important to take into account in the DIM (\citealt{Dubus2001}; see also \citealt{TetarenkoB2018}, where actual data were used to test the DIM with irradiation). Irradiation has no effect on the structure of the heating front, but it is important in determining for how long the outward heating front will be able to propagate. In fact, as the inside-out front propagates, the mass accretion rate at the inner disc radius rises, thereby increasing the irradiation of the outer cold disc. As a consequence, the outer disc is heated, which reduces the critical density needed to undergo the thermal instability, making the propagation of the outward front easier.
We estimate the mass accretion rate $\dot{M}$ of Cen X-4 during the misfired outburst using our X-ray monitoring. We integrated the count rates over the entire outburst and converted them to flux using the count-rate-to-flux conversion factor obtained from the spectral fit in Sec. \ref{X-ray_sec}, and we calculated the luminosity assuming a distance of 1.2 kpc. Using $\dot{M}=L\, R_{\rm NS}/(G\, M_{\rm NS})$ (where $L$ is the X-ray luminosity, $R_{\rm NS}$ and $M_{\rm NS}$ are the typical radius and mass of a neutron star, and $G$ is the gravitational constant; we note that we are assuming that all X-rays are due to accretion), and including an efficiency factor of $20\%$ in converting gravitational energy into luminosity \citep{Frank1987}, we estimate a mass accretion rate of $\sim1.5\times 10^{13}$g/s, which is considerably lower than the critical mass accretion rate that must be reached in order to have outside-in outbursts\footnote{Even at the maximum X-ray flux during the misfired outburst, $\dot{M}$ only reached $5\times 10^{15}$g/s, $\sim 2$ orders of magnitude lower than the critical mass accretion rate.} \citep[considering the orbital parameters of Cen X-4, $\dot{M}_{\rm crit}\sim 4\times 10^{17}$g/s;][]{Lasota2001}. Therefore, it is likely that an inside-out propagation front was ignited close to the inner radius of the accretion disc. At the time of the ignition, the temperature of the accretion disc according to the modeling of the CMD (Fig. \ref{cmd_figure}) was $\sim 5.4 \times 10^3\, \rm K$.
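Numerically, the estimate proceeds as follows (the neutron-star radius and the outburst-averaged luminosity below are illustrative values):
\begin{verbatim}
G = 6.674e-8            # gravitational constant [cgs]
M_NS = 1.9 * 1.989e33   # neutron-star mass [g]
R_NS = 1.0e6            # neutron-star radius [cm] (assumed 10 km)
ETA = 0.2               # gravitational-to-radiative efficiency

def mdot(L):
    # L = ETA * G * M_NS * Mdot / R_NS
    # =>  Mdot = L * R_NS / (ETA * G * M_NS)
    return L * R_NS / (ETA * G * M_NS)

L_avg = 7.5e32          # placeholder outburst-averaged luminosity [erg/s]
print(mdot(L_avg))      # ~1.5e13 g/s for this illustrative value
\end{verbatim}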
However, instead of an increasing disc temperature, what we observe in Fig. \ref{cmd_figure} is a temperature that decreases with time, from $\sim 5.4 \times 10^3\, \rm K$ to $\sim 4.4\times 10^3 \, \rm K$ and lower. In addition, the slope of the X-ray/optical correlation indicates a minor role of irradiation in the emission from the system, in agreement with previous studies performed during quiescence (see e.g. \citealt{Davanzo2006}), likely due to the very low mass accretion rate and to the large size of the system.
It is possible that once the front started to propagate outwards, some irradiation was actually taking place, but the effect was low compared to all the other sources of emission in the optical (e.g. the companion star and the steady outer accretion disc, that emits in the optical). The overall optical emission would therefore dilute the effect of irradiation, explaining the shallow slope of the optical-X-ray correlation.
In addition, Cen X-4 possesses one of the largest accretion discs among the XRBs known in the literature, due to its long orbital period ($\sim 15.1$ hr), which can explain the low level of irradiation to which the outer accretion disc is exposed.
We therefore conclude that the propagation of the outward front likely stalled soon after ignition, due to the low mass accretion rate and the weak effect of irradiation, the latter also linked to the large size of the system.
We note that the steep, short ($\sim 8-9$ days) decay phase after the peak of the misfired outburst is in agreement with the low level of irradiation that we observe in this work. In fact, the cooling front that is generated after the stall can only propagate if it finds a cold branch to fall onto \citep[][]{Lasota2001}; this is hampered by irradiation, which can keep the accretion disc hot, giving rise to the exponential and linear decays typically observed in strongly irradiated XRBs.
The factors which might have led to a misfired outburst are numerous. Among them, the size of the system certainly plays a role, reducing the effect of irradiation and thereby facilitating the stall of the heating front propagation. We therefore predict that the larger the system, the more likely it is for similar events to occur.
Alternatively, as also suggested for the optical precursor to the 2019 outburst of the accreting millisecond X-ray pulsar SAX J1808.4-3658 \citep[][]{Goodwin2020}, the misfired outburst of Cen X-4 could have been caused by a local thermal instability at a radius close to the inner radius of the disc, where the density was close to the critical density at which the trigger of a full outburst could begin \citep[e.g.][Fig. 7]{Menou2000}. This interpretation could work for Cen X-4, considering that the temperature in the disc always remained below $6\times 10^3$ K (i.e. below the temperature of hydrogen ionization).
Had a full outburst actually started for Cen X-4, the misfired outburst described in this work would have been its precursor.
\section{Conclusions}
In this work we report on the long-term optical monitoring of the neutron star low mass X-ray binary Cen X-4 over the past 13.5 years. The source spent the majority of this time in quiescence; the ellipsoidal modulation due to the companion star emission can be isolated, together with several short-timescale variations in all optical bands, likely due to activity in the accretion disc. Once the flares and the ellipsoidal modulation from the star are subtracted, the residual flux shows a linear downward trend spanning $\sim 3000$ days, followed by an upward trend lasting about 1000 days, $\sim7$ times steeper than the downward one. In the case of the black hole X-ray binary V404 Cyg \citep{Bernardini2016_precursor}, a similar upward trend of the flux preceded the start of an outburst in 2015. However, although a significant brightening was observed at the beginning of 2021 at all wavelengths (NIR--X-rays), a proper outburst was not triggered in the case of Cen X-4, which returned to quiescence a few weeks after the start of this enhanced activity. We term this behaviour a ``misfired outburst'', because the temperature required to ionize hydrogen and initiate the outburst was not reached.
The modeling of the color-magnitude diagram during the misfired outburst with a single-temperature blackbody shows an accretion disc with temperatures below $5.4\times 10^3\, \rm K$; this result is in agreement with the residual spectral energy distribution after the subtraction of the contribution from the companion star, and suggests that the accretion disc never reached the temperature required to ionize hydrogen (in contrast to what happened during the 1979 full outburst of the source, when, according to our model, the accretion disc reached temperatures of $\sim 2\times 10^4\, \rm K$, at which hydrogen is typically completely ionized).
A possible interpretation is that an inside-out type outburst was initiated. Inside-out outbursts typically propagate slowly, because the heating front meets regions of higher density while propagating outwards. If the front is not transporting enough matter, it will stall unless irradiation is strong enough to heat the external disc, therefore decreasing the surface density and facilitating the propagation.
However, irradiation is scarce in Cen X-4. In fact, the optical/X-ray correlation during the misfired outburst has a shallow slope, inconsistent with a strongly irradiated disc; moreover, it was already reported in the past \citep[see e.g.][]{Davanzo2006} that the effects of irradiation are low in Cen X-4, consistent with the large size of the system.
It is therefore likely that the heating front was halted soon after its ignition, with a consequent production of an opposite cooling front, which switched off the outburst.
Alternatively, the observed activity could be the result of a local thermal-viscous instability in the disc, where temperatures increased without however reaching (and overcoming) the temperature for hydrogen ionization.
The optical monitoring of Cen X-4 is still ongoing, and will show whether a new misfired or full outburst happens in the future, thus shedding further light on the possible mechanisms preventing a complete outburst from being triggered.
\acknowledgments
We thank the anonymous referee for useful comments and suggestions.
This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC.
This work is also based on observations made with the REM Telescope, INAF Chile, and makes use of observations performed with the Las Cumbres Observatory network of telescopes.
DMR and DMB acknowledge the support of the NYU Abu Dhabi Research Enhancement Fund under grant RE124.
J.H. acknowledges support for this work from the {\it NICER} Guest Investigator program under NASA grant 80NSSC21K0662. We thank the {\it Swift} and {\it NICER} teams for rapidly approving, scheduling, and performing the X-ray observations.
SC and PDA acknowledge support from ASI grant I/004/11/5.
JvdE is supported by a Junior Research Fellowship awarded by St. Hilda's College, Oxford.
NM acknowledges the ASI financial/programmatic support via
the ASI-INAF agreement n. 2017-14-H.0 and the 'INAF Mainstream' project on the same subject.
TMD acknowledges support from the Spanish ministry of science under grant EUR2021-122010. TMD acknowledges support from the Consejeria de Economia, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund under grant ProID2020-010104.
We study the convergence of a broad class of adaptive discontinuous Galerkin (DG) and $C^0$-interior penalty (IP) finite element methods (FEM) for second-order fully nonlinear Isaacs equations, with a homogeneous Dirichlet boundary condition, of the form
\begin{equation}\label{eq:isaacs}
\begin{aligned}
F[u]\coloneqq \inf_{\alpha\in\mathscr{A}}\sup_{\beta\in\mathscr{B}}\left[L^{\alpha\beta} u-f^{\alpha\beta}\right] & = 0 &&\text{in }\Omega,\\
u & = 0 && \text{on }\partial\Omega,
\end{aligned}
\end{equation}
where $\Omega$ is a nonempty bounded convex polytopal open set in $\mathbb{R}^\dim$, $\dim\in\{2,3\}$, where $\mathscr{A}$ and $\mathscr{B}$ are nonempty compact metric spaces, and where the second-order nondivergence form elliptic operators $L^{\alpha\beta}$, $\alpha\in\mathscr{A},\beta\in\mathscr{B}$, are defined in~\eqref{eq:Lab_operators} below.
It is equally possible to consider Isaacs equations with the reverse order of the infimum and supremum in~\eqref{eq:isaacs}.
Isaacs equations arise in models of two-player stochastic differential games.
If $\mathscr{A}$ is a singleton set, then the Isaacs equation~\eqref{eq:isaacs} reduces to a Hamilton--Jacobi--Bellman (HJB) equation for the value function of the associated stochastic optimal control problem~\cite{FlemingSoner06}.
These equations find applications in a wide range of fields, such as engineering, energy, finance and computer science.
HJB and Isaacs equations are important examples of~\emph{fully nonlinear} partial differential equations (PDE), where the nonlinearity includes the second-order partial derivatives of the unknown solution, thereby prohibiting standard approaches via weak formulations that are commonly employed for divergence-form elliptic problems.
Several other important nonlinear PDE can be reformulated as Isaacs or HJB equations, including for instance the Monge--Amp\`ere equation~\cite{FengJensen17,Krylov87}; see also~\cite{Kawecki2018a}.
There still remain significant challenges in the design and analysis of stable, efficient and accurate numerical methods for fully nonlinear PDE such as~\eqref{eq:isaacs}.
Numerical methods that enjoy a discrete maximum principle can be shown to converge to the exact solution, in the sense of viscosity solutions, under rather general conditions which in particular allow the treatment of possibly degenerate elliptic problems~\cite{Souganidis91,CrandallIshiiLions92,KuoTrudinger1990,KushnerDupuis01}.
However, it is well known that the need for a discrete maximum principle incurs significant costs in terms of computational efficiency, order of accuracy, flexibility of the grids, and locality of the stencils for strongly anisotropic diffusions~\cite{CrandallLions96,Kocan95,MotzkinWasow53}.
We refer the reader to \cite{DebrabantJakobsen13,FengJensen17,JensenS13,NochettoZhang18,SalgadoZhang19} for recent results and further discussion on this class of numerical methods.
Recently there has been significant interest in the design and analysis of methods that do not require discrete maximum principles for fully nonlinear PDE.
However, designing provably stable and convergent methods without a discrete maximum principle remains generally challenging.
In the series of papers~\cite{SS13,SS14,SS16}, this obstacle was overcome in the context of fully nonlinear HJB equations that satisfy the Cordes condition~\cite{Cordes1956,MaugeriPalagachevSoftova00}, which is an algebraic condition on the coefficients of the differential operator.
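For operators with lower-order terms of the form $L v = a\colon\! D^2 v + b\cdot\nabla v - c\,v$ with $c\geq 0$, a representative form of this condition (cf.~\cite{SS14}) requires the existence of constants $\lambda>0$ and $\varepsilon\in(0,1)$ such that, uniformly over the control parameters,
\begin{equation*}
\frac{\lvert a\rvert^2+\lvert b\rvert^2/(2\lambda)+(c/\lambda)^2}{\left(\mathrm{tr}\, a+c/\lambda\right)^2}\leq \frac{1}{\dim+\varepsilon} \quad \text{a.e.\ in } \Omega,
\end{equation*}
where $\lvert a\rvert$ denotes the Frobenius norm of $a$.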
In particular, for fully nonlinear HJB equations on convex domains with Cordes coefficients, existence and uniqueness of the strong solution in $H^2(\Omega)\cap H^1_0(\Omega)$ was proved in~\cite{SS14} using a variational reformulation in terms of a strongly monotone operator equation.
It was then shown in~\cite{SS13,SS14} that the structure of the continuous problem can be preserved under discretization, forming the basis for a provably stable $hp$-version discontinuous Galerkin (DG) finite element method (FEM), with stability achieved in a mesh-dependent $H^2$-type norm, and with convergence rates that are optimal with respect to the mesh size and only half an order suboptimal with respect to the polynomial degree, under suitable regularity assumptions.
Moreover, the method was shown to be stable for general shape-regular simplicial and parallelepipedal meshes in arbitrary dimensions, thus opening the way towards adaptive refinements.
These results were then extended to the parabolic setting in~\cite{SS16}.
This approach has sparked significant recent activity exploring a range of directions, including $H^2$-conforming and mixed methods~\cite{Gallistl17,Gallistl19}, preconditioners~\cite{S18}, $C^0$-IP methods~\cite{Bleschmidt19,Kawecki19c,NeilanWu19}, curved elements~\cite{Kawecki19b}, and other types of boundary conditions~\cite{Gallistl19b,Kawecki19}.
Note that in the context of these problems, DG and $C^0$-IP methods are examples of nonconforming methods, since the appropriate functional setting is in $H^2$-type spaces.
In~\cite{KaweckiSmears20}, we provide a unified analysis of \emph{a posteriori} and \emph{a priori} error bounds for a wide family of DG and $C^0$-IP methods, where we also show that the original method of~\cite{SS13,SS14}, along with many related variants, are quasi-optimal in the sense of near-best approximations without any additional regularity assumptions, along with convergence in the small mesh-limit for minimal regularity solutions.
We are interested here in \emph{adaptive} methods for Isaacs and HJB equations based on successive mesh refinements driven by computable error estimators.
The first work on adaptivity for these problems is due to Gallistl~\cite{Gallistl17,Gallistl19}, who proved convergence of an adaptive scheme for some $C^1$-conforming and mixed method approximations.
In particular, the analysis there follows the framework of~\cite{MorinSiebertVeeser08}, where the key tool in the proof of convergence is the introduction of a suitable limit problem posed on a limit space of the adaptive approximation spaces, and a proof of convergence of the numerical solutions to the limit problem.
Note that in the case of nested conforming approximations, the limit space is obtained simply by closure of the sequence of approximation spaces with respect to the norm; however many standard $C^1$-conforming elements, such as Argyris or Hsieh--Clough--Tocher (HCT) elements, do not lead to nested spaces in practice.
More broadly, the analysis of adaptive methods for Isaacs and HJB equations is still in its infancy, and the analysis of rates of convergence of the adaptive algorithms remains open.
Even in the case of linear divergence-form equations, the construction and analysis of the corresponding limit spaces for adaptive nonconforming methods is less obvious than for the conforming methods, and this was only recently addressed by Kreuzer \& Georgoulis in~\cite{KreuzerGeorgoulis18} for DGFEM discretizations of divergence-form second-order elliptic equations.
Their approach has been extended to $C^0$-IP methods for the biharmonic equation in~\cite{DominincusGaspozKreuzer19}; we refer the reader to these references for further discussion of the literature on adaptivity for DGFEM for other PDE.
An advantage of the approach of~\cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18} is that the analysis encompasses all choices of the penalty parameters that are sufficient for stability of the methods.
Note that a further difficulty for the analysis of adaptive methods for both the biharmonic problem in~\cite{DominincusGaspozKreuzer19} and also for the fully nonlinear HJB and Isaacs equations considered here is the general absence of a sufficiently rich $H^2$-conforming subspace for DG and $C^0$-IP methods, which prevents a range of techniques employed in $H^1$-type settings~\cite{KarakashianPascal07,KreuzerGeorgoulis18}.
In this paper, we analyse in a single framework a broad family of DG and $C^0$-IP methods that are based on the original method of \cite{SS13,SS14} and recent variants.
These methods have significant advantages over $C^1$-conforming elements in terms of practicality, flexibility and computational cost.
They also require fewer unknowns than mixed methods.
We prove the plain convergence of a class of adaptive DG and $C^0$-IP methods on conforming simplicial meshes in two and three space dimensions for fixed but arbitrary polynomial degrees greater than or equal to two, and for all choices of penalty parameters that are sufficient for stability of the discrete problems.
Similar to~\cite{DominincusGaspozKreuzer19,Gallistl17}, the only condition on the marking strategy is that the set of elements marked for refinement at each step must include the element with maximum error estimator; in practice this allows for all relevant marking strategies.
In addition, we make several wider contributions to the general analysis of adaptive nonconforming methods in order to overcome some critical challenges appearing in the analysis, as we now explain.
The bedrock of our strategy for proving convergence of the adaptive methods is in the spirit of monotone operator theory: by showing weak precompactness in a suitable sense for the bounded sequence of numerical solutions, and by showing the asymptotic consistency of the numerical scheme, we use a \emph{strong times weak convergence} argument and the strong monotonicity of the problem to turn weak convergence of subsequences of numerical solutions into strong convergence of the whole sequence to the solution of the limit problem.
However, this step rests upon a proof that the weak limits of bounded sequences of finite element functions indeed belong to the correct limit space, which, in the existing approaches of~\cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18}, requires a proof that the weak limit can also be approximated by a strongly convergent sequence of finite element functions.
Note that this is handled in~\cite{DominincusGaspozKreuzer19} for piecewise quadratic $C^0$-IP methods in two space dimensions using rather specific relations between the degrees of freedom of quadratic $C^0$-Lagrange elements and 4th-order HCT elements.
However, the extension to DG methods represents a significant challenge, which we resolve here in a unified way for both DG and $C^0$-IP methods in both two and three space dimensions.
A key ingredient of our analysis is a novel approach to the construction and analysis of the limit spaces, namely we provide intrinsic characterizations of the limit spaces, without reference to strongly approximating sequences of finite element functions.
This constitutes a foundational change from~\cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18} in terms of how we approach the analysis.
In particular, starting in Section~\ref{sec:limspace}, we define the limit spaces, along with some related more general first- and second-order spaces, directly via characterizations of the distributional derivatives of the function and its gradient and via appropriate integrability properties, see~Definitions~\ref{def:H1limitspaces},~\ref{def:HD_def} and~\ref{def:limit_space} of Section~\ref{sec:limspace} below.
This is done in the spirit of the definition of Sobolev spaces in terms of weak derivatives.
Some further benefits of this approach are significant simplifications in the theory, especially with regard to completeness of the spaces and weak precompactness of bounded sequences of finite element functions, as well as a broader understanding of the nature of the limit spaces.
We stress that this approach is by no means limited to HJB and Isaacs equations, and it is of general interest to the analysis of nonconforming adaptive methods for more general problems.
Our intrinsic approach to the limit spaces ultimately connects to~\cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18} since we also prove that the functions in the limit spaces are also limits of strongly converging sequences of finite element functions, see Theorem~\ref{thm:limit_space_characterization}.
This requires addressing a particular fundamental difficulty in the case of DG methods, as we now explain.
For DG methods, the limit space can be seen as a specific subspace of $SBV^2(\Omega)$, where $SBV^2(\Omega)$ denotes the space of functions of special bounded variation \cite{DeGiorgiAmbrosio88} with gradient density also of special bounded variation, see e.g.\ \cite{FonsecaLeoniParoni05} for a precise definition.
A surprising result due to~\cite{FonsecaLeoniParoni05}, based on an earlier result from~\cite{Alberti1991}, is that in general there exist functions in $SBV^2(\Omega)$ with nonsymmetric Hessians, and it is easy to see that such functions cannot be strong limits of finite element functions in the required sense.
One of our key results here is that the intrinsic properties of the limit space, in particular the integrability properties and the structure of the jump sets, are sufficient to guarantee the symmetry of the Hessians and thereby rule out such pathological functions.
The key step in the analysis is an approximation result, namely the density of the subspace of functions whose jumps are supported on only finitely many of the never-refined faces, see~Theorem~\ref{thm:finite_approx} below, which we use to prove the symmetry of the Hessians of these functions in~Corollary~\ref{cor:H2_omm_restriction}.
These results are obtained without \emph{a priori} knowledge of the existence of strongly convergent sequences of finite element functions, and thus resolve the challenge highlighted above.
The paper is organised as follows. Section~\ref{sec:notation} sets the notation and defines the DG and $C^0$-IP finite element spaces.
In Section~\ref{sec:var_fem} we state our main assumptions on the problem~\eqref{eq:isaacs}, and recall some well-posedness results from~\cite{SS14,KaweckiSmears20}.
Section~\ref{sec:var_fem} then introduces the family of adaptive DG and $C^0$-IP methods that are considered, and states our main result on convergence of the adaptive algorithm in Theorem~\ref{thm:main}.
In Section~\ref{sec:limspace} we study the limit spaces as described above, and in Section~\ref{sec:limit_problem_proof} we introduce the limit problem, and prove our main result on the convergence of the adaptive algorithm.
\section{Notation}\label{sec:notation}
Let $\Omega\subset \mathbb{R}^\dim$ be a bounded convex polytopal open set in $\mathbb{R}^\dim$, $\dim\in\{2,3\}$.
For a Lebesgue measurable set $\omega \subset \mathbb{R}^\dim$, let $\abs{\omega}$ denote its Lebesgue measure, and let $\diam(\omega)$ denote its diameter. The $L^2$-norm of functions over $\omega$ is denoted by $\norm{\cdot}_{\omega}$.
For two vectors $\bm{v}$ and $\bm{w}\in \mathbb{R}^\dim$, let $\bm{v}\otimes\bm{w}\in \mathbb{R}^{\dim\times\dim}$ be defined by $(\bm{v}\otimes\bm{w})_{ij}=\bm{v}_i \bm{w}_j$.
Let~$\{\cT_k\}_{k\in\mathbb{N}}$ be a shape-regular sequence of conforming simplicial meshes on $\Omega$. We have in mind sequences of meshes $\{\cT_k\}_{k\in\mathbb{N}}$ that are obtained by successive refinements without coarsening from an initial mesh $\mathcal{T}_1$. More precisely, we assume the framework of~\cite{MorinSiebertVeeser08} of \emph{unique quasi-regular element subdivisions.}
The adaptive process that determines the mesh refinement is presented in Section~\ref{sec:var_fem} below. For real numbers $a$ and $b$, we write $a\lesssim b$ if there exists a constant $C$ such that $a\leq C b$, where $C$ depends only on the dimension $\dim$, the domain $\Omega$, and on the shape-regularity of the meshes and on the polynomial degrees $p$ and $q$ defined below, but is otherwise independent of all other quantities. We write $a\eqsim b$ if and only if $a\lesssim b$ and $b\lesssim a$.
For each $k\in\mathbb{N}$, let $\cF_k$ denote the set of $(\dim-1)$-dimensional faces of the mesh~$\cT_k$, and let $\cF_k^I$ and~$\cF_k^B$ denote the set of internal and boundary faces of $\cT_k$ respectively.
Let $\cS_k$ denote the \emph{skeleton} of the mesh~$\cT_k$, i.e.\ $\cS_k\coloneqq\bigcup_{F\in\cF_k} F$, and let $\mathcal{S}^I_k\coloneqq \bigcup_{F\in\Fk^I}F$ denote the internal skeleton of $\cT_k$.
For each $F\in\cF_k$, $k\in\mathbb{N}$, let $\bm{n}_F$ be a fixed choice of unit normal vector to $F$, where the choice of unit normal must be independent of $k$ and solely dependent on $F$.
If $F$ is a boundary face then $\bm{n}_F$ is chosen to be the outward normal to~$\Omega$.
In a slight abuse of notation, we shall usually drop the subscript and simply write $\bm{n}$ when there is no possibility of confusion.
For each $K\in \cT_k$, $k\in\mathbb{N}$, let $h_K \coloneqq \abs{K}^{\frac{1}{\dim}}$; note that shape-regularity of the meshes imply that $h_K \eqsim \diam(K)$.
For each $F\in\cF_k$, let $h_F \coloneqq \left(\mathcal{H}^{\dim-1}(F)\right)^{\frac{1}{\dim-1}}$, where $\mathcal{H}^{\dim-1}$ denotes the $(\dim-1)$-dimensional Hausdorff measure.
Shape-regularity also implies that $h_K\eqsim h_F$ for any element $K\in\cT_k$ and any face $F\in\cF_k$ contained in $K$.
Similarly, shape-regularity implies that $h_F\eqsim \diam(F)$ for all $F\in\cF_k$, $k\in\mathbb{N}$.
For each $k\in\mathbb{N}$, we define the global mesh-size function $h_k\colon \overline{\Omega}\rightarrow\mathbb{R}$ by $h_k|_{K^\circ}=h_K$ for each $K\in\cT_k$, where $K^\circ$ denotes the interior of $K$, and $h_k|_F=h_F$ for each $F\in\cF_k$.
The functions $\{h_k\}_{k\in\mathbb{N}}$ are uniformly bounded in $\Omega$ and are only defined up to sets of zero $\mathcal{H}^{\dim-1}$-measure, which will be sufficient for our purposes.
We say that two elements are neighbours if they have nonempty intersection.
For each $K\in \cT_k$ and $j\in \mathbb{N}_0$, we define the set $N_k^j(K)$ of $j$-th neighbours of $K$ recursively by setting $N_k^0(K) \coloneqq \{K\}$, and then defining $N_k^j(K)$ as the set of all elements in $\cT_k$ that are neighbours of at least one element in $N_k^{j-1}(K)$.
For the case $j=1$ we drop the superscript and write $N_k^1(K)=N_k(K)$.
It will be frequently convenient to use a shorthand notation for integrals over collections of elements and faces of the meshes.
For collections of elements $\mathcal{E}\subset \bigcup_{k\in\mathbb{N}}\cT_k$ that are disjoint up to sets of $\dim$-dimensional Lebesgue measure zero, we write $\int_{\mathcal{E}} \coloneqq \sum_{K\in\mathcal{E}}\int_K$, where the measure of integration is the Lebesgue measure on $\mathbb{R}^\dim$.
Likewise, if $\mathcal{G}\subset \bigcup_{k\in\mathbb{N}} \cF_k$ is a collection of faces that are disjoint up to sets of zero $\mathcal{H}^{\dim-1}$-measure, then we write $\int_{\mathcal{G}} \coloneqq \sum_{F\in\mathcal{G}}\int_F$, where the measure of integration is the $(\dim-1)$-dimensional Hausdorff measure on $\mathbb{R}^\dim$.
Note that in the case where $\mathcal{E}$ or $\mathcal{G}$ are countably infinite, the notation $\int_{\mathcal{E}}$ and $\int_{\mathcal{G}}$ represent infinite series whose convergence will be determined as necessary. We do not write the measure of integration as there is no risk of confusion.
\subsection{Derivatives and traces of functions of bounded variation.}\label{sec:BV}
We recall some known results about spaces of functions of bounded variation~\cite{AmbrosioFuscoPallara00,EvansGariepy2015}. For an open set $\omega\subset\Omega$, let $BV(\omega)$ denote the space of real-valued functions of bounded variation on $\omega$.
Recall that $BV(\omega)$ is a Banach space equipped with the norm $\norm{v}_{BV(\omega)}\coloneqq\norm{v}_{L^1(\omega)}+\abs{Dv}(\omega)$, where $\abs{Dv}(\omega)$ denotes the total variation of its distributional derivative $Dv$ over $\omega$, defined by $\abs{Dv}(\omega)\coloneqq \sup\left\{\int_\omega v \Div \bm{\phi}\colon \bm{\phi}\in C^\infty_0(\omega;\mathbb{R}^\dim),\;\|\bm{\phi}\|_{C(\overline{\omega};\mathbb{R}^\dim)}\leq 1 \right\}$.
To simplify the notation below, we also define $BV(\overline{\omega})\coloneqq BV(\omega)$ where $\overline{\omega}$ is the closure of $\omega$.
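For orientation, we recall the standard example of Sobolev functions: if $v\in W^{1,1}(\omega)$, then $Dv$ is absolutely continuous with respect to the Lebesgue measure with density equal to the weak gradient, so that
\begin{equation*}
\abs{Dv}(\omega)=\int_\omega \abs{\nabla v},
\end{equation*}
and hence $W^{1,1}(\omega)$ is continuously embedded in $BV(\omega)$.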
In the following, we shall frequently have to handle functions of bounded variation that are typically only piecewise regular over different and possibly infinite subdivisions of~$\Omega$, and the analysis is greatly simplified by adopting a notation that unifies and generalises various familiar concepts of weak and piecewise derivatives.
In particular we follow the notation of~\cite{FonsecaLeoniParoni05}.
For any $v\in BV(\Omega)$, the distributional derivative $Dv$ can be identified with a Radon measure on $\Omega$ that can be decomposed into the sum of an absolutely continuous part with respect to Lebesgue measure, and a singular part; the density of the absolutely continuous part of $D v$ with respect to Lebesgue measure is denoted by
\begin{equation}\label{eq:nabla_notation}
\nabla v =(\nabla_{x_1} v,\dots, \nabla_{x_\dim} v) \in L^1(\Omega;\mathbb{R}^\dim).
\end{equation}
Following \cite{FonsecaLeoniParoni05}, for functions $v\in BV(\Omega)$ such that $\nabla v \in BV(\Omega;\mathbb{R}^\dim)$, we define $\nabla^2 v$ as the density of the absolutely continuous part of $D(\nabla v)$, the distributional derivative of $\nabla v$; in particular,
\begin{equation}\label{eq:Hessian_notation}
\begin{aligned}
\nabla^2 v \coloneqq \nabla(\nabla v) \in L^1(\Omega;\mathbb{R}^{\dim\times\dim}), &&&
(\nabla^2 v)_{ij} \coloneqq \nabla_{x_j} (\nabla_{x_i} v) \quad \forall i,\, j\in \{1,\dots,\dim\}.
\end{aligned}
\end{equation}
We then define the Laplacian $\Delta v = \Tr \nabla^2 v$, where $\Tr \bm{M}\coloneqq\sum_{i=1}^\dim \bm{M}_{ii}$ is the matrix trace for $\bm{M}\in\mathbb{R}^{\dim\times\dim}$.
We emphasize that $\nabla^2 v$ is defined in terms of $D(\nabla v)$ and not $D^2v$, the second distributional derivative of $v$, since in general $D^2 v$ is not necessarily a Radon measure.
Crucially, there is no conflict of notation here when considering Sobolev regular functions, since $\nabla v$ coincides with the weak gradient of $v$ if $v\in W^{1,1}(\Omega)$ and $\nabla^2 v$ coincides with the weak Hessian of $v$ if $v\in W^{2,1}(\Omega)$.
Moreover, for functions from the DG and $C^0$-IP finite element spaces defined shortly below, it is easy to see that the gradient and Hessian as defined above coincide with the piecewise gradient and Hessian over elements of the mesh.
Therefore, the above notation \emph{unifies} and \emph{generalises} the above notions of derivatives.
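To illustrate the notation in a simple case (this example is given only for illustration and is not taken from the references), consider $v=\chi_{\{x_1>0\}}$, the characteristic function of a half-space, restricted to $\Omega$. Then $v\in BV(\Omega)$ with
\begin{equation*}
Dv = \bm{e}_1\, \mathcal{H}^{\dim-1}\llcorner \left(\{x_1=0\}\cap\Omega\right), \qquad \nabla v = 0 \quad \text{a.e.\ in }\Omega,
\end{equation*}
so the singular part of $Dv$ records the unit jump across the interface, whereas the density $\nabla v$ of the absolutely continuous part vanishes identically.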
Furthermore, the more general notions of gradients and Hessians defined above play a key role in the formulation of intrinsic definitions of the limit spaces of the sequence of finite element spaces given in Section~\ref{sec:limspace}.
\paragraph{Jump and average operators.}
We recall some known results concerning one-sided traces of functions of bounded variation.
It follows from \cite[Theorems~5.6 \& 5.7]{EvansGariepy2015} that for each interior face $F\in\Fk^I$, $k\in\mathbb{N}$, there exist bounded one-sided trace operators $\tau_F^+\colon BV(\Omega)\rightarrow L^1(F)$ and $\tau_F^-\colon BV(\Omega)\rightarrow L^1(F)$, where the notation $\tau_F^\pm$ is determined by the chosen unit normal $\bm{n}_F$ so that $\tau_F^-$ and $\tau_F^+$ are the traces from the sides of $F$ for which $\bm{n}_F$ is outward pointing and inward pointing, respectively.
If $F$ is a boundary face, we only define its interior trace $\tau_F^-$, where it is recalled that $\bm{n}_F$ is outward pointing to $\Omega$.
In particular, \cite[Theorem~5.7]{EvansGariepy2015} shows that, for any $v\in BV(\Omega)$, we have $\tau_F^\pm v(x) =\lim_{r\rightarrow 0} \frac{1}{\abs{B_{\pm}(x,r)}} \int_{B_{\pm}(x,r)} v$ for $\mathcal{H}^{\dim-1}$-a.e.\ $x\in F$, where $B_{\pm}(x,r)\coloneqq\{y\in \Omega\colon \abs{x-y}<r,\, (y-x)\cdot \bm{n}_F \in\mathbb{R}_{\pm} \}$ are the half-balls centred at $x$ of radius $r$ determined by the orientation of $\bm{n}_F$, and where $\mathbb{R}_+$ and $\mathbb{R}_-$ denote the sets of nonnegative and nonpositive real numbers, respectively.
Therefore, the values of the traces do not depend on a choice of surrounding element from any particular mesh.
However, the $L^1$-norm of traces on faces can be bounded in terms of the BV-norm on elements as follows. For each element $K\in \cT_k$, $k\in\mathbb{N}$, let $\tau_{\p K} \colon BV(K)\rightarrow L^1(\partial K)$ denote the corresponding trace operator from $K$ to $\partial K$.
For instance, if $F$ is a face and if $K$ is an element containing $F$ for which $\bm{n}_F$ is outward pointing, then $\norm{\tau_F^- v}_{L^1(F)}\leq \norm{\tau_{\p K} v}_{L^1(\partial K)}\lesssim \abs{Dv}(K)+\frac{1}{h_K}\norm{v}_{L^1(K)}$ for all $v\in BV(K)$; a similar bound holds for $\tau_F^+$ if $\bm{n}_F$ is inward pointing with respect to $K$.
In other words, the $L^1$-norm of the appropriate one-sided trace is bounded by the BV-norm of a function over the element containing the face.
We now define jump and average operators over faces.
For $v\in BV(\Omega)$, we define the jump $\jump{v}_F\in L^1(F)$ and average $\avg{v}_F\in L^1(F)$ for each $F\in\cF_k$ by
\begin{equation}\label{eq:jumpavg}
\begin{aligned}
\avg{v}_F&\coloneqq \frac{1}{2}\left(\tau_{F}^+v+\tau_{F}^- v\right), & \jump{v}_F & \coloneqq \tau_{F}^-v -\tau_{F}^+v, &\forall F\in\Fk^I,\\
\avg{v}_F &\coloneqq \tau_{F}^- v, & \jump{v}_F & \coloneqq \tau_F^{-} v, & \forall F\in\Fk^B.
\end{aligned}
\end{equation}
The jump and average operators are further extended to vector fields in $BV(\Omega;\mathbb{R}^\dim)$ component-wise.
Although the sign of $\jump{v}_F$ depends on the choice of $\bm{n}_F$, in subsequent expressions the jumps will appear either under absolute value signs or in products with $\bm{n}_F$, so that the overall resulting expression is uniquely defined and independent of the choice of $\bm{n}_F$.
When no confusion is possible, we will often drop the subscripts and simply write $\avg{\cdot}$ and~$\jump{\cdot}$.
\paragraph{Tangential derivatives.} For $F\in \cF_k$ and a sufficiently regular function $w\colon F\mapsto \mathbb{R}$, let $\nabla_T w$ denote the tangential (surface) gradient of $w$, and let $\Delta_T w$ denote the tangential Laplacian of $w$. We do not indicate the dependence on $F$ in order to alleviate the notation, as it will be clear from the context. Since all faces considered here are flat, these tangential differential operators commute with the trace operator for sufficiently regular functions, see~\cite{SS13} for further details.
\subsection{Finite element spaces.}\label{sec:fem_spaces_def}
For a nonnegative integer $p$, let $\mathbb{P}_p$ be the space of polynomials of total degree at most $p$.
In the following, let $p\geq 2$ denote a fixed choice of polynomial degree to be used for the finite element approximations.
We then define the finite element spaces $V_k^s$, $s\in \{0,1\}$, by
\begin{equation}\label{sec:fem_spaces}
\begin{aligned}
V_k^0 &\coloneqq \{v\in L^2(\Omega):v|_K\in \mathbb{P}_p\;\forall K\in\cT_k\},
&V_k^1&\coloneqq V_{k}^0 \cap H^1_0(\Omega).
\end{aligned}
\end{equation}
Therefore, the spaces $V_k^0$ and $V_k^1$ correspond to DG and $C^0$-IP spaces on $\cT_k$, respectively. Clearly $V_k^1$ is a subspace of $V_k^0$.
As mentioned above in Section~\ref{sec:BV}, for any $v_k\in V_k^s$, the piecewise gradient of $v_k$ over $\cT_k$ coincides with $\nabla v_k$, the density of the absolutely continuous part of its distributional derivative $Dv_k$. Similarly, the piecewise Hessian of $v_k$ over $\cT_k$ coincides with $\nabla^2 v_k$, the density of the absolutely continuous part of $D(\nabla v_k)$.
\paragraph{Norms.}
We equip the spaces $V_k^s$ for each $s\in\{0,1\}$ with the same norm $\normk{\cdot}\colon V_k^s\rightarrow \mathbb{R}$ and jump seminorm $\absJ{\cdot}\colon V_k^s\rightarrow \mathbb{R}$ defined by
\begin{subequations}\label{eq:norm_def}
\begin{align}
\normk{v}^2\coloneqq \int_{\Omega}\left[ \abs{\nabla^2 v}^2+\abs{\nabla v}^2 + \abs{v}^2\right] + \absJ{v}^2,
\\ \absJ{v}^2\coloneqq \int_{\mathcal{F}_k^I}h_k^{-1}\abs{\jump{\nabla v}}^2+\int_{\cF_k}h_k^{-3}\abs{\jump{v}}^2,
\end{align}
\end{subequations}
for all $v\in V_k^s$.
Although $V_k^0$ and $V_k^1$ are equipped with the same norm, we remark that for any $v\in V_k^1$, the terms in~\eqref{eq:norm_def} involving the jumps $\jump{v}$ over mesh faces vanish identically owing to $H^1_0$-conformity, whilst the terms involving the jumps $\jump{\nabla v}$ of first derivatives over internal mesh faces can be simplified to merely jumps of normal derivatives. However, to give a unified treatment of both cases $s=0$ and $s=1$, we will not make explicit use of these specific simplifications for the case $s=1$.
\paragraph{Lifting operators.} Let $q$ denote a fixed choice of polynomial degree such that $q\geq p-2$, which implies that $q\geq 0$ since $p\geq 2$. Let $V_{k,q}^0\coloneqq \{w \in L^2(\Omega)\colon w|_K \in \mathbb{P}_q \;\forall K \in\cT_k\}$ denote the space of piecewise polynomials of degree at most $q$ over $\cT_k$.
For each face $F\in\cF_k$, the lifting operator $r_k^F\colon L^2(F)\rightarrow V_{k,q}^0$ is defined by $\int_\Omega r_k^F(w) \varphi_k = \int_F w \{\varphi_k\}$ for all $\varphi_k \in V_{k,q}^0$.
Using inverse inequalities for polynomials, it is easy to see that $\norm{r_k^F(w)}_\Omega\lesssim h_F^{-1/2} \norm{w}_F$ for any $w\in L^2(F)$ and any $F\in\cF_k$.
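For the reader's convenience, we sketch the standard argument behind this bound. Taking $\varphi_k = r_k^F(w)$ in the definition of the lifting, and applying the Cauchy--Schwarz inequality together with the discrete trace inequality $\norm{\varphi_k}_{F}\lesssim h_F^{-1/2}\norm{\varphi_k}_{\Omega}$ for piecewise polynomials on the elements containing $F$, we obtain
\begin{equation*}
\norm{r_k^F(w)}_\Omega^2 = \int_F w\,\{r_k^F(w)\} \leq \norm{w}_F\,\norm{\{r_k^F(w)\}}_F \lesssim h_F^{-1/2}\norm{w}_F\norm{r_k^F(w)}_\Omega,
\end{equation*}
and the claimed bound follows after dividing through by $\norm{r_k^F(w)}_\Omega$.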
Next, for each $F\in \cF_k$, we define $\bm{r}_k^F\colon L^2(F;\mathbb{R}^\dim)\rightarrow [V_{k,q}^0]^{\dim\times\dim} $, where $[V_{k,q}^0]^{\dim\times\dim}$ denotes the space of $\dim\times\dim$-matrix valued functions that are component-wise in $V_{k,q}^0$, as follows.
For all $\bm{w}\in L^2(F;\mathbb{R}^\dim)$ and all $i,\,j=1,\dots,\dim$, if $F\in \Fk^I$ is an interior face, then let $[\bm{r}_k^F(\bm{w})]_{ij} \coloneqq r_k^F(\bm{w}_i \bm{n}_j)$ where $\bm{n}=\bm{n}_F$ is the chosen unit normal for $F$.
Otherwise, if $F\in\Fk^B$ is a boundary face then let $[\bm{r}_k^F(\bm{w})]_{ij} \coloneqq r_k^F((\bm{w}_T)_i \bm{n}_j)$, where $\bm{w}_T=\bm{w}-(\bm{w}\cdot\bm{n})\bm{n}$ denotes the tangential component of $\bm{w}$ on $F$.
In other words, on boundary faces, only the tangential component of~$\bm{w}$ is considered in the lifting $\bm{r}_k^F(\bm{w})$.
It follows that, for any $\bm{\varphi}_k \in [V_{k,q}^0]^{\dim\times\dim}$,
\begin{equation}\label{eq:lifting_identity}
\int_\Omega \bm{r}_k^F(\bm{w}):\bm{\varphi}_k =
\begin{cases}
\int_F (\bm{w}\otimes\bm{n}):\{\bm{\varphi}_k\} = \int_F \bm{w}\cdot\{\bm{\varphi}_k \bm{n}\} &\text{if }F\in\Fk^I,\\
\int_F (\bm{w}_T\otimes\bm{n}):\{\bm{\varphi}_k\}= \int_F \bm{w}_T\cdot\{\bm{\varphi}_k \bm{n}\} &\text{if }F\in\Fk^B.
\end{cases}
\end{equation}
We then define the global lifting operator $\bm{r}_k$ and the lifted Hessian operator $\bm{H}_k$, which both map $V_k^s$, $s\in\{0,1\}$, into $L^2(\Omega;\mathbb{R}^{\dim\times\dim})$, as well as the lifted Laplacian operator~$\Delta_k$, by
\begin{align}\label{eq:lifted_Hessian}
\bm{r}_k \coloneqq \sum_{F\in\cF_k}\bm{r}_k^F, \qquad \bm{H}_k v_k \coloneqq \nabla^2 v_k - \bm{r}_k(\jump{\nabla v_k}), \qquad \Delta_k v_k \coloneqq \Tr \bm{H}_k v_k,
\end{align}
where it is recalled that $\Tr \bm{M}$ is the matrix trace for any $\bm{M}\in\mathbb{R}^{\dim\times\dim}$.
The operators defined above then satisfy the following bounds
\begin{equation}
\begin{aligned}
\norm{\bm{r}_k(\jump{\nabla v_k})}_\Omega\lesssim \absJ{v_k}, &&& \norm{\bm{H}_k v_k}_\Omega + \norm{\Delta_k v_k}_\Omega \lesssim \normk{v_k} &&& \!\!\forall v_k\inV_k^s.
\end{aligned}
\end{equation}
Using~\eqref{eq:lifting_identity}, it is easy to see that $\Tr\bm{r}_k^F(\bm{w})=0 $ for any $\bm{w}\in L^2(F)$ and when~$F\in\Fk^B$ is a boundary face, since $\Tr (\bm{w}_T\otimes\bm{n}) = \bm{w}_T\cdot\bm{n} =0$ as $\bm{w}_T$ is tangential to~$F$. Thus only interior face liftings contribute to $\Delta_k v_k$.
\section{Variational formulation of the problem and adaptive finite element approximation}\label{sec:var_fem}
\subsection{Variational formulation of the problem}\label{sec:variational}
In order to focus on the most important aspects of analysis, we shall restrict our attention to Isaacs and HJB equations without lower order terms, although we note that the approach we consider here easily accommodates problems with lower order terms, see~\cite{KaweckiSmears20,SS14,SS16}.
More precisely, let the real valued functions $a_{ij}=a_{ji}$ and $f$ belong to $C(\overline{\Omega}\times\mathscr{A}\times\mathscr{B} )$ for each $i,j=1,\ldots,\dim$.
For each $(\alpha,\beta)\in\mathscr{A}\times\mathscr{B}$, we then define the matrix-valued function $a^{\alpha\beta} \colon \Omega\rightarrow \mathbb{R}^{\dim\times\dim}$ by $a^{\alpha\beta}_{ij}(x)=a_{ij}(x,\alpha,\beta)$ for all $x\in \Omega$ and $i,\,j=1,\dots,\dim$.
The functions $f^{\alpha\beta}$ are defined similarly for all $\alpha\in\mathscr{A}$ and $\beta\in\mathscr{B}$.
Then, for each $\alpha\in\mathscr{A}$ and $\beta\in\mathscr{B}$, the operators $L^{\alpha\beta}:H^2(\Omega)\to L^2(\Omega)$ are defined by
\begin{equation}\label{eq:Lab_operators}\begin{aligned}
L^{\alpha\beta} v &= a^{\alpha\beta} {\colon}\nabla^2 v && \forall v\in H^2(\Omega).
\end{aligned}\end{equation}
The nonlinear operator $F\colon H^2(\Omega)\rightarrow L^2(\Omega)$ is then defined as in~\eqref{eq:isaacs}. Note that the compactness of $\overline{\Omega}\times\mathscr{A}\times\mathscr{B}$ and the continuity of the coefficients imply that $F$ is well-defined as a mapping from $H^2(\Omega)$ to $L^2(\Omega)$.
We consider the problem~\eqref{eq:isaacs} in its strong form, i.e.\ to find a solution $u\in H^2(\Omega)\cap H^1_0(\Omega)$ such that $F[u]=0$ pointwise a.e.\ in $\Omega$.
We assume that the problem is uniformly elliptic, i.e.\ there exist positive constants $\underline{\nu}$ and $\overline{\nu}$ such that $\underline{\nu}\abs{\bm{v}}^2\leq \bm{v}^{\top}a^{\alpha\beta}(x) \bm{v} \leq \overline{\nu}\abs{\bm{v}}^2$ for all $\bm{v}\in \mathbb{R}^\dim$, for all $x\in \overline{\Omega}$ and all $(\alpha,\beta)\in\mathscr{A}\times\mathscr{B}$, where $\abs{\bm{v}}$ denotes the Euclidean norm of $\bm{v}$.
Furthermore, we assume the Cordes condition: there exists a $\nu\in(0,1]$ such that
\begin{equation}\label{eq:Cordes}
\begin{aligned}
\frac{\abs{a^{{\alpha\beta}}(x)}^2}{\Tr(a^{{\alpha\beta}}(x))^2}\le\frac{1}{\dim-1+\nu}&&&\forall x \in \overline{\Omega},\quad\forall(\alpha,\beta)\in\mathscr{A}\times\mathscr{B},
\end{aligned}
\end{equation}
where $\abs{a^{{\alpha\beta}}}$ denotes the Frobenius norm of the matrix $a^{\alpha\beta}$.
It is well-known that if $\dim=2$, then uniform ellipticity implies the Cordes condition~\eqref{eq:Cordes}, see e.g.~\cite[Example~2]{SS14}.
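For completeness, we sketch the elementary verification (with $\nu$ below being one admissible, though not necessarily optimal, choice). If $\dim=2$ and the symmetric matrix $a^{\alpha\beta}(x)$ has eigenvalues $\lambda_1,\,\lambda_2\in[\underline{\nu},\overline{\nu}]$, then $\abs{a^{\alpha\beta}}^2=\lambda_1^2+\lambda_2^2$ and $\Tr a^{\alpha\beta}=\lambda_1+\lambda_2$, so~\eqref{eq:Cordes} is equivalent to $\nu\leq 2\lambda_1\lambda_2/(\lambda_1^2+\lambda_2^2)$. The right-hand side is smallest when the ratio of the eigenvalues is most extreme, whence
\begin{equation*}
\nu \coloneqq \frac{2\,\underline{\nu}\,\overline{\nu}}{\underline{\nu}^2+\overline{\nu}^2}\in(0,1]
\end{equation*}
is an admissible choice of the constant in~\eqref{eq:Cordes}.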
In~\cite{SS14,SS16} and later in~\cite{KaweckiSmears20} it was shown that fully nonlinear HJB and Isaacs equations can be reformulated in terms of a renormalized nonlinear operator, as follows.
For each $(\alpha,\beta)\in\mathscr{A}\times\mathscr{B}$, let $\gamma^{{\alpha\beta}}\in C(\overline{\Omega})$ be defined by $\gamma^{{\alpha\beta}}\coloneqq \frac{\Tr a^{{\alpha\beta}} }{\abs{a^{{\alpha\beta}}}^2}$.
Let the renormalised operator~$F_{\gamma} \colon H^2(\Omega)\rightarrow L^2(\Omega)$ be defined by
\begin{equation}\label{eq:Fg_def}
\begin{aligned}
F_{\gamma}[v]\coloneqq \inf_{\alpha\in\mathscr{A}}\sup_{\beta\in\mathscr{B}}\left[\gamma^{{\alpha\beta}}\left(L^{\alpha\beta} v - f^{\alpha\beta}\right)\right] &&&\forall v \in H^2(\Omega).
\end{aligned}
\end{equation}
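As a simple illustration of the renormalization (given here only as an example), suppose that the diffusion coefficients are isotropic, i.e.\ $a^{\alpha\beta}=c^{\alpha\beta} I$ with continuous functions $c^{\alpha\beta}\geq \underline{\nu}>0$. Then $\gamma^{\alpha\beta} = \Tr a^{\alpha\beta}/\abs{a^{\alpha\beta}}^2=1/c^{\alpha\beta}$, and
\begin{equation*}
F_{\gamma}[v]=\inf_{\alpha\in\mathscr{A}}\sup_{\beta\in\mathscr{B}}\left[\Delta v - \frac{f^{\alpha\beta}}{c^{\alpha\beta}}\right] \quad\forall v\in H^2(\Omega),
\end{equation*}
so that the renormalization rescales each operator $L^{\alpha\beta}$ so as to share the Laplacian as a common principal part.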
It is shown in~\cite{KaweckiSmears20}, see also~\cite{SS14}, that the renormalized operator $F_{\gamma}$ is Lipschitz continuous and satisfies the following bounds
\begin{subequations}\label{eq:cordes_ineq}
\begin{align}
\abs{F_{\gamma}[w] - F_{\gamma}[v] - \Delta (w-v) }&\leq \sqrt{1-\nu}\sqrt{|\nabla^2 z|^2+2\lambda|\nabla z|^2+\lambda^2|z|^2},\label{eq:cordes_ineq1}\\
|F_{\gamma}[w]-F_{\gamma}[v]|&\leq \big(1+\sqrt{d+1}\big)\sqrt{|\nabla^2z|^2+2\lambda|\nabla z|^2+\lambda^2|z|^2},\label{eq:cordes_ineq2}
\end{align}
\end{subequations}
for all functions $w$ and $v\in H^2(\omega)$ for any open subset $\omega\subset \Omega$, where $z:=w-v$, and with the above bounds holding pointwise a.e.\ in $\omega$ for any constant $\lambda\geq 0$; in the present setting without lower-order terms, one may simply take $\lambda=0$.
The following Lemma from~\cite{KaweckiSmears20}, which extends earlier results from~\cite{SS14}, states that the equations $F[u]=0$ and $F_{\gamma}[u] =0$ have equivalent respective sets of sub- and supersolutions.
\begin{lemma}[\cite{KaweckiSmears20,SS14}]\label{lem:subsupersolutions}
A function $v\in H^2(\Omega)$ satisfies $F[v]\leq 0$ pointwise a.e.\ in $\Omega$ if and only if $F_{\gamma}[v]\leq 0$ pointwise a.e.\ in~$\Omega$. Furthermore, a function $v\in H^2(\Omega)$ satisfies $F[v]\geq 0$ pointwise a.e.\ in $\Omega$ if and only if $F_{\gamma}[v]\geq 0$ pointwise a.e.\ in~$\Omega$.
\end{lemma}
A particular consequence of Lemma~\ref{lem:subsupersolutions} is that a solution of $F[u]=0$ is equivalently a solution of $F_{\gamma}[u]=0$. Moreover, it was shown in~\cite{SS14} for fully nonlinear HJB equations, and later for Isaacs equations in~\cite{KaweckiSmears20}, that under the above assumptions, there exists a unique strong solution of~\eqref{eq:isaacs}.
\begin{theorem}[\cite{KaweckiSmears20,SS14}]\label{thm:well_posedness}
There exists a unique $u\in H^2(\Omega)\cap H^1_0(\Omega)$ that solves $F[u]=0$ pointwise a.e.\ in $\Omega$, and, equivalently, that solves $F_{\gamma}[u]=0$ pointwise a.e.\ in $\Omega$.
\end{theorem}
In particular, the proof, due to~\cite{SS14}, involves reformulating the equation $F[u]=0$ in terms of a strongly monotone nonlinear operator equation of the form $A(u;v)=0$ for all $v\in H^2(\Omega)\cap H^1_0(\Omega)$, where
\begin{equation}
\begin{aligned}
A(u;v)\coloneqq \int_\Omega F_{\gamma}[u]\Delta v &&&\forall v\in H^2(\Omega)\cap H^1_0(\Omega).
\end{aligned}
\end{equation}
Note that the equivalence of these formulations is a consequence of the bijectivity of the Laplace operator from $H^2(\Omega)\cap H^1_0(\Omega)$ to $L^2(\Omega)$ on the convex domain $\Omega$.
It is then shown in~\cite{SS14,KaweckiSmears20} that $A(\cdot;\cdot)$ is Lipschitz continuous, and also strongly monotone on the space $H^2(\Omega)\cap H^1_0(\Omega)$, i.e.\
\begin{equation}\label{eq:continuous_strong_monotonicity}
\begin{aligned}
\frac{1}{c_{\star}}\norm{w-v}_{H^2(\Omega)}^2 \leq A(w;w-v)-A(v;w-v) &&&\forall w,\, v \in H^2(\Omega)\cap H^1_0(\Omega),
\end{aligned}
\end{equation}
where $c_\star$ in particular depends only on $\dim$, $\diam\Omega$ and $\nu$ from~\eqref{eq:Cordes}.
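For the reader's convenience, we sketch the argument of~\cite{SS14} in the present setting without lower-order terms, where $\lambda=0$ may be taken in~\eqref{eq:cordes_ineq1}. Writing $z\coloneqq w-v$, and using the Cauchy--Schwarz inequality together with the Miranda--Talenti inequality $\norm{\nabla^2 z}_\Omega\leq\norm{\Delta z}_\Omega$, valid for $z\in H^2(\Omega)\cap H^1_0(\Omega)$ on convex domains, we obtain
\begin{equation*}
A(w;z)-A(v;z) = \norm{\Delta z}_\Omega^2 + \int_\Omega \left(F_{\gamma}[w]-F_{\gamma}[v]-\Delta z\right)\Delta z \geq \left(1-\sqrt{1-\nu}\right)\norm{\Delta z}_\Omega^2,
\end{equation*}
and~\eqref{eq:continuous_strong_monotonicity} follows since $\norm{z}_{H^2(\Omega)}\lesssim \norm{\Delta z}_\Omega$ for such $z$.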
Therefore, the existence and uniqueness of a strong solution $u$ follows from the Browder--Minty theorem.
\subsection{Numerical discretizations and error estimators}\label{sec:num_schemes}
For each $k\in\mathbb{N}$, let the bilinear form $S_k\colon V_k^0\timesV_k^0\rightarrow \mathbb{R}$ be defined by
\begin{equation}\label{eq:B_def}
\begin{split}
S_k(w_k,v_k)\coloneqq &\int_\Omega \left[\nabla^2 w_k:\nabla^2 v_k - \Delta w_k \Delta v_k\right]
\\ & + \int_{\Fk^I} \left[\avg{\Delta_T w_k} \jump{\nabla v_k\cdot \bm{n}} + \avg{\Delta_T v_k} \jump{\nabla w_k\cdot \bm{n}} \right] \\
&- \int_{\cF_k} \left[ \nabla_T\avg{\nabla w_k\cdot \bm{n}} \cdot \jump{\nabla_T v_k} + \nabla_T\avg{\nabla v_k\cdot \bm{n}} \cdot \jump{\nabla_T w_k} \right],
\end{split}
\end{equation}
for all $w_k,\,v_k\in V_k^0$.
The bilinear form $S_k(\cdot,\cdot)$ represents a stabilization term in the numerical schemes defined below.
For two positive constant parameters $\sigma$ and $\rho$ to be chosen sufficiently large, let the jump penalisation bilinear form $J_k^{\sigma,\rho}\colon V_k^0\timesV_k^0\rightarrow \mathbb{R}$ be defined by
\begin{equation}\label{eq:jump_pen_def}
\begin{aligned}
J_k^{\sigma,\rho}(w_k,v_k)\coloneqq \int_{\Fk^I} \sigma h_k^{-1} \jump{\nabla w_k}\cdot\jump{\nabla v_k} + \int_{\cF_k} \rho h_k^{-3}\jump{w_k}\jump{v_k},
\end{aligned}
\end{equation}
for all $w_k,\,v_k\in V_k^0$.
For a parameter $\theta\in[0,1]$, let the nonlinear form $A_k\colon V_k^0\timesV_k^0\rightarrow \mathbb{R}$ be defined by
\begin{equation}\label{eq:nonlinear_form}
\begin{aligned}
A_k(w_k;v_k)\coloneqq \int_\Omega F_{\gamma}[w_k]\Delta_k v_k + \theta S_k(w_k,v_k) + J_k^{\sigma,\rho}(w_k,v_k),
\end{aligned}
\end{equation}
for all functions $w_k,\,v_k \in V_k^0$, where we recall that the lifted Laplacian $\Delta_k v_k$ appearing in the first integral on the right-hand side of \eqref{eq:nonlinear_form} is defined in~\eqref{eq:lifted_Hessian}.
The nonlinear form $A_k$ is nonlinear in its first argument, but linear in its second argument.
For a fixed choice of $s\in\{0,1\}$, the numerical scheme is then to find $u_k\in V_k^s$ such that
\begin{equation}\label{eq:num_scheme}
\begin{aligned}
A_k(u_k;v_k) = 0 &&&\forall v_k\in V_k^s.
\end{aligned}
\end{equation}
Since $s\in\{0,1\}$ is fixed, we omit the dependence of $u_k$ on $s$ in the notation, as there is no risk of confusion.
The choice $\theta=1/2$ is based on the method of~\cite{SS13,SS14,SS16}, with the modification that the nonlinear operator is tested against the lifted Laplacian rather than the piecewise Laplacian of test functions. The choice $\theta=0$ and $s=1$ is similar to the method of \cite{NeilanWu19}, again modulo the introduction of the lifted Laplacians for the first integral term.
The lifted Laplacians will play a role later on in the proof of asymptotic consistency of the nonlinear forms $A_k(\cdot;\cdot)$.
\begin{remark}[Simplifications for $C^0$-IP methods]
Note that when considering the restriction of $J_k^{\sigma,\rho}(\cdot,\cdot)$ to $V_k^1\times V_k^1$, the last term on the right-hand side of~\eqref{eq:jump_pen_def} vanishes identically, and we can take $\rho=0$. Furthermore, since the jumps of gradients of functions in $V_k^1$ have vanishing tangential components over the faces of the mesh, the first term on the right-hand side of \eqref{eq:jump_pen_def} can be further simplified to just the jumps in the normal components of the gradient. These simplifications can be useful in computational practice but we retain the general form above in order to present a unified analysis for both DG and $C^0$-IP methods.
\end{remark}
We recall now some basic properties of the numerical scheme that have been shown in previous works, see in particular \cite{KaweckiSmears20} for a complete treatment.
Building on the analysis in~\cite{SS13,SS14}, it was shown in~\cite{KaweckiSmears20} that the parameters $\sigma$ and $\rho$ can be chosen sufficiently large such that $A_k$ is strongly monotone with respect to~$\normk{\cdot}$, i.e.\ such that there is a fixed constant $C_{\mathrm{mon}}>0$ independent of $k$, such that
\begin{equation}\label{eq:strong_monotonicity}
\begin{aligned}
\frac{1}{C_{\mathrm{mon}}}\normk{w_k - v_k}^2 \leq A_k(w_k;w_k-v_k)-A_k(v_k;w_k-v_k) &&&\forall w_k,\,v_k\in V_k^s, \;\forall k\in\mathbb{N}.
\end{aligned}
\end{equation}
It is also straightforward to show from standard techniques along with~\eqref{eq:cordes_ineq2} that the nonlinear form $A_k$ is Lipschitz continuous, i.e.\ there exists a positive constant $C_{\mathrm{Lip}}$, independent of $k$, such that
\begin{equation}\label{eq:Ak_lipschitz}
\begin{aligned}
\abs{A_k(w_k;v_k)-A_k(z_k;v_k)}\leq C_{\mathrm{Lip}} \normk{w_k-z_k}\normk{v_k}&&&\forall w_k,\,z_k,\,v_k\in V_k^0,\;\forall k\in\mathbb{N}.
\end{aligned}
\end{equation}
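For a minimal sketch of the argument: by the Cauchy--Schwarz inequality, the bound~\eqref{eq:cordes_ineq2} with $\lambda=0$, the stability of the liftings, and the bound on $S_k(\cdot,\cdot)$ from Theorem~\ref{thm:b_k_jump_bound} below, the three terms of $A_k$ in~\eqref{eq:nonlinear_form} together satisfy
\begin{equation*}
\abs{\int_\Omega \left(F_{\gamma}[w_k]-F_{\gamma}[z_k]\right)\Delta_k v_k} + \theta\abs{S_k(w_k-z_k,v_k)}+\abs{J_k^{\sigma,\rho}(w_k-z_k,v_k)} \lesssim \normk{w_k-z_k}\normk{v_k},
\end{equation*}
where the first term uses $\norm{F_{\gamma}[w_k]-F_{\gamma}[z_k]}_\Omega\lesssim\normk{w_k-z_k}$ and $\norm{\Delta_k v_k}_\Omega\lesssim\normk{v_k}$, and the remaining terms use the bilinearity of $S_k$ and $J_k^{\sigma,\rho}$.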
It then follows from the Browder--Minty theorem that there exists a unique solution $u_k\in V_k^s$ of~\eqref{eq:num_scheme} for each $k\in\mathbb{N}$. We refer the reader to~\cite{KaweckiSmears20} for a detailed discussion of the dependencies of the constants.
The strong monotonicity bound~\eqref{eq:strong_monotonicity}, the boundedness of the data, and the Lipschitz continuity~\eqref{eq:Ak_lipschitz} also imply the boundedness of the sequence of numerical solutions, i.e.\
\begin{equation}\label{eq:num_sol_bounded}
\sup_{k\in\mathbb{N}}\normk{u_k} < \infty.
\end{equation}
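For a brief sketch of how~\eqref{eq:num_sol_bounded} follows: taking $v_k=u_k$ in~\eqref{eq:num_scheme} gives $A_k(u_k;u_k)=0$, and bilinearity gives $S_k(0,u_k)=J_k^{\sigma,\rho}(0,u_k)=0$, so the strong monotonicity bound~\eqref{eq:strong_monotonicity} with $w_k=u_k$ and $v_k=0$ yields
\begin{equation*}
\frac{1}{C_{\mathrm{mon}}}\normk{u_k}^2 \leq -A_k(0;u_k) = -\int_\Omega F_{\gamma}[0]\,\Delta_k u_k \leq \norm{F_{\gamma}[0]}_\Omega\norm{\Delta_k u_k}_\Omega \lesssim \norm{F_{\gamma}[0]}_\Omega \normk{u_k},
\end{equation*}
where $\norm{F_{\gamma}[0]}_\Omega$ is finite and independent of $k$ by the continuity of the data on the compact set $\overline{\Omega}\times\mathscr{A}\times\mathscr{B}$.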
Furthermore, it follows from~\cite[Theorem~4.3]{KaweckiSmears20} that the numerical solution $u_k$ is a quasi-optimal approximation of $u$, i.e.\ up to a constant, the error attained by $u_k$ is equivalent to the best approximation error of $u$ from the space $V_k^s$.
\paragraph{Analysis of stabilization terms.}
We collect here two results that will be used later in the analysis.
First, we note that the bilinear form $S_k(\cdot,\cdot)$ defined in~\eqref{eq:B_def} constitutes a stabilization term, and is consistent with the original problem, see~\cite[Lemma~5]{SS13}.
We will also use the following theorem from~\cite[Theorem~5.3]{KaweckiSmears20}, which improves on~\cite{SS13} by providing a quantitative bound for possibly nonsmooth functions in $V_k^s$.
\begin{theorem}[\cite{KaweckiSmears20}]\label{thm:b_k_jump_bound}
The bilinear form $S_k(\cdot,\cdot)$ satisfies
\begin{equation}
\begin{aligned}
\abs{S_k(w_k,v_k)} \lesssim \absJ{w_k}\absJ{v_k} &&& \forall w_k,\,v_k\in V_k^s, \;\forall s\in\{0,1\}.
\end{aligned}
\end{equation}
\end{theorem}
When it comes to the analysis of asymptotic consistency of the numerical schemes, it is advantageous to write the face terms in the bilinear form $S_k(\cdot,\cdot)$ via the lifting operators defined in~Section~\ref{sec:fem_spaces_def}.
\begin{lemma}\label{lem:lifted_Bk_form}
For all $v_k,w_k\in V_k^s$, $s\in\{0,1\}$, there holds
\begin{equation}\label{eq:lifted_Bk_form}
\begin{aligned}
S_k(w_k,v_k) &= \int_\Omega \left[\bm{H}_k w_k {:} \bm{H}_k v_k - \Delta_k w_k \Delta_k v_k \right]
\\ & + \int_\Omega \left[ \Tr \bm{r}_k(\jump{\nabla w_k}) \Tr \bm{r}_k(\jump{\nabla v_k}) - \bm{r}_k(\jump{\nabla w_k}){:}\bm{r}_k(\jump{\nabla v_k}) \right].
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Using the identity~\eqref{eq:lifting_identity}, simple algebraic manipulations show that, for any $w_k$ and $v_k \inV_k^s$,
\begin{multline}\label{eq:lifted_Bk_form_1}
\int_\Omega \nabla^2 v_k : \bm{r}_k(\jump{\nabla w_k}) -\Delta v_k \Tr\bm{r}_k(\jump{\nabla w_k})\\ = \int_{\Fk^I} \left[\avg{\nabla^2 v_k}:(\jump{\nabla w_k}\otimes\bm{n}) - \avg{\Delta v_k}\jump{\nabla w_k \cdot \bm{n}}\right] + \int_{\Fk^B} \{\nabla^2 v_k\}:(\jump{ \nabla_T w_k}\otimes\bm{n})
\\ = \int_{\cF_k} \nabla_T\avg{\nabla v_k\cdot \bm{n}} \cdot \jump{\nabla_T w_k} -\int_{\Fk^I} \avg{\Delta_T v_k} \jump{\nabla w_k\cdot \bm{n}},
\end{multline}
where the second identity is obtained by cancelling terms exactly as in the proof of~\cite[Lemma~5]{SS13}. Note that it is possible to interchange $w_k$ and $v_k$ in~\eqref{eq:lifted_Bk_form_1}. The identity~\eqref{eq:lifted_Bk_form} is then obtained by expanding all terms in its right-hand side and simplifying with the help of~\eqref{eq:lifted_Bk_form_1}.
\end{proof}
Theorem~\ref{thm:b_k_jump_bound} will be used later in the proof of convergence of the adaptive methods.
\paragraph{Reliable and efficient a posteriori error estimator.}
For each $k\in\mathbb{N}$ and any $v_k\in V_k^s$, we define the element-wise error estimators $\eta_k(v_k,K)$ for each $K\in\cT_k$, and total error estimator $\eta_k(v_k)$, by
\begin{subequations}\label{eq:estimator_def}
\begin{align}
\left[\eta_k(v_k,K)\right]^2 &\coloneqq \int_K \abs{F_{\gamma}[v_k]}^2 + \sum_{F\in\Fk^I;F\subset \partial K} \int_F \delta_F h_k^{-1}\abs{\jump{\nabla v_k}}^2 +\sum_{F\in\cF_k; F\subset \partial K}\int_{F} \delta_F h_k^{-3} \abs{\jump{v_k}}^2 ,
\\ [\eta_k(v_k)]^2&\coloneqq\sum_{K\in\cT_k}[\eta_k(v_k,K)]^2,
\end{align}
\end{subequations}
where the weight $\delta_F=1/2$ if $F\in\cF_k^I$ and otherwise $\delta_F=1$ for $F\in\cF_k^B$.
The reliability and local efficiency of the above error estimators is shown in~\cite{KaweckiSmears20}, see also related results in~\cite{Kawecki19c,Bleschmidt19}.
In particular,~\cite[Theorem~4.2]{KaweckiSmears20} shows that there exists a constant $C_{\mathrm{rel}}>0$, independent of $k\in\mathbb{N}$, such that
\begin{equation}
\normk{u-v_k} \leq C_{\mathrm{rel}} \eta_k(v_k)\quad \forall v_k\in V_k^s,\;\forall s\in\{0,1\},\;\forall k\in\mathbb{N}.
\end{equation}
Note that the reliability bound indeed holds for all functions from the approximation space and not only the numerical solution $u_k\inV_k^s$; this results primarily from the fact that $u$ is a strong solution of the problem.
Furthermore, the estimators are locally efficient; in particular, there is a constant $C_{\mathrm{eff}}>0$ independent of $k$, such that
\begin{multline}
\frac{1}{C^2_{\mathrm{eff}}}[\eta_k(v_k;K)]^2 \\ \leq \int_K \abs{\nabla^2(u-v_k)}^2 + \sum_{\substack{F\in\Fk^I\\ F\subset \partial K}} \int_F \delta_F h_k^{-1}\abs{\jump{\nabla v_k}}^2 + \sum_{\substack{F\in\cF_k\\ F\subset \partial K}}\int_{F}\delta_F h_k^{-3} \abs{\jump{v_k}}^2,
\end{multline}
for all $v_k\in V_k^s$.
This implies the global efficiency bound
\begin{equation}\label{eq:global_efficiency}
\eta_k(v_k) \leq C_{\mathrm{eff}}\normk{u-v_k} \quad \forall v_k\in V_k^s,\;\forall s\in\{0,1\}.
\end{equation}
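Indeed, summing the local bounds over $K\in\cT_k$, each interior face is visited once from each of its two adjacent elements, which is accounted for by the weights $\delta_F$; moreover, since $u\in H^2(\Omega)\cap H^1_0(\Omega)$ has no jumps, the jump terms of $v_k$ coincide with those of $v_k-u$, so that
\begin{equation*}
[\eta_k(v_k)]^2 \leq C_{\mathrm{eff}}^2\left(\norm{\nabla^2(u-v_k)}_\Omega^2 + \absJ{u-v_k}^2\right) \leq C_{\mathrm{eff}}^2\normk{u-v_k}^2,
\end{equation*}
where $\normk{\cdot}$ and $\absJ{\cdot}$ are extended to $H^2(\Omega)\cap H^1_0(\Omega) + V_k^s$ by the same formulas as in~\eqref{eq:norm_def}.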
For further analysis of the dependencies of the constants $C_{\mathrm{rel}}$ and $C_{\mathrm{eff}}$ we refer the reader to~\cite{KaweckiSmears20}. Note that the error estimators do not feature any positive power weight of the mesh-size in the residual terms, which is an issue for the reduction property typically used in the analysis of convergence rates of adaptive algorithms.
\subsection{Adaptive algorithm and main result}\label{sec:main_result}
We now state precisely the adaptive algorithm. Consider a fixed choice of $s\in\{0,1\}$, with $s=0$ corresponding to the DG method, and $s=1$ corresponding to the $C^0$-IP method, and consider fixed integers $p$ and $q$ such that $p\geq 2$ and $q\geq p-2$.
Given an initial mesh $\mathcal{T}_1$, the algorithm produces the sequence of meshes $\{\cT_k\}_{k\in\mathbb{N}} $ and numerical solutions $u_k\in V_k^s$ by looping over the following steps for each $k\in\mathbb{N}$.
\begin{enumerate}
\item \emph{Solve.} Solve the discrete problem~\eqref{eq:num_scheme} to obtain the discrete solution $u_k\in V_k^s$.
\item \emph{Estimate.} Compute the estimators $\{\eta_k(u_k,K)\}_{K\in\cT_k}$ defined by~\eqref{eq:estimator_def}.
\item \emph{Mark.} Choose a subset of elements $\mathcal{M}_k\subset \cT_k$ marked for refinement, such that
\begin{equation}\label{eq:max_marking}
\max_{K\in\cT_k}\eta_k(u_k,K) = \max_{K\in \mathcal{M}_k}\eta_k(u_k,K).
\end{equation}
\item \emph{Refine.} Construct a conforming simplicial refinement $\mathcal{T}_{k+1}$ from $\cT_k$ such that every element of $\mathcal{M}_k$ is refined, i.e.\ $K \in \cT_k\setminus \mathcal{T}_{k+1}$ for all $K\in\mathcal{M}_k$.
\end{enumerate}
The marking condition~\eqref{eq:max_marking} is rather general and can be combined with additional conditions on the marked set such as those used in maximum and bulk-chasing strategies.
Since~\eqref{eq:max_marking} is sufficient for the proof of convergence of the adaptive method, we do not specify further conditions on the marking strategy and instead allow for any marking strategy that satisfies~\eqref{eq:max_marking}.
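For instance (a simple illustration), the maximum marking strategy with parameter $\kappa\in(0,1]$, which sets
\begin{equation*}
\mathcal{M}_k \coloneqq \left\{K\in\cT_k \colon \eta_k(u_k,K)\geq \kappa \max_{K^\prime\in\cT_k}\eta_k(u_k,K^\prime)\right\},
\end{equation*}
satisfies~\eqref{eq:max_marking}, as does any bulk-chasing (D\"orfler) strategy for which the marked set is additionally required to contain an element attaining the maximum of the elementwise estimators.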
Recall also that the refinement routine is assumed to satisfy the conditions of \emph{quasi-regular subdivisions} of~\cite{MorinSiebertVeeser08}.
\paragraph{Main result.}
The main result of this work states that the sequence of numerical approximations generated by the adaptive algorithm converges to the solution of \eqref{eq:isaacs} and that the estimators vanish in the limit.
\begin{theorem}\label{thm:main}
The sequence of numerical solutions $\{u_k\}_{k\in\mathbb{N}}$ converges to the solution $u$ of~\eqref{eq:isaacs} with
\begin{equation}\label{eq:main}
\begin{aligned}
\lim_{k\rightarrow\infty}\normk{u-u_k} =0, &&& \lim_{k\rightarrow\infty} \eta_k(u_k)= 0.
\end{aligned}
\end{equation}
\end{theorem}
Theorem~\ref{thm:main} establishes plain convergence of the numerical solutions to the exact solution, without requiring any additional regularity assumptions on the problem.
\section{Analysis of the limit spaces}\label{sec:limspace}
In this section we introduce appropriate limit spaces for the sequence of the finite element spaces $\{V_k^s\}_{k\in\mathbb{N}}$.
We give here an intrinsic approach to the construction of the limit spaces, which is designed to overcome some key difficulties in the analysis of weak limits of bounded sequences of finite element functions. In particular, we construct the limit spaces in terms of some novel function spaces that are of independent interest for adaptive nonconforming methods for more general problems.
\subsection{Sets of never-refined elements and faces}\label{sec:refinement_sets}
We start by considering some elementary properties of the sets of elements and faces that are never-refined, following~e.g.~\cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18,MorinSiebertVeeser08}.
Let $\cT^+$ be the set of elements of the sequence of meshes $\{\cT_k\}_{k\in\mathbb{N}}$ that are never refined once created, i.e.\
\[
\cT^+\coloneqq \bigcup_{m\geq 0}\bigcap_{k\geq m}\cT_k,
\]
and let $ \Omega^+ \coloneqq \bigcup_{K\in \cT^+}K$ be its associated subdomain.
Let $\Omega^-\coloneqq\overline{\Omega}\setminus\Omega^+$ denote its complement, which represents the region of $\overline{\Omega}$ where the mesh-sizes become vanishingly small in the limit, as shown by Lemma~\ref{lem:hjvanishes} below.
For $k\in \mathbb{N}$, let $\Tk^+$ denote the set of never-refined elements in $\cT_k$, and let $\cT_k^{-}$ denote its complement in $\cT_k$, given by
\[
\begin{aligned}
\Tk^+ \coloneqq \cT_k\cap \cT^+, &&& \cT_k^{-} \coloneqq \cT_k\setminus \Tk^+.
\end{aligned}
\]
For integers $k\geq 1$ and $j\geq 0$, we also define the set $\Tk^{j+} \coloneqq \{K\in\cT_k:N_k^j(K)\subset\cT_k^+\}$ and its complement $\cT_k^{j-} \coloneqq \cT_k\setminus\cT_k^{j+}$, where we recall that $N_k^j(K)$ denotes the set of all elements in $\cT_k$ that are at most $j$-th neighbours of $K$.
Recalling that $N_k^0(K)=K$, we have $\cT_k^{0+}=\Tk^+$ and $\cT_k^{0-}=\cT_k^{-}$.
For the corresponding domains, we define $\Omega_k^{j+} \coloneqq \bigcup_{K\in\cT_k^{j+}} K$ and $\Omega_k^{j-} \coloneqq \bigcup_{K\in\cT_k^{j-}} K$. It follows that the intersection $\Omega_k^{j+}\cap\Omega_k^{j-}$ is a set of Lebesgue measure zero.
Similar to $\Tk^+$ and $\cT_k^{-}$, we use the shorthand notation $\Omega_k^+\coloneqq\Omega_k^{0+}$, and $\Omega_k^-\coloneqq\Omega_k^{0-}$.
Furthermore, it is also easy to see that the sets $\Tk^+$ and $\Tk^{j+} $ are ascending with respect to $k$, i.e.\ $\cT_k^{j+} \subset \mathcal{T}_{k+1}^{j+}$ for all $k\in \mathbb{N}$ and all $j\in\mathbb{N}_0$, whereas the $\Tk^{j+}$ are descending with respect to $j$, i.e.\ $\mathcal{T}_k^{j+}\subset \mathcal{T}_k^{(j-1)+}$ for all $j\in \mathbb{N}$.
The following two Lemmas are from \cite{DominincusGaspozKreuzer19}, see also \cite{MorinSiebertVeeser08}. The first Lemma states that neighbours of never-refined elements are also eventually never-refined, and the second Lemma shows that the mesh-size functions converge uniformly to zero on the refinement sets $\Omega_k^{j-}$ as $k\rightarrow \infty$, for any fixed $j\in\mathbb{N}_0$.
\begin{lemma}[\cite{DominincusGaspozKreuzer19,MorinSiebertVeeser08}]\label{lem:eventuallyinT+}
For every $K\in \cT^+$ there exists an integer $m=m(K) \in \mathbb{N}$ such that $K\in\cT_k^+$ for all $k\geq m$ and $N_k(K)=N_m(K) \subset \cT^+$ for all $k\geq m$.
\end{lemma}
\begin{lemma}[\cite{DominincusGaspozKreuzer19,MorinSiebertVeeser08}]\label{lem:hjvanishes}
For any $j\in \mathbb{N}_0$, we have $\norm{h_k\chi_{\Omega_k^{j-}}}_{L^\infty(\Omega)}\rightarrow 0$ as $k\rightarrow \infty$, where $\chi_{\Omega_k^{j-}}$ denotes the characteristic function of $\Omega_k^{j-}$. Moreover, $|\Omega_k^{j-}\setminus\Omega^{-}|=|\Omega^+\setminus\Omega_k^{j+}|\to0$ as $k\to\infty$.
\end{lemma}
For each $K\in \cT^+$, let $N_+(K)$ denote the neighbourhood of $K$ in $\cT^+$, i.e.\ $N_+(K)=\{K^\prime \in \cT^+\colon K^\prime\cap K \neq \emptyset\}$. Lemma~\ref{lem:eventuallyinT+} implies that for each $K\in \cT^+$, there exists $m=m(K)\in\mathbb{N}$ such that $N_+(K)= N_k(K)$ for all $k\geq m$.
\paragraph{Never-refined faces.}
Let $\cF^+$ denote the set of all faces of elements from $\cT^+$, i.e. $F\in \cF^+$ if and only if there exists $K\in \cT^+$ such that $F$ is a face of $K$.
The set $\cF^+$ is at most a countably infinite subcollection of $\bigcup_{k\in\mathbb{N}} \cF_k$. We also consider $\cF^{I+}$ and $\cF^{B+}$, the sets of interior and boundary faces of $\cF^+$, respectively.
For each $k\in\mathbb{N}$, let $\cF_k^+ \coloneqq \cF_k\cap \cF^+$ denote the set of never-refined faces in $\cF_k$.
It holds trivially that $\cF^+=\bigcup_{k\in\mathbb{N}} \cF_k^+$ and that the sets $\cF_k^+$ are ascending, with $\cF_k^+\subset \mathcal{F}_{k+1}^+$ for all $k\in \mathbb{N}$.
We also consider the set $\cF^{\dagger}_k$, $k\in\mathbb{N}$, of faces whose adjacent elements all belong to $\Tk^+$, defined by
\begin{equation}\label{eq:Fks_def}
\cF^{\dagger}_k \coloneqq \{F\in\cF^+ \colon \exists \{K,K^\prime\}\subset \cT_k^+, \text{ s.t. } F = K\cap K'\text{ or } F=K\cap\partial\Omega\}.
\end{equation}
Additionally, let $\cF^{I\dagger}_k\coloneqq \cF^{I+} \cap \cF^{\dagger}_k$ denote the subset of interior faces of $\cF^{\dagger}_k$.
The definition implies that $\cF^{\dagger}_k \subset \cF_k^+$ and $\cF^{I\dagger}_k \subset \Fk^{I+}$, however in general $\cF^{\dagger}_k\neq \cF_k^+$ since it is possible to refine pairs of neighbouring elements without refining their common face.
Note also that $\cF^{\dagger}_k \subset \mathcal{F}_{k+1}^{\dagger}$ for all $k\in\mathbb{N}$ and thus $\{\cF^{\dagger}_k\}_{k\in\mathbb{N}}$ also forms an ascending sequence of sets with respect to $k$.
Moreover, since neighbours of elements in $\cT^+$ are eventually also in $\cT^+$, as shown by Lemma~\ref{lem:eventuallyinT+}, and since the meshes $\cT_k$ are conforming, we also have $\cF^+ = \bigcup_{k\in\mathbb{N}}\cF^{\dagger}_k$.
We also consider the skeletons formed by sets of never refined faces. In particular, let $\cS^+$ denote the skeleton of $\cF^+$, defined by $\cS^+ = \bigcup_{F\in\cF^+}F$.
Additionally, let $\cS_k^+ \coloneqq \cS_k\cap \cS^+$.
It follows that $\cS^+$ is a measurable set with respect to the $(\dim-1)$-dimensional Hausdorff measure with $\mathcal{H}^{\dim-1}(\cS^+)\in[0,\infty]$, i.e.\ $\mathcal{H}^{\dim-1}(\cS^+)$ is not necessarily finite.
The next Lemma shows that the set of never-refined faces of any particular mesh is fully determined after at most finitely many refinements.
\begin{lemma}\label{lem:face_refinements}
For each $k\in \mathbb{N}$ there exists $M=M(k)$ such that
\begin{equation}\label{eq:Fkp_equivalence}
\cF_k^+ = \mathcal{F}_k\cap \mathcal{F}_m \quad \forall m\geq M.
\end{equation}
\end{lemma}
\begin{proof}
The inclusion $\cF_k^+ \subset \mathcal{F}_k\cap \mathcal{F}_m$ for all $m$ large enough is clear and follows easily from the definitions.
The converse inclusion $\cF_k\cap \mathcal{F}_m \subset \cF_k^+$ for all $m$ large enough is shown by contradiction. Since $\cF_k$ is finite, if the claim were false, there would exist $F\in (\mathcal{F}_{m_j}\cap \cF_k)\setminus \cF_k^+$ for a sequence of indices $m_j\rightarrow \infty $ as $j\rightarrow \infty$. Then, by definition, there exists a sequence of elements $K_j\in \mathcal{T}_{m_j}$ such that $F$ is a face of $K_j$ for each $j\in \mathbb{N}$. The shape-regularity of the meshes implies that $h_{m_j}|_{K_j^\circ}=\abs{K_j}^{1/\dim}\gtrsim \diam(F)$ for all $j\in\mathbb{N}$ and hence $\epsilon\coloneqq \inf_{j\in\mathbb{N}} h_{m_j}|_{K_j^\circ}$ is strictly positive. Lemma~\ref{lem:hjvanishes} then implies that there exists $J$ such that $h_{m_j}|_{K^\circ} <\epsilon$ for all $K\in \mathcal{T}_{m_j}^{-}$ and all $j\geq J$, which implies that $K_j\in\mathcal{T}_{m_j}^+$ for all $j\geq J$ and thus $F\in \cF^+$. This implies that $F\in \cF_k^+$, thereby giving a contradiction and completing the proof.
\end{proof}
\paragraph{Mesh-size function on never-refined elements and faces.}
Recalling the notation of Section~\ref{sec:notation}, for each $K\in \cT^+$, let $h_+|_{K^\circ} \coloneqq h_K $ and for each face $F\in\cF^+$, let $h_+|_{F} \coloneqq h_F $.
Thus $h_+$ is defined on $\Omega^+$ up to sets of $\mathcal{H}^{\dim-1}$-measure zero. The function $h_+$ is $\mathcal{H}^{\dim-1}$-a.e.\ positive on $\Omega^+$, although it is generally not uniformly bounded away from zero.
Due to Lemma~\ref{lem:eventuallyinT+}, it follows that for each $K\in\cT^+$, there exists an $m=m(K)\in\mathbb{N}$ such that $h_+|_{K}=h_k|_{K}$ for all $k\ge m$; see also \cite[Lemma~4.3]{MorinSiebertVeeser08}, which implies that $\norm{h_k-h_+}_{L^\infty(\Omega^+)}\rightarrow 0$ as $k\rightarrow \infty$.
\subsection{First-order spaces, Poincar\'e and trace inequalities.}
The construction of the limit spaces for the sequence of finite element spaces is broken down into several steps.
In a first step, we introduce particular subspaces of functions of (special) bounded variation with possible jumps only on the set of never-refined faces of the meshes, and that have sufficiently integrable gradients and jumps.
We then show that these spaces are Hilbert spaces, and that they enjoy a Poincar\'e inequality and $L^2$-trace inequalities on all elements from all of the meshes.
Recall the notation of Section~\ref{sec:notation}, in particular for a function $v\in BV(\Omega)$, the gradient $\nabla v$ denotes the density of the absolutely continuous part of the distributional derivative $Dv$.
\begin{definition}\label{def:H1limitspaces}
Let $H^1_D(\Omega;\cT^+)$ denote the space of functions $v \in L^2(\Omega)$ such that the zero-extension of $v$ to $\mathbb{R}^\dim$, also denoted by $v$, belongs to $BV(\mathbb{R}^\dim)$, such that
\begin{equation}\label{eq:distderiv_H100Tp}
\langle D v,\bm{\phi} \rangle_{\mathbb{R}^\dim} \coloneqq - \int_{\mathbb{R}^\dim} v \Div \bm{\phi} = \int_{\Omega} \nabla v \cdot \bm{\phi} - \int_{\cF^+} \jump{v} (\bm{\phi}{\cdot}\bm{n}) \quad \forall \bm{\phi}\in C^\infty_0\big(\R^\dim;\R^{\dim}\big),
\end{equation}
and such that
\begin{equation}\label{eq:H100Tp_norm}
\norm{v}_{H^1_D(\Omega;\cT^+)}^2\coloneqq\int_{\Omega} \left[\abs{\nabla v}^2 +\abs{v}^2\right] + \int_{\cF^+ }h_+^{-1}\abs{\jump{v}}^2 <\infty.
\end{equation}
Let $H^1(\Omega;\cT^+)$ denote the space of functions $v\in L^2(\Omega)\cap BV(\Omega)$ such that
\begin{equation}\label{eq:distderiv_H1Tp}
\langle D v,\bm{\phi} \rangle_{\Omega} \coloneqq - \int_\Omega v \Div \bm{\phi} = \int_{\Omega} \nabla v \cdot \bm{\phi} - \int_{\cF^{I+}} \jump{v} (\bm{\phi}{\cdot}\bm{n}) \quad \forall \bm{\phi}\in C^\infty_0\big(\Om;\R^\dim\big),
\end{equation}
and such that
\begin{equation}\label{eq:H1Tp_norm}
\norm{v}_{H^1(\Omega;\cT^+)}^2 \coloneqq\int_{\Omega} \left[\abs{\nabla v}^2 +\abs{v}^2\right] + \int_{\cF^{I+}}h_+^{-1}\abs{\jump{v}}^2 <\infty.
\end{equation}
\end{definition}
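As a simple example of a genuinely discontinuous function in $H^1_D(\Omega;\cT^+)$ (given here only for illustration), take any $K\in\cT^+$ and let $v=\chi_K$ be its characteristic function. Then $\nabla v=0$ a.e.\ in $\Omega$, the zero-extension of $v$ belongs to $BV(\mathbb{R}^\dim)$ with jumps of unit size supported on the finitely many faces of $K$, all of which belong to $\cF^+$, and a direct computation verifies~\eqref{eq:distderiv_H100Tp} together with
\begin{equation*}
\norm{\chi_K}_{H^1_D(\Omega;\cT^+)}^2 = \abs{K} + \sum_{F\subset\partial K} h_F^{-1}\,\mathcal{H}^{\dim-1}(F)<\infty,
\end{equation*}
so that $\chi_K\in H^1_D(\Omega;\cT^+)$.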
\begin{remark}[Piecewise $H^1$-regularity over $\cT^+$]\label{rem:notation}
For any $K\in \cT^+$, by simply considering test functions $\bm{\phi} \in C^\infty_0(K;\mathbb{R}^\dim)$ in \eqref{eq:distderiv_H1Tp}, it is seen that any function $v\in H^1(\Omega;\cT^+)$ is $H^1$-regular over $K$, i.e.\ $v|_K \in H^1(K)$, and that the weak derivative $\nabla (v|_K)$ coincides with $(\nabla v)|_K$ the restriction of $\nabla v$ to $K$.
\end{remark}
\begin{remark}
The space $H^1_D(\Omega;\cT^+)$ consists of functions with a weakly imposed Dirichlet boundary condition on $\partial \Omega$ through a Nitsche-type penalty term.
The definition of the space $H^1_D(\Omega;\cT^+)$ is motivated by the characterization of $H^1_0(\Omega)$ as the space of measurable functions on $\Omega$ whose zero-extension to $\mathbb{R}^\dim$ belongs to $H^1(\mathbb{R}^\dim)$, see \cite[Theorem~5.29]{AdamsFournier03}.
In particular, it follows that $H^1_0(\Omega)$ is a closed subspace of $H^1_D(\Omega;\cT^+)$.
In general, functions in $H^1_D(\Omega;\cT^+)$ do not have vanishing interior traces on $\partial\Omega$, which is why we avoid the notation $H^1_0(\Omega;\cT^+)$.
\end{remark}
We now show that the spaces in Definition~\ref{def:H1limitspaces} are continuously embedded into the corresponding spaces of functions of bounded variation.
\begin{lemma}\label{lem:BV_embedding_H1TP}
The space $H^1(\Omega;\cT^+)$ is continuously embedded in $BV(\Omega)$. The space $H^1_D(\Omega;\cT^+)$ is continuously embedded in $BV(\mathbb{R}^\dim)$, where functions in $H^1_D(\Omega;\cT^+)$ are considered to be extended by zero to $\mathbb{R}^\dim$.
\end{lemma}
\begin{proof}
Consider first the case of $H^1(\Omega;\cT^+)$ and let $v\in H^1(\Omega;\cT^+)$ be arbitrary.
Recall that $\pair{Dv,\bm{\phi}}_\Omega$ is given by \eqref{eq:distderiv_H1Tp} for any $\bm{\phi}\in C^\infty_0\big(\Om;\R^\dim\big)$.
Thus $\abs{\pair{Dv,\bm{\phi}}_{\Omega}}\leq \int_\Omega \abs{\nabla v}+\int_{\cF^{I+}} \abs{\jump{v}}$ for any $\bm{\phi}\in C^\infty_0\big(\Om;\R^\dim\big)$ such that $\norm{\bm{\phi}}_{C(\overline{\Omega},\mathbb{R}^\dim)}\leq 1$.
Since $\Omega$ is bounded, we get $\norm{\nabla v}_{L^1(\Omega)}\lesssim \norm{v}_{H^1(\Omega;\cT^+)}$.
For the term involving jumps, the Cauchy--Schwarz inequality gives $\int_{\cF^{I+}} \abs{\jump{v}}\leq \left(\int_{\cF^{I+}}h_+^{-1}\abs{\jump{v}}^2\right)^{\frac{1}{2}} \left(\int_{\cF^{I+}}h_+\right)^\frac{1}{2}$.
To bound $\int_{\cF^{I+}}h_+$, consider any face $F\in\cF^+$, and let $K\in\cT^+$ be an element that contains $F$.
Then, by shape-regularity of the meshes, we have $\int_F h_+ = \left(\mathcal{H}^{\dim-1}(F)\right)^{\frac{\dim}{\dim-1}} \lesssim \abs{K}$, and thus, after a counting argument, we get $\int_{\cF^{I+}}h_+\lesssim \abs{\Omega^+}\leq \abs{\Omega}<\infty$ since $\Omega$ is bounded.
These bounds then imply that $\abs{Dv}(\Omega)\lesssim \norm{v}_{H^1(\Omega;\cT^+)}$ and thus $H^1(\Omega;\cT^+)$ is continuously embedded in $BV(\Omega)$.
The proof of the corresponding claim for $H^1_D(\Omega;\cT^+)$ is similar to the one given above, where we only need to additionally use the fact that functions in $H^1_D(\Omega;\cT^+)$, once extended by zero to $\mathbb{R}^\dim$, remain compactly supported.
\end{proof}
\begin{theorem}\label{thm:completeness_H1tp}
The space $H^1(\Omega;\cT^+)$ is a Hilbert space with the inner-product
\begin{equation*}
\begin{aligned}
\pair{w,v}_{H^1(\Omega;\cT^+)}\coloneqq\int_\Omega \left[\nabla w{\cdot}\nabla v + wv \right]+\int_{\cF^{I+}} h_+^{-1}\jump{w}\jump{v} &&&\forall w,\, v\in H^1(\Omega;\cT^+).
\end{aligned}
\end{equation*}
The space $H^1_D(\Omega;\cT^+)$ is a Hilbert space with the inner-product
\begin{equation*}
\begin{aligned}
\pair{w,v}_{H^1_D(\Omega;\cT^+)}\coloneqq \int_\Omega \left[\nabla w{\cdot}\nabla v + wv \right]+\int_{\cF^+}h_+^{-1}\jump{w}\jump{v} &&&\forall w,\,v \in H^1_D(\Omega;\cT^+).
\end{aligned}
\end{equation*}
\end{theorem}
\begin{proof}
It is clear that the spaces $H^1(\Omega;\cT^+)$ and $H^1_D(\Omega;\cT^+)$ are inner-product spaces when equipped with their respective inner-products, so it is enough to show that they are complete.
We give the proof in the case of $H^1_D(\Omega;\cT^+)$ as it is similar for $H^1(\Omega;\cT^+)$. Consider a Cauchy sequence $\{v_k\}_{k\in\mathbb{N}} \subset H^1_D(\Omega;\cT^+)$. Then, the continuous embedding of $H^1_D(\Omega;\cT^+)$ into $BV(\mathbb{R}^\dim)$ implies that $\{v_k\}_{k\in\mathbb{N}}$ is also Cauchy in $BV(\mathbb{R}^\dim)$, and hence, by completeness of $BV(\mathbb{R}^\dim)$, there exists a $v\in BV(\mathbb{R}^\dim)$ such that $v_k\rightarrow v$ in $BV(\mathbb{R}^\dim)$. Since convergence in $BV(\mathbb{R}^\dim)$ implies convergence in $L^1(\mathbb{R}^\dim)$, and the $v_k$ form a Cauchy sequence in $L^2(\mathbb{R}^\dim)$, by uniqueness of limits we then deduce that $v\in L^2(\mathbb{R}^\dim)$ and that $v_k \rightarrow v$ in $L^2(\mathbb{R}^\dim)$. In particular, $v=0$ a.e.\ on $\mathbb{R}^\dim\setminus\Omega$.
Furthermore, continuity of the trace operator from $BV(K)$ to $L^1(\partial K)$ for each $K\in\cT^+$ implies that $\jump{v_k}_F \rightarrow \jump{v}_F$ in $L^1(F)$ for each $F\in\cF^+$; since the functions $\jump{v_k}_F$ also form a Cauchy sequence in $L^2(F)$, we deduce similarly that $\jump{v}_F\in L^2(F)$ for all $F\in\cF^+$.
Additionally, using a diagonal argument over the countable set $\cF^+$, we may extract a subsequence $\{v_{k_j} \}_{j\in\mathbb{N}}$ such that $\jump{v_{k_j}}\rightarrow \jump{v}$ pointwise $\mathcal{H}^{\dim-1}$-a.e.\ on $\cS^+$, recalling that $\cS^+\coloneqq\bigcup_{F\in\cF^+} F$.
Therefore, Fatou's Lemma implies that $\int_{\cF^+}h_+^{-1}\abs{\jump{v}}^2=\int_{\cS^+}h_+^{-1}\abs{\jump{v}}^2<\infty$ and that $\int_{\cF^+} h_{+}^{-1} \abs{\jump{v-v_k}}^2=\int_{\cS^+} h_{+}^{-1} \abs{\jump{v-v_k}}^2 \leq \liminf_{j\rightarrow\infty}\int_{\cS^+}h_+^{-1}\abs{\jump{v_{k_j}-v_k}}^2 \rightarrow 0$ as $k\rightarrow \infty$.
Then, using the fact that $\nabla v_k$ is a Cauchy sequence in $L^2(\Om;\R^\dim)$, it is easy to show that the distributional derivative $Dv$ is also of the form in~\eqref{eq:distderiv_H100Tp} and that $\nabla v_k \rightarrow \nabla v$ in $L^2(\Om;\R^\dim)$. This implies that $v\in H^1_D(\Omega;\cT^+)$ and that $v_k\rightarrow v $ as $k\rightarrow\infty$.
\end{proof}
The following Theorem shows that functions in $H^1(\Omega;\cT^+)$ and $H^1_D(\Omega;\cT^+)$ can be approximated by functions from the same space that have at most finitely many nonvanishing jumps.
\begin{theorem}\label{thm:H1tp_finite_jumps}
For every $v\in H^1(\Omega;\cT^+)$, respectively $v\in H^1_D(\Omega;\cT^+)$, there exists a sequence of functions $v_k \in H^1(\Omega;\cT^+)$ for all $k\in\mathbb{N}$, respectively $v_k\in H^1_D(\Omega;\cT^+)$ for all $k\in\mathbb{N}$, such that $\lim_{k\rightarrow \infty }\norm{v-v_k}_{H^1(\Omega;\cT^+)} =0$, respectively $\lim_{k\rightarrow \infty }\norm{v-v_k}_{H^1_D(\Omega;\cT^+)} =0$ and such that, for each $k\in \mathbb{N}$, there are only finitely many faces $F\in\cF^{I+}$, respectively $\cF^+$, such that $\jump{v_k}_F\neq 0$.
Moreover, $v_k=v$ and $\nabla v_k=\nabla v$ a.e.\ on $\Omega^+_k\cup\Omega^-$ for each $k\in\mathbb{N}$, and $\int_{\cT^+}h_+^{-2}\abs{v-v_k}^2\rightarrow 0$ as $k\rightarrow \infty$.
\end{theorem}
We postpone the proof of Theorem~\ref{thm:H1tp_finite_jumps} until after the proof of Theorem~\ref{thm:finite_approx} below, owing to the similar nature of the two results and the similarities in their proofs.
\begin{corollary}\label{cor:H1_omm_restriction}
For every $v\in H^1_D(\Omega;\cT^+)$, there exists a $w\in H^1_0(\Omega)$ such that $v=w$ and $\nabla v= \nabla w$ a.e.\ on~$\Omega^-$.
\end{corollary}
\begin{proof}
If $\Omega^-$ is empty then there is nothing to show, so we need only consider the case where $\Omega^-$ is nonempty.
Choose $k\in \mathbb{N}$ and let $v_k \in H^1_D(\Omega;\cT^+)$ be given by Theorem~\ref{thm:H1tp_finite_jumps}.
We infer from $\cF^+=\bigcup_{\ell\in\mathbb{N}}\cF^{\dagger}_{\ell}$ with ascending sets $\cF^{\dagger}_{\ell}$, cf.\ Section~\ref{sec:refinement_sets}, that there exists an $m=m(k)$ such that $v_k$ has nonzero jumps only on $\mathcal{F}^{\dagger}_m$, i.e.\ $\jump{v_k}_F=0$ for every face $F\in \cF^+\setminus \mathcal{F}^{\dagger}_m$.
Since any element in $\cT^+$ is by definition closed, it follows that $\Omega^+_m$ is a finite union of closed sets, and moreover it follows from Lemma~\ref{lem:eventuallyinT+} that $\Omega^+_m$ is disjoint from $\overline{\Omega^-}$. Therefore, $\Omega^+_m$ and $\overline{\Omega^-}$ are two disjoint compact sets in $\mathbb{R}^\dim$, so there exists an $\eta \in C^\infty_0(\mathbb{R}^\dim)$ such that $\eta|_{\Omega^-}=1$ and $\eta|_{\Omega^+_m}=0$.
Then, define $w(x) \coloneqq \eta(x) v_k(x) $ for all $x\in \mathbb{R}^\dim$, where we recall that $v_k$ is extended by zero outside of $\Omega$.
We see that $w=v$ a.e.\ on $\Omega^-$ immediately from the facts that $v_k=v$ on $\Omega^-$ and $\eta=1$ on $\Omega^-$.
It remains only to show that $w\in H^1_0(\Omega)$.
Note that $v_k=0$ on $\mathbb{R}^\dim\setminus \overline{\Omega}$ by definition, and therefore $w=0$ on $\mathbb{R}^\dim\setminus \overline{\Omega}$.
Then, for any test function $\bm{\phi} \in C^\infty_0\big(\R^\dim;\R^{\dim}\big)$, we have
\begin{equation}\label{eq:H1_omm_restriction_1}
\langle Dw , \bm{\phi}\rangle_{\mathbb{R}^\dim} = \int_{\mathbb{R}^\dim} \left[ -v_k \Div (\eta \bm{\phi}) + v_k \nabla \eta \cdot \bm{\phi}\right] = \langle D v_k, \eta\bm{\phi}\rangle_{\mathbb{R}^\dim}+ \int_{\mathbb{R}^\dim}v_k \nabla \eta \cdot \bm{\phi}.
\end{equation}
Since $v_k\in H^1_D(\Omega;\cT^+)$ has a distributional derivative satisfying \eqref{eq:distderiv_H100Tp}, since $\eta$ vanishes identically on every face $F\in \mathcal{F}^{\dagger}_m \subset \Omega^+_m$, and since $\jump{v_k}_F=0$ for every face $F\in \cF^+\setminus \mathcal{F}^{\dagger}_m$, we see from~\eqref{eq:H1_omm_restriction_1} that
\begin{equation}
\langle D w,\bm{\phi}\rangle_{\mathbb{R}^\dim} = \int_{\Omega} (\eta \nabla v_k + v_k \nabla \eta){\cdot} \bm{\phi} - \int_{\cF^+}\jump{v_k}(\eta \bm{\phi}\cdot \bm{n})=
\int_{\Omega} (\eta \nabla v_k + v_k\nabla \eta)\cdot \bm{\phi}
\end{equation}
for all $\bm{\phi} \in C^\infty_0\big(\R^\dim;\R^{\dim}\big)$, which implies that $w\in H^1(\mathbb{R}^\dim)$ and $\nabla w = \eta \nabla v_k + v_k\nabla \eta$. Since $w=0$ outside $\overline{\Omega}$, we conclude that $w\in H^1_0(\Omega)$ by \cite[Theorem~5.29]{AdamsFournier03}, and since $\eta=1$ on $\Omega^-$, we find that $\nabla w = \nabla v_k = \nabla v$ a.e.\ on $\Omega^-$, which completes the proof.
\end{proof}
We now turn to some key properties of the space $H^1(\Omega;\cT^+)$, namely that it enjoys a Poincar\'e inequality and $L^2$-trace inequalities. For an element $K\in \cT_k$ for some $k\in\mathbb{N}$, let $\mathcal{F}^+_{\circ}(K)$ denote the set of faces in $\cF^{I+}$ that are contained in $K$ but do not lie entirely on the boundary of $K$, i.e.
\begin{equation}\label{eq:fp_circ}
\mathcal{F}^+_{\circ}(K) \coloneqq \{ F\in\cF^{I+}\colon F\subset K,\; F\not\subset\partial K \}.
\end{equation}
Note that by definition no boundary face of $\cF^+$ can intersect the interior of any element of any mesh.
The following Theorem shows that functions in $H^1(\Omega;\cT^+)$ enjoy a Poincar\'e inequality over elements of the meshes $\cT_k$, with optimal scaling with respect to element sizes. Recall that $h_K=\abs{K}^{\frac{1}{\dim}}\eqsim \diam K$ owing to shape-regularity of the sequence of meshes.
\begin{theorem}[Poincar\'e inequality]\label{thm:poincare}
For every $k\in\mathbb{N}$ and any $K\in \cT_k$, we have
\begin{equation}\label{eq:poincare}
h_K^{-2}\int_K \abs{v-\overline{v_K}}^2 \lesssim \int_K \abs{\nabla v }^2 + \int_{\cF^+_{\circ}(K)} h_+^{-1} \abs{\jump{v}}^2 \quad \forall v\in H^1(\Omega;\cT^+),
\end{equation}
where $\overline{v_K}$ denotes the mean-value of $v$ over $K$ and $\cF^+_{\circ}(K)$ is defined in~\eqref{eq:fp_circ}.
\end{theorem}
\begin{proof}
Let $v\in H^1(\Omega;\cT^+)$ be arbitrary.
Since $v\in L^2(\Omega)$ it is clear that the restriction of the distributional derivative $Dv$ to $K$ is in $H^{-1}(K;\mathbb{R}^\dim)$.
We start by showing that
\begin{equation}\label{eq:distderiv_H10}
\norm{D v}_{H^{-1}(K;\mathbb{R}^\dim)} \lesssim h_K \left( \int_K \abs{\nabla v }^2 + \int_{\cF^+_{\circ}(K)} h_+^{-1} \abs{\jump{v}}^2\right)^{\frac{1}{2}},
\end{equation}
where $\norm{D v}_{H^{-1}(K;\mathbb{R}^\dim)}\coloneqq \sup \{\abs{ \pair{D v,\bm{\phi}}_{K} }\colon \bm{\phi} \in H^1_0(K;\mathbb{R}^\dim),\ \norm{\nabla \bm{\phi}}_K =1\}$.
By density of smooth compactly supported functions in $H^1_0(K;\mathbb{R}^\dim)$, it is enough to show~\eqref{eq:distderiv_H10} for $\bm{\phi}\in C^\infty_0(K;\mathbb{R}^\dim)$.
Consider now an arbitrary $\bm{\phi}\in C^\infty_0(K;\mathbb{R}^\dim)$, and extend it by zero to $\Omega$. Then $\pair{Dv,\bm{\phi}}_K=\pair{Dv,\bm{\phi}}_\Omega$ is given by \eqref{eq:distderiv_H1Tp}. Since $\bm{\phi}$ is compactly supported in $K$ and vanishes on faces in $\cF^+\setminus \cF^+_\circ(K)$, the Cauchy--Schwarz inequality gives
\begin{equation}
\abs{\pair{Dv,\bm{\phi}}_K}\leq \norm{\nabla v}_K \norm{\bm{\phi}}_K+ \left(\int_{\cF^+_\circ(K)}h_+^{-1}\abs{\jump{v}}^2\right)^{\frac{1}{2}} \left(\int_{\cF^+_\circ(K)} h_+ \norm{\bm{\phi}}_F^2\right)^{\frac{1}{2}}.
\end{equation}
Then, the multiplicative trace inequality, applied to the parent elements from $\cT^+$ of each face $F\in \cF^+_\circ(K)$, and the Cauchy--Schwarz inequality imply that
\begin{equation}
\begin{split}
\int_{\cF^+_\circ(K)} h_+ \norm{\bm{\phi}}_F^2 &\lesssim \sum_{K^\prime\in\cT^+(K)}\mkern-18mu \left[ h_{K^\prime} \norm{\nabla \bm{\phi}}_{K^\prime}\norm{\bm{\phi}}_{K^\prime } + \norm{\bm{\phi}}_{K^\prime}^2\right]
\\ &\leq h_K \norm{\nabla \bm{\phi}}_K \norm{\bm{\phi}}_K + \norm{\bm{\phi}}_K^2 \lesssim h_K^2 \norm{\nabla \bm{\phi}}^2_K,
\end{split}
\end{equation}
where $\cT^+(K)\coloneqq \{K^\prime\in \cT^+\colon K^\prime \subset K\}$ is the set of elements of $\cT^+$ contained in $K$, and where we have used the Poincar\'e--Friedrichs inequality $\norm{\bm{\phi}}_K\lesssim h_K \norm{\nabla \bm{\phi}}_K$ for $\bm{\phi} \in C^\infty_0(K;\mathbb{R}^\dim)$.
This implies that $\pair{Dv,\bm{\phi}}_K$ is bounded by the right-hand side of~\eqref{eq:distderiv_H10} for all $\bm{\phi} \in C^\infty_0(K;\mathbb{R}^\dim)$ such that $\norm{\nabla \bm{\phi}}_K=1$, and thus $Dv$ extends to a distribution in $H^{-1}(K;\mathbb{R}^\dim)$ satisfying~\eqref{eq:distderiv_H10}.
Next, we use the fact that for any $v \in L^2(K)$, there exists a vector field $\bm{\phi} \in H^1_0(K;\mathbb{R}^\dim) $ such that $\Div \bm{\phi} = v - \overline{v_K}$ in $K$ and such that $\norm{\nabla\bm{\phi}}_{K} \lesssim \norm{v-\overline{v_K}}_{K}$, see~\cite{Bogovskii79}.
In particular we may take the constant to depend only on the shape-regularity of the meshes and on the spatial dimension, since $\bm{\phi}$ can be obtained by mapping back to a reference element through the Piola transformation, see e.g. the textbook~\cite[p.~59]{BoffiBrezziFortin2013}.
Then, noting that $\int_K \Div \bm{\phi} \overline{v_K} =0$, we obtain
\[
\norm{v-\overline{v_K}}_K^2= \int_K v \Div \bm{\phi} = - \langle Dv, \bm{\phi} \rangle_K \lesssim \norm{Dv}_{H^{-1}(K;\mathbb{R}^\dim)} \norm{\nabla\bm{\phi}}_{K },
\]
and then we use~\eqref{eq:distderiv_H10} and $\norm{\nabla\bm{\phi}}_{K}\lesssim \norm{v-\overline{v_K}}_{K}$ to obtain~\eqref{eq:poincare}.
\end{proof}
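As a consistency check of Theorem~\ref{thm:poincare}, note that if $v|_K\in H^1(K)$, so that $\jump{v}_F=0$ for every $F\in\cF^+_\circ(K)$, then \eqref{eq:poincare} reduces to the classical scaled Poincar\'e inequality
\begin{equation*}
h_K^{-2}\int_K\abs{v-\overline{v_K}}^2\lesssim \int_K\abs{\nabla v}^2;
\end{equation*}
the jump terms in \eqref{eq:poincare} thus quantify the failure of $H^1$-conformity of $v$ inside $K$.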
For each $K\in\cT_k$, recall that $\tau_{\p K} \colon BV(K)\rightarrow L^1(\partial K)$ denotes the trace operator. We now show that functions in $H^1(\Omega;\cT^+)$ have traces in $L^2$ over all element boundaries. Recall again that $h_K=\abs{K}^{\frac{1}{\dim}} \eqsim \diam K$ owing to the shape-regularity of the meshes.
\begin{theorem}[Trace inequality on element boundaries]\label{thm:trace}
For every $k\in\mathbb{N}$ and every $K\in\cT_k$, the trace operator $\tau_{\p K} $ is a bounded operator from $H^1(\Omega;\cT^+)$ to $L^2(\partial K)$ and satisfies
\begin{equation}\label{eq:trace_inequality}
h_K^{-1}\int_{\partial K} \abs{\tau_{\p K} v}^2 \lesssim \int_K \left[\abs{\nabla v}^2 + h_K^{-2}\abs{v}^2\right] + \int_{\mathcal{F}^+_{\circ}(K)}h_+^{-1} \abs{\jump{v}}^2 \quad\forall v \in H^1(\Omega;\cT^+),
\end{equation}
where $\mathcal{F}^+_{\circ}(K)$ is defined by~\eqref{eq:fp_circ}.
\end{theorem}
\begin{proof}
We start by showing \eqref{eq:trace_inequality} for functions $v\in H^1(\Omega;\cT^+)$ that have non-vanishing jumps on only finitely many faces of $\cF^+$, and we will extend the result to all of $H^1(\Omega;\cT^+)$ with the density result of Theorem~\ref{thm:H1tp_finite_jumps}.
First, suppose that there is an $\ell\in\mathbb{N}$ such that $\jump{v}_F=0$ for all $F\in\cF^{I+}\setminus \mathcal{F}_{\ell}^{+}$. It is then easy to see that $v|_{K^\prime}\in H^1(K^\prime)$ for any $K^\prime \in \mathcal{T}_{\ell}$, because the interior of any element $K^\prime$ of $\mathcal{T}_{\ell}$ is disjoint from all faces in $\mathcal{F}_\ell^+$.
Now, if $\ell< k$, then there is nothing to show: by nestedness of the meshes, $K$ is contained in some element of $\mathcal{T}_{\ell}$, so $v|_K\in H^1(K)$ and $\jump{v}$ vanishes on every face of $\cF^+_\circ(K)$, and the inequality \eqref{eq:trace_inequality} is then simply the scaled trace inequality for functions in $H^1(K)$.
If $\ell\geq k$, then let $\mathcal{T}_{\ell}(K)\coloneqq\{K^\prime\in \mathcal{T}_\ell\colon K^\prime \subset K\}$ denote the set of children of $K$ in the mesh $\mathcal{T}_{\ell}$, and note that $\mathcal{T}_{\ell}(K)$ forms a conforming shape-regular triangulation of $K$ by nestedness of the meshes. Moreover, the function $v$ is piecewise $H^1$-regular over $\mathcal{T}_{\ell}(K)$.
Then, inequality~\eqref{eq:trace_inequality} holds owing to~\cite[Lemma~3.1]{FengKarakashian01}, which proves the trace inequality~\eqref{eq:trace_inequality} for piecewise $H^1$-regular functions with respect to finite subdivisions of an element.
To generalise the result to all functions in $H^1(\Omega;\cT^+)$, consider now an arbitrary $v\in H^1(\Omega;\cT^+)$ and let $\{v_{\ell}\}_{\ell\in\mathbb{N}}\subset H^1(\Omega;\cT^+)$ denote the sequence given by Theorem~\ref{thm:H1tp_finite_jumps} (indexed now by $\ell$).
The continuous embedding of $H^1(\Omega;\cT^+)$ into $BV(\Omega)$, given by Lemma~\ref{lem:BV_embedding_H1TP}, shows that $v_{\ell}\rightarrow v$ in $BV(\Omega)$ as $\ell\rightarrow\infty$, so the traces $\tau_{\p K} v_{\ell} \rightarrow \tau_{\p K} v$ in $L^1(\partial K)$ as $\ell\rightarrow\infty$.
But then, after extracting a subsequence (without change of notation), we can assume that $\tau_{\p K} v_{\ell}\rightarrow \tau_{\p K} v$ pointwise $\mathcal{H}^{\dim-1}$-a.e.\ on $\partial K$ as $\ell\rightarrow \infty$. Fatou's lemma then allows us to conclude that $\tau_{\p K} v \in L^2(\partial K)$ and that~\eqref{eq:trace_inequality} holds for general $v\in H^1(\Omega;\cT^+)$.
\end{proof}
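In particular, if $v|_K\in H^1(K)$, so that the jump terms over $\cF^+_\circ(K)$ vanish, then \eqref{eq:trace_inequality} reduces to the standard scaled trace inequality
\begin{equation*}
h_K^{-1}\int_{\partial K}\abs{\tau_{\p K} v}^2\lesssim \int_K\left[\abs{\nabla v}^2+h_K^{-2}\abs{v}^2\right].
\end{equation*}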
\subsection{Second-order space, symmetry of Hessians and approximation by quadratic polynomials.}
We now turn towards the second key step in constructing a suitable limit space for the sequence of finite element spaces. In Definition~\ref{def:HD_def} below, we introduce a space of functions with suitably regular gradients and Hessians and sufficiently integrable jumps in values and gradients over never-refined faces.
Recall that we consider here the notion of Hessian defined in~\eqref{eq:Hessian_notation} for functions of bounded variation with gradients of bounded variation.
\begin{definition}\label{def:HD_def}
Let $H^2_D(\Om;\Tp)$ denote the space of functions $v \in H^1_D(\Omega;\cT^+)$ such that $\nabla_{x_i} v \in H^1(\Omega;\cT^+)$ for all $i=1,\dots,\dim$, where $\nabla v=(\nabla_{x_1} v,\dots,\nabla_{x_\dim} v)$, and such that
\begin{equation}\label{eq:HD_norm_def}
\norm{v}_{H^2_D(\Om;\Tp)}^2 \coloneqq \int_\Omega \left[\abs{\nabla^2 v}^2 + \abs{\nabla v}^2 + \abs{v}^2 \right] + \int_{\cF^{I+}} h_{+}^{-1}\abs{\jump{\nabla v}}^2 + \int_{\cF^+} h_{+}^{-3}\abs{\jump{v}}^2 <\infty.
\end{equation}
\end{definition}
Note that each component $\nabla_{x_i} v$ has a distributional derivative of the form~\eqref{eq:distderiv_H1Tp} if and only if
\begin{equation}\label{eq:distderiv_h2}
\pair{D(\nabla v),\bm{\varphi}}_{\Omega} \coloneqq - \int_\Omega \nabla v\cdot \Div \bm{\varphi} = \int_\Omega \nabla^2 v : \bm{\varphi} - \int_{\cF^{I+}} \jump{\nabla v}\cdot(\bm{\varphi} \bm{n}),
\end{equation}
for all $\bm{\varphi} \in C^\infty_0\big(\Om;\R^{\dim\times\dim}\big)$, where the divergence $\Div\bm{\varphi}$ is defined by $(\Div \bm{\varphi})_i \coloneqq \sum_{j=1}^\dim \nabla_{x_j} \bm{\varphi}_{ij}$ for all $i\in\{1,\dots,\dim\}$.
Therefore, a function $v\colon \Omega\rightarrow \mathbb{R}$ belongs to $H^2_D(\Om;\Tp)$ if and only if $v\in H^1_D(\Omega;\cT^+)$, $D(\nabla v)$ is of the form given in~\eqref{eq:distderiv_h2}, and $\norm{v}_{H^2_D(\Om;\Tp)}<\infty$.
The space $H^2_D(\Om;\Tp)$ is clearly non-empty and contains $H^2(\Omega)\cap H^1_0(\Omega)$ as a closed subspace.
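Indeed, for any $v\in H^2(\Omega)\cap H^1_0(\Omega)$, all jumps of $v$ over $\cF^+$ and of $\nabla v$ over $\cF^{I+}$ vanish, the identities \eqref{eq:distderiv_H100Tp} and \eqref{eq:distderiv_h2} hold with zero jump contributions, and \eqref{eq:HD_norm_def} reduces to
\begin{equation*}
\norm{v}_{H^2_D(\Om;\Tp)}^2=\int_\Omega\left[\abs{\nabla^2 v}^2+\abs{\nabla v}^2+\abs{v}^2\right]=\norm{v}_{H^2(\Omega)}^2.
\end{equation*}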
\begin{theorem}[Completeness]\label{thm:completeness_H2}
The space $H^2_D(\Om;\Tp)$ is a Hilbert space under the inner-product
\begin{multline}\label{eq:HD_innerprod_def}
\pair{w,v}_{H^2_D(\Om;\Tp)}\coloneqq \int_\Omega\left[\nabla^2 w:\nabla^2 v+\nabla w\cdot \nabla v+wv\right]\\ +\int_{\cF^{I+}}h_+^{-1}\jump{\nabla w}\cdot\jump{\nabla v}+\int_{\cF^+}h_+^{-3}\jump{w}\jump{v},
\end{multline}
for all $w$, $v\in H^2_D(\Om;\Tp)$.
\end{theorem}
\begin{proof}
It is clear that $H^2_D(\Om;\Tp)$ is an inner-product space when equipped with the inner-product defined above, so it is enough to show that it is complete. Considering a Cauchy sequence $\{v_k\}_{k\in\mathbb{N}}$, it follows from Theorem~\ref{thm:completeness_H1tp} that there exists a $v\in H^1_D(\Omega;\cT^+)$ such that $v_k\rightarrow v$ in $H^1_D(\Omega;\cT^+)$; moreover Theorem~\ref{thm:completeness_H1tp} also shows that $\nabla_{x_i} v\in H^1(\Omega;\cT^+)$ with $\nabla_{x_i} v_k \rightarrow \nabla_{x_i} v$ in $H^1(\Omega;\cT^+)$ for each $i\in\{1,\dots,\dim\}$. This implies in particular that $\nabla^2 v_k \rightarrow \nabla^2 v $ in $L^2(\Om;\R^{\dim\times\dim})$.
Then, using a pointwise a.e.\ convergent subsequence for the jumps over faces, similar to the one in the proof of Theorem~\ref{thm:completeness_H1tp}, we find also that $\int_{\cF^+}h_+^{-3}\abs{\jump{v}}^2<\infty$ and $\int_{\cF^+}h_+^{-3}\abs{\jump{v-v_k}}^2\rightarrow 0$ as $k\rightarrow \infty$. This proves that $v\inH^2_D(\Om;\Tp)$ and that $v_k\rightarrow v$ in~$H^2_D(\Om;\Tp)$ as $k\rightarrow \infty$. Therefore $H^2_D(\Om;\Tp)$ is complete.
\end{proof}
\begin{remark}[Piecewise $H^2$-regularity on $\cT^+$]\label{rem:pw_H2_regularity}
As explained already in Remark~\ref{rem:notation}, a function $v\in H^2_D(\Om;\Tp)\subset H^1_D(\Omega;\cT^+)$ is piecewise $H^1$-regular over $\cT^+$, i.e.\ $v|_K\in H^1(K)$ for all $K\in\cT^+$, and $(\nabla v)|_K $ is equal to the weak gradient of $v|_K$ over $K$.
By definition, $\nabla_{x_i} v \in H^1(\Omega;\cT^+)$ so likewise $\nabla_{x_i} v|_K \in H^1(K)$ for all $i=1,\dots,\dim$, and hence $v|_K \in H^2(K)$ for all $K\in\cT^+$ and $\nabla^2 v|_K$ equals the weak Hessian of $v|_K$ over $K$, for each $K\in\cT^+$.
\end{remark}
\begin{remark}[Symmetry of the Hessians]\label{rem:symmetry}
The space $H^2_D(\Om;\Tp)$ is continuously embedded in the space $SBV^2(\Omega)$, which is defined as the space of functions $v\in SBV(\Omega)$ such that $\nabla v \in SBV(\Omega;\mathbb{R}^\dim)$ \cite{AmbrosioFuscoPallara00,FonsecaLeoniParoni05}.
There generally exist functions $v \in SBV^2(\Omega)$ such that $\nabla^2 v \coloneqq \nabla (\nabla v)$ fails to be symmetric, see~\cite{FonsecaLeoniParoni05}.
It is thus not \emph{a priori} obvious that $\nabla^2 v$ should be symmetric for a general function $v\in H^2_D(\Om;\Tp)$, yet the symmetry of the Hessian is essential for the approximation theory required to construct a suitable limit space for the sequence of finite element spaces.
One of the principal contributions of our work below is a proof that $\nabla^2 v$ is indeed symmetric a.e.\ on $\Omega$ for all $v\in H^2_D(\Om;\Tp)$, see Corollary~\ref{cor:H2_omm_restriction} below.
We immediately note that symmetry of $\nabla^2 v$ over the subset $\Omega^+$ is a consequence of piecewise $H^2$-regularity over $\cT^+$ as explained in Remark~\ref{rem:pw_H2_regularity}, so the difficulty is to show the symmetry of $\nabla^2 v $ on $\Omega^-$.
\end{remark}
\begin{figure}
\begin{center}
\includegraphics[height=2.5cm]{CTelement2d.pdf}
\hspace{1cm}
\includegraphics[height=2.5cm]{CTelement3d.pdf}
\caption{Degrees of freedom of the cubic Hsieh--Clough--Tocher (HCT) macro-element in two (left) and three (right) space dimensions on a reference element. The basis functions are $C^1$-regular and piecewise cubic with respect to subdivisions of the element into subsimplices \cite{CloughTocher66,DouglasDupontPercellScott79,WorseyFarin87}.
Solid dots represent degrees of freedom associated to point values, the circles represent gradient values, and the arrows represent directional derivative values.}
\label{fig:HCT_elements}
\end{center}
\end{figure}
The next Theorem shows that the subspace of functions in $H^2_D(\Om;\Tp)$ that have nonvanishing jumps in the values and gradients on at most finitely many faces of $\cF^+$ forms a dense subspace of $H^2_D(\Om;\Tp)$.
This result is the key to proving the symmetry of the Hessians of functions in $H^2_D(\Om;\Tp)$.
\begin{theorem}\label{thm:finite_approx}
For each $v\in H^2_D(\Om;\Tp)$, there exists a sequence of functions $v_k\in H^2_D(\Om;\Tp)$, $k\in\mathbb{N}$, such that
\begin{equation}\label{eq:finite_approx_1}
\lim_{k\rightarrow \infty }\norm{v-v_k}_{H^2_D(\Om;\Tp)} =0,
\end{equation}
and such that, for each $k\in\mathbb{N}$, there exist only finitely many faces $F\in\cF^+$, respectively $F\in\cF^{I+}$, for which $\jump{v_k}_F\neq 0$, respectively $\jump{\nabla v_k}_F\neq 0$.
Moreover, $v_k=v$, $\nabla v_k=\nabla v$ and $\nabla^2 v_k=\nabla^2 v$ a.e.\ on $\Omega^+_k\cup\Omega^-$ for each $k\in\mathbb{N}$, and additionally
\begin{equation}\label{eq:finite_approx2}
\lim_{k\rightarrow \infty}\int_{\cT^+} \left[h_+^{-4}\abs{v-v_k}^2 + h_{+}^{-2} \abs{\nabla(v-v_k)}^2 \right] =0.
\end{equation}
\end{theorem}
\begin{proof}
The proof is composed of four key steps.
\emph{Step~1. Construction of $v_k$.}
For each $k\in\mathbb{N}$, the function $v_k$ is defined as follows. First, let $v_k=v$ on $\Omega^-$. Then, for each $K \in \cT^+$, if $K\in \cT_k^+$ then let $v_k|_K=v|_K$.
Otherwise, if $K\in \cT^+\setminus \cT_k^+$, we define $v_k|_K$ in terms of a quasi-interpolant into the cubic HCT space, by first taking element-wise $L^2$-orthogonal projections in the neighbourhood of $K$ and then applying a local averaging of the degrees of freedom of the projections.
We shall define $v_k$ in this manner with respect to the possibly countably infinite set of elements in $\cT^+$, yet we note that the construction is entirely local to each element and its neighbours. As explained above, the neighbourhood of any element is the same as that from a finite mesh, and thus the standard techniques of analysis on finite meshes extend to the present setting.
The analysis of local averaging operators is rather standard by now, see e.g.~\cite{GeorgoulisHoustonVirtanen11,HoustonSchotzauWihler07,KarakashianPascal03,NeilanWu19}.
For simplicity, we give the details only for $\dim=2$; the case $\dim=3$ is handled in a similar manner using the three-dimensional cubic HCT element depicted in Figure~\ref{fig:HCT_elements}, and we outline the main ingredients in Remark~\ref{rem:3dHCT} below.
Let $\mathrm{HCT}(K)$ denote the cubic HCT macro-element space over $K$, which consists of all $C^1(\overline{K})$-regular functions over $K$ that are piecewise cubic with respect to the barycentric refinement of $K$, see~\cite{CloughTocher66,DouglasDupontPercellScott79} and the textbook~\cite{Ciarlet02} for a full definition.
The degrees of freedom of $\mathrm{HCT}(K)$ are depicted in~Figure~\ref{fig:HCT_elements} above.
In particular $\mathrm{HCT}(K)$ contains all cubic polynomials over $K$.
For each $K^\prime\in\cT^+$, let $\pi_2 v|_{K^\prime} \in \mathbb{P}_2$ denote the $L^2$-orthogonal projection of $v$ over $K^\prime$ into the space of quadratic polynomials. Thus $\pi_2 v$ is a piecewise quadratic function over $\cT^+$.
Then, for each $K\in\cT^+\setminus\Tk^+$, we define $v_k|_K\in \mathrm{HCT}(K)$ by local averaging of the degrees of freedom of $\pi_2 v$ as follows.
Let $\mathcal{V}_K$ denote the set of vertices of $K$ and let $\mathcal{M}_K$ denote the set of mid-points of the faces of $\partial K$. We call $\mathcal{V}_K\cup\mathcal{M}_K$ the set of nodes.
For a node $z \in \mathcal{V}_K\cup\mathcal{M}_K$, let $N_+(z)\coloneqq \{K^\prime\in\cT^+\colon z\in K^\prime\}$ denote the set of elements of $\cT^+$ that contain $z$, and let $\abs{N_+(z)}$ denote its cardinality.
Note that $N_+(z)\subset N_+(K)$ for any $z\in\mathcal{V}_K\cup \mathcal{M}_K$, where we recall that $N_+(K) $ is the set of neighbouring elements of $K$ in $\cT^+$.
Let $\mathcal{V}_K^I$ and $\mathcal{M}_K^I$ denote the set of interior vertices and interior face-midpoints, respectively.
We separate boundary vertices into two categories: if a vertex $z\in\mathcal{V}_K$ is on the boundary, and if all boundary faces containing $z$ are coplanar, then we say that $z$ is a flat vertex and we write $z\in\mathcal{V}_K^{\flat}$, otherwise we say that $z$ is a sharp vertex and we write $z\in\mathcal{V}_K^{\sharp}$.
We then define $v_k|_K$ for all $K\in\cT^+\setminus \Tk^+$ in terms of the degrees of freedom by
\begin{equation}\label{eq:CT_dofs}
\begin{aligned}
(v_k|_K)(z) &\coloneqq
\begin{cases}
\frac{1}{\abs{N_+(z)}}\sum_{K^\prime\in N_+(z)} (\pi_2 v|_{K^\prime})(z) &\hspace{2cm}\text{if } z\in \mathcal{V}_K^I, \\
0 &\hspace{2cm}\text{if } z \in \mathcal{V}_K^{\flat}\cup\mathcal{V}_K^{\sharp},
\end{cases}
\\
(\nabla v_k|_K)(z) &\coloneqq \begin{cases}
\frac{1}{\abs{N_+(z)}}\sum_{K^\prime\in N_+(z)} \nabla(\pi_2 v|_{K^\prime})(z) &\text{if }z\in \mathcal{V}_K^{I},\\
\frac{1}{\abs{N_+(z)}}\sum_{K^\prime\in N_+(z)} (\nabla(\pi_2 v|_{K^\prime})(z) \cdot \bm{n}_{\partial\Omega} ) \bm{n}_{\partial\Omega} &\text{if } z\in\mathcal{V}_K^{\flat}, \\
0 &\text{if }z\in\mathcal{V}_K^{\sharp},
\end{cases}
\\
(\nabla v_k|_K)(z)\cdot \bm{n}_F &\coloneqq
\begin{cases}
\frac{1}{\abs{N_+(z)}}\sum_{K^\prime\in N_+(z)} (\nabla(\pi_2 v|_{K^\prime})(z)\cdot \bm{n}_F) &\hspace{0.75cm}\text{if }z\in\mathcal{M}_K^I,\\
\nabla (\pi_2 v|_{K})(z)\cdot \bm{n}_F &\hspace{0.75cm}\text{if }z\in \mathcal{M}_K\setminus\mathcal{M}_K^I,
\end{cases}
\end{aligned}
\end{equation}
where, in the notation above, $\bm{n}_F$ is the chosen unit normal for the face $F$ containing the edge-midpoint $z\in\mathcal{M}_K$, and $\bm{n}_{\partial\Omega}$ denotes the unit outward normal to $\Omega$ at $z$ if $z\in\mathcal{V}_K^{\flat}$.
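We remark that the averaging in \eqref{eq:CT_dofs} is consistent in the following sense: if $\pi_2 v$ happens to be continuous at an interior node $z$, then all summands in \eqref{eq:CT_dofs} coincide and the average reproduces the common value, e.g.
\begin{equation*}
(v_k|_K)(z)=\frac{1}{\abs{N_+(z)}}\sum_{K^\prime\in N_+(z)}(\pi_2 v|_{K^\prime})(z)=(\pi_2 v)(z) \quad \text{for } z\in\mathcal{V}_K^I,
\end{equation*}
so the averaging only modifies the elementwise projections where they are discontinuous.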
Still considering $K\in\cT^+\setminus\Tk^+$, it follows that $v_k|_K \in C^1(\overline{K})\cap H^2(K)$, and that, for any boundary face $F \subset \partial K$, $\jump{v_k}_F=v_k|_F=0$ owing to the vanishing values and vanishing first tangential derivatives at both boundary vertices on $F$. Moreover, if $K^\prime \in N_+(K)$ is a neighbouring element such that $K^\prime \in \cT^+\setminus\Tk^+$, then we note that all degrees of freedom of $v_k|_{K^\prime}$ and $v_k|_{K}$ that belong to their common face $F$ coincide by definition, which implies that $\jump{v_k}_F =0 $ and $\jump{\nabla v_k}_F=0$.
Furthermore, following standard techniques involving inverse inequalities, see e.g.\ \cite{KarakashianPascal03}, we obtain the bound
\begin{equation}\label{eq:CT_quasi_bound_jump}
\sum_{m=0}^2 \int_K h_+^{2m-4}\abs{\nabla^m( \pi_2v - v_k ) }^2 \lesssim \int_{\cF^{I+}_K}h_+^{-1}\abs{\jump{\nabla \pi_2 v}}^2 + \int_{\cF^+_K} h_+^{-3}\abs{\jump{\pi_2 v}}^2 ,
\end{equation}
where $\cF^+_K=\{F\in\cF^+\colon F\cap K\neq \emptyset\}$ and $\cF^{I+}_K\coloneqq \cF^+_K\cap \cF^{I+}$ are sets of faces adjacent to $K$.
Note that~\eqref{eq:CT_quasi_bound_jump} corresponds to the generalisation of \cite[Lemma~3]{NeilanWu19} to fully discontinuous polynomials, and the only additional step required to obtain \eqref{eq:CT_quasi_bound_jump} beyond what is shown already in \cite{NeilanWu19} is the application of inverse inequalities on boundary faces to handle non-vanishing tangential derivatives of $\pi_2 v$.
Recalling that $v\in H^2_D(\Om;\Tp)$ is $H^2$-regular on each element of $\cT^+$, we infer from the application of the triangle inequality that
\begin{multline}\label{eq:CT_quasi_interpolant_bound}
\sum_{m=0}^2\int_K h_+^{2m-4} \abs{\nabla^m(v-v_k)}^2
\lesssim \sum_{m=0}^2\int_K h_+^{2m-4} \left[\abs{\nabla^m(v-\pi_2v)}^2+\abs{\nabla^m(\pi_2 v - v_k)}^2\right]\\
\lesssim \sum_{m=0}^2\int_K h_+^{2m-4} \abs{\nabla^m(v-\pi_2v)}^2 + \int_{\cF^{I+}_K}h_+^{-1}\abs{\jump{\nabla \pi_2 v}}^2 + \int_{\cF^+_K} h_+^{-3}\abs{\jump{\pi_2 v}}^2
\\ \lesssim \int_{N_+(K)} \abs{\nabla^2 v}^2 + \int_{\cF^{I+}_K} h_+^{-1}\abs{\jump{\nabla v}}^2 + \int_{\cF^+_K} h_+^{-3}\abs{\jump{v}}^2,
\end{multline}
where in passing from the first to the second line we have applied~\eqref{eq:CT_quasi_bound_jump}, and in passing from the second to the third line we have applied a further triangle inequality $\pi_2 v = \pi_2 v -v + v$ along with trace inequalities and the Bramble--Hilbert Lemma applied to $v-\pi_2 v$.
\emph{Step~2. Proof that $v_k$ has at most finitely many nonvanishing jumps.}
We now show that $v_k$ has nonvanishing jumps on at most finitely many faces of $\cF^+$ and $\nabla v_k$ has nonvanishing jumps on at most finitely many interior faces in $\cF^{I+}$.
It is clear that $\jump{v_k}_F=0$ for all boundary faces $F\in \cF^+ \setminus\mathcal{F}_k^+$, because any boundary face $ F\in \cF^+\setminus\cF_k^+ $ must be a face of an element $K\in \cT^+\setminus \cT_k^+$.
So there are only finitely many boundary faces where $\jump{v_k}$ does not vanish. To study interior faces, Lemma~\ref{lem:eventuallyinT+} implies that there is an $\ell=\ell(k)\in\mathbb{N}$ with $\ell\geq k$ such that $\cT_k^+\subset \mathcal{T}_{\ell}^{1+}$.
Then, consider a face $F\in \cF^{I+} \setminus \mathcal{F}_{\ell}^{\dagger}$, recalling the notation in~\eqref{eq:Fks_def}, and consider the elements $K$, $K^\prime \in \cT^+$ forming $F$, i.e.\ $F=K\cap K^\prime$.
If either of $K$ or $K^\prime$ is in $\mathcal{T}_k^+\subset \mathcal{T}^{1+}_\ell$ then both must be in $\mathcal{T}_{\ell}^+$ and thus $F$ would have to be a face of $\mathcal{F}_\ell^{\dagger}$ by~\eqref{eq:Fks_def}, which would be a contradiction.
Therefore we have both $K$, $K^\prime \in \cT^+\setminus\mathcal{T}_k^+$.
Then the definition of $v_k$ on $K$ and $K^\prime$ above implies that the degrees of freedom of $v_k$ coincide on $F$, so $\jump{v_k}_F=0$ and $\jump{\nabla v_k}_F=0$ for all $F\in \cF^{I+}\setminus \mathcal{F}_{\ell}^{\dagger}$.
Since there are at most only finitely many faces in $\mathcal{F}_{\ell}^{\dagger}$ we conclude that $\jump{v_k}=0$ and $\jump{\nabla v_k}=0$ except for at most finitely many faces of $\cF^+$ and $\cF^{I+}$, respectively.
\emph{Step~3. Proof of \eqref{eq:finite_approx2} and of convergence of jumps.} We now consider the convergence of the $v_k$ to $v$ over $\Omega^+$.
Recall that $v=v_k$ on $\Omega_k^{+}\cup \Omega^-$ by definition.
Furthermore, if $K\in\cT^+\setminus \cT_k^+$ then $N_+(K) \subset \cT^+\setminus\cT_k^{1+}$ because if $K$ has a neighbour in $\cT_k^{1+}$ then $K$ itself must be in $\cT_k^+$.
Therefore, it follows from~\eqref{eq:CT_quasi_interpolant_bound} that
\begin{multline}\label{eq:finite_approx_volume_terms}
\sum_{m=0}^{2}\int_{\cT^+} h_+^{2m-4} \abs{\nabla^{m}(v-v_k)}^2 = \sum_{m=0}^2 \int_{\cT^+\setminus\cT_k^+} h_+^{2m-4} \abs{\nabla^m (v-v_k)}^2
\\ \lesssim \int_{\cT^+\setminus \cT_k^{1+}} \abs{\nabla^2 v}^2 + \int_{\cF^{I+}\setminus\cF_k^{1\dagger}} h_+^{-1}\abs{\jump{\nabla v}}^2 + \int_{\cF^+\setminus \cF_k^{1\dagger}} h_+^{-3}\abs{\jump{v}}^2 ,
\end{multline}
where $\cF_k^{1\dagger} $ denotes the set of all faces whose parent elements are in $\cT_k^{1+}$.
Since $\cF^+ = \bigcup_{k\in \mathbb{N}}\cF_k^{1\dagger}$ and since $\cT^+ = \bigcup_{k\in\mathbb{N}} \cT_k^{1+}$ as a consequence of Lemma~\ref{lem:eventuallyinT+}, we see that the right-hand side in~\eqref{eq:finite_approx_volume_terms} tends to zero as $k\rightarrow\infty $ as it is the tail of a convergent series. In particular, this proves \eqref{eq:finite_approx2}.
We now prove that
\begin{equation}\label{eq:finite_approx_jump_convergence}
\lim_{k\rightarrow \infty}\left(\int_{\cF^{I+}}h_+^{-1}\abs{\jump{\nabla(v-v_k)}}^2 + \int_{\cF^+}h_+^{-3}\abs{\jump{v-v_k}}^2\right) =0.
\end{equation}
Recalling that $v_k=v$ on $\cT_k^+$, we see that $\jump{v-v_k}_F=0$ for all $F\in \cF^{\dagger}_k$.
Moreover, if $F\in \cF^+\setminus \cF^{\dagger}_k$, then $F$ must be a face of at least one element of $\cT^+\setminus \Tk^+$. Also, if $F=K\cap K^\prime$ for some $K\in \cT^+\setminus \Tk^+$ and some $K^\prime \in \Tk^+$, then the trace contribution to the jump from $K^\prime$ must vanish.
Therefore, after a counting argument, we can apply the trace inequality, which is applicable since $(v-v_k)|_K\in H^2(K)$ for all $K\in\cT^+$, together with the bound~\eqref{eq:CT_quasi_interpolant_bound} to obtain
\[
\begin{aligned}
\int_{\cF^+}h_+^{-3}\abs{\jump{v-v_k}}^2 &= \int_{\cF^+ \setminus \cF^{\dagger}_k} h_+^{-3} \abs{\jump{v-v_k}}^2 \lesssim \sum_{K\in\cT^+\setminus \cT_k^+} \int_{\partial K} h_+^{-3} \abs{v-v_k}^2 \\
&\lesssim \int_{\cT^+\setminus \cT_k^+ }\left[h_+^{-2}\abs{\nabla (v-v_k)}^2 + h_+^{-4}\abs{v-v_k}^2\right] \rightarrow 0 \quad\text{as }k\rightarrow \infty,
\end{aligned}
\]
where convergence follows from~\eqref{eq:finite_approx2} which was already shown above. A similar argument restricted to interior faces can be applied to the jumps of gradients, thus yielding~\eqref{eq:finite_approx_jump_convergence}.
\emph{Step~4. Proof of $v_k\in H^2_D(\Om;\Tp)$ and of \eqref{eq:finite_approx_1}.}
Since the piecewise gradient and Hessian coincide with the classical gradient and Hessian on each $K\in\cT^+$, the bound~\eqref{eq:CT_quasi_interpolant_bound} and a counting argument imply that $\int_{\cT^+}\left[\abs{v_k}^2+\abs{\Npw{v_k}}^2+\abs{\Dpw{v_k}}^2\right]\lesssim \norm{v}_{H^2_D(\Om;\Tp)}^2<\infty$. Furthermore, we get $\int_{\cF^{I+}}h_+^{-1}\abs{\jump{\Npw{v_k}}}^2+\int_{\cF^+}h_+^{-3}\abs{\jump{v_k}}^2 < \infty$ since $\jump{v_k}=0$ and $\jump{\Npw{v_k}}=0$ except for at most finitely many faces.
After extending $v_k$ by zero to $\mathbb{R}^\dim$, the distributional derivative of $v_k$ satisfies $
\pair{ Dv_k,\bm{\phi}}_{\mathbb{R}^\dim} = \langle Dv,\bm{\phi} \rangle_{\mathbb{R}^\dim} + \langle D(v_k-v),\bm{\phi} \rangle_{\mathbb{R}^\dim}$ for all $\bm{\phi}\in C^\infty_0\big(\R^\dim;\R^{\dim}\big)$.
Note that $v-v_k$ is nonvanishing only on $\cT^+\setminus \Tk^+\subset \cT^+$ and $v$ and $v_k$ are both in $H^2(K)$ for each $K\in\cT^+$.
For each $\ell\in\mathbb{N}$, let $\mathcal{F}^{\star}_\ell$ denote the set of faces of all elements in $\mathcal{T}_{\ell}^+$ that are not in $\cF^{\dagger}_{\ell}$; note that any element of $\mathcal{T}_{\ell}^{+}$ containing a face in $\mathcal{F}^{\star}_\ell$ is necessarily in $\mathcal{T}_{\ell}^{+}\setminus \mathcal{T}_{\ell}^{1+}$.
For shorthand, for each $F\in \mathcal{F}^{\star}_{\ell}$, let $\tau^{\ell}_F$ denote the trace operator from the side of $\Omega_{\ell}^+$, and note that $\tau^{\ell}_F=\tau_F^{\pm}$ depending on the orientation of $\bm{n}_F$.
Then, using elementwise integration by parts, we find that
\begin{equation}\label{eq:finite_approx_4}
\begin{split}
&\langle D(v_k-v),\bm{\phi} \rangle_{\mathbb{R}^\dim} = - \int_{\Omega^+}(v_k-v)\Div \bm{\phi} = - \lim_{\ell\rightarrow \infty} \int_{\Omega^+_{\ell}} (v_k-v)\Div \bm{\phi}
\\ &= \lim_{\ell\rightarrow \infty} \left(\int_{\Omega^+_{\ell}} \nabla(v_k-v) \cdot \bm{\phi} - \int_{\cF^{\dagger}_{\ell}} \jump{v_k-v}(\bm{\phi}\cdot \bm{n}) - \int_{\mathcal{F}^{\star}_\ell} \tau^\ell_F(v_k-v)(\bm{\phi}\cdot \bm{n}) \right)
\\ & = \int_{\Omega^+} \nabla(v_k-v) \cdot \bm{\phi} - \int_{\cF^+} \jump{v_k-v}(\bm{\phi}\cdot \bm{n}),
\end{split}
\end{equation}
where, in passing from the second to the third line, we have used the convergence as $\ell\rightarrow \infty$ of the first two terms in the second line, which follows from $\int_{\Omega^+}\abs{\nabla (v_k-v)}^2+\int_{\cF^+}h_+^{-1}\abs{\jump{v_k-v}}^2<\infty$, and the fact that the remainder term $\int_{\mathcal{F}^{\star}_\ell}\tau_F^{\ell} (v_k-v)(\bm{\phi}\cdot \bm{n})\rightarrow 0$ as $\ell \rightarrow \infty$ as a result of the Cauchy--Schwarz inequality, the trace inequality and the bound
\[
\lim_{\ell\rightarrow \infty}\int_{\mathcal{F}^{\star}_\ell}\abs{\tau^{\ell}_F(v_k-v)}\lesssim \lim_{\ell\rightarrow \infty} \left(\int_{\cT^+\setminus \mathcal{T}_{\ell}^{1+}} \left[\abs{\nabla(v-v_k)}^2+h_+^{-2}\abs{v-v_k}^2 \right] \right)^{\frac{1}{2}} =0,
\]
which crucially uses the finiteness $\int_{\cT^+}h_+^{-2}\abs{v-v_k}^2<\infty$ as a result of~\eqref{eq:finite_approx2}.
Hence, by addition and subtraction, we use~\eqref{eq:distderiv_H100Tp} for $\pair{Dv,\bm{\phi}}_{\mathbb{R}^\dim}$ and~\eqref{eq:finite_approx_4} to obtain
\[
\begin{aligned}
\langle Dv_k,\bm{\phi}\rangle_{\mathbb{R}^\dim} = \int_{\Omega^+} \nabla v_k \cdot \bm{\phi} + \int_{\Omega^-}\nabla v \cdot \bm{\phi} - \int_{\cF^+} \jump{v_k}(\bm{\phi}\cdot \bm{n}) &&& \forall \bm{\phi} \in C^\infty_0\big(\R^\dim;\R^{\dim}\big),
\end{aligned}
\]
which shows that $v_k$ satisfies~\eqref{eq:distderiv_H100Tp} and also that $\nabla v_k = \nabla v$ on $\Omega^-$. Therefore $v_k\in H^1_D(\Omega;\cT^+)$ for each $k\in \mathbb{N}$.
The same argument as above can now be applied to each of the components of $\nabla v_k$, since $\nabla v_k = \nabla v$ on $\Omega^+_k\cup \Omega^-$ and since $\int_{\cT^+}h_+^{-2}\abs{\nabla(v_k-v)}^2<\infty$ for all $k\in\mathbb{N}$ by~\eqref{eq:finite_approx2}. This yields
\begin{equation}\label{eq:finite_approx_5}
\langle D(\Npw{v_k}),\bm{\varphi}\rangle_{\Omega} = \int_{\Omega^+} \Dpw v_k:\bm{\varphi} + \int_{\Omega^-} \Dpw v:\bm{\varphi}- \int_{\cF^{I+}} \jump{\Npw{v_k}}\cdot(\bm{\varphi}\bm{n}),
\end{equation}
for all $\bm{\varphi}\in C^\infty_0\big(\Om;\R^{\dim\times\dim}\big)$, thus showing that $v_k$ satisfies~\eqref{eq:distderiv_h2}, that $\nabla^2 v_k$ equals the piecewise Hessian of $v_k$ over the elements $\cT^+$ and that $\nabla^2 v_k = \nabla^2 v$ on $\Omega^+_k \cup \Omega^-$.
These identities along with the bounds in~\eqref{eq:finite_approx2}, \eqref{eq:finite_approx_volume_terms}, and \eqref{eq:finite_approx_jump_convergence} show that $\norm{v_k}_{H^2_D(\Om;\Tp)}<\infty$ and thus $v_k \in H^2_D(\Om;\Tp)$ for each $k\in\mathbb{N}$, and that $\norm{v-v_k}_{H^2_D(\Om;\Tp)}\rightarrow 0$ as $k\rightarrow \infty$, which proves \eqref{eq:finite_approx_1}.
\end{proof}
\begin{remark}[HCT element for $\dim=3$]\label{rem:3dHCT}
In the case $\dim=3$, we consider the generalization of the HCT element due to Worsey and Farin~\cite{WorseyFarin87}, which is based on the subdivision of each tetrahedron into twelve sub-tetrahedra following the incentre splitting algorithm detailed in \cite[p.~108]{WorseyFarin87}. Possible degrees of freedom on a reference element are depicted in Figure~\ref{fig:HCT_elements}, although we note that these elements are not affine equivalent.
A set of degrees of freedom on each physical element that extends the two-dimensional case includes the function value and gradient value at every vertex, and, for each edge, the orthogonal projection of the gradient values at edge midpoints into the plane orthogonal to the edge.
The generalization of~\eqref{eq:CT_dofs} on the averaging of the degrees of freedom is then similar to that in~\cite{NeilanWu19}.
\end{remark}
\emph{Proof of Theorem~\ref{thm:H1tp_finite_jumps}.} The proof of Theorem~\ref{thm:H1tp_finite_jumps} is similar to the proof just given for Theorem~\ref{thm:finite_approx}, the main difference being that the quasi-interpolation operator used in Theorem~\ref{thm:finite_approx} is replaced by a nodal quasi-interpolant into piecewise polynomials over $\cT^+\setminus\Tk^+$ that enforces $C^0$-continuity on all but finitely many faces of $\cF^+$ (for instance, it is enough to consider piecewise affine approximations).
If $v\in H^1_D(\Omega;\cT^+)$ then the quasi-interpolant also enforces a homogeneous Dirichlet boundary condition on $\partial\Omega$, whereas this is not needed for functions $v\in H^1(\Omega;\cT^+)$.
We leave the remaining details of the proof to the reader. \qed
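For concreteness, one admissible choice of such a quasi-interpolant, given here only as a sketch that mirrors the averaging in \eqref{eq:CT_dofs}, is the piecewise affine function defined on each $K\in\cT^+\setminus\Tk^+$ through its vertex values
\begin{equation*}
(v_k|_K)(z)\coloneqq
\begin{cases}
\frac{1}{\abs{N_+(z)}}\sum_{K^\prime\in N_+(z)}(\pi_1 v|_{K^\prime})(z) &\text{if } z\in\mathcal{V}_K^I,\\
0 &\text{if } z\in\mathcal{V}_K\setminus\mathcal{V}_K^I,
\end{cases}
\end{equation*}
where $\pi_1 v|_{K^\prime}$ denotes the elementwise $L^2$-orthogonal projection of $v$ into affine functions, and where the second case enforces the boundary condition in the case of $H^1_D(\Omega;\cT^+)$; for $H^1(\Omega;\cT^+)$, boundary vertex values are instead averaged over the available neighbours.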
\begin{corollary}[Symmetry of the Hessian]\label{cor:H2_omm_restriction}
For every $v\in H^2_D(\Om;\Tp)$ there exists a $w\in H^2(\Omega)\cap H^1_0(\Omega)$ such that $v=w$, $\nabla v=\nabla w$, and $\nabla^2 v=\nabla^2 w$ a.e.\ on~$\Omega^-$.
Furthermore, $\nabla^2 v$ is symmetric a.e.\ on $\Omega$ for all $v\in H^2_D(\Om;\Tp)$.
\end{corollary}
\begin{proof}
If $\Omega^-$ is empty then there is nothing to show, since $\nabla^2 v$ is symmetric a.e. on $\Omega^+$ as shown in Remark~\ref{rem:symmetry}. Therefore, we consider the case where $\Omega^-$ is nonempty. The proof follows the same path as the proof of~Corollary~\ref{cor:H1_omm_restriction}: choose $k\in \mathbb{N}$ and let $v_k \in H^2_D(\Om;\Tp)$ be given by Theorem~\ref{thm:finite_approx}. Then, by Lemma~\ref{lem:eventuallyinT+}, there exists $m=m(k)$ such that $v_k$ has possible nonzero jumps only on $\mathcal{F}^{\dagger}_m$, i.e.\ $\jump{v_k}_F=0$ for every face $F\in \cF^+\setminus \mathcal{F}^{\dagger}_m$ and $\jump{\nabla v_k}_F=0$ for every $F\in \cF^{I+} \setminus \mathcal{F}^{I\dagger}_m$. Then, as shown in the proof of Corollary~\ref{cor:H1_omm_restriction}, there exists $\eta \in C^\infty_0(\mathbb{R}^\dim)$ such that $\eta|_{\Omega^-}=1$ and $\eta|_{\Omega^+_m}=0$. Then, define $w(x) \coloneqq \eta(x) v_k(x) $ for all $x\in \mathbb{R}^\dim$, where we recall that $v_k$ is extended by zero outside of $\overline{\Omega}$. The same arguments in the proof of Corollary~\ref{cor:H1_omm_restriction} imply that $w\in H^1_0(\Omega)$ and that $\nabla w = \eta \nabla v_k + v_k \nabla \eta$, and moreover that $\nabla w=\nabla v$ a.e.\ on $\Omega^-$.
We now show that also $w\in H^2(\Omega)$ so that $w\in H^2(\Omega)\cap H^1_0(\Omega)$. Considering an arbitrary $\bm{\varphi}\in C^\infty_0\big(\Om;\R^{\dim\times\dim}\big)$, a straightforward calculation using the known distributional derivatives of $v_k$ and $\nabla v_k$ shows that
\[
\begin{aligned}
&\langle D(\nabla w),\bm{\varphi}\rangle = -\int_\Omega\nabla w\cdot(\Div\bm{\varphi}) = -\int_\Omega(\eta\Npw{v_k}+v_k\nabla\eta)\cdot(\Div\bm{\varphi})\\
& = -\int_\Omega \left[\nabla v_k {\cdot} \Div(\eta\bm{\varphi}) - \left(\nabla v_k{\otimes}\nabla \eta\right) {:} \bm{\varphi} + v_k \Div(\nabla \eta^\top \bm{\varphi}) - v_k \nabla^2\eta {: }\bm{\varphi} \right]\\
& = \int_{\Omega} \left[ \eta \nabla^2 v_k + \nabla v_k \otimes \nabla \eta + \nabla \eta \otimes \nabla v_k + v_k \nabla^2 \eta \right]: \bm{\varphi}
\end{aligned}
\]
where the last equality above follows from~\eqref{eq:distderiv_h2} and~\eqref{eq:distderiv_H1Tp}, where it is noted that all terms involving jumps vanish owing to the facts that $\bm{\varphi}$ vanishes on $\partial\Omega$, that $\eta$ vanishes on every face $F\in \mathcal{F}^{\dagger}_m$, and the fact that $v_k$ and $\nabla v_k$ have possible nonzero jumps only on $\mathcal{F}^{\dagger}_m$ as explained above.
Thus, $\nabla^2 w= \eta \nabla^2 v_k + \nabla v_k \otimes \nabla \eta + \nabla \eta \otimes \nabla v_k + v_k \nabla^2 \eta$ and $w\in H^2(\Omega) \cap H^1_0(\Omega)$.
Furthermore, we see that $\nabla^2 w = \nabla^2 v_k = \nabla^2 v $ a.e.\ in $\Omega^-$. Since $\nabla^2 w$ is symmetric owing to $w\in H^2(\Omega)$, we see that $\nabla^2 v$ is symmetric a.e.\ in $\Omega^-$. Since $\nabla^2 v$ is also symmetric a.e.\ on $\Omega^+$, as shown in Remark~\ref{rem:symmetry}, we conclude that $\nabla^2 v$ is symmetric a.e.\ on~$\Omega$.
\end{proof}
Since Corollary~\ref{cor:H2_omm_restriction} shows that functions $v\in H^2_D(\Om;\Tp)$ have symmetric Hessians, we may now write $\nabla^2_{x_ix_j}v\coloneqq (\nabla^2 v)_{ij}$, with symmetry giving $\nabla^2_{x_ix_j}v=\nabla^2_{x_jx_i} v$ for all $i,\,j\in\{1,\dots,\dim\}$.
The symmetry of the Hessians of functions in $H^2_D(\Om;\Tp)$ shown in Corollary~\ref{cor:H2_omm_restriction} crucially allows for the construction of good polynomial approximations over the meshes, including over elements that are eventually refined. Recall that the set $\cF^+_\circ(K)$, for any element $K$, is defined in~\eqref{eq:fp_circ}.
\begin{lemma}[Approximation by quadratic polynomials]\label{lem:p2_element_approx}
For every function $v\in H^2_D(\Om;\Tp)$, and every $K\in\cT_k$, $k\in\mathbb{N}$, we have
\begin{multline}\label{eq:p2_element_approx}
\inf_{\hat{v}\in \mathbb{P}_2}\sum_{m=0}^2 \int_K h_K^{2m-4} \abs{\nabla^m(v-\hat{v})}^2 \\ \lesssim \int_K \abs{\nabla^2 v-\overline{\nabla^2 v|_K}}^2 + \int_{\cF^+_\circ(K)} \left[ h_+^{-1} \abs{\jump{\nabla v}}^2+h_K^{-2}h_+^{-1}\abs{\jump{v}}^2\right],
\end{multline}
where $\mathbb{P}_2$ denotes the space of quadratic polynomials, and where $\overline{\nabla^2 v|_K} \in \mathbb{R}^{\dim\times\dim}$ denotes the component-wise mean-value of $\nabla^2 v$ over $K$, i.e.\ $\bigl[\overline{\nabla^2 v|_K}\bigr]_{ij}=\frac{1}{\abs{K}}\int_K \nabla^2_{x_i x_j} v$ for all $i,\,j\in\{1,\dots,\dim\}$.
\end{lemma}
\begin{proof}
We construct a polynomial $\hat{v}\in \mathbb{P}_2(K)$ such that
\begin{equation}\label{eq:p2_element_approx_1}
\int_K (v-\hat{v}) = \int_K \nabla_{x_i} (v-\hat{v}) = \int_K \nabla^2_{x_ix_j} (v-\hat{v}) =0, \quad \forall i,\,j \in \{1,\dots,\dim\},
\end{equation}
which implies that $\nabla^2 \hat{v} = \overline{\nabla^2 v|_K}$ since $\hat{v}$ is a quadratic polynomial.
For shorthand, let $\bm{H}\coloneqq \overline{\nabla^2 v|_K} \in \mathbb{R}^{\dim\times\dim}$, and note that $\bm{H}$ is symmetric owing to the symmetry of $\nabla^2 v$ as shown by Corollary~\ref{cor:H2_omm_restriction}.
Then, define the vector $\bm{d}\in \mathbb{R}^\dim$ by $\bm{d}=\frac{1}{\abs{K}}\int_K\left[\nabla v - \bm{H} x\right]\mathrm{d} x$, where the integral is taken component-wise, and let the constant $a$ be defined by $a \coloneqq \frac{1}{\abs{K}}\int_K \left[v- \bm{d}\cdot x - \frac{1}{2} x^\top \bm{H} x \right]\mathrm{d}x $. We claim that $\hat{v}(x)\coloneqq a + \bm{d}\cdot x + \frac{1}{2} x^\top \bm{H} x$ satisfies~\eqref{eq:p2_element_approx_1}.
First, it is clear that $\int_K (v-\hat{v})=0$ owing to the definition of the constant $a$. Next, the symmetry of $\bm{H}$ implies that $\nabla \hat{v}(x) = \bm{d}+\frac{1}{2}(\bm{H}+\bm{H}^\top)x= \bm{d}+ \bm{H} x$ for all $x\in K$, so by definition of the vector $\bm{d}$ we get $\int_K\nabla_{x_i}(v-\hat{v})=0$.
Finally, we have $\nabla^2 \hat{v}=\bm{H}=\overline{\nabla^2 v|_K}$, so~\eqref{eq:p2_element_approx_1} is verified.
To obtain~\eqref{eq:p2_element_approx}, it remains only to apply the Poincar\'e inequality of Theorem~\ref{thm:poincare} to $v-\hat{v}$ and each component of its gradient. First, the application of the Poincar\'e inequality to each component $\nabla_{x_i}(v-\hat{v}) \in H^1(\Omega;\cT^+)$, for each $i\in\{1,\dots,\dim\}$, followed by a summation over the components, gives
\begin{equation}\label{eq:p2_element_approx_2}
h_K^{-2}\int_{K} \abs{\nabla (v-\hat{v})}^2 \lesssim \int_{K} \abs{\nabla^2(v-\hat{v})}^2 + \int_{\cF^+_{\circ}(K)} h_{+}^{-1}\abs{\jump{\nabla v}}^2,
\end{equation}
where we have used the fact that each component of $\nabla(v-\hat{v})$ has zero mean-value on $K$ from~\eqref{eq:p2_element_approx_1}, and where we have simplified the jumps $\jump{\nabla(v-\hat{v})}=\jump{\nabla v}$ since $\hat{v}$ is a polynomial.
Next, the Poincar\'e inequality applied to $v-\hat{v}$, which also has vanishing mean-value over $K$ by~\eqref{eq:p2_element_approx_1}, implies
\begin{equation}\label{eq:p2_element_approx_3}
h_K^{-4}\int_{K} \abs{v-\hat{v}}^2 \lesssim h_K^{-2}\int_{K} \abs{\nabla (v-\hat{v})}^2 + \int_{\cF^+_{\circ}(K)} h_K^{-2} h_+^{-1}\abs{\jump{v}}^2.
\end{equation}
We then obtain \eqref{eq:p2_element_approx} by combining~\eqref{eq:p2_element_approx_2} with \eqref{eq:p2_element_approx_3}.
\end{proof}
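Note that if the jump terms over $\cF^+_\circ(K)$ vanish, for instance when $v|_K\in H^2(K)$, then \eqref{eq:p2_element_approx} reduces to a Bramble--Hilbert-type bound in terms of the oscillation of the Hessian alone,
\begin{equation*}
\inf_{\hat{v}\in \mathbb{P}_2}\sum_{m=0}^2 \int_K h_K^{2m-4} \abs{\nabla^m(v-\hat{v})}^2 \lesssim \int_K \abs{\nabla^2 v-\overline{\nabla^2 v|_K}}^2.
\end{equation*}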
\subsection{Limit spaces of finite element functions}
We now introduce the limit spaces of the finite element spaces, which consist of functions in $H^2_D(\Om;\Tp)$ that are piecewise polynomials of degree at most $p$ over~$\cT^+$.
Recall that the norm and inner-product of the space $H^2_D(\Om;\Tp)$ are defined in~\eqref{eq:HD_norm_def} and~\eqref{eq:HD_innerprod_def} respectively.
\begin{definition}[Limit spaces]\label{def:limit_space}
Let $V_\infty^0$ and $V_\infty^1$ be defined by
\begin{equation}\label{eq:limit_space_def}
V_\infty^0 \coloneqq \{ v\in H^2_D(\Om;\Tp)\colon v|_K \in \mathbb{P}_p \; \forall K \in \cT^+\}, \quad V_\infty^1 \coloneqq V_\infty^0 \cap H^1_0(\Omega).
\end{equation}
The spaces $V_\infty^0$ and $V_\infty^1$ are equipped with the same inner-product and norm as $H^2_D(\Om;\Tp)$.
\end{definition}
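Two extreme cases may help to illustrate Definition~\ref{def:limit_space}. If every element is eventually refined, so that $\cT^+=\emptyset$ and $\Omega^-=\Omega$, then the constraint $v|_K\in\mathbb{P}_p$ in \eqref{eq:limit_space_def} is vacuous, all jump terms disappear, and one recovers $V_\infty^0=V_\infty^1=H^2(\Omega)\cap H^1_0(\Omega)$, cf.\ Corollary~\ref{cor:H2_omm_restriction}. If instead refinement terminates, i.e.\ $\cT_k=\cT_m$ for all $k\geq m$, then $\cT^+=\cT_m$ and $V_\infty^s$ coincides with the finite element space $V_m^s$, provided $V_m^0$ denotes the full space of piecewise $\mathbb{P}_p$ functions over $\cT_m$.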
It follows that $V_\infty^1$ is a closed subspace of $V_\infty^0$ and that $V_\infty^0$ is a closed subspace of $H^2_D(\Om;\Tp)$. Therefore the spaces $V_\infty^s$, $s\in\{0,1\}$, are Hilbert spaces under the same inner-product as $H^2_D(\Om;\Tp)$, see Theorem~\ref{thm:completeness_H2}.
The following Theorem shows that functions in the spaces $V_\infty^s$ can be approximated by sequences of functions from the corresponding finite element spaces, thereby justifying the choice of notation.
\begin{remark}[Extension of $\norm{\cdot}_k$ to $H^2_D(\Om;\Tp)+V_k^s$]
The trace inequality of Theorem~\ref{thm:trace} implies that any function $v\in H^2_D(\Om;\Tp)$ has square-integrable jumps $\jump{v}$ and $\jump{\nabla v}$ over $\cF_k$ for each $k\in\mathbb{N}$.
Hence, the norm $\normk{v}$ is finite for any $v\in H^2_D(\Om;\Tp)$ and any $k\in\mathbb{N}$.
We may thus extend the norms $\normk{\cdot}$ to the sum space $H^2_D(\Om;\Tp) + V_k^s$ for all $s\in\{0,1\}$ and all $k\in\mathbb{N}$.
\end{remark}
\begin{theorem}[Approximation by finite element functions]\label{thm:limit_space_characterization}
Let $s\in\{0,1\}$. Then, for any $v\in V_\infty^s$, there exists a sequence of finite element functions $v_k\in V_k^s$ for each $k\in\mathbb{N}$, such that
\begin{equation}\label{eq:limit_space_characterization}
\lim_{k\rightarrow \infty} \norm{v-v_k}_k =0, \quad \sup_{k\in\mathbb{N}} \norm{v_k}_k<\infty.
\end{equation}
Moreover, the sequence $\{v_k\}_{k\in\mathbb{N}}$ above can be chosen such that
\begin{equation}
\lim_{k\rightarrow\infty}\int_\Omega \left[h_k^{-2}\abs{\nabla(v-v_k)}^2+h_k^{-4}\abs{v-v_k}^2\right]=0 .
\end{equation}
\end{theorem}
\begin{proof}
\emph{Step 1. Proof for $s=0$.}
For each $k\in\mathbb{N}$, let $v_k \in V_k^0$ denote the $L^2$-orthogonal projection of $v$ into $V_k^0$. Then, since $v|_K \in \mathbb{P}_p $ for each $K\in \cT^+$, it follows immediately that $(v-v_k)|_K = 0$ for each $K\in \Tk^+$. This implies also that the jumps $\jump{v-v_k}$ and $\jump{\nabla(v-v_k)}$ are only possibly nonvanishing on faces with at least one parent element in $\Tk^-$.
Therefore, a counting argument gives
\begin{multline}\label{eq:limit_space_1}
\norm{v-v_k}_k^2 \lesssim \sum_{m=0}^2 \int_{\Tk^-}\abs{\nabla^m(v-v_k)}^2 \\ + \sum_{K\in\Tk^-} \int_{\partial K}\left[ h_{k}^{-1}\abs{\tau_{\p K} \nabla (v-v_k)}^2 + h_{k}^{-3}\abs{\tau_{\p K} (v-v_k)}^2\right],
\end{multline}
where it is recalled that the trace operator $\tau_{\p K}$ is bounded from $H^1(\Omega;\cT^+)$ to $L^2(\partial K)$, as shown by Theorem~\ref{thm:trace}.
Recall also that by definition $h_k|_{K^\circ}=h_K=\abs{K}^{\frac{1}{\dim}}$.
Since $v_k$ is the $L^2$-orthogonal projection of $v$ into $V_k^0$, the stability of the $L^2$-orthogonal projection and inverse inequalities imply that $\sum_{m=0}^2\int_K h_k^{2m-4}\abs{\nabla^m(v-v_k)}^2\lesssim\inf_{\hat{v}\in\mathbb{P}_p}\sum_{m=0}^2 \int_K h_K^{2m-4}\abs{\nabla^m(v-\hat{v})}^2$ for every $K\in\cT_k$, where we recall that $p\geq 2$.
Therefore the trace inequality of Theorem~\ref{thm:trace} and the approximation bound of Lemma~\ref{lem:p2_element_approx} imply that, for each $K\in \Tk^-$,
\begin{multline}
\sum_{m=0}^2\int_K h_k^{2m-4}\abs{\nabla^m(v-v_k)}^2 + \int_{\partial K}\left[ h_{k}^{-1}\abs{\tau_{\p K} \nabla (v-v_k)}^2 + h_{k}^{-3}\abs{\tau_{\p K} (v-v_k)}^2\right]
\\ \lesssim \int_K \abs{\nabla^2 v - \overline{\nabla^2 v|_K}}^2 + \int_{\cF^+_\circ(K)} \left[ h_+^{-1} \abs{\jump{\nabla v}}^2+h_+^{-3}\abs{\jump{v}}^2\right],
\end{multline}
where we have used the inequality $h_k^{-2}h_+^{-1}\leq h_+^{-3}$ in the term for the jumps.
We now define $\pi_k^0(\nabla^2 v)$ to be the piecewise constant $L^2$-orthogonal projection of $\nabla^2 v$ over $\cT_k$; in particular, $\pi_k^0(\nabla^2 v)|_K = \overline{\nabla^2 v|_K}$ for each $K\in\cT_k$.
Next, recall that a face $F$ belongs to $\cF^+_\circ(K)$ if and only if $F\in \cF^{I+}$, $F\subset K$ and $F\not\subset\partial K$; thus $F$ cannot be in $\cF_k$, and hence $F\in \cF^+\setminus \cF_k^+$. Therefore, it follows that
\begin{multline}\label{eq:limit_space_2}
\norm{v-v_k}_k^2 + \sum_{m=0}^1 \int_{\Omega}h_k^{2m-4}\abs{\nabla^m(v-v_k)}^2 \\ \lesssim \int_{\Tk^-} \abs{\nabla^2 v -\pi_k^0(\nabla^2 v)}^2 + \int_{\cF^{I+}\setminus\cF_k^+} \left[ h_+^{-1} \abs{\jump{\nabla v}}^2+h_+^{-3}\abs{\jump{v}}^2\right].
\end{multline}
We now show that the right-hand side of~\eqref{eq:limit_space_2} tends to $0$ as $k\rightarrow \infty$. The terms involving the jumps above consist of the tail of a convergent series bounded by $\norm{v}_{H^2_D(\Om;\Tp)}^2$, and thus
\[
\lim_{k\rightarrow\infty}\int_{\cF^{I+}\setminus\cF_k^+} \left[ h_+^{-1} \abs{\jump{\nabla v}}^2+h_+^{-3}\abs{\jump{v}}^2\right] =0.
\]
To handle the volume terms, let $\epsilon>0$ be arbitrary; then, there exist smooth functions $\bm{\varphi}_{ij} \in C^\infty_0(\Omega)$ such that $\norm{\nabla^2_{ij} v - \bm{\varphi}_{ij}}_{\Omega}<\epsilon$ for all $i,\,j \in \{1,\dots,\dim\}$. Therefore, recalling Lemma~\ref{lem:hjvanishes}, we get
\begin{equation*}
\begin{split}
\lim_{k\rightarrow \infty}\int_{\Tk^-}\abs{\nabla^2 v-\pi_k^0(\nabla^2 v)}^2
& \lesssim \sum_{i,j=1}^\dim \left[\norm{\nabla_{ij}^2 v- \bm{\varphi}_{ij}}_{\Omega}^2+ \lim_{k\rightarrow \infty}\norm{\bm{\varphi}_{ij}-\pi_k^0(\bm{\varphi}_{ij})}_{\Omega_k^{-}}^2 \right] \\
& \lesssim d^2 \epsilon^2 + \lim_{k\rightarrow \infty}\sum_{i,j=1}^\dim \norm{h_k \nabla \bm{\varphi}_{ij}}_{\Omega_k^{-}}^2 \leq d^2 \epsilon^2,
\end{split}
\end{equation*}
where, in the first inequality, we have used the stability of the $L^2$-orthogonal projection to bound $\norm{w_{ij}-\pi_k^0(w_{ij})}_\Omega\leq \norm{w_{ij}}_\Omega$, with $w_{ij}=\nabla_{ij} v-\bm{\varphi}_{ij}$, and where, in the second inequality, we note that $\norm{h_k \nabla \bm{\varphi}_{ij}}_{\Omega_k^{-}}\rightarrow 0$ in the limit owing to Lemma~\ref{lem:hjvanishes}.
Since $\epsilon$ is arbitrary, we conclude that $\int_{\Tk^-}\abs{\nabla^2 v-\pi_k^0(\nabla^2 v)}^2\rightarrow 0$ as $k\rightarrow \infty$, which completes the proof that the right-hand side of \eqref{eq:limit_space_2} vanishes in the limit; from this we then infer that
\[
\begin{aligned}
\lim_{k\rightarrow\infty}\norm{v-v_k}_k=0, &&&
\lim_{k\rightarrow\infty}\int_\Omega \left[h_k^{-2}\abs{\nabla(v-v_k)}^2+h_k^{-4}\abs{v-v_k}^2\right]=0.
\end{aligned}
\]
Combining the triangle inequality with the bounds obtained above then shows that $\sup_{k\in\mathbb{N}}\norm{v_k}_k\lesssim \norm{v}_{H^2_D(\Om;\Tp)}$ and thus completes the proof of~\eqref{eq:limit_space_characterization} for $s=0$.
\emph{Step 2. Proof for $s=1$.} Now let $s=1$ and consider $v\in V_\infty^1$. For each $k\in\mathbb{N}$, let $w_k \in V_k^0$ be the $L^2$-orthogonal projection of $v$ into $V_k^0$. Note that we are now relabelling the sequence of approximations used in \emph{Step~1} above.
Since $V_\infty^1\subset V_\infty^0$, it follows from the arguments of \emph{Step~1} that $\norm{v-w_k}_{k}\rightarrow 0$ and $\int_\Omega h_k^{2m-4}\abs{\nabla^m(v-w_k)}^2\rightarrow 0$ as $k\rightarrow \infty$, for each $m\in\{0,1\}$.
Now let $v_k\coloneqq E_k^1 w_k$, where $E_k^1\colon V_k^0\rightarrow V_k^1$ is the $H^1_0$-conforming enrichment operator based on local averaging of degrees of freedom as in~\cite{KarakashianPascal03}. Adapting the analysis therein to the present setting, we obtain the bounds
\begin{equation}\label{eq:limit_space_3}
\sum_{m=0}^2 \int_\Omega h_k^{2m-4}\abs{\nabla^m(w_k-v_k)}^2 \lesssim \int_{\cF_k} h_k^{-3}\abs{\jump{w_k}}^2=\int_{\cF_k} h_k^{-3}\abs{\jump{v-w_k}}^2\leq \norm{v-w_k}_k^2,
\end{equation}
where we have used the fact that now $v\in V_\infty^1 \subset H^1_0(\Omega)$ and hence $\jump{w_k}=\jump{w_k-v}$ for all faces of $\cF_k$. Furthermore, the bound~\eqref{eq:limit_space_3}, the triangle inequality and the trace inequality imply that
\begin{multline}\label{eq:limit_space_4}
\int_{\Fk^I}h_k^{-1}\abs{\jump{\nabla (v-v_k)}}^2 \lesssim \int_{\Fk^I}h_k^{-1}\abs{\jump{\nabla(v-w_k)}}^2 +\int_{\Fk^I}h_k^{-1}\abs{\jump{\nabla(w_k-v_k)}}^2
\\ \lesssim \norm{v-w_k}_k^2 +\sum_{m=1}^2 \int_\Omega h_k^{2m-4}\abs{\nabla^m(w_k-v_k)}^2 \lesssim \normk{v-w_k}^2.
\end{multline}
So, after applying the triangle inequality and combining the bounds~\eqref{eq:limit_space_3} and \eqref{eq:limit_space_4}, we get
\begin{equation}
\norm{v-v_k}_k^2 + \sum_{m=0}^1 \int_\Omega h_k^{2m-4}\abs{\nabla^m(v-v_k)}^2 \lesssim \norm{v-w_k}_k^2 + \sum_{m=0}^1 \int_\Omega h_k^{2m-4}\abs{\nabla^m(v-w_k)}^2,
\end{equation}
and we note that the right-hand side above tends to $0$ as $k\rightarrow \infty$.
Hence if $v\in V_\infty^1$, then the claim of the theorem is also satisfied for a sequence of functions $v_k\in V_k^1$ for all $k\in\mathbb{N}$.
\end{proof}
\begin{remark}
Theorem~\ref{thm:limit_space_characterization} shows that functions in $V_\infty^s$ are limits in the sense of \eqref{eq:limit_space_characterization} of functions from the finite element spaces $V_k^s$, thereby justifying the choice of notation for the limit spaces.
Furthermore,~Theorem~\ref{thm:limit_space_characterization} establishes the connection between our approach and the approach in \cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18} where the limit spaces are defined in terms of the existence of an approximating sequence from the finite element spaces.
\end{remark}
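As a purely illustrative aside (not part of the analysis above), the following self-contained Python sketch mimics the approximation property \eqref{eq:limit_space_characterization} in a simplified one-dimensional setting: a smooth function is projected elementwise onto discontinuous piecewise quadratics ($p=2$) over uniformly refined meshes with $h_k=2^{-k}$, and the weighted errors $\int h_k^{-4}\abs{v-v_k}^2$ and $\int h_k^{-2}\abs{(v-v_k)'}^2$ are observed to decay. The uniform meshes, the choice of $v$, and the quadrature order are assumptions made for the demonstration only.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

def l2_project(v, nodes, p=2, nq=10):
    """Per-element Legendre coefficients of the L2 projection of v."""
    t, w = L.leggauss(nq)                       # quadrature on [-1, 1]
    coeffs = []
    for a, b in zip(nodes[:-1], nodes[1:]):
        x = 0.5 * (b - a) * t + 0.5 * (a + b)   # map to element [a, b]
        # c_n = (2n+1)/2 * int_{-1}^{1} v(x(t)) P_n(t) dt
        coeffs.append(np.array([(2 * n + 1) / 2
                      * np.sum(w * v(x) * L.legval(t, np.eye(p + 1)[n]))
                      for n in range(p + 1)]))
    return coeffs

def weighted_errors(v, dv, nodes, coeffs, nq=10):
    """Return int h^-4 |v - v_k|^2 and int h^-2 |(v - v_k)'|^2."""
    t, w = L.leggauss(nq)
    e0 = e1 = 0.0
    for (a, b), c in zip(zip(nodes[:-1], nodes[1:]), coeffs):
        h = b - a
        x = 0.5 * h * t + 0.5 * (a + b)
        err = v(x) - L.legval(t, c)
        derr = dv(x) - L.legval(t, L.legder(c)) * (2.0 / h)
        e0 += h ** (-4) * np.sum(w * err ** 2) * 0.5 * h
        e1 += h ** (-2) * np.sum(w * derr ** 2) * 0.5 * h
    return e0, e1

v = lambda x: np.sin(np.pi * x)
dv = lambda x: np.pi * np.cos(np.pi * x)
for k in range(1, 7):
    nodes = np.linspace(0.0, 1.0, 2 ** k + 1)   # h_k = 2^{-k}
    e0, e1 = weighted_errors(v, dv, nodes, l2_project(v, nodes))
    print(f"k={k}:  h^-4 L2 = {e0:.3e},  h^-2 H1 = {e1:.3e}")
\end{verbatim}
For $p=2$ both weighted quantities decay like $h_k^2$, consistent with the unweighted errors gaining the factors $h_k^{p+1}$ and $h_k^{p}$, respectively.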
\begin{corollary}[Limits of norms and jumps]\label{cor:jump_term_limits}
For any $v\in V_\infty^s$, $s\in\{0,1\}$, the sequence $\{\normk{v}\}_{k\in\mathbb{N}}$ is monotone increasing and converges to $\norm{v}_{H^2_D(\Om;\Tp)}$ as $k \rightarrow \infty$, and
\begin{equation}\label{eq:jump_term_vanish}
\lim_{k\rightarrow \infty}\int_{\Fk^I\setminus\cF^{I\dagger}_k}h_k^{-1}\abs{\jump{\nabla v}}^2+\int_{\cF_k\setminus\cF^{\dagger}_k}h_k^{-3}\abs{\jump{v}}^2=0.
\end{equation}
The limit in \eqref{eq:jump_term_vanish} also holds with the sets $\cF^{\dagger}_k$ and $\cF^{I\dagger}_k$ replaced by $\cF_k^+$ and $\Fk^{I+}$, respectively.
\end{corollary}
\begin{proof}
The proof follows the same lines as \cite{DominincusGaspozKreuzer19,KreuzerGeorgoulis18}, and we include it only for completeness. For $v\in V_\infty^s$, let $v_k\in V_k^s$ denote the sequence of functions given by Theorem~\ref{thm:limit_space_characterization}. We infer the uniform boundedness of $\{\normk{v}\}_{k\in\mathbb{N}}$ from the convergence $\normk{v-v_k}\rightarrow 0$ as $k\rightarrow \infty$ and the uniform boundedness $\sup_{k\in\mathbb{N}}\normk{v_k}<\infty$. Moreover, the sequence $\{\normk{v}\}_{k\in\mathbb{N}}$ is monotone increasing since $h_k^{-1}\leq h_m^{-1}$ for all $m\geq k$, and thus converges to a limit.
We claim that $\int_{\Fk^I}h_k^{-1}\abs{\jump{\nabla v}}^2 \rightarrow \int_{\cF^{I+}}h_+^{-1}\abs{\jump{\nabla v}}^2$ and that $\int_{\cF_k}h_k^{-3}\abs{\jump{ v}}^2\rightarrow \int_{\cF^+}h_+^{-3}\abs{\jump{ v}}^2$.
For any $\epsilon>0$, there is an $\ell\in\mathbb{N}$ such that $\abs{\norm{v}^2_m-\norm{v}^2_k}<\epsilon$ for all $m,k\geq \ell$.
Moreover, Lemma~\ref{lem:face_refinements} shows that there is an $M=M(k)$ such that $\cF_k^+=\cF_k\cap \mathcal{F}_m$ for all $m\geq M$, which implies also that $\Fk^{I+}=\Fk^I\cap\mathcal{F}^I_m$; hence
\begin{multline*}
\epsilon > \int_{\mathcal{F}^I_m\setminus(\Fk^{I+})} h_{m}^{-1} \abs{\jump{\nabla v}}^2 - \int_{\Fk^I\setminus(\Fk^{I+})}h_{k}^{-1}\abs{\jump{\nabla v}}^2
\\ + \int_{\mathcal{F}_m\setminus(\cF_k^+)} h_{m}^{-3} \abs{\jump{v}}^2 - \int_{\cF_k\setminus(\cF_k^+)} h_{k}^{-3} \abs{\jump{v}}^2
\\ \gtrsim \int_{\Fk^I\setminus(\Fk^{I+})}h_{k}^{-1}\abs{\jump{\nabla v}}^2 + \int_{\cF_k\setminus(\cF_k^+)} h_{k}^{-3} \abs{\jump{v}}^2,
\end{multline*}
where in the second inequality we use the fact that when a face is refined, the $(d-1)$-dimensional Hausdorff measure of that face decreases at least by a fixed factor strictly less than one.
Thus, we obtain $\int_{\Fk^I\setminus(\Fk^{I+})}h_{k}^{-1}\abs{\jump{\nabla v}}^2 + \int_{\cF_k\setminus(\cF_k^+)} h_{k}^{-3} \abs{\jump{v}}^2 \rightarrow 0$ as $k\rightarrow \infty$.
Note that $h_+|_F=h_k|_F$ for any $F\in\cF_k^+$, so we use $\int_{\cF^{I+}\setminus\Fk^{I+}} h_+^{-1}\abs{\jump{\nabla v}}^2 \rightarrow 0 $ to obtain $\int_{\Fk^I}h_k^{-1}\abs{\jump{\nabla v}}^2 \rightarrow \int_{\cF^{I+}}h_+^{-1}\abs{\jump{\nabla v}}^2$.
Similarly, we find that $\int_{\cF_k}h_k^{-3}\abs{\jump{ v}}^2\rightarrow \int_{\cF^+}h_+^{-3}\abs{\jump{ v}}^2$.
The above limits imply that $\int_{\Fk^I\setminus\Fk^{I+}} h_+^{-1}\abs{\jump{\nabla v}}^2 \rightarrow 0$ and that $\int_{\cF_k\setminus\cF_k^+} h_+^{-3}\abs{\jump{v}}^2\rightarrow 0$ as $k\rightarrow \infty$.
Then, we obtain~\eqref{eq:jump_term_vanish} from the limits $\int_{\cF^{I+}\setminus\cF^{I\dagger}_k} h_+^{-1}\abs{\jump{\nabla v}}^2 \rightarrow 0$ and from $\int_{\cF^+\setminus\cF^{\dagger}_k} h_+^{-3}\abs{\jump{ v}}^2 \rightarrow 0$ as $k\rightarrow \infty$. We finally conclude that $\normk{v}\rightarrow \norm{v}_{H^2_D(\Om;\Tp)}$ as $k\rightarrow\infty$ from the above limits.
\end{proof}
\subsection{Limit lifting operators and weak compactness of bounded sequences of finite element functions}
In order to study the weak convergence properties of bounded sequences of functions from the finite element spaces, we now introduce a lifting operator defined on the limit space $V_\infty^s$, along with corresponding lifted differential operators. Recall that for each $F\in\cF^+$, there exists $\ell\in\mathbb{N}$ such that $F\in \cF^{\dagger}_k$ for each $k\geq \ell$, thereby implying that $\bm{r}_k^F=\bm{r}_\ell^F$ for all $k\geq \ell$. We then define $\bm{r}_\infty^F \coloneqq \bm{r}_\ell^F$, and note that this is well-defined as it is independent of $\ell$. It follows that $\bm{r}_\infty^F$ maps $L^2(F;\mathbb{R}^\dim)$ into $L^2(\Om;\R^{\dim\times\dim})$, and moreover that the support of $\bm{r}_\infty^F(\bm{w})$ is contained in the union of the parent elements of $F$ in $\cT^+$, and is thus a subset of $\Omega^+$. Then, for any $v\in V_\infty^s$, define the lifted Hessian and Laplacian as
\begin{align}\label{eq:H_infty_def}
\bm{H}_\infty v \coloneqq \nabla^2 v - \bm{r}_\infty(\jump{\nabla v}), \quad \Delta_\infty v \coloneqq \Tr(\bm{H}_\infty v), \quad \bm{r}_\infty(\jump{\nabla v}) \coloneqq \sum_{F\in\cF^+}\bm{r}_\infty^F(\jump{\nabla v}),
\end{align}
where we note that the series defining $\bm{r}_\infty(\jump{\nabla v})$ in \eqref{eq:H_infty_def} is understood as a convergent series of functions in $L^2(\Om;\R^{\dim\times\dim})$, owing to the finite overlap of the supports of the lifting operators, which implies that
\begin{equation}\label{eq:infty_lifting_boundedness}
\norm{\bm{r}_\infty(\jump{\nabla v})}_{\Omega}^2\lesssim \sum_{F\in\cF^+}\norm{\bm{r}_\infty^F(\jump{\nabla v})}^2_{\Omega} \lesssim \int_{\cF^{I+}} h_+^{-1}\abs{\jump{\nabla v}}^2 + \int_{\cF^+}h_+^{-3}\abs{\jump{v}}^2 < \infty,
\end{equation}
for all $v\in V_\infty^s$, where we have used an inverse inequality to bound $\jump{\nabla_T v}$, the tangential component of the jumps, on boundary faces, which is possible since $v\in V_\infty^s$ is piecewise polynomial on $\cT^+$.
Moreover, the lifting $\bm{r}_\infty(\jump{\nabla v})$ is essentially supported on $\Omega^+$ and its restriction $\bm{r}_\infty(\jump{\nabla v})|_K$ is a piecewise $\dim\times\dim$-matrix valued polynomial of degree at most $q$ for each $K\in\cT^+$.
It is then easy to see that the operators $\bm{H}_\infty$ and $\Delta_\infty$ defined in~\eqref{eq:H_infty_def} are bounded on the space $V_\infty^s$, i.e.\
\begin{equation}\label{eq:infty_lifting_boundedness_2}
\begin{aligned}
\norm{\bm{H}_\infty v}_{\Omega} + \norm{\Delta_\infty v}_\Omega &\lesssim \norm{v}_{H^2_D(\Om;\Tp)} &&&\forall v \in V_\infty^s.
\end{aligned}
\end{equation}
The next lemma shows that the lifting operators defined in~\eqref{eq:H_infty_def} are the appropriate limits of the corresponding operators from~\eqref{eq:lifted_Hessian} applied to strongly convergent sequences of finite element functions.
\begin{lemma}[Convergence of lifting operators]\label{lem:lifted_Hess_convergence}
Let $\{v_k\}_{k\in\mathbb{N}}$ be a sequence of functions such that $v_k\in V_k^s$ for each $k\in\mathbb{N}$, and suppose that there is a $v\in V_\infty^s$ such that $\norm{v-v_k}_k\rightarrow 0$ as $k\rightarrow \infty$. Then~$\bm{H}_k v_k \rightarrow \bm{H}_{\infty} v$ and $\bm{r}_k(\jump{\nabla v_k})\rightarrow \bm{r}_{\infty}(\jump{\nabla v})$ in $L^2(\Om;\R^{\dim\times\dim})$ as $k\rightarrow\infty$.
\end{lemma}
\begin{proof}
Since the hypothesis of convergence in norms implies that $\nabla^2 v_k \rightarrow \nabla^2 v$ in $L^2(\Om;\R^{\dim\times\dim})$, it is enough to show that $\bm{r}_k(\jump{\nabla v_k}) \rightarrow \bm{r}_\infty(\jump{\nabla v})$ in $L^2(\Om;\R^{\dim\times\dim})$ as $k\rightarrow\infty$, as convergence of $\bm{H}_k v_k$ to $\bm{H}_\infty v$ then follows immediately.
By definition, the face lifting operator $\bm{r}_\infty^F=\bm{r}_k^F$ for every $F\in\cF^{\dagger}_k$.
So, the triangle inequality and the finite overlap of the supports of the lifting operators yield
\begin{multline}\label{eq:lifting_terms_bound}
\norm{\bm{r}_\infty(\jump{\Npw v})-\bm{r}_k(\jump{\Npw{ v_k}})}_\Omega^2
\lesssim \int_{\Fk^I}h_k^{-1}|\jump{\Npw(v-v_k)}|^2+\int_{\cF^{I+}\setminus\cF^{I\dagger}_k}h_+^{-1}|\jump{\Npw v}|^2
\\ +\int_{\Fk^I\setminus\cF^{I\dagger}_k}h_k^{-1}\abs{\jump{\Npw{v}}}^2
+ \int_{\Fk^B} h_k^{-3}\abs{\jump{v-v_k}}^2
\\ +\int_{\mathcal{F}^{B+}\setminus\cF_k^+}h_+^{-3}\abs{\jump{v}}^2+\int_{\Fk^B\setminus\cF^{\dagger}_k}h_k^{-3}\abs{\jump{v}}^2.
\end{multline}
The right-hand side of \eqref{eq:lifting_terms_bound} tends to zero owing to \eqref{eq:jump_term_vanish}, to the convergence $\norm{v-v_k}_k\rightarrow 0$, and to the vanishing tails $\int_{\cF^{I+}\setminus\cF^{I\dagger}_k}h_+^{-1}|\jump{\Npw v}|^2+\int_{\mathcal{F}^{B+}\setminus\cF_k^+}h_+^{-3}\abs{\jump{v}}^2\rightarrow 0$ as $k\rightarrow \infty$. Then $\bm{H}_k v_k\rightarrow \bm{H}_\infty v$ follows from the convergence of $\br_k(\jump{\Npw v_k})\rightarrow \bm{r}_\infty(\jump{\nabla v})$ and of $\nabla^2 v_k \rightarrow \nabla^2 v$.
\end{proof}
We now prove that bounded sequences of functions from the finite element spaces have appropriate weak compactness properties, and have weak limits in the limit spaces.
Let $\chi_{\Omega^+}$ denote the indicator function of the set~$\Omega^+$.
\begin{theorem}[Weak convergence]\label{thm:weak_convergence}
Let $\{v_k\}_{k\in\mathbb{N}}$ be a sequence of functions such that $v_k\in V_k^s$ for each $k\in\mathbb{N}$, and such that $\sup_{k\in\mathbb{N}}\norm{v_k}_k<\infty$.
Then, there exist a $v\in V_\infty^s$ and a $\bm{r}\in L^2(\Om;\R^{\dim\times\dim})$ such that $\bm{r} \chi_{\Omega^+}=\bm{r}_\infty(\jump{\nabla v})$ a.e.\ in $\Omega$, and there exists a subsequence $\{v_{k_j}\}_{j\in\mathbb{N}}$ such that $v_{k_j}\rightarrow v$ in $L^2(\Omega)$, $\nabla v_{k_j}\rightarrow \nabla v$ in $L^2(\Omega;\mathbb{R}^\dim)$, $\bm{H}_{k_j}v_{k_j}\rightharpoonup \bm{H}_{\infty} v$ and $\bm{r}_{k_j}(\jump{\nabla v_{k_j}}) \rightharpoonup \bm{r}$ in $L^2(\Om;\R^{\dim\times\dim})$ as $j\rightarrow \infty$.
\end{theorem}
\begin{proof}
Since $V_k^1\subset V_k^0$ for all $k\in\mathbb{N}$ and $V_\infty^1\subset V_\infty^0$, we consider the general case $s=0$, and handle the special case $s=1$ only where it is needed.
We will also frequently use the fact that for any integer $k\geq \ell$, if a face $F\in\cF_k \setminus \cF^{\dagger}_{\ell}$, then $h_k|_F \lesssim \norm{h_\ell \chi_{\Omega_{\ell}^{1-}}}_{L^\infty(\Omega)}$. This is because any element $K\in \cT_k$ that contains $F$ must be included in $\Omega_{\ell}^{1-}$, for otherwise we would have $F\in\cF^{\dagger}_{\ell}$, a contradiction.
\emph{Step 1. Compactness of values and gradients.}
The discrete Rellich--Kondrachov theorem for DG finite element spaces, see \cite[Theorem~5.6]{DiPietroErn12}, shows that the sequence $\{v_k\}_{k\in\mathbb{N}}$ is relatively compact in $L^2(\Omega)$, and thus, there exists a $v\in L^2(\Omega)$ and a subsequence, to which we pass without change of notation, such that $v_k\rightarrow v$ in $L^2(\Omega)$ as $k\rightarrow \infty$. Furthermore, after extending the functions $v_k$ and $v$ by zero, we further have $v_k \rightarrow v$ in $L^2(\mathbb{R}^\dim)$ as $k\rightarrow \infty$. The uniform boundedness of the sequence $\{v_k\}_{k\in\mathbb{N}}$ in $BV(\mathbb{R}^\dim)$, as shown by \cite[Lemma~5.2]{DiPietroErn12}, further implies that $v\in BV(\mathbb{R}^\dim)$.
Furthermore, for each $i\in\{1,\dots,\dim\}$, the sequence $\{\nabla_{x_i} v_k \}_{k\in\mathbb{N}}$ is uniformly bounded in both $BV(\Omega)$ and in $L^{r}(\Omega)$ for some $r>2$, see \cite[Lemma~2 \&~Theorem~4.1]{BuffaOrtner}, and thus, by compactness of the embedding of $BV(\Omega)$ into $L^1(\Omega)$, after passing to a further subsequence without change of notation, there is a $\bm{\sigma} \in L^2(\Omega;\mathbb{R}^\dim)$ such that $\nabla v_k \rightarrow \bm{\sigma}$ in $L^2(\Omega;\mathbb{R}^\dim)$ as $k\rightarrow \infty$.
We also infer that the restriction $v|_K$ is a polynomial of degree at most $p$ for each $K\in\cT^+$, since it is the limit of the sequence of polynomials $ \{v_k|_K\}_{k\in\mathbb{N}}$. Furthermore, the equivalence of norms in finite dimensional spaces and the fact that $\nabla v_k \rightarrow \bm{\sigma}$ imply that $\bm{\sigma}|_K = \nabla v|_K$ for each $K\in\cT^+$. In addition, this implies that $\jump{v_k}_F \rightarrow \jump{v}_F$ for all $F \in \cF^+$, and $\jump{\nabla v_k}_F\rightarrow \jump{\nabla v}_F$ for all $F\in \cF^{I+}$, in any norm as $k\rightarrow \infty$.
\emph{Step~2. Bounds on the jumps.}
We now prove that $\int_{\cF^+} h_+^{-3}\abs{\jump{v}}^2 <\infty $ and $\int_{\cF^{I+}} h_+^{-1} \abs{\jump{\nabla v}}^2<\infty $.
Recall that $\cS_k$ and $\cS^+$ denote the skeletons of the sets of faces $\cF_k$ and $\cF^+$ respectively.
Consider now the function $h_k^{-3}\abs{\jump{v_k}}^2\colon \cS_k \rightarrow \mathbb{R}$, and extend it by zero to $\cS^+\setminus \cS_k$.
Then, since $h_k|_F= h_+|_F$ whenever $k$ is sufficiently large for each $F\in \cF^+$, we deduce that $h_k^{-3}\abs{\jump{v_k}}^2$ converges pointwise $\mathcal{H}^{d-1}$-a.e.\ to $h_+^{-3}\abs{\jump{v}}^2$ on $\cS^+$.
Therefore, Fatou's Lemma implies that
\begin{equation}{\label{eq:weak_convergence_1}}
\int_{\cF^+} h_+^{-3}\abs{\jump{v}}^2 = \int_{\cS^+}h_+^{-3}\abs{\jump{v}}^2 \leq \liminf_{k\rightarrow \infty}\int_{\cS_k^+} h_k^{-3}\abs{\jump{v_k}}^2 \leq \liminf_{k\rightarrow\infty}\norm{v_k}_k^2 < \infty .
\end{equation}
Similarly, $h_k^{-1}\abs{\jump{\nabla v_k}}^2$ converges $\mathcal{H}^{d-1}$-a.e.\ to $h_+^{-1}\abs{\jump{\nabla v}}^2$ on $\mathcal{S}^{I+}$ and Fatou's Lemma shows that
\begin{equation}\label{eq:weak_convergence_2}
\int_{\cF^{I+}}h_+^{-1}\abs{\jump{\nabla v}}^2 = \int_{\mathcal{S}^{I+}}h_+^{-1}\abs{\jump{\nabla v}}^2 \leq \liminf_{k\rightarrow\infty}\int_{\mathcal{S}^{I+}_k}h_k^{-1}\abs{\jump{\nabla v_k}}^2 \leq \liminf_{k\rightarrow\infty}\norm{v_k}_k^2 <\infty.
\end{equation}
\emph{Step~3. Proof that $v\in H^1_D(\Omega;\cT^+)$.}
Next, we claim that $\int_{\cF_k}\jump{v_k}(\bm{\phi}\cdot \bm{n})\rightarrow \int_{\cF^+}\jump{v}(\bm{\phi}\cdot \bm{n})$ as $k\rightarrow \infty$ for any $\bm{\phi} \in C^\infty_0\big(\R^\dim;\R^{\dim}\big)$. Assuming this claim for the moment, we verify that the function~$v$ has a distributional derivative of the form~\eqref{eq:distderiv_H100Tp} where $\nabla v = \bm{\sigma}$ in $\Omega$. Indeed, the convergence $v_k\rightarrow v$ in $L^2(\mathbb{R}^\dim)$ implies that $\pair{Dv,\bm{\phi}}_{\mathbb{R}^\dim}=\lim_{k\rightarrow\infty}\pair{Dv_k,\bm{\phi}}_{\mathbb{R}^\dim}$, and the convergence of the jumps and of $\nabla v_k \rightarrow \bm{\sigma}$ in $L^2(\mathbb{R}^\dim;\mathbb{R}^\dim)$ also imply that
\begin{equation}
\begin{aligned}
\pair{Dv,\bm{\phi}}_{\mathbb{R}^\dim} &=\lim_{k\rightarrow\infty} \left(\int_{\Omega} \nabla v_k \cdot \bm{\phi} - \int_{\cF_k} \jump{v_k}(\bm{\phi}\cdot\bm{n}) \right)
\\ & = \int_{\Omega} \bm{\sigma} \cdot \bm{\phi}-\int_{\cF^+}\jump{v}(\bm{\phi}\cdot \bm{n}) \quad \forall \bm{\phi}\in C^\infty_0\big(\R^\dim;\R^{\dim}\big),
\end{aligned}
\end{equation}
which shows that $\nabla v = \bm{\sigma}$ and that $v\in H^1_D(\Omega;\cT^+)$.
Returning to the claim that $\int_{\cF_k}\jump{v_k}(\bm{\phi}\cdot \bm{n})\rightarrow \int_{\cF^+}\jump{v}(\bm{\phi}\cdot \bm{n})$ as $k\rightarrow \infty$, we choose an $\ell \in \mathbb{N}$ to be specified below, and for any $k\geq \ell$ we split the series according to
\begin{multline}\label{eq:weak_convergence_3}
\int_{\cF^+}\jump{v}(\bm{\phi}\cdot\bm{n}) - \int_{\cF_k} \jump{v_k}(\bm{\phi}\cdot \bm{n})
\\ = \int_{\cF^{\dagger}_{\ell}}\jump{v-v_k}(\bm{\phi}\cdot\bm{n}) + \int_{\cF^+\setminus\cF^{\dagger}_{\ell}}\jump{v}(\bm{\phi}\cdot\bm{n}) - \int_{\cF_k\setminus \cF^{\dagger}_{\ell}}\jump{v_k}(\bm{\phi}\cdot \bm{n}).
\end{multline}
Note that, for any $\epsilon>0$, we may choose $\ell$ sufficiently large such that the second and third terms on the right-hand side of \eqref{eq:weak_convergence_3} are both bounded in absolute value by $\epsilon$ for any $k\geq \ell$. Indeed, for the second term this results from the fact that this represents the tail of a convergent series by~\eqref{eq:weak_convergence_1}, whereas for the third term, this follows from Lemma~\ref{lem:hjvanishes} and the bound $\left\lvert\int_{\cF_k\setminus \cF^{\dagger}_{\ell}}\jump{v_k}(\bm{\phi}\cdot \bm{n}) \right\rvert \lesssim
M \norm{h_\ell \chi_{\Omega_{\ell}^{1-}}}_{L^\infty(\Omega)}\norm{\bm{\phi}}_{C(\overline{\Omega};\mathbb{R}^\dim)} $ with $M\coloneqq \sup_{k\in\mathbb{N}}\norm{v_k}_k<\infty$.
Then, for any fixed $\ell\in \mathbb{N}$, the strong convergence of the jumps $\jump{v-v_k}$ over the finite set of faces $\cF^{\dagger}_{\ell}$ shows that the first term on the right-hand side of~\eqref{eq:weak_convergence_3} also vanishes as $k\rightarrow \infty$. We can then choose $k$ large enough such that the left-hand side of \eqref{eq:weak_convergence_3} is bounded by, e.g., $3\epsilon$, and since $\epsilon$ is arbitrary, we see that $\int_{\cF_k}\jump{v_k}(\bm{\phi}\cdot \bm{n})\rightarrow \int_{\cF^+}\jump{v}(\bm{\phi}\cdot \bm{n})$ as claimed.
\emph{Step~4. Weak convergence of Hessians and proof that $v\in V_\infty^s$.}
The sequences of functions $\{\nabla^2 v_k\}_{k\in\mathbb{N}}$ and $\{\br_k(\jump{\Npw v_k})\}_{k\in\mathbb{N}}$ are uniformly bounded in $L^2(\Om;\R^{\dim\times\dim})$ owing to the uniform boundedness of $\{\norm{v_k}_k\}_{k\in\mathbb{N}}$. Therefore, there exist $\bm{M}$ and $\bm{r}$ in $L^2(\Om;\R^{\dim\times\dim})$ such that $\nabla^2 v_k \rightharpoonup \bm{M}$ and $\br_k(\jump{\Npw v_k})\rightharpoonup \bm{r}$ in $L^2(\Om;\R^{\dim\times\dim})$ as $k\rightarrow\infty$.
Furthermore, it is easy to see that $\bm{r}|_{\Omega^+} = \bm{r}_\infty(\jump{\nabla v})|_{\Omega^+}$ since the restrictions $\br_k(\jump{\Npw v_k})|_K\rightarrow \bm{r}_\infty(\jump{\nabla v})|_K$ (in any norm) for all $K\in\cT^+$, owing to the strong convergence $\jump{\nabla v_k}_F\rightarrow \jump{\nabla v}_F $ (in any norm) for all $F\in\cF^{I+}$ shown in \emph{Step~1} above.
We now claim that the distributional derivative $D(\nabla v)$ is of the form given in~\eqref{eq:distderiv_h2} and in particular that
\begin{equation}\label{eq:weak_convergence_4}
\begin{aligned}
\pair {D(\nabla v),\bm{\varphi}}_\Omega &=\lim_{k\to\infty}\left(\int_\Omega \nabla^2{v_k}:\bm{\varphi}-\int_{\Fk^I}\jump{\Npw{v_k}}{\cdot}(\bm{\varphi}\mathbf{n})\right)
\\ &=\int_\Omega(\bm{M}-\bm{r}\chi_{\Omega^-}):\bm{\varphi}-\int_{\cF^{I+}}\jump{\nabla{v}}{\cdot}(\bm{\varphi}\mathbf{n}),
\end{aligned}
\end{equation}
for all $\bm{\varphi}\in C^\infty_0\big(\Om;\R^{\dim\times\dim}\big)$, where $\chi_{\Omega^-}$ denotes the indicator function of $\Omega^-$.
Supposing momentarily that~\eqref{eq:weak_convergence_4} is given, by definition we get $\nabla^2 v= \bm{M}-\bm{r}\chi_{\Omega^-} \in L^2(\Om;\R^{\dim\times\dim})$. Since $\bm{r}_\infty(\jump{\nabla v})$ vanishes on $\Omega^-$ and equals $\bm{r}$ on $\Omega^+$, we see that $\bm{H}_\infty v = \nabla^2 v - \bm{r}_\infty(\jump{\nabla v}) = \bm{M}-\bm{r} $ is then the weak limit of the sequence $\bm{H}_k v_k = \nabla^2 v_k - \br_k(\jump{\Npw v_k})$ in $L^2(\Om;\R^{\dim\times\dim})$.
Furthermore, the bounds~\eqref{eq:weak_convergence_1} and \eqref{eq:weak_convergence_2} above, and the fact that $v\in L^2(\Omega)$, $\nabla v \in L^2(\Omega;\mathbb{R}^\dim)$ and $\nabla^2 v \in L^2(\Om;\R^{\dim\times\dim})$ together imply that $\norm{v}_{H^2_D(\Om;\Tp)}<\infty$, thus showing that $v\in H^2_D(\Om;\Tp)$. Since~$v$ is piecewise polynomial over $\cT^+$, it follows that $v\in V_\infty^0$. For the special case $s=1$, we additionally have $v\in H^1_0(\Omega)$ owing to the fact that the functions $v_k$ are then uniformly bounded in $H^1_0(\Omega)$, which additionally implies that $v\in V_\infty^1$.
It remains only to show~\eqref{eq:weak_convergence_4}.
Consider a fixed but arbitrary $\bm{\varphi}\in C^\infty_0\big(\Om;\R^{\dim\times\dim}\big)$, and let $\bm{\varphi}_k $ be its piecewise mean-value projection on $\cT_k$, i.e.\ $\bm{\varphi}_k|_K\coloneqq \overline{\bm{\varphi}|_K}$ for each $K\in\cT_k$, where the mean-value is taken component-wise.
The first equality in~\eqref{eq:weak_convergence_4} follows directly from $\pair{D(\nabla v_k),\bm{\varphi}}_\Omega\rightarrow \pair{D(\nabla v),\bm{\varphi}}_\Omega$ owing to the convergence $\nabla v_k \rightarrow \nabla v$ in $L^2(\Om;\R^\dim)$.
The limit of the jump terms in~\eqref{eq:weak_convergence_4} is determined as follows. The triangle inequality gives
\begin{multline}\label{eq:weak_convergence_7}
\left\lvert\int_{\cF^{I+}}\jump{\nabla v}\cdot(\bm{\varphi}\bm{n})+\int_{\Omega} \chi_{\Omega^-}\bm{r}:\bm{\varphi} - \int_{\Fk^I}\jump{\nabla v_k}\cdot(\bm{\varphi}\bm{n}) \right\rvert
\\ \leq \left\lvert \int_{\cF^{I+}\setminus\cF^{I\dagger}_{\ell}} \jump{\nabla v}\cdot(\bm{\varphi} \bm{n})\right\rvert + \left\lvert \int_{\cF^{I\dagger}_{\ell}}\jump{\nabla (v-v_k)}\cdot(\bm{\varphi}\bm{n}) \right\rvert
\\ +\left\lvert \int_{\Fk^I\setminus \cF^{I\dagger}_{\ell}}\jump{\nabla v_k}\cdot(\bm{\varphi}\bm{n}) - \int_\Omega \chi_{\Omega^-}\bm{r} :\bm{\varphi} \right\rvert.
\end{multline}
We show that the terms on the right-hand side of \eqref{eq:weak_convergence_7} become vanishingly small for $k$ and~$\ell$ sufficiently large, and hence the left-hand side vanishes in the limit as $k\rightarrow\infty$.
Let $\epsilon>0$ be arbitrary; it follows from~\eqref{eq:weak_convergence_2} that we may first choose $\ell$ large enough such that the first term satisfies $\abs{\int_{\cF^{I+}\setminus\cF^{I\dagger}_{\ell}} \jump{\nabla v}\cdot(\bm{\varphi} \bm{n})}<\epsilon$.
Turning to the last term on the right-hand side of \eqref{eq:weak_convergence_7}, for each $k\geq \ell$, consider the splitting of the lifting operator $\bm{r}_k$ into contributions from faces in $\cF^{\dagger}_{\ell}$ and $\cF_k\setminus\cF^{\dagger}_{\ell}$, i.e.\
\begin{equation}\label{eq:weak_convergence_5}
\begin{aligned}
\bm{r}_k = \bm{r}_{k,\ell}^+ + \bm{r}_{k,\ell}^{-},
&&& \bm{r}_{k,\ell}^+\coloneqq \sum_{F\in\cF^{\dagger}_{\ell}} \bm{r}_k^F, &&& \bm{r}_{k,\ell}^{-}\coloneqq \sum_{F\in\cF_k\setminus\cF^{\dagger}_{\ell}} \bm{r}_k^{F}.
\end{aligned}
\end{equation}
By definition, any face $F\in\cF^{\dagger}_{\ell}$ is a face only of elements that belong to $\mathcal{T}_{\ell}^+$, and thus $\supp \bm{r}_k^{F}(\jump{\nabla v_k}) \subset \Omega_{\ell}^{+}$ for all $k\geq \ell$ and all $F\in\cF^{\dagger}_{\ell}$; hence $\bm{r}_{k,\ell}^+(\jump{\nabla v_k})$ vanishes a.e.\ on $\Omega_{\ell}^{-}$.
Furthermore, since any element of $\cT_k$ that contains a face belonging to $\cF_k\setminus \cF^{\dagger}_{\ell}$ must be a subset of $\Omega_{\ell}^{1-}$, we see that $\supp \bm{r}_{k,\ell}^{-}(\jump{\nabla v_k}) \subset \Omega_{\ell}^{1-}$ for all $k\geq \ell$. Additionally, we have the uniform bounds $\norm{\br_k(\jump{\Npw v_k})}_{\Omega}+\norm{\bm{r}_{k,\ell}^{+}(\jump{\nabla v_k})}_{\Omega}\lesssim\normk{v_k}\leq M$ for all $k,\ell\in\mathbb{N}$, where $M\coloneqq\sup_{k\in\mathbb{N}}\normk{v_k}$.
The definition of the lifting operators and the supports of the terms in the splitting of~\eqref{eq:weak_convergence_5} imply that
\begin{multline}\label{eq:weak_convergence_6}
\int_{\Fk^I\setminus\cF^{I\dagger}_{\ell}}\jump{\nabla v_k}\cdot \left\{ \bm{\varphi}_k\bm{n} \right\}+ \int_{\Fk^B\setminus\cF^{\dagger}_{\ell}}\jump{\nabla_T v_k}\cdot\left\{ \bm{\varphi}_k\bm{n} \right\} =
\int_{\Omega_{\ell}^{1-}} \bm{r}_{k,\ell}^{-}(\jump{\nabla v_k}):\bm{\varphi}_k
\\ = \int_{\Omega_{\ell}^{1-}} \br_k(\jump{\Npw v_k}):\bm{\varphi}_k - \int_{\Omega_{\ell}^{1-}\setminus\Omega_{\ell}^{-}} \bm{r}_{k,\ell}^+(\jump{\nabla v_k}):\bm{\varphi}_k.
\end{multline}
Lemma~\ref{lem:hjvanishes} also shows that $\abs{\int_{\Omega^{1-}_\ell\setminus\Omega_{\ell}^{-}}\bm{r}_{k,\ell}^+(\jump{\nabla v_k}){:}\bm{\varphi}_k }\lesssim \abs{\Omega^{1-}_{\ell}\setminus\Omega^-}^{\frac{1}{2}} M \norm{\bm{\varphi}}_{C(\overline{\Omega};\mathbb{R}^{\dim\times\dim})}<\epsilon$ for all $k\geq \ell$ whenever $\ell$ is chosen to be sufficiently large.
We can also choose $\ell$ large enough such that $\abs{\int_{\Fk^I\setminus\cF^{I\dagger}_{\ell}}\jump{\nabla v_k}{\cdot}\{(\bm{\varphi}-\bm{\varphi}_k)\bm{n}\}}\lesssim M \norm{h_\ell\nabla \bm{\varphi}}_{\Omega_{\ell}^{1-}}<\epsilon$, since $\norm{h_\ell \chi_{\Omega_{\ell}^{1-}}}_{L^\infty(\Omega)}\rightarrow 0$ as $\ell\rightarrow \infty$ by Lemma~\ref{lem:hjvanishes}.
Also, since $\bm{\varphi}$ is compactly supported in $\Omega$, we get $\abs{\int_{\Fk^B\setminus\cF^{\dagger}_{\ell}}\jump{\nabla_T v_k}\cdot\left\{ \bm{\varphi}_k\bm{n} \right\}}\lesssim M\norm{h_\ell \nabla \bm{\varphi}}_{\Omega_{\ell}^{1-}}<\epsilon$ for all $k\geq \ell$ sufficiently large.
Furthermore, by weak convergence of $\br_k(\jump{\Npw v_k})\rightharpoonup \bm{r}$ in $L^2(\Om;\R^{\dim\times\dim})$ and by strong convergence of $\bm{\varphi}_k\chi_{\Omega^{1-}_\ell}\rightarrow \bm{\varphi} \chi_{\Omega^-}$ in $L^2(\Om;\R^{\dim\times\dim})$ as $k\geq \ell\rightarrow \infty$, we can also choose $\ell$ large enough such that $\abs{ \int_{\Omega^{1-}_\ell} \br_k(\jump{\Npw v_k}) {:} \bm{\varphi}_k - \int_{\Omega} \chi_{\Omega^-}\bm{r}{:}\bm{\varphi} } < \epsilon$ for all $k\geq \ell$.
Thus, by addition and subtraction of the terms in \eqref{eq:weak_convergence_6}, we infer from the above inequalities that $\lvert \int_{\Fk^I\setminus \cF^{I\dagger}_{\ell}}\jump{\nabla v_k}{\cdot}(\bm{\varphi}\bm{n}) - \int_\Omega \chi_{\Omega^-}\bm{r}{:}\bm{\varphi} \rvert < 4 \epsilon$ for all $k\geq \ell$, which bounds the last term on the right-hand side of~\eqref{eq:weak_convergence_7}.
Finally, strong convergence of $\nabla v_k|_K\rightarrow \nabla v|_K$ (in any norm) for each element $K\in\cT^+$, and the finiteness of the set of faces $\cF^{I\dagger}_{\ell}$, imply that the second term in the right-hand side of \eqref{eq:weak_convergence_7} also vanishes in the limit $k\rightarrow \infty$, for any $\ell\in\mathbb{N}$.
Therefore, we conclude that the left-hand side of \eqref{eq:weak_convergence_7} vanishes in the limit $k\rightarrow \infty$, which then gives \eqref{eq:weak_convergence_4} upon recalling that $\nabla^2 v_k \rightharpoonup \bm{M}$ in $L^2(\Om;\R^{\dim\times\dim})$. This completes the proof.
\end{proof}
\begin{remark}
It is easy to construct examples of sequences of functions $\{v_k\}_{k\in\mathbb{N}}$ satisfying the conditions of Theorem~\ref{thm:weak_convergence} such that $\bm{r}$ is nonvanishing on~$\Omega^-$. This explains the appearance of the indicator functions $\chi_{\Omega^+}$ in the equation $\bm{r}\chi_{\Omega^+} = \bm{r}_{\infty}(\jump{\nabla v})$ in the statement of Theorem~\ref{thm:weak_convergence} and also the appearance of the indicator function $\chi_{\Omega^-}$ in~\eqref{eq:weak_convergence_4}.
\end{remark}
\section{The limit problem and proof of convergence}\label{sec:limit_problem_proof}
\subsection{The limit problem}\label{sec:asymptotic_consistency}
The convergence of the sequence of numerical solutions $\{u_k\}_{k\in \mathbb{N}}$ from~\eqref{eq:num_scheme} is shown by introducing a suitable notion of a limit problem on the space $V_\infty^s$.
We start by extending the definition of the operator $F_{\gamma}$ in~\eqref{eq:Fg_def} to functions in $H^2_D(\Om;\Tp)$ by $F_{\gamma}[v] \coloneqq \inf_{\alpha\in\mathscr{A}} \sup_{\beta\in\mathscr{B}} \left[\gamma^{{\alpha\beta}}(a^{{\alpha\beta}}:\nabla^2 v - f^{{\alpha\beta}}) \right]$, i.e.\ we use the notion of Hessian $\nabla^2 v$ defined by~\eqref{eq:Hessian_notation} inside the nonlinear operator $F_{\gamma}$.
The operator $F_{\gamma}$ is then a Lipschitz continuous mapping from $H^2_D(\Om;\Tp)$ to $L^2(\Omega)$, and the inequalities~\eqref{eq:cordes_ineq1} and \eqref{eq:cordes_ineq2} extend to functions in the sum space $H^2_D(\Om;\Tp)+V_k^s$ for each $k\in\mathbb{N}$.
Let the nonlinear form $A_\infty(\cdot;\cdot)\colon V_\infty^s\timesV_\infty^s\rightarrow \mathbb{R}$ be defined by
\begin{equation}\label{eq:Ainfty_def}
\begin{aligned}
A_\infty(v;w)\coloneqq \int_\Omega F_{\gamma}[v]\Delta_\infty w + \theta S_\infty(v,w) + J_\infty^{\sigma,\rho}(v,w) &&&\forall v,\,w \in V_\infty^s,
\end{aligned}
\end{equation}
where the bilinear forms $S_\infty\colon V_\infty^s\times V_\infty^s\rightarrow \mathbb{R}$ and $J_\infty^{\sigma,\rho}\colon V_\infty^s\times V_\infty^s\rightarrow \mathbb{R}$ are defined by
\begin{align}
S_\infty(v,w)&\coloneqq \int_\Omega \left[\bm{H}_\infty v {:} \bm{H}_\infty w - \Delta_\infty v \, \Delta_\infty w \right] \notag
\\ & \quad + \int_\Omega \left[\Tr\bm{r}_\infty(\jump{\nabla v}) \Tr\bm{r}_\infty(\jump{\nabla w}) - \bm{r}_\infty(\jump{\nabla v}){:}\bm{r}_\infty(\jump{\nabla w}) \right],\\
J_\infty^{\sigma,\rho}(v,w) &\coloneqq \int_{\cF^{I+}} \sigma h_+^{-1} \jump{\nabla v}\cdot\jump{\nabla w} + \int_{\cF^+} \rho h_+^{-3}\jump{v} \jump{w},\label{eq:limit_jumps}
\end{align}
for all functions $v$ and $w\in V_\infty^s$, where it is recalled that the lifting operators $\bm{r}_\infty$, the lifted Hessian $\bm{H}_\infty$ and the lifted Laplacian $\Delta_\infty$ are defined in~\eqref{eq:H_infty_def}.
The definition of $S_\infty(\cdot,\cdot)$ is motivated by the identity~\eqref{eq:lifted_Bk_form} in Lemma~\ref{lem:lifted_Bk_form}, and this will be used later in the analysis.
We emphasize that the parameter $\theta$ in \eqref{eq:Ainfty_def} and the penalty parameters~$\sigma$ and~$\rho$ appearing in~\eqref{eq:limit_jumps} are the same as the ones used in the numerical scheme in Section~\ref{sec:num_schemes}.
Using the bounds on the lifting operators in~\eqref{eq:infty_lifting_boundedness} and~\eqref{eq:infty_lifting_boundedness_2} and the extension of~\eqref{eq:cordes_ineq2} to functions in $H^2_D(\Om;\Tp)$, see above, it is then straightforward to show that the nonlinear form $A_\infty(\cdot;\cdot)$ is Lipschitz continuous on $V_\infty^s$, i.e.\
\begin{equation}\label{eq:Ainfty_lipschitz}
\begin{aligned}
\abs{A_\infty(z;w)-A_\infty(v;w) }\lesssim \norm{z-v}_{H^2_D(\Om;\Tp)}\norm{w}_{H^2_D(\Om;\Tp)} &&& \forall z,\, v,\,w \in V_\infty^s.
\end{aligned}
\end{equation}
The following Lemma further motivates the above definitions by showing that the nonlinear forms $A_k$ are asymptotically consistent with the limit nonlinear form $A_\infty$ with respect to strongly convergent sequences in the first argument and weakly convergent (sub)sequences in the second argument. Recall that $\chi_{\Omega^+}$ denotes the indicator function of the set $\Omega^+$.
\begin{lemma}[Asymptotic consistency]\label{lem:asymptotic_consistency}
Let $\{w_{k_j}\}_{j\in\mathbb{N}}$ and $\{v_{k_j}\}_{j\in\mathbb{N}}$ be two sequences of functions, such that $w_{k_j},v_{k_j}\in V_{k_j}^s$ for each $j\in\mathbb{N}$, and such that $\sup_{j\in\mathbb{N}}\left[\norm{w_{k_j}}_{k_j}+\norm{v_{k_j}}_{k_j}\right]<\infty$.
Suppose that there exists a $v\in V_\infty^s$ such that $\normk{v-v_{k_j}}\rightarrow 0$ as $j\rightarrow \infty$.
Suppose also that there exists a $w\in V_\infty^s$ and a $\bm{r}\in L^2(\Om;\R^{\dim\times\dim})$ such that $\bm{r} \chi_{\Omega^+} = \bm{r}_\infty(\jump{\nabla w})$ a.e.\ in $\Omega$, and such that $v_{k_j}\rightarrow v$ in $L^2(\Omega)$, $\nabla v_{k_j}\rightarrow \nabla v$ in $L^2(\Om;\R^\dim)$, $\bm{H}_{k_j}w_{k_j} \rightharpoonup \bm{H}_\infty w$ and $\bm{r}_{k_j}(\jump{\nabla w_{k_j}})\rightharpoonup \bm{r} $ in $L^2(\Om;\R^{\dim\times\dim})$ as $j\rightarrow \infty$. Then
\begin{equation}\label{eq:asymptotic_consistency}
\lim_{j\rightarrow\infty} A_{k_j}(v_{k_j};w_{k_j}) = A_\infty(v;w).
\end{equation}
\end{lemma}
\begin{proof}
First, note that since the lifted Laplacian is defined as the trace of the lifted Hessian, it follows immediately that $\Delta_{k_j} w_{k_j}\rightharpoonup \Delta_\infty w$ in $L^2(\Omega)$ as $j\rightarrow \infty$.
Therefore, considering the nonlinear term in $A_k(\cdot;\cdot)$, we use the strong convergence $F_{\gamma}[v_{k_j}]\rightarrow F_{\gamma}[v]$ in $L^2(\Omega)$ and the weak convergence of the lifted Laplacians to get $\int_\Omega F_{\gamma}[v_{k_j}]\Delta_{k_j}w_{k_j} \rightarrow \int_{\Omega}F_{\gamma}[v]\Delta_\infty w$ as $j\rightarrow \infty$.
We next show the convergence of the remaining terms in the nonlinear forms $A_k(\cdot;\cdot)$, starting with the term $S_{k_j}(v_{k_j},w_{k_j})$.
Lemma~\ref{lem:lifted_Hess_convergence} shows that $\bm{H}_{k_j}v_{k_j}\rightarrow \bm{H}_\infty v$ and that $\bm{r}_{k_j}(\jump{\nabla v_{k_j}})\rightarrow \bm{r}_\infty(\jump{\nabla v})$ in $L^2(\Om;\R^{\dim\times\dim})$ as $j\rightarrow \infty$.
Therefore we infer that
\[
\begin{aligned}
\int_\Omega \bm{H}_\infty v:\bm{H}_\infty w & =\lim_{j\rightarrow\infty}\int_\Omega \bm{H}_{k_j} v_{k_j}: \bm{H}_{k_j} w_{k_j},
\\ \int_\Omega \Delta_\infty w \Delta_\infty v &= \lim_{j\rightarrow\infty}\int_\Omega \Delta_{k_j} v_{k_j}\Delta_{k_j}w_{k_j}.
\end{aligned}
\]
Next, recall that $\bm{r}_\infty(\jump{\nabla v})$ vanishes on $\Omega^-$ for any $v\in V_\infty^s$, and since the weak limit $\bm{r}$ of the sequence $\bm{r}_{k_j}(\jump{\nabla w_{k_j}})$ satisfies $\bm{r}|_{\Omega^+}=\bm{r}_{\infty}(\jump{\nabla w})$ by hypothesis, we obtain the identities $\int_\Omega \bm{r}_\infty(\jump{\nabla v}):\bm{r}=\int_{\Omega} \bm{r}_\infty(\jump{\nabla v}):\bm{r}_\infty(\jump{\nabla w})$ and $\int_\Omega\Tr \bm{r}_\infty(\jump{\nabla v}) \Tr \bm{r}= \int_\Omega\Tr \bm{r}_\infty(\jump{\nabla v}) \Tr \bm{r}_\infty(\jump{\nabla w})$.
Therefore, we conclude that
\begin{align*}
\int_\Omega \bm{r}_\infty(\jump{\nabla v}):\bm{r}_\infty(\jump{\nabla w}) &= \lim_{j\rightarrow\infty}\int_\Omega \bm{r}_{k_j}(\jump{\nabla v_{k_j}}):\bm{r}_{k_j}(\jump{\nabla w_{k_j}}),
\\ \int_\Omega\Tr \bm{r}_\infty(\jump{\nabla v}) \Tr \bm{r}_\infty(\jump{\nabla w})&=\lim_{j\rightarrow\infty}\int_\Omega \Tr\bm{r}_{k_j}(\jump{\nabla v_{k_j}}) \Tr\bm{r}_{k_j}(\jump{\nabla w_{k_j}}).
\end{align*}
Hence, using Lemma~\ref{lem:lifted_Bk_form} and the above limits, we obtain
\begin{equation}\label{eq:asymptotic_consistency_2}
\lim_{j\rightarrow\infty} S_{k_j}(v_{k_j},w_{k_j}) = S_\infty(v,w).
\end{equation}
It remains only to show the convergence of the jumps $J_{k_j}^{\sigma,\rho}(v_{k_j},w_{k_j})\rightarrow J_\infty^{\sigma,\rho}(v,w)$ as $j\rightarrow \infty$. It follows from the strong convergence of the sequence $v_{k_j}$ to $v$ that it is enough to consider the limits of $\int_{\mathcal{F}_{k_j}^I} h_{k_j}^{-1} \jump{\nabla v}\cdot\jump{\nabla w_{k_j}}$ and $\int_{\mathcal{F}_{k_j}}h_{k_j}^{-3}\jump{v}\jump{w_{k_j}}$.
Let $\epsilon>0$ be arbitrary; then Corollary~\ref{cor:jump_term_limits} and the finiteness of $\norm{v}_{H^2_D(\Om;\Tp)}$ imply that there is an $\ell\in\mathbb{N}$ such that $\int_{\cF^{I+}\setminus \cF^{I\dagger}_{\ell}} h_+^{-1}\abs{\jump{\nabla v}}^2 < \epsilon$ and $\int_{\Fk^I\setminus\cF^{I\dagger}_k}h_+^{-1} \abs{\jump{\nabla v}}^2<\epsilon$ for all $k\geq \ell$, so that
\begin{equation*}
\left\lvert \int_{\mathcal{F}_{k_j}^I} h_{k_j}^{-1} \jump{\nabla v}\cdot\jump{\nabla w_{k_j}} - \int_{\cF^{I\dagger}_{\ell}} h_{+}^{-1}\jump{\nabla v}\cdot \jump{\nabla w_{k_j}} \right\rvert
= \left\lvert\int_{\mathcal{F}_{k_j}^{I}\setminus\cF^{I\dagger}_{\ell}} h_{k_j}^{-1} \jump{\nabla v}\cdot \jump{\nabla w_{k_j}}\right\rvert \leq 2 M \epsilon ,
\end{equation*}
where $M\coloneqq\sup_{k\in\mathbb{N}}\normk{w_k}$, with the inequality obtained by using the Cauchy--Schwarz inequality and the disjoint partitioning $\Fk^I\setminus \cF^{I\dagger}_{\ell} = (\Fk^I\setminus\cF^{I\dagger}_k)\cup(\cF^{I\dagger}_k\setminus \cF^{I\dagger}_{\ell})$ for all $k\geq \ell$.
Note that $\int_{\cF^{I\dagger}_{\ell}} h_{+}^{-1}\jump{\nabla v}\cdot \jump{\nabla w_{k_j}}$ converges to $\int_{\cF^{I\dagger}_{\ell}} h_+^{-1} \jump{\nabla v}\cdot \jump{\nabla w} $ as $j\rightarrow\infty$ for each $\ell\in\mathbb{N}$ since the convergence of the piecewise polynomials $\nabla w_{k_j} \rightarrow \nabla w$ in $L^2(\Om;\R^\dim)$ implies that $\jump{\nabla w_{k_j}}\rightarrow \jump{\nabla w}$ for each never-refined face $F\in\cF^{I\dagger}_{\ell}$.
Passing first to the limit $j\rightarrow \infty$ followed by $\ell\rightarrow \infty$, we therefore obtain $ \int_{\mathcal{F}_{k_j}^I} h_{k_j}^{-1} \jump{\nabla v}\cdot\jump{\nabla w_{k_j}} \rightarrow \int_{\cF^{I+}} h_+^{-1}\jump{\nabla v}\cdot\jump{\nabla w}$ as $j\rightarrow \infty$.
A similar argument shows that $\int_{\mathcal{F}_{k_j}}h_{k_j}^{-3}\jump{v}\jump{w_{k_j}} \rightarrow \int_{\cF^+} h_+^{-3}\jump{v}\jump{w}$ and thus $J_{k_j}^{\sigma,\rho}(v_{k_j},w_{k_j})\rightarrow J_\infty^{\sigma,\rho}(v,w)$ as $j\rightarrow\infty$, thereby completing the proof.
\end{proof}
We are now able to prove that the nonlinear form $A_\infty(\cdot;\cdot)$ is strongly monotone with the same choices of penalty parameters $\rho$ and $\sigma$ used for the numerical scheme.
\begin{lemma}\label{lem:limit_strong_monotonicity}
The nonlinear form $A_\infty(\cdot;\cdot)$ is strongly monotone on $V_\infty^s$; in particular, it satisfies
\begin{equation}\label{eq:limit_strong_monotonicity}
\begin{aligned}
\frac{1}{C_{\mathrm{mon}}} \norm{w-v}_{H^2_D(\Om;\Tp)}^2 \leq A_\infty(w;w-v)-A_\infty(v;w-v) &&&\forall v,\,w \in V_\infty^s,
\end{aligned}
\end{equation}
where the constant $C_{\mathrm{mon}}>0$ is the same as in~\eqref{eq:strong_monotonicity}.
\end{lemma}
\begin{proof}
Theorem~\ref{thm:limit_space_characterization} shows that for any $v$ and $w \in V_\infty^s$, there exist sequences of functions $\{v_k\}_{k\in\mathbb{N}}$ and $\{w_k\}_{k\in\mathbb{N}}$ such that $v_k$, $w_k\in V_k^s$ for all $k\in\mathbb{N}$, and such that $\normk{v-v_k}+\normk{w-w_k} \rightarrow 0$ as $k\rightarrow \infty$.
Then, Lemma~\ref{lem:lifted_Hess_convergence} on the convergence of the lifting operators implies that the sequences of functions $\{v_k\}_{k\in\mathbb{N}}$, $\{w_k\}_{k\in\mathbb{N}}$ and $\{w_k-v_k\}_{k\in\mathbb{N}}$ satisfy the hypotheses of Lemma~\ref{lem:asymptotic_consistency}.
Therefore, we infer that
\begin{equation}\label{eq:limit_strong_monotonicity_1}
\begin{split}
\frac{1}{C_{\mathrm{mon}}}\norm{w-v}_{H^2_D(\Om;\Tp)}^2&=\lim_{k\rightarrow\infty} \frac{1}{C_{\mathrm{mon}}}\normk{w_k-v_k}^2
\\ &\leq \lim_{k\rightarrow\infty}\left(A_k(w_k;w_k-v_k)-A_k(v_k;w_k-v_k) \right)
\\ &= A_\infty(w;w-v)-A_\infty(v;w-v),
\end{split}
\end{equation}
where we have used Corollary~\ref{cor:jump_term_limits} for the first equality, followed by the strong monotonicity bound~\eqref{eq:strong_monotonicity}, and then an application of the asymptotic consistency shown by Lemma~\ref{lem:asymptotic_consistency}.
\end{proof}
\paragraph{Limit problem.}
We recall that the nonlinear form $A_\infty \colon V_\infty^s\timesV_\infty^s\rightarrow \mathbb{R}$ defined in~\eqref{eq:Ainfty_def} is Lipschitz continuous, see~\eqref{eq:Ainfty_lipschitz}, and is furthermore strongly monotone as shown by Lemma~\ref{lem:limit_strong_monotonicity}. Recall also that $V_\infty^s$ is a Hilbert space since it is a closed subspace of the Hilbert space $H^2_D(\Om;\Tp)$, see~Theorem~\ref{thm:completeness_H2}.
The Browder--Minty theorem can then be applied to deduce that there exists a unique solution $u_\infty \in V_\infty^s$ of the limit problem
\begin{equation}\label{eq:u_infty_def}
\begin{aligned}
A_\infty(u_\infty;v) = 0 &&&\forall v \in V_\infty^s,
\end{aligned}
\end{equation}
where it is noted that $u_\infty$ depends on $s$, but this is not reflected in the notation as there is no risk of confusion.
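Although the Browder--Minty theorem is non-constructive, strong monotonicity together with Lipschitz continuity also guarantees the convergence of a damped (Zarantonello-type) fixed-point iteration, with damping parameter $m/L^2$ and contraction factor $\sqrt{1-m^2/L^2}$, where $m$ and $L$ denote the monotonicity and Lipschitz constants. The following Python sketch illustrates this on a finite-dimensional strongly monotone operator; the particular operator, data and tolerance are illustrative assumptions, not objects from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
M = Q @ Q.T / n + np.eye(n)          # symmetric positive definite

def A(u):                            # strongly monotone and Lipschitz:
    return M @ u + np.arctan(u)      # arctan is monotone with slope <= 1

eigs = np.linalg.eigvalsh(M)
m, Lip = eigs.min(), eigs.max() + 1.0
rho = m / Lip ** 2                   # optimal damping parameter
f = rng.standard_normal(n)

u = np.zeros(n)
for it in range(10000):              # u <- u - rho (A(u) - f)
    r = A(u) - f
    if np.linalg.norm(r) < 1e-10:
        break
    u -= rho * r
print(f"iterations: {it}, residual: {np.linalg.norm(A(u) - f):.2e}")
\end{verbatim}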
\subsection{Convergence of the numerical solutions and proof of Theorem~\ref{thm:main}}\label{sec:main_proof}
Our present goal is to show that the numerical approximations $u_k$ converge to $u_\infty$ and that $u_\infty=u$, the solution of~\eqref{eq:isaacs}, thereby proving~Theorem~\ref{thm:main}.
The following Lemma provides the first step by proving the convergence of the discrete solutions of the numerical scheme~\eqref{eq:num_scheme} to the solution of the limit problem~\eqref{eq:u_infty_def}, in the spirit of the analysis of Galerkin's method for strongly monotone operators.
\begin{lemma}[Convergence to $u_\infty$]\label{lem:uinfty_convergence}
The sequence of numerical solutions $u_k\in V_k^s$ defined by~\eqref{eq:num_scheme} satisfies
\begin{equation}\label{eq:uinfty_converges}
\lim_{k\rightarrow \infty}\normk{u_\infty- u_k} =0.
\end{equation}
\end{lemma}
\begin{proof}
Theorem~\ref{thm:limit_space_characterization} shows that there exists a sequence $\{v_k\}_{k\in\mathbb{N}}$ such that $v_k\in V_k^s$ for each $k\in\mathbb{N}$ and such that $\normk{u_\infty-v_k}\rightarrow 0$ as $k\rightarrow \infty$. Lemma~\ref{lem:lifted_Hess_convergence} shows that $\bm{H}_k v_k\rightarrow \bm{H}_\infty u_\infty $ and $\bm{r}_{k}(\jump{\nabla v_k})\rightarrow \bm{r}_\infty(\jump{\nabla u_\infty})$ in $L^2(\Om;\R^{\dim\times\dim})$ as $k\rightarrow \infty$.
Recall also that the sequence of numerical solutions is uniformly bounded, see~\eqref{eq:num_sol_bounded}, and thus Theorem~\ref{thm:weak_convergence} shows that there exists a $u_* \in V_\infty^s$ and a $\bm{r} \in L^2(\Om;\R^{\dim\times\dim})$ such that $\bm{r} \chi_{\Omega^+}=\bm{r}_\infty(\jump{\nabla u_*})$ a.e.\ in $\Omega$, and a subsequence $\{u_{k_j}\}_{j\in\mathbb{N}}$ such that $u_{k_j}\rightarrow u_* $ in $L^2(\Omega)$, $\nabla u_{k_j}\rightarrow \nabla u_*$ in $L^2(\Om;\R^\dim)$ and $\bm{H}_{k_j} u_{k_j}\rightharpoonup \bm{H}_\infty u_*$ and $\bm{r}_{k_j}(\jump{\nabla u_{k_j}})\rightharpoonup \bm{r}$ in $L^2(\Om;\R^{\dim\times\dim})$ as $j\rightarrow \infty$. The sequences $\{v_{k_j}\}_{j\in\mathbb{N}}$ and $\{v_{k_j}-u_{k_j}\}_{j\in\mathbb{N}}$ therefore satisfy the hypotheses of Lemma~\ref{lem:asymptotic_consistency}.
Therefore, using the strong monotonicity of the nonlinear forms and asymptotic consistency, we get
\begin{equation*}
\begin{split}
\lim_{j\rightarrow\infty}\frac{1}{C_{\mathrm{mon}}}\norm{v_{k_j}-u_{k_j}}_{k_j}^2 & \leq \lim_{j\rightarrow\infty} \left(A_{k_j}(v_{k_j};v_{k_j}-u_{k_j})-A_{k_j}(u_{k_j};v_{k_j}-u_{k_j}) \right)
\\ &= \lim_{j\rightarrow \infty} A_{k_j}(v_{k_j};v_{k_j}-u_{k_j}) = A_\infty(u_\infty;u_\infty-u_*) =0,
\end{split}
\end{equation*}
where we have used the definition of the numerical scheme~\eqref{eq:num_scheme}, then passed to the limit using~\eqref{eq:asymptotic_consistency}, and finally used the definition of the limit problem~\eqref{eq:u_infty_def}. Therefore, the triangle inequality and the convergence of the $v_k$ to $u_\infty$ imply that $\norm{u_\infty-u_{k_j}}_{k_j}\rightarrow 0 $ as $j\rightarrow \infty$. Since $u_\infty \in V_\infty^s$ is uniquely defined, the uniqueness of limits and a standard contradiction argument then imply that the whole sequence $u_k$ converges to $u_\infty$ and that \eqref{eq:uinfty_converges} holds.
\end{proof}
The next Lemma proves that the maximum element-wise error estimator of the numerical approximations converges to zero in the limit as a consequence of the marking condition~\eqref{eq:max_marking}.
Recall that the elementwise estimators $\{\eta_k(u_k,K)\}_{K\in\cT_k}$ are defined by~\eqref{eq:estimator_def}.
\begin{lemma}\label{lem:max_estimator_vanishes}
For any marking scheme that satisfies~\eqref{eq:max_marking}, we have
\begin{equation}\label{eq:max_estimator_vanishes}
\lim_{k\rightarrow\infty} \max_{K\in\cT_k}\eta_k(u_k,K) =0.
\end{equation}
\end{lemma}
\begin{proof}
The marking condition~\eqref{eq:max_marking}, the fact that any marked element is refined, and the Lipschitz continuity of $F_{\gamma}$ imply that
\begin{multline}\label{eq:max_estimator_vanishes_1}
\max_{K\in\cT_k}\eta_k(u_k,K)^2 = \max_{K\in\cT_k^-}[\eta_k(u_k,K)]^2 \lesssim \normk{u_\infty-u_k}^2 \\ + \max_{K\in\cT_k^-}\left[\int_K \abs{F_{\gamma}[u_\infty]}^2 + \sum_{F\in\Fk^I;F\subset \partial K} \int_F h_k^{-1}\abs{\jump{\nabla u_\infty}}^2 + \sum_{F\in\cF_k; F\subset \partial K}\int_{F} h_k^{-3} \abs{\jump{u_\infty}}^2 \right].
\end{multline}
Note that $\normk{u_\infty-u_k}\rightarrow 0$ as $k\rightarrow \infty$ as shown by Lemma~\ref{lem:uinfty_convergence}.
Lemma~\ref{lem:hjvanishes} shows that the elements of $\cT_k^-$ have uniformly vanishing measures in the limit, and thus the square integrability of $F_{\gamma}[u_\infty]$ implies that $ \max_{K\in\cT_k^-}\int_K \abs{F_{\gamma}[u_\infty]}^2 \rightarrow 0$ as $k\rightarrow \infty$.
Finally, for any $K\in\cT_k^-$, the faces of $K$ belong to $\cF_k\setminus \cF^{\dagger}_k$ and thus
\begin{multline*}
\max_{K\in\cT_k^-}\left[\sum_{F\in\Fk^I;F\subset \partial K} \int_F h_k^{-1}\abs{\jump{\nabla u_\infty}}^2+\sum_{F\in\cF_k; F\subset \partial K}\int_{F} h_k^{-3} \abs{\jump{u_\infty}}^2 \right]
\\ \leq \int_{\Fk^I\setminus\cF^{I\dagger}_k} h_k^{-1}\abs{\jump{\nabla u_\infty}}^2 + \int_{\cF_k\setminus\cF^{\dagger}_k} h_k^{-3}\abs{\jump{u_\infty}}^2.
\end{multline*}
Using~\eqref{eq:jump_term_vanish}, we then deduce that all terms on the right-hand side of~\eqref{eq:max_estimator_vanishes_1} vanish in the limit as $k\rightarrow \infty$, which implies~\eqref{eq:max_estimator_vanishes}.
\end{proof}
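For orientation, a common concrete choice satisfying a marking condition of the type~\eqref{eq:max_marking} is the maximum strategy, which marks every element whose estimator is at least a fixed fraction of the largest elementwise estimator. The following minimal Python sketch is a generic illustration of this strategy; the threshold value and the array-based data layout are assumptions, and the actual marking condition used here is the one specified in~\eqref{eq:max_marking}.
\begin{verbatim}
import numpy as np

def max_marking(eta, theta=0.5):
    """Indices of elements with eta_K >= theta * max_K eta_K."""
    eta = np.asarray(eta)
    return np.flatnonzero(eta >= theta * eta.max())

eta_sample = np.array([0.02, 0.30, 0.07, 0.25, 0.01])
print(max_marking(eta_sample))   # marks elements 1 and 3
\end{verbatim}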
We are now ready to prove the main result of this work.
\noindent\emph{Proof of Theorem~\ref{thm:main}.}
The proof consists of several steps.
\emph{Step~1.} We first show that the jump $\jump{u_\infty}_F$, respectively $\jump{\nabla u_\infty}_F$, vanishes identically for all faces $F\in\cF^+$, respectively $F\in\cF^{I+}$, which will imply that $u_\infty \in H^2(\Omega)\cap H^1_0(\Omega)$.
Moreover we show that $F_{\gamma}[u_\infty]=0$ a.e. in $\Omega^+$.
To do so, consider an arbitrary but fixed $K\in\cT^+$; then, there exists an $\ell\in\mathbb{N}$ such that $K\in\mathcal{T}_{k}^{1+}$ for all $k\geq \ell$. Note then that each face $F$ of $K$ is in $\cF^{\dagger}_k$ and $h_k|_F=h_+|_F$ for all $k\geq \ell$. So, for all $k\geq \ell$, the triangle inequality shows that
\begin{multline}\label{eq:u_infty_est_vanish}
\int_K \abs{F_{\gamma}[u_\infty]}^2 + \sum_{F\in\cF^{I+};F\subset \partial K} \int_F h_+^{-1}\abs{\jump{\nabla u_\infty}}^2 + \sum_{F\in\cF^+; F\subset \partial K}\int_{F} h_+^{-3} \abs{\jump{u_\infty}}^2 \\
\lesssim \normk{u_\infty-u_k}^2 + [\eta_k(u_k,K)]^2 \leq \normk{u_\infty-u_k}^2 + \max_{K^\prime\in\cT_k}[\eta_k(u_k,K^\prime)]^2.
\end{multline}
Then, Lemmas~\ref{lem:uinfty_convergence} and~\ref{lem:max_estimator_vanishes} imply that all of the terms in the right-hand side of~\eqref{eq:u_infty_est_vanish} above vanish in the limit as $k\rightarrow\infty$, and thus the left-hand side, which is independent of $k$, vanishes identically.
Therefore, $F_{\gamma}[u_\infty]=0$ a.e.\ on $K$, $\jump{\nabla u_\infty}_F=0$ for each interior face $F$ of $K$, and $\jump{u_\infty}_F=0$ for each face $F$ of $K$.
Recalling that $K\in\cT^+$ was arbitrary, it follows that $F_{\gamma}[u_\infty]=0$ a.e.\ in~$\Omega^+$ since $\Omega^+$ is the countable union of all elements in $\cT^+$.
Furthermore, since each face of $\cF^+$ is a face of an element in $\cT^+$, we also conclude that $\jump{u_\infty}_F=0$ for all faces $F\in\cF^+$ and that $\jump{\nabla u_\infty}_F=0$ for all faces $F\in\cF^{I+}$.
We then infer that $u_\infty \in H^2(\Omega)\cap H^1_0(\Omega)$ from the definition of the space $H^2_D(\Om;\Tp)$ in Definition~\ref{def:HD_def}, the forms of the first and second distributional derivatives of $u_\infty$ in \eqref{eq:distderiv_H100Tp} and \eqref{eq:distderiv_h2}, and from the characterization of $H^1_0(\Omega)$ in~\cite[Theorem~5.29]{AdamsFournier03}.
\emph{Step~2.} The fact that $u_\infty \in H^2(\Omega)\cap H^1_0(\Omega)$ and Lemma~\ref{lem:uinfty_convergence} then imply that the jump seminorms of the numerical solutions vanish in the limit, i.e.\
\begin{equation}\label{eq:jump_numsol_vanish}
\lim_{k\rightarrow \infty} \absJ{u_k} = \lim_{k\rightarrow\infty} \absJ{u_k-u_\infty} \leq \lim_{k\rightarrow\infty} \normk{u_k-u_\infty} =0,
\end{equation}
where it is recalled that the jump seminorm $\absJ{\cdot}$ is defined in~\eqref{eq:norm_def}.
\emph{Step~3.} We now prove that $u_\infty=u $ is the exact solution of~\eqref{eq:isaacs}. Let $z \coloneqq u_\infty-u$, and note that $z\in H^2(\Omega)\cap H^1_0(\Omega)$.
Since the mesh size vanishes uniformly in the limit on $\Omega^-$, cf.~Lemma~\ref{lem:hjvanishes},
by using a quasi-interpolant similar to the one in the proof of Theorem~\ref{thm:limit_space_characterization}, we see that there exists a $z_k \in V_k^s$ such that $\normk{z_k}\lesssim \norm{z}_{H^2(\Omega)}$ for all $k\in\mathbb{N}$, and such that $\norm{\nabla^m(z-z_k)}_{\Omega^-} \rightarrow 0$ as $k \rightarrow \infty$ for each $m\in\{0,1,2\}$.
Then, the strong monotonicity bound~\eqref{eq:continuous_strong_monotonicity} implies that
\begin{equation}
\norm{u_\infty-u}_{H^2(\Omega)}^2 \lesssim \int_{\Omega} (F_{\gamma}[u_\infty]-F_{\gamma}[u])\Delta z = \int_{\Omega}F_{\gamma}[u_\infty] \Delta z.
\end{equation}
Then, by addition/subtraction of $\int_\Omega F_{\gamma}[u_k]\Delta z$ and using $A_k(u_k;z_k)=0$ by~\eqref{eq:num_scheme}, we find that
\begin{equation}\label{eq:convergence_1}
\norm{u_\infty-u}_{H^2(\Omega)}^2 \lesssim \int_{\Omega} (F_{\gamma}[u_\infty]-F_{\gamma}[u_k])\Delta z + \int_\Omega F_{\gamma}[u_k]\Delta(z-z_k) - \theta S_k(u_k,z_k) - J_k^{\sigma,\rho}(u_k,z_k).
\end{equation}
We now claim that each of the terms on the right-hand side of~\eqref{eq:convergence_1} vanishes in the limit as $k\rightarrow \infty$, which will then imply that $u_\infty = u$. The first term $\int_\Omega (F_{\gamma}[u_\infty]-F_{\gamma}[u_k])\Delta z$ vanishes in the limit owing to the Lipschitz continuity of $F_{\gamma}$ and to the strong convergence $\normk{u_\infty-u_k}\rightarrow 0$ as $k\rightarrow \infty$.
Turning our attention towards the second term in the right-hand side of~\eqref{eq:convergence_1}, we find that
\begin{equation}\label{eq:convergence_2}
\begin{split}
\left\lvert\int_\Omega F_{\gamma}[u_k]\Delta(z-z_k)\right\rvert & \leq \left\lvert \int_{\Omega^+}(F_{\gamma}[u_k]-F_{\gamma}[u_\infty])\Delta(z-z_k) \right\rvert + \left\lvert\int_{\Omega^-}F_{\gamma}[u_k]\Delta(z-z_k)\right\rvert
\\ & \lesssim \normk{u_k-u_\infty}\norm{z}_{H^2(\Omega)}+\normk{u_k}\norm{\nabla^2(z-z_k)}_{\Omega^-},
\end{split}
\end{equation}
where in the first inequality we used the fact that $F_{\gamma}[u_\infty]=0$ a.e.\ in $\Omega^+$, and in the second inequality we have used the stability bound $\norm{\Delta(z-z_k)}_{\Omega^+}\lesssim\norm{z}_{H^2(\Omega)}$. Therefore, we infer that $\left\lvert\int_\Omega F_{\gamma}[u_k]\Delta(z-z_k)\right\rvert \rightarrow 0$ as $k\rightarrow \infty$ from the boundedness of the sequence of numerical solutions, see~\eqref{eq:num_sol_bounded}, from the limit $\normk{u_\infty-u_k}\rightarrow 0$ and from the convergence $\norm{\nabla^m(z-z_k)}_{\Omega^-} \rightarrow 0$ for all $m\in\{0,1,2\}$ as $k \rightarrow \infty$.
For the last two remaining terms in~\eqref{eq:convergence_1}, we apply Theorem~\ref{thm:b_k_jump_bound} and deduce that
\begin{equation}\label{eq:convergence_3}
\left\lvert S_k(u_k,z_k) \right\rvert + \left\lvert J_k^{\sigma,\rho}(u_k,z_k)\right\rvert\lesssim C_{\sigma,\rho} \absJ{u_k}\absJ{z_k},
\end{equation}
where $C_{\sigma,\rho}$ is a constant depending only on $\sigma$ and $\rho$. We then use the convergence of the jump seminorms in~\eqref{eq:jump_numsol_vanish} and the boundedness $\absJ{z_k}\lesssim\normk{z_k}\lesssim \norm{z}_{H^2(\Omega)}$ to conclude that $ S_k(u_k,z_k) \rightarrow 0$ and $J_k^{\sigma,\rho}(u_k,z_k)\rightarrow 0$ as $k\rightarrow \infty$. Thus we have established that all terms in the right-hand side of~\eqref{eq:convergence_1} vanish in the limit as $k\rightarrow \infty$ and we deduce that $u_\infty=u$ is the exact solution of~\eqref{eq:isaacs}.
We then conclude that $\normk{u-u_k}=\normk{u_\infty-u_k}\rightarrow 0$ as $k\rightarrow \infty$, which proves the first statement in~\eqref{eq:main}. The convergence of the estimators $\eta_k(u_k)\rightarrow 0$ as $k\rightarrow \infty$ then follows immediately from the global efficiency bound~\eqref{eq:global_efficiency}, thus completing the proof of \eqref{eq:main} and of Theorem~\ref{thm:main}.\qed
\section{INTRODUCTION}
Nearly $10 \, \%$ of Active Galactic Nuclei (AGN) possess two oppositely-oriented collimated beams which emit photons up to the TeV energy range. When one of the beams is oriented towards us, the AGN is called a blazar.
Thanks to the observations carried out with Imaging Atmospheric Cherenkov Telescopes (IACTs) like H.E.S.S.~\cite{hess}, MAGIC~\cite{magic} and VERITAS~\cite{veritas}, according to the TeVCat catalog 43 blazars with known redshift have been detected in the very-high-energy (VHE) range ($100 \, {\rm GeV} - 100 \, {\rm TeV}$)~\cite{tevcat}. We stress that 40 of them are in the flaring state, whose typical lifetime ranges from a few hours to a few days. As will be explained below, 3 of them -- 1ES 0229+200, PKS 1441+25 and S3 0218+35 -- will be discarded for the sake of the present analysis.
All observed spectra of the considered VHE blazars are well fitted by a single power-law, and so they have the form
$\Phi_{\rm obs}(E_0, z) \propto K_{{\rm obs},0} (z) \, E_0^{- \Gamma_{\rm obs}(z)}$, where $E_0$ is the observed energy, while $K_{{\rm obs},0} (z)$ and $\Gamma_{\rm obs} (z)$ denote the normalization constant and the observed slope, respectively, for a source at redshift $z$.
Unfortunately, the observational results do not provide any {\it direct} information about the intrinsic properties of the sources, as the VHE gamma-ray data strongly depend on the nature of photon propagation. Indeed, according to conventional physics the blazar spectra in the VHE range are strongly affected by the presence of the Extragalactic Background Light (EBL), namely the infrared/optical/ultraviolet background photons emitted by all galaxies since their birth (for a review, see~\cite{ebl}). However, it should be kept in mind that if some yet-to-be-discovered new physics changes the photon propagation, then some intrinsic source properties that are currently believed to be true may actually be incorrect.
We restrict our discussion to the two standard competing models for VHE photon emission by blazars, namely the Synchrotron-Self-Compton (SSC) mechanism~\cite{ssc1,ssc2} and the Hadronic Pion Production (HPP) in proton-proton scattering~\cite{hpp}. Both predict emitted spectra which, to a good approximation, have a single power-law behavior $\Phi_{\rm em} (E) = K_{\rm em} \,
E^{- \Gamma_{\rm em}}$ for all the considered VHE blazars, where $K_{\rm em}$ is the normalization constant and $\Gamma_{\rm em}$ is the emitted slope.
The relation between $\Phi_{\rm obs}(E_0, z)$ and $\Phi_{\rm em}(E)$ can be expressed in general terms as
\begin{equation}
\Phi_{\rm obs}(E_0, z) = P_{\gamma \to \gamma} (E_0, z) \, \Phi_{\rm em} \bigl(E_0 (1 + z) \bigr)~,
\label{a1}
\end{equation}
where $P_{\gamma \to \gamma} (E_0, z)$ is the photon survival probability from the source to us, and is usually written in
terms of the optical depth $\tau_{\gamma} (E_0, z)$ as
\begin{equation}
P_{\gamma \to \gamma} (E_0, z) = e^{ - \tau_{\gamma} (E_0, z)}~.
\label{a2}
\end{equation}
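For concreteness, Eqs. (\ref{a1}) and (\ref{a2}) translate into a simple numerical recipe. The following minimal Python sketch -- in which the function \texttt{tau} is a placeholder for any tabulated optical-depth model, and the toy optical depth shown is purely illustrative, not a real EBL computation -- evaluates the observed flux for an emitted power law:
\begin{verbatim}
import numpy as np

def observed_flux(E0, z, K_em, Gamma_em, tau):
    """Eqs. (a1)-(a2): emitted power law attenuated by the
    photon survival probability exp(-tau(E0, z))."""
    E = E0 * (1.0 + z)                  # source-frame energy
    Phi_em = K_em * E ** (-Gamma_em)    # emitted power-law flux
    return np.exp(-tau(E0, z)) * Phi_em

# toy optical depth (NOT a real EBL model), E0 in TeV
toy_tau = lambda E0, z: 0.5 * z * (E0 / 0.3) ** 0.8

print(observed_flux(np.array([0.3, 1.0, 3.0]), 0.1, 1e-11, 2.5, toy_tau))
\end{verbatim}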
Before proceeding, a remark is in order. A few years ago, a radically different mechanism was put forward in order to explain the IACT observations. Basically, the idea is that {\it protons} are accelerated inside blazars up to energies of order $10^{11} \, {\rm GeV}$, while VHE emitted photons are neglected altogether. When the protons are at a distance of $10 - 100 \, {\rm Mpc}$ from our Galaxy, they scatter off EBL photons through the process $p + \gamma \to p + \pi^0$, so that the immediate decays $\pi^0 \to \gamma + \gamma$ produce an electromagnetic shower of secondary photons: it is {\it these photons} that replace the emitted photons in such a scenario~\cite{esseykusenko2010}. Two characteristic features of this mechanism should be stressed in connection with the present analysis.
\begin{itemize}
\item This effect is expected to be important for sources at redshifts $z > 0.15$ and energies $E_0 > 1 \, {\rm TeV}$~\cite{prosekinesseykusenkoaharonian2012}.
\item Observed blazar variability shorter than $0.1 \, {\rm yr}$ can be explained only for $z > 0.20$ at $E_0 > 1 \, {\rm TeV}$~\cite{prosekinesseykusenkoaharonian2012}.
\end{itemize}
Now, a glance at the values of $z$ for the considered sources reported in Table~\ref{tabSource} shows that we have 28 blazars with $z < 0.15$, and so -- owing to the first item -- these blazars cannot be explained by the mechanism in question. Let us next turn to the issue of variability. The overwhelming majority of the sources considered here are flaring, with a lifetime typically ranging from a few hours to a few days. Only the blazars 1ES 0229+200, 1ES 0806+524 and 1ES 1101-232 have a constant $\gamma$-ray luminosity~\cite{murasedermertakamimigliori2012}. A look at the energy range $\Delta E_0$ over which each source is observed (reported in Table~\ref{tabSource}) shows that the lowest values of $E_0$ are much below $1 \, {\rm TeV}$ (with one exception to be addressed below). Because the shapes of the observed spectra do not exhibit any peculiar feature, it is evident that a {\it single} blazar is explained by a {\it single mechanism}, and so we must conclude that the considered approach can explain at most 3 sources. Thus, here we do not address such an alternative possibility -- whose relevance can become important at much larger energies -- apart from one exception. An analysis of the properties of the blazar 1ES 0229+200 has shown that it can hardly be explained by the SSC mechanism. Moreover, since its VHE luminosity is constant, this source is more likely to be explained by the proton emission model~\cite{bonnolietal2015}. For this reason, we discard it from our discussion.
The main issue we are concerned with in this paper is a possible correlation between the distribution of blazar VHE {\it emitted spectra} and the {\it redshift}.
Superficially, the reader might well wonder about such a question. Why should a correlation of this kind exist? Cosmological evolutionary effects are certainly harmless out to redshift $z \simeq 0.5$, and when observational selection biases are properly taken into account no such correlation is expected to show up. As we shall see, this is not the case, and it is indeed this fact that has prompted our analysis.
We are now in a position to outline the structure of the paper. In Section II we report all observational information needed for the present analysis. Section III is devoted to inferring for each considered source the emitted slope $\Gamma_{\rm em}^{\rm CP} (z)$ and the emitted flux normalization $K_{\rm em}^{\rm CP} (z)$ starting from the observed ones $\Gamma_{\rm obs} (z)$ and $K_{{\rm obs},0} (z)$ assuming conventional physics, namely taking into account the effect of the EBL absorption alone. We next plot the values of $\Gamma_{\rm em}^{\rm CP} (z)$ and $K_{\rm em}^{\rm CP} (z)$ as a function of redshift $z$ for all considered blazars. After performing a statistical analysis of the $\{\Gamma_{\rm em}^{\rm CP} (z) \}$ distribution, we end up with the conclusion that the resulting best-fit regression line {\it decreases with increasing redshift}. While this trend might be interpreted as an observational selection effect, a deeper scrutiny based on observational information shows that this is by no means the case. So, we are led to the conclusion that such a behavior is at odds with physical intuition, which would instead demand the best-fit regression line to be redshift-independent. In Section IV we try to fit the same data in the $\Gamma_{\rm em}^{\rm CP} - z$ plane with a horizontal straight line, but we conclude that it does not work. As an attempt to achieve a physically satisfactory scenario, in Section V we introduce Axion-Like Particles (ALPs) (for a review, see~\cite{alp}). They are spin-zero, neutral and extremely light pseudo-scalar bosons predicted by several extensions of the Standard Model of particle physics, and especially by those based on superstring theories~\cite{turok1996,string1,string2,string3,string4,string5,axiverse,abk2010,cicoli2012,dias2014}. They interact only with two photons. Depending on their mass and two-photon coupling, they can be quite good candidates for cold dark matter~\cite{arias} and give rise to very interesting astrophysical effects (to be discussed in Section VI), so that nowadays ALPs are attracting growing interest. Specifically, we suppose that photon-ALP oscillations take place in extragalactic magnetic fields of strength about $0.1 \, {\rm nG}$ -- in agreement with the predictions of the galactic outflows models~\cite{fl2001,bve2006} -- as first proposed in~\cite{darma}. As a consequence, photon propagation gets affected by the EBL as well as by photon-ALP oscillations, whose combined effect is to {\it substantially reduce} the cosmic opacity brought about by the EBL alone, thereby considerably widening the conventional $\gamma$-ray horizon~\cite{dgr2013}. Accordingly, we re-derive for every source the emitted slope $\Gamma_{\rm em}^{\rm ALP} (z)$ and the emitted flux normalization $K_{\rm em}^{\rm ALP} (z)$ starting from the observed ones $\Gamma_{\rm obs} (z)$ and $K_{{\rm obs},0} (z)$. Proceeding as before, we plot again the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ and $K_{\rm em}^{\rm ALP} (z)$ as a function of redshift $z$ for all considered sources. A statistical analysis of the $\{\Gamma_{\rm em}^{\rm ALP} (z) \}$ distribution now shows that for a realistic choice of the parameters the corresponding best-fit regression line turns out to be indeed {\it independent} of redshift. Moreover, the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ for the individual blazars exhibit a small scatter about such a best-fit straight regression line.
We stress that the emergence of a best-fit regression line which is horizontal in the $\Gamma_{\rm em}^{\rm ALP} - z$ plane is a remarkable fact: it is the only outcome in agreement with physical intuition, and it provides a strong hint of the existence of ALPs. In Section VI we briefly discuss a new view of VHE blazars emerging from our result, its relevance -- also in connection with other VHE astrophysical achievements employing ALPs -- and its implications for the future of VHE astrophysics. Finally, in order to avoid breaking the main line of thought by somewhat involved technicalities concerning the evaluation of the photon survival probability in the presence of ALPs, we report this matter in the Appendix.
\section{OBSERVATIONAL INFORMATION}
The observational quantities concerning every blazar which are relevant for the present analysis are: the redshift $z$, the observed flux $\Phi_{\rm obs}(E_0, z)$ and the energy range $\Delta E_0$ where each source is observed.
Some care should be payed to the normalization constant entering the expression of $\Phi_{\rm obs}(E_0, z)$, which is usually written as
\begin{equation}
\label{a313}
\Phi_{\rm obs}(E_0,z) = K_{{\rm obs},0} (z) \left(\frac{E_0}{E_{\rm ref}} \right)^{ - \Gamma_{\rm obs} (z)}~,
\end{equation}
where $E_{\rm ref}$ is an arbitrary energy value needed to make the base of the power law dimensionless, and in general {\it varies} from source to source. Manifestly, for the sake of comparison among the flux normalization constants for the different sources we have to recast Eq. (\ref{a313}) into a form such that $E_{\rm ref}$ gets replaced by a quantity which is equal for all sources. Choosing $300 \, {\rm GeV}$ as a fiducial normalization energy, this amounts to defining the new normalization constant as
\begin{equation}
\label{a314}
K_{\rm obs} (z) \equiv K_{{\rm obs},0} (z) \left(\frac{300 \, {\rm GeV}}{E_{\rm ref}} \right)^{ - \Gamma_{\rm obs} (z)}~,
\end{equation}
so that Eq. (\ref{a313}) becomes
\begin{equation}
\label{a315}
\Phi_{\rm obs}(E_0,z) = K_{\rm obs} (z) \left(\frac{E_0}{300 \, {\rm GeV}} \right)^{ - \Gamma_{\rm obs} (z)}~.
\end{equation}
Note that we have $\Phi_{\rm obs}(300 \, {\rm GeV},z) = K_{\rm obs} (z)$. Henceforth, we shall deal exclusively with the values of $K_{\rm obs} (z)$. Therefore, we need to know both $K_{\rm obs} (z)$ and $\Gamma_{\rm obs} (z)$ for any source.
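This rescaling is trivial to implement; a minimal sketch (with purely illustrative, hypothetical input numbers) is:
\begin{verbatim}
def rescale_normalization(K_obs0, E_ref, Gamma_obs, E_new=300.0):
    """Eq. (a314): move the flux normalization from the quoted
    reference energy E_ref (GeV) to the common value E_new (GeV)."""
    return K_obs0 * (E_new / E_ref) ** (-Gamma_obs)

# hypothetical example: spectrum quoted at E_ref = 1 TeV with slope 3.0
print(rescale_normalization(K_obs0=2.0e-12, E_ref=1000.0, Gamma_obs=3.0))
\end{verbatim}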
The values of $z$, $\Gamma_{\rm obs} (z)$, $\Delta E_0$ and $K_{\rm obs} (z)$ for all considered blazars are reported in Table~\ref{tabSource}. Observe that while the error bars associated with $\Gamma_{\rm obs} (z)$ are quoted, those referring to $K_{\rm obs} (z)$ are not, because they are missing from many of the published papers; for our purposes this is not a serious problem (nevertheless, the reader should keep this point in mind throughout the paper). The observed slope $\Gamma_{\rm obs}$ and the normalization constant $K_{\rm obs}$ as a function of $z$ are plotted in the left and right panels of Fig.~\ref{fig1}, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=.50\textwidth]{fig1-1.pdf}\includegraphics[width=.50\textwidth]{kobs-1.pdf}
\caption{\label{fig1} {\it Left panel}: The values of the slope $\Gamma_{\rm obs}$ are plotted versus the source redshift $z$ for all considered blazars. {\it Right panel}: The values of the normalization constant $K_{\rm obs}$ are similarly plotted versus the source redshift $z$ for the same blazars in the left panel.}
\end{figure}
Finally, denoting by $N_{\rm obs}$ the total number of detected photons per unit time, since $\Phi_{\rm obs}(E_0, z) = d N_{\rm obs}/(d E_0 \, d A)$ it follows that the observed $\gamma$-ray luminosity per unit area $F_{{\rm obs}, \Delta E_0} (z)$ is the integral of $\Phi_{\rm obs}(E_0, z)$ over $\Delta E_0$.
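Since the observed spectra are power laws, this integral has a closed form; a short sketch (energies in TeV, flux in the units of Table~\ref{tabSource}) is:
\begin{verbatim}
import math

def F_obs(K_obs, Gamma, E1, E2, E_star=0.3):
    """Integral of K_obs*(E/E_star)**(-Gamma) over [E1, E2]:
    the observed gamma-ray luminosity per unit area."""
    if abs(Gamma - 1.0) < 1e-9:                  # logarithmic special case
        return K_obs * E_star * math.log(E2 / E1)
    p = 1.0 - Gamma
    return K_obs * E_star / p * ((E2 / E_star) ** p - (E1 / E_star) ** p)

# e.g. Mrk 501 (Table I): K_obs = 1.53e-10, Gamma_obs = 2.72, 0.21-2.5 TeV
print(F_obs(1.53e-10, 2.72, 0.21, 2.5))
\end{verbatim}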
\section{CONVENTIONAL PROPAGATION IN EXTRAGALACTIC SPACE}
After a long period of uncertainty about the precise EBL spectral energy distribution and photon number density, today a convergence seems to have been reached~\cite{ebl}, well represented e.g. by the model of Franceschini, Rodighiero and Vaccari (FRV)~\cite{frv}, which we use for convenience.
Owing to $\gamma \gamma \to e^+ e^-$ scattering off EBL photons~\cite{bw,heitler}, the emitted VHE photons undergo an energy-dependent absorption, so that the VHE photon survival probability is given by Eq. (\ref{a2}) with $\tau_{\gamma} (E_0, z) \to \tau_{\gamma}^{\rm FRV} (E_0, z)$, where $\tau_{\gamma}^{\rm FRV} (E_0, z)$ is the optical depth of the EBL as evaluated within the FRV model in a standard fashion using the photon spectral number density~\cite{nikishov,gould,fazio}. As a consequence, the observed flux $\Phi_{\rm obs}(E_0, z)$ is related to the emitted one $\Phi_{\rm em}^{\rm CP} (E)$ by
\begin{equation}
\label{a3}
\Phi_{\rm obs}(E_0,z) = e^{- \tau_{\gamma}^{\rm FRV} (E_0, z)} \, \Phi_{\rm em}^{\rm CP} \bigl(E_0 (1+z) \bigr)~.
\end{equation}
Let us begin by deriving the emitted spectrum of every source, starting from the observed one, within conventional physics. As a preliminary step, we rewrite Eq. (\ref{a3}) as
\begin{equation}
\label{a4}
\Phi_{\rm em}^{\rm CP} \bigl(E_0 (1+z) \bigr) = e^{\tau_{\gamma}^{\rm FRV} (E_0, z)} \, K_{\rm obs} (z) \, \left(\frac{E_0}{300 \, {\rm GeV}} \right)^{-\Gamma_{\rm obs}(z)}~.
\end{equation}
Owing to the presence of the exponential in the r.h.s. of Eq. (\ref{a4}), $\Phi_{\rm em}^{\rm CP} \bigl(E_0 (1+z) \bigr)$ cannot behave as an exact power law (unless $\tau_{\gamma}^{\rm FRV} (E_0, z)$ has a purely logarithmic $E_0$-dependence). Yet, we have pointed out that it is expected to be close to one. So, we best-fit $\Phi_{\rm em}^{\rm CP} \bigl(E_0 (1+z) \bigr)$ in Eq. (\ref{a4}) to the single power-law expression
\begin{equation}
\label{a418}
\Phi_{\rm em}^{\rm CP, BF} \bigl(E_0 (1+z) \bigr) = K_{\rm em}^{\rm CP} (z) \, \left(\frac{ (1+z) E_0}{300 \, {\rm GeV}} \right)^{- \Gamma_{\rm em}^{\rm CP} (z)}
\end{equation}
over the energy range $\Delta E_0$ where the source is observed. Incidentally, we neglect error bars in the values of $\tau_{\gamma}^{\rm FRV}(E_0,z)$ because they are not quoted by FRV. Since $\Gamma_{\rm em}^{\rm CP} (z)$ depends linearly on $\Gamma_{\rm obs} (z)$, the inferred values of $\Gamma_{\rm em}^{\rm CP} (z)$ have the same error bars as the values of $\Gamma_{\rm obs} (z)$ quoted in Table~\ref{tabSource}.
Of course, we are well aware that the best procedure would be to de-absorb each bin of the observed spectrum, thereby getting the emitted spectrum and next applying to it the above best-fitting procedure. Unfortunately, such a strategy is not viable in practice because the single observed energy bins with related error bars are not available from published papers. Nevertheless, the difference between the two procedures is expected to be relevant only for those sources whose highest energy points are affected by a large uncertainty, like 1ES 0229+200 -- observed up to about $11 \, {\rm TeV}$ -- which is however discarded.
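For the reader's convenience, the whole procedure can be condensed into a few lines. The sketch below -- with \texttt{tau} standing in for a tabulated $\tau_{\gamma}^{\rm FRV}$ and a plain log-log least-squares fit standing in for our actual fitting procedure -- de-absorbs the observed power law and extracts $\Gamma_{\rm em}^{\rm CP}$ and $K_{\rm em}^{\rm CP}$:
\begin{verbatim}
import numpy as np

def deabsorb_and_fit(E_min, E_max, z, K_obs, Gamma_obs, tau, n=50):
    """De-absorb the observed power law over the observed energy range
    (Eq. a4) and best-fit a single power law to it (Eq. a418)."""
    E0 = np.geomspace(E_min, E_max, n)               # observed energies, TeV
    Phi_em = np.exp(tau(E0, z)) * K_obs * (E0 / 0.3) ** (-Gamma_obs)
    x = np.log((1.0 + z) * E0 / 0.3)                 # source-frame energies
    slope, logK = np.polyfit(x, np.log(Phi_em), 1)
    return -slope, np.exp(logK)                      # Gamma_em, K_em
\end{verbatim}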
The values of the emitted slope $\Gamma_{\rm em}^{\rm CP} (z)$ are reported in Table~\ref{tabGammaL4} and plotted in the left panel of Fig.~\ref{fig2}. Similarly, the values of the normalization constant $K_{\rm em}^{\rm CP} (z)$ are listed in Table~\ref{tabKappa} and plotted in the right panel of Fig.~\ref{fig2}. Again, denoting by $N_{\rm em}$ the total number of emitted photons per unit time, because $\Phi_{\rm em}(E, z) = d N_{\rm em}/(d E \, dA)$, we see -- similarly as before -- that the emitted $\gamma$-ray luminosity per unit area $F_{{\rm em}, \Delta E}^{\rm CP} (z)$ is the integral of $\Phi_{\rm em}(E, z)$ over $\Delta E$, whose values are reported in Table~\ref{tabLum}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=.50\textwidth]{fig2L-1.pdf}\includegraphics[width=.50\textwidth]{kemCP-1.pdf}
\end{center}
\caption{\label{fig2} {\it Left panel}: The values of the slope $\Gamma_{\rm em}^{\rm CP}$ are plotted versus the source redshift $z$ for all considered blazars. {\it Right panel}: The values of the normalization constant $K_{\rm em}^{\rm CP}$ are similarly plotted versus the source redshift $z$ for the same blazars in the left panel.}
\end{figure}
We proceed by performing a statistical analysis of all values of $\Gamma_{\rm em}^{\rm CP} (z)$ as a function of $z$. Specifically, we use the least-squares method and try to fit the data with one parameter (horizontal straight line), two parameters (first-order polynomial), and three parameters (second-order polynomial). In order to test the statistical significance of the fits we evaluate the corresponding $\chi^2_{\rm red}$. The values of the $\chi^2_{\rm red}$ obtained for the three fits are $\chi^2_{\rm red} = 2.35$,
$\chi^2_{\rm red} = 1.83$ and $\chi^2_{\rm red} = 1.87$, respectively. Thus, data appear to be best-fitted by the first-order polynomial
\begin{equation}
\label{a31427}
\Gamma_{\rm em}^{\rm CP} (z) = 2.68 - 2.21 \, z~.
\end{equation}
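In practice, this model comparison is straightforward to reproduce. A minimal sketch -- with \texttt{z}, \texttt{Gamma} and \texttt{sigma} standing for the redshifts, emitted slopes and total errors of Tables~\ref{tabSource} and~\ref{tabGammaL4} -- is:
\begin{verbatim}
import numpy as np

def ranked_poly_fits(z, Gamma, sigma, max_deg=2):
    """Weighted least-squares fits with polynomials of degree 0, 1, 2
    in z, each scored by its reduced chi-square."""
    out = {}
    for deg in range(max_deg + 1):
        coeffs = np.polyfit(z, Gamma, deg, w=1.0 / np.asarray(sigma))
        resid = (Gamma - np.polyval(coeffs, z)) / sigma
        out[deg] = (coeffs, np.sum(resid ** 2) / (len(z) - deg - 1))
    return out
\end{verbatim}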
The $\{\Gamma_{\rm em}^{\rm CP} (z)\}$ distribution as a function of $z$ and the associated best-fit straight regression line defined by Eq. (\ref{a31427}) are plotted in Fig.~\ref{fig2R.pdf}.
We stress that in order to appreciate the physical meaning of the best-fit straight regression line in question we should recall that $\Gamma_{\rm em}^{\rm CP} (z)$ is the {\it exponent} of the emitted energy entering $\Phi_{\rm em}^{\rm CP} (E)$. Hence, in the two extreme cases we have
\begin{equation}
\label{a412}
\Phi_{\rm em}^{\rm CP} (E, 0) \propto E^{ - 2.68}~, \qquad\qquad \Phi_{\rm em}^{\rm CP} (E, 0.6) \propto E^{ - 1.35}~,
\end{equation}
thereby implying that its nonvanishing slope gives rise to a {\it large variation} of the emitted flux with redshift.
\begin{figure}[h]
\begin{center}
\includegraphics[width=.50\textwidth]{fig2R-1.pdf}
\end{center}
\caption{\label{fig2R.pdf} Same as the left panel of Fig.~\ref{fig2} but with superimposed the best-fit straight regression line given by Eq.~(\ref{a31427}).}
\end{figure}
Actually, one of the effects of the obtained best-fit straight regression line is that blazars with harder spectra are found {\it only} at larger redshift. What is the {\it physical meaning} of this fact?
Since we have intentionally neglected the two blazars PKS 1441+25 and S3 0218+35, both at $z \simeq 0.94$, our set of sources extends up to $z \simeq 0.54$ (3C 279). Therefore, we are concerned with a relatively local sample, and so cosmological evolutionary effects are insignificant.
Let us proceed to address all possible observational selection biases.
\begin{itemize}
\item As we look at larger distances only the brighter sources are observed while the fainter ones progressively disappear.
\item Looking at greater distances entails that larger regions of space are probed, and so -- under the assumption of a uniform source distribution -- a larger number of brighter blazars should be detected. Now, the physical explanation of the best-fit straight regression line in Fig.~\ref{fig2R.pdf} would naturally arise provided that $F_{{\rm em}, \Delta E}^{\rm CP} (z)$ {\it tightly correlates} with $\Gamma_{\rm em}^{\rm CP} (z)$ in such a way that {\it brighter sources have harder spectra}. Then the above selection bias translates into the statement that looking at greater distances implies that a larger number of blazars with harder spectra should be observed, which is just what Fig.~\ref{fig2R.pdf} tells us. So, the real question concerns the existence of a tight correlation between $F_{{\rm em}, \Delta E}^{\rm CP} (z)$ and $\Gamma_{\rm em}^{\rm CP} (z)$.
\item Similarly, a logically consistent possibility would be that the jet opening angle $\delta (z)$ were {\it tightly correlated} with $\Gamma_{\rm em}^{\rm CP} (z)$ so that sources with stronger beaming had harder spectra: in such a situation -- under the previous assumption of a uniform source distribution -- the probability that the beam points towards us increases as larger regions of space are probed, and so sources with harder spectra would be more copiously found at larger distances, again in agreement with what
Fig.~\ref{fig2R.pdf} entails.
\end{itemize}
In order to get a deeper insight into the issue addressed in the second item, we proceed by plotting $F_{{\rm em}, \Delta E}^{\rm CP}$ versus $\Gamma_{\rm em}^{\rm CP}$ in Fig.~\ref{FluxVsSlopecp}. We see that $F_{{\rm em}, \Delta E}^{\rm CP}$ and $\Gamma_{\rm em}^{\rm CP}$ are {\it totally uncorrelated}. As a consequence, the cheap explanation outlined in the second item concerning the behavior of the best-fit straight regression line in Fig.~\ref{fig2R.pdf} is doomed to failure.
\begin{figure}[h]
\begin{center}
\includegraphics[width=.50\textwidth]{FluxVsSlopeCP-1.pdf}
\end{center}
\caption{\label{FluxVsSlopecp} $F_{{\rm em}, \Delta E}^{\rm CP}$ is plotted versus $\Gamma_{\rm em}^{\rm CP}$.}
\end{figure}
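The absence of correlation apparent in Fig.~\ref{FluxVsSlopecp} can also be quantified with a rank test. A minimal sketch -- shown here on mock, uncorrelated data, since the point is only the recipe -- is:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def correlation_check(F_em, Gamma_em):
    """Spearman rank test for a monotonic correlation between the
    emitted luminosity per unit area and the emitted slope."""
    rho, p = spearmanr(F_em, Gamma_em)
    return rho, p

rng = np.random.default_rng(0)   # mock uncorrelated data
print(correlation_check(rng.uniform(1, 10, 40), rng.normal(2.5, 0.4, 40)))
\end{verbatim}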
Finally, the possibility contemplated in the third item can also be excluded very easily by comparing the values of $\Gamma_{\rm em}^{\rm CP} (z)$ as reported in Table~\ref{tabGammaL4} with those estimated for $\delta (z)$ in~\cite{tav2010}. It turns out that no correlation whatsoever exists between $\delta (z)$ and $\Gamma_{\rm em}^{\rm CP} (z)$.
It is very difficult to imagine an intrinsic mechanism which could explain the best-fit straight regression line in Fig.~\ref{fig2R.pdf} in a physically satisfactory way within conventional physics. Otherwise stated, how can a source get to know its redshift $z$ in such a way as to adjust its emitted slope $\Gamma_{\rm em}^{\rm CP} (z)$ so as to reproduce the distribution with the best-fit straight regression line reported in Fig.~\ref{fig2R.pdf}?
We are therefore led to regard the situation emerging from the above discussion as in manifest disagreement with physical intuition, which would instead demand the best-fit straight regression line to be {\it horizontal} in the $\Gamma_{\rm em}^{\rm CP} - z$ plane.
\section{AN ATTEMPT BASED ON CONVENTIONAL PHYSICS}
In spite of our previous finding, let us nevertheless try to {\it impose by hand} that the same data set $\{\Gamma_{\rm em}^{\rm CP} (z) \}$ considered before is fitted by a horizontal straight line, and see what happens. The result is exhibited in Fig.~\ref{fig3}. In this case, we have $\Gamma_{\rm em}^{\rm CP} = 2.41$ and $\chi^2_{\rm red} = 2.35$.
Manifestly, this scenario does not work, since the value of $\chi^2_{\rm red}$ is unduly large. This fact hardly comes as a surprise, since we are not best-fitting the data. Still, this is a useful exercise, since it quantifies the price we have to pay in order to have a horizontal fitting straight line within conventional physics, and will be a benchmark for comparison when the ALP scenario will be considered.
For much the same reason, it is instructive to encompass $95 \, \%$ of the observed sources inside a strip centered on the horizontal fitting line $\Gamma_{\rm em}^{\rm CP} = 2.41$. What is its width $\Delta \Gamma_{\rm em}^{\rm CP}$? The answer is $\Delta \Gamma_{\rm em}^{\rm CP} = 0.94$, which is $39 \, \%$ of the value $\Gamma_{\rm em}^{\rm CP} = 2.41$.
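Operationally, the strip width is obtained by sorting the absolute deviations of the slopes from the fitting line; a minimal sketch is:
\begin{verbatim}
import numpy as np

def strip_width(Gamma, center, frac=0.95):
    """Full width of the strip centred on `center` that encompasses
    a fraction `frac` of the slope values."""
    dev = np.sort(np.abs(np.asarray(Gamma) - center))
    k = max(int(np.ceil(frac * dev.size)) - 1, 0)
    return 2.0 * dev[k]
\end{verbatim}
The same routine can be reused unchanged for the ALP scenario of Section V.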
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{fig3-1.pdf}
\caption{\label{fig3} Horizontal fitting straight line in conventional physics. The values of $\Gamma_{\rm em}^{\rm CP}$ are plotted versus the source redshift $z$ for all considered blazars with the corresponding error bars. Superimposed is the horizontal fitting straight line $\Gamma_{\rm em}^{\rm CP} = 2.41$ with $\chi^2_{\rm red} = 2.35$. The grey strip encompasses $95 \, \%$ of the sources and its width is $\Delta \Gamma_{\rm em}^{\rm CP} = 0.94$, which equals $39 \, \%$ of the value $\Gamma_{\rm em}^{\rm CP} = 2.41$.}
\end{figure}
\newpage
\section{AN ATTEMPT BASED ON AXION-LIKE PARTICLES}
As an alternative possibility to achieve a physically satisfactory situation, we invoke new physics in the form of axion-like particles (ALPs). As discussed in the Appendix, their most characteristic feature is to couple only to two photons with a coupling constant $g_{a \gamma \gamma}$ according to the Feynman diagram shown in Fig.~\ref{feyALP}. Clearly, in the presence of the extragalactic magnetic field ${\bf B}$ one photon line in Fig.~\ref{feyALP} represents the ${\bf B}$ field, and so we see that in such a situation energy-conserving oscillations between VHE photons and ALPs take place~\cite{darma}. Accordingly, photons acquire a split personality, traveling for some time as real photons -- which suffer EBL absorption -- and for some time as ALPs, which are unaffected by the EBL (as explicitly shown in the Appendix). As a consequence, $\tau_{\gamma} (E_0, z)$ gets replaced by the effective optical depth $\tau_{\gamma}^{\rm eff} (E_0, z)$, which is manifestly {\it smaller} than $\tau_{\gamma} (E_0, z)$ and is a monotonically increasing function of $E_0$ and $z$. The crux of the argument is that since the photon survival probability is now $P_{\gamma \to \gamma}^{{\rm ALP}} (E_0, z) = e^{- \tau_{\gamma}^{\rm eff} (E_0, z)}$, even a {\it small} decrease of $\tau_{\gamma}^{\rm eff} (E_0, z)$ with respect to $\tau_{\gamma}^{\rm FRV} (E_0, z)$ gives rise to a {\it large} increase of the photon survival probability, as compared to the case of conventional physics. So, the main consequence of photon-ALP oscillations is to {\it substantially attenuate} the EBL absorption and consequently to considerably enlarge the conventional $\gamma$-ray horizon~\cite{dgr2013}.
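To get a feeling for the size of this effect, note that the flux enhancement with respect to conventional physics is $e^{\tau_{\gamma} - \tau_{\gamma}^{\rm eff}}$; with purely illustrative optical depths:
\begin{verbatim}
import numpy as np

tau_cp, tau_eff = 5.0, 4.0                 # illustrative optical depths
print(np.exp(-tau_eff) / np.exp(-tau_cp))  # ~2.7x more surviving photons
\end{verbatim}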
\begin{figure}[h]
\begin{center}
\includegraphics[width=.30\textwidth]{feyALP}
\end{center}
\caption{\label{feyALP} Feynman diagram for the two-photon ALP coupling.}
\end{figure}
Actually, $P_{\gamma \to \gamma}^{\rm ALP} (E_0, z)$ can be computed exactly as outlined in the Appendix, where it is explained that the extragalactic magnetic field likely has a domain-like structure (at least to a first approximation), namely that ${\bf B}$ is homogeneous over a domain of size $L_{\rm dom}$ and has approximately the same strength $B$ in all domains, but its direction randomly changes from one domain to the next~\cite{kronberg,bbo1999,gr2001} (more about this in the Appendix). Furthermore, it will be shown that the considered ALP scenario contains only two free parameters $\xi \propto g_{a \gamma \gamma} \, B$ and $L_{\rm dom}$: they obviously show up in $P_{\gamma \to \gamma}^{\rm ALP} (E_0, z)$ even though such a dependence is not explicitly exhibited for notational simplicity. Realistic values of these parameters are $\xi = 0.1, 0.5, 1, 5$ and $L_{\rm dom} = 4 \, {\rm Mpc}, 10 \, {\rm Mpc}$, which will be regarded as our benchmark values. In reality, a third free parameter is the ALP mass $m$, but we assume $m < 10^{- 9} \, {\rm eV}$, which entails that $P_{\gamma \to \gamma}^{\rm ALP} (E_0, z)$ is {\it independent} of $m$ (as discussed in the Appendix).
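While the exact computation is deferred to the Appendix, the qualitative behavior can be mimicked by a drastically simplified transfer-matrix toy model: a massless ALP, a constant per-domain mixing phase, a constant per-domain optical depth, and a field orientation drawn at random in each domain. The following sketch is only meant to illustrate the mechanism, not to replace the computation of the Appendix; all parameter values are illustrative:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def photon_survival(n_dom=250, mix=0.05, tau_dom=0.01, seed=0):
    """Toy photon-ALP propagation across n_dom randomly oriented
    magnetic domains. `mix` is the dimensionless per-domain mixing
    phase; `tau_dom` is the EBL optical depth of a single domain."""
    rng = np.random.default_rng(seed)
    M = np.array([[-0.5j * tau_dom, 0, 0],     # absorption on photons;
                  [0, -0.5j * tau_dom, mix],   # mixing of gamma_parallel
                  [0, mix, 0]], dtype=complex) # with the ALP
    U = expm(-1j * M)                          # single-domain evolution
    T = np.eye(3, dtype=complex)
    for _ in range(n_dom):
        phi = rng.uniform(0.0, np.pi)          # random B orientation
        c, s = np.cos(phi), np.sin(phi)
        R = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=complex)
        T = R.T @ U @ R @ T                    # rotate, evolve, rotate back
    # photon survival, averaged over the two initial polarizations
    return 0.5 * sum(np.sum(np.abs(T[:2, k]) ** 2) for k in (0, 1))

print(photon_survival(), np.exp(-250 * 0.01))  # with vs without ALPs
\end{verbatim}
Even this crude model displays the qualitative feature discussed above: at large total optical depth, the survival probability in the presence of mixing exceeds the purely absorptive value $e^{-\tau_{\gamma}}$.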
At this point, knowing $P_{\gamma \to \gamma}^{\rm ALP} (E_0, z)$, we proceed as above. To wit, we first write the emitted flux of every source as
\begin{equation}
\label{a2bis}
\Phi_{\rm em}^{{\rm ALP}} \bigl(E_0 (1+z) \bigr) = \Bigl(P_{\gamma \to \gamma}^{{\rm ALP}} (E_0, z) \Bigr)^{- 1} \, K_{\rm obs} (z) \, \left(\frac{E_0}{300 \, {\rm GeV}} \right)^{-\Gamma_{\rm obs}(z)}~.
\end{equation}
Next, we best-fit $\Phi_{\rm em}^{{\rm ALP}} \bigl(E_0 (1+z) \bigr)$ in Eq. (\ref{a2bis}) to the single power-law expression
\begin{equation}
\label{a418q}
\Phi_{\rm em}^{\rm ALP, BF} \bigl(E_0 (1+z) \bigr) = K_{\rm em}^{\rm ALP} (z) \, \left(\frac{ (1+z) E_0}{300 \, {\rm GeV}} \right)^{- \Gamma_{\rm em}^{\rm ALP} (z)}
\end{equation}
over the energy range $\Delta E_0$ where the source is observed. It goes without saying that the remarks just below Eq. (\ref{a418}) apply here as well.
This procedure is performed for each benchmark value of $\xi$ and $L_{\rm dom}$. We report in Table~\ref{tabGammaL4} the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ for $L_{\rm dom} = 4 \, {\rm Mpc}$, $\xi = 0.1, 0.5, 1, 5$, and similarly Table~\ref{tabGammaL10} contains the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ for $L_{\rm dom} = 10 \, {\rm Mpc}$, $\xi = 0.1, 0.5, 1, 5$.
We can at this point carry out a statistical analysis of the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ as a function of $z$, again for each benchmark value of $\xi$ and $L_{\rm dom}$. We still use the least-squares method and try to fit the data with one parameter (horizontal straight line), two parameters (first-order polynomial) and three parameters (second-order polynomial). Finally, in order to quantify the statistical significance of each fit we compute the $\chi^2_{\rm red}$, whose values are reported in Table~\ref{tabChiL4} for $L_{\rm dom} = 4 \, {\rm Mpc}$, $\xi = 0.1, 0.5, 1, 5$, and in Table~\ref{tabChiL10} for $L_{\rm dom} = 10 \, {\rm Mpc}$, $\xi = 0.1, 0.5, 1, 5$.
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|cccc}
\hline
\multicolumn{1}{c|}{\# of fit parameters} &\multicolumn{1}{c|}{$\chi^2_{\rm red, CP}$} &\multicolumn{4}{c}{$\chi^2_{\rm red, ALP}$} \\
\hline
\hline
& & \ \ $\xi=0.1$ \ \ & \ \ $\xi=0.5$ \ \ & \ \ $\xi=1$ \ \ & \ \ $\xi=5$ \ \ \\
1 & 2.35 & 2.26 & {\bf 1.43} & 1.45 & 1.55 \\
2 & 1.83 & 1.79 & 1.46 & 1.48 & 1.57 \\
3 & 1.87 & 1.84 & 1.48 & 1.51 & 1.60 \\
\hline
\end{tabular}
\caption{The values of $\chi^2_{\rm red}$ are displayed for the three fitting models considered in the text. In the first column the number of parameters in each fitting model is reported. The second column concerns deabsorption according to conventional physics, using the EBL model of FRV. The other columns pertain to the photon-ALP oscillation scenario with the EBL still described by the FRV model, and exhibit the values of $\chi^2_{\rm red}$ for $L_{\rm dom} = 4 \, {\rm Mpc}$ and different choices of $\xi$. The value in bold-face is the minimum of $\chi^2_{\rm red}$.}
\label{tabChiL4}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|cccc}
\hline
\multicolumn{1}{c|}{\# of fit parameters} &\multicolumn{1}{c|}{$\chi^2_{\rm red, CP}$} &\multicolumn{4}{c}{$\chi^2_{\rm red, ALP}$} \\
\hline
\hline
& & \ \ $\xi=0.1$ \ \ & \ \ $\xi=0.5$ \ \ & \ \ $\xi=1$ \ \ & \ \ $\xi=5$ \ \ \\
1 & 2.35 & 2.05 & {\bf 1.39} & 1.51 & 1.55 \\
2 & 1.83 & 1.72 & 1.43 & 1.54 & 1.57 \\
3 & 1.87 & 1.77 & 1.47 & 1.57 & 1.60 \\
\hline
\end{tabular}
\caption{Same as Table~\ref{tabChiL4}, but with $L_{\rm dom} = 10 \, {\rm Mpc}$. The value in bold-face is the minimum of $\chi^2_{\rm red}$.}
\label{tabChiL10}
\end{center}
\end{table}
\vskip 2 cm
Hence, we see that the best-fitting procedure singles out the following two preferred cases.
\begin{itemize}
\item $L_{\rm dom} = 4 \, {\rm Mpc}$, $\xi = 0.5$, the {\it horizontal} straight regression line with equation $\Gamma_{\rm em}^{\rm ALP} = 2.52$ and $\chi^2_{\rm red, ALP} = 1.43$.
\item $L_{\rm dom} = 10 \, {\rm Mpc}$, $\xi = 0.5$, the {\it horizontal} straight regression line with equation $\Gamma_{\rm em}^{\rm ALP} = 2.58$ and $\chi^2_{\rm red, ALP} = 1.39$.
\end{itemize}
For simplicity, only for the two preferred cases are the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ plotted in the left panels of Fig.~\ref{fig4}. Also, only for these two cases are the values of $K_{\rm em}^{\rm ALP} (z)$ reported in Table~\ref{tabKappa} and plotted in the right panels of Fig.~\ref{fig4}.
Clearly, for any choice of $L_{\rm dom}$ the best-fitting procedure fixes the values of $\xi$ and $\chi^2_{\rm red, ALP}$. Even though we have taken realistic values for $L_{\rm dom}$, in the absence of any information about its actual value other interesting results may well emerge for different choices of $L_{\rm dom}$.
Let us summarize somewhat schematically the main achievements offered by the present scenario, especially emphasizing analogies and differences with respect to the situation based on conventional physics (considered in Sections III and IV).
\begin{itemize}
\item The existence of an ALP with mass $m < 10^{- 9} \, {\rm eV}$ and suitable realistic values of the parameters -- such as $\xi = 0.5$ and $L_{\rm dom} = 4 \, {\rm Mpc}$ or $L_{\rm dom} = 10 \, {\rm Mpc}$ -- gives rise to sizable photon-ALP oscillations in extragalactic space and yields a best-fit straight regression line for the $\{\Gamma_{\rm em}^{\rm ALP} (z) \}$ distribution which is just {\it horizontal}, in perfect agreement with the expectation based on physical intuition. We stress that this is an {\it automatic} consequence of the present model, and not an {\it ad hoc} requirement as in the case discussed in Section IV. The situation is illustrated in the left panels of Fig.~\ref{fig4}.
\item The lack of correlation between $F_{{\rm em}, \Delta E}$ and $\Gamma_{\rm em}$ found in the context of conventional physics remains true in the presence of photon-ALP oscillations, thereby implying that again observational selection biases play no r\^ole. This is shown in Fig.~\ref{figalp}.
\item A look at Table~\ref{tabLum} shows that the values of $F_{{\rm em}, \Delta E}^{\rm ALP}$ are systematically slightly larger than those of $F_{{\rm em}, \Delta E}^{\rm CP}$. This fact is in line with our expectations, and the reason is as follows. Since the values of $F_{{\rm obs}, \Delta E}$ are obviously fixed, such a difference has to be due to the photon propagation in the presence of photon-ALP oscillations and is the result of two competing effects. One of them is that at energies slightly larger than $100 \, {\rm GeV}$ the EBL absorption is negligible. Accordingly, in the presence of many magnetic domains (as in the considered situation) it is well known that the propagating photon flux gets reduced by a factor of $1/3$ (i.e.\ to $2/3$ of its emitted value), due to the equipartition among the three degrees of freedom (2 photon polarization states and 1 ALP state), thereby producing a dimming of the source as compared to conventional physics~\cite{grz2002}. The other effect occurs at larger energies, where EBL absorption becomes important. Correspondingly, photon-ALP oscillations enhance the observed photon flux as compared to conventional physics, but since the emitted photon flux is larger at lower energies, $F_{{\rm em}, \Delta E}^{\rm ALP}$ has to be larger than $F_{{\rm em}, \Delta E}^{\rm CP}$ in order to produce the same $F_{{\rm obs}, \Delta E}$.
\item As in Section IV, it is interesting to compute the width $\Delta \Gamma_{\rm em}^{\rm ALP}$ of the strip -- centered on the best-fit straight regression line $\Gamma_{\rm em}^{\rm ALP}$ -- which encompasses $95 \, \%$ of the considered blazars. We
find $\Delta \Gamma_{\rm em}^{\rm ALP} = 0.70$ which is $28 \, \%$ of the value $\Gamma_{\rm em}^{\rm ALP} = 2.52$ for
$L_{\rm dom} = 4 \, {\rm Mpc}$, and $\Delta \Gamma_{\rm em}^{\rm ALP} = 0.60$ which is $23 \, \%$ of the value $\Gamma_{\rm em}^{\rm ALP} = 2.58$ for $L_{\rm dom} = 10 \, {\rm Mpc}$. These widths are considerably smaller than what we found within conventional physics in Section IV, namely $\Delta \Gamma_{\rm em}^{\rm CP} = 0.94$.
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=.50\textwidth]{fig4L-1.pdf}\includegraphics[width=.50\textwidth]{kemALPL4-1.pdf}
\includegraphics[width=.50\textwidth]{fig5L-1.pdf}\includegraphics[width=.50\textwidth]{kemALPL10-1.pdf}
\end{center}
\caption{\label{fig4} {\it Left panels}: The values of the slope $\Gamma_{\rm em}^{\rm ALP}$ are plotted versus the source redshift $z$ for all considered blazars. {\it Right panels}: The values of the normalization constant $K_{\rm em}^{\rm ALP}$ are similarly plotted versus the source redshift $z$ for the same blazars in the left panels. The {\it upper row} refers to the case $\xi = 0.5$ and $L_{\rm dom} = 4 \, {\rm Mpc}$, while the {\it lower row} to the case $\xi = 0.5$ and $L_{\rm dom} = 10 \, {\rm Mpc}$.}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=.50\textwidth]{fig4R-1.pdf}\includegraphics[width=.50\textwidth]{fig5R-1.pdf}
\end{center}
\caption{\label{fig419} Same as the left panels of Fig.~\ref{fig4} but with superimposed the best-fit horizontal straight regression
line. Moreover, in either case the grey band encompasses $95 \, \%$ of the considered sources.}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=.50\textwidth]{FluxVsSlopeALPL4-1.pdf}\includegraphics[width=.50\textwidth]{FluxVsSlopeALPL10-1.pdf}
\end{center}
\caption{\label{figalp} {\it Left panel}: $F_{{\rm em}, \Delta E}^{\rm ALP}$ is plotted versus $\Gamma_{\rm em}^{\rm ALP}$ for $\xi = 0.5$ and $L_{\rm dom} = 4 \, {\rm Mpc}$. {\it Right panel}: Same as left panel but for $\xi = 0.5$ and $L_{\rm dom} = 10 \, {\rm Mpc}$.}
\end{figure}
\newpage
\section{DISCUSSION AND CONCLUSIONS}
The main goal of the investigation reported in this paper concerns a possible correlation between the distribution of VHE blazar {\it emitted spectra} and the {\it redshift}.
Broadly speaking, two logically distinct results have been obtained, which can somewhat schematically be summarized as follows.
Working within conventional physics -- and in particular within the two standard VHE photon emission models -- we have shown that the emitted slope distribution $\{\Gamma_{\rm em}^{\rm CP} (z) \}$ exhibits a {\it correlation with} $z$, since the associated best-fit regression line is a {\it decreasing} function of $z$ given by $\Gamma_{\rm em}^{\rm CP} (z) = 2.68 - 2.21 \, z$, with $\chi^2_{\rm red} = 1.83$. This fact runs against physical intuition, since it is hard to understand how the sources can get to know their redshifts so as to adjust their $\Gamma_{\rm em} (z)$ values in order to reproduce such a statistical correlation. Indeed, this situation can be explained neither by cosmological evolutionary effects nor by observational selection biases. A further consequence is that blazars with harder spectra are found {\it only} at larger redshifts.
We have shown that a way out of this conundrum involves new physics in the form of ALPs, with photon-ALP oscillations taking place in the extragalactic magnetic field. Their net result is ultimately to substantially reduce the level of EBL absorption. We have focussed our attention on two realistic benchmark cases: $L_{\rm dom} = 4 \, {\rm Mpc}$ and $L_{\rm dom} = 10 \, {\rm Mpc}$. After having worked out the effect of photon-ALP oscillations on the VHE emitted spectra starting from the observed ones, we have discovered that in either case the best-fit regression line pertaining to the $\{\Gamma_{\rm em}^{\rm ALP} (z) \}$ distribution turns out to be a {\it horizontal} straight line in the $\Gamma_{\rm em}^{\rm ALP} - z$ plane, hence {\it independent} of $z$. This circumstance looks astonishing. Of course, by changing the effective level of EBL absorption we obviously expect the $z$-dependence of the best-fit $\{\Gamma_{\rm em}^{\rm ALP} (z)\}$ distribution to differ from that of the $\{\Gamma_{\rm em}^{\rm CP} (z) \}$ distribution. As a consequence, the inclination of the best-fit straight regression line in the $\Gamma_{\rm em} - z$ plane should change. But that it becomes exactly horizontal -- which is the only possibility in agreement with physical expectation -- looks almost like a miracle. In addition, the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ for the individual sources turn out to have a fairly small scatter about the considered best-fit straight regression line.
Actually, the considered ALP framework possesses further nice features.
\begin{itemize}
\item Even from a purely statistical point of view it is better than the conventional scenario, in which we have found $\chi^2_{\rm red} = 1.83$ for the best-fit straight regression line (Section III) or $\chi^2_{\rm red} = 2.35$ for the horizontal fitting straight line (Section IV). Instead, the best-fit straight regression lines shown in Fig.~\ref{fig419} now have $\chi^2_{\rm red} = 1.43$ for $L_{\rm dom} = 4 \, {\rm Mpc}$ and $\chi^2_{\rm red} = 1.39$ for $L_{\rm dom} = 10 \, {\rm Mpc}$. Note that this result has been obtained with a single free parameter.
\item A new picture arises wherein a sharp distinction between {\it fundamental physics} and {\it boundary conditions} naturally emerges. The above discussion implies that $95 \, \%$ of the considered sources have a {\it small spread} in the values of $\Gamma_{\rm em}^{\rm ALP} (z)$. Specifically, $\Gamma_{\rm em}^{\rm ALP} (z)$ departs from the value of the best-fit straight regression line by {\it at most} $14 \, \%$ for $\xi = 0.5$ and $L_{\rm dom} = 4 \, {\rm Mpc}$ and by at most $12 \, \%$ for $\xi = 0.5$ and $L_{\rm dom} = 10 \, {\rm Mpc}$. Actually, the small scatter in the values of $\Gamma_{\rm em}^{\rm ALP} (z)$ implies that the emission mechanism is basically identical for all sources. On the other hand, the larger scatter in the values of $K_{\rm em} (z)$ -- presumably unaffected by photon-ALP oscillations when error bars are taken into account -- is naturally traced back to the different environmental state of each source, such as the accretion rate. A natural question should finally be addressed. How is it possible that the large scatter in the $\{\Gamma_{\rm obs} (z) \}$ distribution exhibited in the left panel of Fig.~\ref{fig1} arises from the small scatter in the $\{\Gamma_{\rm em}^{\rm ALP} (z) \}$ distribution shown in the left panels of Fig.~\ref{fig4}? The answer is very simple: most of the scatter in the $\{\Gamma_{\rm obs} (z) \}$ distribution arises from the large scatter in the source redshifts.
\end{itemize}
Before closing this Section, we find it worthwhile to put the result of our investigation into its proper perspective.
During the last decade the interest in ALPs has been steadily growing, and various reasons have conspired towards this circumstance. With different motivations, the astrophysical implications of ALPs have been addressed for the last twenty years~\cite{massot1995,bcr1996,gmt1996,massot1997,carrol1998,csaki2002a,csaki2002b,grz2002,cf2003}. Certainly the claimed discovery of an ALP by the PVLAS collaboration~\cite{pvlas1} in 2005 -- even if subsequently withdrawn by the PVLAS collaboration itself~\cite{pvlas2} -- has provided a stimulus to look for astrophysical cross-checks~\cite{dupays,mr2005,ci2007,fairbairn,mirizzi2007}. Soon thereafter, it has been realized that several superstring theories predict the existence not merely of a single kind of ALP but more generally of a family of ALPs with very small masses~\cite{turok1996,string1,string2,string3,string4,string5,axiverse,abk2010,cicoli2012,dias2014}. As a matter of fact, the study of the relevance of ALPs for high-energy astrophysics has continued until the present~
\cite{darma,bischeri,sigl2007,dmr2008,shs2008,dmpr,mm2009,crpc2009,cg2009,bds2009,prada1,bmr2010,mrvn2011,prada2,gh2011,pch2011,frt2011,hornsmeyer2012,hmmr2012,wb2012,hmmmr2012,trgb2012,friedland2013,wp2013,cmarsh2013,mhr2013,hessbound,wb2013z,gr2013,hmpks,mmc2014,acmpw2014,straniero,rt2014,tgr2014,wb2014,hc2014,mc2014,payez2015}.
A tension between the predicted EBL level causing photon absorption and observations in the VHE range has been claimed from time to time (see e.g.~\cite{2000protheroe, aharonian2006}), but then a better determination of the EBL properties has shown that no problem exists. As already stressed at the beginning of Section III, nowadays the situation is different, since different techniques basically lead to the same EBL model. Yet, it has recently been claimed that VHE observations require an EBL level even lower than that predicted by the minimal EBL model normalized to the galaxy counts only~\cite{kd2010}. This is the so-called {\it pair-production anomaly}, which is based on the Kolmogorov test and so does not rely upon the estimated errors. It has thoroughly been quantified by a global statistical analysis of a large sample of observed blazars, showing that measurements in the regime of large optical depth deviate by 4.2 $\sigma$ from measurements in the optically thin regime~\cite{hornsmeyer2012}. Systematic effects have been shown to be insufficient to account for the pair-production anomaly, which therefore looks real. Actually, the discovery of new blazars at large redshift, like the observation of PKS 1424+240, has strengthened the case for the pair-production anomaly~\cite{hmpks}. Quite recently, the existence of the pair-production anomaly has been questioned by using a new EBL model and a $\chi^2$ test, in which errors instead play an important role~\cite{biteau}. Because the Kolmogorov test looks more robust in that it avoids taking errors into account, we tend to believe that the pair-production anomaly is indeed at the level of 4.2 $\sigma$.
Amazingly, the existence of photon-ALP oscillations with the {\it same} realistic choice of the model parameters provides an excellent explanation for three {\it completely different} phenomena occurring in the VHE band.
\begin{itemize}
\item The pair-production anomaly is naturally explained in terms of photon-ALP oscillations in extragalactic magnetic fields~\cite{hornsmeyer2012,mhr2013,hmpks}. This should hardly come as a surprise, since -- as already explained -- the ultimate effect of photon-ALP oscillations is to substantially lower the effective EBL absorption level.
\item According to conventional physics, flat spectrum radio quasars (FSRQs) should not emit above $20 - 30 \, {\rm GeV}$. This is due to the fact that higher energy photons produced in the jet enter -- at a distance of about $1 \, {\rm kpc}$ from the centre -- the so-called broad-line region (BLR), whose high density of ultraviolet photons gives rise to an optical depth $\tau \simeq 15$ owing to the same $\gamma \gamma \to e^+ e^-$ absorption process considered above. Even in this context, photon-ALP oscillations substantially lower the photon absorption level inside the BLR while still staying within the standard blazar models, thereby allowing VHE photons to escape the BLR and be emitted, in remarkable quantitative agreement with observations~\cite{trgb2012}. We stress that the detection of VHE photons from FSRQs still represents a serious challenge for conventional models.
\item Our findings described in the present paper show that the conventional scenario of photon propagation in extragalactic space is seriously challenged. Nevertheless, a physically satisfactory picture emerges by considering photon-ALP oscillations in the extragalactic magnetic field.
\end{itemize}
Altogether, such a situation evidently provides strong preliminary evidence for the existence of an ALP, whose parameters make
it a good candidate for cold dark matter~\cite{arias}. Moreover, it looks tantalizing that the issue can definitively be settled not only with the advent of the new gamma-ray detectors like the CTA (Cherenkov Telescope Array)~\cite{cta}, HAWC (High-Altitude Water Cherenkov Observatory)~\cite{hawc}, GAMMA-400 (Gamma Astronomical Multifunctional Modular Apparatus)~\cite{gamma400}, LHAASO (Large High Altitude Air Shower Observatory)~\cite{lhaaso} and HiSCORE (Hundred Square km Cosmic Origin Explorer)~\cite{hiscore}, but also thanks to laboratory data based on the planned experiments ALPS II (Any Light Particle Search)~\cite{alpsii,baere} and IAXO (International Axion Observatory)~\cite{iaxo},
which will have the capability to discover an ALP with the properties assumed in the present analysis. Also experiments exploiting the techniques discussed in~\cite{avignone1,avignone2,avignone3} are quite promising for the laboratory detection of an ALP of this kind.
\section*{ACKNOWLEDGMENTS}
We thank Patrizia Caraveo and Fabrizio Tavecchio for very useful discussions. We also thank Abelardo Moralejo, Andreas Ringwald and Sergey Troitsky for their comments. The work of M. R. is supported by INFN TAsP and CTA grants.
\newpage
\begin{table}[h]
\begin{center}
\begin{tabular}{lcclc}
\hline
Source \ \ \ \ & $z$ \ \ \ \ \ & ${\Gamma}_{\rm obs}$ \ \ \ \ & \, $\Delta E_0$ [TeV] & \,\,\,\, $K_{\rm obs} \,[\rm cm^{-2} \, s^{-1} \, TeV^{-1}]$ \\
\hline
\hline
3C 66B \ \ \ \ & 0.0215 \ \ \ \ & $3.10 \pm 0.37$ \ \ \ \ & \, $0.12-1.8$ & $1.74 \cdot 10^{-11}$\\
Mrk 421 \ \ \ \ & 0.031 \ \ \ \ & $2.20 \pm 0.22 $ \ \ \ \ & \, $0.13-2.7$ & $6.43 \cdot 10^{-10}$\\
Mrk 501 \ \ \ \ & 0.034 \ \ \ \ & $2.72 \pm 0.18$ \ \ \ \ & \, $0.21-2.5$ & $1.53 \cdot 10^{-10}$\\
1ES 2344+514 \ \ \ \ & 0.044 \ \ \ \ & $2.95 \pm 0.23 $ \ \ \ \ & \, $0.17-4.0$ & $5.42 \cdot 10^{-11}$\\
Mrk 180 \ \ \ \ & 0.045 \ \ \ \ & $3.30 \pm 0.73$ \ \ \ \ & \, $0.18-1.3$ & $4.50 \cdot 10^{-11}$\\
1ES 1959+650 \ \ \ \ & 0.048 \ \ \ \ & $2.72 \pm 0.24$ \ \ \ \ & \, $0.19-1.5$ & $8.99 \cdot 10^{-11}$\\
1ES 1959+650 \ \ \ \ & 0.048 \ \ \ \ & $2.58 \pm 0.27$ \ \ \ \ & \, $0.19-2.4$ & $6.03 \cdot 10^{-11}$\\
AP LIB \ \ \ \ & 0.049 \ \ \ \ & $2.50 \pm 0.22$ \ \ \ \ & \, $0.30 -3.0$ & ?\\
1ES 1727+502 \ \ \ \ & 0.055 \ \ \ \ & $2.70 \pm 0.54$ \ \ \ \ & \, $0.10 -0.6$ & $9.60 \cdot 10^{-12}$\\
PKS 0548-322 \ \ \ \ & 0.069 \ \ \ \ & $2.86 \pm 0.35$ \ \ \ \ & \, $0.32-3.5$ & $1.10 \cdot 10^{-11}$\\
BL Lacertae \ \ \ \ & 0.069 \ \ \ \ & $3.60 \pm 0.43$ \ \ \ \ & \, $0.15-0.7$ & $5.80 \cdot 10^{-10}$\\
PKS 2005-489 \ \ \ \ & 0.071 \ \ \ \ & $3.20 \pm 0.19$ \ \ \ \ & \, $0.32-3.3$ & $3.44 \cdot 10^{-11}$\\
RGB J0152+017 \ \ \ \ & 0.08 \ \ \ \ & $2.95 \pm 0.41$ \ \ \ \ & \, $0.32-3.0$ & $1.99 \cdot 10^{-11}$\\
1ES 1741+196 \ \ \ \ & 0.083 \ \ \ \ & ? \ \ \ \ & \, \,\,\,\,\,\,\,\,\,\,\, ? & ?\\
SHBL J001355.9-185406 \ \ \ \ & 0.095 \ \ \ \ & $3.40 \pm 0.54$ \ \ \ \ & \, $0.42-2.0$ & $7.05 \cdot 10^{-12}$\\
W Comae \ \ \ \ & 0.102 \ \ \ \ & $3.81 \pm 0.49$ \ \ \ \ & \, $0.27-1.1$ & $5.98 \cdot 10^{-11}$\\
1ES 1312-423 \ \ \ \ & 0.105 \ \ \ \ & $2.85 \pm 0.51$ \ \ \ \ & \, $0.36-4.0$ & $5.85 \cdot 10^{-12}$ \\
VER J0521+211 \ \ \ \ & 0.108 \ \ \ \ & $3.44 \pm 0.36$ \ \ \ \ & \, $0.22-1.1$ & $5.35 \cdot 10^{-11}$\\
PKS 2155-304 \ \ \ \ & 0.116 \ \ \ \ & $3.53 \pm 0.12$ \ \ \ \ & \, $0.21-4.1$ & $1.27 \cdot 10^{-10}$\\
B3 2247+381 \ \ \ \ & 0.1187 \ \ \ \ & $3.20 \pm 0.71$ \ \ \ \ & \, $0.15-0.84$ & $1.40 \cdot 10^{-11}$\\
RGB J0710+591 \ \ \ \ & 0.125 \ \ \ \ & $2.69 \pm 0.33$ \ \ \ \ & \, $0.37-3.4$ & $1.49 \cdot 10^{-11}$\\
H 1426+428 \ \ \ \ & 0.129 \ \ \ \ & $3.55 \pm 0.49$ \ \ \ \ & \, $0.28-0.43$ & $1.46 \cdot 10^{-10}$\\
1ES 1215+303 \ \ \ \ & 0.13 \ \ \ \ & $3.60 \pm 0.50$ \ \ \ \ & \, $0.30-0.85$ & $2.30 \cdot 10^{-11}$\\
1ES 1215+303 \ \ \ \ & 0.13 \ \ \ \ & $2.96 \pm 0.21$ \ \ \ \ & $0.095-1.3$ & $2.27 \cdot 10^{-11}$\\
RX J1136.5+6737 \ \ \ \ & 0.1342 \ \ \ \ & ? \ \ \ \ & \, \,\,\,\,\,\,\,\,\,\,\, ? & ?\\
1ES 0806+524 \ \ \ \ & 0.138 \ \ \ \ & $3.60 \pm 1.04$ \ \ \ \ & \, $0.32-0.63$ & $1.92 \cdot 10^{-11}$\\
1ES 0229+200 \ \ \ \ & 0.14 \ \ \ \ & $2.50 \pm 0.21$ \ \ \ \ & \, $0.58-11$ & $1.12 \cdot 10^{-11}$\\
1RXS J101015.9-311909 \ \ \ \ & 0.142639 \ \ \ \ & $3.08 \pm 0.47$ \ \ \ \ & \, $0.26-2.2$ & $7.63 \cdot 10^{-12}$\\
H 2356-309 \ \ \ \ & 0.165 \ \ \ \ & $3.09 \pm 0.26$ \ \ \ \ & \, $0.22-0.9$ & $1.24 \cdot 10^{-11}$\\
RX J0648.7+1516 \ \ \ \ & 0.179 \ \ \ \ & $4.40 \pm 0.85$ \ \ \ \ & \, $0.21-0.47$ & $2.30 \cdot 10^{-11}$\\
1ES 1218+304 \ \ \ \ & 0.182 \ \ \ \ & $3.08 \pm 0.39$ \ \ \ \ & \, $0.18-1.4$ & $3.62 \cdot 10^{-11}$\\
1ES 1101-232 \ \ \ \ & 0.186 \ \ \ \ & $2.94 \pm 0.22$ \ \ \ \ & \, $0.28-3.2$ & $1.94 \cdot 10^{-11}$\\
1ES 0347-121 \ \ \ \ & 0.188 \ \ \ \ & $3.10 \pm 0.25$ \ \ \ \ & \, $0.30 -3.0$ & $1.89 \cdot 10^{-11}$\\
RBS 0413 \ \ \ \ & 0.19 \ \ \ \ & $3.18 \pm 0.74$ \ \ \ \ & \, $0.30 -0.85$ & $1.38 \cdot 10^{-11}$\\
RBS 0723 \ \ \ \ & 0.198 \ \ \ \ & ? \ \ \ \ & \, \,\,\,\,\,\,\,\,\,\,\, ? & ?\\
1ES 1011+496 \ \ \ \ & 0.212 \ \ \ \ & $4.00 \pm 0.54$ \ \ \ \ & \, $0.16-0.6$ & $3.95 \cdot 10^{-11}$\\
MS 1221.8+2452 \ \ \ \ & 0.218 \ \ \ \ & ? \ \ \ \ & \, \,\,\,\,\,\,\,\,\,\,\, ? & ?\\
PKS 0301-243 \ \ \ \ & 0.2657 \ \ \ \ & $4.60 \pm 0.73$ \ \ \ \ & \, $0.25-0.52$ & $8.56 \cdot 10^{-12}$\\
1ES 0414+009 \ \ \ \ & 0.287 \ \ \ \ & $3.45 \pm 0.32$ \ \ \ \ & \, $0.18-1.1$ & $6.03 \cdot 10^{-12}$\\
S5 0716+714 \ \ \ \ & 0.31 \ \ \ \ & $3.45 \pm 0.58$ \ \ \ \ & \, $0.18-0.68$ & $1.40 \cdot 10^{-10}$\\
1ES 0502+675 \ \ \ \ & 0.341 \ \ \ \ & $3.92 \pm 0.36$ \ \ \ \ & \, \,\,\,\,\,\,\,\,\,\,\, ? & ?\\
PKS 1510-089 \ \ \ \ & 0.361 \ \ \ \ & $5.40 \pm 0.76$ \ \ \ \ & \, $0.14-0.32$ & $6.97 \cdot 10^{-12}$\\
3C 66A \ \ \ \ & 0.41 \ \ \ \ & $4.10 \pm 0.72$ \ \ \ \ & \, $0.23-0.47$ & $4.00 \cdot 10^{-11}$\\
PKS 1222+216 \ \ \ \ & 0.432 \ \ \ \ & $3.75 \pm 0.34$ \ \ \ \ & \, $0.08-0.36$ & $1.71 \cdot 10^{-10}$\\
1ES 0647+250 \ \ \ \ & 0.45 \ \ \ \ & ? \ \ \ \ & \, \,\,\,\,\,\,\,\,\,\,\, ? & ?\\
PG 1553+113 \ \ \ \ & 0.5 \ \ \ \ & $4.50 \pm 0.32$ \ \ \ \ & \, $0.23-1.1$ & $4.68 \cdot 10^{-11}$\\
3C 279 \ \ \ \ & 0.5362 \ \ \ \ & $4.10 \pm 0.73$ \ \ \ \ & \, $0.08-0.46$ & $9.86 \cdot 10^{-11}$\\
PKS 1424+240 \ \ \ \ & $\ge$ 0.6035 \ \ \ \ & $3.80 \pm 0.58$ \ \ \ \ & \, $0.14-0.5$ & $1.09 \cdot 10^{-11}$\\
\hline
\end{tabular}
\caption{Considered VHE blazars with their redshift $z$, observed spectral slope ${\Gamma}_{\rm obs}$, observed energy range $\Delta E_0$ and normalization constant $K_{\rm obs}$. Statistical and systematic errors are added in quadrature to produce the total error reported on the measured spectral slope. When only statistical errors are quoted, systematic errors are taken to be 0.1 for H.E.S.S., 0.15 for VERITAS, and 0.2 for MAGIC. Sources with question marks lack the information needed to perform our analysis and are discarded. For PKS 1424+240 only a lower limit on the redshift exists in the literature: thus, it is neglected in our analysis.}
\label{tabSource}
\end{center}
\end{table}
\newpage
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|cccc}
\hline
\multicolumn{1}{c|}{Source} &\multicolumn{1}{c|}{\, \, $\Gamma_{\rm em}^{\rm CP}$ \, \,} &\multicolumn{4}{c}{$\Gamma_{\rm em}^{\rm ALP}$} \\
\hline
\hline
& \ \ & \ \ $\xi=0.1$ \ \ & \ \ $\xi=0.5$ \ \ & \ \ $\xi=1$ \ \ & \ \ $\xi=5$ \ \ \\
3C 66B & 3.00 & 3.00 & 3.00 & 3.01 & 3.03 \\
Mrk 421 & 2.05 & 2.05 & 2.05 & 2.06 & 2.10 \\
Mrk 501 & 2.54 & 2.54 & 2.54 & 2.55 & 2.59 \\
1ES 2344+514 & 2.71 & 2.71 & 2.71 & 2.73 & 2.79 \\
Mrk 180 & 3.07 & 3.07 & 3.07 & 3.09 & 3.14 \\
1ES 1959+650 & 2.46 & 2.46 & 2.47 & 2.49 & 2.55 \\
1ES 1959+650 & 2.32 & 2.32 & 2.32 & 2.35 & 2.40 \\
AP LIB & 2.21 & 2.21 & 2.22 & 2.24 & 2.31 \\
1ES 1727+502 & 2.52 & 2.52 & 2.52 & 2.54 & 2.58 \\
PKS 0548-322 & 2.44 & 2.44 & 2.45 & 2.51 & 2.58 \\
BL Lacertae & 3.29 & 3.29 & 3.30 & 3.33 & 3.39 \\
PKS 2005-489 & 2.77 & 2.77 & 2.79 & 2.85 & 2.91 \\
RGB J0152+017 & 2.47 & 2.47 & 2.48 & 2.56 & 2.63 \\
1ES 1741+196 & ? & ? & ? & ? & ? \\
SHBL J001355.9-185406 \, & 2.81 & 2.81 & 2.84 & 2.94 & 3.01 \\
W Comae & 3.17 & 3.17 & 3.20 & 3.31 & 3.38 \\
1ES 1312-423 & 2.17 & 2.17 & 2.21 & 2.34 & 2.40 \\
VER J0521+211 & 2.79 & 2.79 & 2.82 & 2.93 & 3.00 \\
PKS 2155-304 & 2.81 & 2.81 & 2.86 & 3.00 & 3.05 \\
B3 2247+381 & 2.60 & 2.60 & 2.64 & 2.74 & 2.80 \\
RGB J0710+591 & 1.89 & 1.89 & 1.96 & 2.12 & 2.16 \\
H 1426+428 & 2.85 & 2.85 & 2.89 & 3.01 & 3.08 \\
1ES 1215+303 & 2.75 & 2.75 & 2.81 & 2.97 & 3.03 \\
1ES 1215+303 & 2.35 & 2.35 & 2.40 & 2.51 & 2.55 \\
RX J1136.5+6737 & ? & ? & ? & ? & ? \\
1ES 0806+524 & 2.70 & 2.70 & 2.77 & 2.93 & 3.00 \\
1ES 0229+200 & 0.61 & 0.62 & 1.20 & 1.43 & 1.26 \\
1RXS J101015.9-311909 & 2.20 & 2.20 & 2.29 & 2.45 & 2.49 \\
H 2356-309 & 2.05 & 2.05 & 2.17 & 2.35 & 2.40 \\
RX J0648.7+1516 & 3.45 & 3.45 & 3.55 & 3.71 & 3.77 \\
1ES 1218+304 & 1.97 & 1.97 & 2.13 & 2.31 & 2.34 \\
1ES 1101-232 & 1.72 & 1.73 & 1.96 & 2.13 & 2.13 \\
1ES 0347-121 & 1.87 & 1.87 & 2.11 & 2.28 & 2.28 \\
RBS 0413 & 1.88 & 1.89 & 2.07 & 2.28 & 2.32 \\
RBS 0723 & ? & ? & ? & ? & ? \\
1ES 1011+496 & 2.90 & 2.90 & 3.06 & 3.22 & 3.26 \\
MS 1221.8+2452 & ? & ? & ? & ? & ? \\
PKS 0301-243 & 2.93 & 2.93 & 3.27 & 3.46 & 3.49 \\
1ES 0414+009 & 1.65 & 1.65 & 2.12 & 2.26 & 2.25 \\
S5 0716+714 & 1.60 & 1.61 & 2.07 & 2.22 & 2.22 \\
1ES 0502+675 & ? & ? & ? & ? & ? \\
PKS 1510-089 & 4.02 & 4.02 & 4.33 & 4.45 & 4.48 \\
3C 66A & 1.53 & 1.54 & 2.31 & 2.40 & 2.39 \\
PKS 1222+216 & 2.46 & 2.46 & 2.79 & 2.87 & 2.89 \\
1ES 0647+250 & ? & ? & ? & ? & ? \\
PG 1553+113 & 0.98 & 1.09 & 2.52 & 2.30 & 2.16 \\
3C 279 & 2.05 & 2.06 & 2.71 & 2.74 & 2.73 \\
PKS 1424+240 & $\le$ 0.44 & $\le$ 0.50 & $\le$ 1.66 & $\le$ 1.61 & $\le$ 1.56 \\
\hline
\end{tabular}
\caption{Blazars considered in Table \ref{tabSource}. For each of them (first column), the deabsorbed value $\Gamma_{\rm em}$ is reported for different deabsorbing situations. The second column concerns deabsorption according to conventional physics, using the EBL model of FRV. The subsequent columns pertain to the photon-ALP oscillation scenario with the EBL still described by the model of FRV, and report the values of $\Gamma_{\rm em}$ for different choices of our benchmark values of the model parameter $\xi$ and for $L_{\rm dom} = 4 \, {\rm Mpc}$. The total error is the same as for ${\Gamma}_{\rm obs}$ and reported in Table \ref{tabSource} (for more details see text). Sources with question marks lack the information needed to perform our analysis and are discarded. For PKS 1424+240 only a lower limit for its $z$ exists in the literature: thus, it is neglected in our analysis.}
\label{tabGammaL4}
\end{center}
\end{table}
\newpage
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|cccc}
\hline
\multicolumn{1}{c|}{Source} &\multicolumn{1}{c|}{\, \, $\Gamma_{\rm em}^{\rm CP}$ \, \,} &\multicolumn{4}{c}{$\Gamma_{\rm em}^{\rm ALP}$} \\
\hline
\hline
& \ \ & \ \ $\xi=0.1$ \ \ & \ \ $\xi=0.5$ \ \ & \ \ $\xi=1$ \ \ & \ \ $\xi=5$ \ \ \\
3C 66B & 3.00 & 3.00 & 3.01 & 3.01 & 3.04 \\
Mrk 421 & 2.05 & 2.05 & 2.05 & 2.07 & 2.10 \\
Mrk 501 & 2.54 & 2.54 & 2.54 & 2.57 & 2.60 \\
1ES 2344+514 & 2.71 & 2.71 & 2.72 & 2.76 & 2.79 \\
Mrk 180 & 3.07 & 3.07 & 3.08 & 3.12 & 3.14 \\
1ES 1959+650 & 2.46 & 2.46 & 2.48 & 2.52 & 2.55 \\
1ES 1959+650 & 2.32 & 2.32 & 2.33 & 2.38 & 2.41 \\
AP LIB & 2.21 & 2.21 & 2.23 & 2.28 & 2.31 \\
1ES 1727+502 & 2.52 & 2.52 & 2.53 & 2.56 & 2.58 \\
PKS 0548-322 & 2.44 & 2.44 & 2.48 & 2.56 & 2.58 \\
BL Lacertae & 3.29 & 3.29 & 3.31 & 3.37 & 3.39 \\
PKS 2005-489 & 2.77 & 2.77 & 2.82 & 2.89 & 2.92 \\
RGB J0152+017 & 2.47 & 2.47 & 2.53 & 2.60 & 2.63 \\
1ES 1741+196 & ? & ? & ? & ? & ? \\
SHBL J001355.9-185406 \, & 2.81 & 2.81 & 2.90 & 2.99 & 3.01 \\
W Comae & 3.17 & 3.17 & 3.27 & 3.36 & 3.38 \\
1ES 1312-423 & 2.17 & 2.17 & 2.30 & 2.38 & 2.40 \\
VER J0521+211 & 2.79 & 2.79 & 2.89 & 2.98 & 3.00 \\
PKS 2155-304 & 2.81 & 2.81 & 2.95 & 3.03 & 3.05 \\
B3 2247+381 & 2.60 & 2.60 & 2.71 & 2.78 & 2.80 \\
RGB J0710+591 & 1.89 & 1.89 & 2.07 & 2.15 & 2.16 \\
H 1426+428 & 2.85 & 2.85 & 2.97 & 3.06 & 3.08 \\
1ES 1215+303 & 2.75 & 2.75 & 2.92 & 3.01 & 3.03 \\
1ES 1215+303 & 2.35 & 2.35 & 2.47 & 2.54 & 2.56 \\
RX J1136.5+6737 & ? & ? & ? & ? & ? \\
1ES 0806+524 & 2.70 & 2.71 & 2.88 & 2.98 & 3.00 \\
1ES 0229+200 & 0.61 & 0.65 & 1.41 & 1.36 & 1.25 \\
1RXS J101015.9-311909 & 2.20 & 2.20 & 2.41 & 2.48 & 2.49 \\
H 2356-309 & 2.05 & 2.06 & 2.30 & 2.38 & 2.40 \\
RX J0648.7+1516 & 3.45 & 3.45 & 3.67 & 3.75 & 3.77 \\
1ES 1218+304 & 1.97 & 1.97 & 2.27 & 2.33 & 2.34 \\
1ES 1101-232 & 1.72 & 1.73 & 2.10 & 2.14 & 2.13 \\
1ES 0347-121 & 1.87 & 1.88 & 2.26 & 2.29 & 2.28 \\
RBS 0413 & 1.88 & 1.89 & 2.24 & 2.31 & 2.32 \\
RBS 0723 & ? & ? & ? & ? & ? \\
1ES 1011+496 & 2.90 & 2.90 & 3.19 & 3.25 & 3.26 \\
MS 1221.8+2452 & ? & ? & ? & ? & ? \\
PKS 0301-243 & 2.93 & 2.94 & 3.43 & 3.48 & 3.49 \\
1ES 0414+009 & 1.65 & 1.67 & 2.25 & 2.26 & 2.25 \\
S5 0716+714 & 1.60 & 1.62 & 2.20 & 2.22 & 2.22 \\
1ES 0502+675 & ? & ? & ? & ? & ? \\
PKS 1510-089 & 4.02 & 4.03 & 4.43 & 4.47 & 4.48 \\
3C 66A & 1.53 & 1.58 & 2.40 & 2.39 & 2.39 \\
PKS 1222+216 & 2.46 & 2.48 & 2.86 & 2.88 & 2.89 \\
1ES 0647+250 & ? & ? & ? & ? & ? \\
PG 1553+113 & 0.98 & 1.42 & 2.38 & 2.22 & 2.16 \\
3C 279 & 2.05 & 2.13 & 2.74 & 2.74 & 2.73 \\
PKS 1424+240 & $\le$ 0.44 & $\le$ 0.70 & $\le$ 1.63 & $\le$ 1.58 & $\le$ 1.56 \\
\hline
\end{tabular}
\caption{Same as Table \ref{tabGammaL4}, but with $L_{\rm dom} = 10 \, {\rm Mpc}$.}
\label{tabGammaL10}
\end{center}
\end{table}
\newpage
\section*{APPENDIX}
Here we summarize the main properties of the ALP scenario considered in the text and the evaluation of the photon survival probability $P_{\gamma \to \gamma}^{\rm ALP} (E_0, z)$.
\subsection{General properties of axion-like particles (ALPs)}
As already stated, ALPs are spin-zero, neutral and extremely light pseudo-scalar bosons similar to the {\it axion}, namely the pseudo-Goldstone boson associated with the global Peccei-Quinn symmetry ${\rm U}(1)_{\rm PQ}$ proposed as a natural solution to the strong CP problem~\cite{kc2010}. The axion interacts with fermions, two gluons and two photons, and one of its characteristic features is the existence of a strict linear relationship between its mass $m$ and the two-photon coupling constant
$g_{a \gamma \gamma}$
\begin{equation}
\label{ma1}
m = 0.7 \, k \, \Bigl(g_{a \gamma \gamma} \,10^{10} \, {\rm GeV} \Bigr) \, {\rm eV}~,
\end{equation}
where $k$ is a model-dependent constant of order 1~\cite{cgn1995}. Nevertheless, ALPs differ from axions in two respects: (1) their mass $m$ and two-photon coupling constant $g_{a \gamma \gamma}$ are {\it unrelated}, and (2) ALPs are supposed to interact {\it only} with two photons~\cite{alp}. Hence, they are described by the Lagrangian
\begin{equation}
\label{SI1}
{\cal L}_{\rm ALP} = \frac{1}{2} \, \partial^{\mu} a \, \partial_{\mu} a - \frac{1}{2} \, m^2 \, a^2 - \frac{1}{4} \, g_{a \gamma \gamma}\, F_{\mu\nu} \tilde{F}^{\mu\nu} a = \frac{1}{2} \, \partial^{\mu} a \, \partial_{\mu} a - \frac{1}{2} \, m^2 \, a^2 + g_{a \gamma \gamma} \, {\bf E} \cdot
{\bf B} \, a~,
\end{equation}
where $a$ and $m$ are the ALP field and mass, respectively, and ${\bf E}$ and ${\bf B}$ are the electric and magnetic components of the field strength $F_{\mu\nu}$ ($\tilde F^{\mu \nu}$ is its dual).
We shall be concerned with the particular case of a monochromatic photon/ALP beam emitted by a blazar at redshift $z$ and traveling through the ionized extragalactic space, where a magnetic field ${\bf B}$ is supposed to exist with the domain-like structure described in Section VI. Therefore ${\bf E}$ is the electric field of a beam photon while ${\bf B}$ is the extragalactic magnetic field. In such a situation, the mass matrix of the photon-ALP system is non-diagonal, so that the propagation eigenstates differ from the interaction eigenstates. This fact gives rise to photon-ALP oscillations in the beam~\cite{sikivie,rs1988}, much in the same way as happens in a beam of massive neutrinos of different flavor (however, for photon-ALP oscillations an external magnetic field is needed to compensate for the spin mismatch). In order to avoid notational confusion, the symbol $E$ will henceforth denote the energy of the beam particles.
Since we suppose that $E \gg m$ (more about this, later), we can employ the {\it short wavelength approximation}. As a consequence, the photon/ALP beam propagation turns out to be described by the following first-order Schr\"odinger-like equation with the time replaced by the $y$-coordinate along the beam~\cite{rs1988}
\begin{equation}
\label{SI3}
\left( i \frac{d}{d y} + E + {\cal M} \right) \, \psi (y) = 0
\end{equation}
with wave function
\begin{equation}
\label{SI4}
\psi (y) \equiv \left(
\begin{array}{c}
A_x(y) \\
A_z(y) \\
a(y) \\
\end{array}
\right)~,
\end{equation}
where $A_x(y)$ and $A_z(y)$ denote the photon amplitudes with polarization along the $x$- and $z$-axis, respectively, while $a(y)$ is the amplitude associated with the ALP. It is useful to introduce the 3-dimensional basis vectors $\{| {\gamma}_x \rangle, | {\gamma}_z \rangle, |a \rangle \}$ where $| {\gamma}_x \rangle \equiv (1,0,0)^T$ and $| {\gamma}_z \rangle \equiv (0,1,0)^T$ represent the two photon linear polarization states along the $x$- and $z$-axis, respectively, and $|a \rangle \equiv (0,0,1)^T$ denotes the ALP state. Accordingly, we can write $\psi (y)$ as
\begin{equation}
\label{SI8}
\psi (y) = A_x (y) \, |{\gamma}_x \rangle + A_z (y) \, |{\gamma}_z \rangle + a (y) \,
|a \rangle~.
\end{equation}
Note that the quantity ${\cal H} \equiv - (E + {\cal M})$ formally plays the role of the Hamiltonian. Denoting by ${\cal U} (y, y_0)$ the transfer matrix -- namely the solution of Eq. (\ref{SI3}) with initial condition ${\cal U} (y_0, y_0) = 1$ -- the propagation of a generic wave function across the domain in question can be represented as
\begin{equation}
\label{k3lasq}
\psi (y) = {\cal U} (y, y_0) \, \psi (y_0)~.
\end{equation}
So far our attention has been restricted to the wave function, which describes a polarized beam. However, since in the gamma-ray band the polarization cannot be measured, we are forced to employ the density matrix $\rho (y)$. As in quantum mechanics, it obeys the analog of the von Neumann equation associated with Eq. (\ref{SI3}), whose form is
\begin{equation}
\label{ds1}
i \frac{d \rho}{d y} = \rho {\cal M}^{\dagger} - {\cal M} \rho~.
\end{equation}
As we shall see, EBL absorption implies that ${\cal H}$ -- and so ${\cal M}$ and $\rho (y)$ -- are not self-adjoint. Hence
${\cal U} (y, y_0)$ fails to be unitary. Nevertheless, it is trivial to check that the familiar relation
\begin{equation}
\label{ds2}
\rho (y) = {\cal U} (y, y_0) \, \rho (y_0) \, {\cal U}^{\dagger} (y, y_0)
\end{equation}
retains its validity. Moreover, the probability that the beam in the state $\rho_1$ at $y_0$ will be found in the state $\rho_2$ at
$y$ is still given by the standard relation
\begin{equation}
\label{ds3}
P_{\rho_1 \to \rho_2} (y_0, y) = {\rm Tr} \bigl(\rho_2 \, {\cal U} (y, y_0) \, \rho_1 (y_0) \, {\cal U}^{\dagger} (y, y_0) \bigr)~,
\end{equation}
where it is supposed as usual that ${\rm Tr} \rho_1 = {\rm Tr} \rho_2 =1$.
We stress that the advantage arising from the short wavelength approximation is that the beam propagation can {\it formally} be described as a three-level non-relativistic decaying quantum system.
It is quite enlightening to start by considering the beam propagation over a single magnetic domain, neglecting EBL absorption \cite{dmr2008,rs1988} (this is legitimate, since EBL absorption is independent of photon-ALP oscillations). Because then ${\bf B}$ is homogeneous, we can choose the $z$-axis along ${\bf B}$ so that $B_x = 0$. Correspondingly, the matrix
${\cal M}$ entering Eq. (\ref{SI3}) takes the form
\begin{equation}
\label{SI9}
{\cal M} = \left(
\begin{array}{ccc}
- \omega_{\rm pl}^2/(2 E) & 0 & 0 \\
0 & - \omega_{\rm pl}^2/(2 E) & g_{a \gamma \gamma} \, B/2 \\
0 & g_{a \gamma \gamma} \, B/2 & - m^2/(2 E) \\
\end{array}
\right)~,
\end{equation}
where $\omega_{\rm pl}$ is the plasma frequency of the ionized intergalactic medium (IGM), which is related to the mean electron density $n_e$ by $\omega_{\rm pl} = 3.69 \cdot 10^{- 11} (n_e/{\rm cm}^{- 3})^{1/2} \, {\rm eV}$~\cite{rs1988}. So, by defining the oscillation wave number as
\begin{equation}
\label{SI9bis}
\Delta_{\rm osc} \equiv \left[\left(\frac{m^2 - \omega_{\rm pl}^2 }{2 E} \right)^2 + \left(g_{a \gamma \gamma} \, B \right)^2 \right]^{1/2}~,
\end{equation}
it is a simple exercise to show that the photon-ALP conversion probability across the considered domain is
\begin{equation}
\label{SI9tris}
P_{\gamma \to a} (L_{\rm dom}) = \left(\frac{g_{a \gamma \gamma} \, B}{ \Delta_{\rm osc}} \right)^2 {\rm sin}^2 \left(\frac{\Delta_{\rm osc} \, L_{\rm dom}}{2} \right)~.
\end{equation}
Thus, defining the energy threshold~\cite{dmr2008}
\begin{equation}
\label{SI9quater}
E_L \equiv \frac{|m^2 - \omega_{\rm pl}^2|}{2 \, g_{a \gamma \gamma} \, B}~,
\end{equation}
we see that in the {\it strong-mixing regime} -- namely when the condition $E > E_L$ is met -- Eq. (\ref{SI9bis}) reduces to
\begin{equation}
\label{SI9bisL}
\Delta_{\rm osc} \simeq g_{a \gamma \gamma} \, B
\end{equation}
and consequently the photon-ALP conversion probability becomes maximal as well as {\it energy-independent}. Finally, since
$B$ and $g_{a \gamma \gamma}$ enter ${\cal L}_{\rm ALP}$ only in the combination $g_{a \gamma \gamma} \, B$, it proves instrumental to define the quantity
\begin{equation}
\label{SI9b}
\xi \equiv \left(\frac{B}{{\rm nG}} \right) \left(g_{a \gamma \gamma} \, 10^{11} \, {\rm GeV} \right)~,
\end{equation}
in terms of which Eq. (\ref{SI9quater}) can be rewritten as
\begin{equation}
\label{SI9c}
E_L \simeq \frac{25.64}{\xi} \left| \left(\frac{m}{10^{- 10} \, {\rm eV}} \right)^2 - \left(\frac{\omega_{\rm pl}}{10^{- 10} \, {\rm eV}} \right)^2 \right| \, {\rm GeV}~.
\end{equation}
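Although not needed in what follows, it may be useful to see these formulae at work. The following minimal Python sketch (our own, with rounded conversion factors in natural units $\hbar = c = 1$) evaluates Eqs. (\ref{SI9bis}), (\ref{SI9tris}) and (\ref{SI9quater}):
\begin{verbatim}
import numpy as np

MPC = 1.56e29    # 1 Mpc expressed in eV^-1  (hbar = c = 1)
NG  = 1.95e-11   # 1 nG expressed in eV^2

def g_times_B(xi):
    """g_{a gamma gamma} B in eV, from the combination xi of Eq. (SI9b)."""
    return xi * 1.0e-20 * NG           # 1e-20 eV^-1 = 1e-11 GeV^-1

def E_L_GeV(m, omega_pl, xi):
    """Energy threshold of Eq. (SI9quater) in GeV; m, omega_pl in eV."""
    return abs(m**2 - omega_pl**2) / (2.0 * g_times_B(xi)) / 1.0e9

def P_gamma_a(E, m, omega_pl, xi, L_dom_Mpc):
    """Single-domain conversion probability, Eqs. (SI9bis)-(SI9tris)."""
    gB = g_times_B(xi)
    Dosc = np.hypot((m**2 - omega_pl**2) / (2.0 * E), gB)
    return (gB / Dosc)**2 * np.sin(0.5 * Dosc * L_dom_Mpc * MPC)**2

print(E_L_GeV(1.0e-10, 0.0, 0.5))                # ~51 GeV, i.e. 25.64/xi
print(P_gamma_a(5.0e11, 1.0e-10, 0.0, 0.5, 5.0)) # strong mixing at 500 GeV
\end{verbatim}
For $m = 10^{- 10} \, {\rm eV}$, negligible $\omega_{\rm pl}$ and $\xi = 0.5$ one finds $E_L \simeq 51 \, {\rm GeV}$, in agreement with Eq. (\ref{SI9c}).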
\subsection{Model parameters}
Before proceeding further, it is worthwhile to discuss the allowed values of the free parameters of the model. In this field, a signal is considered a discovery if its statistical significance is at least $5 \sigma$. Accordingly, the bounds should be divided into two classes, depending on whether their statistical significance is larger or smaller than $5 \sigma$.
\begin{itemize}
\item Those with statistical significance {\it larger} than $5 \sigma$ prevent any discovery of an ALP, and to date only one bound belongs to this class: the one obtained by the CAST experiment at CERN, which reads $g_{a \gamma \gamma} < 8.8 \cdot 10^{- 11} \, {\rm GeV}^{-1}$ for $m < 0.02 \, {\rm eV}$~\cite{cast2009}.
\item All other weaker bounds provide useful information, but still allow for the discovery of an ALP. Various bounds of this sort have been derived, but we discuss here only those that we believe to be most relevant. One is based on the study of 39 Galactic globular clusters and slightly improves the CAST constraint, giving $g_{a \gamma \gamma} < 6.6 \cdot 10^{- 11} \, {\rm GeV}^{-1}$ at the $2 \sigma$ level (the mass range of its validity is not specified)~\cite{straniero}. Another has been obtained by the H.E.S.S. collaboration, looking at the source PKS $2155-304$, from the absence of the characteristic fluctuating behavior of the realizations of the beam propagation around $E_L$ (more about this, later); it reads $g_{a \gamma \gamma} < 2.1 \cdot 10^{- 11} \, {\rm GeV}^{-1}$ for $1.5 \cdot 10^{- 8} \, {\rm eV} < m < 6.0 \cdot 10^{- 8} \, {\rm eV}$ at the $2 \sigma$ level~\cite{hessbound}. A similar strategy has been applied to the Hydra galaxy cluster, yielding $g_{a \gamma \gamma} < 8.3 \cdot 10^{- 12} \, {\rm GeV}^{-1}$ for $m < 7.0 \cdot 10^{- 12} \, {\rm eV}$ at the $2 \sigma$ level~\cite{wb2013z}. Finally, the explosion of supernova 1987A has been used to set an upper bound on $g_{a \gamma \gamma}$ depending on the value of $m$. The logic is as follows. Along with a neutrino burst lasting about 10 seconds, an ALP burst should also be emitted during the collapse. Inside the Galactic magnetic field, some of the ALPs should convert into X-rays and subsequently be detected by the Solar Maximum Mission (SMM) satellite. From the absence of such a detection, the bounds $g_{a \gamma \gamma} < 1.0 \cdot 10^{- 11} \, {\rm GeV}^{-1}$ for $m < 1.0 \cdot 10^{- 9} \, {\rm eV}$~\cite{bcr1996} and $g_{a \gamma \gamma} < 3.0 \cdot 10^{- 12} \, {\rm GeV}^{-1}$ for $m < 1.0 \cdot 10^{- 9} \, {\rm eV}$~\cite{gmt1996} were originally set. A very recent analysis updating these two early works has been carried out, exploiting the present state-of-the-art knowledge of supernova explosions (even if we still do not know why they actually explode). However, an unavoidable uncertainty affects the detection efficiency of the SMM -- concerning for instance the systematic error and the effective area -- not least because SN1987A was observed perpendicularly to the viewing direction of the SMM. The authors do not quote the statistical significance of this bound, which is certainly less than $2 \sigma$~\cite{payez2015}.
\end{itemize}
As far as the extragalactic magnetic field ${\bf B}$ is concerned, the situation is much less clear-cut. A general consensus exists that it possesses a domain-like morphology (at least in first approximation)~\cite{kronberg,bbo1999,gr2001}. As mentioned in Section IV, it is supposed that ${\bf B}$ is homogeneous over a domain of size $L_{\rm dom}$ equal to its coherence length, with ${\bf B}$
{\it randomly} changing its direction from one domain to another but keeping approximately the same strength. Unfortunately, the values of both $B$ and $L_{\rm dom}$ are largely unknown. We assume $B$ to be relatively large, in line with the predictions of the galactic outflow models~\cite{fl2001,bve2006}. Actually, within these models it looks natural to suppose that $L_{\rm dom}$ is of the order of the galactic correlation length, which amounts to taking $L_{\rm dom} = (1 - 10) \, {\rm Mpc}$ in the present Universe ($z = 0$), even though larger values cannot be excluded. Then Fig. 4 of~\cite{dn2013} implies that $B \leq 1 \, {\rm nG}$, and so by taking the most restrictive bound on $g_{a \gamma \gamma}$ it follows from Eq. (\ref{SI9b}) that $\xi <8$. The upper bound on the mean diffuse extragalactic electron density $n_e < 2.7 \cdot 10^{- 7} \, {\rm cm}^{- 3}$ is provided by the WMAP measurement of the baryon density~\cite{wmap}, which -- thanks to the above relation between $\omega_{\rm pl}$ and $n_e$ -- translates into the upper bound $\omega_{\rm pl} < 1.92 \cdot 10^{- 14} \, {\rm eV}$. What about $m$? In order to maximize the effect of photon-ALP oscillations we work throughout within the strong-mixing regime. Recalling that the lower end of the VHE band is $E = 100 \, {\rm GeV}$, we then have to require $E_L < 100 \, {\rm GeV}$. Putting everything together and using Eq. (\ref{SI9c}) we find $m < 1.97 \cdot 10^{- 10} \, \xi^{1/2} \, {\rm eV}$, and at any rate we should have $m < 5 \cdot 10^{- 10} \, {\rm eV}$. However, we remark that the size of the effect remains basically unchanged if the bound $E_L < 100 \, {\rm GeV}$ becomes less restrictive by a factor of a few, and so for all practical purposes we can assume $m < 10^{- 9} \, {\rm eV}$. Finally, by combining the CAST bound $g_{a \gamma \gamma} < 8.8 \cdot 10^{- 11} \, {\rm GeV}^{-1}$ with the interaction term in ${\cal L}_{\rm ALP}$, it is straightforward to get the following order-of-magnitude estimate for the cross-sections $\sigma (a \, \gamma \to f^+ f^-) \sim \sigma (a \, f^{\pm} \to \gamma \, f^{\pm}) < 10^{- 50} \, {\rm cm}^2$ (here $f$ denotes any charged fermion), which show that ALPs effectively do not interact with anything, and in particular with the EBL.
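For the reader's convenience we spell out the mass bound quoted above: imposing $E_L < 100 \, {\rm GeV}$ in Eq. (\ref{SI9c}) with $\omega_{\rm pl}$ neglected gives
\begin{equation*}
m < 10^{- 10} \left(\frac{100 \, \xi}{25.64} \right)^{1/2} {\rm eV} \simeq 1.97 \cdot 10^{- 10} \, \xi^{1/2} \, {\rm eV}~,
\end{equation*}
so that the largest allowed value $\xi < 8$ corresponds to $m \lesssim 5.6 \cdot 10^{- 10} \, {\rm eV}$, of the same order as the bound quoted above.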
\subsection{Strategy}
One point should now be emphasized. Due to the nature of the extragalactic magnetic field, the angle between ${\bf B}$ in each domain and a fixed fiducial direction common to all domains (which we identify with the $z$-axis) is a random variable, and so the propagation of the photon/ALP beam becomes a $N_d$-dimensional {\it stochastic process}, where $N_d$ denotes the total number of magnetic domains crossed by the beam. Therefore we identify the photon survival probability with its average value. Moreover, we shall see that the whole photon/ALP beam propagation can be recovered by iterating $N_d$ times the propagation over a single magnetic domain, changing each time the value of the random angle. Thanks to the fact that ${\bf B}$ is homogeneous in every domain, the beam propagation equation can be solved exactly in a single domain. Clearly, at the end we have to average the photon survival probability as evaluated for one arbitrary realization of the whole propagation process -- namely for a particular choice of the random angle in each domain -- over all possible realizations of the considered stochastic process (i.e. over all values of the random angle in each of the $N_d$ domains)~\cite{ckpt}.
Our discussion is framed within the standard $\Lambda$CDM cosmological model with $\Omega_M = 0.3$ and $\Omega_{\Lambda} = 0.7$, and so the redshift is the natural parameter to express distances. In particular, the proper length $L (z_a, z_b)$ extending over the redshift interval $[z_a, z_b]$ is
\begin{equation}
\label{MR1}
L (z_a, z_b) \simeq 4.29 \cdot 10^3 \int_{z_a}^{z_b} \frac{d z}{(1 + z) [ 0.7 + 0.3 (1 + z)^3 ]^{1/2}} \, {\rm Mpc} \simeq 2.96 \cdot 10^3 \, {\rm ln} \left(\frac{1 + 1.45 \, z_b}{1 + 1.45 \, z_a} \right) {\rm Mpc}~.
\end{equation}
Accordingly, the overall structure of the cellular configuration of the extragalactic magnetic field is naturally described by a {\it uniform} mesh in redshift space with elementary step $\Delta z$, which is therefore the same for all domains. This mesh can be constructed as follows. We denote by $L_{\rm dom}^{(n)} = L \bigl((n - 1) \Delta z, n \Delta z \bigr)$ the proper length along the $y$-direction of the generic $n$-th domain, with $1 \leq n \leq N_d$, where the total number of magnetic domains $N_d$ towards the considered blazar is the largest integer contained in $z/\Delta z$, hence $N_d \simeq z/\Delta z$. In order to fix $\Delta z$ we consider the domain closest to us, labelled by $n = 1$, and -- with the help of Eq. (\ref{MR1}) -- we write its proper length as
$\bigl(L_{\rm dom}^{(1)}/ 5 \, {\rm Mpc} \bigr) \, 5 \, {\rm Mpc} = L (0, \Delta z) = 2.96 \cdot 10^3 \, {\rm ln} \, (1 + 1.45 \, \Delta z) \, {\rm Mpc}$, from which we get $\Delta z \simeq 1.17 \cdot 10^{- 3} \, \bigl(L_{\rm dom}^{(1)}/ 5 \, {\rm Mpc} \bigr)$. So, once $L_{\rm dom}^{(1)}$ is chosen in agreement with the previous considerations, the size of {\it all} magnetic domains in redshift space is fixed. At this point, two further quantities can be determined. First, the total number of the considered domains is $N_d \simeq z/\Delta z \simeq 0.85 \cdot 10^3 \, \bigl(5 \, {\rm Mpc}/L_{\rm dom}^{(1)} \bigr) \, z$. Second, the proper length of the $n$-th domain along the
$y$-direction follows from Eq. (\ref{MR1}) with $z_a \to (n - 1) \Delta z, z_b \to n \, \Delta z$. Whence
\begin{equation}
\label{mag2ZHx}
L_{\rm dom}^{(n)} \simeq 2.96 \cdot 10^3 \, {\rm ln} \left( 1 + \frac{1.45 \, \Delta z}{1 + 1.45 \, (n - 1) \Delta z} \right) \, {\rm Mpc}~.
\end{equation}
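For concreteness, this bookkeeping can be transcribed into a few lines of Python (the function names are ours; \texttt{scipy} is used to evaluate the integral in Eq. (\ref{MR1})):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def proper_length(za, zb):
    """L(za, zb) of Eq. (MR1) in Mpc, for Omega_M = 0.3, Omega_L = 0.7."""
    f = lambda z: 1.0 / ((1.0 + z) * np.sqrt(0.7 + 0.3 * (1.0 + z)**3))
    return 4.29e3 * quad(f, za, zb)[0]

def redshift_mesh(z_source, L_dom1=5.0):
    """Uniform mesh in redshift space; L_dom1 is L_dom^(1) in Mpc."""
    dz = 1.17e-3 * (L_dom1 / 5.0)          # fixes the size of domain n = 1
    N_d = int(z_source / dz)               # total number of domains
    L_dom = np.array([2.96e3 * np.log(1.0 + 1.45 * dz
                      / (1.0 + 1.45 * (n - 1) * dz))
                      for n in range(1, N_d + 1)])     # Eq. (mag2ZHx)
    return dz, N_d, L_dom

dz, N_d, L_dom = redshift_mesh(0.5)
print(N_d, L_dom[0], L_dom[-1])   # ~427 domains, ~5.0 Mpc down to ~2.9 Mpc
print(proper_length(0.0, dz))     # ~5 Mpc: consistency check of Eq. (MR1)
\end{verbatim}
For a source at $z = 0.5$ with $L_{\rm dom}^{(1)} = 5 \, {\rm Mpc}$ this yields $\Delta z \simeq 1.17 \cdot 10^{- 3}$ and $N_d \simeq 427$, in line with the estimate $N_d \simeq 0.85 \cdot 10^3 \, z$ given above.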
\subsection{Propagation over a single domain}
What still has to be done is to take EBL absorption into account and to determine the magnetic field strength $B^{(n)}$ in the generic $n$-th domain.
The first goal can be achieved by simply following the discussion of~\cite{ckpt}. Because the domain size is so small compared to cosmological standards, we can safely drop cosmological evolutionary effects when considering a single domain. Then, as far as absorption is concerned, what matters is the mean free path $\lambda_{\gamma}$ for the reaction $\gamma \gamma \to e^+ e^-$, and the term $i/\bigl(2 \lambda_{\gamma}\bigr)$ should be inserted into the 11 and 22 entries of the ${\cal M}$ matrix. In order to evaluate $\lambda_{\gamma}$, we imagine that two hypothetical sources located at the two edges of the $n$-th domain are observed. Therefore, we insert Eq. (\ref{a2}) into Eq. (\ref{a1}) of Section I and further apply the resulting equation to both of them. With the notational simplifications $\Phi_{\rm obs} (E_0, z) \to \Phi (E_0)$ and $\Phi_{\rm em} \bigl(E_0 (1 + z) \bigr) \to \Phi \bigl(E_0 (1 + z) \bigr)$, we have
\begin{equation}
\label{MR2}
\Phi (E_0) = e^{- \tau_{\gamma} \bigl(E_0, (n -1) \Delta z \bigr)} \, \Phi \bigl(E_0 [1 + (n - 1) \Delta z ] \bigr)~, \ \ \ \ \ \ \Phi (E_0) = e^{- \tau_{\gamma} (E_0, n \, \Delta z)} \, \Phi \bigl(E_0 ( 1 + n \, \Delta z) \bigr)~,
\end{equation}
which upon combination imply that the flux change across the domain in question is
\begin{equation}
\label{MR3}
\Phi \bigl(E_0 [1 + (n - 1) \Delta z] \bigr) = e^{- \bigl[\tau_{\gamma} (E_0, n \, \Delta z) - \tau_{\gamma} \bigl(E_0, (n -1) \Delta z \bigr) \bigr]} \, \Phi \bigl(E_0 ( 1 + n \, \Delta z) \bigr)~.
\end{equation}
Because cosmological effects can be neglected across a single domain, Eq. (\ref{MR3}) should have the usual form
\begin{equation}
\label{MR4}
\Phi \bigl(E_0 [1 + (n - 1) \Delta z] \bigr) = e^{- L^{(n)}_{\rm dom}/\lambda^{(n)}_{\gamma} (E_0)} \,
\Phi \bigl(E_0 ( 1 + n \, \Delta z) \bigr)~,
\end{equation}
and the comparison with Eq. (\ref{MR3}) ultimately yields
\begin{equation}
\label{MR5}
\lambda^{(n)}_{\gamma} (E_0) = \frac{L^{(n)}_{\rm dom}}{\tau_{\gamma} (E_0, n \, \Delta z) - \tau_{\gamma} \bigl(E_0, (n -1) \Delta z \bigr)}~,
\end{equation}
where the optical depth is evaluated as stated in the text for the EBL model of FRV~\cite{frv}.
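Operationally, Eq. (\ref{MR5}) is just a finite difference of the optical depth; assuming a callable \texttt{tau(E0, z)} that returns $\tau_{\gamma} (E_0, z)$ for the adopted EBL model (not reproduced here), a sketch reads:
\begin{verbatim}
def lambda_gamma(n, E0, dz, L_dom, tau):
    """Mean free path in the n-th domain, Eq. (MR5); units of L_dom."""
    return L_dom[n - 1] / (tau(E0, n * dz) - tau(E0, (n - 1) * dz))
\end{verbatim}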
To accomplish the second task, we note that because of the high conductivity of the IGM the magnetic flux lines can be thought of as frozen inside it~\cite{gr2001}. Therefore flux conservation during the cosmic expansion entails that $B$ scales like $(1 + z)^2$, so that the magnetic field strength in a domain at redshift $z$ is $B (z) = B (z = 0) (1 + z)^2$. Hence in the $n$-th magnetic domain we have $B^{(n)} = B^{(1)} \bigl(1 + (n - 1) \Delta z \bigr)^2$.
So, at this stage the matrix ${\cal M}$ in Eq. (\ref{SI3}) as explicitly written in the $n$-th domain reads
\begin{equation}
\label{MRz}
{\cal M}^{(n)} = \left(
\begin{array}{ccc}
i/\bigl(2 {\lambda}^{(n)}_{\gamma}\bigr) & 0 & B^{(n)} \, {\rm sin} \, \psi_n \, g_{a \gamma \gamma}/2 \\
0 & i/\bigl(2 {\lambda}^{(n)}_{\gamma}\bigr) & B^{(n)} \, {\rm cos} \, \psi_n \, g_{a \gamma \gamma}/2 \\
B^{(n)} \, {\rm sin} \, \psi_n \, g_{a \gamma \gamma}/2 & B^{(n)} \, {\rm cos} \, \psi_n \, g_{a \gamma \gamma}/2 & 0 \\
\end{array}
\right)~,
\end{equation}
where $\psi_n$ is the random angle between ${\bf B}^{(n)}$ and the fixed fiducial direction along the $z$-axis (note that indeed ${\cal M}^{\dagger} \neq {\cal M}$). Apart from $\psi_n$, all other matrix elements entering ${\cal M}^{(n)}$ are known. Finding the transfer matrix of Eq. (\ref{SI3}) with ${\cal M}^{(n)}$ given by Eq. (\ref{MRz}) is straightforward, even if somewhat tedious. The result is
\begin{equation}
\label{mravvq2abcQQ}
{\cal U}_n (E_n, \psi_n) = e^{i E_n L_{\rm dom}^{(n)}} \Bigg[ e^{i \left({\lambda}^{(n)}_1 \, L_{\rm dom}^{(n)} \right)} \, T_1 (\psi_n) + e^{i \left({\lambda}^{(n)}_2 \, L_{\rm dom}^{(n)} \right)} \, T_2 (\psi_n) + e^{i \left({\lambda}^{(n)}_3 \, L_{\rm dom}^{(n)} \right)} \, T_3 (\psi_n)~ \Bigg]
\end{equation}
with
\begin{equation}
\label{mravvq2Q1a}
T_1 (\psi_n) \equiv
\left(
\begin{array}{ccc}
\cos^2 \psi_n & -\sin \psi_n \cos \psi_n & 0 \\
- \sin \psi_n \cos \psi_n & \sin^2 \psi_n & 0 \\
0 & 0 & 0
\end{array}
\right)~,
\end{equation}
\begin{equation}
\label{mravvq3Q1b}
T_2 (\psi_n) \equiv
\left(
\begin{array}{ccc}
\frac{- 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \sin^2 \psi_n & \frac{- 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n \cos \psi_n & \frac{i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n \\
\frac{- 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n \cos \psi_n & \frac{- 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \cos^2 \psi_n & \frac{i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \cos \psi_n \\
\frac{i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n & \frac{i \delta_n}{\sqrt{1 - 4 {\delta_n}^2}} \cos \psi_n & \frac{ 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}}
\end{array}
\right)~,
\end{equation}
\begin{equation}
\label{mravvq2Q1b}
T_3 (\psi_n) \equiv
\left(
\begin{array}{ccc}
\frac{ 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \sin^2 \psi_n & \frac{ 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n \cos \psi_n & \frac{- i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n \\
\frac{ 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n \cos \psi_n & \frac{ 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}} \cos^2 \psi_n & \frac{- i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \cos \psi_n \\
\frac{- i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \sin \psi_n & \frac{- i \delta_n}{\sqrt{1 - 4 {\delta}_n^2}} \cos \psi_n & \frac{- 1 + \sqrt{1 - 4 {\delta}_n^2}}{2 \sqrt{1 - 4 {\delta}_n^2}}
\end{array}
\right)~,
\end{equation}
where we have set
\begin{equation}
\label{a91212a1PW}
{\lambda}^{(n)}_{1} \equiv \frac{i}{2 \, {\lambda}^{(n)}_{\gamma} (E_0)}~, \ \ \ {\lambda}^{(n)}_{2} \equiv \frac{i}{4 \, {\lambda}^{(n)}_{\gamma}} \left(1 - \sqrt{1 - 4 \, \delta^2_n} \right)~, \ \ \ {\lambda}^{(n)}_{3} \equiv \frac{i}{4 \, {\lambda}^{(n)}_{\gamma}} \left(1 + \sqrt{1 - 4 \, \delta^2_n} \right)
\end{equation}
with
\begin{equation}
\label{a17PW14022011}
E_n \equiv E_0 \, \Bigl[1 + (n - 1) \, \Delta z \Bigr]~, \ \ \ \ \ \ {\delta}_n \equiv \xi_n \, {\lambda}^{(n)}_{\gamma} (E_0) \left(\frac{{\rm nG}}{10^{11} \, {\rm GeV}} \right)~,
\end{equation}
where $\xi_n$ is just $\xi$ as defined by Eq. (\ref{SI9b}) and evaluated in the $n$-th domain.
\subsection{Calculation of the photon survival probability in the presence of photon-ALP oscillations}
Our aim is to derive the photon survival probability $P^{\rm ALP}_{\gamma \gamma} (E_0, z)$ entering the text. So far, we have dealt with a single magnetic domain but now we enlarge our view so as to encompass the whole propagation process of the beam from the source to us. This goal is trivially achieved thanks to the analogy with non-relativistic quantum mechanics, according to which -- for a fixed arbitrary choice of the angles $\{\psi_n \}_{1 \leq n \leq N_d}$ -- the whole transfer matrix describing the propagation of the photon/ALP beam from the source at redshift $z$ to us is
\begin{equation}
\label{as1g}
{\cal U} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right) = \prod^{N_d}_{n = 1} \, {\cal U}_n \left(E_n, \psi_n \right)~.
\end{equation}
Moreover, the probability that a photon/ALP beam emitted by a blazar at $z$ in the state $\rho_1$ will be detected in the state
$\rho_2$ for the above choice of $\{\psi_n \}_{1 \leq n \leq N_d}$ is given by Eq. (\ref{ds3}). Whence
\begin{equation}
\label{k3lwf1w1Q}
P_{\rho_1 \to \rho_2} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right) =
{{\rm Tr} \left( {\rho}_2 \, {\cal U} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right) \, \rho_1 \, {\cal U}^{\dagger} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right) \right)}
\end{equation}
with ${\rm Tr} \rho_1 = {\rm Tr} \rho_2 = 1$.
Since the actual values of the angles $\{\psi_n \}_{1 \leq n \leq N_d}$ are unknown, the best that we can do is to evaluate the probability entering Eq. (\ref{k3lwf1w1Q}) as averaged over all possible values of the considered angles, namely
\begin{equation}
\label{k3lwf1w1W}
P_{\rho_1 \to \rho_2} \left(E_0, z \right) = \Big\langle P_{\rho_1 \to \rho_2} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right) \Big\rangle_{\psi_1, ... , \psi_{N_d}}~,
\end{equation}
indeed in accordance with the strategy outlined above. In practice, this is accomplished by evaluating the r.h.s. of Eq. (\ref{k3lwf1w1Q}) over a very large number of realizations of the propagation process (we take 5000 realizations) randomly choosing the values of all angles $\{\psi_n \}_{1 \leq n \leq N_d}$ for every realization, adding the results and dividing by the number of realizations.
Because the photon polarization cannot be measured at the considered energies, we have to sum the result over the two final polarization states
\begin{equation}
\label{a9s11A}
{\rho}_x = \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)~,
\end{equation}
\begin{equation}
\label{a9s11B}
{\rho}_z = \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)~.
\end{equation}
Moreover, we suppose here that the emitted beam consists entirely of unpolarized photons, so that the initial beam state is described by the density matrix
\begin{equation}
\label{a9s11C}
{\rho}_{\rm unpol} = \frac{1}{2}\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)~.
\end{equation}
We find in this way the photon survival probability $P^{\rm ALP}_{\gamma \gamma} \left(E_0, z \right)$
\begin{eqnarray}
\label{k3lwf1w1WW}
P^{\rm ALP}_{\gamma \gamma} \left(E_0, z \right) = \Big\langle P_{\rho_{\rm unpol} \to \rho_x} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right) \Big\rangle_{\psi_1, ... , \psi_{N_d}} + \\ \nonumber
+ \, \Big\langle P_{\rho_{\rm unpol} \to \rho_z} \left(E_0, z; \psi_1, ... , \psi_{N_d} \right)
\Big\rangle_{\psi_1, ... , \psi_{N_d}}~, \ \ \ \ \ \ \ \ \ \ \
\end{eqnarray}
which is indeed the quantity entering Eq.~(\ref{a2bis}) of the text.
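In schematic form, the whole chain of Eqs. (\ref{as1g})--(\ref{k3lwf1w1WW}) amounts to the following Python sketch, where \texttt{transfer\_matrix(n, psi)} is a placeholder for ${\cal U}_n$ of Eq. (\ref{mravvq2abcQQ}), assumed to be implemented from the closed-form expressions above:
\begin{verbatim}
import numpy as np

rho_x     = np.diag([1.0, 0.0, 0.0]).astype(complex)        # Eq. (a9s11A)
rho_z     = np.diag([0.0, 1.0, 0.0]).astype(complex)        # Eq. (a9s11B)
rho_unpol = 0.5 * np.diag([1.0, 1.0, 0.0]).astype(complex)  # Eq. (a9s11C)

def survival_probability(N_d, transfer_matrix, n_real=5000, seed=0):
    """Average of Eq. (k3lwf1w1WW) over n_real random realizations."""
    rng, P = np.random.default_rng(seed), 0.0
    for _ in range(n_real):
        U = np.eye(3, dtype=complex)
        for n in range(N_d, 0, -1):   # from the source (n = N_d) to us
            psi_n = rng.uniform(0.0, 2.0 * np.pi)  # random angle of B^(n)
            U = transfer_matrix(n, psi_n) @ U      # builds U_1 ... U_Nd
        rho = U @ rho_unpol @ U.conj().T           # Eq. (ds2)
        P += np.trace((rho_x + rho_z) @ rho).real  # Eq. (ds3), both pols
    return P / n_real
\end{verbatim}
The accumulation order is such that the rightmost factor ${\cal U}_{N_d}$ -- the domain nearest the source -- acts first on the initial state, as required by Eq. (\ref{as1g}).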
A final remark is in order. Obviously the beam follows a single realization of the considered stochastic process at a time but, since we do not know which one is actually selected, the best we can do is to evaluate the average photon survival probability. Nevertheless, it can be shown that the individual realizations exhibit an oscillatory behavior at energies close to the threshold $E_L$ defined by Eq. (\ref{SI9quater}). Therefore, observations performed at energies close enough to $E_L$ should exhibit oscillations in the energy spectrum~\cite{wb2012,gr2013}. This fact has been used to derive the bounds of Refs.~\cite{hessbound,wb2013z} discussed above.
\newpage
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|cc}
\hline
\multicolumn{1}{c|}{Source} &\multicolumn{1}{c|}{ $K_{\rm em}^{\rm CP} \,[\rm cm^{-2} \, s^{-1} \, TeV^{-1}]$ } &\multicolumn{2}{c}{$K_{\rm em}^{\rm ALP} \,[\rm cm^{-2} \, s^{-1} \, TeV^{-1}]$} \\
\hline
\hline
& \ \ & \ $L_{\rm dom}=4 \, {\rm Mpc}$ \ & \ $L_{\rm dom}=10 \, {\rm Mpc}$ \ \\
3C 66B & $1.99 \cdot 10^{-11}$ & $2.13 \cdot 10^{-11}$ & $2.32 \cdot 10^{-11}$ \\
Mrk 421 & $7.51 \cdot 10^{-10}$ & $8.27 \cdot 10^{-10}$ & $9.18 \cdot 10^{-10}$ \\
Mrk 501 & $1.80 \cdot 10^{-10}$ & $2.00 \cdot 10^{-10}$ & $2.23 \cdot 10^{-10}$ \\
1ES 2344+514 & $6.83 \cdot 10^{-11}$ & $7.77 \cdot 10^{-11}$ & $8.80 \cdot 10^{-11}$ \\
Mrk 180 &5.82 $ \cdot 10^{-11}$ &6.64 $ \cdot 10^{-11}$ & $7.52 \cdot 10^{-11}$ \\
1ES 1959+650 & $1.14 \cdot 10^{-10}$ & $1.31 \cdot 10^{-10}$ & $1.49 \cdot 10^{-10}$ \\
1ES 1959+650 & $7.62 \cdot 10^{-11}$ & $8.75 \cdot 10^{-11}$ & $9.94 \cdot 10^{-11}$ \\
AP LIB & ? & ? & ? \\
1ES 1727+502 & $1.29 \cdot 10^{-11}$ & $1.51 \cdot 10^{-11}$ & $1.72 \cdot 10^{-11}$ \\
PKS 0548-322 & $1.50 \cdot 10^{-11}$ & $1.81 \cdot 10^{-11}$ & $2.07 \cdot 10^{-11}$ \\
BL Lacertae & $8.83 \cdot 10^{-10}$ & $1.06 \cdot 10^{-9}$ & $1.21 \cdot 10^{-9}$ \\
PKS 2005-489 & $4.84 \cdot 10^{-11}$ & $5.86 \cdot 10^{-11}$ & $6.70 \cdot 10^{-11}$ \\
RGB J0152+017 & $2.87 \cdot 10^{-11}$ & $3.53 \cdot 10^{-11}$ & $4.02 \cdot 10^{-11}$ \\
1ES 1741+196 & ? & ? & ? \\
SHBL J001355.9-185406 \, & $1.12 \cdot 10^{-11}$ & $1.42 \cdot 10^{-11}$ & $1.60 \cdot 10^{-11}$ \\
W Comae & $1.03 \cdot 10^{-10}$ & $1.32 \cdot 10^{-10}$ & $1.47 \cdot 10^{-10}$ \\
1ES 1312-423 & $9.03 \cdot 10^{-12}$ & $1.17 \cdot 10^{-11}$ & $1.32 \cdot 10^{-11}$ \\
VER J0521+211 & $9.39 \cdot 10^{-11}$ & $1.21 \cdot 10^{-10}$ & $1.33 \cdot 10^{-10}$ \\
PKS 2155-304 & $2.32 \cdot 10^{-10}$ & $3.04 \cdot 10^{-10}$ & $3.35 \cdot 10^{-10}$ \\
B3 2247+381 & $2.77 \cdot 10^{-11}$ & $3.58 \cdot 10^{-11}$ & $3.86 \cdot 10^{-11}$ \\
RGB J0710+591 & $2.49 \cdot 10^{-11}$ & $3.32 \cdot 10^{-11}$ & $3.66 \cdot 10^{-11}$ \\
H 1426+428 & $2.86 \cdot 10^{-10}$ & $3.76 \cdot 10^{-10}$ & $4.06 \cdot 10^{-10}$ \\
1ES 1215+303 & $4.38 \cdot 10^{-11}$ & $5.81 \cdot 10^{-11}$ & $6.30 \cdot 10^{-11}$ \\
1ES 1215+303 & $5.12 \cdot 10^{-11}$ & $6.64 \cdot 10^{-11}$ & $6.96 \cdot 10^{-11}$ \\
RX J1136.5+6737 & ? & ? & ? \\
1ES 0806+524 & $3.76 \cdot 10^{-11}$ &5.03 $ \cdot 10^{-11}$ & $5.41 \cdot 10^{-11}$ \\
1ES 0229+200 & $4.04 \cdot 10^{-12}$ & $1.21 \cdot 10^{-11}$ & $1.50 \cdot 10^{-11}$ \\
1RXS J101015.9-311909 & $1.51 \cdot 10^{-11}$ & $2.03 \cdot 10^{-11}$ & $2.17 \cdot 10^{-11}$ \\
H 2356-309 & $2.70 \cdot 10^{-11}$ & $3.66 \cdot 10^{-11}$ & $3.80 \cdot 10^{-11}$ \\
RX J0648.7+1516 & $6.83 \cdot 10^{-11}$ & $9.27 \cdot 10^{-11}$ & $9.45 \cdot 10^{-11}$ \\
1ES 1218+304 & $9.04 \cdot 10^{-11}$ & $1.23 \cdot 10^{-10}$ & $1.24 \cdot 10^{-10}$ \\
1ES 1101-232 & $4.35 \cdot 10^{-11}$ & $6.26 \cdot 10^{-11}$ & $6.38 \cdot 10^{-11}$ \\
1ES 0347-121 & $4.37 \cdot 10^{-11}$ & $6.35\cdot 10^{-11}$ & $6.45 \cdot 10^{-11}$ \\
RBS 0413 & $3.16 \cdot 10^{-11}$ & $4.41 \cdot 10^{-11}$ & $4.52 \cdot 10^{-11}$ \\
RBS 0723 & ? & ? & ? \\
1ES 1011+496 & $1.43 \cdot 10^{-10}$ & $1.92 \cdot 10^{-10}$ & $1.89 \cdot 10^{-10}$ \\
MS 1221.8+2452 & ? & ? & ? \\
PKS 0301-243 & $3.95 \cdot 10^{-11}$ & $5.47 \cdot 10^{-11}$ & $5.32 \cdot 10^{-11}$ \\
1ES 0414+009 & $2.62 \cdot 10^{-11}$ & $3.51 \cdot 10^{-11}$ & $3.36 \cdot 10^{-11}$ \\
S5 0716+714 & $6.96 \cdot 10^{-10}$ & $9.06 \cdot 10^{-10}$ & $8.63 \cdot 10^{-10}$ \\
1ES 0502+675 & ? & ? & ? \\
PKS 1510-089 & $7.75 \cdot 10^{-11}$ & $9.87 \cdot 10^{-11}$ & $9.37 \cdot 10^{-11}$ \\
3C 66A & $2.98 \cdot 10^{-10}$ & $4.01 \cdot 10^{-10}$ & $3.81 \cdot 10^{-10}$ \\
PKS 1222+216 & $1.80 \cdot 10^{-9}$ & $2.09 \cdot 10^{-9}$ & $1.98 \cdot 10^{-9}$ \\
1ES 0647+250 & ? & ? & ? \\
PG 1553+113 & $4.73 \cdot 10^{-10}$ & $7.86 \cdot 10^{-10}$ & $6.83 \cdot 10^{-10}$ \\
3C 279 & $2.30 \cdot 10^{-9}$ & $2.28 \cdot 10^{-9}$ & $2.19 \cdot 10^{-9}$ \\
PKS 1424+240 & $\ge 1.92 \cdot 10^{-10}$ & $\ge 2.15 \cdot 10^{-10}$ & $\ge 2.06 \cdot 10^{-10}$ \\
\hline
\end{tabular}
\caption{Blazars considered in Table \ref{tabSource}. For each of them (first column), the deabsorbed value $K_{\rm em}$ is reported for different deabsorbing situations. The second column concerns deabsorption according to conventional physics, using the EBL model of FRV. The subsequent columns pertain to the photon-ALP oscillation scenario with the EBL still described by the model of FRV, and report the values of $K_{\rm em}$ for different choices of our benchmark values of the model parameter $L_{\rm dom}$ and for $\xi=0.5$ -- which are the best values of the model (for more details see text). Sources with question marks lack the information needed to perform our analysis and are discarded. For PKS 1424+240 only a lower limit for its $z$ exists in the literature: thus, it is neglected in our analysis.}
\label{tabKappa}
\end{center}
\end{table}
\newpage
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c|cc}
\hline
\multicolumn{1}{c|}{Source} &\multicolumn{1}{c|}{ $F_{{\rm em}, \Delta E}^{\rm CP} \,[\rm cm^{-2} \, s^{-1}]$ } &\multicolumn{2}{c}{$F_{{\rm em}, \Delta E}^{\rm ALP} \,[\rm cm^{-2} \, s^{-1}]$} \\
\hline
\hline
& \ \ & \ $L_{\rm dom}=4 \, {\rm Mpc}$ \ & \ $L_{\rm dom}=10 \, {\rm Mpc}$ \ \\
3C 66B & $1.75 \cdot 10^{-11}$ & $1.87 \cdot 10^{-11}$ & $2.03 \cdot 10^{-11}$ \\
Mrk 421 & $4.65 \cdot 10^{-10}$ & $5.12 \cdot 10^{-10}$ & $5.69 \cdot 10^{-10}$ \\
Mrk 501 & $5.46 \cdot 10^{-11}$ & $6.06 \cdot 10^{-11}$ & $6.76 \cdot 10^{-11}$ \\
1ES 2344+514 & $2.80 \cdot 10^{-11}$ & $3.19 \cdot 10^{-11}$ & $3.61 \cdot 10^{-11}$ \\
Mrk 180 & $2.09 \cdot 10^{-11}$ & $2.38 \cdot 10^{-11}$ & $2.70 \cdot 10^{-11}$ \\
1ES 1959+650 & $3.88 \cdot 10^{-11}$ & $4.45 \cdot 10^{-11}$ & $5.05 \cdot 10^{-11}$ \\
1ES 1959+650 & $2.74 \cdot 10^{-11}$ & $3.14 \cdot 10^{-11}$ & $3.56 \cdot 10^{-11}$ \\
AP LIB & ? & ? & ? \\
1ES 1727+502 & $1.11 \cdot 10^{-11}$ & $1.29 \cdot 10^{-11}$ & $1.48 \cdot 10^{-11}$ \\
PKS 0548-322 & $2.33 \cdot 10^{-12}$ & $2.79 \cdot 10^{-12}$ & $3.12 \cdot 10^{-12}$ \\
BL Lacertae & $4.41 \cdot 10^{-10}$ & $5.31 \cdot 10^{-10}$ & $6.06 \cdot 10^{-10}$ \\
PKS 2005-489 & $5.94 \cdot 10^{-12}$ & $7.14 \cdot 10^{-12}$ & $7.99 \cdot 10^{-12}$ \\
RGB J0152+017 & $4.24 \cdot 10^{-12}$ & $5.17 \cdot 10^{-12}$ & $5.70 \cdot 10^{-12}$ \\
1ES 1741+196 & ? & ? & ? \\
SHBL J001355.9-185406 \, & $7.34 \cdot 10^{-13}$ & $9.09 \cdot 10^{-13}$ & $9.70 \cdot 10^{-13}$ \\
W Comae & $1.26 \cdot 10^{-11}$ & $1.59 \cdot 10^{-11}$ & $1.73 \cdot 10^{-11}$ \\
1ES 1312-423 & $1.42 \cdot 10^{-12}$ & $1.76 \cdot 10^{-12}$ & $1.82 \cdot 10^{-12}$ \\
VER J0521+211 & $1.95 \cdot 10^{-11}$ & $2.48 \cdot 10^{-11}$ & $2.69 \cdot 10^{-11}$ \\
PKS 2155-304 & $5.38 \cdot 10^{-11}$ & $6.93 \cdot 10^{-11}$ & $7.44 \cdot 10^{-11}$ \\
B3 2247+381 & $1.10 \cdot 10^{-11}$ & $1.43 \cdot 10^{-11}$ & $1.55 \cdot 10^{-11}$ \\
RGB J0710+591 & $4.79 \cdot 10^{-12}$ & $5.92 \cdot 10^{-12}$ & $5.81 \cdot 10^{-12}$ \\
H 1426+428 & $2.04 \cdot 10^{-11}$ & $2.66 \cdot 10^{-11}$ & $2.82 \cdot 10^{-11}$ \\
1ES 1215+303 & $4.49 \cdot 10^{-12}$ & $5.78 \cdot 10^{-12}$ & $5.96 \cdot 10^{-12}$ \\
1ES 1215+303 & $3.92 \cdot 10^{-11}$ & $5.17 \cdot 10^{-11}$ & $5.58 \cdot 10^{-11}$ \\
RX J1136.5+6737 & ? & ? & ? \\
1ES 0806+524 & $2.86 \cdot 10^{-12}$ & $3.71 \cdot 10^{-12}$ & $3.78 \cdot 10^{-12}$ \\
1ES 0229+200 & $7.93 \cdot 10^{-12}$ & $6.06 \cdot 10^{-12}$ & $4.84 \cdot 10^{-12}$ \\
1RXS J101015.9-311909 & $3.10 \cdot 10^{-12}$ & $3.94 \cdot 10^{-12}$ & $3.89 \cdot 10^{-12}$ \\
H 2356-309 & $6.02 \cdot 10^{-12}$ & $7.83 \cdot 10^{-12}$ & $7.75 \cdot 10^{-12}$ \\
RX J0648.7+1516 & $9.78 \cdot 10^{-12}$ & $1.32 \cdot 10^{-11}$ & $1.33 \cdot 10^{-11}$ \\
1ES 1218+304 & $2.85 \cdot 10^{-11}$ & $3.67 \cdot 10^{-11}$ & $3.56 \cdot 10^{-11}$ \\
1ES 1101-232 & $1.17 \cdot 10^{-11}$ & $1.36 \cdot 10^{-11}$ & $1.22 \cdot 10^{-11}$ \\
1ES 0347-121 & $9.46 \cdot 10^{-12}$ & $1.10 \cdot 10^{-11}$ & $9.86 \cdot 10^{-12}$ \\
RBS 0413 & $4.64 \cdot 10^{-12}$ & $5.79 \cdot 10^{-12}$ & $5.38 \cdot 10^{-12}$ \\
RBS 0723 & ? & ? & ? \\
1ES 1011+496 & $3.93 \cdot 10^{-11}$ & $5.30 \cdot 10^{-11}$ & $5.24 \cdot 10^{-11}$ \\
MS 1221.8+2452 & ? & ? & ? \\
PKS 0301-243 & $3.31 \cdot 10^{-12}$ & $4.10 \cdot 10^{-12}$ & $3.79 \cdot 10^{-12}$ \\
1ES 0414+009 & $7.70 \cdot 10^{-12}$ & $8.49 \cdot 10^{-12}$ & $7.76 \cdot 10^{-12}$ \\
S5 0716+714 & $1.69 \cdot 10^{-10}$ & $1.90 \cdot 10^{-10}$ & $1.75 \cdot 10^{-10}$ \\
1ES 0502+675 & ? & ? & ? \\
PKS 1510-089 & $2.04 \cdot 10^{-11}$ & $2.77 \cdot 10^{-11}$ & $2.69 \cdot 10^{-11}$ \\
3C 66A & $3.62 \cdot 10^{-11}$ & $3.58 \cdot 10^{-11}$ & $3.29 \cdot 10^{-11}$ \\
PKS 1222+216 & $9.36 \cdot 10^{-10}$ & $1.28 \cdot 10^{-9}$ & $1.26 \cdot 10^{-9}$ \\
1ES 0647+250 & ? & ? & ? \\
PG 1553+113 & $1.50 \cdot 10^{-10}$ & $7.59 \cdot 10^{-11}$ & $7.23 \cdot 10^{-11}$ \\
3C 279 & $9.19 \cdot 10^{-10}$ & $1.14 \cdot 10^{-9}$ & $1.11 \cdot 10^{-9}$ \\
PKS 1424+240 & $\ge 5.65 \cdot 10^{-11}$ & $\ge 4.20 \cdot 10^{-11}$ & $\ge 4.05 \cdot 10^{-11}$ \\
\hline
\end{tabular}
\caption{Blazars considered in Table \ref{tabSource}. For each of them (first column), the deabsorbed value $F_{{\rm em}, \Delta E}$ is reported for different deabsorbing situations. The second column concerns deabsorption according to conventional physics, using the EBL model of FRV. The subsequent columns pertain to the photon-ALP oscillation scenario with the EBL still described by the model of FRV, and report the values of $F_{{\rm em}, \Delta E}$ for different choices of our benchmark values of the model parameter $L_{\rm dom}$ and for $\xi=0.5$ -- which are the best values of the model (for more details see text). Sources with question marks lack the information needed to perform our analysis and are discarded. For PKS 1424+240 only a lower limit for its $z$ exists in the literature: thus, it is neglected in our analysis.}
\label{tabLum}
\end{center}
\end{table}
\newpage
\section{Introduction}
Solving the electronic Schr\"{o}dinger equation,\cite{Schrodinger1926} which would in principle enable an exact theoretical description of chemical properties and reactivity, remains a fundamental challenge in quantum chemistry.
However, exact solutions remain elusive beyond the simplest of chemical systems.\cite{Loos2012}
Research has therefore focused on exploiting physical and chemical understanding to develop approximations to the exact electronic structure.\cite{Hirata2012}
At the heart of most approximations lies the self-consistent field (SCF) approach,\cite{SzaboBook} usually in the form of Hartree--Fock (HF) or Kohn--Sham Density-Functional Theory (KS-DFT).
However, beyond simply providing a `reference state' for correlated approaches,\cite{HelgakerBook} the SCF approximation is itself a rich theory with the potential to provide chemical insights into excited states\cite{Gilbert2008} and reactive bond-breaking processes.\cite{Jensen2018}
SCF methods are usually presented as iterative approaches.
On each iteration, the electron density obtained from the previous step is used to build an approximate electronic potential that is then used to re-optimise the electron density or wave function as an input for the next iteration.
This process is repeated until self-consistency is reached.\cite{SzaboBook}
Alternatively, the SCF energy can be considered as a non-linear function of the one-electron density, with the global minimum corresponding to the approximate electronic ground state.
Besides the global minimum, this non-linear function can possess several stationary points that each represent an optimal SCF state and correspond to local minima, maxima, or saddle points of the SCF energy.\cite{Fukutome1971,Stanton1968,Thom2008}
Historically, the existence of multiple SCF solutions has been considered as a computational obstacle, particularly when the lowest energy HF ground state is required\cite{Thouless1960,Adams1962,Cizek1967,Seeger1977} or during \textit{ab initio} molecular dynamics simulations involving molecules with multiple low-lying states.\cite{Vaucher2017}
Alternatively, recent research has developed and exploited physical interpretations of multiple SCF solutions themselves. \cite{Gilbert2008, Barca2014}
For example, encouraged by new computational methods that make identifying higher energy stationary points relatively routine,\cite{Gilbert2008,Thom2008} multiple SCF solutions have been used as mean-field approximations to excited states.\cite{Gilbert2008,Besley2009,Barca2014}
Furthermore, the similarities between dominant electron configurations in strongly correlated molecules and multiple HF states have motivated their use as a basis for multireference ground- and excited-state wave functions.
Since each HF solution comprises an independent set of molecular orbitals (MOs), these multireference calculations take the form of a nonorthogonal configuration interaction (NOCI).\cite{Thom2008,Sundstrom2014,Burton2019c}
However, in many cases, these applications have been hindered by the disappearance of SCF solutions at so-called Coulson--Fischer points as molecular structure changes, with the low-lying unrestricted (UHF) states in \ce{H2} providing the archetypal example.\cite{Coulson1949}
Recently, holomorphic Hartree--Fock (h-HF) theory has been developed as a method for extending real HF states into the complex plane, beyond the Coulson--Fischer points at which they vanish in conventional HF.\cite{Hiscock2014, Burton2016, Burton2018, Burton2019a}
In h-HF, the complex-conjugation of orbital coefficients is removed from the conventional energy function to define a complex-analytic continuation of the conventional HF equations.\cite{Hiscock2014, Burton2016, Burton2018}
The resulting h-HF stationary points then exist across all molecular geometries, allowing methods such as NOCI to be generalised as alternatives to conventional multireference approaches such as the complete active-space SCF framework.\cite{Burton2016,Burton2019c}
Furthermore, h-HF theory has provided extensive insight into the fundamental nature of multiple SCF solutions, revealing that discrete HF solutions can be connected as one continuous structure in the complex plane\cite{Burton2019a} and allowing new symmetries to be identified that ensure real HF energies with non-Hermitian Fock matrices.\cite{Burton2019b}
HF theory represents only one example of the SCF approximation and, as a mean-field method, fails to accurately reproduce the electron-electron correlation that is essential for the correct prediction of chemistry.\cite{SzaboBook,electronCorrelation2, tew, electronCorrelation}
An alternative approach, DFT has been developed to capture electron correlation in the SCF framework.\cite{HK, KS}
DFT approximations generally utilise empirical energy functionals of the electron density to describe the most physically relevant electron correlation effects. \cite{KohnNobel, DFTperspective}
The relative accuracy, low-order scaling, and computational simplicity of DFT has led to its widespread application as one of the most popular electronic structure techniques.\cite{100papers}
In principle, the SCF nature of DFT can also produce multiple stationary points with the same potential applications as multiple HF states.
For example, higher energy solutions can be exploited as approximations to excited states through the $\Delta$SCF framework.\cite{Theophilou1979,Gunnarsson1976,Gorling1999,Jones1989}
However, the behaviour of multiple DFT solutions as the molecular structure or chosen functional changes, and their relationship to standard HF solutions, appears relatively unexplored.
This lack of knowledge is both surprising and concerning given that certain DFT solutions are known to also disappear at Coulson--Fischer points, leading to kinks and discontinuities along the corresponding potential energy surfaces.\cite{MoriSanchez2014}
We therefore believe that a detailed investigation into the relationship between multiple HF and DFT solutions is well overdue.
In this work, we aim to extend our understanding of multiple DFT solutions by following solutions along a path between HF theory and DFT.
In Section \ref{sec:motivation} the relationship between the solutions in HF and two fundamental DFT functionals are investigated for a typical electron transfer model.\cite{Jensen2018}
We find that DFT solutions can coalesce and vanish in exactly the same manner as real HF solutions.
Motivated by this discovery, in Section \ref{sec:theory} and beyond we investigate a holomorphic extension of DFT with the potential to analytically continue DFT solutions across all molecular structures.
In doing so, we reveal fundamental relationships between the SCF states of DFT functionals and those of HF, laying the foundation for a more informed exploitation of multiple DFT solutions in chemical applications.
\section{Scaling Between Hartree--Fock and DFT}
\label{sec:motivation}
HF theory provides the foundation for almost all sophisticated wavefunction-based electronic structure calculations.
The inadequacies in the HF description of molecules have been well-investigated and thus, although it does not produce the exact electronic energy, crucial understanding can be obtained from a HF calculation.\cite{SzaboBook}
It is therefore interesting to investigate how SCF states evolve from this approximate but well-defined HF description to a (hopefully) more accurate, but often empirical, DFT description.
\begin{figure}[b!]
\center
\includegraphics[width=\columnwidth]{new_plots/updated_DAE_states.pdf}
\caption[Electronic energies of the electron transfer model]
{ \label{fig:reactionEnergy}
Electronic energies along the electron transfer reaction trajectory for the model electron transfer system shown.
The donor (D) and acceptor (A) states are interconverted by symmetry at the transition state, while the E state is symmetric across all geometries.
Real HF energies (solid lines) have previously been reported in Ref.~\onlinecite{Jensen2018} and are plotted relative to the minimum energy of the E state.
Dashed lines indicate the holomorphic continuation of a given state into the complex plane, where only the real component of the h-HF energy is plotted.
Only one low-energy state can be identified using B3LYP-DFT, and this is plotted relative to its minimum energy.
}
\end{figure}
The electron transfer model \ce{C_7H_6F_4^{.+}} studied by Jensen \textit{et al.}\cite{Jensen2018}\ provides an interesting case-study for comparing the HF and DFT approximations.
In this model, a single electron transfers from one carbon-difluoride group to its symmetric counterpart along a collective reaction coordinate (see Figure~\ref{fig:reactionEnergy}).
When applying HF theory, three chemically relevant SCF states can be identified corresponding to the symmetry-broken diabatic electron donor (D) and acceptor (A) configurations, and a third delocalised symmetric state (E) that represents the transferring electron.
All three states are stationary solutions to the real HF equations at the minimum energy crossing point (MECP) of the D and A states.
However, as the molecule distorts away from the MECP towards the donor or acceptor structure, the A/D and E configurations coalesce and vanish.
These properties of the real HF states were described in detail in Ref.~\onlinecite{Jensen2018}, although we can now report the existence of the complex-valued h-HF extensions shown in Figure~\ref{fig:reactionEnergy}.
Alternatively, the B3LYP-DFT functional only yields one low-lying stationary state with an electron density at the MECP that most closely resembles the E state (orange line in Fig.~\ref{fig:reactionEnergy}).
This DFT solution predicts a single energy minimum along the reaction coordinate, providing a contrasting picture to the electron transfer predicted by the symmetry-broken HF states (although the reaction trajectory is not optimised for the B3LYP energy).
It is not immediately obvious which features of the HF and B3LYP potentials cause these qualitatively different energy surfaces, or which potential would provide the most faithful representation of the electron transfer process.
However, it is surprising that the multiple symmetry-broken HF states, which resemble diabatic electron transfer configurations, are completely absent in the B3LYP-DFT description.
To understand \emph{why} these additional states no longer exist using B3LYP-DFT, we follow the real HF states as the SCF approximation is continuously scaled from HF to DFT.
Unless otherwise stated, all further calculations are performed at the MECP geometry where the relevant HF states exist and the D and A states become degenerate.
For simplicity, we consider the minimal STO-3G basis rather than the cc-pVDZ basis used by Jensen \textit{et al.},\cite{Jensen2018} although this does not change any qualitative features of the SCF solutions.
The relationship between HF theory and DFT can be seen when the HF\cite{SzaboBook} and KS-DFT\cite{KS, KohnNobel} equations are written respectively as
\begin{subequations}
\begin{align}
\left[-\frac{1}{2} \nabla^2 + v_\text{eN}(\mathbf{r}) + j(\mathbf{r}) + \hat k(\mathbf{r})\right] \phi_i^\text{HF}(\mathbf{r})
&= \epsilon_i^\text{HF} \phi_i^\text{HF}(\mathbf{r})
\label{HFscf}
\\
\left[-\frac{1}{2} \nabla^2 + v_\text{eN}(\mathbf{r}) + j(\mathbf{r}) + v_\text{XC}(\mathbf{r})\right]\phi_i^\text{KS}(\mathbf{r})
&= \epsilon_i^\text{KS} \phi_i^\text{KS}(\mathbf{r}).
\label{KSscf}
\end{align}
\end{subequations}
Here, the electronic kinetic operator $-\frac{1}{2}\nabla^2$, electron-nuclear potential $v_\text{eN}(\mathbf{r})$, Coulomb potential $j(\mathbf{r})$, and the respective exchange operator $\hat k(\mathbf{r})$ and exchange-correlation potential $v_\text{XC}(\mathbf{r})$ are applied to the HF orbitals $\phi_i^\text{HF}(\mathbf{r})$ or the KS orbitals $\phi_i^\text{KS}(\mathbf{r})$ to obtain the Lagrange multipliers $\epsilon_i^\text{HF}$ or $\epsilon_i^\text{KS}$.
Each molecular orbital is expanded in terms of the $m$-dimensional finite basis set with the orbital coefficients $c_{\cdot i}^{\mu \cdot}$ as
\begin{equation}
\phi_i(\mathbf{r}) = \sum_{\mu}^{m} \chi_\mu (\mathbf{r}) c_{\cdot i}^{\mu \cdot},
\end{equation}
where the atomic orbitals (AOs) and MOs are given by $\chi_\mu (\mathbf{r})$ and $\phi_i(\mathbf{r})$ respectively.
Here we employ the nonorthogonal tensor notation defined in Ref.~\onlinecite{HeadGordon1998}, and apply the Einstein summation convention whenever summation is not indicated explicitly.
These HF and DFT equations can be conceptually unified by introducing a parametrised exchange-correlation operator $\hat v_{\text{XC}}(\mathbf{r}; q)$ that interpolates between HF exchange and DFT exchange-correlation with the form
\begin{equation}
\hat v_{\text{XC}}(\mathbf{r}; q) = (1-q)\, \hat k(\mathbf{r}) + q\, \nu_{\text{XC}}(\mathbf{r}).
\label{convScaling}
\end{equation}
A scaling parameter of $q=0$ corresponds to a pure HF calculation, while $q=1$ refers to a pure DFT calculation.
Individual ground- and excited-state SCF solutions can then be traced between different functionals using the maximum overlap method (MOM),\cite{Gilbert2008, Barca2018} where non-\textit{Aufbau} optimisation is achieved by selecting the new occupied orbitals on each SCF iteration according to their overlap with the occupied orbitals on the previous iteration.
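For illustration, such a scan can be sketched with PySCF, assuming its composite exchange-correlation string syntax; here the diatomic molecule is a stand-in for the actual system, and a single solution is followed simply by recycling the converged density as the next initial guess rather than by full MOM state-following:
\begin{verbatim}
import numpy as np
from pyscf import gto, dft

mol = gto.M(atom='H 0 0 0; H 0 0 1.2', basis='sto-3g', unit='Angstrom')

dm = None
for q in np.linspace(0.0, 1.0, 11):
    mf = dft.RKS(mol)
    # (1-q)*HF exchange + q*LDA exchange; empty correlation part,
    # mirroring the exchange-only scaling of Eq. (convScaling)
    mf.xc = f'{1.0 - q:.4f}*HF + {q:.4f}*LDA,'
    mf.kernel(dm0=dm)            # warm start from the previous solution
    dm = mf.make_rdm1()
    print(f'q = {q:.1f}   E = {mf.e_tot:.8f} Ha')
\end{verbatim}
The density recycling shown here can lose a higher-energy solution when states approach one another, which is why the calculations reported in this work rely on MOM instead.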
As the physical functional evolves, the relationship between different SCF solutions can be visualised by considering a similarity measure for two SCF states $\kappa$ and $\lambda$.
Here we apply the distance measure introduced by Thom \textit{et al.},\cite{Thom2008} which uses the density matrices ${^{\kappa}\kern-0.15em P}$ and ${^{\lambda}\kern-0.15em P}$ to define the distance between two $N$-electron states as
\begin{equation}
d_{\kappa \lambda}^2
= \norm{ {^{\kappa}\kern-0.15em P} - {^{\lambda}\kern-0.15em P} }^2
= N - {^{\kappa}\kern-0.15em P}^{\mu \nu} S_{\nu \sigma} {^{\lambda}\kern-0.15em P}^{\sigma \tau} S_{\tau \mu}.
\label{eq:sqDistance}
\end{equation}
The density matrix for a given state $\kappa$ is defined in terms of the occupied MO coefficients $({^{\kappa}\kern-0.15em c})_{\cdot i}^{\mu \cdot}$ as
\begin{equation}
{^{\kappa}\kern-0.15em P}^{\mu \nu}
= ({^{\kappa}\kern-0.15em c}^{\vphantom{*}})_{\cdot i}^{\mu \cdot} ({^{\kappa}\kern-0.15em c}^{*})_{i \cdot}^{\cdot \nu},
\end{equation}
where $S_{\nu \sigma}$ denotes the AO overlap matrix, and the summation of repeated indices is implicit.
The second equality in Eq.~\eqref{eq:sqDistance} bounds the distance measure as $d_{\kappa \lambda}^2 \in [0,N]$, giving the distance measure in units of `electron number'.
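In matrix form, Eq.~\eqref{eq:sqDistance} reduces to a single trace; a sketch, assuming the contravariant density matrices and covariant AO overlap matrix are stored as NumPy arrays, is:
\begin{verbatim}
import numpy as np

def scf_distance_sq(P_k, P_l, S, N):
    """Squared inter-state distance of Eq. (sqDistance):
    d^2 = N - Tr(P_k S P_l S), bounded in [0, N]; the trace is real
    for Hermitian density matrices."""
    return N - np.trace(P_k @ S @ P_l @ S).real
\end{verbatim}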
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\columnwidth,trim=70pt 70pt 80pt 60pt, clip]{figures/LDA/mecp}
\subcaption{}
\label{subfig:LDA_mecp}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\columnwidth,trim=70pt 70pt 80pt 60pt, clip]{figures/LDA/not_mecp}
\subcaption{}
\label{subfig:LDA_not_mecp}
\end{subfigure}
\caption{Relative distances of the SCF solutions as the exchange-correlation functional is scaled from exact HF exchange $k$ to the LDA-exchange functional.\cite{bookLDA}
The grey lines between solutions at each plane of constant $q$ correspond to the square-root of the inter-state distances~\eqref{eq:sqDistance}.
(\subref{subfig:LDA_mecp}) At the MECP, the E (green line), D (red line), and A (blue line) states all simultaneously coalesce at approximately $q=0.3$ to leave a single SCF solution (black line).
(\subref{subfig:LDA_not_mecp}) When the structure is distorted towards the acceptor structure, the E (green line) and D (red line) states coalesce and both vanish at approximately $q=0.3$, while the A state (blue line) remains independent across all values of $q$.
}
\label{xOnlyLDADist}
\end{figure*}
We first consider scaling between HF and the analytic Local Density Approximation (LDA) exchange functional.\cite{bookLDA}
At the MECP, the three SCF states A, D, and E simultaneously coalesce as $q$ scales between the HF and LDA-exchange description, as demonstrated in Figure~\ref{subfig:LDA_mecp} where the distance measure between the states falls to zero at the point of coalescence.
This three-fold coalescence occurs at a ``confluence'' point,\cite{Fukutome1975,Burton2018} where the degenerate A and D solutions coalesce with the higher energy E state to leave only the E state for larger values of $q$ (black line).
In contrast, when the molecular geometry is marginally distorted towards the acceptor structure, the symmetry-broken D state and the symmetric E state coalesce and vanish at a ``pair annihilation'' point,\cite{Fukutome1975,Burton2018} while the A state can be traced continuously from HF to LDA (Figure~\ref{subfig:LDA_not_mecp}).
This observation indicates that the single LDA solution is not a direct mirror of the E state in HF, but evolves continuously from the A state to the D state as the molecular structure changes.
The LDA-DFT stationary state therefore appears to behave as an adiabatic state (as one would expect for states obtained using the exact functional), in contrast to the diabatic behaviour of the multiple HF states.
But why do DFT functionals yield adiabatic states rather than the multiple symmetry-broken diabatic states observed using HF?
One possible explanation for the coalescence of SCF states is the self-interaction error (SIE), which is a well-known problem of not only LDA-DFT but also more elaborate DFT functionals such as B3LYP.\cite{Perdew1981, Zhang1998, Lundberg2005, MardirossianReview2017}
It has been shown that the exchange contribution of different DFT functionals may include dynamic correlation effects through the SIE, and these effects can dominate the change of electron density between different correlation functionals.\cite{Graefenstein2009}
\begin{figure*}[thb!]
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth,trim=70pt 70pt 80pt 40pt, clip]{figures/B3LYP/full_B3LYP}
\subcaption{}
\label{xcOnlyDist}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth,trim=70pt 70pt 80pt 40pt, clip]{figures/B3LYP/conly_B3LYP}
\subcaption{}
\label{cOnlyDist}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth,trim=70pt 70pt 80pt 40pt, clip]{figures/B3LYP/xonly_B3LYP}
\subcaption{}
\label{xOnlyDist}
\end{subfigure}
\caption{
Relative distances for the three SCF states in the different scaling modes.
(a) Full B3LYP exchange-correlation functional.\cite{B88, LYP}
(b) LYP correlation functional:\cite{LYP} No coalescence of the SCF states is observed and the distance measure remains virtually unchanged.
(c) B3LYP exchange functional:\cite{B88} Coalescence of the three SCF states occurs in a similar manner to the introduction of the full B3LYP functional.
}
\end{figure*}
To understand how different components in a DFT functional affect the existence of multiple SCF solutions, we scale the same electron transfer model between HF and the popular B3LYP functional\cite{B88, LYP, 100papers} and find that it leads to the same pattern of coalescence between the three SCF states.
We then decompose the B3LYP-DFT functional into its constituent exchange and correlation energy contributions and consider scaling between the HF and B3LYP-DFT potential using only the LYP correlation term (Figure~\ref{cOnlyDist}) or the exchange description (Figure~\ref{xOnlyDist}), and the scaling between HF and the full B3LYP functional (Figure~\ref{xcOnlyDist}).
When only the LYP correlation term is included (Figure~\ref{cOnlyDist}), the SCF states remain distinct for all values of the scaling parameter and no coalescence is observed.
In contrast, introducing only the exchange contribution (Figure~\ref{xOnlyDist}) causes all three SCF states to coalesce at approximately the same scaling level as the full B3LYP picture.
This comparison demonstrates that the exchange contribution, rather than the correlation term, provides the driving force for the coalescence of states as the SCF approximation is scaled between HF and B3LYP.
Furthermore, inspecting the individual components of the energy (not shown) reveals that the magnitude of the total exchange contribution decreases as one moves from HF to B3LYP.
The coalescence of the symmetry-broken SCF states is therefore driven by the overall strength of the exchange interaction.
This result is entirely consistent with other instances of symmetry-breaking in HF theory, for example the emergence of spin-density waves in antiferromagnetic materials.\cite{Slater1951}
The electron transfer model reveals that smoothly changing the exchange-correlation functional can lead to the coalescence of SCF solutions in exactly the same way as changing the molecular structure.
Furthermore, we have found that the strength of the exchange interaction is a key factor in controlling whether several symmetry-broken SCF stationary states can be identified.
While there are three distinct solutions using the HF exchange functional, a small perturbation of this exchange description towards DFT is sufficient to collapse these diabatic solutions onto one adiabatic state.
However, to completely connect the SCF states from HF to DFT, we require an approach that extends SCF solutions beyond the scaling levels at which SCF states vanish.
Following the framework of holomorphic HF, we believe that a holomorphic extension to DFT will allow SCF states to be analytically continued into the complex plane,
and developing such a method forms the focus for the remainder of this paper.
\section{Holomorphic Density Functional Theory}
\label{sec:theory}
Before deriving a holomorphic extension to DFT, we first review the h-HF approach itself.\cite{Hiscock2014,Burton2016,Burton2018,BurtonThesis}
The original motivation for h-HF theory is rooted in the desire to extend real HF states across all molecular geometries and construct a continuous basis for multireference NOCI calculations.\cite{Thom2009}
To extend real HF states beyond the Coulson--Fischer points at which they vanish, the h-HF energy function $\tilde{E}$ is formulated as a complex-analytic extension of the real HF energy,\cite{Hiscock2014,Burton2016} given for a closed-shell $N$-electron system with $m$ basis functions as
\begin{align}
\tilde{E}
&= E_{\text{nuc}} + \sum_{\mu \nu}^m \tilde{P}^{\nu \mu} \qty( 2h_{\mu \nu} + 2 j_{\mu \nu} - k_{\mu \nu} ),
\label{eq:HoloE}
\end{align}
where the holomorphic density matrix has been introduced as
\begin{equation}
\tilde{P}^{\nu \mu} = \sum_{i}^{N/2} c^{\nu \cdot}_{\cdot i} c^{\cdot \mu}_{i \cdot}.
\label{eq:HoloDensity}
\end{equation}
Here, $h_{\mu \nu}$ denotes the one-electron integrals and the self-consistent Coulomb and exchange matrices are defined in terms of two-electron integrals as
\begin{subequations}
\begin{align}
j_{\mu \nu} = \sum_{\sigma \tau}^{m} \langle \mu \sigma |\nu \tau\rangle \tilde{P}^{\tau \sigma},
\\
k_{\mu \nu} = \sum_{\sigma \tau}^{m} \langle\mu \sigma | \tau \nu\rangle \tilde{P}^{\tau \sigma}.
\end{align}
\end{subequations}
By removing the complex-conjugation of orbital coefficients in the holomorphic density matrix \eqref{eq:HoloDensity}, the energy function \eqref{eq:HoloE} satisfies the Cauchy--Riemann conditions\cite{FischerBook} and becomes a complex-analytic polynomial of the orbital coefficients, in contrast to the standard formulation of the HF energy using complex molecular coefficients (see Ref.~\onlinecite{Pople1971}).
As a result, every real HF state remains a stationary point of the h-HF energy and, when a real HF state disappears, its holomorphic counterpart continues to exist with complex-valued orbital coefficients.\cite{Hiscock2014, Burton2016, Burton2018,Burton2019c}
Crucially, the fact that $\tilde{E}$ is a polynomial of only the orbital coefficients and not their complex conjugates is sufficient to allow a rigorous proof that the number of h-HF states for two-electron systems is constant for all molecular structures.\cite{Burton2018,BurtonThesis}
As a complex-analytic extension to the real HF energy, the operator form of the holomorphic energy function remains the same as the conventional HF energy function~\eqref{HFscf}, and can still be written in terms of the one-electron integrals $h_{\mu \nu}$, and the self-consistent integrals $j_{\mu \nu}$ and $k_{\mu \nu}$ representing the Coulomb interaction and the exact exchange term respectively.
Furthermore, the holomorphic electron density matrix $\tilde{P}^{\nu \mu}$ can be considered as a complex-analytic extension of the real density matrix, although the holomorphic density matrix is complex-symmetric rather than Hermitian.\cite{Burton2018}
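As a sketch of how little this changes an implementation, the holomorphic density matrix and closed-shell energy of Eqs.~\eqref{eq:HoloDensity} and \eqref{eq:HoloE} differ from the conventional expressions only by the removal of a complex conjugate:
\begin{verbatim}
import numpy as np

def holomorphic_density(C_occ):
    """Holomorphic density matrix, Eq. (HoloDensity): P~ = C C^T
    (note: no .conj()), complex-symmetric rather than Hermitian."""
    return C_occ @ C_occ.T

def holomorphic_rhf_energy(P, h, j, k, E_nuc):
    """Closed-shell holomorphic energy, Eq. (HoloE); h, j and k are
    the one-electron, Coulomb and exchange matrices built from P."""
    return E_nuc + np.sum(P.T * (2*h + 2*j - k))
\end{verbatim}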
In analogy to h-HF theory, we expect that the holomorphic DFT (h-DFT) energy in the Kohn--Sham formalism should also form a complex-analytic function of only the orbital coefficients (and not their complex conjugates) to ensure that its stationary states never disappear.
However, the form of the DFT exchange-correlation functional is not known \textit{a priori} and the exchange-correlation functional is not necessarily a pure polynomial of the MO coefficients.
Instead, we retain the DFT tradition of focussing on the electron density and define the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$.
We require this holomorphic electron density to depend on only the orbital coefficients $\{c_{\cdot i}^{\mu \cdot}\}$ and not their complex-conjugates by defining $\tilde{\rho}\qty(\mathbf{r})$ as
\begin{equation}
\begin{split}
\tilde{\rho}(\mathbf{r})
&= \sum_i^{N/2} \bigg(\sum_{\mu}^{m} \chi_{\mu}\qty(\mathbf{r}) c^{\mu \cdot}_{\cdot i} \bigg) \bigg(\sum_{\nu}^{m} c^{\cdot \nu}_{i \cdot} \chi_{\nu}\qty(\mathbf{r}) \bigg)
\\
&= \sum_{\mu \nu}^{m} \tilde{P}^{\mu \nu} \chi_{\mu}\qty(\mathbf{r}) \chi_{\nu}\qty(\mathbf{r}),
\end{split}
\label{holoDensity}
\end{equation}
where the holomorphic density matrix, $\tilde{P}^{\mu \nu}$, is given by Eq.~\eqref{eq:HoloDensity}.
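For visualisation purposes, the (generally complex) holomorphic density of Eq.~\eqref{holoDensity} can be evaluated on a grid; a minimal sketch, assuming the AO values have been tabulated at each grid point, is:
\begin{verbatim}
import numpy as np

def holomorphic_density_on_grid(P, chi_vals):
    """Evaluate rho~(r) of Eq. (holoDensity) at grid points;
    chi_vals[p, mu] holds the AO value chi_mu at point p, and the
    result may be complex-valued."""
    return np.einsum('mn,pm,pn->p', P, chi_vals, chi_vals)
\end{verbatim}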
\subsection{Holomorphic Density Fitting}
\label{sec:DensityFitting}
Following the initial h-HF investigation,\cite{Hiscock2014} we first attempt to identify h-DFT solutions by analytically solving the h-DFT equations.
Since exchange-correlation functionals used in DFT often contain fractional exponents of the electron density, we retain a complex-analytic polynomial
form by introducing density fitting methods.\cite{densFit1, densFit2, densFit3, Dunlap1979, densFit5, densFit6, densFit7, densFit8, densFit9, densFit10, densFit11, densFit12}
For the specific case of the LDA exchange energy functional $E_\text{X}^{\text{LDA}}$, given by
\begin{equation}
E_{\text{X}}^{\text{LDA}}[\rho] =-\frac{3}{4} \left( \frac{3}{\pi} \right)^{\frac13} \int \rho(\mathbf{r})^{\frac43} \, \mathrm{d}^3\mathbf{r},
\end{equation}
we express the holomorphic cubed-root electron density $\tilde{\rho}\qty(\mathbf{r})^{\frac13}$ in a polynomial form as
\begin{equation}
\tilde{\rho}\qty(\mathbf{r})^{\frac13} = \sum_{\alpha}^{\infty} f^{\alpha} \xi_{\alpha}(\mathbf{r})
\label{eq:CubedRootDen}
\end{equation}
where the density-fitting basis functions, $\xi_{\alpha}(\mathbf{r})$, are different to the AO basis.
The holomorphic LDA exchange functional $\tilde{E}_\text{X}^\text{LDA}$ is then expressed as
\begin{align}
\hspace{-2em}&\tilde{E}_\text{X}^\text{LDA}[\tilde{\rho}^{\frac13}]
= - \frac{3}{4} \left( \frac{3}{\pi} \right)^{\frac13} \hspace{-0.5em}\bigintsss \left( \sum_{\alpha} f^{\alpha} \xi_{\alpha} (\mathbf{r}) \right)^4 \text{d}^3\mathbf{r}
\\
&= - \frac{3}{4} \left( \frac{3}{\pi} \right)^{\frac13} \hspace{-0.5em} \bigintsss \sum_{\alpha \beta \gamma \delta} f^{\alpha} f^{\beta} f^{\gamma} f^{\delta} \xi_{\alpha}(\mathbf{r}) \xi_{\beta}(\mathbf{r}) \xi_{\gamma}(\mathbf{r}) \xi_{\delta}(\mathbf{r}) \text{d}^3\mathbf{r} \nonumber
\end{align}
leading to a fourth-order polynomial in the new coefficient set $\{f^{\alpha}\}$.
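For a finite fitting basis, this quartic polynomial is a single tensor contraction; a sketch, assuming the four-centre fitting-basis integrals have been precomputed, is:
\begin{verbatim}
import numpy as np

def holo_lda_exchange(f, Q):
    """Holomorphic LDA exchange as a quartic polynomial in the
    cube-root-density fitting coefficients f, where
    Q[a, b, c, d] = \int xi_a xi_b xi_c xi_d d^3r (precomputed)."""
    c = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0)
    return c * np.einsum('a,b,c,d,abcd->', f, f, f, f, Q)
\end{verbatim}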
The expansion \eqref{eq:CubedRootDen} now allows the holomorphic DFT energy $\tilde{E}$ to be expressed as a complex-analytic polynomial of the MO coefficients $\{c_{\cdot i}^{\mu \cdot}\}$ and the cubed-root electron-density expansion coefficients $\{f^{\alpha}\}$.
We therefore expect that this polynomial energy functional will enable complex-analytic continuations of SCF states in a combined HF and DFT framework to be identified beyond the points where real SCF states coalesce and disappear, and our current implementation is described in Appendix~\ref{sec:appendix}.
The most famous example of SCF states coalescing occurs in the bond dissociation of $\ce{H2}$, as shown for the conventional restricted HF and LDA-DFT methods using the minimal STO-3G basis in Figure~\ref{fig:hf_lda_overlay}.
A total of $n=4$ stationary SCF states can be identified at large bond lengths,
labelled by their electronic configuration as $\sig_{\text{g}}^2$, $\sig_{\text{u}}^2$ and the two degenerate ionic configurations H$^{\pm}$..H$^{\mp}$.\cite{Burton2018}
In both LDA-DFT and HF, the $\sig_{\text{u}}^2$ and H$^{\pm}$..H$^{\mp}$ states coalesce as the bond length is shortened.
The relative behaviour of the SCF states is the same for both LDA-DFT and HF, with the energy ordering and degeneracies of each state unchanged.
However, the absolute LDA-DFT energies, and therefore the location of the coalescence point, are shifted compared to the HF energies.
In HF theory, this coalescence point occurs at approximately $\SI{1.15}{\angstrom}$, whereas in LDA-DFT it is located at approximately $\SI{0.87}{\angstrom}$.
\begin{figure}[hbt!]
\includegraphics[width=0.96\columnwidth]{new_plots/conv_H2_holo.pdf}
\caption{Real contribution to the (holomorphic) electronic SCF energies of \ce{H2} along the bond dissociation coordinate using restricted HF theory (solid line) and restricted LDA-DFT (dashed line). The dark green dashed lines correspond to holomorphic LDA states which are obtained by tracing from holomorphic HF theory at the indicated points ($R=\SI{0.70}{\angstrom}$ and $R=\SI{1.10}{\angstrom}$).
}
\label{fig:hf_lda_overlay}
\end{figure}
\begin{figure*}[htbp!]
\begin{subfigure}[b]{0.32\linewidth}
\begin{tikzpicture}
\draw (0,0) node[align=center] (plot) {%
\includegraphics[width=1\columnwidth]{figures/H2_Mathematica/R400_q_plot.pdf}};
\draw (-1.5,-1.3) node[align=center] {$\sig_{\text{g}}^2$};
\draw (0.5,2.8) node[align=center] { \ce{H^{+}-H^{-}} };
\draw (2.2,2.6) node[align=center] { \ce{H^{-}-H^{+}} };
\draw (0.0,-1.7) node[align=center] {$\sig_{\text{u}}^2$};
\end{tikzpicture}
\subcaption{$R=\SI{4.00}{\angstrom}$}
\label{fig:uncoalesced}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\begin{tikzpicture}
\draw (0,0) node[align=center] (plot) {%
\includegraphics[width=1\columnwidth]{figures/H2_Mathematica/R110_q_plot.pdf}};
\draw (-1.2,-1.4) node[align=center] {$\sig_{\text{g}}^2$};
\draw (0.5,2.8) node[align=center] { \ce{H^{+}-H^{-}} };
\draw (2.2,2.6) node[align=center] { \ce{H^{-}-H^{+}} };
\draw (-1.1,1.4) node[align=center] {$\sig_{\text{u}}^2$};
\end{tikzpicture}
\subcaption{$R=\SI{1.10}{\angstrom}$}
\label{fig:inbetween}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\begin{tikzpicture}
\draw (0,0) node[align=center] (plot) {%
\includegraphics[width=1\columnwidth]{figures/H2_Mathematica/R075_q_plot.pdf}};
\draw (-1.2,-1.4) node[align=center] {$\sig_{\text{g}}^2$};
\draw (1.2,2.8) node[align=center] {$\sig_{\text{u}}^2$};
\end{tikzpicture}
\subcaption{$R=\SI{0.75}{\angstrom}$}
\label{fig:coalesced}
\end{subfigure}
\caption{%
SCF energy as a function of the rotation angle $\theta$ between the symmetry orbitals $\sig_{\text{g}}$ and $\sig_{\text{u}}$ and the scaling parameter $q$ between HF theory and LDA-DFT as described in Eq.~\eqref{convScaling}.
(\subref{fig:uncoalesced}) At a bond length of $R=\SI{4.00}{\angstrom}$, all four SCF solutions can be identified.
(\subref{fig:inbetween}) At a bond length of $R=\SI{1.10}{\angstrom}$, between the coalescence points of HF and LDA-DFT, the ionic states appear in the LDA-DFT limit ($q=1$) but have already coalesced in a HF framework ($q=0$).
(\subref{fig:coalesced}) In the equilibrium regime at a bond length of $R=\SI{0.75}{\angstrom}$, only two stationary points are observed and the ionic SCF states have vanished in both LDA-DFT and HF.}
\label{rotations}
\end{figure*}
The closed-shell SCF wave function for \ce{H2} contains only one doubly occupied spatial orbital $\phi\qty(\mathbf{r})$ which can be expanded in terms of the (real orthogonal) MO basis using the rotation angle $\theta$ as
\begin{equation}
\phi\qty(\mathbf{r}) = \sig_{\text{g}}\qty(\mathbf{r}) \cos\theta + \sig_{\text{u}}\qty(\mathbf{r}) \sin\theta.
\label{eq:OccOrb}
\end{equation}
The coalescence of states of \ce{H2} for a linear interpolation between HF and LDA-DFT can be visualised by parametrising the SCF energy surface in terms of the exchange-correlation scaling $q$ and the orbital rotation angle $\theta$, as shown in Figure~\ref{rotations}.
For a given scaling $q$ or bond length $R$, the number of real SCF states corresponds to the number of stationary points with respect to $\theta$.
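As an illustrative sketch, the stationary points can be counted numerically, assuming a vectorised energy function $E(\theta)$ at fixed $q$ and $R$:
\begin{verbatim}
import numpy as np

def count_stationary_points(energy, n_grid=720):
    """Count stationary points of E(theta) over one period
    [-pi/2, pi/2) by locating sign changes of the numerical
    derivative dE/dtheta between neighbouring grid points."""
    theta = np.linspace(-np.pi/2, np.pi/2, n_grid, endpoint=False)
    dE = np.gradient(energy(theta), theta)
    return int(np.sum(np.sign(dE[:-1]) != np.sign(dE[1:])))
\end{verbatim}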
As expected,\cite{Burton2018} there are four stationary points for both HF and LDA-DFT at large bond lengths (Figure~\ref{fig:uncoalesced}), and only two stationary points at the equilibrium geometry, where the ionic solutions have vanished at the coalescence point (Figure~\ref{fig:coalesced}).
The bonding $\sig_{\text{g}}^2$ and anti-bonding $\sig_{\text{u}}^2$ states can be identified for all bond lengths in both HF and LDA-DFT.
In contrast, between the HF and LDA-DFT Coulson--Fischer points, the ionic states that have disappeared in the HF framework reappear in the LDA-DFT one (Figure~\ref{fig:inbetween}).
This observation is surprising since it is the \emph{opposite} scenario to the model electron transfer system, where the symmetry-broken SCF states existed in the HF case but not LDA-DFT.
We believe that different types of SCF symmetry breaking (e.g.\ singlet or triplet instabilities) may have distinct coalescence patterns upon scaling between HF and DFT, in turn suggesting fundamental differences in the types of symmetry breaking using various functionals.
At bond lengths shorter than the coalescence point, it is known that h-HF solutions continue to exist with complex-valued orbital coefficients.\cite{Burton2018}
We expect our analytic h-DFT approximation to allow both real- and complex-valued h-HF stationary states to be mapped onto the h-LDA states.
Like h-HF theory, we require the density-fitting h-DFT energy to retain the conventional (real) SCF solutions when they exist.
To verify our h-DFT approach, we therefore applied the density-fitting method to study paths between the SCF states that are real in both HF and LDA-DFT.
We rewrite the nuclear-attraction energy $E_\text{nuc}$ in terms of the density-fitting basis to ensure that the HF energy is also dependent on the fitting parameters $f$,
allowing us to continuously link holomorphic HF theory and DFT.
Comparing the electronic energies for the $\sig_{\text{g}}^2$ and $\sig_{\text{u}}^2$ states from the holomorphic density fitting and a reference calculation using \textsc{Q-Chem~5.2}\cite{Q-Chem} demonstrates a sub-$\text{mE}_{\text{h}}$ agreement for both HF and LDA-DFT calculations, as shown at $R=\SI{6.35}{\angstrom}$ in Table~\ref{12_bohr}.
However, although \ce{H2} has exactly four true h-RHF solutions using a minimal basis,\cite{Burton2018}
solving for all RHF states in the density-fitting implementation leads to more than four solutions.
Many of these solutions have the same energies as the exact h-HF solutions, and we believe that the additional solutions arise from implicitly allowing different cubed-roots of the electron density.
Clearly, only the real cube-root of the electron density provides a physical solution, since the additional SCF solutions of the density-fitting equations do not link to the real-valued HF energies where these exist.
We therefore believe that these additional solutions are mathematical artefacts, and can safely be ignored.
\begin{table}[h!]
\begin{ruledtabular}
\begin{tabular}{ld{2.6}d{2.6}}
& \head{$E\qty(\sig_{\text{g}}^2)$ } & \head{$E \qty(\sig_{\text{u}}^2)$}
\Tstrut\Bstrut\\
\hline\Tstrut
\textbf{HF ($q=0.0$)} & &
\\
Holomorphic Density Fitting& -0.587464 & -0.241891
\\
Reference (\textsc{Q-Chem}) & -0.587531 & -0.241891
\\
\hline\Tstrut
\textbf{LDA-DFT ($q=1.0$)} & &
\\
Holomorphic Density Fitting& -0.685653 & -0.131509
\\
Reference (\textsc{Q-Chem}) & -0.685748 & -0.131498
\\
\end{tabular}
\end{ruledtabular}
\caption{%
HF and LDA-DFT energies of the conventional SCF states computed using the holomorphic density fitting approach and a reference calculation using \textsc{Q-Chem~5.2}\cite{Q-Chem} are found to agree with sub-$\text{mE}_\text{h}$ accuracy at $R = \SI{6.35}{\angstrom}$.
All energies are given in atomic units of Hartrees ($\text{E}_{\text{h}}$).
\label{12_bohr}
}
\end{table}
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{new_plots/real_070_z.pdf}
\subcaption{}
\label{fig:real_holoDens_R070}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{new_plots/imag_070_z.pdf}
\subcaption{}
\label{fig:imag_holoDens_R070}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\textwidth]{new_plots/real_209_up_z.pdf}
\subcaption{}
\label{fig:holo_cdens_r_b}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{new_plots/imag_209_up_z.pdf}
\subcaption{}
\label{fig:holo_cdens_i_b}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{new_plots/real_cf_209_z.pdf}
\subcaption{}
\label{fig:fitting_r}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{new_plots/imag_cf_209_z.pdf}
\subcaption{}
\label{fig:fitting_i}
\end{subfigure}
\caption{Real (\subref{fig:real_holoDens_R070}) and imaginary (\subref{fig:imag_holoDens_R070}) components of the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$ of the h-SCF state for different scaling factors $q$ at a bond length of $R=\SI{0.70}{\angstrom}$.
Real (\subref{fig:holo_cdens_r_b}) and imaginary (\subref{fig:holo_cdens_i_b}) components of the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$ of the ionic h-SCF state for different scaling factors $q$ at a bond length of $R=\SI{1.10}{\angstrom}$.
Real (\subref{fig:fitting_r}) and imaginary (\subref{fig:fitting_i}) components of the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$ of the h-RHF state at a bond length of $R=\SI{1.10}{\angstrom}$ calculated from the AO basis $\ket{\Psi}$ (solid line) and the density fitting basis $\ket{\xi}$ (dashed line).
}
\label{adiabat_change}
\end{figure*}
Beyond the coalescence point in HF theory, we can always successfully identify the complex h-HF extension of the ionic state using the density fitting method.
In principle, the holomorphic density-fitting method therefore allows both real and holomorphic SCF states to be continuously mapped from HF theory to LDA-DFT.
In practice, however, the success of the density fitting depends strongly on the size and choice of the density fitting basis.
To identify LDA extensions of the h-HF state, we first traced the h-HF solution using the density fitting at a bond length of $\SI{0.70}{\angstrom}$, where we expect the complex h-HF state to evolve continuously into the complex h-LDA solution.
While there are three degenerate density-fitting solutions in the h-HF framework that provide a real holomorphic energy matching the conventional h-HF implementation,\cite{Burton2018} only one of these solutions retains a meaningful holomorphic energy when traced to h-LDA-DFT, corresponding to the correct cube-root density.
Tracking the holomorphic ionic state from HF to LDA-DFT by relaxing the wave function at each scaling value, we find that the MO coefficients, and in turn the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$, do not change significantly between the two functionals, as shown in Figures~\ref{fig:real_holoDens_R070} and \ref{fig:imag_holoDens_R070}.
Within the h-LDA-DFT framework, we then traced this solution along the bond dissociation coordinate, giving the binding curve shown in Figure~\ref{fig:hf_lda_overlay}.
Surprisingly, the energy of this state did not converge onto the energy of the conventional ionic LDA-state at the coalescence point ($R=\SI{0.87}{\angstrom}$), as would be expected for a complex-analytic extension of the real ionic state.
To understand this effect, we then tracked the h-HF solution from HF to LDA at a bond length between the coalescence points of the two potentials ($R=\SI{1.10}{\angstrom}$), where the ionic state has already coalesced in HF theory but remains a real stationary point in the LDA-DFT framework.
For this bond length, there is a large rearrangement in the holomorphic electron density and, contrary to what we expect, we recover a complex-valued holomorphic SCF solution in h-LDA-DFT as well.
The real part of the holomorphic density (Figure \ref{fig:holo_cdens_r_b}) retains symmetry-broken ionic character in the limit of a LDA-DFT calculation, as expected since the ionic LDA-DFT state has not yet coalesced, but reduces to a more symmetric distribution in the HF case.
The imaginary component of the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$ however remains non-zero when tracking the holomorphic SCF state from HF to LDA-DFT, even though the ionic states exist in LDA-DFT (Figure~\ref{fig:holo_cdens_i_b}).
This state also yields a complex-valued holomorphic energy rather than recovering the real-valued ionic LDA-DFT energy that would be expected.
Moreover, its energy and electron density do not match the h-LDA state traced from shorter bond lengths, and following this new solution along the binding curve
reveals that it also does not converge onto the real LDA ionic state (see Figure \ref{fig:hf_lda_overlay}).
Clearly, the density fitting method is capable of yielding complex h-LDA states, but we have not been able to find a unique solution that corresponds to a complex-analytic extension of the real ionic state.
The current density-fitting implementation's failure to recover the holomorphic ionic state can be attributed to the fitting quality for the h-HF electron density.
Figures \ref{fig:fitting_r} and \ref{fig:fitting_i} compare the real and imaginary part of the holomorphic electron density calculated using the MO coefficients and the density-fitting coefficients.
Neither the real nor the imaginary part is fitted particularly well by the Gaussian-like fitting basis, indicating the need for a larger fitting basis, potentially with a different functional form.
While the density-fitting approach is conceptually exact, and recovers the desired polynomial electronic energy functional, it is too numerically challenging to apply in practice.
\subsection{Holomorphic Kohn--Sham Theory}
\label{sec:KohnSham}
\begin{figure*}[tbhp]
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\textwidth, trim=0pt 52pt 0pt 0pt, clip]{figures/H2_Mathematica/R075_hRHF_Re_surf_annotated_crop.pdf}\\
\includegraphics[width=\textwidth, trim=0pt 0pt 0pt 1pt, clip]{figures/H2_Mathematica/R075_hRHF_Im_surf_annotated_crop.pdf}
\subcaption{Hartree--Fock}
\label{subfig:SurfHF}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\textwidth, trim=0pt 52pt 0pt 0pt, clip]{figures/H2_Mathematica/R075_hRKS_Re_surf_annotated_crop.pdf} \\
\includegraphics[width=\textwidth, trim=0pt 0pt 0pt 1pt, clip]{figures/H2_Mathematica/R075_hRKS_Im_surf_annotated_crop.pdf}
\subcaption{LDA}
\label{subfig:SurfLDA}
\end{subfigure}
\caption{Comparison of the real (top) and imaginary (bottom) components of the (\subref{subfig:SurfHF}) h-HF and (\subref{subfig:SurfLDA}) h-LDA energy surfaces for a restricted SCF wave function parameterised by the single occupied orbital~\eqref{eq:OccOrb}.
The parity-time symmetric line (dashed red) has purely real h-SCF energies.
Complex-valued h-HF and h-LDA solutions are indicated by a green diamond, and occur along the parity-time symmetric line for both potentials.
The symmetry-pure $\sig_{\text{g}}^2$ and $\sig_{\text{u}}^2$ solutions correspond to the red circles at $\theta=0$~and~$\pm \pi / 2$ respectively.
}
\label{fig:SCFsurfaces}
\end{figure*}
To overcome the numerical issues of the density fitting approach, we also considered a na\"{i}ve implementation of h-DFT by removing any complex conjugation
of the MO coefficients from the SCF energy function.
Following the philosophy of Ref.~\onlinecite{Burton2016}, we simply defined the h-LDA functional using the holomorphic density defined in Eq.~\eqref{holoDensity}.
The corresponding restricted SCF energy surfaces can then be visualised as a function of the complex rotation angle $\theta$ that defines
the single occupied orbital Eq.~\eqref{eq:OccOrb}, as shown in Figure~\ref{fig:SCFsurfaces}.
We find a remarkable similarity between the topology of the h-HF and h-LDA energy surfaces for all complex values of $\theta$.
The number of stationary points is the same for both potentials, with two solutions along the real axis corresponding to the $\sig_{\text{g}}^2$ and $\sig_{\text{u}}^2$ states,
and two stationary points with complex-valued orbitals.
These complex-valued stationary points correspond to the holomorphic extensions of the ionic SCF states,
confirming that complex holomorphic extensions can also be identified for solutions that disappear using the LDA functional.
Analogously to the HF case, the complex h-LDA solutions occur along a line of strictly real energies defined by $\theta = \frac{\pi}{2} + \vartheta \mathrm{i}$,
for $\vartheta \in \mathbb{R}$ (red dashed line in Figure~\ref{fig:SCFsurfaces}).
In HF theory, this line has recently been shown to correspond with the conservation of parity-time symmetry in the molecular wave function,\cite{Burton2019b} providing a weaker
condition than Hermiticity for ensuring real electronic energies.\cite{Bender1998}
Our observations therefore provide the first evidence of parity-time symmetry in the DFT framework, paving the way for novel
applications of this new symmetry across single-particle approximations.
Encouraged by the existence of complex stationary points on the h-LDA energy surface, we implemented a holomorphic KS (h-KS) approach to iteratively identify h-DFT solutions.
Following the original h-HF SCF approach,\cite{Burton2016} the closed-shell holomorphic Fock matrix for the LDA functional is defined as
\begin{equation}
\tilde{F}_{\mu \nu} = h_{\mu \nu} + j_{\mu \nu} + \tilde{F}^{\text{LDA}}_{\mu \nu},
\label{eq:HoloFock}
\end{equation}
where the exchange-correlation contribution\cite{Pople1992} to the Fock matrix $\tilde{F}^{\text{LDA}}_{\mu \nu}$ is defined in terms of the holomorphic density as
\begin{equation}
\tilde{F}^{\text{LDA}}_{\mu \nu} = -\qty(\frac{6}{\pi})^{\frac{1}{3}} \int \tilde{\rho}\qty(\mathbf{r})^{\frac{1}{3}} \chi_{\mu}\qty(\mathbf{r}) \chi_{\nu}\qty(\mathbf{r}) \, \mathrm{d}^3 \mathbf{r}.
\end{equation}
The h-KS algorithm then proceeds in an analogous manner to the h-HF approach,\cite{Burton2016} with new occupied orbitals
on each iteration selected using a complex-symmetric extension of the maximum overlap method,\cite{Gilbert2008}
and convergence accelerated using the DIIS (direct inversion in the iterative subspace) extrapolation scheme.\cite{Pulay}
This approach can be trivially extended to a spin-unrestricted formalism by defining $\alpha$ and $\beta$ Fock matrices in terms of the corresponding $\alpha$ and $\beta$
holomorphic densities.
The use of the complex-symmetric holomorphic density matrix is closely related to non-Hermitian extensions of KS-DFT developed to describe
metastable resonance states,\cite{Ernzerhof2006} although here we do not introduce any complex absorbing potential.
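For illustration, a single h-KS iteration requires only minor changes to a standard KS-DFT step; the following sketch omits DIIS and the complex-symmetric orbital selection, and assumes a user-supplied function that builds the LDA contribution of Eq.~\eqref{eq:HoloFock} from the holomorphic density:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def hks_iteration(C_occ, h, S, eri, f_lda):
    """One h-KS step: build the complex-symmetric Fock matrix of
    Eq. (HoloFock) and diagonalise it.
    eri[m, s, n, t] = <mu sigma | nu tau> (assumed precomputed)."""
    P = C_occ @ C_occ.T                    # holomorphic density (no conj)
    J = np.einsum('msnt,ts->mn', eri, P)   # Coulomb matrix j_{mu nu}
    F = h + J + f_lda(P)                   # Eq. (HoloFock)
    eps, C = eig(F, S)                     # non-Hermitian generalised problem
    order = np.argsort(eps.real)
    # the orbitals should subsequently be normalised under the
    # complex-symmetric metric C^T S C = 1
    return C[:, order], eps[order]
\end{verbatim}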
\begin{figure}[b!]
\includegraphics[width=\linewidth]{figures/hKS/h2_hKS.pdf}
\caption{Binding curves for the eight KS-LDA solutions. When the real symmetry-broken (sb-) RKS and UKS solutions disappear, complex-valued h-RKS and h-UKS solutions continue to exist (dashed lines).}
\label{fig:hKS_bindingCurve}
\end{figure}
Taking initial guesses for the optimal orbital coefficients from the h-LDA energy surfaces in Figure~\ref{fig:SCFsurfaces}, we have
identified a total of four self-consistent h-RKS solutions at $R=\SI{0.70}{\angstrom}$.
Two of these solutions have real orbital coefficients, corresponding to the $\sig_{\text{g}}^2$ and $\sig_{\text{u}}^2$ solutions, while the remaining two have complex-valued orbital
coefficients and occur as the h-RKS complex conjugate pair.
In addition, we identified a further four holomorphic unrestricted KS (h-UKS) solutions, including two with complex-valued orbital
coefficients that correspond to the h-UHF states seen in previous studies.\cite{Burton2016}
All eight solutions can then be traced along the full binding curve by using the converged coefficients at one geometry as the
initial guess at the next geometry, as shown in Figure~\ref{fig:hKS_bindingCurve}.
Numerical values for the orbital coefficients and h-LDA energies of each stationary point at bond lengths of $R = 0.70$, $1.10$, and $\SI{4.00}{\angstrom}$ are
available in the Supporting Information.
Crucially, we find that the h-RKS and h-UKS solutions emerge from the coalescence points at which the real symmetry-broken ionic (sb-RKS) or diradical (sb-UKS) solutions disappear respectively.
These self-consistent h-KS solutions therefore provide the rigorous complex-analytic extensions of real KS solutions that disappear as the molecular structure changes.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{figures/H2_Mathematica/hRKS_re_density_r0-70_scale.pdf}
\subcaption{}
\label{subfig:hRKS_re_dens_R070}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{figures/H2_Mathematica/hRKS_im_density_r0-70_scale.pdf}
\subcaption{}
\label{subfig:hRKS_im_dens_R070}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{figures/H2_Mathematica/hRKS_re_density_r1-10_scale.pdf}
\subcaption{}
\label{subfig:hRKS_re_dens_R100}
\end{subfigure}
\begin{subfigure}[b]{.8\columnwidth}
\centering
\hspace{-3em}\includegraphics[width=\columnwidth]{figures/H2_Mathematica/hRKS_im_density_r1-10_scale.pdf}
\subcaption{}
\label{subfig:hRKS_im_dens_R100}
\end{subfigure}
\caption{Real (\subref{subfig:hRKS_re_dens_R070}) and imaginary (\subref{subfig:hRKS_im_dens_R070}) components of the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$ for the h-RKS solution at different scaling factors $q$ (bond length: $\SI{0.70}{\angstrom}$).
Real (\subref{subfig:hRKS_re_dens_R100}) and imaginary (\subref{subfig:hRKS_im_dens_R100}) components of the holomorphic electron density $\tilde{\rho}\qty(\mathbf{r})$ for the h-RKS solution at different scaling factors $q$ (bond length: $\SI{1.10}{\angstrom}$).
The internuclear axis is aligned with the $z$-axis.
}
\label{fig:hRKSdensityChange}
\end{figure*}
The iterative h-KS approach now allows the evolution of all h-HF solutions into the h-LDA functional to be directly visualised.
The corresponding closed-shell density for the holomorphic ionic state is illustrated at bond lengths of $0.70$ and $\SI{1.10}{\angstrom}$ in
Figure~\ref{fig:hRKSdensityChange}.
At the intermediate bond length $\SI{1.10}{\angstrom}$ between the two coalescence points,
there is now a clear evolution from the complex-valued h-HF density to the real-valued ionic KS-LDA density, confirming
that the two solutions are linked across the SCF approximation.
Comparing Figures~\ref{adiabat_change} and \ref{fig:hRKSdensityChange} demonstrates that the h-RKS solutions at both $0.70$ and $\SI{1.10}{\angstrom}$
show a more pronounced relaxation of the holomorphic density between HF and LDA than the density-fitting results.
This greater relaxation is to be expected if the density-fitting basis is not large enough to adequately fit the true holomorphic density, as described in Section~\ref{sec:DensityFitting}.
The rigorous map between multiple HF and DFT solutions finally allows us to understand how the change in exchange-correlation
potential affects the existence of symmetry-broken SCF solutions in \ce{H2}.
In particular, the coalescence point for the unrestricted symmetry-broken solution
occurs at a longer bond length in LDA than HF, in contrast to the coalescence of the symmetry-broken ionic solution,
which disappears at a shorter bond length in LDA than HF.
This observation suggests that the LDA exchange-correlation functional disfavours the spin-symmetry breaking of
the low-energy unrestricted solution, in line with our previous conclusions in the model electron transfer
system (Section~\ref{sec:motivation}).
\section{Concluding Remarks}
\label{conclusion}
The HF approximation and DFT are often considered as distinct methods in electronic structure.
However, both approaches are intimately linked through the SCF approximation.
While it is well known that the self-consistency of HF theory can yield several optimal solutions, which may correspond to physically distinct electronic states, the
existence of multiple DFT solutions is far less understood.
Furthermore, to the best of our knowledge, direct connections between multiple HF and DFT solutions have never previously been explored.
In this work, we have performed a first investigation into the mapping of multiple SCF solutions between the HF and DFT energy surfaces.
Using a model electron transfer system,\cite{Jensen2018} we have found that the three low-lying HF states representing diabatic electron transfer configurations coalesce onto one DFT state.
This single DFT solution appears to be adiabatic in nature and maps continuously onto the lowest energy HF state at any geometry.
As the SCF approximation is scaled between HF and DFT, we have shown that the disappearance of two SCF solutions occurs in an analogous way to the coalescence of
real HF states as the molecular structure changes.
Furthermore, we have shown that the coalescence of these SCF states is induced by an overall reduction in the exchange interaction between HF and the corresponding DFT functional,
highlighting the effect that this energy contribution has in driving spin-symmetry breaking.
To extend SCF solutions across all molecular structures and exchange-correlation functionals, we have developed two complex-analytic holomorphic extensions, constructed from the conventional electron density, that can be applied in both the HF and DFT frameworks.
The first approach, based on a density-fitting approximation, allows the DFT energy to be expressed as a complex-analytic polynomial functional and provides a mathematically
rigorous extension of h-HF theory for guaranteeing the existence of multiple solutions.
However, solving the density-fitting equations relies on the introduction of auxiliary basis sets and appears to suffer from severe numerical challenges.
In contrast, the second approach considers a ``na\"{i}ve'' extension of h-HF theory whereby the KS-DFT equations are self-consistently solved using the complex-symmetric
holomorphic density.
This h-KS approach requires minimal modifications to a standard KS-DFT implementation, and appears to allow complex-valued holomorphic DFT solutions to be uniquely identified
beyond the coalescence points where real DFT solutions coalesce and vanish.
Using the h-DFT method, we have investigated the complete mapping of the closed-shell states in \ce{H2} between HF theory and LDA-DFT.
By considering both restricted and unrestricted SCF solutions, we have identified fundamental differences in the way that different types of symmetry breaking evolve between HF theory and DFT.
In particular, spin-symmetry breaking in the ground-state unrestricted SCF solution appears to be discouraged using the LDA functional, with the coalescence point occurring at a larger bond length than in the HF approximation.
On the other hand, spatial-symmetry breaking in the ionic restricted SCF solution occurs at a shorter bond length in the LDA functional.
Alongside the model electron transfer system, these results suggest that different types of symmetry breaking are induced by changes in different relative components of the energy, such as the
exchange interaction in the spin-symmetry-broken UHF solution.
The nature of multiple DFT solutions, and their relationship to multiple HF states, is only beginning to be understood.
Like any single-determinant theory, one of the major deficiencies of DFT is the challenge of accurately describing static correlation effects.\cite{DFTdevelopmentReview}
Linear expansions of multiple DFT solutions have already been proposed as one extension beyond the single-reference DFT approximation,\cite{CDFT-CI}
providing a direct analogy to the construction of multireference NOCI wave functions using multiple HF solutions.\cite{Thom2009}
Complex-valued h-HF solutions have been essential in the development of continuous NOCI basis sets across all molecular structures,\cite{Burton2019c} and it is
likely that h-DFT solutions will serve a similar purpose for multireference DFT expansions.
Furthermore, understanding exactly how the choice of DFT functional affects the existence or coalescence of SCF solutions will be essential if multiple DFT solutions
are to be routinely used to interpret chemical processes, and we hope to continue this investigation in future publications.
Finally, it has been suggested that any modern density-functional approximation must, to some extent, include nonlocality in the exchange description,
as provided exactly by HF theory.\cite{DFTperspective}
While hybrid functionals achieve this nonlocality by empirically mixing local and nonlocal energy contributions, other density functionals employ
range separation or long-range corrections.\cite{B3LYP, LR-DFT1, LR-DFT2, LR-DFT3, range_separated+MR, range_separated}
However, the choice of mixing or range is ultimately made by fitting to empirical data.
In contrast, we have revealed that the existence of symmetry-broken DFT solutions that include local or nonlocal electron densities is strongly dependent on the relative strengths
of the Coulomb and exchange contributions, and is heavily influenced by the choice of exchange-correlation functional.
We therefore believe that, by establishing a universal framework of multiple SCF states across all molecular structures and exchange-correlation functionals, the h-DFT approach will
provide an entirely new approach for identifying the ``ideal'' combination of locality and nonlocality in DFT functionals on a theoretically justified, rather than empirical, foundation.
\section*{Acknowledgements}
HGAB acknowledges the Cambridge Trust for a PhD scholarship. AJWT thanks the Royal Society
for a University Research Fellowship (UF110161).
\section*{Supporting Information}
Numerical results for the h-RKS solutions of \ce{H2} using the minimal STO-3G basis set at bond lengths of $4.00$, $1.10$, and $\SI{0.70}{\angstrom}$.
\begin{appendix}
\section{Implementation of Holomorphic Density Fitting}
\label{sec:appendix}
The holomorphic DFT as presented here relies on the method of density-fitting. This requires the introduction of an additional basis set for fitting the density, with basis functions $\xi_{\alpha}(\mathbf{r})$ and density-fitting coefficients $\{f^{\alpha}\}$.
Due to the form of the LDA exchange functional, it is helpful to fit the cube-root of the density to retain sets of polynomial equations, rather than the density itself, giving
\begin{equation}
\tilde{\rho}\qty(\mathbf{r})^{\frac13} = \sum_{\alpha}^{\infty} f^{\alpha} \xi_{\alpha}(\mathbf{r}).
\label{cubedRootDen}
\end{equation}
Following the form of the Kohn--Sham energy, the holomorphic Kohn--Sham energy is given by \begin{equation}
\tilde{E}^{\text{KS}}[\tilde{\rho}]=\tilde{T}_{\text{s}}[\tilde{\rho}]+\tilde{E}_{\text{eN}}[\tilde{\rho}]+\tilde{E}_{\text{J}}[\tilde{\rho}]+\tilde{E}_{\text{XC}}[\tilde{\rho}],
\end{equation}
consisting of non-interacting kinetic, electron-nuclear, and electron-electron Coulomb and exchange-correlation terms respectively. The holomorphic form of the latter three is identical to the conventional expressions, merely substituting the holomorphic density for the real density.
The non-interacting kinetic energy, $\tilde{T}_{\text{s}}$, however,
cannot be expressed polynomially in terms of the density-fitting coefficients, because it depends explicitly upon the MO coefficients $\{c_{\cdot i}^{\mu \cdot}\}$ of the occupied Kohn--Sham orbitals, and only implicitly on the holomorphic electron density $\tilde{\rho}$.
In a two-electron system with a doubly occupied orbital, the non-interacting kinetic energy can be expressed exactly in terms of the density as $\frac{1}{8}\int \frac{|\mathbf{\nabla}\rho|^2}{\rho} \, \mathrm{d}^3{\mathbf{r}}$, but this expression is not amenable to a polynomial representation.
We therefore express the holomorphic electronic energy as a function of both the MO coefficients $\{c_{\cdot i}^{\mu \cdot}\}$ and the set of density-fitting coefficients $\{f^{\alpha}\}$.
We constrain these coefficients by attempting to equate the holomorphic density derived from the density-fitting with that derived from the MO coefficients.
This constraint defines a unique relation between the two expansion sets given by
\begin{align}
\sum_{\mu \nu}^n c_{\cdot 1}^{\mu\cdot} c_{1\cdot}^{\cdot\nu} \chi_{\mu}(\mathbf{r}) \chi_{\nu}(\mathbf{r})
&=
\sum_{\alpha \beta \gamma}^m f^{\alpha} f^{\beta} f^{\gamma} \xi_{\alpha}(\mathbf{r}) \xi_{\beta}(\mathbf{r}) \xi_{\gamma}(\mathbf{r})
\label{ConnectBasises1}.
\end{align}
In addition, we equate the total derivative of the density with respect to one MO coefficient to derive the additional relation
\begin{equation}
\begin{split}
\frac{\text{d}}{\text{d} c_{\cdot 1}^{\mu\cdot}}
\sum_{\mu \nu}^n c_{\cdot 1}^{\mu\cdot} c_{1\cdot }^{\cdot \nu} \chi_{\mu}(\mathbf{r}) \chi_{\nu}(\mathbf{r}) &=
\\
\frac{\text{d}}{\text{d} c_{\cdot 1}^{\mu\cdot}} \sum_{\alpha \beta \gamma}^m &f^{\alpha} f^{\beta} f^{\gamma} \xi_{\alpha}(\mathbf{r}) \xi_{\beta}(\mathbf{r}) \xi_{\gamma}(\mathbf{r}),
\end{split}
\label{ConnectBasises2}
\end{equation}
that, together with Eq.~\eqref{ConnectBasises1}, fully determines the set of density-fitting coefficients.
To illustrate our approach, consider a two-electron system where the electronic energy is described in terms of both sets of coefficients as $\tilde{E}(\{c_{\cdot 1}^{\mu \cdot}\}, \{f^{\alpha}\})$.
The lower index of $c$ is restricted to $1$ as there is only one doubly-occupied orbital.
The cube-root density is given by $\tilde{\rho}\qty(\mathbf{r})^{\frac{1}{3}} = \sum_{\alpha}^m f^{\alpha} \xi_{\alpha}(\mathbf{r})$ and the occupied orbital is defined as $\phi(\mathbf{r}) = \sum^n_{\mu} c^{\mu\cdot}_{\cdot 1} \chi_{\mu}(\mathbf{r})$.
The polynomial system of equations required to describe the self-consistent field therefore contains $n+m$ unknowns, and thus identifying a self-consistent solution requires $n+m$ relations.
To set up the determined polynomial system, we derive the set of equations required for this two-electron, one-orbital system using two AO basis functions.
The set of MO coefficients is given explicitly by $\{c_{\cdot i}^{\mu \cdot}\} = \{c_{\cdot 1}^{1 \cdot}, c_{\cdot 1}^{2 \cdot}\}$.
The stationary condition of the holomorphic energy is then given by the vanishing derivatives,
\begin{align}
\frac{\text{d} \tilde{E}(\{c_{\cdot 1}^{\mu\cdot}\}, \{f^{\alpha}\})}{\text{d} c_{\cdot 1}^{\mu\cdot}} &= 0 \label{extremal},
\end{align}
and the normalization of the orbital leads to the constraint
\begin{align}
\sum_{\mu \nu}^n c_{1\cdot}^{\cdot\mu} S_{\mu\nu} c_{\cdot 1}^{\nu\cdot} &= 1
\label{ortho},
\end{align}
where $S_{\mu\nu}=\langle\chi_\mu|\chi_\nu\rangle$ is the AO overlap matrix.
The implicit relationship between the two MO coefficients defined by Eq.~\eqref{ortho} allows stationary points to be identified with the
total derivative of the energy with respect to only one orbital coefficient, as given by Eq.~\eqref{extremal}.
Furthermore, the derivatives with respect to the density-fitting coefficients can be derived by exploiting the relationships \eqref{ConnectBasises1} and \eqref{ConnectBasises2}.
Due to the incompleteness of the density-fitting basis, it may not always be possible to satisfy Eq.~\eqref{ConnectBasises1}.
Therefore, to solve \eqref{ConnectBasises1} and \eqref{ConnectBasises2}, they are projected onto another auxiliary basis $\ket{\tau}$ defined as
\begin{equation}
\ket{\tau(\mathbf{r})} = \{\tau_1(\mathbf{r}), \dots, \tau_l(\mathbf{r})\},
\end{equation}
where the size $l$ of the basis $\ket{\tau}$ is defined as $l = n + m - 2$ for a one-orbital two-electron system.
The projection of relations \eqref{ConnectBasises1} and \eqref{ConnectBasises2} onto the $\lambda^\text{th}$ basis function $\ket{\tau_{\lambda}}$ leads to
\begin{align}
\sum_{\mu \nu}^n c_{\cdot 1}^{\mu\cdot} c_{1\cdot}^{\cdot\nu} \langle \tau_{\lambda} | \chi_{\mu} \chi_{\nu} \rangle
&= \sum_{\alpha \beta \gamma}^m f^{\alpha} f^{\beta} f^{\gamma}\langle \tau_{\lambda} | \xi_{\alpha} \xi_{\beta}\xi_{\gamma} \rangle
\\
\frac{\text{d}}{\text{d} c_{\cdot 1}^{\mu\cdot}}
\sum_{\mu \nu}^n c_{\cdot 1}^{\mu\cdot} c_{1\cdot}^{\cdot\nu} \langle \tau_{\lambda} | \chi_{\mu} \chi_{\nu} \rangle
&=
\frac{\text{d}}{\text{d} c_{\cdot 1}^{\mu\cdot}} \sum_{\alpha \beta \gamma}^m f^{\alpha} f^{\beta} f^{\gamma} \langle \tau_{\lambda} | \xi_{\alpha} \xi_{\beta} \xi_{\gamma} \rangle.
\end{align}
Finally, the basis functions $\xi(\mathbf{r})$ of the density-fitting expansion and the auxiliary basis $\tau(\mathbf{r})$ are defined in relation to the atomic-orbital basis set $\chi(\mathbf{r})$.
For the investigated example of LDA-DFT, a possible choice of the density-fitting basis $\xi(\mathbf{r})$ in relation to the AO basis $\chi(\mathbf{r})$ is $\xi(\mathbf{r}) = \chi^{2/3}(\mathbf{r})$, with a suitable normalisation factor to ensure $\langle \tau_i | \xi_i \xi_i \xi_i \rangle = 1$.
The projection basis $\tau(\mathbf{r})$ can then be chosen such that $\tau(\mathbf{r}) = \chi^2(\mathbf{r})$.
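For illustration, in the purely real case the projected constraints can be solved numerically; a minimal sketch, assuming the projected integrals over the bases above have been precomputed (the holomorphic case would instead require a complex polynomial solver), is:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def fit_cube_root_density(c, A, B):
    """Solve the projection of Eq. (ConnectBasises1) for real f,
    where A[l, m, n] = <tau_l | chi_m chi_n> and
    B[l, a, b, c] = <tau_l | xi_a xi_b xi_c> are precomputed."""
    target = np.einsum('lmn,m,n->l', A, c, c)
    residual = lambda f: np.einsum('labc,a,b,c->l', B, f, f, f) - target
    f0 = np.full(B.shape[1], 0.5)   # arbitrary starting guess
    return least_squares(residual, f0).x
\end{verbatim}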
\end{appendix}
\section*{References}
\section{Holomorphic Kohn--Sham Solutions of \ce{H2} (STO-3G)}
In this Section, we provide numerical data for the holomorphic Kohn--Sham (h-KS) solutions
identified in the minimal basis \ce{H2} molecule using the LDA exchange functional.\cite{bookLDA}
Molecular orbital coefficients of stationary states are represented in the atomic orbital STO-3G basis.
All stationary points are converged to a DIIS error of $10^{-8}$.
The holomorphic energy may be complex-valued in general, but in each case described, this energy remains purely real.
\begin{table}[h!]
\begin{tabular}{ S[table-format=4.8] S[table-format=4.8] S[table-format=4.8] }
\hline\hline
{$C^{1 \cdot}_{\cdot 1}$} & {$C^{2 \cdot}_{\cdot 1}$} & {$\text{Re}[\text{Energy}] / \text{E}_{\text{h}}$}
\\ \hline
0.706482 & 0.706482 & -0.688232
\\
0.707733 & -0.707733 & -0.683115
\\
-0.002607 & 1.000000 & -0.180309
\\
1.000000 & -0.002607 & -0.180309
\\
\hline \hline
\end{tabular}
\caption{Molecular orbital coefficients and holomorphic energies for the four h-RKS solutions
at $R=\SI{4.00}{\angstrom}$. Every solution corresponds exactly with a real RKS stationary point.}
\end{table}
\begin{table}[h!]
\begin{tabular}{ S[table-format=4.8] S[table-format=4.8] S[table-format=4.8] }
\hline\hline
{$C^{1 \cdot}_{\cdot 1}$} & {$C^{2 \cdot}_{\cdot 1}$} & {$\text{Re}[\text{Energy}] / \text{E}_{\text{h}}$}
\\ \hline
0.589341 & 0.589341 & -0.968592
\\
0.944561 & -0.944561 & -0.105412
\\
1.090490 & -0.680902 & -0.086347
\\
-0.680902 & 1.090490 & -0.086347
\\
\hline \hline
\end{tabular}
\caption{Molecular orbital coefficients and holomorphic energies for the four h-RKS solutions
at $R=\SI{1.10}{\angstrom}$. Every solution corresponds exactly with a real RKS stationary point.}
\end{table}
\begin{table}[h!]
\begin{tabular}{ S[table-format=7.11] S[table-format=7.11] S[table-format=4.8] }
\hline\hline
{$C^{1 \cdot}_{\cdot 1}$} & {$C^{2 \cdot}_{\cdot 1}$} & {$\text{Re}[\text{Energy}] / \text{E}_{\text{h}}$}
\\ \hline
0.544559 & 0.544559 & -1.023440
\\
1.262060 & -1.262060 & -0.587391
\\
{$1.431470+0.291466 \text{i}$} & {$-1.431470+0.291466 \text{i}$} & 0.678579
\\
{$1.431470-0.291466 \text{i}$} & {$-1.431470-0.291466 \text{i}$} & 0.678579
\\
\hline \hline
\end{tabular}
\caption{Molecular orbital coefficients and holomorphic energies for the four h-RKS solutions
at $R=\SI{0.70}{\angstrom}$.
Two solutions have complex-valued orbital coefficients and occur in a complex-conjugate pair.
The energy of these complex-valued solutions is strictly real as these stationary points satisfy
parity-time symmetry.\cite{Burton2019b}
}
\end{table}
\section*{References}
\section{Introduction}
\begin{figure}[!t]
\centering
\begin{minipage}{0.45\linewidth}\footnotesize
\centerline{\includegraphics[width=3.8cm]{./Fig/bp240_orig_crop_tiny.png}}
\centerline{(a) Original frame (Bpp/MS-SSIM)}
\end{minipage}
\begin{minipage}{0.45\linewidth}\footnotesize
\centerline{\includegraphics[width=3.8cm]{./Fig/bp240_264_crop_tiny.png}}
\centerline{(b) H.264 (0.0540Bpp/0.945)}
\end{minipage}
\begin{minipage}{0.45\linewidth}\footnotesize
\centerline{\includegraphics[width=3.8cm]{./Fig/bp240_265_crop_tiny.png}}
\centerline{(c) H.265 (0.082Bpp/0.960)}
\end{minipage}
\begin{minipage}{0.45\linewidth}\footnotesize
\centerline{\includegraphics[width=3.8cm]{./Fig/bp240_ours_crop_tiny.png}}
\centerline{(d) Ours (\textbf{0.0529Bpp}/\textbf{0.961})}
\end{minipage}
\caption{Visual quality of the reconstructed frames from different video compression systems. (a) is the original frame. (b)-(d) are the reconstructed frames from H.264, H.265 and our method. Our proposed method only consumes 0.0529Bpp while achieving the best perceptual quality (0.961) when measured by MS-SSIM. (Best viewed in color.) }
\end{figure}
Nowadays, video content contributes to more than 80\% of internet traffic \cite{networking2016forecast}, and the percentage is expected to increase even further. Therefore, it is critical to build an efficient video compression system that generates higher quality frames at a given bandwidth budget.
In addition, most video related computer vision tasks such as video object detection or video object tracking are sensitive to the quality of compressed videos, and efficient video compression may bring benefits for other computer vision tasks.
Meanwhile, the techniques in video compression are also helpful for action recognition \cite{wu2018compressed} and model compression \cite{han2015deep}.
However, in the past decades, video compression algorithms \cite{wiegand2003overview,sullivan2012overview} have relied on hand-crafted modules, e.g., block based motion estimation and Discrete Cosine Transform (DCT), to reduce the redundancies in video sequences.
Although each module is well designed, the whole compression system is not end-to-end optimized. It is desirable to further improve video compression performance by jointly optimizing the whole compression system.
Recently, deep neural network (DNN) based auto-encoders for image compression \cite{toderici2015variable, balle2016end,toderici2017full, agustsson2017soft,balle2018variational,johnston2017improved,theis2017lossy,li2017learning,rippel2017real,agustsson2018generative} have achieved comparable or even better performance than traditional image codecs like JPEG \cite{wallace1992jpeg}, JPEG2000 \cite{skodras2001jpeg} or BPG \cite{BPG}.
One possible explanation is that the DNN based image compression methods can exploit large scale end-to-end training and highly non-linear transform, which are not used in the traditional approaches.
However, it is non-trivial to directly apply these techniques to build an end-to-end learning system for video compression.
\textbf{First}, it remains an open problem to learn how to generate and compress the motion information tailored for video compression.
Video compression methods heavily rely on motion information to reduce temporal redundancy in video sequences.
A straightforward solution is to use the learning based optical flow to represent motion information.
However, current learning based optical flow approaches aim at generating flow fields that are as accurate as possible, yet precise optical flow is often not optimal for a particular video task \cite{xue2017video}.
In addition, the data volume of optical flow increases significantly compared with the motion information used in traditional compression systems. Directly applying the existing compression approaches in \cite{wiegand2003overview, sullivan2012overview} to compress optical flow values would therefore significantly increase the number of bits required for storing motion information.
\textbf{Second}, it is unclear how to build a DNN based video compression system by minimizing the rate-distortion based objective for both residual and motion information. Rate-distortion optimization (RDO) aims at achieving higher quality of the reconstructed frame (\ie, less distortion) when the number of bits (or bit rate) for compression is given. RDO is important for video compression performance. In order to exploit the power of end-to-end training for a learning based compression system, the RDO strategy is required to optimize the whole system.
In this paper, we propose the first end-to-end deep video compression (DVC) model that jointly learns motion estimation, motion compression, and residual compression.
The advantages of this network can be summarized as follows:
\begin{itemize}
\setlength\itemsep{0em}
\item All key components in video compression, \ie, motion estimation, motion compensation, residual compression, motion compression, quantization, and bit rate estimation, are implemented with end-to-end neural networks.
\item The key components in video compression are jointly optimized based on rate-distortion trade-off through a single loss function, which leads to higher compression efficiency.
\item There is a one-to-one mapping between the components of conventional video compression approaches and our proposed DVC model. This work serves as a bridge for researchers working on video compression, computer vision, and deep model design. For example, better models for optical flow estimation and image compression can be easily plugged into our framework.
Researchers working in these fields can use our DVC model as a starting point for their future research.
\end{itemize}
Experimental results show that estimating and compressing motion information by using our neural network based approach can significantly improve the compression performance.
Our framework outperforms the widely used video codec H.264 when measured by PSNR and is on par with the latest video codec H.265 when measured by the multi-scale structural similarity index (MS-SSIM) \cite{wang2003multi}.
\section{Related Work}
\subsection{Image Compression}
A lot of image compression algorithms have been proposed in the past decades \cite{wallace1992jpeg, skodras2001jpeg, BPG}.
These methods heavily rely on handcrafted techniques.
For example, the JPEG standard linearly maps the pixels to another representation by using DCT, and quantizes the corresponding coefficients before entropy coding \cite{wallace1992jpeg}.
One disadvantage is that these modules are separately optimized and may not achieve optimal compression performance.
Recently, DNN based image compression methods have attracted more and more attention \cite{toderici2015variable,toderici2017full,balle2016end, balle2018variational, theis2017lossy, agustsson2017soft,li2017learning, rippel2017real,mentzer2018conditional,agustsson2018generative}.
In \cite{toderici2015variable,toderici2017full, johnston2017improved}, recurrent neural networks (RNNs) are utilized to build a progressive image compression scheme.
Other methods employed the CNNs to design an auto-encoder style network for image compression \cite{balle2016end, balle2018variational, theis2017lossy}.
To optimize the neural network, the work in \cite{toderici2015variable, toderici2017full, johnston2017improved} only tried to minimize the distortion (\eg, mean square error) between original frames and reconstructed frames without considering the number of bits used for compression.
Rate-distortion optimization technique was adopted in \cite{balle2016end, balle2018variational, theis2017lossy, li2017learning} for higher compression efficiency by introducing the number of bits in the optimization procedure.
To estimate the bit rates, context models are learned for the adaptive arithmetic coding method in \cite{rippel2017real,li2017learning,mentzer2018conditional}, while non-adaptive arithmetic coding is used in \cite{balle2016end, theis2017lossy}.
In addition, other techniques such as generalized divisive normalization (GDN) \cite{balle2016end}, multi-scale image decomposition \cite{rippel2017real}, adversarial training \cite{rippel2017real}, importance map \cite{li2017learning, mentzer2018conditional} and intra prediction \cite{minnen2017spatially, baig2017learning} are proposed to improve the image compression performance. These existing works are important building blocks for our video compression network.
\subsection{Video Compression}
In the past decades, several traditional video compression algorithms have been proposed, such as H.264 \cite{wiegand2003overview} and H.265 \cite{sullivan2012overview}. Most of these algorithms follow the predictive coding architecture.
Although they provide highly efficient compression performance, they are manually designed and cannot be jointly optimized in an end-to-end way.
For the video compression task, a lot of DNN based methods have been proposed for intra prediction and residual coding \cite{chen2017deepcoder}, mode decision \cite{liu2016cu}, entropy coding \cite{song2017neural}, and post-processing \cite{Lu_2018_ECCV}.
These methods are used to improve the performance of one particular module of the traditional video compression algorithms instead of building an end-to-end compression scheme.
In \cite{chen2018learning}, Chen \textit{et al.} proposed a block based learning approach for video compression.
However, it inevitably generates blocking artifacts at the boundaries between blocks.
In addition, they used the motion information propagated from previous reconstructed frames through traditional block based motion estimation, which will degrade compression performance.
Tsai \textit{et al.} proposed an auto-encoder network to compress the residual from the H.264 encoder for specific domain videos \cite{tsai2018learning}. This work does not use deep model for motion estimation, motion compensation or motion compression.
The most related work is the RNN based approach in \cite{Wu_2018_ECCV}, where video compression is formulated as frame interpolation.
However, the motion information in their approach is also generated by traditional block based motion estimation, which is encoded by the existing non-deep learning based image compression method \cite{WebP}. In other words, estimation and compression of motion are not accomplished by deep model and jointly optimized with other components.
In addition, the video codec in \cite{Wu_2018_ECCV} only aims at minimizing the distortion (\ie, mean square error) between the original frame and reconstructed frame without considering rate-distortion trade-off in the training procedure.
In comparison, in our network, motion estimation and compression are achieved by DNN, which is jointly optimized with other components by considering the rate-distortion trade-off of the whole compression system.
\begin{figure*}[!t]
\includegraphics[width=\linewidth]{./Fig/Overview2.pdf}
\caption{(a): The predictive coding architecture used by the traditional video codec H.264 \cite{wiegand2003overview} or H.265 \cite{sullivan2012overview}. (b): The proposed end-to-end video compression network. The modules with \textcolor{blue}{blue color} are not included in the decoder side.}
\label{fig:overview}
\end{figure*}
\subsection{Motion Estimation}
Motion estimation is a key component of a video compression system. Traditional video codecs use block based motion estimation algorithms \cite{wiegand2003overview}, which are well suited to hardware implementation.
In the computer vision tasks, optical flow is widely used to exploit temporal relationship.
Recently, a lot of learning based optical flow estimation methods \cite{dosovitskiy2015flownet, ranjan2017optical,sun2018pwc,hui2018liteflownet,hui2019lightweight} have been proposed. These approaches motivate us to integrate optical flow estimation into our end-to-end learning framework.
Compared with the block based motion estimation method in existing video compression approaches, learning based optical flow methods can provide accurate motion information at pixel-level, which can also be optimized in an end-to-end manner.
However, many more bits are required to compress motion information if optical flow values are encoded by traditional video compression approaches.
\section{Proposed Method}
\textbf{Introduction of Notations.}
Let $\mathcal{V}=\{x_1, x_2,..., x_{t-1}, x_t,...\}$ denote the current video sequences, where $x_t$ is the frame at time step $t$.
The predicted frame is denoted as $\bar{x}_t$ and the reconstructed/decoded frame is denoted as $\hat{x}_t$.
$r_t$ represents the residual (error) between the original frame $x_t$ and the predicted frame $\bar{x}_t$.
$\hat{r}_t$ represents the reconstructed/decoded residual.
In order to reduce temporal redundancy, motion information is required: $v_t$ represents the motion vector or optical flow value, and $\hat{v}_t$ is its corresponding reconstructed version.
Linear or nonlinear transform can be used to improve the compression efficiency.
Therefore, residual information $r_t$ is transformed to $y_t$, and motion information $v_t$ is transformed to $m_t$;
$\hat{y}_t$ and $\hat{m}_t$ are the corresponding quantized versions, respectively.
\subsection{Brief Introduction of Video Compression}
\label{sec:classic}
In this section, we give a brief introduction of video compression. More details are provided in \cite{wiegand2003overview,sullivan2012overview}.
Generally, the video compression encoder generates the bitstream based on the input frames, and the decoder reconstructs the video frames from the received bitstream.
In Fig. \ref{fig:overview}, all the modules are included in the encoder side while \textcolor{blue}{blue color} modules are not included in the decoder side.
The classic framework of video compression in Fig. \ref{fig:overview}(a) follows the predict-transform architecture.
Specifically, the input frame $x_{t}$ is split into a set of blocks, \ie, square regions, of the same size (\eg, $8\times 8$).
The encoding procedure of the traditional video compression algorithm in the encoder side is shown as follows,
\textbf{Step 1. Motion estimation.} Estimate the motion between the current frame $x_t$ and the previous reconstructed frame $\hat{x}_{t-1}$. The corresponding motion vector $v_t$ for each block is obtained.
\textbf{Step 2. Motion compensation.} The predicted frame $\bar{x}_t$ is obtained by copying the corresponding pixels in the previous reconstructed frame to the current frame based on the motion vector $v_t$ defined in Step 1. The residual $r_t$ between the original frame $x_t$ and the predicted frame $\bar{x}_t$ is obtained as $r_t = x_t - \bar{x}_t$.
\textbf{Step 3. Transform and quantization.} The residual $r_t$ from Step 2 is quantized to $\hat{y}_t$. A linear transform (\eg, DCT) is used before quantization for better compression performance.
\textbf{Step 4. Inverse transform.}
The quantized result $\hat{y}_t$ in Step 3 is used by inverse transform for obtaining the reconstructed residual $\hat{r}_t$.
\textbf{Step 5. Entropy coding.} Both the motion vector $v_t$ in Step 1 and the quantized result $\hat{y}_t$ in Step 3 are encoded into bits by the entropy coding method and sent to the decoder.
\textbf{Step 6. Frame reconstruction.} The reconstructed frame $\hat{x}_t$ is obtained by adding $\bar{x}_t$ in Step 2 and $\hat{r}_t$ in Step 4, \ie, $\hat{x}_t =\hat{r}_t + \bar{x}_t$. The reconstructed frame will be used by the $(t+1)$th frame at Step 1 for motion estimation.
For the decoder, based on the bits provided by the encoder at Step 5, motion compensation at Step 2, inverse transform at Step 4, and frame reconstruction at Step 6 are performed to obtain the reconstructed frame $\hat{x}_t$.
\subsection{Overview of the Proposed Method}
Fig. \ref{fig:overview} (b) provides a high-level overview of our end-to-end video compression framework.
There is one-to-one correspondence between the traditional video compression framework and our proposed deep learning based framework.
The relationship and brief summarization on the differences are introduced as follows:
\textbf{Step N1. Motion estimation and compression.} We use a CNN model to estimate the optical flow \cite{ranjan2017optical}, which is considered as motion information ${v}_t$. Instead of directly encoding the raw optical flow values, an MV encoder-decoder network is proposed in Fig. \ref{fig:overview} to compress and decode the optical flow values, in which the quantized motion representation is denoted as $\hat{m}_t$. Then the corresponding reconstructed motion information $\hat{v}_t$ can be decoded by using the MV decoder net. Details are given in Section \ref{sec: mv_compress}.
\textbf{Step N2. Motion compensation.} A motion compensation network is designed to obtain the predicted frame $\bar{x}_t$ based on the optical flow obtained in Step N1.
More information is provided in Section \ref{sec: mcnet}.
\textbf{Step N3-N4. Transform, quantization and inverse transform.}
We replace the linear transform in Step 3 with a highly non-linear residual encoder-decoder network, and the residual $r_t$ is non-linearly mapped to the representation $y_t$.
Then $y_t$ is quantized to $\hat{y}_t$.
In order to build an end-to-end training scheme, we use the quantization method in \cite{balle2016end}.
The quantized representation $\hat{y}_t$ is fed into the residual decoder network to obtain the reconstructed residual $\hat{r}_t$.
Details are presented in Section \ref{sec:res_codec} and \ref{sec:training}.
\textbf{Step N5. Entropy coding.}
At the testing stage, the quantized motion representation $\hat{m}_t$ from Step N1 and the residual representation $\hat{y}_t$ from Step N3 are coded into bits and sent to the decoder.
At the training stage, to estimate the number of bits cost in our proposed approach, we use the CNNs (Bit rate estimation net in Fig. \ref{fig:overview}) to obtain the probability distribution of each symbol in $\hat{m}_t$ and $\hat{y}_t$.
More information is provided in Section \ref{sec:training}.
\textbf{Step N6. Frame reconstruction.} It is the same as Step 6 in Section \ref{sec:classic}.
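To make the data flow of Steps N1-N6 concrete, the following sketch outlines one encoding step in Python-style pseudocode. The module names (\texttt{flow\_net}, \texttt{mv\_codec}, \texttt{mc\_net}, \texttt{res\_codec}, \texttt{quantize}) are hypothetical stand-ins for the networks described in the following subsections, not identifiers from our actual implementation.
\begin{verbatim}
# One encoding step of the proposed model (illustrative sketch).
def encode_frame(x_t, x_prev_hat):
    v_t   = flow_net(x_t, x_prev_hat)          # Step N1: motion estimation
    m_hat = quantize(mv_codec.encode(v_t))     # Step N1: motion compression
    v_hat = mv_codec.decode(m_hat)
    x_bar = mc_net(x_prev_hat, v_hat)          # Step N2: motion compensation
    r_t   = x_t - x_bar                        # residual
    y_hat = quantize(res_codec.encode(r_t))    # Steps N3-N4
    r_hat = res_codec.decode(y_hat)
    x_hat = x_bar + r_hat                      # Step N6: frame reconstruction
    return x_hat, (m_hat, y_hat)               # latents for entropy coding (N5)
\end{verbatim}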
\subsection{MV Encoder and Decoder Network}
\label{sec: mv_compress}
In order to compress motion information at Step N1, we design a CNN to transform the optical flow to the corresponding representations for better encoding.
Specifically, we utilize an auto-encoder style network to compress the optical flow, which was first proposed in \cite{balle2016end} for the image compression task.
The whole MV compression network is shown in Fig. \ref{fig:mvencoder}.
The optical flow $v_t$ is fed into a series of convolution operations and nonlinear transforms. The number of output channels for convolution (deconvolution) is 128, except for the last deconvolution layer, where it is equal to 2.
Given optical flow $v_{t}$ with the size of $M \times N \times 2$, the MV encoder will generate the motion representation $m_{t}$ with the size of $M/16 \times N/16 \times 128$. Then $m_{t}$ is quantized to $\hat{m}_{t}$.
The MV decoder receives the quantized representation and reconstructs the motion information $\hat{v}_t$.
In addition, the quantized representation $\hat{m}_t$ will be used for entropy coding.
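A minimal PyTorch-style sketch of this encoder is given below (the actual system is implemented in Tensorflow); for simplicity the GDN nonlinearity of Fig. \ref{fig:mvencoder} is replaced by ReLU as a stand-in, so the sketch illustrates the layer sizes rather than the exact nonlinearity.
\begin{verbatim}
import torch.nn as nn

class MVEncoder(nn.Module):
    """Four stride-2 convolutions: (B,2,M,N) -> (B,128,M/16,N/16)."""
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 2              # optical flow has 2 channels
        for _ in range(4):
            layers += [nn.Conv2d(in_ch, 128, kernel_size=3,
                                 stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = 128
        self.net = nn.Sequential(*layers)

    def forward(self, v_t):                # v_t: (B, 2, M, N)
        return self.net(v_t)               # m_t: (B, 128, M/16, N/16)
\end{verbatim}
The MV decoder mirrors this structure with four stride-2 deconvolutions, the last one producing 2 output channels.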
\begin{figure}[!t]
\includegraphics[width=\linewidth]{./Fig/F2.pdf}
\caption{Our MV Encoder-decoder network. Conv(3,128,2) represents the convolution operation with the kernel size of 3x3, the output channel of 128 and the stride of 2. GDN/IGDN \cite{balle2016end} is the nonlinear transform function. The binary feature map is only used for illustration. }
\label{fig:mvencoder}
\end{figure}
\subsection{Motion Compensation Network}
\label{sec: mcnet}
Given the previous reconstructed frame $\hat{x}_{t-1}$ and the motion vector $\hat{v}_t$, the motion compensation network obtains the predicted frame $\bar{x}_t$, which is expected to be as close to the current frame $x_t$ as possible.
First, the previous frame $\hat{x}_{t-1}$ is warped to the current frame based on the motion information $\hat{v}_{t}$. The warped frame still has artifacts. To remove the artifacts, we concatenate the warped frame $w(\hat{x}_{t-1}, \hat{v}_{t})$, the reference frame $\hat{x}_{t-1}$, and the motion vector $\hat{v}_t$ as the input, then feed them into another CNN to obtain the refined predicted frame $\bar{x}_t$.
The overall architecture of the proposed network is shown in Fig. \ref{fig:mc_net}.
The detail of the CNN in Fig. \ref{fig:mc_net} is provided in supplementary material.
Our proposed method is a pixel-wise motion compensation approach, which can provide more accurate temporal information and avoids the blocking artifacts of traditional block based motion compensation methods.
It means that we do not need the hand-crafted loop filter or the sample adaptive offset technique \cite{wiegand2003overview, sullivan2012overview} for post processing.
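The warping operation $w(\hat{x}_{t-1}, \hat{v}_t)$ can be realized with bilinear sampling; a minimal PyTorch-style sketch (assuming backward warping, as in common flow-based warping layers) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def warp(x_prev, flow):                # x_prev: (B,3,H,W), flow: (B,2,H,W)
    B, _, H, W = x_prev.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    base = torch.stack((xs, ys), dim=0).float().to(x_prev.device)
    coords = base.unsqueeze(0) + flow  # displaced sampling positions
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0   # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=3)       # (B,H,W,2) for grid_sample
    return F.grid_sample(x_prev, grid, align_corners=True)
\end{verbatim}
The refinement CNN then takes the concatenation of this warped frame, $\hat{x}_{t-1}$, and $\hat{v}_t$ as input and outputs $\bar{x}_t$.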
\subsection{Residual Encoder and Decoder Network}
\label{sec:res_codec}
The residual information $r_t$ between the original frame $x_t$ and the predicted frame $\bar{x}_t$ is encoded by the residual encoder network as shown in Fig. \ref{fig:overview}.
In this paper, we rely on the highly non-linear neural network in \cite{balle2018variational} to transform the residuals to the corresponding latent representation.
Compared with discrete cosine transform in the traditional video compression system, our approach can better exploit the power of non-linear transform and achieve higher compression efficiency.
\subsection{Training Strategy}
\label{sec:training}
\textbf{Loss Function.}
The goal of our video compression framework is to minimize the number of bits used for encoding the video while at the same time reducing the distortion between the original input frame $x_t$ and the reconstructed frame $\hat{x}_t$. Therefore, we propose the following rate-distortion optimization problem,
\begin{equation}
\lambda D+ R = \lambda d(x_t, \hat{x}_t) + (H(\hat{m}_t) + H(\hat{y}_t)),
\label{eq:rdo}
\end{equation}
where $d(x_t, \hat{x}_t)$ denotes the distortion between $x_t$ and $\hat{x}_t$, and we use mean square error (MSE) in our implementation.
$H(\cdot)$ represents the number of bits used for encoding the representations. In our approach, both residual representation $\hat{y}_t$ and motion representation $\hat{m}_t$ should be encoded into the bitstreams. $\lambda$ is the Lagrange multiplier that determines the trade-off between the number of bits and distortion.
As shown in Fig. \ref{fig:overview}(b), the reconstructed frame $\hat{x}_t$, the original frame $x_{t}$ and the estimated bits are input to the loss function.
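A sketch of this objective, assuming \texttt{bits\_m} and \texttt{bits\_y} are the entropy estimates $H(\hat{m}_t)$ and $H(\hat{y}_t)$ produced by the bit rate estimation networks:
\begin{verbatim}
def rd_loss(x_t, x_hat, bits_m, bits_y, lam):
    distortion = ((x_t - x_hat) ** 2).mean()   # MSE term d(x_t, x_hat)
    rate = bits_m + bits_y                     # H(m_hat) + H(y_hat)
    return lam * distortion + rate
\end{verbatim}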
\begin{figure}[!t]
\includegraphics[width=\linewidth]{./Fig/MC_Net.pdf}
\caption{Our Motion Compensation Network.}
\label{fig:mc_net}
\end{figure}
\begin{figure*}[!t]
\centering
\begin{minipage}{0.30\textwidth}
\centerline{\includegraphics[width=5cm]{./Fig/UVG_PSNR.pdf}}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\centerline{\includegraphics[width=5cm]{./Fig/ClassB_PSNR.pdf}}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\centerline{\includegraphics[width=5cm]{./Fig/ClassE_PSNR.pdf}}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\centerline{\includegraphics[width=5cm]{./Fig/UVG_SSIM.pdf}}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\centerline{\includegraphics[width=5cm]{./Fig/ClassB_SSIM.pdf}}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\centerline{\includegraphics[width=5cm]{./Fig/ClassE_SSIM.pdf}}
\end{minipage}
\caption{Comparison between our proposed method and the learning based video codec in \cite{Wu_2018_ECCV}, H.264 \cite{wiegand2003overview} and H.265 \cite{sullivan2012overview}. Our method outperforms H.264 when measured by both PSNR and MS-SSIM. Meanwhile, our method achieves similar or better compression performance when compared with H.265 in terms of MS-SSIM.}
\label{fig:mainresults}
\end{figure*}
\textbf{Quantization.}
Latent representations such as the residual representation $y_t$ and the motion representation $m_t$ are required to be quantized before entropy coding. However, the quantization operation is not differentiable, which makes end-to-end training impossible.
To address this problem, a lot of methods have been proposed \cite{toderici2015variable, agustsson2017soft, balle2016end}.
In this paper, we use the method in \cite{balle2016end} and replace the quantization operation by adding uniform noise in the training stage.
Taking $y_t$ as an example, the quantized representation $\hat{y}_t$ in the training stage is approximated by adding uniform noise to $y_t$, \ie, $\hat{y}_t=y_t + \eta$, where $\eta$ is uniform noise.
In the inference stage, we use the rounding operation directly, \ie, $\hat{y}_t= round(y_t)$.
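A sketch of this scheme (uniform noise in $[-0.5, 0.5)$ at training time, hard rounding at test time):
\begin{verbatim}
import torch

def quantize(y, training=True):
    if training:
        return y + (torch.rand_like(y) - 0.5)  # y_hat = y + eta
    return torch.round(y)
\end{verbatim}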
\textbf{Bit Rate Estimation.}
In order to optimize the whole network for both number of bits and distortion, we need to obtain the bit rate ($H(\hat{y}_t)$ and $H(\hat{m}_t)$) of the generated latent representations ($\hat{y}_t$ and $\hat{m}_t$).
The correct measure for bitrate is the entropy of the corresponding latent representation symbols.
Therefore, we can estimate the probability distributions of $\hat{y}_t$ and $\hat{m}_t$, and then obtain the corresponding entropy.
In this paper, we use the CNNs in \cite{balle2018variational} to estimate the distributions.
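Given the estimated per-symbol likelihoods, the bit cost can be approximated by the cross-entropy $-\sum \log_2 p(\cdot)$, which upper-bounds the arithmetic coding cost; a sketch, where \texttt{p\_model} is a stand-in for the probability estimation network:
\begin{verbatim}
import torch

def estimated_bits(latent_hat, p_model, eps=1e-9):
    p = p_model(latent_hat)            # per-symbol probabilities
    return -torch.log2(p + eps).sum()  # estimated number of bits
\end{verbatim}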
\textbf{Buffering Previous Frames.}
As shown in Fig. \ref{fig:overview}, the previous reconstructed frame $\hat{x}_{t-1}$ is required by the motion estimation and motion compensation networks when compressing the current frame. However, the previous reconstructed frame $\hat{x}_{t-1}$ is the output of our network for the previous frame, which is based on the reconstructed frame $\hat{x}_{t-2}$, and so on. Therefore, the frames $x_1, \ldots, x_{t-1}$ might be required during the training procedure for the frame $x_t$, which reduces the variation of training samples in a mini-batch and could be impossible to store on a GPU when $t$ is large.
To solve this problem, we adopt an online updating strategy.
Specifically, the reconstructed frame $\hat{x}_t$ in each iteration will be saved in a buffer. In the following iterations, $\hat{x}_{t}$ in the buffer will be used for motion estimation and motion compensation when encoding $x_{t+1}$.
Therefore, each training sample in the buffer will be updated in an epoch. In this way, we can optimize and store one frame for a video clip in each iteration, which is more efficient.
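A sketch of this buffering strategy, reusing the hypothetical \texttt{encode\_frame} sketched earlier:
\begin{verbatim}
buffer = {}                            # clip_id -> latest reconstruction

def training_step(clip_id, x_t, x_prev):
    # Fall back to the original previous frame the first time a clip is seen.
    x_prev_hat = buffer.get(clip_id, x_prev)
    x_hat, latents = encode_frame(x_t, x_prev_hat)
    buffer[clip_id] = x_hat.detach()   # reused when encoding the next frame
    return x_hat, latents
\end{verbatim}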
\section{Experiments}
\subsection{Experimental Setup}
\textbf{Datasets.} We train the proposed video compression framework using the Vimeo-90k dataset \cite{xue2017video}, which was recently built for evaluating different video processing tasks, such as video denoising and video super-resolution.
It consists of 89,800 independent clips that are different from each other in content.
To report the performance of our proposed method, we evaluate our proposed algorithm on the UVG dataset \cite{UVG}, and the HEVC Standard Test Sequences (Class B, Class C, Class D and Class E) \cite{sullivan2012overview}.
The content and resolution of these datasets are diversified and they are widely used to measure the performance of video compression algorithms.
\textbf{Evaluation Method.}
To measure the distortion of the reconstructed frames, we use two evaluation metrics: PSNR and MS-SSIM \cite{wang2003multi}.
MS-SSIM correlates better with human perception of distortion than PSNR.
To measure the number of bits for encoding the representations, we use bits per pixel (Bpp) to represent the required bits for each pixel in the current frame.
\textbf{Implementation Details.}
We train four models with different $\lambda$ ($\lambda = 256, 512, 1024, 2048$).
For each model, we use the Adam optimizer \cite{kingma2014adam} by setting the initial learning rate as 0.0001, $\beta_1$ as 0.9 and $\beta_2$ as 0.999, respectively.
The learning rate is divided by 10 when the loss becomes stable.
The mini-batch size is set as 4.
The resolution of training images is $256 \times 256$.
The motion estimation module is initialized with the pretrained weights in \cite{ranjan2017optical}.
The whole system is implemented based on Tensorflow and it takes about 7 days to train the whole network using two Titan X GPUs.
\subsection{Experimental Results }
In this section, both H.264 \cite{wiegand2003overview} and H.265 \cite{sullivan2012overview} are included for comparison. In addition, a learning based video compression system in \cite{Wu_2018_ECCV}, denoted by Wu\_ECCV2018, is also included for comparison.
To generate the compressed frames by H.264 and H.265, we follow the setting in \cite{Wu_2018_ECCV} and use FFmpeg with the \textit{very fast} mode. The GOP sizes for the UVG dataset and the HEVC dataset are 12 and 10, respectively.
Please refer to supplementary material for more details about the H.264/H.265 settings.
Fig. \ref{fig:mainresults} shows the experimental results on the UVG dataset, the HEVC Class B and Class E datasets.
The results for HEVC Class C and Class D are provided in supplementary material.
It is obvious that our method outperforms the recent work on video compression \cite{Wu_2018_ECCV} by a large margin.
On the UVG dataset, the proposed method achieves a gain of about 0.6dB at the same Bpp level.
It should be mentioned that our method only uses one previous reference frame while the work by Wu \textit{et al.} \cite{Wu_2018_ECCV} utilizes bidirectional frame prediction and requires two neighbouring frames.
Therefore, it is possible to further improve the compression performance of our framework by exploiting temporal information from multiple reference frames.
On most of the datasets, our proposed framework outperforms the H.264 standard when measured by PSNR and MS-SSIM.
In addition, our method achieves similar or better compression performance when compared with H.265 in terms of MS-SSIM.
As mentioned before, the distortion term in our loss function is measured by MSE. Nevertheless, our method can still provide reasonable visual quality in terms of MS-SSIM.
\begin{figure}[!t]
\centering
\begin{minipage}{0.45\linewidth}\footnotesize
\centerline{\includegraphics[width=8.2cm]{./Fig/ClassB_Ablation.pdf}}
\end{minipage}
\caption{Ablation study. We report the compression performance in the following settings. 1. The strategy of buffering previous frames is not adopted (\textcolor{red}{W/O update}). 2. The motion compensation network is removed (\textcolor{green}{W/O MC}). 3. The motion estimation module is not jointly optimized (\textcolor{blue}{W/O Joint Training}). 4. The motion compression network is removed (\textcolor{magenta}{W/O MVC}). 5. Without relying on motion information (\textcolor[rgb]{0.749,0.749,0.239}{W/O Motion Information}).}
\label{fig:ablation}
\end{figure}
\subsection{Ablation Study and Model Analysis}
\label{sec:ablation}
\textbf{Motion Estimation.}
In our proposed method, we exploit the advantage of the end-to-end training strategy and optimize the motion estimation module within the whole network.
Therefore, based on rate-distortion optimization, the optical flow in our system is expected to be more compressible while still leading to accurate warped frames.
To demonstrate the effectiveness, we perform an experiment by fixing the parameters of the initialized motion estimation module during the whole training stage. In this case, the motion estimation module is pretrained only for estimating optical flow as accurately as possible, but not for optimal rate-distortion performance.
The experimental result in Fig. \ref{fig:ablation} shows that our approach with joint training of motion estimation significantly improves performance when compared with the approach with fixed motion estimation, which is denoted by \emph{W/O Joint Training} in Fig. \ref{fig:ablation} (see the \textcolor{blue}{blue curve}).
We report the average bits costs for encoding the optical flow and the corresponding PSNR of the warped frame in Table \ref{tab:jointMV}.
Specifically,
when the motion estimation module is fixed during the training stage, it needs 0.044bpp to encode the generated optical flow and the corresponding PSNR of the warped frame is 27.33dB.
In contrast, we need 0.029bpp to encode the optical flow in our proposed method, and the PSNR of warped frame is higher (28.17dB). Therefore, the joint learning strategy not only saves the number of bits required for encoding the motion, but also has better warped image quality. These experimental results clearly show that putting motion estimation into the rate-distortion optimization improves compression performance.
In Fig. \ref{fig:flow_visual}, we provide further visual comparisons.
Fig. \ref{fig:flow_visual} (a) and (b) represent the frame 5 and frame 6 from the Kimono sequence.
Fig. \ref{fig:flow_visual} (c) denotes the reconstructed optical flow map when the optical flow network is fixed during the training procedure.
Fig. \ref{fig:flow_visual} (d) represents the reconstructed optical flow map after using the joint training strategy.
Fig. \ref{fig:flow_visual} (e) and (f) are the corresponding probability distributions of optical flow magnitudes.
It can be observed that the reconstructed optical flow produced by our method contains more pixels with zero flow magnitude (e.g., in the area of the human body).
Although zero is not the true optical flow value in these areas, our method can still generate accurate motion compensated results in such homogeneous regions.
More importantly, the optical flow map with more zero magnitudes requires much less bits for encoding.
For example, it needs 0.045bpp for encoding the optical flow map in Fig. \ref{fig:flow_visual} (c) while it only needs 0.038bpp for encoding optical flow map in Fig. \ref{fig:flow_visual} (d).
It should be mentioned that in the H.264 \cite{wiegand2003overview} or H.265 \cite{sullivan2012overview}, a lot of motion vectors are assigned to zero for achieving better compression performance.
Surprisingly, our proposed framework can learn a similar motion distribution without relying on any complex hand-crafted motion estimation strategy as in \cite{wiegand2003overview, sullivan2012overview}.
\begin{table}
\centering
\begin{tabular}{|cc|cc|cc|}
\hline \multicolumn{2}{|c|}{Fix ME} & \multicolumn{2}{|c|}{W/O MVC} & \multicolumn{2}{|c|}{Ours}\\
\hline Bpp & PSNR & Bpp & PSNR & Bpp & PSNR\\
\hline 0.044&27.33& 0.20& 24.32 & 0.029& 28.17\\
\hline
\end{tabular}
\caption{The bit cost for encoding the optical flow and the corresponding PSNR of the warped frame are provided for different settings.}
\label{tab:jointMV}
\end{table}
\begin{figure}[!t]
\captionsetup[subfigure]{aboveskip=-0.8pt,belowskip=-1pt}
\centering
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/KIim005.jpg}
\caption{Frame No.5}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/KIim006.jpg}
\caption{Frame No.6}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/FIXMEim006_flow.jpg}
\caption{Reconstructed optical flow when fixing ME Net.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/JOINTim006_flow.jpg}
\caption{Reconstructed optical flow with the joint training strategy.}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/fixme_hist.pdf}
\caption{Magnitude distribution of the optical flow map (c).}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/joint_hist.pdf}
\caption{Magnitude distribution of the optical flow map (d).}
\end{subfigure}
\caption{Flow visualize and statistic analysis. }
\label{fig:flow_visual}
\end{figure}
\textbf{Motion Compensation.}
In this paper, the motion compensation network is utilized to refine the warped frame based on the estimated optical flow.
To evaluate the effectiveness of this module, we perform another experiment by removing the motion compensation network in the proposed system.
Experimental results of the alternative approach denoted by \emph{W/O MC} (see the \textcolor{green}{green curve} in Fig. \ref{fig:ablation}) show that the PSNR without the motion compensation network will drop by about 1.0 dB at the same bpp level.
\textbf{Updating Strategy.}
As mentioned in Section \ref{sec:training}, we use an on-line buffer to store previously reconstructed frames $\hat{x}_{t-1}$ in the training stage when encoding the current frame $x_t$.
We also report the compression performance when the previous reconstructed frame $\hat{x}_{t-1}$ is directly replaced by the previous original frame $x_{t-1}$ in the training stage.
This result of the alternative approach denoted by \textit{W/O update} (see the \textcolor{red}{red curve}) is shown in Fig. \ref{fig:ablation}.
It demonstrates that the buffering strategy can provide about 0.2dB gain at the same bpp level.
\textbf{MV Encoder and Decoder Network.}
In our proposed framework, we design a CNN model to compress the optical flow and encode the corresponding motion representations.
It is also feasible to directly quantize the raw optical flow values and encode them without using any CNN.
We perform a new experiment by removing the MV encoder and decoder network. The experimental result in Fig. \ref{fig:ablation} shows that
the PSNR of the alternative approach denoted by \emph{W/O MVC} (see the \textcolor{magenta}{magenta curve}) drops by more than 2dB after removing the motion compression network.
In addition, the bit cost for encoding the optical flow in this setting and the corresponding PSNR of the warped frame are also provided in Table \ref{tab:jointMV} (denoted by W/O MVC).
It is obvious that directly encoding raw optical flow values requires many more bits (0.20Bpp) and the corresponding PSNR (24.32dB) is much worse than that of our proposed method (28.17dB). Therefore, compression of motion is crucial when optical flow is used for estimating motion.
\textbf{Motion Information.}
In Fig. \ref{fig:overview}(b), we also investigate the setting which only retains the residual encoder and decoder network.
Treating each frame independently without using any motion estimation approach (see the \textcolor[rgb]{0.749,0.749,0.239}{yellow curve} denoted by \emph{W/O Motion Information}) leads to more than 2dB drop in PSNR when compared with our method.
\begin{figure}[!t]
\captionsetup[subfigure]{aboveskip=-0.8pt,belowskip=-1pt}
\centering
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/bitrate_analysis.pdf}
\caption{Actual and estimated bit rate.}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.235\textwidth}\footnotesize
\centering
\includegraphics[width=\linewidth]{./Fig/mvbpp_3.pdf}
\caption{Motion information percentages.}
\end{subfigure}
\caption{Bit rate analysis. }
\label{fig:bitrate_analysis}
\end{figure}
\textbf{Running Time and Model Complexity.}
The total number of parameters of our proposed end-to-end video compression framework is about 11M.
In order to test the speed of different codecs, we perform the experiments on a computer with an Intel Xeon E5-2640 v4 CPU and a single Titan 1080Ti GPU.
For videos with the resolution of 352x288, the encoding (\textit{resp.} decoding) speed of each iteration of Wu \textit{et al.}'s work \cite{Wu_2018_ECCV} is 29fps (\textit{resp.} 38fps), while the overall speed of ours is 24.5fps (\textit{resp.} 41fps).
The corresponding encoding speeds of H.264 and H.265 based on the official software JM \cite{JM} and HM \cite{HM} are 2.4fps and 0.35fps, respectively.
The encoding speed of the commercial software x264 \cite{x264} and x265 \cite{x265} are 250fps and 42fps, respectively.
Although the commercial codecs x264 \cite{x264} and x265 \cite{x265} provide much faster encoding than ours, they benefit from extensive code optimization. Correspondingly, recent deep model compression approaches could be applied off-the-shelf to speed up our model, which is beyond the scope of this paper.
\textbf{Bit Rate Analysis.}
In this paper, we use a probability estimation network in \cite{balle2018variational} to estimate the bit rate for encoding motion information and residual information.
To verify the reliability, we compare the estimated bit rate and the actual bit rate by using arithmetic coding in Fig. \ref{fig:bitrate_analysis}(a).
It is obvious that the estimated bit rate is close to the actual bit rate.
In addition, we further investigate on the components of bit rate.
In Fig. \ref{fig:bitrate_analysis}(b), we provide the $\lambda$ value and the percentage of motion information at each point. When $\lambda$ in our objective function $\lambda D + R$ becomes larger, the overall Bpp also becomes larger while the corresponding percentage of motion information drops.
\section{Conclusion}
In this paper, we have proposed a fully end-to-end deep learning framework for video compression.
Our framework inherits the advantages of both the classic predictive coding scheme in traditional video compression standards and the powerful non-linear representation ability of DNNs.
Experimental results show that our approach outperforms the widely used H.264 video compression standard and the recent learning based video compression system.
This work provides a promising framework for applying deep neural networks to video compression.
Moreover, new techniques for optical flow, image compression, bi-directional prediction and rate control can be readily plugged into the proposed framework.
\noindent
\textbf{Acknowledgement}
This work was supported in part by the National Natural Science Foundation of China (61771306), the Natural Science Foundation of Shanghai (18ZR1418100), the Chinese National Key S\&T Special Program (2013ZX01033001-002-002), and the Shanghai Key Laboratory of Digital Media Processing and Transmissions (STCSM 18DZ2270700).
{\small
\bibliographystyle{ieee}
\section{Introduction}
The theory of quasi-random graphs asks the following fundamental question: which properties
of graphs are such that any graph that satisfies them, resembles an appropriate random graph
(namely, the graph satisfies the properties that a random graph would
satisfy, with high probability). Such properties are called {\em quasi-random}.
The theory of quasi-random graphs was initiated by Thomason \cite{Th1,Th2} and then
followed by Chung, Graham and Wilson who proved the fundamental theorem of quasi-random
graphs \cite{ChGrWi}. Since then there have been many papers on this subject (see, e.g.
the excellent survey \cite{KrSu}).
Quasi-random properties were also studied for
other combinatorial structures such as set systems \cite{ChGr}, tournaments \cite{ChGr1}, and
hypergraphs \cite{ChGr2}. There are also some very recent results on quasi-random groups
\cite{Go} and generalized quasi-random graphs \cite{LoSo}.
In order to formally define $p$-quasi-randomness we need to state the fundamental theorem of
quasi-random graphs. As usual, a {\em labeled copy} of a graph $H$ in a graph
$G$ is an injective mapping $\phi$ from $V(H)$ to $V(G)$ that maps edges to edges.
That is $(x,y) \in E(H)$ implies $(\phi(x), \phi(y)) \in E(G)$.
For a set of vertices $U \subset V(G)$ we denote by $H[U]$ the number of labeled
copies of $H$ in the subgraph of $G$ induced by $U$ and by $e(U)$ the number of edges
of $G$ with both endpoints in $U$. A graph sequence $(G_n)$ is an infinite sequence of graphs
$\{G_1,G_2,\ldots\}$ where $G_n$ has $n$ vertices. The following
result of Chung, Graham, and Wilson \cite{ChGrWi}
shows that many properties of different nature are equivalent to the notion of
quasi-randomness, defined using edge distribution.
The original theorem lists seven such equivalent properties, but we only state four of them
here.
\begin{theo}[Chung, Graham, and Wilson \cite{ChGrWi}]
\label{t-CGW}
Fix any $0 < p < 1$.
For any graph sequence $(G_n)$ the following properties are equivalent:
\begin{itemize}
\item[${\cal P}_1(t)$:]
For an even integer $t \ge 4$, let
$C_t$ denote the cycle of length $t$. Then $e(G_n) = \frac{1}{2}pn^2+o(n^2)$ and
$C_t[G_n] = p^tn^t + o(n^t)$.
\item[${\cal P}_2$:]
For any subset of vertices $U \subseteq V(G_n)$ we have
$e(U)=\frac{1}{2}p|U|^2 +o(n^2)$.
\item[${\cal P}_3$:]
For any subset of vertices $U \subseteq V(G_n)$ of size $n/2$ we have
$e(U)=\frac{1}{2}p|U|^2 +o(n^2)$.
\item[${\cal P}_4(\alpha)$:]
Fix an $\alpha \in (0,\frac12)$. For any
$U \subseteq V(G_n)$ of size $\alpha n$ we have $e(U, V \setminus U)=p\alpha(1-\alpha)n^2
+o(n^2)$.
\end{itemize}
\end{theo}
The {\em formal} meaning of the properties being equivalent is expressed, as
usual, using $\epsilon,\delta$ notation. For example the meaning that ${\cal P}_3$ implies
${\cal P}_2$ is that for any $\epsilon > 0$ there exist $\delta=\delta(\epsilon)$
and $N=N(\epsilon)$ so that for all $n > N$, if $G$ is a graph with $n$ vertices
having the property that any subset of vertices $U$ of size $n/2$ satisfies
$|e(U)-\frac{1}{2}p|U|^2| < \delta n^2$ then also for any subset of vertices $W$
we have $|e(W)-\frac{1}{2}p|W|^2| < \epsilon n^2$.
Given Theorem \ref{t-CGW} we say that a graph property is $p$-quasi-random if it is
equivalent to any (and therefore all) of the four properties defined in that theorem.
(We will usually just say {\em quasi-random} instead of {\em $p$-quasi-random} since $p$
is fixed throughout the proofs).
Note, that each of the four properties in Theorem \ref{t-CGW} is a property we would expect
$G(n,p)$ to satisfy with high probability.
It is far from true, however, that any property that almost surely holds for $G(n,p)$ is
quasi-random. For example, it is easy to see that having vertex degrees
$np(1+o(1))$ is not a quasi-random property (just take vertex-disjoint cliques of
size roughly $np$ each). An important family of {\em non} quasi-random properties are those requiring the
graphs in the sequence to have the correct number of copies of a fixed graph $H$. Note that
${\cal P}_1(t)$ guarantees that for any {\em even} $t$, if a graph sequence
has the correct number of edges as well as the correct number of copies of $H=C_t$ then the
sequence is quasi-random. As observed in \cite{ChGrWi} this is not true for all graphs.
In fact, already for $H=K_3$ there are simple constructions showing that this is not true.
Simonovits and S\'os observed that the standard counter-examples showing that for some graphs
$H$, having the correct number of copies of $H$ is not enough to guarantee quasi-randomness,
have the property that the number of copies of $H$ in some of the induced subgraphs of these
counter-examples deviates significantly from what it should be.
As quasi-randomness is a hereditary property, in the sense that we expect a
sub-structure of a random-like object to be random-like as well, they introduced the
following variant
of property ${\cal P}_1$ of Theorem \ref{t-CGW}, where now we require all subsets of vertices
to contain the ``correct'' number of copies of $H$.
\begin{definition}[${\cal P}_H$]
\label{d-ss}
For a fixed graph $H$ with $h$ vertices and $r$ edges,
we say that a graph sequence $(G_n)$ satisfies ${\cal P}_H$ if
all subsets of vertices $U \subseteq V(G_n)$
satisfy $H[U] = p^r|U|^h+o(n^h)$.
\end{definition}
As opposed to ${\cal P}_1$, which is quasi-random only for even cycles, Simonovits
and S\'os \cite{SiSo} showed that ${\cal P}_H$ is quasi-random for any nonempty graph $H$.
\begin{theo}
\label{t-ss}
For any fixed $H$ that has edges, property ${\cal P}_H$ is quasi-random.
\end{theo}
We can view property ${\cal P}_H$ as a generalization of property ${\cal P}_2$ in
Theorem \ref{t-CGW}, since ${\cal P}_2$ is just the special case ${\cal P}_{K_2}$.
Now, property ${\cal P}_3$ in Theorem \ref{t-CGW} guarantees that in order
to infer that a sequence is quasi-random, and thus satisfies ${\cal P}_2$,
it is enough to require only the sets
of vertices of size $n/2$ to contain the correct number of edges.
An open problem raised by Simonovits and S\'os \cite{SiSo}, and in a stronger form by Shapira
\cite{Sh}, is that the analogous condition also holds for any $H$. Namely, in order
to infer that a sequence is quasi-random, and thus satisfies ${\cal P}_H$,
it is enough, say, to require only the sets of vertices of size $n/2$ to contain the correct
number of copies of $H$. Shapira \cite{Sh} proved that is it enough to consider sets of
vertices of size
$n/(h+1)$. Hence, in his result, the cardinality of the sets {\em depends} on $h$.
Thus, if $H$ has 1000 vertices, Shapira's result shows that it suffices to check vertex
subsets having a fraction smaller than $1/1000$ of the total number of vertices.
His proof method cannot be extended to obtain the same result for fractions larger than
$1/(h+\epsilon)$.
In this paper we settle the above mentioned open problem completely. In fact, we show that
for any $H$,
not only is it enough to check only subsets of size $n/2$, but, more generally, we show that
it is enough to check subsets of size $\alpha n$ for any fixed $\alpha \in (0,1)$.
More formally, we define:
\begin{definition}[${\cal P}_{H,\alpha}$]
\label{d-main}
For a fixed graph $H$ with $h$ vertices and $r$ edges and fixed $0 < \alpha < 1$
we say that a graph sequence $(G_n)$ satisfies ${\cal P}_{H,\alpha}$ if
all subsets of vertices $U \subset V(G_n)$ with $|U|=\lfloor \alpha n \rfloor$
satisfy $H[U] = p^r|U|^h+o(n^h)$.
\end{definition}
\noindent
Our main result is, therefore:
\begin{theo}
\label{t-main}
For any fixed graph $H$ and any fixed $0 < \alpha < 1$, property ${\cal P}_{H, \alpha}$ is
quasi-random.
\end{theo}
\section{Proof of the main result}
For the remainder of this section let $H$ be a fixed graph with $h > 1$ vertices
and $r > 0$ edges, and let $\alpha \in (0,1)$ be fixed.
Throughout this section we ignore rounding issues and, in particular, assume that $\alpha n$
is an integer, as this has no effect on the asymptotic nature of the results.
Suppose that the graph sequence $(G_n)$ satisfies ${\cal P}_{H,\alpha}$.
We will prove that it is quasi-random by showing that it also satisfies
${\cal P}_H$. In other words, we need to prove the following lemma which,
together with Theorem \ref{t-ss}, yields Theorem \ref{t-main}.
\begin{lemma}
\label{l-main}
For any $\epsilon > 0$ there exists $N=N(\epsilon,h,\alpha)$ and
$\delta=\delta(\epsilon,h,\alpha)$ so that for all $n > N$,
if $G$ is a graph with $n$ vertices satisfying that for all $U \subset V(G)$
with $|U|=\alpha n$ we have $|H[U] - p^r|U|^h| < \delta n^h$
then $G$ also satisfies that for all $W \subset V(G)$
we have $|H[W] - p^r|W|^h| < \epsilon n^h$.
\end{lemma}
{\bf Proof:}\,
Suppose therefore that $\epsilon > 0$ is given. Let
$N=N(\epsilon,h,\alpha)$, $\epsilon'=\epsilon'(\epsilon,h,\alpha)$ and
$\delta=\delta(\epsilon,h,\alpha)$ be parameters
to be chosen so that $N$ is sufficiently large and $\delta \ll \epsilon'$ are both
sufficiently
small to satisfy the inequalities that will follow, and it will be clear that
they are indeed only functions of $\epsilon,h$, and $\alpha$.
Now, let $G$ be a graph with $n > N$ vertices satisfying that
for all $U \subset V(G)$ with $|U|=\alpha n$ we have $|H[U] - p^r|U|^h| < \delta n^h$.
Consider any subset $W \subset V(G)$. We need to prove that
$|H[W] - p^r|W|^h| < \epsilon n^h$.
For convenience, set $k=\alpha n$.
Let us first prove this for the case where $|W|=m > k$.
This case can rather easily be proved via a simple counting argument.
Denote by ${\cal U}$ the set of ${m \choose k}$ $k$-subsets of $W$.
Hence, by the given condition on $k$-subsets,
\begin{equation}
\label{e1}
{m \choose k}(p^rk^h - \delta n^h) < \sum_{U \in {\cal U}} H[U] < {m \choose k}(p^rk^h +
\delta n^h) \,.
\end{equation}
Every copy of $H$ in $W$ appears in precisely ${{m-h} \choose {k-h}}$ distinct
$U \in {\cal U}$. It follows from (\ref{e1}) that
\begin{equation}
\label{e2}
H[W] = \frac{1}{{{m-h} \choose {k-h}}}\sum_{U \in {\cal U}} H[U] <
\frac{{m \choose k}}{{{m-h} \choose {k-h}}}(p^rk^h + \delta n^h) <
p^rm^h + \frac{\epsilon'}{2} n^h \,,
\end{equation}
and similarly from (\ref{e1})
\begin{equation}
\label{e3}
H[W] = \frac{1}{{{m-h} \choose {k-h}}}\sum_{U \in {\cal U}} H[U] >
\frac{{m \choose k}}{{{m-h} \choose {k-h}}}(p^rk^h - \delta n^h) >
p^rm^h - \frac{\epsilon'}{2} n^h \,.
\end{equation}
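(To justify the last inequality in each of (\ref{e2}) and (\ref{e3}), note that the binomial ratio has the explicit value
$$
\frac{{m \choose k}}{{{m-h} \choose {k-h}}} = \frac{m(m-1)\cdots(m-h+1)}{k(k-1)\cdots(k-h+1)}\,,
$$
so multiplying it by $p^rk^h$ gives $p^rm^h(1+O(1/k)) = p^rm^h + o(n^h)$, while multiplying it by $\delta n^h$ gives at most $\delta\alpha^{-h}(1+o(1))n^h$; both error terms are smaller than $\frac{\epsilon'}{4}n^h$ once $\delta$ is sufficiently small with respect to $\epsilon'$ and $n > N$ is sufficiently large.)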
We now consider the case where $|W| = m = \beta n < \alpha n = k$.
Notice that we can assume that $\beta \ge \epsilon$ since otherwise the
result is trivially true.
The set ${\cal H}$ of $H$-subgraphs of $G$ can be partitioned into
$h+1$ types, according to the number of vertices they have in $W$.
Hence, for $j=0,\ldots,h$ let ${\cal H}_j$ be the set of $H$-subgraphs of
$G$ that contain precisely $j$ vertices in $V \setminus W$. Notice that, by definition,
$|{\cal H}_0| = H[W]$. For convenience, denote $w_j = |{\cal H}_j|/n^h$.
We therefore have
\begin{equation}
\label{e4}
w_0 + w_1 + \cdots + w_h = \frac{|{\cal H}|}{n^h} = \frac{H[V]}{n^h} = p^r+\mu
\end{equation}
where $|\mu| < \epsilon'/2$.
Define $\lambda=\frac{(1-\alpha)}{h+1}$ and set $k_i=k+i\lambda n$ for
$i=1,\ldots,h$. Let $Y_i \subset V \setminus W$ be a random set of $k_i-m$
vertices, chosen uniformly at random from all ${{n-m} \choose {k_i-m}}$ subsets
of size $k_i-m$ of $V \setminus W$. Denote $K_i=Y_i \cup W$ and notice that $|K_i| = k_i >
\alpha n$.
We will now estimate the number of elements of ${\cal H}_j$ that ``survive'' in $K_i$.
Formally, let ${\cal H}_{j,i}$ be the set of elements of ${\cal H}_j$ that have all of their
vertices in $K_i$, and let $m_{j,i}=|{\cal H}_{j,i}|$. Clearly, $m_{0,i}=H[W]$ since
$W \subset K_i$. Furthermore, by (\ref{e2}) and (\ref{e3}),
\begin{equation}
\label{e5}
m_{0,i}+m_{1,i}+\cdots+m_{h,i} = H[K_i] = p^rk_i^h+\rho_in^h
\end{equation}
where $\rho_i$ is a random variable with $|\rho_i| < \epsilon'/2$.
For an $H$-copy
$T \in {\cal H}_j$ we compute the probability $p_{j,i}$ that $T \in {\cal H}_{j,i}$.
Since $T \in {\cal H}_{j,i}$ if and only if all the $j$ vertices of $T$ in $V \setminus W$
appear in $Y_i$, we have
$$
p_{j,i} = \frac{{{n-m-j} \choose {k_i-m-j}}}{{{n-m} \choose {k_i-m}}} =
\frac{(k_i-m)\cdots(k_i-m-j+1)}{(n-m) \cdots (n-m-j+1)}\,.
$$
Defining $x_i = (k_i-m)/(n-m)$ and noticing that
$$
x_i = \frac{k_i-m}{n-m} = \frac{\alpha - \beta}{1-\beta}+\frac{\lambda}{1-\beta}i
$$
it follows that, for $n$ sufficiently large,
\begin{equation}
\label{e6}
\left|p_{j,i}-x_i^j\right| < \frac{\epsilon'}{2}\,.
\end{equation}
Clearly, the expectation of $m_{j,i}$ is ${\rm E}[m_{j,i}]=p_{j,i}|{\cal H}_j|$.
By linearity of expectation we have from (\ref{e5}) that
$$
{\rm E}[m_{0,i}]+{\rm E}[m_{1,i}]+\cdots+{\rm E}[m_{h,i}] =
{\rm E}[H[K_i]] = p^rk_i^h+{\rm E}[\rho_i]n^h.
$$
Dividing the last equality by $n^h$ we obtain
\begin{equation}
\label{e7}
p_{0,i}w_0 + \cdots + p_{h,i}w_h = p^r\left(\alpha+\lambda i\right)^h+{\rm E}[\rho_i]\,.
\end{equation}
By (\ref{e6}) and (\ref{e7}) we therefore have
\begin{equation}
\label{e8}
\sum_{j=0}^h x_i^j w_j = p^r\left(\alpha+\lambda i\right)^h+\mu_i
\end{equation}
where $\mu_i = {\rm E}[\rho_i]+\zeta_i$ and $|\zeta_i| < \epsilon'/2$.
Since also $|\rho_i| < \epsilon'/2$ we have that $|\mu_i| < \epsilon'$.
Now, (\ref{e4}) and (\ref{e8}) form together a system of $h+1$ linear equations with
the $h+1$ variables $w_0,\ldots,w_h$. The coefficient matrix of this system
is just the Vandermonde matrix $A=A(x_1,\ldots,x_h,1)$.
Since $x_1,\ldots,x_h,1$ are all distinct, and, in fact, the gap between any two of them
is at least $\lambda/(1-\beta)=(1-\alpha)/((h+1)(1-\beta)) \ge (1-\alpha)/(h+1)$,
we have that the system has a unique solution which is $A^{-1}b$ where
$b \in R^{h+1}$ is the column vector whose $i$'th coordinate is $p^r\left(\alpha+\lambda
i\right)^h+\mu_i$
for $i=1,\ldots,h$ and whose last coordinate is $p^r + \mu$.
Consider now the vector $b^*$ which is the same as $b$, just without the $\mu_i$'s. Namely
$b^* \in R^{h+1}$ is the column vector whose $i$'th coordinate is $p^r\left(\alpha+\lambda
i\right)^h$
for $i=1,\ldots,h$ and whose last coordinate is $p^r$.
Then the system with right-hand side $b^*$ also has the unique solution $A^{-1}b^*$ and, in fact, we {\em know} explicitly what this solution
what this solution
is. It is the vector $w^*=(w_0^*,\ldots,w^*_h)$ where
$$
w_j^* = p^r {h \choose j}\beta^{h-j}(1-\beta)^j\,.
$$
Indeed, it is straightforward to verify the equality
$$
\sum_{j=0}^h
p^r {h \choose j}\beta^{h-j}(1-\beta)^j = p^r
$$
and, for all $i=1,\ldots,h$ the equalities
$$
\sum_{j=0}^h \left(\frac{\alpha - \beta}{1-\beta}+\frac{\lambda}{1-\beta}i\right)^j
p^r {h \choose j}\beta^{h-j}(1-\beta)^j = p^r\left(\alpha+\lambda i\right)^h\,.
$$
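Both equalities are instances of the binomial theorem: for any real $x$,
$$
\sum_{j=0}^h x^j {h \choose j}\beta^{h-j}(1-\beta)^j = \big(\beta + x(1-\beta)\big)^h\,,
$$
and taking $x=1$ gives the first equality, while taking $x = \frac{\alpha-\beta}{1-\beta}+\frac{\lambda}{1-\beta}\,i$ gives $\beta+x(1-\beta) = \alpha+\lambda i$, which is the second.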
Now, since the mapping $F: R^{h+1} \rightarrow R^{h+1}$ that maps a vector $c$ to $A^{-1}c$ is continuous,
continuous,
we know that for $\epsilon'$ sufficiently small, if each coordinate of $c$ has absolute value
less than $\epsilon'$, then each coordinate of $A^{-1}c$ has absolute value at most
$\epsilon$.
Now, define $c=b-b^*=(\mu_1,\ldots,\mu_h,\mu)$. Then we have that each coordinate $w_i$ of
$A^{-1}b$
differs from the corresponding coordinate $w_i^*$ of $A^{-1}b^*$ by at most $\epsilon$.
In particular,
$$
|w_0 -w_0^*| = |w_0 - p^r\beta^h| < \epsilon.
$$
Hence,
$$
|H[W] - n^hp^r\beta^h | = |H[W] - p^r|W|^h| < \epsilon n^h
$$
as required.
\hfill\square\bigskip
\section{Introduction}\label{sec:introduction}
Intelligent robot assistants are increasingly expected to solve complex tasks in industrial environments and at home. Different from traditional industrial robots that complete a specific task in simple structured environments repeatedly, intelligent robot assistants may be required to perform different tasks in varying unstructured scenarios. Although Learning from Demonstration (LfD)
provides ordinary users with practical interfaces to teach robots manipulation skills \cite{billard2008survey}\cite{calinon2009robot}, an imitation learning framework with generalization ability is still needed to further reduce human intervention and to improve robots' ability to adapt to environment changes. Besides, as robot assistants usually operate in human-populated environments, the manipulation compliance
should also be carefully scheduled to ensure safe interaction while the robot completes the target task.
The Dynamic Movement Primitives (DMP) model, firstly introduced in \cite{ijspeert2001trajectory} and further improved in
\cite{hoffmann2009biologically},\cite{ijspeert2013dynamical}, has become popular in
the LfD community because of its powerful generalization ability. In general, DMP models each dimension of the
demonstrated movement trajectory as a second-order damped spring system. By approximating the non-linear force terms
and adjusting the attractor points, DMP can then generalize the demonstrated trajectory to similar situations while
keeping its overall shape. With this property, DMP has been used in various robot manipulation scenarios, such as
grasping objects at different positions \cite{hoffmann2009biologically}, playing drums at different heights \cite{hoffmann2009biologically}, and tennis swings \cite{ijspeert2002learning}. In these
papers, by endowing robots with generalizable manipulation skills from a single kinesthetic demonstration, DMP greatly reduces human intervention during robot skill acquisition. However, the manipulation compliance is still ignored in most DMP-based skill learning frameworks.
\begin{figure*}[hbt]
\centering
\includegraphics[trim=0.2cm 0.2cm 0.2cm 0.3cm,clip, width=\textwidth, height=4cm]{overview.pdf}
\caption{The overview of our proposed learning framework.}
\label{fig:Overview of the framework}
\vspace{-0.5cm}
\end{figure*}
Impedance Control (IC) \cite{hogan1985impedance} is commonly introduced in robot controller design for achieving compliant motions, in which
the controller can be viewed as a virtual spring-damp system between the robot end-effector and the environment. By
adapting the impedance parameters based on task requirements and environment dynamics, Variable Impedance Control (VIC)
can vary the manipulation compliance to ensure safe interaction and proper task completion \cite{ude2010task}. In
\cite{buchli2011learning}, the authors
proposed a Reinforcement Learning (RL) framework titled Policy Improvement with Path Integrals (PI$^2$) where DMP was
firstly integrated with impedance parameters optimization. This RL framework parameterizes the movement trajectory and
impedance parameters with DMP models and then optimizes the parameters with the policy search optimization method.
Different from PI$^2$ where impedance parameters are learned indirectly, the authors in \cite{ajoudani2012tele} managed to explicitly estimate
human arm stiffness profiles based on electromyographic (EMG) signals while humans perform tasks. In \cite{wu2020framework}, we combined this EMG-based human arm stiffness estimation
method with optimal control theory and proposed an autonomous impedance regulation framework in a class of manipulation
tasks. While previously presented stiffness estimation methods show their potential in variable impedance manipulation skill acquisition, the complexity of estimating a set of parameters and the requirement of multiple EMG sensors make them inefficient and impractical for robot users.
In contrast, estimating impedance parameters from human demonstrations can be a more efficient way. In \cite{rozo2013learning}, the
authors proposed a human-robot collaborative assembly framework where the stiffness matrix is estimated by a Weighted
Least-Squares (WLS) algorithm, with a small number of demonstrations and sensed force information. In \cite{abu2018force}, the authors
considered trajectory as a virtual spring-damper system, and estimated stiffness profiles based on demonstrated
kinesthetic trajectory and the associated sensed Cartesian force information. In \cite{calinon2010learning}, the authors also modeled
trajectory with a virtual spring-damper system, but they estimated the gain parameters of this spring-damper system
from demonstrated position trajectory to formulate a compliant controller. In their model, the demonstrated position
trajectory distribution is generated with the Gaussian Mixture Model-Gaussian Mixture Regression (GMM-GMR) algorithm
\cite{ghahramani1994supervised},\cite{calinon2007learning}, and the stiffness profiles are shaped so that the robot changes to a high stiffness in directions of low
variance. In \cite{kronander2012online}, this stiffness adaptation method is further applied to decrease stiffness profiles online when a
human perturbs the robot end-effector around its equilibrium point. However, most previously presented methods mainly focus
on estimating translational stiffness, while rotational stiffness is ignored.
In this paper, we modify the stiffness estimation method in \cite{calinon2010learning} by integrating the quaternion logarithmic
mapping function. This modification allows us to transform quaternions into decoupled 3D tangent vectors
and then to estimate rotational stiffness profiles based on the variances of the tangent vectors. Besides, we noticed that most works
on robot variable impedance manipulation skill learning focus on variable impedance skill reproduction and trajectory generalization, while the stiffness profiles are seldom generalized.
In this paper, we argue that generalizing the stiffness profiles like the trajectory can further improve robots' adaptability to
environment changes. To this end, we extend the classical DMP motion trajectory scheduling equations with stiffness generalization parts. The resulting learning framework is similar to the one presented in \cite{yang2018dmps}, where joint stiffness profiles are estimated with EMG signals and then generalized in joint space. The main differences are that: 1) in our work, we estimate stiffness profiles from human demonstrations, which is more practical and efficient than EMG-based methods; 2) more importantly, we learn and generalize variable impedance manipulation skills in Cartesian space. This is a more natural way to regulate trajectory and stiffness profiles, as the goals for skill generalization
and the task constraints are normally presented in Cartesian space.
This paper is organized as follows. In Section 2, we introduce the methodology with the overview of our learning framework shown in Fig.\ref{fig:Overview of the framework}; In Section 3, we show the real-world validation experiments and analyze the results; In Section 4 and Section 5, we discuss and conclude this paper.
\section{Methodology}\label{sec:methodology}
As shown in Fig.\ref{fig:Overview of the framework}, our learning framework mainly consists of four parts: \textit{Trajectory
Collecting}, \textit{Variable Impedance Skill Generation}, \textit{Skill Reproduction and Generalization}, and \textit{Real-world Robot Control}.
\textit{Trajectory Collecting}: a human demonstrator demonstrates to the robot how to accomplish one specific task
several times. Then, the demonstrated trajectories are collected and aligned into the same time scale.
\textit{Variable Impedance Skill Generation}: we transform the aligned quaternions $\{\mathbf{\hat{q}}\}$ into tangent
vectors $\{\mathbf{\hat u}\}$ through the Quaternion Logarithmic Mapping Function. Then, we use GMM-GMR to encode both
$\{ \mathbf{{\hat u}}\}$ and $\{ \mathbf{{\hat p}} \}$ to obtain the trajectory distribution. The mean of
$\{ \mathbf{{\hat u}} \}$ is then transformed back to quaternions with the Quaternion Exponential Mapping Function. Meanwhile,
the variances of demonstrations are mapped to the reference stiffness profiles $\{ {\mathbf{r}_k}(t) \}$ with the Stiffness
Indicator Function.
\textit{Skill Reproduction and Generalization}: the extended DMP framework mainly consists of two parts: DMP movement
regulation block and DMP stiffness scheduling block. DMP movement regulation block generalizes the generated reference
pose trajectory $\{ {\mathbf{r}_p}(t), {\mathbf{r}_q}(t) \}$ to new scenarios and DMP stiffness scheduling block regulates
the reference stiffness profiles $\{ {\mathbf{r}_k}(t) \}$ to adapt to environment changes. Then, the torque commands
are calculated based on VIC equation with $\{{\mathbf{{r}_p}(t),\mathbf{{r}_q}(t)} \}$ and $\{ {\mathbf{{r}_k}(t)}\}$.
\textit{Real-world Robot Control}: the calculated torque commands are then sent to the real-world robot through the Robot Operating System (ROS).
\subsection{Pre-processing}\label{subsec:preprocessing}
First, $N$ trajectories consisting of positions and orientations of the end-effector are collected through kinesthetic
teaching. Each demonstration ${\mathbf{O}_i}(t){\rm{=}}\left\{{\mathbf{p}_i}({t_j}), {\mathbf{q}_i}({t_j}) \right\}$ ,
$ i=1,2,...,N$ is a ${M_i} \times7$ matrix, where ${M_i}$ indicates the total number of datapoints of the ${i^{th}}$
demonstrated trajectory, ${\mathbf{p}_i}({t_j}) = \left\{ {{p_{i,x}}({t_j}),{p_{i,y}}({t_j}),{p_{i,z}}({t_j})} \right\}$
and ${\mathbf{q}_i}({t_j}) = \left\{ {{q_{i,w}}({t_j}),{q_{i,x}}({t_j}),{q_{i,y}}({t_j}),{q_{i,z}}({t_j})} \right\}$
represent the position and unit quaternion of the ${i^{th}}$ trajectory at timestep ${t_j}$, respectively. Next, we
align the collected trajectories to the same time scale $\left[ {0,T} \right]$, for a given $T > 0$. This time
alignment is done as follows: let ${t_0}$ and ${t_1}$ be the initial and final times of a given trajectory
${\mathbf{O}_i}(t)$. The aligned trajectory is then represented as:
\begin{equation}
{\mathbf{\hat O}_i}(t){\rm{ = }}{\mathbf{O}_i}(\frac{{T(t - {t_0})}}{{{t_1} - {t_0}}}),i = 1,2,...,N
\end{equation}
with ${\mathbf{{\hat O}}_i}(t){\rm{ = }}\left\{ {{{\mathbf{\hat p}}_i({t_j})},{{\mathbf{\hat q}}_i({t_j})}} \right\}$ .
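As a concrete illustration, this time-alignment step can be sketched in a few lines of Python (a minimal sketch with hypothetical array inputs; strictly, the quaternion channels would call for spherical interpolation rather than per-component interpolation, which we gloss over here):
\begin{verbatim}
import numpy as np

def align_trajectory(times, poses, T=11.0, num_samples=500):
    """Rescale one demonstration to the common window [0, T] and
    resample it on a uniform grid (linear interpolation per channel)."""
    t0, t1 = times[0], times[-1]
    scaled = T * (times - t0) / (t1 - t0)      # map [t0, t1] onto [0, T]
    grid = np.linspace(0.0, T, num_samples)    # common uniform time grid
    aligned = np.column_stack(
        [np.interp(grid, scaled, poses[:, k]) for k in range(poses.shape[1])]
    )
    return grid, aligned
\end{verbatim}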
\subsection{Variable Impedance Skill Generation}\label{subsec:Skill Generation}
\subsubsection{Quaternion Logarithmic and Exponential Mapping Functions}\label{subsubsec:quaternion maps}
Unlike the positional part, there is no minimal and singularity-free representation for the orientational part. The stiffness estimation
method in \cite{calinon2010learning} is effective for estimating translational stiffness profiles from position trajectories. However, it is not
feasible to encode orientation trajectories and estimate rotational stiffness information with it directly. Inspired by
Quaternion-based DMP framework in \cite{ude2014orientation}\cite{pastor2011online}, where Quaternion Logarithmic Mapping Function is applied to calculate the 3D
decoupled distance vector between two quaternions, in this paper, we also utilize this mapping function to transform
quaternions into decoupled 3D vectors. The generated 3D vectors can be considered as the distance vectors between the transformed
quaternion and the identity quaternion $\mathbf{1}{\rm{ = }}(1,0,0,0)$. Besides, the Quaternion Exponential Mapping Function
is also presented here to transform the distance vectors back into quaternions for orientation trajectory encoding and
generalization with the extended DMP framework later.
Given a unit quaternion, the Quaternion Logarithmic Mapping Function $({S^3}\to{R^3})$ is written as:
\begin{equation}
\mathbf{\hat u} = \log (\mathbf{{\hat q}}) = \left\{ \begin{array}{ll}
\arccos ({{\hat q}_w})\dfrac{({{\hat q}_x},{{\hat q}_y},{{\hat q}_z})}{||({{\hat q}_x},{{\hat q}_y},{{\hat q}_z})||}, & ({{\hat q}_x},{{\hat q}_y},{{\hat q}_z}) \ne \mathbf{0} \\
(0,0,0), & \text{otherwise}
\end{array} \right.
\label{con:quat_log}
\end{equation}
Correspondingly, the Quaternion Exponential Map Function $({R^3} \to {S^3})$ is defined by:
\begin{equation}
\mathbf{\hat q} = \exp (\mathbf{\hat u}) = \left\{ \begin{array}{ll}
\left(\cos ||\mathbf{\hat u}||,\dfrac{\sin ||\mathbf{\hat u}||}{||\mathbf{\hat u}||} \, \mathbf{\hat u}\right), & \mathbf{\hat u} \ne (0,0,0)\\
\mathbf{1}{\rm{ = }}(1,0,0,0), & \text{otherwise}
\end{array} \right.
\label{con:quat_exp}
\end{equation}
where $\mathbf{\hat u }= ({\hat u_x},{\hat u_y},{\hat u_z}) \in {T_1}{S^3} \equiv {R^3}$ represents a tangent vector in
the tangent space ${T_1}{S^3}$.
In Eq. \eqref{con:quat_exp}, the
exponential map transforms a tangent vector $\mathbf{\hat u}$ into a unit quaternion $\mathbf{\hat q}$, a point
in ${S^3}$ at distance $||\mathbf{\hat u}||$ from $\mathbf{1}$ along the geodesic curve beginning from $\mathbf{1}$ in
the direction of $\mathbf{\hat u}$. Additionally, when we limit $||\mathbf{\hat u}|| < \pi$
and $\mathbf{\hat q }\ne (- 1,0,0,0)$, these two mappings are continuously differentiable and inverse to each other.
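For reference, a minimal numerical sketch of the two mappings in Eqs.\eqref{con:quat_log} and \eqref{con:quat_exp} (the helper names are ours, not from any existing library):
\begin{verbatim}
import numpy as np

def quat_log(q, eps=1e-12):
    """Logarithmic map S^3 -> R^3; q = (w, x, y, z) is a unit quaternion."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    nv = np.linalg.norm(v)
    if nv < eps:
        return np.zeros(3)
    return np.arccos(np.clip(w, -1.0, 1.0)) * v / nv

def quat_exp(u, eps=1e-12):
    """Exponential map R^3 -> S^3, inverse of quat_log for ||u|| < pi."""
    u = np.asarray(u, dtype=float)
    nu = np.linalg.norm(u)
    if nu < eps:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(nu)], np.sin(nu) * u / nu))
\end{verbatim}
For any unit quaternion $\mathbf{\hat q}\ne(-1,0,0,0)$, \texttt{quat\_exp(quat\_log(q))} recovers \texttt{q} up to floating-point error, which is an easy sanity check for an implementation.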
\subsubsection{Stiffness Indicator Function}
\label{subsubsec:Stiffness Indicator Function}
The stiffness indicator function maps the variances of the demonstrated trajectories to stiffness profiles. The basic idea is that in regions where the demonstrated trajectories have low variance, the robot should keep a high stiffness level to track the reference trajectory precisely, while for the high-variance parts, the robot can keep a relatively low stiffness level to ensure manipulation compliance.
To generate relatively lower stiffness profiles, we apply the left part of a quadratic function with a positive quadratic coefficient as the stiffness indicator function:
\begin{equation}
{k_l}(t) = {a_l}{({d_l}(t) - {d_{l}^{max}})^2} + {k_{l}^{min}}
\label{con:indicator}
\end{equation}
\[{a_l} = \frac{{k_l^{\max } - k_l^{\min }}}{{{{(d_l^{\min } - d_l^{\max })}^2}}} > 0\]
where $k_l^{\min },k_l^{\max }$ are the minimal and maximal translational or rotational stiffness values in direction
$l \in \left\{ {x,y,z} \right\}$, given based on the robot hardware limitations and real-world task constraints, and $d_l^{\min },d_l^{\max }$ denote the minimum and maximum of the standard deviation of the demonstrated trajectories in direction $l$.
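A sketch of Eq.\eqref{con:indicator} as code (hypothetical helper; the inputs are the standard-deviation profile $d_l(t)$ of the demonstrations in one direction and the stiffness limits, and we assume $d_l^{\min}<d_l^{\max}$):
\begin{verbatim}
import numpy as np

def stiffness_profile(d, k_min, k_max):
    """Map a standard-deviation profile d(t) to a stiffness profile:
    low variance -> high stiffness, high variance -> low stiffness."""
    d_min, d_max = d.min(), d.max()
    a = (k_max - k_min) / (d_min - d_max) ** 2   # positive coefficient a_l
    return a * (d - d_max) ** 2 + k_min
\end{verbatim}
At $d_l(t)=d_l^{\max}$ this returns $k_l^{\min}$ and at $d_l(t)=d_l^{\min}$ it returns $k_l^{\max}$, matching the intuition above.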
\subsection{Skill Reproduction and Generalization}\label{subsec:skill_generalization}
\subsubsection{Extended DMP Model}\label{subsubsec:dmp}
The DMP model considers a trajectory as a second-order damped spring system with a non-linear force term $f(\cdot)$,
as in Eq.\eqref{con:dmp_pos}. Given a demonstrated trajectory, by solving the regression problem for the non-linear force term, DMP
can imitate this trajectory and generalize it to new similar scenarios by adjusting the goals. However, when transferring human skills to robots, the classical DMP model only encodes pose trajectories, which may lose part of the compliance of the demonstrated skills. To learn and generalize compliant variable impedance manipulation skills, we extend the original DMP model by integrating the stiffness scheduling equation
Eq.\eqref{con:dmp_stiffness}. Meanwhile, the Quaternion-based DMP, Eq.\eqref{con:dmp_quat}, is incorporated into our extended model to
encode the reference orientation trajectory.
\begin{equation}
\tau \mathbf{\dot{y}} = \alpha_{p}(\beta_{p}(\mathbf{p}_{g}-\mathbf{p})-
\mathbf{y})+\mathbf{f}_{p}(x)
\label{con:dmp_pos}
\end{equation}
\begin{equation}
\tau\mathbf{\dot{z}} = \alpha_{k}(\beta_{k}(\mathbf{k}_{g}-\mathbf{k})-\mathbf{z})+\mathbf{f}_{k}(x)
\label{con:dmp_stiffness}
\end{equation}
\begin{equation}
\tau \boldsymbol{\dot{\eta }}={{\alpha }_{q}}({{\beta }_{q}}2\log ({\mathbf{{q}_{g}}}*\mathbf{\bar{q}})-
\boldsymbol{\eta} )+{\mathbf{{f}}_{q}(x)}
\label{con:dmp_quat}
\end{equation}
\begin{equation}
\tau \mathbf{\dot{p}}=\mathbf{y}
\end{equation}
\begin{equation}
\tau \mathbf{\dot{k}}= \mathbf{z}
\end{equation}
\begin{equation}
\tau \mathbf{\dot{q}}=\frac{1}{2}\boldsymbol{\eta} *\mathbf{q}
\label{con:quat_deriv}
\end{equation}
where $\mathbf{p},\mathbf{p}_g \in {R^3}$ indicate the current position of the robot's end-effector in Cartesian space and the final goal position; $\mathbf{k},\mathbf{k}_g \in {R^6}$
represent the main diagonal elements of the stiffness matrix and their target values, respectively;
$\mathbf{q},\mathbf{q}_g \in {S^3}$ are the robot's current orientation and the final goal orientation; $ {\alpha _p},{\alpha _k},
{\alpha _q},{\beta _p},{\beta _k},{\beta _q}$ are constant parameters; $\tau $ indicates the time scaling factor
that is used to adjust the duration of the task; $\mathbf{y},\mathbf{z}$ represent the scaled position
velocity and the scaled derivative of the stiffness, respectively; $\boldsymbol{\dot q}$ is the quaternion derivative that satisfies Eq.\eqref{con:quat_deriv}, where $\boldsymbol{\eta}$ is the (scaled) angular velocity; besides,
$\mathbf{\bar q}$ denotes the quaternion conjugation, with the definition: $\mathbf{\bar q} = ({q_w}, - {q_x},
- {q_y}, - {q_z}) $. Finally, the symbol $ * $ indicates the quaternion product.
The whole extended DMP model is synchronized by the canonical system:
\begin{equation}
\tau \dot x = - {\alpha _x}x
\end{equation}
where $x$ is the phase variable used to avoid explicit time dependency; ${\alpha _x}$ is a positive constant and $x(0)$ is set to 1.
The non-linear forcing terms ${\mathbf{f}_p}(x),{\mathbf{f}_q}(x),{\mathbf{f}_k}(x)$ are functions of $x$ and can be
regressed with Locally Weighted Regression (LWR) algorithm:
\begin{equation}
\mathbf{f}(x) = \frac{{\sum\nolimits_{s = 1}^S {{\boldsymbol{\theta} _s}{\psi _s}({x_j})} }}{{\sum\nolimits_{s = 1}^S
{{\psi _s}({x_j})} }}x
\end{equation}
where $\mathbf{f}(x)$ stands for ${\mathbf{f}_p}(x),{\mathbf{f}_q}(x),{\mathbf{f}_k}(x)$ in general, and $S$ is the number
of radial basis functions used.
Given the demonstrated trajectories, the $S$-column parameter matrix $\boldsymbol{\theta}$ can be obtained by solving the following equations:
\begin{equation}
\mathbf{f}_{p}({x_j})= \mathbf{G}_{p}^{ - 1}\left(\tau^2\ddot{\mathbf{p}}_j+\tau\alpha_{p}\dot{\mathbf{p}}_j-\alpha_{p}\beta_{p}({\mathbf{p}_{g}}-{\mathbf{p}_j})\right)
\end{equation}
\begin{equation}
\mathbf{f}_{k}({x_j})= \mathbf{G}_{k}^{ - 1}\left(\tau^2\ddot{\mathbf{k}}_j+\tau\alpha_{k}\dot{\mathbf{k}}_j-\alpha_{k}\beta_{k}({\mathbf{k}_{g}}-{\mathbf{k}_j})\right)
\end{equation}
\begin{equation}
{\mathbf{f}_q}({x_j}) = \mathbf{G}_q^{ - 1}\left(\tau \dot{\boldsymbol{\eta}}_j - {\alpha _q}({\beta _q}2\log
({\mathbf{q}_g} * \bar{\mathbf{q}}_j) - {\boldsymbol{\eta}_j})\right)
\end{equation}
\begin{equation}
{\psi _s}(x) = \exp ( - {h_s}{(x - {c_s})^2})
\end{equation}
where ${\mathbf{G}_{p}}=\mathrm{diag}({\mathbf{p}_g}-{\mathbf{p}_{0}})\in {R^{3 \times 3}}$, ${\mathbf{G}_{k}} = \mathrm{diag}({\mathbf{k}_g} -{\mathbf{k}_{0}})\in {R^{6 \times 6}}$ and ${\mathbf{G}_q}=\mathrm{diag}(2\log(\mathbf{q}_g * \bar{\mathbf{q}}_0))\in {R^{3\times 3}}$ are spatial scaling factors, and ${h_s},{c_s}$ are the width
and center of the Gaussian basis function ${\psi _s}(x)$.
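To make the scheduling concrete, the following sketch integrates one positional dimension of Eq.\eqref{con:dmp_pos} with simple Euler steps once $\boldsymbol{\theta}$ has been learned (the numerical parameter values are illustrative assumptions, not the ones used in our experiments; the stiffness dimensions of Eq.\eqref{con:dmp_stiffness} are rolled out identically with $(\mathbf{k}_0,\mathbf{k}_g)$ in place of $(\mathbf{p}_0,\mathbf{p}_g)$):
\begin{verbatim}
import numpy as np

def rollout_dmp_1d(theta, centers, widths, p0, g, tau=1.0, alpha=25.0,
                   beta=6.25, alpha_x=4.0, dt=0.002, steps=5500):
    """Euler integration of one positional DMP dimension."""
    p, y, x = p0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        psi = np.exp(-widths * (x - centers) ** 2)        # Gaussian bases
        f = (psi @ theta) / (psi.sum() + 1e-10) * x * (g - p0)  # forcing term
        y_dot = (alpha * (beta * (g - p) - y) + f) / tau
        p_dot = y / tau
        x_dot = -alpha_x * x / tau                        # canonical system
        y, p, x = y + y_dot * dt, p + p_dot * dt, x + x_dot * dt
        traj.append(p)
    return np.array(traj)
\end{verbatim}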
\subsubsection{Variable Impedance Control}\label{subsubsec:vic}
With the scheduled pose trajectory $\left\{ {\boldsymbol{p},\boldsymbol{q}} \right\}$
and stiffness profiles $\boldsymbol{k}$, we can calculate the command torques based on the variable impedance control
equation:
\begin{equation}
\mathbf{\Gamma} = {\mathbf{J}^T}(\mathbf{K}\boldsymbol{e} + \mathbf{D}\boldsymbol{\dot{e}} ) + {\mathbf{\Gamma_{ff}}}
\end{equation}
where the stiffness matrix $\mathbf{K} = \mathrm{diag}(\boldsymbol{k}) \in {R^{6 \times 6}}$ and the damping matrix $\mathbf{D}=\sqrt{2\mathbf{K}}$; $\mathbf{J}$ is the Jacobian matrix, ${\mathbf{\Gamma}}$ represents the joint torques and $\mathbf{\Gamma}_{ff}$ is the feed-forward joint torque;
${\mathbf{e}},{\mathbf{\dot{e}}}$ denote the pose tracking error with respect to the reference trajectory and its time derivative.
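In code, one control cycle of the VIC law above reduces to a few lines (a sketch; the Jacobian \texttt{J} and the feed-forward torque \texttt{tau\_ff} are assumed to be provided by the robot model):
\begin{verbatim}
import numpy as np

def vic_torque(J, k_diag, e, e_dot, tau_ff):
    """One cycle of the variable impedance control law."""
    K = np.diag(k_diag)        # 6x6 stiffness from the DMP scheduler
    D = np.sqrt(2.0 * K)       # elementwise sqrt; valid since K is diagonal
    return J.T @ (K @ e + D @ e_dot) + tau_ff
\end{verbatim}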
\section{Experimental Evaluation}
In this section, the 7-DoF Franka Emika robot (Panda) is used in our experimental study. For all the experiments, Panda
is controlled under the libfranka scheme with a 1 kHz control frequency. A toy task, serving drinks, is used to show
that our framework 1) learns a reasonable variable impedance manipulation skill from human demonstrations; and 2) enables the
robot to generalize the reference stiffness profiles to adapt to changes of the robot end-effector.
As discussed in \cite{pastor2011online}, pouring the content of a bottle into a cup can be done with a kinematic control model. However, in human-populated environments, a person may incautiously push the robot while it is reaching for the cup. A stiff controller will respond with high forces, which may hurt the user and cause the liquid to spill. It would thus be desirable to control the way the robot responds to translational and rotational perturbations.
In the real world, humans tend to gradually increase the translational and rotational stiffness while reaching for the cup and to keep a high stiffness while pouring drinks. Besides, humans can also easily adapt to the shape and weight changes of the grasped bottle. In this experiment, we show that our framework learns reasonable variable impedance manipulation skills and also enables Panda to adapt to the weight and shape changes of the bottle like humans, by endowing it with variable impedance manipulation skill generalization ability.
The experiment setup is shown in Fig.\ref{fig:exp_setup}. Panda is expected to 1) pour water from a 0.25 kg plastic bottle into the middle cup on the table by reproducing the demonstrated pose trajectory and stiffness profiles; 2) pour water into the other two cups by generalizing the reference pose trajectory; and 3) pour wine from a 0.9 kg glass bottle into the cups by generalizing the reference trajectory and stiffness profiles simultaneously.
\begin{figure}[htb!]
\centering
\includegraphics[trim=0.2cm 0.2cm 0.2cm 0.3cm,clip,width=1\linewidth]{setup.pdf}
\caption{Experiment setup}
\label{fig:exp_setup}
\vspace{-0.5cm}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[trim=0.2cm 0.2cm 0.2cm 0.3cm,clip,width=1\linewidth]{demonstration.pdf}
\caption{Kinesthetic teaching. The robot user demonstrates to Panda how to pour water into the middle cup.}
\label{fig:teaching}
\vspace{-0.3cm}
\end{figure}
\subsection{Learning Variable Impedance Manipulation Skill}\label{subsec:skill learning}
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0.3cm 0.3cm 0.5cm 0.5cm,clip,width=1\linewidth]{gmm.pdf}
\caption{GMM-GMR encoding of the positional and orientational datapoints. The demonstrated trajectories, the estimated mean functions, and the trained Gaussian kernels are marked with blue lines, black lines, and colorful ellipses, respectively.}
\label{fig:trajectory processing}
\vspace{-0.3cm}
\end{figure}
We first showed Panda how to pour water into the middle cup on the table 8 times in slightly different situations, with
kinesthetic teaching (Fig.\ref{fig:teaching}). Then, the collected pose trajectories were aligned to the same time scale $T = 11$ seconds.
Next, we transformed the unit quaternions into tangent vectors with the Quaternion Logarithmic Mapping Function, Eq.\eqref{con:quat_log}, and encoded both positional and orientational datapoints with GMM-GMR, with $H=6$ Gaussian components, as shown in Fig.\ref{fig:trajectory processing}. Finally, the mean tangent vectors in Fig.\ref{fig:trajectory processing} (d-f) were converted back into unit quaternions through
the Quaternion Exponential Mapping Function, Eq.\eqref{con:quat_exp}.
The reference stiffness profiles are estimated with our Stiffness Indicator Function, Eq.\eqref{con:indicator}, based on the variances of the demonstrated trajectories. For the pouring water experiment, the minimal and maximal translational stiffness values allowed are set as $k^{\min }= 200\,N/m$ and $k^{\max } = 550\,N/m$, and the corresponding values for the rotational stiffness are $k^{\min } = 10\,N\!\cdot\!m/rad$ and $k^{\max}= 20\,N\!\cdot\!m/rad$. The estimated stiffness profiles are presented together with the standard deviations of the collected trajectories in Fig.\ref{fig:stiffness mapping results}.
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0.3cm 1.0cm 0.3cm 0.3cm,clip,width=0.9\linewidth]{stiffness.pdf}
\caption{a-b) the standard deviations of the collected motion trajectories; c-d) the estimated stiffness profiles.}
\label{fig:stiffness mapping results}
\vspace{-0.3cm}
\end{figure}
Overall, the estimated translational stiffness profiles increase while the robot reaches for the cup and remain high while pouring water. Before the experiment, we expected the rotational stiffness profiles to show the same tendency as the translational part. Nevertheless, they actually start from high values, decrease gradually at around 3 s, and keep increasing to high values from around 7 s until the end. This tendency is actually more reasonable than what we expected: at the beginning phase of pouring water, we unconsciously maintain a high rotational stiffness to keep the hand from rotating and to prevent the water from spilling out. Therefore, our framework can indeed generate reasonable stiffness profiles from human demonstrations.
To illustrate that we successfully transferred the stiffness features to Panda in the real world, we recorded the mean tracking errors of pouring water into the middle cup with 1) the estimated stiffness profiles, 2) the minimal stiffness value allowed, and 3) the maximal stiffness value allowed. The result is shown in Fig.\ref{fig:real-world experiment 2}.
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0.5cm 1.5cm 0.5cm 0.3cm,clip,width=1\linewidth]{realworldstiffness.pdf}
\caption{Comparison of mean tracking pose errors in different stiffness modes}
\label{fig:real-world experiment 2}
\vspace{-0.3cm}
\end{figure}
We first set the rotational stiffness at $20\,N\!\cdot\!m/rad$, then used $200\,N/m$, $550\,N/m$ and the reference translational stiffness profiles to accomplish the pouring water experiment three times for each translational stiffness mode. The mean positional
errors are shown in the upper 3 graphs in Fig.\ref{fig:real-world experiment 2}. It is noticeable that our variable
impedance controller behaves like a 200 N/m constant stiffness controller from 0 s to around 4 s, as the yellow lines are
close to the blue lines during this period. After 4 s, the yellow lines almost coincide with the red lines, which represent the tracking errors of the 550 N/m constant stiffness controller. This exactly reflects the tendency of the translational stiffness in Fig.\ref{fig:stiffness mapping results}$c)$. As for the rotational part, we set the translational stiffness at 550 N/m, and tested the mean tracking errors of $10\,N\!\cdot\!m/rad$, $20\,N\!\cdot\!m/rad$, and the reference orientational stiffness profiles. The results are shown in the lower 3 graphs in Fig.\ref{fig:real-world experiment 2} and reflect the overall tendency of the orientational stiffness in Fig.\ref{fig:stiffness mapping results}$d)$. Therefore, with our extended-DMP model, Panda learns a reasonable variable impedance pouring skill.
\subsection{Variable Impedance Skill Generalization}\label{subsec:skill generalization}
In this section, we generalize the learned variable impedance manipulation skill to new scenarios and show that our extended-DMP model further improves Panda's adaptability to the shape and weight changes of the grasped bottle. First, we used the reference stiffness profiles and generalized the pose trajectory to show that our framework inherits the motion generalization ability of the DMP model. As shown in Fig.\ref{fig:real-world experiment 1}, with the generalized pose trajectory, Panda can successfully pour water into the three cups on the table with a similar trajectory shape.
\label{subsec:pouring water}
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0.3cm 1.0cm 0.1cm 0.3cm,clip,width=0.99\linewidth]{realworld.pdf}
\caption{Snapshots of the real-world pouring water experiment}
\label{fig:real-world experiment 1}
\vspace{0.3cm}
\end{figure}
Then, we replaced the light plastic bottle with a heavier glass wine bottle. As the wine bottle is longer than the plastic one, this requires different goal poses for pouring wine into the same cups. The generalized pose trajectories for pouring wine are shown in Fig.\ref{fig:pourwine}.
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0.5cm 1.0cm 0.5cm 0.5cm,clip,width=1\linewidth]{pourwine.pdf}
\caption{Generalized pose trajectories for pouring wine task}
\label{fig:pourwine}
\vspace{-0.3cm}
\end{figure}
When we executed the new pose trajectories with the reference stiffness profiles for pouring water, Panda failed to pour wine into the first and the third cup, and managed to pour wine into the second cup only once, as shown in Fig.\ref{fig:real-world_experiment2} a). The main reasons for this failure are that: 1) the glass bottle is heavier than the plastic one, so the reference translational stiffness profiles should be increased to compensate for the mass change of the robot end-effector, particularly the stiffness along the Z axis; 2) a longer bottle enlarges the positional distance errors between the cup and the bottleneck when there are orientational errors. Besides, a heavier bottle also introduces larger external torques at the end-effector. Therefore, the rotational stiffness values should be increased accordingly to compensate for the external torques and to reduce the orientational tracking errors. Meanwhile, whether pouring water or wine, the overall shape of the stiffness profiles should be kept, as the task constraints did not change for the pouring tasks.
Therefore, we generalized the stiffness profiles to new goal values while keeping their overall tendency with Eq.\eqref{con:dmp_stiffness}. We re-ran the pouring wine tests three times. Panda successfully adapted to the changes and managed to pour wine into the cups, just as shown in Fig.\ref{fig:real-world experiment 1}.
\begin{figure}[htb!]
\centering
\includegraphics[trim=0.3cm 0.5cm 0.5cm 0.5cm,clip,width=1\linewidth]{realworldpourwine_generalizedstiffness.pdf}
\caption{a) Performance of the reference pouring water stiffness profiles in pouring wine task. Panda did not reach the range for pouring wine and even crushed the cup. b) Generalized stiffness profiles that accomplished the pouring wine task}
\label{fig:real-world_experiment2}
\vspace{-0.3cm}
\end{figure}
\section{Discussion}
\label{sec: Discussion }
It should be emphasized that our stiffness estimation method estimates both translational and rotational stiffness profiles, while most previously presented methods cannot achieve this goal. Another advantage of our method is its efficiency and effectiveness in estimating stiffness profiles: in this paper, with only 8 collected trajectories, we could generate reasonable stiffness profiles to reproduce the demonstrated skill and to further generalize it to new scenarios.
Besides, in this article we mainly focus on learning and generalizing stiffness profiles from human demonstrations. However, the estimated stiffness profiles may not be optimal for the target task. Improving the stiffness profiles with optimization methods, like PI$^2$ \cite{buchli2011learning}, can further improve the performance of the learned variable impedance manipulation skill on the target task.
\section{Conclusion}
\label{sec: Conclusion}
In this work, we proposed an efficient DMP-based imitation learning framework for learning and generalizing variable impedance manipulation skills from human demonstrations in Cartesian space. This framework not only estimates both translational and rotational stiffness profiles from demonstrated trajectories, but also improves robots' adaptability to environment changes (i.e., the weight and shape changes of the robot's end-effector) by generalizing the generated stiffness profiles. The experimental study validates the effectiveness of our proposed framework. Besides, we believe it can be used on robots with different configurations, as our framework learns and generalizes skills in Cartesian space.
For future work, we will test our proposed approach in more complex human-robot interaction tasks. It is also an interesting direction to further optimize the generated reference trajectory and stiffness profiles through reinforcement learning algorithms.
|
1,941,325,220,019 | arxiv | \section{Introduction}
Optical voltage sensors (OVS) based on the Pockels effect have the advantages of small volume, good insulation performance, no ferromagnetic saturation, large dynamic measurement range and high bandwidth, and are the development direction of primary measurement equipment for the smart grid [1,2]. However, their long-term stability is poor and they have not been widely used in power systems. It is generally believed that temperature, stress, linear birefringence, vibration and aging are the main factors affecting the long-term operation stability of OVS, and these factors are coupled with each other [3-8].
In order to solve the above problems, research groups all over the world have put forward a variety of optimization methods, which can be discussed in the following categories:
\paragraph{Stress birefringence suppression}
Dual optical path [3] and dual crystal methods [4] are often utilized to suppress the linear birefringence. Since the linear birefringence is random and fluctuating, these methods cannot fully eliminate the interference.
\paragraph{Temperature drift compensation}
Bohnert et al. [5] proposed utilizing the temperature drift of the dielectric coefficient to compensate that of the electro-optic coefficient by using a quartz voltage divider. Also, software temperature compensation is often utilized, with the help of measuring the crystal's temperature in real time [6].
\paragraph{Phase modulation with closed-loop control}
For all-fiber optical current sensors (AFOCS) [7,8],
a square-wave phase delay modulation shifts the static operating point to the linear region. However, closed-loop control is needed because random drift always exists, and when the temperature drift changes dramatically, the PID parameters become unreliable and the system becomes unstable.
This paper proposes a method based on a rotating heterogeneous electrode which modulates the direction and magnitude of the electric field. With the aid of a DSP, a digital lock-in amplifier can be implemented to restore the original signal, which would otherwise be disturbed by temperature drift, vibration and linear birefringence. A simulation conducted in Simulink proves the feasibility of the rotated-electrode OVS. The paper is organized as follows: Section II describes the structure of the sensor and briefly discusses the function of its elements; Section III discusses the modulation waveform resulting from the rotating electrode; Section IV calculates the reference signal; Section V presents the principle of the lock-in amplifier in the case that neither the modulation signal nor the reference signal is a sinusoidal or square wave; Section VI shows the simulation results obtained with Simulink; Section VII gives the conclusion.
\section{constituents of sensor}
As shown in Fig. 1, the whole system consists of a rotating electrode, a high voltage electrode, a BGO crystal, polarized optical elements, a semiconductor light source, a photodiode, a motor, a digital signal processor (DSP) and so on. The first five components constitute a conventional optical voltage sensor based on the Pockels effect. Since the quarter wave plate (QWP) shifts the working zero point to the linear region, the change of output light intensity is proportional to the applied voltage and the electric field inside the crystal. The electric field modulation and lock-in amplifier system are composed of the rotating ground electrode, the motor, an LED, a PD and the DSP. Due to the rotation of the ground electrode, the direction and magnitude of the field inside the BGO crystal are altered periodically, leading to the modulation of the output light power. By utilizing digital lock-in amplifier technology on the DSP,
the effective signal is displaced from the noisy low-frequency band (about 50 Hz) to the middle-low frequency band (about 2 kHz), avoiding the disturbance from the coupling of linear birefringence, temperature and vibration.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.35\textwidth]{setup2.png}}
\caption{The schematic of experimental set-up.}
\label{fig}
\end{figure}
The rotation speed of hollow-cup DC motors and three-phase brushless motors can reach 30,000-150,000 revolutions per minute (RPM), that is, a rotation frequency of 500-2500 Hz. If the rotating electrode adopts a multi-blade structure, the modulation frequency can increase up to 10 kHz, whose bandwidth is sufficient to detect the switching noise arising from connected distributed power devices.
The light switch composed of an LED and a PD monitors the speed of the motor. It guarantees a constant motor speed and thus a constant modulation frequency. On the other hand, it provides a reference signal for the demodulation of the lock-in amplifier.
The key points to be investigated in the system are:
\textit{1. The waveform of the modulated light intensity.} Due to the shape of the electrode's blade, the waveform deviates from a perfect sinusoidal signal and contains high harmonic components.
\textit{2. The waveform of the reference signal.} Because of the divergence of the LED light, the light spot on the electrode's blade usually has a certain area, which may be larger than the blade itself. The final received reference signal is neither an ideal sinusoidal signal nor a square wave signal, and needs to be studied in detail.
\textit{3. Analysis of the lock-in amplifier's principle.} Since neither the modulation signal nor the reference signal is an ideal sinusoidal signal, a corresponding analysis is required.
These three aspects are discussed in the following sections.
\section{performance of rotated electrode}
The structure of the OVS sensor head with the rotating electrode is shown in Fig. 2. The optical path for the measurement of the electric field is perpendicular to the one for the reference signal (rotation frequency measurement). The strip region in the BGO crystal is where the light path is located, and the finite element grid there is denser than in adjacent areas for investigating the precise change of the electric field. The rotating electrode is a single 30-degree fan-shaped blade; the rotation axis is shown as a black arrow in Fig. 2.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.25\textwidth]{sim_model.png}}
\caption{The structure of the rotated-electrode sensor head}
\label{fig}
\end{figure}
As shown in Fig. 3, when the blade directly covers the light-path region, the structure is equivalent to the conventional OVS structure. In this position, the potential gradient in the crystal changes the most, and the electric field and the optical signal alter the most. When the blade rotates to a position far away from the crystal, the electric potential changes in the crystal are small, the direction of the electric field is inclined towards 45 degrees, the electric field is the smallest and the optical signal is the lowest.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.28\textwidth]{voltage.png}}
\caption{Simulated voltage potential of the BGO crystal for different positions of the rotated electrode.}
\label{fig}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.3\textwidth]{IvsAng.png}}
\caption{The waveform of the modulated signal.}
\label{fig}
\end{figure}
The electro-optic coupled wave theory was utilized to simulate the output optical signal at different electrode positions. The relationship is depicted in Fig. 4, which shows two complete cycles, namely 720 degrees. A Fourier series fit was carried out, and the fitted function is shown in equation (1).
\begin{equation}
f(\alpha) = c + \sum_{i=1}^7 A_i \cos{(i \alpha + \Phi)}
\end{equation}
The fitting coefficients are shown in Table I. It can be seen that the modulated light intensity is not a perfect sinusoidal signal, and the amplitude of the second harmonic is of the same magnitude as that of the fundamental.
\begin{table}[htbp]
\renewcommand{\arraystretch}{1.3}
\caption{Fitting parameter of modulated wave}
\label{table_example}
\centering
\begin{tabular}{c c c c c}
\hline
$\bf{ A_1}$ & $\bf{ A_2}$ & $\bf{A_3}$ &$\bf{A_4}$ & $\bf{ A_5}$\\
\hline
0.366 & 0.118& 0.032 & 0.018& 5.4e-3\\
\hline\hline
$\bf{ A_6}$ & $\bf{A_7}$ & $\bf{\Phi}$ & $\bf{c}$& \\
\hline
-6.2e-3& -4.3e-3& -2.4e-5& 0.471&\\
\hline
\end{tabular}
\end{table}
\section{generation of reference signal}
A reference signal can be generated in a variety of ways; the principle is similar to that of the encoder of a three-phase brushless DC motor, and the options include Hall-effect measurement, back electromotive force measurement, photoelectric methods and so on. In this paper, the photoelectric method is applied: thanks to the optical switch consisting of an LED and a photodiode, the rotation speed can be measured easily and the reference signal is provided.
According to the datasheet of the optical switch, the relation between the emitted light intensity and the emission angle is shown in Fig. A1, which can be fitted by formula (2),
\begin{align}
I(\beta) =& A\cos{(k\beta)} + c
\end{align}
where $\beta$ is the emission angle of the LED and $A$, $k$ and $c$ are the fitting parameters.
The fitting results are A = 4.113, k = 0.0789*180/$\pi$, c = 4.227.
The shape is close to a cosine waveform.
Suppose the distance between the LED and the plane of the rotating electrode is $d$, the distance between the rotation center and the center of the LED spot is $R_0$, and the distance between the calculated point and the spot center is $r$, which is related to the emission angle $\beta$. We have
\begin{align}
\beta =& \arctan{(r/d)}
\end{align}
For a large spot, that is, when the spot area cannot be completely blocked by the blade, as shown in Fig. A2, the expression of the transmitted light intensity is given in equation (A1),
and the bow (circular-segment) function is
\begin{align}
I_{bow}^{\mp}(\alpha) =& \int_{\pm r_0 \cos{\alpha}}^{ r_0 } \int_{- \sqrt{r_0^2-x^2}}^{\sqrt{r_0^2-x^2}} I(\beta)\, dy\, dx\\
r =& \sqrt{x^2+y^2}\\
\alpha =& \arccos{(R_0 \sin{\theta} / r_0)}
\end{align}
In the actual situation, the receiving surface of the PD is very small, only a few square millimeters, which is equivalent to a small effective spot. In this case, the spot can be completely blocked by the sector area, and the expression is
\begin{equation}
{I_{out}} = \begin{cases}
1, &{\text{if}}\ -\pi<\theta<-\theta_{\text{max}} \\
I_{\text{bow}}^{+}(\theta),&{\text{if}}\ -\theta_{\text{max}}<\theta<0 \\
{I_{\text{bow}}^{-}(\theta),}&{\text{if}}\ 0<\theta<\theta_{\text{max}}\\
0, &{\text{if}}\ \theta_{\text{max}}<\theta<\theta_{\text{GND}}/2
\end{cases}
\end{equation}
where
\begin{align}
\theta_{\text{max}} =& \arcsin{(r_0/R_0)}
\end{align}
The calculated result is shown in Fig. 5, with the parameters $r_0$ = 0.5 mm, $d$ = 2 mm, the angle of the sector-shaped ground electrode $\theta_{\text{GND}}$ = 30 degrees, and $R_0$ = 6 mm.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.27\textwidth]{LED_square.png}}
\caption{Calculated reference signal generated from an optical switch.}
\label{fig}
\end{figure}
It can be seen from the above results that if a PD with a large receiving area is adopted together with a multi-blade ground electrode, the reference signal is likely to be close to an ideal sinusoidal signal. However, large-area PDs and four-quadrant detectors are more expensive. In this paper, the actual output is a trapezoidal wave whose duty cycle is too large to be used directly as the reference signal; instead, the digital square and sinusoidal signals used for demodulation are generated from the real-time value of the trapezoidal wave's period.
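A sketch of this period-tracking step (a hypothetical helper; we assume the trapezoidal optical-switch output and a time base are available as arrays) detects rising edges, estimates the period, and synthesizes the harmonic references:
\begin{verbatim}
import numpy as np

def make_references(trapezoid, t, threshold=0.5, harmonics=7):
    """Estimate the modulation period from the trapezoidal output and
    synthesize digital cos/sin references at f_m and its harmonics."""
    s = trapezoid > threshold
    edges = t[1:][s[1:] & ~s[:-1]]        # rising-edge timestamps
    T_m = np.mean(np.diff(edges))         # one period per blade pass
    k = np.arange(1, harmonics + 1)[:, None]
    phase = 2 * np.pi * (t - edges[0]) / T_m
    return np.cos(k * phase), np.sin(k * phase), T_m
\end{verbatim}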
\section{principle of Lock-in Amplifier}
The digital lock-in amplification technique has been described in many works [9-11]:
the product of the modulated signal and a reference sinusoidal signal is integrated over one modulation period to recover the original signal, and the amplitude ratio and phase delay between the modulated signal and the reference are calculated.
This technique generally assumes that both the modulation signal and the reference signal are sine waves. However, this is not the case in this paper; thus we mainly discuss the scenario in which the modulation and reference signals are neither perfect sinusoidal nor square waves and contain considerable higher harmonics.
Lock-in amplifier technology is able to measure any waveform, including signals of AC or DC bus systems. Suppose the voltage signal to be measured is $S(t)$ and the modulation signal is $M(t)$; then the modulated signal can be written as
\begin{align}
S_{m}(t) = M(t) S(t)
\end{align}
where $M$ is a periodic function whose waveform is shown in Fig. 4 and can be expanded in a Fourier series as follows,
\begin{align}
M(t) = m_{0}+ \sum_{j=1}^{l} m_x^j\cos (2 \pi jf_m t )
+ m_y^j\sin (2 \pi jf_m t )
\end{align}
Suppose the reference signal is not perfect square or sinusoidal, could also be expand to Fourier series,
\begin{equation}
R(t) = r_{0} + \sum_{k=1}^{l} r_x^k\cos (2 \pi kf_m t )
+ r_y^k\sin (2 \pi kf_m t )
\end{equation}
From another perspective, $R$ can be considered as the combination of its odd and even components,
\begin{align}
R_{odd} =& \sum_{k=1}^{l} r_y^k\sin (2 \pi k f_m t) \\
R_{even} =& \sum_{k=1}^{l} r_x^k\cos (2 \pi k f_m t)
\end{align}
In the DSP or MCU, the even component of $R$ is multiplied with the modulated signal $S_m(t)$ (here we adopt the convention $m_x^0=m_{0}$ and $m_y^0=0$),
\begin{equation}
\begin{aligned}
S_m(t)&R_{even}(t) \\
= & S(t)\sum_{j=0}^{l}\sum_{k=1}^{l}
m_x^jr_x^k\cos (2 \pi jf_m t)\cos (2 \pi kf_m t )\\
&+m_y^jr_x^k\sin(2 \pi jf_m t)\cos (2 \pi kf_m t )
\end{aligned}
\end{equation}
Integrating both sides of the above equation over one period $T_m = 1/f_m$, and noting that for $j,k\ge 1$
\begin{align}
\int_{0} ^{T_m}\cos(2\pi jf_m t)\cos(2\pi kf_m t)\, dt &=\frac{T_m}{2}\delta_{jk} \\
\int_{0} ^{T_m}\sin(2\pi jf_m t)\sin(2\pi kf_m t)\, dt &=\frac{T_m}{2}\delta_{jk} \\
\int_{0} ^{T_m}\sin(2\pi jf_m t)\cos(2\pi kf_m t)\, dt &=0
\end{align}
we get, assuming that $S(t)$ varies slowly over one modulation period,
\begin{equation}
\int_{0} ^{T_m} S_m(t)R_{even}(t)\, dt = \frac{T_m}{2}\,S(t)\sum_{j=1}^l m_x^j r_x^j
\end{equation}
and
\begin{equation}
S(t) = \frac{2}{T_m}\cdot\frac{\int_{0} ^{T_m} S_m(t)R_{even}(t)\, dt}{\sum_{j=1}^l m_x^j r_x^j }\,;
\end{equation}
similarly, we get
\begin{equation}
S(t) = \frac{2}{T_m}\cdot\frac{\int_{0} ^{T_m} S_m(t)R_{odd}(t)\, dt} {\sum_{j=1}^l m_y^j r_y^j }\,.
\end{equation}
In particular, for the modulated wave $S(t)M(t)$, the amplitude and phase delay of every harmonic can be written as
\begin{align}
\begin{aligned}
X_{out}^i =&S(t)m_x^i
\\=& \frac{2m_x^i}{T_m\sum_{j=1}^l m_x^j r_x^j}\int_{0} ^{T_m} S_m(t)R_{even}(t)\, dt
\end{aligned}
\end{align}
and
\begin{align}
\begin{aligned}
Y_{out}^i =&S(t)m_y^i
\\=& \frac{2m_y^i}{T_m\sum_{j=1}^l m_y^j r_y^j}\int_{0} ^{T_m} S_m(t)R_{odd}(t)\, dt
\end{aligned}
\end{align}
So the magnitude and phase of the \textit{i}th harmonic are
\begin{align}
M_{out}^i=&\sqrt{(X_{out}^i)^2+(Y_{out}^i)^2}\\
A_{out}^i=&\arctan{(X_{out}^i/Y_{out}^i)}
\end{align}
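The complete demodulation chain can be sketched as follows (a minimal sketch, assuming the Fourier coefficients of $M(t)$ have been measured beforehand, e.g. from Table I, that $S(t)$ is roughly constant over one period, and that the reference matrices come from the \texttt{make\_references} sketch above):
\begin{verbatim}
import numpy as np

def lockin_demodulate(s_m, cos_ref, sin_ref, m_x, m_y, r_x, r_y, dt, T_m):
    """Recover S(t) from the modulated signal using the non-sinusoidal
    even/odd references built from the harmonics in cos_ref/sin_ref."""
    R_even = r_x @ cos_ref                # sum_k r_x^k cos(2 pi k f_m t)
    R_odd = r_y @ sin_ref                 # sum_k r_y^k sin(2 pi k f_m t)
    win = int(round(T_m / dt))            # samples per modulation period
    kernel = np.ones(win) * dt            # running integral over one T_m
    I_even = np.convolve(s_m * R_even, kernel, mode="same")
    I_odd = np.convolve(s_m * R_odd, kernel, mode="same")
    S_even = I_even / (0.5 * T_m * np.dot(m_x, r_x))
    S_odd = I_odd / (0.5 * T_m * np.dot(m_y, r_y))
    return 0.5 * (S_even + S_odd)
\end{verbatim}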
\section{simulation of system}
The Simulink model consists of four parts in total, as shown in Fig. 6. The upper-left part is the reference signal ($R(t)$) module, which can be selected between a 2.5 kHz square wave and a 2.5 kHz sinusoidal wave. The center-left part is the modulation module. The signal wave $S(t)$ is set to be a 50 Hz sinusoidal signal with an amplitude of 1. The modulation waveform $M(t)$ is shown in Fig. 4, with a frequency of 2.5 kHz as well. The two signals are multiplied to produce the modulated signal $S(t)M(t)$. The lower-left module is the noise ($N(t)$) module, which can be selected between a random square wave (white noise module) and a 10 Hz sinusoidal wave with an amplitude of 10. The former represents random step signals in DC measurement due to polarization or discharge, and the latter simulates the random, fluctuating disturbance signal resulting from the linear birefringence in the optical fiber.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.4\textwidth]{simulink.png}}
\caption{Simulation model of the lock-in for the rotating-electrode sensor}
\label{fig}
\end{figure}
The final signal is the modulated signal $S(t)M(t)$ plus the noise $N(t)$. To recover the original signal $S(t)$, $S(t)M(t)$ is multiplied with $R(t)$ and then integrated over a time $T_m$. The definite integral module consists of two integrator modules with a time delay of $T_m$ between them, and their subtraction gives the needed output.
The simulation parameters are set as follows: the time step is 2e-6 s, and the delay between the two integrator modules is set to 200 time steps, namely one period of the reference signal. The running time is set to 0.03 seconds. The reference signal has a phase delay of $\pi/6$ compared to the modulation signal.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.35\textwidth]{simulink_n.png}}
\caption{Simulation results.}
\label{fig}
\end{figure}
The Simulink results are shown in Fig. 7. From top to bottom, the first signal represents the step noise caused by discharge and polarization, which is independent of the measured voltage. The second one is the modulated 50 Hz signal with an amplitude of 1. The third is the modulated signal after adding the interference, in which the characteristics of the original signal can hardly be observed. The fourth is the restored signal after integration; spikes are generated at the step points of the noise.
To further suppress the spikes of the restored signal, it is down-sampled at a frequency of 2.5 kHz, that is, the reference signal frequency, and only the points with modulation phase 0 are retained to obtain the perfect original signal, as shown in Fig. 8. The red curve shows the perfectly restored signal with a 180-degree delay, which has been artificially shifted up by 3 for ease of observation. The disadvantage of down-sampling is that the actual sampling rate is equal to the modulation frequency. According to the Nyquist theorem, the effective bandwidth is then only 1.25 kHz.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.35\textwidth]{recover2.png}}
\caption{Restored signal (blue) and its down-sampled signal (red); the latter is shifted up by 3.}
\label{fig}
\end{figure}
\section{conclusion}
In this paper, a rotated-electrode optical voltage sensor is designed to suppress low-frequency birefringence and to eliminate the interference of temperature and low-frequency vibration on the measurement results. A DC motor drives the ground electrode to rotate in its plane, so as to modulate the direction of the electric field and obtain a modulated optical signal related to the measured voltage. The signal is demodulated using lock-in technology, and the scheme is verified in Simulink. The restoration of the original waveform reduces the effective sampling rate of the signal to the modulation frequency. The methods to further improve the sampling rate and resolution include increasing the number of ground-electrode blades, increasing the rotational speed and using modulation-frequency dither technology [11].
\section{ACKNOWLEDGMENT}
The author would like to thank Prof. Xinghua Lu at the Institute of Physics, Chinese Academy of Sciences, for valuable and inspiring discussions. The work is supported by the National Natural Science Foundation of China (No. 51807030).
|
1,941,325,220,020 | arxiv | \section{Introduction}\label{sec:Intro}
The nonlocal diffusion problems in the Euclidean space $\mathbb{R}^n$ have recently been widely used to model diffusion processes. More precisely, in \cite{Fi} the authors consider a function $u(x, t)$ that models the probability density function of a single population at the point $x$ at time $t$. Let $J$ be a symmetric function with $\displaystyle\int_{\mathbb{R}^n}J(x)\,dx=1$, and think of $J(x-y)$ as the probability distribution of jumping from location $y$ to location $x$; then $\displaystyle J\ast u(x,t) = \int_{\mathbb{R}^n}J(y-x)u(y,t)\,dy$ is the rate at which individuals are arriving at position $x$ from all other places, and $\displaystyle -u(x, t) = -\int_{\mathbb{R}^n}J(y-x)u(x,t)\,dy$ is the rate at which they are leaving location $x$ to travel to all other sites. Then $u$ satisfies a nonlocal evolution equation of the form
\begin{align}\label{0.1}
u_t(x,t)=J\ast u(x,t)-u(x,t).
\end{align}
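Note that, since $\int_{\mathbb{R}^n}J(x-y)\,dy=1$, the right-hand side of \eqref{0.1} can equivalently be written as
\begin{align*}
J\ast u(x,t)-u(x,t)=\int_{\mathbb{R}^n}J(x-y)\bigl(u(y,t)-u(x,t)\bigr)\,dy,
\end{align*}
which is the form that the nonlocal operators considered below generalize.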
In the work \cite{CER} the authors prove that solutions of properly rescaled nonlocal Dirichlet problems of the equation \eqref{0.1} uniformly approximate the solution of the corresponding Dirichlet problem for the classical heat equation in $\mathbb{R}^n$.
These types of problems have been used to model very different applied situations,
for example in biology \cite{CF} and \cite{ML}, image processing \cite{KOJ}, particle systems \cite{BV},
coagulation models \cite{FL}, etc.
In the context of the Euclidean space $\mathbb{R}^n$, some of these results have been generalized to kernels that are not of convolution type. More precisely, we will consider the following problems, which were originally set in $\mathbb{R}^{n}$, in the more general context of Carnot groups:
\begin{itemize}
\item In the work \cite{MR} the authors prove that smooth solutions to the Dirichlet problem for the parabolic equation
$$
v_t(x,t)=\sum_{i,j}^na_{i,j}(x)\frac{\partial^2 v(x,t)}{\partial x_i\partial x_j}+\sum_i^nb_i\frac{\partial v(x,t)}{\partial x_i}, \qquad x\in\Omega,
$$
with $v(x,t) = g(x, t)$, $x \in\partial\Omega$, can be uniformly approximated by solutions of nonlocal problems of the form
$$
u_t^\epsilon(x,t)=\int_{\mathbb{R}^n}K_{\epsilon}(x,y)(u^\epsilon(y,t)-u^\epsilon(x,t))\,dy, \qquad x\in \Omega
$$
with $u^\epsilon(x,t) = g(x, t)$ for $x\notin \Omega$ as $\epsilon \to 0$, for an appropriate rescaled kernel $K_\epsilon$.
\item On the other hand, in \cite{SLY} the authors consider the following Fokker-Planck equation
$$v_{t}(x,t)=\sum\limits_{i=1}^{n} (a(x)v(x,t))_{x_{i}x_{i}},\qquad x\in \Omega,$$
with $v(x,t) = g(x, t)$ for $x\in \partial\Omega$ and coefficient $a\in C^{\infty}(\mathbb{R}^{n})$.
They prove that the solutions of this problem can be uniformly approximated by the solutions of the non-local problem
$$u_{t}(x,t)=\int\limits_{\mathbb{R}^{n}} a(y)J(x-y)u(y,t)dy - a(x)u(x,t),\qquad x\in \Omega, $$
properly rescaled, where $\int\limits_{\mathbb{R}^{n}}J(x)dx=1$ and $u(x,t) = g(x, t)$ for $x\notin \Omega$.
\end{itemize}
In this way, in \cite{MR} and \cite{SLY} the authors show that the usual local
evolution problems with spatial dependence can be approximated by nonlocal ones.
As an antecedent for working in a setting other than the Euclidean one, we cite \cite{Vi}, where the author considers a nonlocal diffusion problem on the Heisenberg group and obtains results analogous to those of the works \cite{CCR} and \cite{CER}. Both the Euclidean space and the Heisenberg group are examples of Carnot groups.
The study of Carnot groups and of PDEs on them has been increasing in the last years, since their topology is similar to the Euclidean topology and hypoelliptic equations are naturally defined on them (see the fundamental work of H\"{o}rmander \cite{H}). Regularity results, the study of fundamental solutions, the computation of a priori estimates, the study of asymptotic behaviour, etc., in this context, and mainly for the subLaplacian and for the heat operator, can be found, for example, in the works \cite{CG}, \cite{BF}, \cite{BLU}, \cite{BBLU}, \cite{DG}, \cite{R}, and references therein. Let us finally remark that this list is by no means exhaustive.
A Carnot group is a simply connected and connected Lie group $G$, whose Lie algebra $\mathfrak{g}$ is stratified, this means that $\mathfrak{g}$ admits a vector space decomposition $\mathfrak{g}=V_1\oplus \cdots\oplus V_m$ with grading $[V_1,V_j]=V_{j+1}$, for $j=1,\cdots, m-1,$ and has a family of dilations $\{\delta_\epsilon\}_{\epsilon>0}$ such that $\delta_\epsilon X=\epsilon^j X$ if $X\in V_j$.
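For instance, for the Heisenberg group $\mathbb{H}^1$, written in exponential coordinates, one has $\mathfrak{g}=V_1\oplus V_2$ with $\dim V_1=2$ and $\dim V_2=1$, and the dilations take the form
\begin{align*}
\delta_\epsilon(x,y,z)=(\epsilon x,\epsilon y,\epsilon^2 z).
\end{align*}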
Let $\{X_1,\dots,X_{n_1}\}$ be a basis of $V_1$ and $\{X_{n_1+1},\dots,X_{n_1+n_2}\}$ a basis of $V_2$, and let $\Omega\subset G$ be a bounded $C^{2+\alpha}$ domain, $0<\alpha<1$.
We consider the following second order local parabolic differential equation with Dirichlet boundary conditions
\begin{equation}\label{0.2}%
\begin{cases}
v_{t}(x,t)=\sum\limits_{i=1}^{n_1}\sum\limits_{j=1}^{n_1}a_{ij}(x)X_i X_j v(x,t)+\sum\limits_{i=1}^{n_1+n_2}b_{i}(x)X_iv(x,t),\qquad & x\in \Omega,\quad t>0,\\
v(x,t)=g(x,t), & x\in \partial\Omega,\quad t>0,\\
v(x,0)=u_{0}(x), & x\in\Omega,
\end{cases}
\end{equation}
where the coefficients $a_{ij}(x)$, $b_i(x)$ are smooth in $\overline\Omega$ and $(a_{ij}(x))$ is a symmetric, uniformly positive definite matrix, i.e., $\sum_{i,j} a_{ij}(x)\xi_i\xi_j\geq \alpha|\xi|^2$ for every real vector $\xi = (\xi_1,\dots,\xi_{n_1}) \neq 0$ and for some $\alpha>0$. We also have the following nonlocal rescaled Dirichlet problem
\begin{equation}\label{0.3}%
\begin{cases}
u^{\epsilon}_{t}(x,t)=\mathcal{K}_{\epsilon}(u^\epsilon)(x,t),\qquad & x\in \Omega,\quad t>0,\\
u^{\epsilon}(x,t)=g(x,t), & x\notin \Omega,\quad t>0,\\
u^{\epsilon}(x,0)=u_{0}(x), & x\in\Omega,
\end{cases}
\end{equation}
where $\mathcal{K}_{\epsilon}$ is a nonlocal operator defined by a rescaled kernel (see section \ref{subsec:EvolEqn.kernel}). We will prove the next Theorem:
\begin{thm}\label{thm:MainMR}
Let $u^{\epsilon}$ be the solution of problem \eqref{0.3}, where $\mathcal{K}_{\epsilon}$ is defined by formula \eqref{Kepsilon}, $g\in C^{2+\alpha}(\Omega^{c}\times [0,T])$ and $u_{0}\in C^{2+\alpha}(\Omega)$. Then there exists a positive constant $c$ such that
\begin{align*}
||u^{\epsilon}-v||_{L^{\infty}(\Omega\times[0,T])}\le c\epsilon^{\alpha},
\end{align*}
where $v$ is the solution of problem \eqref{0.2}.
\end{thm}
Finally, we also study the Fokker-Planck parabolic problem with Dirichlet condition
\begin{equation}\label{0.4}%
\begin{cases}
v_{t}(x,t)=\sum\limits_{i=1}^{n_1}X_{i}X_{i}(a(\cdot)v(\cdot,t))(x),\qquad & x\in \Omega,\quad t>0,\\
v(x,t)=g(x,t), & x\in \partial\Omega,\quad t>0,\\
v(x,0)=u_{0}(x), & x\in\Omega,
\end{cases}
\end{equation}
where the coefficient $a\in C^{\infty}(G)$; and the rescaled nonlocal Dirichlet problem given by
\begin{equation}\label{0.5}%
\begin{cases}
u^{\epsilon}_{t}(x,t)=\mathcal{L}_{\epsilon}(u^\epsilon)(x,t),\qquad & x\in \Omega,\quad t>0,\\
u^{\epsilon}(x,t)=g(x,t), & x\notin \Omega,\quad t>0,\\
u^{\epsilon}(x,0)=u_{0}(x), & x\in\Omega,
\end{cases}
\end{equation}
where $\mathcal{L}_{\epsilon}(u)$ is defined in Section \ref{subsec:SYL}. We will prove the following theorem:
\begin{thm}\label{thm:MainSYL}
Let $u^{\epsilon}$ be the solution of problem \eqref{0.5} where $\mathcal{L_{\epsilon}}$ is defined by formula \eqref{Lepsilon}, $g\in C^{2+\alpha}(\Omega^{c}\times [0,T])$ and $u_{0}\in C^{2+\alpha}(\Omega)$. Then there exists a positive constant $c$ such that
\begin{align*}
||u^{\epsilon}-v||_{L^{\infty}(\Omega\times[0,T])}\le c\epsilon^{\alpha},
\end{align*}
where $v$ is the solution of problem \eqref{0.4}.
\end{thm}
Results such as Theorems \ref{thm:MainMR} and \ref{thm:MainSYL} make it possible to approximate the solutions of flow equations in Carnot groups by solutions of nonlocal problems.
It is important to stress that here we use the fact that \eqref{0.2} and \eqref{0.4} have smooth solutions. In fact, under regularity assumptions on the boundary data $g$, the domain $\Omega$ and the initial condition $u_0$, the solutions of \eqref{0.2} belong to $C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T])$. For such regularity results, we refer to the previously cited articles.
\par The rest of the paper is organized as follows. In Section \ref{sec:Prelim} we recall some definitions and results on Carnot groups and establish the notation to be used later. In Section \ref{sec:kernels} we define and study the operators $\mathcal{K}_{\epsilon}$ and $\mathcal{L}_{\epsilon}$. In Section \ref{sect3.1} we study the existence, uniqueness and properties of the solutions of problems \eqref{0.3} and \eqref{0.5}. In Section \ref{sec:Proofs} we prove Theorems \ref{thm:MainMR} and \ref{thm:MainSYL}.
\section{Preliminaries}\label{sec:Prelim}
\subsection{Homogeneous Lie groups}\label{subsec:Hom.L.Grps.}
\par Let $\mathfrak{g}$ be a real Lie algebra of finite dimension $n$ and let $G$ be its corresponding connected and simply connected Lie group. Recall that if $G$ is nilpotent, the exponential map $\exp:\mathfrak{g}\to G$ is a diffeomorphism.
\par If $\mathfrak{g}$ is nilpotent and we choose a basis $\{X_{1},\dots,X_{n}\}$ for $\mathfrak{g}$, we identify $\mathbb{R}^{n}$ with the group $G$ via the exponential map: let $\varphi:\mathbb{R}^{n}\to G$ be such that every $(x_{1},\dots,x_{n})\in\mathbb{R}^{n}$ is identified with $\varphi(x_{1},\dots,x_{n})=\exp(x_{1}X_{1}+\dots+x_{n}X_{n})$. Since the Baker-Campbell-Hausdorff series has finitely many terms, the group law is polynomial and may be written as $xy=(p_{1}(x,y),\dots,p_{n}(x,y))$, where the $p_{j}$ are polynomial maps.
\par A \textit{family of dilations} on $\mathfrak{g}$ is a one parameter family of automorphisms $\{\delta_{r}:r>0\}$ of $\mathfrak{g}$ of the form $\delta_{r}=\text{Exp}(A\log r)$, where $\text{Exp}$ denotes the matrix exponential function, and $A$ is a diagonalizable linear transformation of $\mathfrak{g}$ with positive eigenvalues. Thus, \begin{align*}
\delta_{r}(X)=&\text{Exp}(A\log r)(X)=\sum\limits_{l=0}^{\infty}\frac{1}{l!}(\log r)^{l}A^{l}X.\end{align*}
If a Lie algebra admits a family of dilations then $\mathfrak{g}$ is nilpotent (see \cite{FS} for example), and $G$ is called a \textit{homogeneous Lie group}. Let us remark that not every nilpotent Lie algebra admits a family of dilations (see \cite{D}).
\par If $G$ is a homogeneous Lie group, it is nilpotent; hence the dilations on $\mathfrak{g}$ lift via the exponential map to a one parameter group of automorphisms of $G$. We also call the maps $\exp \circ \delta_{r} \circ (\exp)^{-1}$ \textit{dilations} on $G$ and denote them again by $\delta_{r}$. The \textit{homogeneous dimension} of $G$ is defined to be the number $Q=\text{trace}(A)=\lambda_{1}+\dots +\lambda_{n}$, where $\lambda_{1},\dots,\lambda_{n}$ are the eigenvalues of $A$. Since $\text{Exp}(\alpha A \log r)=\delta_{r^{\alpha}}$ for any $\alpha>0$, we may assume that the smallest eigenvalue of $A$ is $1$; moreover, we may assume that $1=\lambda_{1}\le\dots\le \lambda_{n}=\overline{\lambda}$. Let $\beta=\{X_{1},\dots,X_{n}\}$ be a basis of eigenvectors of $A$ with $AX_{j}=\lambda_{j} X_{j}$, so that in this basis the automorphism $A$ is diagonal: $$[A]_{\beta}=\begin{pmatrix} \lambda_{1} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \lambda_{n} \end{pmatrix},$$ hence also $$[\delta_{r}]_{\beta}=\begin{pmatrix} r^{\lambda_{1}} & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & r^{\lambda_{n}} \end{pmatrix},$$
then $r^{\lambda_{j}}$ is an eigenvalue of $\delta_{r}$ and $X_{j}$ is an associated eigenvector. We also have that $\delta_{r}[X_{i},X_{j}]=[\delta_{r}X_{i},\delta_{r}X_{j}]=r^{\lambda_{i}+\lambda_{j}}[X_{i},X_{j}]$.
\par A Lie algebra $\mathfrak{g}$ is called \textit{graded} if it is endowed with a vector space decomposition $\mathfrak{g}=\bigoplus\limits_{j=1}^{\infty}V_{j}$ (where all but finitely many of the $V_{k}$'s are $\{0\}$), such that $[V_{i},V_{j}]\subset V_{i+j}$. The Lie group $G$ is also called \textit{graded}.
\par A graded Lie algebra $\mathfrak{g}$ is said to be \textit{stratified} if it admits a vector space decomposition as follows: there exists $m\le n$ such that $\mathfrak{g}=V_{1}\oplus\dots\oplus V_{m}$, where $V_{k+1}=[V_{1},V_{k}]\neq\{0\}$, for all $1\le k< m$ and $V_{k}=\{0\}$ if $k>m$. This also means that $V_{1}$ generates $\mathfrak{g}$ as a Lie algebra. A stratified Lie algebra is nilpotent of step $m$ and there is a natural family of dilations on $\mathfrak{g}$ given by $\delta_{r}\left(\sum\limits_{k=1}^{m}Y_{k}\right)=\sum\limits_{k=1}^{m}r^{k}Y_{k}$, where each $Y_{k}\in V_{k}$. The Lie group $G$ is also called \textit{stratified} or \textit{Carnot group}.
\par In the case of a stratified Lie algebra $\mathfrak{g}$, the following notation will be used:
\begin{itemize}
\item the set of eigenvalues of $A$ is $\mathcal{A}=\{1,2,\dots,m\}$,
\item the set of eigenvalues of each $\delta_{r}$, $r>0$, is $\{r^{1},\dots,r^{m}\}$,
\item the basis $\beta=\{X_{1},\dots,X_{n}\}$ of $\mathfrak{g}$ is adapted to the gradation in the following sense: if $\dim(V_{k})=n_{k}$ for $1\le k\le m$, then $n=n_{1}+\dots+n_{m}$, and
\begin{itemize}
\item $\beta_{1}=\{X_{1},\dots,X_{n_{1}}\}$ is a basis of $V_{1}$ of eigenvectors associated to the eigenvalue $\lambda=1$,
\item $\beta_{2}=\{X_{n_{1}+1},\dots,X_{n_{1}+n_{2}}\}$ is a basis of $V_{2}$ of eigenvectors associated to the eigenvalue $\lambda=2$,
\item $\dots$
\item $\beta_{m}=\{X_{n_{1}+\dots+n_{m-1}+1},\dots,X_{n}\}$ is a basis of $V_{m}$ of eigenvectors associated to the eigenvalue $\lambda=m$.
\end{itemize}
\item the homogeneous dimension is $Q=\sum\limits_{k=1}^{m}kn_{k}$.
\item the identification $\phi=(\phi_{1},\dots,\phi_{n}):\mathfrak{g}\to\mathbb{R}^{n}$ such that if $X=x_{1}X_{1}+\dots + x_{n}X_{n}\in\mathfrak{g}$, then $\phi(X)=(x_{1},\dots,x_{n})$, and $\phi_{j}(X)=x_{j}$. Observe that if $x\in G$, $\phi_{j}(\exp^{-1}(x))=\pi_{j}(\varphi^{-1}(x))$, where $\pi_{j}:\mathbb{R}^{n}\to\mathbb{R}$ denotes the projection.
\end{itemize}
\par Let us consider, until the end of the section, a homogeneous Lie algebra $\mathfrak{g}$ with homogeneous Lie group $G$.
\par We define a Euclidean norm $||.||$ on $G$ as follows: if $\{X_{1},\dots,X_{n}\}$ is a basis of $\mathfrak{g}$, define $||.||$ on $\mathfrak{g}$ by declaring this basis orthonormal, and then lift it to $G$ via the exponential map, that is, for $x\in G$ define $||x||=||\exp^{-1}x||$. For practical reasons we will use a \textit{homogeneous norm} $|.|$, which we construct as follows: if $X=\sum\limits_{1}^{n}c_{j}X_{j} \in \mathfrak{g}$ then $||\delta_{r}X||=\left( \sum\limits_{1}^{n} c_{j}^{2} r^{2\lambda_{j}} \right)^{\frac{1}{2}}$. If $X\neq 0$ then $||\delta_{r}X||$ is a strictly increasing function of $r$ which tends to $0$ as $r\to 0$ and to $\infty$ as $r\to\infty$. Hence there is a unique $r(X)>0$ such that $||\delta_{r(X)}X||=1$, and we set $|0|=0$ and $|x|=\frac{1}{r(\exp^{-1}x)}$ for $x\neq 0$.
\par The Lebesgue measure on $\mathfrak{g}$ induces a biinvariant Haar measure on $G$, and we fix the normalization of the Haar measure on $G$ by requiring the measure of the unit ball to be $1$. We shall denote by $|E|$ the measure of a measurable set $E$ and by $\int f=\int f dx$ the integral of a function $f$ with respect to this measure. Hence, $|\delta_{r}(E)|=r^{Q}|E|$ and $d(rx)=r^{Q}dx$. In particular, $|B(r,x)|=r^{Q}$ for all $r>0$ and $x\in G$.
\par A function $f$ on $G\backslash\{0\}$ will be called \textit{homogeneous of degree $\sigma$} if $f\circ\delta_{r}=r^{\sigma}f$ for $r>0$. For any $f$ and $g$ we have that $\int f(x)(g\circ\delta_{r})(x)dx=r^{-Q}\int(f\circ\delta_{\frac{1}{r}})(x)g(x)dx$, if the integrals exist. Hence we extend the map $f\to f\circ{\delta_{r}}$ to distributions as follows: $<f\circ\delta_{r},\varphi>=r^{-Q}<f,\varphi\circ\delta_{\frac{1}{r}}>$, for a distribution $f$ and a test function $\varphi$. We say that a distribution $f$ is \textit{homogeneous of degree $\sigma$} if $f\circ\delta_{r}=r^{\sigma}f$. A differential operator $D$ on $G$ is \textit{homogeneous of degree $\sigma$} if $D(f\circ\delta_{r})=r^{\sigma}(Df)\circ\delta_{r}$ for any $f$. Observe that if $D$ is homogeneous of degree $\sigma$ and $f$ is homogeneous of degree $\mu$ then $Df$ is homogeneous of degree $\mu-\sigma$.
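\par For example, each basis vector field $X_{j}$, regarded as a left invariant differential operator, is homogeneous of degree $\lambda_{j}$: since $\delta_{r}$ is an automorphism and $\delta_{r}X_{j}=r^{\lambda_{j}}X_{j}$, we get $X_{j}(f\circ\delta_{r})(x)=\left(\frac{d}{dt}f(\delta_{r}(x)\exp(tr^{\lambda_{j}}X_{j}))\right)_{t=0}=r^{\lambda_{j}}(X_{j}f)(\delta_{r}x)$.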
\par The approximations to the identity in this context take the following form: if $\psi$ is a function on $G$ and $t>0$, we define $\psi_{t}=t^{-Q}\psi\circ\delta_{\frac{1}{t}}$. Observe that if $\psi\in L^{1}$ then $\int\psi_{t}(x)dx$ is independent of $t$.
\par We will also use the following multiindex notation: if $I=(i_{1},\dots,i_{n})\in\mathbb{N}_{0}^{n}$, we set $X^{I}=X_{1}^{i_{1}}X_{2}^{i_{2}}\dots X_{n}^{i_{n}}$. The operators $X^{I}$ form a basis for the algebra of left invariant differential operators on $G$, by the Poincar\'{e}-Birkhoff-Witt Theorem. The order of the differential operators $X^{I}$ is $|I|=i_{1}+i_{2}+\dots+i_{n}$ and its \textit{homogeneous degree} is $d(I)=\lambda_{1}i_{1}+\lambda_{2}i_{2}+\dots +\lambda_{n}i_{n}$. Finally, let $\triangle$ be the additive subsemigroup of $\mathbb{R}$ generated by $0,\lambda_{1},\dots,\lambda_{n}$. Observe that $\triangle=\{d(I):I\in\mathbb{N}^{n}\}\supset\mathbb{N}$ (since $\lambda_{1}=1$), and if $G$ is a Carnot group $\triangle=\mathbb{N}$.
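\par To illustrate, in the Heisenberg group $\mathbb{H}^{1}$ one has $(\lambda_{1},\lambda_{2},\lambda_{3})=(1,1,2)$, so the operator $X^{I}=X_{1}X_{3}$ corresponding to $I=(1,0,1)$ has order $|I|=2$ but homogeneous degree $d(I)=1\cdot 1+2\cdot 1=3$; the homogeneous dimension in this case is $Q=1+1+2=4$.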
\par Finally, if $G$ is a Carnot group, it is clear that $X\in\mathfrak{g}$ is homogeneous of degree $k$ if and only if $X\in V_{k}$. We have defined the basis $\beta$ of eigenvectors such that $X_{1},\dots,X_{n_{1}}$ is a basis for $V_{1}$. Let us now define $\mathcal{J}=\sum\limits_{j=1}^{n_{1}}X_{j}^{2}$, thus $-\mathcal{J}$ is a left invariant second order differential operator which is homogeneous of degree 2 called the subLaplacian of $G$ (relative to the stratification and the basis). Its role on $G$ is analogous to (minus) the ordinary Laplacian in $\mathbb{R}^{n}$.
\subsection{Taylor Series Expansions on a Lie Group}\label{subsec:TSE}
\par Consider any Lie group $G$ with Lie algebra $\mathfrak{g}$. Let $x\in G$, $X\in\mathfrak{g}$. We have that $t\to x\exp tX$ is the integral curve of the vector field $X$ through the point $x$ and that, if $f$ is any function defined and analytic around $x$, $(Xf)(x)=f(x;X)=\left(\frac{d}{dt}f(x\exp tX)\right)_{t=0}$. Here we do not necessarily consider a basis for $\mathfrak{g}$, hence observe that the elements $X_{1},\dots,X_{s}$ of $\mathfrak{g}$ will not necessarily belong to a basis.
\begin{lem}[Lemma 2.12.1 of \cite{V}]\label{Lem:higher.order.deriv}
Let $x\in G$, $X\in\mathfrak{g}$. Then for any integer $k\ge 0$ and any function $f$ defined and $C^{\infty}$ around $x$, $(X^{k}f)(x)=f(x;X^{k})=\left(\frac{d^{k}}{dt^{k}}f(x\exp tX)\right)_{t=0}$. If $f$ is analytic around $x$, we have, for all sufficiently small $|t|$, $f(x\exp tX)=\sum\limits_{n=0}^{\infty}f(x;X^{n})\frac{t^{n}}{n!}$.
\end{lem}
\begin{lem}[Lemma 2.12.2 of \cite{V}]\label{Lem:multipl.deriv}
Let $x\in G$, $X_{1},\dots,X_{s}\in\mathfrak{g}$. If $f$ is a function defined and $C^{\infty}$ around $x$ then $$(X_{1}\dots X_{s}f)(x)=f(x;X_{1}\dots X_{s})=\left( \frac{\partial^{s}}{\partial t_{1}\dots\partial t_{s}}f(x\exp t_{1}X_{1}\dots\exp t_{s}X_{s})\right)_{t_{1}=\dots=t_{s}=0}.$$
\end{lem}
\par To obtain a general expansion formula for functions on $G$ we need some notation. Fix an $s\in\mathbb{N}$ and $X_{1},\dots,X_{s}\in\mathfrak{g}$. For any ordered $s-$tuple $\textbf{n}=(n_{1},\dots,n_{s})\in\mathbb{N}_{0}^{s}$ we have that $|\textbf{n}|=n_{1}+\dots+n_{s}$, $\textbf{n}!=n_{1}!\dots n_{s}!$ and write $X(\textbf{n})$ for the coefficient of $t_{1}^{n_{1}}\dots t_{s}^{n_{s}}$ in the formal polynomial $\frac{\textbf{n}!}{|\textbf{n}|!}(t_{1}X_{1}+\dots +t_{s}X_{s})^{n_{1}+\dots + n_{s}}$ (recall that in general $X_{1},\dots,X_{s}$ do not commute). When $n_{1}=n_{2}=\dots=n_{s}=0$ we put $X(\textbf{n})=1$. Each $X(\textbf{n})$ is an element of the universal enveloping algebra $\mathcal{U}$ of $G$. The order of the differential operator $X(\textbf{n})$ is $\le |\textbf{n}|$. Let us describe $X(\textbf{n})$ in another manner. Define the elements $Z_{1},\dots,Z_{|\textbf{n}|}\in\mathfrak{g}$ by $$\left\{ \begin{array}{rcl} Z_{j}=X_{1} & \mbox{ if } & 1\le j\le n_{1} \\
Z_{n_{1}+\dots+n_{k}+j}=X_{k+1} & \mbox{ if } & 1\le j\le n_{k+1}, 1\le k\le s-1. \end{array}\right.$$ Then $X(\textbf{n})=\frac{1}{|\textbf{n}|!}\sum\limits_{(i_{1},\dots,i_{|\textbf{n}|})}Z_{i_{1}}Z_{i_{2}}\dots Z_{i_{|\textbf{n}|}}$, where the sum extends over all permutations $(i_{1},\dots,i_{|\textbf{n}|})$ of $(1,2,\dots,|\textbf{n}|)$.
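\par For instance, for $s=2$ and $\textbf{n}=(1,1)$, $X(\textbf{n})$ is the coefficient of $t_{1}t_{2}$ in $\frac{1}{2!}(t_{1}X_{1}+t_{2}X_{2})^{2}$, that is, $X(\textbf{n})=\frac{1}{2}(X_{1}X_{2}+X_{2}X_{1})$, which agrees with the description in terms of $Z_{1}=X_{1}$, $Z_{2}=X_{2}$ and the sum over the two permutations of $(1,2)$.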
\begin{thm}[Theorem 2.12.3 of \cite{V}]\label{thm:Taylor.series}
Let $x\in G$ and let $f$ be a function defined and analytic around $x$. Let $X_{1},\dots X_{s}\in\mathfrak{g}$. Then there is an $a>0$ such that $$f(x\exp(t_{1}X_{1}+\dots+t_{s}X_{s}))=\sum\limits_{n_{1},\dots,n_{s}\ge 0}\frac{t_{1}^{n_{1}}\dots t_{s}^{n_{s}}}{\textbf{n}!}f(x;X(\textbf{n})),$$ the series converging absolutely and uniformly in the cube $I_{a}^{s}=\{(t_{1},\dots,t_{s})\in\mathbb{R}^{s}:|t_{i}|<a, i=1,\dots,s\}$.
\end{thm}
\subsection{Taylor polynomials in homogeneous Lie groups}
\par We say that a function $P$ on $G$ is a \textit{polynomial} if $P\circ\exp$ is a polynomial on $\mathfrak{g}$. Let $\xi_{1},\dots,\xi_{n}$ be the basis for the linear forms on $\mathfrak{g}$ dual to the basis $X_{1},\dots,X_{n}$ on $\mathfrak{g}$. Let us set $\eta_{j}=\xi_{j}\circ\exp^{-1}$; then $\eta_{1},\dots,\eta_{n}$ are polynomials on $G$ which form a global coordinate system on $G$, and generate the algebra of polynomials on $G$. Thus, every polynomial on $G$ can be written uniquely as $P=\sum\limits_{I}a_{I}\eta^{I}$, for $\eta^{I}=\eta_{1}^{i_{1}}\dots\eta_{n}^{i_{n}}$, $a_{I}\in\mathbb{C}$, where all but finitely many of them vanish. Since $\eta^{I}$ is homogeneous of degree $d(I)$, the set of possible degrees of homogeneity for polynomials is the set $\triangle$. We call $\max\{|I|:a_{I}\neq 0\}$ the \textit{isotropic degree} of a polynomial, and its \textit{homogeneous degree} is $\max\{d(I):a_{I}\neq 0\}$. For each $N\in\mathbb{N}$ we define the space $\mathcal{P}_{N}^{iso}$ of polynomials of isotropic degree $\le N$, and for each $j\in\triangle$ we define the space $\mathcal{P}_{j}$ of polynomials of homogeneous degree $\le j$. It follows that $\mathcal{P}_{N}\subset \mathcal{P}_{N}^{iso}\subset \mathcal{P}_{\overline{\lambda}N}$. The space $\mathcal{P}_{j}$ is invariant under left and right translations (see Proposition 1.25 of \cite{FS}), but the space $\mathcal{P}_{N}^{iso}$ is not (unless $N=0$ or $G$ is abelian).
\par For a function $f$ whose derivatives $X^{I}f$ are continuous functions on a neighbourhood of a point $x\in G$ and for $j\in\triangle$, we define the \textit{left Taylor polynomial of $f$ at $x$ of homogeneous degree $j$} to be the unique polynomial $P\in\mathcal{P}_{j}$ such that $X^{I}P(0)=X^{I}f(x)$ for all $I$ with $d(I)\le j$.
\subsubsection{Stratified Taylor inequality}\label{subsec:Str.T.Ineq}
\par Throughout this section we will consider a fixed stratified group $G$ with the notation described previously. We will regard the elements of the basis of $\mathfrak{g}$ adapted to the gradation as left invariant differential operators on $G$.
\par Since $V_{1}$ generates $\mathfrak{g}$ as a Lie algebra, we have that $\exp(V_{1})$ generates $G$. More precisely:
\begin{lem}[Lemma 1.40 of \cite{FS}]\label{Lem:strat.coords}
If $G$ is stratified there exist $C>0$ and $N\in\mathbb{N}$ such that any $x\in G$ can be expressed as $x=x_{1}\dots x_{N}$ with $x_{j}\in\exp(V_{1})$ and $|x_{j}|\le C|x|$, for all $j$.
\end{lem}
\par For $k\in\mathbb{N}$ we define $C^{k}$ to be the space of continuous functions $f$ on $G$ whose derivatives $X^{I}f$ are continuous functions on $G$ for $d(I)\le k$. Another important consequence of the fact that $V_{1}$ generates $\mathfrak{g}$ is that the set of left invariant differential operators which are homogeneous of degree $k$ (which is the linear span of $\{X^{I}:d(I)=k\}$) is precisely the linear span of the operators $X_{i_{1}}\dots X_{i_{k}}$ with $1\le i_{j}\le n_{1}$ for $j=1,\dots,k$. We thus have the following results:
\begin{thm}[Theorem 1.41, Stratified Mean Value Theorem, \cite{FS}]\label{Thm:SMVT} Suppose $G$ is \newline stratified. There exist $C>0$ and $b>0$ such that for all $f\in C^{1}$ and all $x,y\in G$, $$|f(xy)-f(x)|\le C|y|\sup\limits_{|z|\le b|y|, 1\le k\le n_{1}}|X_{k}f(xz)|.$$
\end{thm}
\begin{thm}[Theorem 1.42, Stratified Taylor Inequality, of \cite{FS}]\label{Thm:STI} Suppose $G$ is \newline stratified. For each positive integer $k$ there is a constant $C_{k}$ such that for all $f\in C^{k}$ and all $x,y\in G$, $$|f(xy)-P_{x}(y)|\le C_{k}|y|^{k}\eta(x,b^{k}|y|),$$ where $P_{x}$ is the (left) Taylor polynomial of $f$ at $x$ of homogeneous degree $k$, $b$ is as in the Stratified Mean Value Theorem, and for $r>0$, $$\eta(x,r)=\sup\limits_{|z|\le r, d(I)=k} |X^{I}f(xz)-X^{I}f(x)|.$$
\end{thm}
\par Finally, we define the space $C^{k}(\Omega)$ of those functions $f$ defined on $\Omega$ such that $Df$ is continuous for every differential operator $D$ of homogeneous degree less than or equal to $k$.
\par For a function $f\in C^{k}(\Omega)$ and $x\in\Omega$ let $P_{x}$ denote the Taylor polynomial of $f$ at $x$ of homogeneous degree $k$. By Theorem \ref{Thm:STI} we have that for $\epsilon>0$,
\begin{align*}
\frac{1}{\epsilon^{k}}|f(x\delta_{\epsilon} y)-P_{x}(y)|\le & \frac{c_{k}|\delta_{\epsilon}y|^{k}}{\epsilon^{k}} \eta(x,b^{k}|\delta_{\epsilon}y|) = c_{k} |y|^{k} \eta(x,b^{k}|\delta_{\epsilon}y|),
\end{align*}
which goes to $0$ as $\epsilon$ does. Hence, if $f$ is analytic on $\Omega$, we have from Theorem \ref{thm:Taylor.series} the following Taylor polynomial expansion of $f$ at $x$ of homogeneous degree $k=2$:
\begin{align}\label{simple.Taylor.expansion} \nonumber
f(x\exp(t_{1}X_{1}+\dots+t_{n}X_{n}))= & f(x)+\sum\limits_{i=1}^{n_{1}+n_{2}}t_{i}X_{i}f(x) \\ & + \frac{1}{2} \sum\limits_{i,j=1}^{n_{1}}t_{i}t_{j}X_{i}X_{j}f(x)+o(|t_{1}X_{1}+\dots+t_{n}X_{n}|^{2}),
\end{align}
in the sense that
\begin{align*}
\lim\limits_{\epsilon\to 0} \frac{o(|\delta_{\epsilon}(t_{1}X_{1}+\dots+t_{n}X_{n})|^{2})}{\epsilon^{2}}=0.&
\end{align*}
\section{Some nonlocal diffusion problems}\label{sec:kernels}
\par Throughout this section we let $G$ be a Carnot group with Lie algebra $\mathfrak{g}$ and let $\Omega$ be an open, bounded and connected subset of $G$. The aim of this section is to properly define the operators $\mathcal{K}_{\epsilon}$ and $\mathcal{L}_{\epsilon}$ from the introduction. In order to understand the techniques involved, we will first work with an evolution operator of a much simpler form, namely the operator given in \eqref{0.1}, in the context of the Carnot group $G$. We will see that the solutions of the rescaled nonlocal Dirichlet problems uniformly approximate the solution of the classical heat equation with Dirichlet conditions.
\par Let us consider a positive function $J\in L^{1}(G)$ with compact support $F$ such that $J$ is symmetric in the sense that $J(x)=J(x^{-1})$ for all $x\in G$, whence for $i=1,\dots,n$
\begin{align}\label{J.x}
&\int\limits_{\mathbb{R}^{n}} J(\exp(t_{1}X_{1}+\dots+t_{n}X_{n})) t_{i} dt_{1}\dots dt_{n} =0;
\end{align}
and also
\begin{align}\label{J.x^2}
&\int\limits_{\mathbb{R}^{n}} J(\exp(t_{1}X_{1}+\dots+t_{n}X_{n})) t_{i}^{2} dt_{1}\dots dt_{n} = C(J),
\end{align}
for a constant $C(J)>0$, $i=1,\dots,n$. From both properties it follows that for $i,j=1,\dots,n$,
\begin{align}\label{J.deltaij}
&\int\limits_{\mathbb{R}^{n}} J(\exp(t_{1}X_{1}+\dots+t_{n}X_{n})) t_{i}t_{j} dt_{1}\dots dt_{n} = C(J)\delta_{ij}.
\end{align}
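\par A model case to keep in mind is $J=\chi_{B(1,e)}$, the indicator of the unit ball centered at the identity (which satisfies $\int_{G}J=1$ with our normalization of the Haar measure). Indeed, $|x^{-1}|=|x|$ for the homogeneous norm of Section \ref{subsec:Hom.L.Grps.}, because $\exp^{-1}(x^{-1})=-\exp^{-1}(x)$, so this $J$ is symmetric; property \eqref{J.x} then follows since the integrand is odd under $(t_{1},\dots,t_{n})\mapsto(-t_{1},\dots,-t_{n})$, and one checks that in exponential coordinates the unit ball of $|.|$ is the Euclidean unit ball, so \eqref{J.x^2} and \eqref{J.deltaij} hold as well.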
\subsection{An evolution equation}\label{subsec:EvolEqn}
\par The evolution equation \eqref{0.1} is given in our context by the \textit{evolution operator}
\begin{align}\label{evol.op}
\mathcal{E}u=J\ast u - u,
\end{align}
namely for a suitable domain $\Omega\times[0,T]$,
\begin{align}\label{evol.eqn}
u_{t}(x,t) = & \mathcal{E}u(x,t).
\end{align}
\par For each $\epsilon >0$ we define the rescaled operator
\begin{align}\label{reesc.evol.op}
\mathcal{E}_{\epsilon}u(x)=\frac{1}{\epsilon^{2}}\left[(u\ast J_{\epsilon})(x)-u(x)\right],
\end{align}
Writing out the convolution, we have
\begin{align} \nonumber
\mathcal{E}_{\epsilon}u(x)=&\frac{1}{\epsilon^{2}}\left[(u\ast J_{\epsilon})(x)-u(x)\right] = \frac{1}{\epsilon^{2}}\left[\int\limits_{G}u(xy^{-1})J_{\epsilon}(y)dy-u(x)\right] \\ \nonumber
=& \frac{1}{\epsilon^{2}}\left[\int\limits_{G}u(xy^{-1})\frac{1}{\epsilon^{Q}}J\left(\delta_{\frac{1}{\epsilon}}y\right)dy-u(x)\right] \\ \nonumber
=& \frac{1}{\epsilon^{2}}\left[\int\limits_{G}u(x(\delta_{\epsilon}(y))^{-1})\frac{\epsilon^{Q}}{\epsilon^{Q}}J(y)dy-\int\limits_{G}J(y)u(x)dy\right]\\
=& \frac{1}{\epsilon^{2}}\int\limits_{G}\left[u(x(\delta_{\epsilon}(y^{-1})))-u(x)\right]J(y)dy. \label{reesc.evol.eqn}
\end{align}
\subsection{An evolution equation given by a kernel}\label{subsec:EvolEqn.kernel}
\par In \cite{MR} Molino and Rossi studied the integral operator
\begin{align*}
\mathcal{K_\epsilon}u(x)=\int\limits_{G}K_\epsilon(x,y)(u(y)-u(x))dy,
\end{align*}
for $G=\mathbb{R}^{n}$, where the kernel $K_\epsilon(x,y)$ is a positive function with compact support in $\Omega\times \Omega$ for $\Omega\subset G$ a bounded domain such that $0<\sup\limits_{y\in\Omega}K_\epsilon(x,y)=c_\epsilon(x)\in L^{\infty}(\Omega)$.
\par Following the ideas of Molino and Rossi, let us consider:
\begin{itemize}
\item A function $J$ as defined at the beginning of the section.
\item An $n_{1}\times n_{1}$ symmetric and positive definite matrix $\tilde A(x)=(a_{ij}(x))$, whose coefficients are smooth in $\overline\Omega$, with Cholesky factorization $\tilde A(x)=\tilde L(x)\tilde L^{t}(x)$, where $\tilde L(x)=(l_{ij}(x))$ and $\tilde L^{-1}(x)=(l^*_{ij}(x))$. Also, let $A(x)$ be the $n\times n$ matrix defined by blocks as follows: \begin{align*} A(x)= & \left( \begin{array}{c|c}\tilde A(x) & 0 \\ \hline 0 & \begin{array}{ccc} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{array} \end{array} \right). \end{align*} That is, $A(x)$ is the matrix $\tilde A(x)$ extended by the identity to size $n\times n$, and let $L(x)$ and $L^{t}(x)$ be similarly defined.
\item An $n\times n$ diagonal matrix $W(x)=\text{diag}(\tilde{b}_{1}(x),\dots,\tilde{b}_{n}(x))$, where $\tilde{b}_{i}(x)=b_{i}(x)$ if $1\le i\le n_{1}$, $\tilde{b}_{i}(x)=\frac{b_{i}(x)}{\epsilon^{2}}$ if $n_{1}<i\le n_{1}+n_{2}$ and $\tilde{b}_{i}(x)=1$ if $n_{1}+n_{2}<i\le n$.
\item A function $a:G\to \mathbb{R}$ defined by $a(x)=\sum\limits_{i=1}^{n} \phi_{i}(\exp^{-1}(x)) +M$, where $M>0$ is large enough to ensure $a(x)\ge \beta>0$ for $x$ belonging to an appropriate set $F'$ defined as
\begin{align}\label{F'} F'&=\{y\exp(\delta_{\epsilon}L(y)\exp^{-1}(z^{-1})):\ y\in\Omega,\ z\in F\}, \end{align}
where $F$ is the support of $J$.
\end{itemize}
Thus, we will work with the scaled kernels defined for each $\epsilon>0$ by
\begin{align}\label{kernel}
K_{\epsilon}(x,y) = & \frac{c(x)}{\epsilon^{Q+2}}a((\exp(E(x)\exp^{-1}(y^{-1}x)))^{-1})J(\exp(L^{-1}(x)\exp^{-1}(\delta_{\epsilon^{-1}}y^{-1}x))),
\end{align}
where for $x\in G$, $c(x)=2\left[C(J)M(\det(A(x)))^{\frac{1}{2}}\right]^{-1}$ and $E(x)=\frac{M}{2}W(x)A(x)^{-1}$. Let us observe that we understand the action of an $n\times n$ matrix $\textbf{M}$ on $\mathfrak{g}$ via the identification $\phi$ with $\mathbb{R}^{n}$ (with respect to the basis $\beta$) as follows: if $\textbf{M}=(m_{ij})$ and $X=\sum x_{i}X_{i}\in\mathfrak{g}$, $$\textbf{M}X=\sum\limits_{i=1}^{n}\left(\sum_{k=1}^{n} m_{ik}x_{k}\right)X_{i}.$$
Also, since the matrices $A(x)$, $L(x)$ and $W(x)$ are defined by blocks (with corresponding blocks of the same size), and the matrix which defines $\delta_{\epsilon}$ is also defined by blocks (again, of the same corresponding sizes) as a constant times the identity on each block, we have that $\delta_{\epsilon}$ commutes with all of them.
\par Hence, for these rescaled kernels we will study the integral operators
\begin{align}\label{Kepsilon}
\mathcal{K}_{\epsilon}u(x)=\frac{c(x)}{\epsilon^{Q+2}}\int\limits_{G}a((\exp(E(x)\exp^{-1}(y^{-1}x)))^{-1})J(\exp(L^{-1}(x)\exp^{-1}(\delta_{\epsilon^{-1}}y^{-1}x)))(u(y)-u(x))dy.
\end{align}
More precisely, we will prove that $\mathcal{K}_{\epsilon}u$ approximates $\mathcal{K}v$ where $\mathcal{K}$ is the second order operator defined by
\begin{align}\label{operator}
\mathcal{K}(v)(x) = & \sum\limits_{i,j=1}^{n_{1}}a_{ij}(x)X_{i}X_{j}v(x)+\sum\limits_{i=1}^{n_{1}+n_{2}} b_{i}(x)X_{i}v(x).
\end{align}
\subsection{A reaction-diffusion equation}\label{subsec:SYL}
\par In \cite{SLY} the authors work in the same spirit as Molino and Rossi to approximate the solutions of the Fokker-Planck equation by solutions of problems defined by rescaled kernels, which in our present context take, respectively, the forms:
\begin{align}\label{operator1}
\mathcal{L}(v)(x)=\sum\limits_{i=1}^{n_{1}}X_{i}X_{i}(a(x)v(x))
\end{align}
\begin{align}\label{Lepsilon}
\mathcal{L}_{\epsilon}(u)(x)=&\frac{2C(J)}{\epsilon^{Q+2}} \int\limits_{G} J(\delta_{\epsilon^{-1}}y^{-1}x)[a(y)u(y)-a(x)u(x)]dy,
\end{align}
with the coefficient $a\in C^{\infty}(G)$.
\section{Existence and uniqueness of solutions}\label{sect3.1}
We shall now derive the existence and uniqueness of
solutions of
\begin{align}\label{pro2}
\left\{ \begin{array}{rclcc}
u^{\epsilon}_{t}(x,t) & = & \int_G K_\epsilon(x,y)\left(u^{\epsilon}(y,t)-u^{\epsilon}(x,t)\right)dy & \mbox{ for } & (x,t)\in\Omega\times[0,T],\\
u^{\epsilon}(x,t) & = & g(x,t) & \mbox{ for } & x\notin\Omega, t\in[0,T],\\
u^{\epsilon}(x,0) & = & u_{0}(x) & \mbox{ for } & x\in\Omega,
\end{array}
\right.
\end{align}
which is a consequence of Banach's fixed point theorem. The main arguments are basically the same as in \cite{CCR} or \cite{CER}, but we write them here to make the paper self-contained. Let us also remark that the analogous results for the operator $\mathcal{L}_{\epsilon}$ hold, and the proofs are completely similar.
\par Recall the definition of the set $F'$ \eqref{F'}.
\begin{thm}\label{31}
Let $u_0\in L^1(\Omega)$ and let $J$ and $K_\epsilon$ be defined as in Section \ref{subsec:EvolEqn.kernel}, with $K_\epsilon(x,y)\leq C_\epsilon(x)\in L^\infty(\Omega)$ for $(x,y)\in\Omega\times F'$. Then there exists a unique solution $u$ of problem \eqref{pro2} such that $u\in C([0,\infty),L^1(\Omega))$.
\end{thm}
\begin{proof}
We will use Banach's Fixed Point Theorem. For $t_0 > 0$ let us consider the Banach space
\begin{align*}
X_{t_0}:=\left\{w\in C([0,t_0]; L^1 (\Omega))\right\}
\end{align*}
with the norm
\begin{align*}
|||w|||:=\max_{0\leq t\leq t_0}\|w(\cdot,t)\|_{L^1(\Omega)}.
\end{align*}
Our aim is to obtain the solution of \eqref{pro2} as a fixed point of the operator $\mathfrak{T} : X_{ t_ 0} \rightarrow X_{ t_ 0}$ defined by
\begin{equation*}
\mathfrak{T} (w)(x,t):=
\begin{cases}
w_0(x)+\displaystyle\int_{0}^{t}\int_GK_\epsilon(x,y)\left(w(y,r)-w(x,r)\right)\,dydr \qquad & \text{if}\,\, x \in \Omega,\\
g(x,t) & \text{if}\,\, x \notin \Omega,
\end{cases}
\end{equation*}
where $w_0(x)=w(x,0)$.
Let $w,\, v \in X_{t_ 0}$. Then there exists a constant $C$ depending on $K_\epsilon$ and $\Omega$ such that
\begin{align}\label{3.1}
|||\mathfrak{T}(w)-\mathfrak{T}(v)|||\leq Ct_0|||w-v|||+\|w_0-v_0\|_{L^1(\Omega)}.
\end{align} Indeed, since $(w-v)(x,t)=0$ if $x\notin\Omega$, it follows that
\begin{align*}
&\int_{\Omega}\left|\mathfrak{T}(w)-\mathfrak{T}(v)\right|(x,t)dx\leq\int_{\Omega}|w_0-v_0|(x)dx\\
&\qquad \quad +\int_{\Omega}\left|\int_{0}^{t}\int_G K_\epsilon(x,y)\left((w-v)(y,r)-(w-v)(x,r)\right)\,dydr\right|dx\\
&\qquad \leq\|w_0-v_0\|_{L^1(\Omega)}+t\|C_\epsilon(x)\|_{L^\infty(\Omega)} 2 |\Omega| |||(w-v)|||.
\end{align*}
Taking the maximum in $t$, \eqref{3.1} follows.
Now, taking $v_0\equiv v\equiv 0$ in \eqref{3.1} we get that $\mathfrak{T}(w) \in C([0,t_0]; L^1 (\Omega))$
and this says that $\mathfrak{T}$ maps $X_{t_0}$ into $X_{t_0}$.
Finally, we consider $X_{t_0,u_0}=\{u\in X_{t_0}:\,u(x,0)=u_0(x)\}$. Then $\mathfrak{T}$ maps $X_{t_0,u_0}$ into $X_{t_0,u_0}$, and taking $t_ 0$ such that $2\|C_\epsilon(x)\|_{L^\infty(\Omega)}|\Omega|t_ 0 < 1$
we can apply Banach's fixed point theorem in the interval $[0,t_0]$, because $ \mathfrak{T}$ is a strict contraction in $X_{ t_ 0,u_0}$. From this we get the existence and uniqueness of the solution in $[0,t_0]$. To extend the solution to $[0,\infty)$ we may take as initial datum $u(x, t_ 0 ) \in L^1 (\Omega)$ and obtain a solution up to $[0, 2t_ 0 ]$. Iterating this procedure we get a solution defined in $[0,\infty)$.
\end{proof}
In order to prove a comparison principle for problem \eqref{pro2} we need to introduce the definition of sub- and supersolutions.
\begin{defin}
A function $u \in C([0, T ]; L^1 (\Omega))$ is a supersolution of \eqref{pro2} if
\begin{equation}\label{pro2sub}
\begin{cases}
u_t(x,t)\geq \int_G K_\epsilon(x,y)\left(u(y,t)-u(x,t)\right)\,dy, \qquad&\text{for}\,\, x \in \Omega \,\, \text{and}\,\, t>0,\\
u(x,t)\geq g(x,t), &\text{for}\,\, x \notin \Omega \,\, \text{and}\,\, t>0,\\
u(x,0)\geq u_0(x), &\text{for}\,\, x \in \Omega.
\end{cases}
\end{equation}
As usual, subsolutions are defined analogously by reversing the inequalities.
\end{defin}
\begin{lem}\label{com}
Let $u_0 \in C(\overline\Omega)$, $u_0 \geq 0$, and let $u \in C(\overline\Omega\times[0, T ])$ be a supersolution of
\eqref{pro2} with $g \geq 0$. Then, $u \geq 0$.
\end{lem}
\begin{proof}
Assume, on the contrary, that $u(x, t)$ is negative at some point. Let $v(x, t) = u(x, t) + \gamma t$ with $\gamma>0$ so small that $v$ is still negative somewhere. Then, if $(x_0, t_ 0 )$ is a point where $v$ attains its negative minimum, it holds that $t_ 0 > 0$ and
\begin{align*}
v_t(x_0, t_ 0 )&=u_t(x_0, t_0 )+\gamma>\int_G K_\epsilon(x_0,y)\left(u(y,t_0)-u(x_0,t_0)\right)dy\\
&=\int_G K_\epsilon(x_0,y)\left(v(y,t_0)-v(x_0,t_0)\right)dy\geq0.
\end{align*}
This contradicts that $(x_0 , t_ 0 )$ is a minimum of $v$. Thus, $u \geq 0$.
\end{proof}
\begin{cor}\label{comp}
Let $K_\epsilon\in L^\infty (G)$. Let $u_0$ and $v_0$ be in $L^1(\Omega)$ with $u_0 \geq v_ 0$, and let the functions
$g, h \in L^\infty ((0, T ); L^1 (G\setminus\Omega))$ with $g \geq h$. Let $u$ be a solution of \eqref{pro2} with $u(x, 0) = u _0(x)$ and Dirichlet datum $g$, and let $v$ be a solution of \eqref{pro2} with $v(x, 0) = v_0(x)$
and datum $h$. Then, $u \geq v$ a.e. $\Omega$.
\end{cor}
\begin{proof}
Let $w = u - v$. Then, $w$ is a supersolution with initial datum $u_ 0 - v_ 0 \geq 0$ and boundary datum $g - h \geq 0$. Using the continuity of the solutions with respect to the data and the fact that $K_\epsilon\in L^\infty (G)$, we may assume that $u, v \in C(\Omega\times [0, T ])$. By Lemma \ref{com} we obtain that $w = u - v \geq 0$. So the corollary is proved.
\end{proof}
\begin{cor}\label{comp2}
Let $u \in C(\Omega\times [0, T ])$ (respectively, $v$) be a supersolution (respectively,
subsolution) of \eqref{pro2}. Then, $u \geq v$.
\end{cor}
\begin{proof}
It follows from the proof of the previous corollary.
\end{proof}
\section{Proof of the Main Theorems}\label{sec:Proofs}
\par The following lemmas are the key to the proofs of Theorems \ref{thm:MainMR} and \ref{thm:MainSYL}. To illustrate the technique, we first prove a result concerning the evolution problem stated in Section \ref{subsec:EvolEqn}.
\begin{lem}\label{lem:aproxSublap} Let $\Omega\subset G$ be a bounded domain (that is, open and connected), and let $v\in C^{2+\alpha}(\overline{\Omega})$ for some $\alpha\ge 0$. Then there exist constants $c_{1}$ and $c_{2}$ such that for each $\epsilon>0$
\begin{align*} \left|\left|\mathcal{E}_{\epsilon}(v)-\frac{c_{1}}{2}\mathcal{J}(v)\right|\right|_{L^{\infty}(\Omega)} \le & c_{2} \epsilon^{\alpha},
\end{align*}
where $\mathcal{J}(v)(x)=\sum\limits_{i=1}^{n_{1}}X_{i}^{2}v(x)$ denotes minus the subLaplacian.
\end{lem}
\begin{proof}
\par Let us begin by writing the formula that defines $\mathcal{E}_{\epsilon}$ by means of the global chart given by the fixed basis of the stratified Lie algebra $\mathfrak{g}$: for $x\in\Omega$, since
\begin{align*}
(\delta_{\epsilon}(y))^{-1} = & \exp(-\epsilon t_{1}X_{1} - \dots - \epsilon t_{n_{1}}X_{n_{1}}-\epsilon^{2}t_{n_{1}+1}X_{n_{1}+1}-\dots-\epsilon^{2}t_{n_{1}+n_{2}}X_{n_{1}+n_{2}}-\dots-\epsilon^{m}t_{n}X_{n}),
\end{align*}
for the coordinates $(t_{1},\dots,t_{n})\in\mathbb{R}^{n}$ adapted to the basis, we can write
\begin{align*}
\mathcal{E}_{\epsilon}v(x) = & \frac{1}{\epsilon^{2}}\int\limits_{G}\left[v(x(\delta_{\epsilon}(y^{-1})))-v(x)\right]J(y)dy \\
= & \frac{1}{\epsilon^{2}} \int\limits_{\mathbb{R}^{n}} \left( v\left( x\exp\left( -\epsilon t_{1}X_{1} -\dots-\epsilon^{m}t_{n}X_{n}\right) \right) -v(x) \right) J(\exp(t_{1}X_{1}+\dots+t_{n}X_{n})) dt_{1}\dots dt_{n}.
\end{align*}
\par Thus, from the Taylor expansion \eqref{simple.Taylor.expansion} discussed in section \ref{sec:Prelim},
\begin{align*}
v\left( x\exp\left( -\epsilon t_{1}X_{1} -\dots-\epsilon^{m}t_{n}X_{n}\right) \right) -v(x) = & -\epsilon \sum\limits_{i=1}^{n_{1}+n_{2}} t_{i}X_{i}v(x)+\frac{\epsilon^{2}}{2} \sum\limits_{i,j=1}^{n_{1}} t_{i}t_{j}X_{i}X_{j}v(x)+o(\epsilon^{2}).
\end{align*}
\par Therefore,
\begin{align*}
\mathcal{E}_{\epsilon}v(x) = & \frac{1}{\epsilon^{2}} I + \frac{1}{\epsilon^{2}} II +\frac{o(\epsilon^{2})}{\epsilon^{2}} ||J||_{L^{1}(G)},
\end{align*}
where from properties \eqref{J.x}, \eqref{J.x^2} and \eqref{J.deltaij} we can compute
\begin{align*}
I = & - \epsilon \sum\limits_{i=1}^{n_{1}+n_{2}}X_{i}v(x)\int\limits_{\mathbb{R}^{n}} J(\exp(t_{1}X_{1}+\dots +t_{n}X_{n}))t_{i}dt_{1}\dots dt_{n} = 0, \\
II = & \frac{\epsilon^{2}}{2} \sum\limits_{i,j=1}^{n_{1}} X_{i}X_{j}v(x) \int\limits_{\mathbb{R}^{n}} J(\exp(t_{1}X_{1}+\dots +t_{n}X_{n})) t_{i}t_{j} dt_{1}\dots dt_{n} = C(J) \frac{\epsilon^{2}}{2} \sum\limits_{i=1}^{n_{1}} X_{i}^{2}v(x).
\end{align*}
\par Finally,
\begin{align*}
\mathcal{E}_{\epsilon}v(x)= &\frac{C(J)}{2}\mathcal{J}v(x)+\frac{o(\epsilon^{2})}{\epsilon^{2}} ||J||_{L^{1}(G)},
\end{align*}
hence, with $c_{1}=C(J)$ and $c_{2}=||J||_{L^{1}(G)}|\Omega|$,
\begin{align*}
\left|\left|\mathcal{E}_{\epsilon}v-\frac{c_{1}}{2}\mathcal{J}v\right|\right|_{L^{\infty}(\Omega)} \le & c_{2}\epsilon^{\alpha}.
\end{align*}
\end{proof}
\begin{lem}\label{lem:cuentas} Let $\Omega\subset G$ be a bounded domain (that is, open and connected), and let $v\in C^{2+\alpha}(\overline{\Omega})$ for some $\alpha\ge 0$. Then, with the notation above, there exists a constant $c$ such that for each $\epsilon>0$
\begin{align*} \left|\left|\mathcal{K}_{\epsilon}(v)-\mathcal{K}(v)\right|\right|_{L^{\infty}(\Omega)} \le & c \epsilon^{\alpha},
\end{align*}
where $\mathcal{K}$ is the operator defined in \eqref{operator}.
\end{lem}
\begin{proof}
\par By changing variables via $z=\exp(L^{-1}(x)\exp^{-1}(\delta_{\epsilon^{-1}}(y^{-1}x)))$, so that $y=x\exp(\delta_{\epsilon}L(x)\exp^{-1}(z^{-1}))$ and $dy=\epsilon^{Q}\det(L(x))dz$
for $\epsilon>0$, the rescaled kernel operator becomes
\begin{align*}
\mathcal{K}_{\epsilon}u(x)=\frac{c(x)\det(L(x))}{\epsilon^{2}}\int\limits_{G}&a\left(\left(\exp\frac{M}{2}\delta_{\epsilon}W(x)(L^{t}(x))^{-1}\exp^{-1}z\right)^{-1}\right)J(z)\\
& (u(x\exp(\delta_{\epsilon}L(x)\exp^{-1}(z^{-1})))-u(x))dz,
\end{align*}
and by definition of the function $a$ it finally assumes the form
\begin{align*}
\mathcal{K}_{\epsilon}u(x)=\frac{2}{\epsilon^{2}C(J)M} \int\limits_{G}&\left(-\frac{M}{2}\sum\limits_{j=1}^{n}\epsilon^{\lambda_{j}}\tilde{b}_{j}(x)\sum\limits_{h=1}^{n}l_{hj}^{\ast}(x)\phi_{h}(\exp^{-1}z)+M\right) J(z)\\
& (u(x\exp(\delta_{\epsilon}L(x)\exp^{-1}(z^{-1})))-u(x))dz.
\end{align*}
\par Now let us write the formula in terms of the global chart as we did before (recall the proof of Lemma \ref{lem:aproxSublap}):
\begin{align*}
\mathcal{K}_{\epsilon}u(x)=\frac{2}{\epsilon^{2}C(J)M} \int\limits_{\mathbb{R}^{n}}&\left(-\frac{M}{2}\sum\limits_{j=1}^{n}\epsilon^{\lambda_{j}}\tilde{b}_{j}(x)\sum\limits_{h=1}^{n}l_{hj}^{\ast}(x)t_{h}+M\right) J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)\\
& \left(u\left(x\exp\left(-\sum\limits_{i=1}^{n}\epsilon^{\lambda_{i}}\sum\limits_{k=1}^{n}l_{ik}(x)t_{k}X_{i}\right)\right)-u(x)\right)dt_{1}\dots dt_{n},
\end{align*}
where $t_h=\phi_{h}(\exp^{-1}z)$.
\par For the last factor we apply the Taylor expansion of homogeneous degree $2$ (recall formula \eqref{simple.Taylor.expansion})
\begin{align*}
& u\left(x\exp\left(-\sum\limits_{i=1}^{n}\epsilon^{\lambda_{i}}\sum\limits_{k=1}^{n}l_{ik}(x)t_{k}X_{i}\right)\right)-u(x) \\
& = -\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}\sum\limits_{k=1}^{n} l_{ik}(x)t_{k} X_{i}u(x) + \frac{\epsilon^{2}}{2} \sum\limits_{i,j=1}^{n_{1}}\left(\sum\limits_{k=1}^{n} l_{ik}(x)t_{k}\right)\left(\sum\limits_{h=1}^{n}l_{jh}(x)t_{h}\right)X_{i}X_{j}u(x) + o(\epsilon^{2+\alpha}).
\end{align*}
Then we can split as follows
\begin{align*}
&\mathcal{K}_{\epsilon}u(x)=\mathcal{K}_{\epsilon,1}u(x)+\mathcal{K}_{\epsilon,2}u(x)+o(\epsilon^\alpha),
\end{align*}
where by extensive use of properties \eqref{J.x}, \eqref{J.x^2} and \eqref{J.deltaij}, we have
\begin{align*}
\mathcal{K}_{\epsilon,1}u(x) = & \frac{1}{\epsilon^{2}C(J)}\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}\sum\limits_{k=1}^{n}l_{ik}(x)\sum\limits_{j=1}^{n}\epsilon^{\lambda_{j}}\tilde{b}_{j}(x)\sum\limits_{h=1}^{n}l^{\ast}_{hj}(x) \\
& \left(\,\int\limits_{\mathbb{R}^{n}}J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)t_kt_hdt_{1}\dots dt_{n}\right)X_{i}u(x)\\
= & \frac{1}{\epsilon^{2}}\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}\sum\limits_{j=1}^{n}\epsilon^{\lambda_{j}}\tilde{b}_{j}(x)\sum\limits_{k=1}^{n}l_{ik}(x)l^{\ast}_{kj}(x)X_{i}u(x) \\
= & \frac{1}{\epsilon^{2}}\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}\sum\limits_{j=1}^{n}\epsilon^{\lambda_{j}}\tilde{b}_{j}(x)\delta_{ij}X_{i}u(x)\\
= & \frac{1}{\epsilon^{2}}\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{2\lambda_{i}}\tilde{b}_{i}(x)X_{i}u(x) \\
= &\sum\limits_{i=1}^{n_{1}+n_{2}}b_{i}(x)X_{i}u(x);
\end{align*}
\begin{align*}
\mathcal{K}_{\epsilon,2}u(x) = & \frac{2}{\epsilon^{2}C(J)} \sum\limits_{i,j=1}^{n_{1}}\frac{\epsilon^{2}}{2}\left(\sum\limits_{k=1}^{n}l_{ik}(x)\sum\limits_{h=1}^{n}l_{jh}(x)\right) \\
& \left(\int\limits_{\mathbb{R}^{n}}J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)t_kt_hdt_{1}\dots dt_{n}\right)X_{i}X_{j}u(x) \\
= & \sum\limits_{i,j=1}^{n_{1}}a_{ij}(x)X_{i}X_{j}u(x).
\end{align*}
\par This completes the proof.
\end{proof}
\begin{lem} Let $G$ be a stratified Lie group, and let $u\in C^{2+\alpha}(G)$ for some $\alpha\ge 0$. Then, with the notation above, there exists a constant $c$ such that for each $\epsilon>0$
\begin{align*} ||\mathcal{L}_{\epsilon}(u)-\mathcal{L}(u)||_{L^{\infty}(G)} \le c\epsilon^{\alpha},
\end{align*}
where $\mathcal{L}$ is the operator defined in \eqref{operator1}.
\end{lem}
\begin{proof}
\par Let us rewrite the operators as follows:
\begin{align*}
\mathcal{L}_{\epsilon}(u)(x)=&\frac{2C(J)}{\epsilon^{Q+2}} \int\limits_{G} J(\delta_{\epsilon^{-1}}y^{-1}x)a(y)[u(y)-u(x)]dy+\frac{2C(J)}{\epsilon^{Q+2}} \int\limits_{G} J(\delta_{\epsilon^{-1}}y^{-1}x)[a(y)-a(x)]u(x)dy.
\end{align*}
\par As usual, let us first change variables according to $z=\delta_{\epsilon^{-1}}(y^{-1}x)$, hence $y=x\delta_{\epsilon}(z^{-1})$ and $dy=\epsilon^{Q}dz$, and then write everything in coordinates:
\begin{align*}
\mathcal{L}_{\epsilon}(u)(x) = &\frac{2C(J)}{\epsilon^{2}} \int\limits_{G} J(z)a(x\delta_{\epsilon}z^{-1})[u(x\delta_{\epsilon}z^{-1})-u(x)]dz+\frac{2C(J)}{\epsilon^{2}} \int\limits_{G} J(z)[a(x\delta_{\epsilon}z^{-1})-a(x)]u(x)dz \\
= & \frac{2C(J)}{\epsilon^{2}} \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)a\left(x\exp\left(-\sum\limits_{k=1}^{n}\epsilon^{\lambda_{k}}t_{k}X_{k}\right)\right) \\
& \left[u\left(x\exp\left(-\sum\limits_{i=1}^{n}\epsilon^{\lambda_{i}}t_{i}X_{i}\right)\right)-u(x)\right]dt_{1}\dots dt_{n} \\
& +\frac{2C(J)}{\epsilon^{2}} \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)\left[a\left(x\exp\left(-\sum\limits_{i=1}^{n}\epsilon^{\lambda_{i}}t_{i}X_{i}\right)\right)-a(x)\right]u(x)dt_{1}\dots dt_{n} = I + II,
\end{align*}
where $t_k=\phi_{k}(\exp^{-1}z)$.
The next step is to apply Taylor decomposition of homogeneous degree $2$ (recall formula \eqref{simple.Taylor.expansion}) to $u$ in $I$ and to $a$ in $II$:
\begin{align*}
u\left(x\exp\left(-\sum\limits_{i=1}^{n}\epsilon^{\lambda_{i}}t_{i}X_{i}\right)\right)-u(x) = -\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}t_{i} X_{i}u(x) + \frac{\epsilon^{2}}{2} \sum\limits_{i,j=1}^{n_{1}}t_{i}t_{j}X_{i}X_{j}u(x) + o(\epsilon^{2+\alpha}),
\end{align*}
hence
\begin{align*}
I = & \frac{2C(J)}{\epsilon^{2}} \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)a\left(x\exp\left(-\sum\limits_{k=1}^{n}\epsilon^{\lambda_{k}}t_{k}X_{k}\right)\right)\left(-\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}t_{i} X_{i}u(x) \right)dt_{1}\dots dt_{n} \\
& + C(J)\int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)a\left(x\exp\left(-\sum\limits_{k=1}^{n}\epsilon^{\lambda_{k}}t_{k}X_{k}\right)\right)\left( \sum\limits_{i,j=1}^{n_{1}}t_{i}t_{j}X_{i}X_{j}u(x)\right)dt_{1}\dots dt_{n} \\
& + o(\epsilon^{\alpha}) 2C(J)\int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)a\left(x\exp\left(-\sum\limits_{k=1}^{n}\epsilon^{\lambda_{k}}t_{k}X_{k}\right)\right)dt_{1}\dots dt_{n} = I_{1}+I_{2}+o(\epsilon^\alpha),
\end{align*}
and by applying the Taylor formula again to $a$, this time of homogeneous degree $1$, and making extensive use of formulas \eqref{J.x}, \eqref{J.x^2} and \eqref{J.deltaij}, it follows that
\begin{align*}
I_{1} = & \frac{2C(J)}{\epsilon^{2}} \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)\left(a(x)-\sum\limits_{k=1}^{n_{1}}\epsilon t_{k}X_{k}(a)(x)+o(\epsilon)\right)\left(-\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}t_{i} X_{i}(u)(x) \right)dt_{1}\dots dt_{n} \\
= & \frac{2C(J)}{\epsilon^{2}} \sum\limits_{i=1}^{n_{1}+n_{2}} \sum\limits_{k=1}^{n_{1}}\epsilon^{\lambda_{i}+1} X_{k}(a)(x) X_{i}(u)(x)\int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right) t_{k}t_{i} dt_{1}\dots dt_{n} = 2\sum\limits_{i=1}^{n_{1}} X_{i}(a)(x) X_{i}(u)(x), \\
I_{2} = & C(J)\int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)\left(a(x)-\sum\limits_{k=1}^{n_{1}}\epsilon t_{k}X_{k}(a)(x)+o(\epsilon)\right)\left( \sum\limits_{i,j=1}^{n_{1}}t_{i}t_{j}X_{i}X_{j}u(x)\right)dt_{1}\dots dt_{n} \\
= & C(J)\sum\limits_{j,i=1}^{n_{1}}a(x)X_{j}X_{i}(u)(x) \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)t_{i}t_{j} dt_{1}\dots dt_{n}+o(\epsilon)= \sum\limits_{i=1}^{n_{1}}a(x)X_{i}X_{i}(u)(x)+o(\epsilon),
\end{align*}
and finally,
\begin{align*}
II = & \frac{2C(J)}{\epsilon^{2}} \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)\left[-\sum\limits_{i=1}^{n_{1}+n_{2}}\epsilon^{\lambda_{i}}t_{i} X_{i}(a)(x) + \frac{\epsilon^{2}}{2} \sum\limits_{i,j=1}^{n_{1}}t_{i}t_{j}X_{i}X_{j}(a)(x) + o(\epsilon^{3})\right]u(x)dt_{1}\dots dt_{n} \\
= & C(J) \sum\limits_{j,i=1}^{n_{1}} X_{j}X_{i}(a)(x)u(x) \int\limits_{\mathbb{R}^{n}} J\left(\exp\sum\limits_{r=1}^{n}t_{r}X_{r}\right)t_{i}t_j dt_{1}\dots dt_{n} +o(\epsilon)= \sum\limits_{i=1}^{n_{1}} X_{i}X_{i}(a)(x)u(x)+o(\epsilon). \\
\end{align*}
\par Adding up and using the Leibniz rule $X_{i}X_{i}(au)=aX_{i}X_{i}(u)+2X_{i}(a)X_{i}(u)+X_{i}X_{i}(a)u$, we conclude that $\mathcal{L}_{\epsilon}(u)(x)=I_{1}+I_{2}+II+o(\epsilon^{\alpha})=\sum\limits_{i=1}^{n_{1}}X_{i}X_{i}(au)(x)+o(\epsilon^{\alpha})=\mathcal{L}(u)(x)+o(\epsilon^{\alpha})$, which gives the claim.
\end{proof}
\par Next we turn to the proof of Theorem \ref{thm:MainMR}. Let us remark that the proof of Theorem \ref{thm:MainSYL} follows the same lines.
\begin{proof}[Proof of Theorem \ref{thm:MainMR}]
\par Let $v(\cdot,t)\in C^{2+\alpha}(\Omega)$ be a solution of problem \eqref{0.2}, and define an extension $\tilde{v}$ of $v$ in the space $C^{2+\alpha,1+\alpha/2}(G\times [0,T])$ such that \begin{equation}\label{5.1}%
\begin{cases}
\tilde{v}_{t}(x,t)=\mathcal{K}(\tilde{v})(x,t),\qquad & x\in \Omega,\quad t>0,\\
\tilde{v}(x,t)=\tilde{g}(x,t), & x\notin \Omega,\quad t>0,\\
\tilde{v}(x,0)=u_{0}(x), & x\in\Omega,
\end{cases}
\end{equation}
where $\tilde{g}$ is a smooth function which satisfies $\tilde{g}(x,t)=g(x,t)$ if $x\in\partial\Omega$ and $\tilde{g}(x,t)=g(x,t)+o(\epsilon)$ if $x\approx\partial\Omega$.
\par Let us define now the difference $w^{\epsilon}(x,t)=\tilde{v}(x,t)-u^{\epsilon}(x,t)$. Thus defined, $w^{\epsilon}$ satisfies
\begin{equation}\label{5.2}%
\begin{cases}
w^{\epsilon}_{t}(x,t)=\mathcal{K}(\tilde{v})(x,t)-\mathcal{K}_{\epsilon}\tilde{v}(x,t)+\mathcal{K}_{\epsilon}w^{\epsilon}(x,t),\qquad & x\in \Omega,\quad t>0,\\
w^{\epsilon}(x,t)=g(x,t)-\tilde{g}(x,t), & x\notin \Omega,\quad t>0,\\
w^{\epsilon}(x,0)=0, & x\in\Omega.
\end{cases}
\end{equation}
Let $\overline{w}(x,t)=K_{1}\theta(\epsilon)t+K_{2}\epsilon$, with $K_{1},K_{2}>0$ independent of $\epsilon$ and $\theta(\epsilon)=\epsilon^{\alpha}$ the rate given by Lemma \ref{lem:cuentas}; we claim that $\overline{w}$ is a supersolution. From Lemma \ref{lem:cuentas} and the fact that $\mathcal{K}_{\epsilon}(\overline{w})=0$ (since $\overline{w}(x,t)$ does not depend on $x$), it follows that
\begin{align*}
\overline{w}_{t}(x,t)=K_{1}\theta(\epsilon)\ge\mathcal{K}\tilde{v}(x,t)-\mathcal{K}_{\epsilon}\tilde{v}(x,t)+\mathcal{K}_{\epsilon}w^{\epsilon}(x,t).
\end{align*}
Also, since $\overline{w}(x,0)>0$, and by the definition of $\tilde{g}$, we have that
\begin{align*}
\overline{w}(x,t)\ge K_{2}\epsilon \ge\theta(\epsilon)
\end{align*}
for $x\in\Omega^{c}$, $x\approx\partial\Omega$, $t>0$.
From the comparison principle (Corollary \ref{comp2}) we get that $\tilde{v}-u^{\epsilon}\le \overline{w}(x,t)=K_{1}\theta(\epsilon)t+K_{2}\epsilon$.
\par Applying the same arguments to $\underline{w}(x,t)=-\overline{w}(x,t)$, we obtain that $\underline{w}(x,t)$ is a subsolution of problem \eqref{5.2}, and again by the comparison principle,
\begin{align*}
-K_{1}\theta(\epsilon)t-K_{2}\epsilon\le\tilde{v}-u^{\epsilon}\le K_{1}\theta(\epsilon)t+K_{2}\epsilon.
\end{align*}
Therefore, since $0<\alpha<1$,
\begin{align*}
||\tilde{v}-u^{\epsilon}||_{L^{\infty}(\Omega\times[0,T])} \le K_{1}\theta(\epsilon)T+K_{2}\epsilon \le c\,\epsilon^{\alpha}
\end{align*}
for some constant $c$ independent of $\epsilon$, which proves the theorem, since $\tilde{v}=v$ in $\Omega$.
\end{proof}
\section{Implementation Details of Dropout Methods on Recommendation Models}\label{append:implementation}
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{figs/BPR_1.png}
\caption{Dropout on BPR.}
\label{fig:BPR_1}
\end{figure}
We first introduce the criterion we adopt to select the implementations of dropout methods, then we introduce the implementation details of dropout methods on each recommendation model.
\subsection{Criterion of the Implementations}
For dropping model structures, we implement dropout according to the original papers of the recommendation models: NFM \cite{he2017neural}, GRU4Rec \cite{hidasi2015session}, SASRec \cite{kang2018self}, and LightGCN \cite{he2020lightgcn}. Because dropping model structure is a universal type of dropout, all the neural recommendation models have corresponding implementations. The original paper of BPR \cite{rendle2009bpr} was published before dropout was proposed, so we randomly drop the parameters of the BPR model, which is consistent with the definition of dropping model structure. Our implementation of dropping graph information also follows the original paper of LightGCN, which refers its implementation to NGCF \cite{wang2019neural}.
For dropping embeddings, we implement dropout according to the original papers (ACCM \cite{shi2018attention} and AFS \cite{shi2019adaptive}) where embedding dropout was first proposed as a formal method.
For dropping input information, we implement dropout according to the definition of “input information” in recommendation models, i.e., user ids, item ids, and features.
To sum up, if the original paper of a recommendation model has implemented a certain kind of dropout, then our implementation is consistent with that paper. Otherwise, our implementation is consistent with the definition of the corresponding type of dropout, which is also the most popular implementation strategy. It is worth noting that many dropout methods in the aforementioned four categories are highly specific to model structures and application scenarios, so our implementation strategy guarantees, to the greatest extent possible, the consistency of dropout methods across different models, making the results more comparable.
Following subsections are the implementation details of dropout methods on each recommendation model.
\subsection{Dropout on BPR}
\noindent $\bullet$ \textbf{Drop model structure}: standard dropout, achieved by adding a dropout layer after the user and item embedding matrix.
\noindent $\bullet$ \textbf{Drop input information}: randomly set some of the user and item numbers in each batch to random numbers.
\noindent $\bullet$ \textbf{Drop embedding}: randomly set the user and item embeddings in each batch to random embeddings.
The schematic diagram is shown in figure \ref{fig:BPR_1}. The \ding{172} in the figure indicates the dropout of model structure, \ding{173} indicates the dropout of input information, and \ding{174} indicates the dropout of embeddings.
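To make the three variants concrete, below is a minimal PyTorch-style sketch of dropout on BPR. It is illustrative only: the class and argument names are ours, not from any released codebase, and drawing replacements uniformly at random is one possible choice.
\begin{verbatim}
import torch
import torch.nn as nn

class BPRWithDropout(nn.Module):
    def __init__(self, n_users, n_items, dim, p=0.2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.n_users, self.n_items, self.p = n_users, n_items, p
        self.struct_drop = nn.Dropout(p)  # (1) drop model structure

    def forward(self, users, items, mode="structure"):
        if self.training and mode == "input":
            # (2) drop input information: replace some ids by random ids
            mask = torch.rand(users.shape, device=users.device) < self.p
            users = torch.where(mask,
                                torch.randint_like(users, self.n_users), users)
            items = torch.where(mask,
                                torch.randint_like(items, self.n_items), items)
        u, v = self.user_emb(users), self.item_emb(items)
        if self.training and mode == "embedding":
            # (3) drop embeddings: replace some rows by random embeddings
            mask = torch.rand(u.size(0), 1, device=u.device) < self.p
            u = torch.where(mask, torch.randn_like(u), u)
            v = torch.where(mask, torch.randn_like(v), v)
        elif mode == "structure":
            u, v = self.struct_drop(u), self.struct_drop(v)
        return (u * v).sum(-1)  # preference score fed to the BPR loss
\end{verbatim}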
\subsection{Dropout on NFM}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figs/NFM_1.png}
\caption{Dropout on NFM.}
\label{fig:NFM_1}
\end{figure}
\noindent $\bullet$ \textbf{Drop model structure}: standard dropout, achieved by adding a dropout layer after the ReLU layer in the deep part.
\noindent $\bullet$ \textbf{Drop input information}: randomly drop the attribute information of users and items in each batch by setting it to random values.
\noindent $\bullet$ \textbf{Drop embedding}: randomly drop the embeddings corresponding to the attribute information of users and items in each batch by setting them to random embeddings.
The schematic diagram is shown in figure \ref{fig:NFM_1}. The \ding{172} in the figure indicates the dropout of model structure, \ding{173} indicates the dropout of input information, and \ding{174} indicates the dropout of embeddings.
\subsection{Dropout on GRU4Rec}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/GRU4Rec_1.png}
\caption{Dropout on GRU4Rec.}
\label{fig:GRU4Rec_1}
\end{figure}
\noindent $\bullet$ \textbf{Drop model structure}: random dropout of feed-forward connections, following Zaremba et al. \cite{zaremba2014recurrent}.
\noindent $\bullet$ \textbf{Drop input information}: randomly set some of the user and item numbers in each batch to random numbers.
\noindent $\bullet$ \textbf{Drop embedding}: randomly set the user and item embeddings in each batch to random embeddings.
The schematic diagram is shown in figure \ref{fig:GRU4Rec_1}. The \ding{172} in the figure indicates the dropout of model structure, \ding{173} indicates the dropout of input information, and \ding{174} indicates the dropout of embeddings.
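As a concrete reference point, in PyTorch the \texttt{dropout} argument of \texttt{nn.GRU} applies dropout only to the outputs of each stacked layer except the last, i.e., to the vertical feed-forward connections and never inside the recurrence, which matches the scheme of Zaremba et al. A minimal sketch follows; the class and argument names are illustrative.
\begin{verbatim}
import torch.nn as nn

class GRU4RecSketch(nn.Module):
    def __init__(self, n_items, dim=64, n_layers=2, p=0.2):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        self.in_drop = nn.Dropout(p)   # embedding -> first GRU layer
        # dropout between stacked layers only (feed-forward connections)
        self.gru = nn.GRU(dim, dim, num_layers=n_layers,
                          dropout=p, batch_first=True)
        self.out_drop = nn.Dropout(p)  # last hidden state -> scores
        self.out = nn.Linear(dim, n_items)

    def forward(self, item_seq):               # (batch, seq_len)
        h, _ = self.gru(self.in_drop(self.item_emb(item_seq)))
        return self.out(self.out_drop(h[:, -1, :]))  # next-item scores
\end{verbatim}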
\subsection{Dropout on SASRec}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/SASRec_1.png}
\caption{Dropout on SASRec.}
\label{fig:SASRec_1}
\end{figure}
\noindent $\bullet$ \textbf{Drop model structure}: adding dropout layers within the self-attentive block to dropout neuron outputs, following the original article \cite{kang2018self}.
\noindent $\bullet$ \textbf{Drop input information}: randomly set some of the user and item numbers in each batch to random numbers.
\noindent $\bullet$ \textbf{Drop embedding}: randomly set the user and item embeddings in each batch to random embeddings.
The schematic diagram is shown in figure \ref{fig:SASRec_1}. The \ding{172} in the figure indicates the dropout of model structure, \ding{173} indicates the dropout of input information, and \ding{174} indicates the dropout of embeddings.
\subsection{Dropout on LightGCN}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figs/LightGCN_1.png}
\caption{Dropout on LightGCN.}
\label{fig:LightGCN_1}
\end{figure}
\noindent $\bullet$ \textbf{Drop model structure}: add a dropout layer after the user-item embedding matrix.
\noindent $\bullet$ \textbf{Drop input information}: randomly set some of the user and item numbers in each batch to random numbers.
\noindent $\bullet$ \textbf{Drop embedding}: randomly set the user and item embeddings in each batch to random embeddings.
\noindent $\bullet$ \textbf{Drop graph information}: for each batch, randomly drop some edges in the graph. This can be achieved by randomly dropping elements of the symmetrically normalized adjacency matrix $\Tilde{\mathbf{A}}$ \cite{he2020lightgcn}.
The schematic is shown in figure \ref{fig:LightGCN_1}. The \ding{172} in the figure indicates the dropout of model structure, \ding{173} indicates the dropout of input information, \ding{174} indicates the dropout of embeddings, and \ding{175} indicates the dropout of the edges.
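For illustration, below is a minimal sketch of edge dropout on the sparse normalized adjacency matrix, assuming PyTorch sparse COO tensors; the function and variable names are ours, and rescaling the surviving entries by $1/(1-p)$ mirrors standard dropout.
\begin{verbatim}
import torch

def drop_edges(indices, values, p=0.1, training=True):
    # indices: (2, nnz) LongTensor, values: (nnz,) FloatTensor of A_hat
    if not training or p <= 0:
        return indices, values
    keep = torch.rand(values.size(0), device=values.device) >= p
    return indices[:, keep], values[keep] / (1.0 - p)

# usage per batch, with A_hat a coalesced torch.sparse_coo_tensor:
# idx, val = drop_edges(A_hat.indices(), A_hat.values(), p=0.1)
# A_drop = torch.sparse_coo_tensor(idx, val, A_hat.shape)
\end{verbatim}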
\section{Experiment Data}\label{append:exp data}
\begin{table*}[ht]\normalsize
\renewcommand{\arraystretch}{1.1}
\centering
\caption{Detailed results for dropout methods on BPRMF, ml1m-cold dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.0251 & 0.0339 & 0.0458 & 0.0667 & 0.0678 & 0.1155 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.0258 & 0.0355* & 0.0479* & 0.0697** & 0.0718* & 0.1213* \\
& \multicolumn{1}{c|}{0.2} & \textbf{0.0267}* & 0.0359** & 0.0487** & \textbf{0.0709}** & 0.0715** & \textbf{0.1225}** \\
& \multicolumn{1}{c|}{0.3} & \textbf{0.0267}** & \textbf{0.0364}** & \textbf{0.0488}** & 0.0701** & \textbf{0.0721}** & 0.1220** \\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.0233** & 0.0324* & 0.0444* & 0.0658 & 0.0668 & 0.1148 \\
& \multicolumn{1}{c|}{0.2} & 0.0241* & 0.0331 & 0.0448 & 0.0660 & 0.0671 & 0.1138 \\
& \multicolumn{1}{c|}{0.3} & 0.0217** & 0.0294** & 0.0394** & 0.0573** & 0.0586** & 0.0988** \\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.0236* & 0.0324* & 0.0450 & 0.0660 & 0.0658 & 0.1162 \\
& \multicolumn{1}{c|}{0.2} & 0.0236* & 0.0330 & 0.0451 & 0.0660 & 0.0672 & 0.1155 \\
& \multicolumn{1}{c|}{0.3} & 0.0241 & 0.0332 & 0.0453 & 0.0658 & 0.0672 & 0.1154 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:ml1m-cold-BPR}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on BPRMF, Amazon Baby 5-core dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.00709 & 0.00969 & 0.01300 & 0.01936 & 0.01942 & 0.03263 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.00693 & 0.00935 & 0.01253 & 0.01946 & 0.01872 & 0.03145 \\
& \multicolumn{1}{c|}{0.2} & \textbf{0.00723} & \textbf{0.00973} & \textbf{0.01311} & \textbf{0.01991} & \textbf{0.01943} & 0.03292 \\
& \multicolumn{1}{c|}{0.3} & 0.00706 & 0.00959 & \textbf{0.01311} & 0.01984* & 0.01907 & \textbf{0.03315}\\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.00631* & 0.00888** & 0.01229* & 0.01892 & 0.01839* & 0.03202 \\
& \multicolumn{1}{c|}{0.2} & 0.00610** & 0.00839** & 0.01180** & 0.01836** & 0.01720** & 0.03078* \\
& \multicolumn{1}{c|}{0.3} & 0.00544** & 0.00761** & 0.01077** & 0.01671** & 0.01581** & 0.02844**\\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.00664* & 0.00916* & 0.01271 & 0.01933 & 0.01864 & 0.03283 \\
& \multicolumn{1}{c|}{0.2} & 0.00641** & 0.00900** & 0.01241** & 0.01943 & 0.01868* & 0.03231 \\
& \multicolumn{1}{c|}{0.3} & 0.00661 & 0.00911* & 0.01241* & 0.01914 & 0.01862 & 0.03177\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:Amazon-Baby-5-core-BPR}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on NFM, ml1m-cold dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.0246 & 0.0335 & 0.0454 & 0.0664 & 0.0680 & 0.1153 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.0255 & 0.0350* & 0.0472* & 0.0682** & 0.0708* & \textbf{0.1193}* \\
& \multicolumn{1}{c|}{0.2} & \textbf{0.0260}* & 0.0353* & 0.0473* & 0.0687** & 0.0704* & 0.1184 \\
& \multicolumn{1}{c|}{0.3} & 0.0257* & 0.0352** & \textbf{0.0474}** & 0.0684** & 0.0705* & 0.1191* \\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.0242 & 0.0334 & 0.0457 & 0.0678** & 0.0677 & 0.1167 \\
& \multicolumn{1}{c|}{0.2} & 0.0240 & 0.0332 & 0.0458 & 0.0675* & 0.0670 & 0.1172 \\
& \multicolumn{1}{c|}{0.3} & 0.0135** & 0.0184** & 0.0244** & 0.0371** & 0.0369** & 0.0608** \\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.0259** & \textbf{0.0354}** & 0.0473** & \textbf{0.0691}** & \textbf{0.0716}** & 0.1188* \\
& \multicolumn{1}{c|}{0.2} & 0.0251 & 0.0345 & 0.0470 & 0.0686** & 0.0695 & \textbf{0.1193}* \\
& \multicolumn{1}{c|}{0.3} & 0.0231** & 0.0326 & 0.0449 & 0.0666 & 0.0667 & 0.1159 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:ml1m-cold-NFM}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on NFM, Amazon Baby 5-core dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.00458 & 0.00657 & 0.00926 & 0.01444 & 0.01376 & 0.02451 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.00515* & 0.00715* & 0.01010** & 0.01592** & 0.01496** & 0.02677** \\
& \multicolumn{1}{c|}{0.2} & \textbf{0.00518}* & \textbf{0.00737}** & \textbf{0.01046}** & \textbf{0.01618}** & \textbf{0.01545}** & \textbf{0.02778}** \\
& \multicolumn{1}{c|}{0.3} & 0.00499 & 0.00705 & 0.00985 & 0.01534 & 0.01475 & 0.02588\\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.00460 & 0.00643 & 0.00898 & 0.01441 & 0.01357 & 0.02386 \\
& \multicolumn{1}{c|}{0.2} & 0.00465 & 0.00654 & 0.00923 & 0.01489 & 0.01388 & 0.02469 \\
& \multicolumn{1}{c|}{0.3} & 0.00468 & 0.00649 & 0.00899 & 0.01467 & 0.01379 & 0.02381\\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.00424 & 0.00610 & 0.00858 & 0.01348 & 0.01264 & 0.02255 \\
& \multicolumn{1}{c|}{0.2} & 0.00460 & 0.00640 & 0.00892 & 0.01440 & 0.01352 & 0.02359 \\
& \multicolumn{1}{c|}{0.3} & 0.00473 & 0.00647 & 0.00913 & 0.01479 & 0.01369 & 0.02435\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:Amazon-Baby-5-core-NFM}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on GRU4Rec, ml1m-cold dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.0752 & 0.0964 & 0.1196 & 0.1496 & 0.1818 & 0.2739 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.0782* & 0.1003** & 0.1233** & 0.1536** & 0.1892** & 0.2805** \\
& \multicolumn{1}{c|}{0.2} & 0.0792** & 0.1010** & 0.1245** & 0.1544** & 0.1895** & 0.2830** \\
& \multicolumn{1}{c|}{0.3} & 0.0780* & 0.1004** & 0.1239** & 0.1538** & 0.1902** & 0.2834** \\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & \textbf{0.0859}** & \textbf{0.1084}** & \textbf{0.1323}** & \textbf{0.1625}** & \textbf{0.2012}** & \textbf{0.2957}** \\
& \multicolumn{1}{c|}{0.2} & 0.0829** & 0.1056** & 0.1297** & 0.1598** & 0.1973** & 0.2931** \\
& \multicolumn{1}{c|}{0.3} & 0.0811** & 0.1024** & 0.1255** & 0.1555** & 0.1904** & 0.2818** \\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.0791** & 0.1013** & 0.1246** & 0.1552** & 0.1911** & 0.2840** \\
& \multicolumn{1}{c|}{0.2} & 0.0784** & 0.1012** & 0.1245** & 0.1554** & 0.1925** & 0.2852** \\
& \multicolumn{1}{c|}{0.3} & 0.0779* & 0.1007** & 0.1248** & 0.1557** & 0.1920** & 0.2875** \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:ml1m-cold-GRU4Rec}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on GRU4Rec, Amazon Baby 5-core dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.00762 & 0.01071 & 0.01487 & 0.02261 & 0.02199 & 0.03856 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.00786 & 0.01132* & 0.01555* & 0.02337* & 0.02347** & 0.04036 \\
& \multicolumn{1}{c|}{0.2} & 0.00810 & 0.01146 & 0.01579* & 0.02342 & 0.02374* & 0.04101* \\
& \multicolumn{1}{c|}{0.3} & 0.00800 & 0.01138* & 0.01565** & 0.02348* & 0.02340** & 0.04043**\\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.01013** & 0.01383** & 0.01864** & 0.02698** & 0.02781** & 0.04700** \\
& \multicolumn{1}{c|}{0.2} & 0.01185** & 0.01622** & 0.02170** & 0.03073** & 0.03257** & 0.05440** \\
& \multicolumn{1}{c|}{0.3} & \textbf{0.01318}** & \textbf{0.01777}** & \textbf{0.02343}** & \textbf{0.03301}** & \textbf{0.03543}** & \textbf{0.05797}**\\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.00817 & 0.01129 & 0.01556 & 0.02327 & 0.02311 & 0.04010 \\
& \multicolumn{1}{c|}{0.2} & 0.00792 & 0.01109 & 0.01536 & 0.02311 & 0.02294 & 0.03991 \\
& \multicolumn{1}{c|}{0.3} & 0.00785 & 0.01096 & 0.01514 & 0.02275 & 0.02255 & 0.03926\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:Amazon-Baby-5-core-GRU4Rec}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on SASRec, ml1m-cold dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.0840 & 0.1064 & 0.1296 & 0.1593 & 0.1981 & 0.2903 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.0864* & 0.1092** & 0.1326* & 0.1627** & 0.2013 & 0.2941 \\
& \multicolumn{1}{c|}{0.2} & 0.0852 & 0.1077 & 0.1308 & 0.1606 & 0.1996 & 0.2912 \\
& \multicolumn{1}{c|}{0.3} & 0.0836 & 0.1059 & 0.1290 & 0.1585 & 0.1961 & 0.2878 \\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & \textbf{0.0868}* & \textbf{0.1093} & \textbf{0.1330}* & \textbf{0.1632}* & \textbf{0.2019} & \textbf{0.2957}* \\
& \multicolumn{1}{c|}{0.2} & 0.0816* & 0.1041* & 0.1282 & 0.1589 & 0.1942* & 0.2901 \\
& \multicolumn{1}{c|}{0.3} & 0.0705** & 0.0915** & 0.1142** & 0.1445** & 0.1742** & 0.2645** \\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.0835 & 0.1063 & 0.1301 & 0.1606 & 0.1980 & 0.2923 \\
& \multicolumn{1}{c|}{0.2} & 0.0814* & 0.1042 & 0.1278 & 0.1588 & 0.1962 & 0.2901 \\
& \multicolumn{1}{c|}{0.3} & 0.0797* & 0.1022* & 0.1259* & 0.1572 & 0.1935 & 0.2876 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:ml1m-cold-SASRec}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on SASRec, Amazon Baby 5-core dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.01039 & 0.01428 & 0.01913 & 0.02787 & 0.02889 & 0.04824 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.01048 & 0.01468 & 0.01990 & 0.02873 & 0.03016 & 0.05100* \\
& \multicolumn{1}{c|}{0.2} & 0.01140** & 0.01542* & 0.02067** & 0.02961** & 0.03076* & 0.05170** \\
& \multicolumn{1}{c|}{0.3} & 0.01145* & 0.01562* & 0.02095** & 0.03004** & 0.03154* & 0.05277**\\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.01308** & 0.01783** & 0.02375** & 0.03326** & 0.03600** & 0.05961** \\
& \multicolumn{1}{c|}{0.2} & 0.01595** & 0.02119** & 0.02714** & 0.03750** & 0.04182** & 0.06563** \\
& \multicolumn{1}{c|}{0.3} & \textbf{0.01612}** & \textbf{0.02143}** & \textbf{0.02768}** & \textbf{0.03828}** & \textbf{0.04207}** & \textbf{0.06701}**\\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.01045 & 0.01440 & 0.01964 & 0.02849 & 0.02934 & 0.05020* \\
& \multicolumn{1}{c|}{0.2} & 0.01071 & 0.01484 & 0.01985 & 0.02904 & 0.03010 & 0.05008 \\
& \multicolumn{1}{c|}{0.3} & 0.01094 & 0.01502 & 0.02009 & 0.02950* & 0.03030 & 0.05056*\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:Amazon-Baby-5-core-SASRec}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on LightGCN, ml1m-cold dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.0281 & 0.0377 & 0.0504 & 0.0722 & 0.0748 & 0.1255 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.0287 & 0.0385 & 0.0510 & 0.0732 & 0.0761 & 0.1261 \\
& \multicolumn{1}{c|}{0.2} & \textbf{0.0291}* & \textbf{0.0388} & \textbf{0.0516} & \textbf{0.0737}* & \textbf{0.0765} & \textbf{0.1277} \\
& \multicolumn{1}{c|}{0.3} & 0.0286 & 0.0384 & 0.0513 & 0.0730 & 0.0757 & 0.1273 \\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.0264** & 0.0361** & 0.0484** & 0.0705** & 0.0721** & 0.1213** \\
& \multicolumn{1}{c|}{0.2} & 0.0254** & 0.0347** & 0.0466** & 0.0674** & 0.0690** & 0.1163** \\
& \multicolumn{1}{c|}{0.3} & 0.0239** & 0.0328** & 0.0443** & 0.0642** & 0.0653** & 0.1113** \\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & 0.0280 & 0.0376 & 0.0505 & 0.0723 & 0.0748 & 0.1259 \\
& \multicolumn{1}{c|}{0.2} & 0.0272 & 0.0371 & 0.0497 & 0.0709 & 0.0741 & 0.1242 \\
& \multicolumn{1}{c|}{0.3} & 0.0264* & 0.0361** & 0.0482** & 0.0696** & 0.0724* & 0.1205* \\
\multirow{3}{*}{\makecell[c]{Drop Graph\\Info}} & \multicolumn{1}{c|}{0.1} & 0.0282 & 0.0380 & 0.0508 & 0.0723 & 0.0754 & 0.1265 \\
& \multicolumn{1}{c|}{0.2} & 0.0284 & 0.0383 & 0.0511 & 0.0729 & 0.0764 & 0.1273 \\
& \multicolumn{1}{c|}{0.3} & 0.0283 & 0.0380 & 0.0509 & 0.0727 & 0.0756 & 0.1269 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:ml1m-cold-LightGCN}
\end{table*}
\begin{table*}[ht]\normalsize
\centering
\caption{Detailed results for dropout methods on LightGCN, Amazon Baby 5-core dataset.}
\begin{threeparttable}
\begin{tabular}{ccllllll}
\toprule
\makecell[c]{Dropout\\Methods} & \makecell[c]{Dropout\\Ratio} & \makecell[c]{NDCG\\@5} & \makecell[c]{NDCG\\@10} & \makecell[c]{NDCG\\@20} & \makecell[c]{NDCG\\@50} & \makecell[c]{HR\\@10} & \makecell[c]{HR\\@20} \\
\midrule
Origin & \multicolumn{1}{c|}{-} & 0.01032 & 0.01392 & 0.01843 & 0.02738 & 0.02779 & 0.04577 \\ \cline{3-8}
\rule{-2.3pt}{12pt}
\multirow{3}{*}{\makecell[c]{Drop Model\\Structure}} & \multicolumn{1}{c|}{0.1} & 0.01017 & 0.01392 & 0.01835 & 0.02743 & 0.02809 & 0.04577 \\
& \multicolumn{1}{c|}{0.2} & 0.01001 & 0.01370 & 0.01807 & 0.02730 & 0.02763 & 0.04502 \\
& \multicolumn{1}{c|}{0.3} & 0.01001 & 0.01357 & 0.01811 & 0.02703 & 0.02693 & 0.04509\\
\multirow{3}{*}{\makecell[c]{Drop Input\\Info}} & \multicolumn{1}{c|}{0.1} & 0.00936** & 0.01275** & 0.01710** & 0.02598** & 0.02559** & 0.04301** \\
& \multicolumn{1}{c|}{0.2} & 0.00872** & 0.01207** & 0.01651** & 0.02532** & 0.02435** & 0.04209** \\
& \multicolumn{1}{c|}{0.3} & 0.00824** & 0.01140** & 0.01583** & 0.02456** & 0.02321** & 0.04095**\\
\multirow{3}{*}{\makecell[c]{Drop\\Embedding}} & \multicolumn{1}{c|}{0.1} & \textbf{0.01043} & 0.01402 & 0.01866 & 0.02754 & 0.02781 & 0.04633 \\
& \multicolumn{1}{c|}{0.2} & 0.01034 & \textbf{0.01420} & \textbf{0.01869} & \textbf{0.02773}* & \textbf{0.02853} & \textbf{0.04651} \\
& \multicolumn{1}{c|}{0.3} & 0.01029 & 0.01389 & 0.01841 & 0.02757 & 0.02798 & 0.04595\\
\multirow{3}{*}{\makecell[c]{Drop Graph\\Info}} & \multicolumn{1}{c|}{0.1} & 0.01020 & 0.01403 & 0.01838 & 0.02733 & 0.02821 & 0.04552 \\
& \multicolumn{1}{c|}{0.2} & 0.01005* & 0.01375 & 0.01811* & 0.02722 & 0.02766 & 0.04503 \\
& \multicolumn{1}{c|}{0.3} & 0.01021 & 0.01395 & 0.01843 & 0.02736 & 0.02799 & 0.04584\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]* for $p<0.05$, ** for $p<0.01$, compared to the origin (not using any dropout methods). Bold numbers are the best results of each column.
\end{tablenotes}
\end{threeparttable}
\label{tab:Amazon-Baby-5-core-LightGCN}
\end{table*}
\section{Protocol used in the selection process of the articles in this survey}\label{append:protocol}
We searched Google Scholar with the query ``dropout'' and initially obtained about 1000 articles containing this query. We checked the title and abstract of each article and filtered out papers that were not about dropout methods in neural models, which left around 200 articles. We then examined each remaining article carefully and selected those meeting the following requirements:
$\bullet$ The article was published at a top AI conference or in a top journal.
$\bullet$ The article proposed new dropout methods rather than merely applying existing ones.
$\bullet$ Articles available only on arXiv and not published at conferences or in journals were also included if they proposed good dropout methods and had accumulated some citations.
We then checked the references of these articles and added to our list any cited articles that met the above requirements but were missed by the search engine. In this way, we arrived at the current set of about 80 articles.
We sort the articles first according to our classification taxonomy. Within the same category, the articles are ordered mainly by publication date; highly related papers that are introduced together are placed in adjacent positions.
\section{Experimental Settings and Model Parameters}\label{append:model_param}
Parameter values taken for all models in common are shown in Table \ref{tab:ExperimentSettings}. Parameters specific to each model are shown in Table \ref{tab:ExperimentSettings2}.
\begin{table}[ht]
\vspace{-4pt}
\centering
\caption{Global Parameters}
\vspace{-4pt}
\begin{tabular}{ll}
\toprule
Parameter & Value \\
\midrule
Learning rate & 0.001 \\
Optimizer & Adam \\
Batch size & 128 \\
Early stop & 50 \\
Validation metrics & NDCG@10 \\
Evaluation metrics & NDCG@5,10,20,50; HR@10,20 \\
Neg. sample during training & 1 \\
Neg. sample during testing & all \cite{krichene2020sampled} \\
Embedding size & 64 \\
Loss function & BPR loss \cite{rendle2009bpr} \\
\bottomrule
\end{tabular}
\label{tab:ExperimentSettings}
\vspace{-2pt}
\end{table}
\begin{table*}[ht]
\vspace{-4pt}
\centering
\caption{Parameters specific to each model}
\vspace{-4pt}
\begin{tabular}{ccc}
\toprule
Model & Parameter & Value \\
\midrule
\multirow{2}{*}{NFM} & Number of layers & 1 \\
& Hidden state size & 64 \\
\rule{-2.3pt}{9pt}
\multirow{3}{*}{GRU4Rec} & User history length & 20 \\
& Number of layers & 2 \\
& Hidden vector size & 64 \\
\rule{-2.3pt}{9pt}
\multirow{2}{*}{SASRec} & User history length & 20 \\
& Number of self-attention heads & 1 \\
\rule{-2.3pt}{9pt}
LightGCN & Number of layers & 3 \\
\bottomrule
\end{tabular}
\label{tab:ExperimentSettings2}
\vspace{-2pt}
\end{table*}
\section{Introduction}\label{sec:introduction}
\subsection{Backgrounds}\label{subsec:backgrounds}
\IEEEPARstart{O}{verfitting} is a common problem in the training process of neural network models \cite{dietterich1995overfitting}. Due to their large number of parameters and strong fitting ability, most neural models perform well on the training set but may perform poorly on the test set. Several methods have been proposed in previous studies to address the overfitting problem, such as adding a regularization term to penalize the total size of the model parameters \cite{van2017l2} and applying Batch Normalization \cite{ioffe2015batch} or Weight Normalization \cite{salimans2016weight} to regularize deep neural networks.
In 2012, Hinton et al. proposed Dropout \cite{hinton2012improving} to cope with overfitting. The idea is to randomly drop neurons of the neural network during training; that is, in each parameter update, only part of the model parameters are updated. This prevents complex co-adaptations of neurons on the training data. Note that in the testing phase, dropout must be disabled and the whole network is used for prediction. From this beginning, numerous dropout-based training methods have been proposed and have achieved better performance.
In the beginning, dropout was applied only to fully connected layers \cite{hinton2012improving, srivastava2014dropout, ba2013adaptive}. Later, it was extended to more network structures, such as convolutional layers in convolutional neural networks (CNNs) \cite{tompson2015efficient, wu2015towards, park2016analysis}, residual networks (ResNets) \cite{huang2016deep, kang2016shakeout, li2016whiteout}, and recurrent layers in recurrent neural networks (RNNs) \cite{pham2014dropout, zaremba2014recurrent, moon2015rnndrop}. In terms of the stage at which the dropout operation is performed, there is not only dropout of model structure, but also dropout of input information \cite{sennrich2016edinburgh, devries2017improved, hamilton2017inductive} and dropout of embeddings \cite{volkovs2017dropoutnet, shi2018attention, shi2019adaptive}. In terms of contributions, dropout methods were first used only to prevent overfitting; many later studies have explored other aspects of their usefulness, such as model compression \cite{molchanov2017variational, neklyudov2017structured, gomez2018targeted}, model uncertainty measurement \cite{gal2016dropout}, data augmentation \cite{bouthillier2015dropout}, enhancing data representations in the pre-training phase \cite{devlin2019bert}, and preventing the over-smoothing problem in graph neural networks \cite{rong2019dropedge}.
Despite their wide application, a dropout method that works for one model structure may have no significant effect on another. For example, using standard dropout in CNNs does not improve performance significantly \cite{tompson2015efficient}; the same is true of applying standard dropout directly to the recurrent connections of RNNs \cite{zaremba2014recurrent}. Moreover, dropout at different stages of a machine learning task achieves different purposes. Therefore, in this paper, we classify this wide variety of dropout methods and summarize their effectiveness, application scenarios, connections, and contributions.
Methods with common ideas also rarely compare their results under the same scenario. We therefore evaluate and compare different dropout methods in the recommendation scenario, providing a fair comparison of their effects and references for future work related to dropout. Recommendation models have been widely used to extract user and item features and improve user experience in many online scenarios. They utilize various heterogeneous information, such as user and item interaction histories, content features, or social connections \cite{hu2018leveraging}. With the rapid development of the internet, various recommendation models utilizing different kinds of information have been proposed in the past years \cite{zhang2016collaborative, zhao2017meta}. Such a variety of recommendation models and heterogeneous input information provides a suitable environment for comparing and verifying different dropout methods.
\subsection{Contributions}\label{subsec:contributions}
Our contributions in this paper are threefold:
First, we provide a comprehensive review of more than seventy dropout methods. We propose a new taxonomy based on the stage where dropout operations are performed in machine learning tasks. Each category is then supplemented with operation granularity and application scenarios for more detailed classification and discussion.
Second, we discuss the connections between dropout methods of different categories and summarize their various contributions other than preventing overfitting.
Third, we experimentally investigate the effect of dropout methods in recommendation scenarios. The rich heterogeneous input information makes recommendation scenarios suitable for comparing different types of dropout methods. We verify and compare the effectiveness of dropout methods in a unified experimental environment. Finally, we provide potential research directions for dropout methods.
\subsection{Outline}\label{subsec:outline}
The organization of this paper is as follows: Section \ref{sec:related works} introduces background concepts of dropout methods and recommendation systems. In Section \ref{sec:survey}, we review dropout methods according to the stage where the dropout operation is performed in a machine learning task. We summarize their applications on different neural models and discuss their connections. In Section \ref{sec:discussion}, we analyze the contributions of dropout methods other than preventing overfitting. In Section \ref{sec:experiments}, we present an experimental verification of dropout methods on recommendation models.
In Section \ref{sec:discussion2}, we provide further discussions on dropout methods and analyze potential research directions in this field. Finally, we conclude the entire paper in Section \ref{sec:conclusion}.
\section{Background Concepts}\label{sec:related works}
This section introduces the fundamental knowledge of dropout and recommender systems by summarizing related works of these two fields.
\subsection{Dropout methods}
Dropout is a class of training methods that effectively cope with overfitting. Hinton et al. \cite{hinton2012improving} proposed the original dropout, whose idea is to randomly drop neurons of the neural network during training. Through this process, it prevents complex co-adaptations of neurons on the training data. From this beginning, numerous dropout-based methods have been proposed, achieving better results and higher performance. For example, DropConnect \cite{wan2013regularization} randomly drops neuron connections instead of neurons, while Annealed Dropout \cite{rennie2014annealed} and Curriculum Dropout \cite{morerio2017curriculum} adjust the dropout ratio throughout the training process.
Besides dropping individual neurons, a series of dropout methods that drop neuron groups have been proposed for specific neural model structures. For CNNs, SpatialDropout \cite{tompson2015efficient} randomly drops entire feature maps, and DropBlock \cite{ghiasi2018dropblock} drops contiguous regions of neurons to prevent overfitting. For RNNs, early applications of dropout \cite{pham2014dropout} only drop feed-forward connections in order to preserve the memory ability of the RNN; later approaches, including RNNDrop \cite{moon2015rnndrop} and Recurrent Dropout \cite{semeniuta2016recurrent}, allow dropping recurrent connections as well. For ResNets, there are also dedicated dropout methods such as Stochastic Depth \cite{huang2016deep} and ShakeDrop \cite{yamada2019shakedrop}.
Besides dropping model structure during training, dropping input information or embeddings of the input data is also applied in several scenarios. DropoutNet \cite{volkovs2017dropoutnet} and ACCM \cite{shi2018attention} drop embeddings of users and items in recommendation scenarios to handle the cold-start problem. BERT \cite{devlin2019bert} randomly masks tokens at the pre-training stage, enhancing data representations in NLP tasks. CutOut \cite{devries2017improved} and GridMask \cite{chen2020gridmask} randomly drop parts of the input images during training, serving as regularization and data augmentation techniques. GraphSAGE \cite{hamilton2017inductive} and DropEdge \cite{rong2019dropedge} randomly drop nodes or edges during GCN training, preventing the overfitting and over-smoothing problems in GCNs.
The previous survey on dropout methods \cite{labach2019survey} was conducted by Labach et al. in 2019 and is, to date, the only comprehensive survey on this topic. It reviews dropout methods from the perspective of the neural structures they are applied to, including fully connected layers, convolutional layers, and recurrent layers.
Our work differs from this earlier survey in three major ways.
First, we cover a wider range of dropout methods, including those proposed in the last three years, especially new methods published at top AI conferences.
Second, we present a more precise and general classification. The former survey classified dropout methods according to the neural models they are performed on, which means that whenever a new model structure appears, the corresponding dropout methods must be placed into a new category. Nevertheless, the dropout methods themselves may be similar to ones that already existed, so it seems cumbersome to put them into a new category just because their application changes. Our work classifies dropout methods according to the stage at which the dropout operation is performed in a machine learning task. With this setting, any new method must fit into one of the existing categories, making our classification more reasonable. We also review these methods from the perspective of application scenarios, their interconnections, and their contributions other than preventing overfitting.
Third, there is no experimental comparison of the effectiveness of dropout methods in \cite{labach2019survey}, while we experimentally verify and compare their effectiveness under recommendation scenarios.
\subsection{Recommender Systems}
In terms of the input information types, recommender systems are mainly categorized into collaborative filtering (CF) based, content-based (CB), and hybrid \cite{adomavicius2005toward}. CF-based recommender systems make predictions based on the interaction histories of users and items \cite{su2009survey, li2018deep}, while CB recommender systems use the content features of users and items \cite{sun2019research}. Hybrid recommender systems use multiple types of input information to extract interaction similarity as well as content similarity, such as users' social networks \cite{massa2007trust, jamali2009trustwalker} and item reviews \cite{zheng2017joint, xu2018exploiting}. In recent years, some recommendation algorithms specialized for certain tasks utilize specific forms of input data: in sequential recommendation the input data is structured as temporal sequences \cite{hidasi2015session, kang2018self}, and in graph recommendation the input data is structured as graphs \cite{wang2019knowledge, he2020lightgcn}. However, the input information in real-world scenarios may not be sufficient for recommender systems to make good predictions; this problem is called \textit{cold start}. Works addressing the cold-start problem have been emerging in recent years \cite{xu2021multi, qian2020attribute, zhu2019addressing, zhang2020deep, li2019both, lu2020meta}.
Such a rich variety of input information in the recommendation scenario provides a good environment for verifying and comparing different dropout methods. Therefore, besides surveying dropout methods, we also verify them experimentally under the same scenario for comparison.
\section{Survey of Dropout Methods}\label{sec:survey}
In this section, we review papers of dropout methods. Based on the stage where the dropout operation is performed in a machine learning algorithm, we classify them into three main categories: drop model structures (Section \ref{subsec:drop structures}), drop embeddings (Section \ref{subsec:drop embeddings}), and drop input information (Section \ref{subsec:drop inputs}) as shown in Figure \ref{fig:classification}. We introduce how these methods perform dropout, their effectiveness, and their applications in different neural models. Finally, we discuss the interconnections between dropout methods in different categories (Section \ref{subsec:interconnections}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/classification.png}
\vspace{-16pt}
\caption{Classification of dropout methods. The number below each box refers to the corresponding category of dropout methods.}
\label{fig:classification}
\vspace{-4pt}
\end{figure}
\subsection{Drop Model Structures}\label{subsec:drop structures}
Methods in this category drop model structures; that is, they randomly set part of the neuron outputs to zero (or to another value) during training. We first introduce methods that drop individual neurons, then methods that drop neuron groups.
\subsubsection{Drop Individual Neurons}\label{subsubsec:drop single neurons}
Hinton et al. \cite{hinton2012improving} first proposed the standard dropout in 2012 and further detailed it in \cite{srivastava2014dropout}. Specifically, for a neural network with $L$ hidden layers, let $l\in \{1, \dots, L\}$ index the layers, let $z^{(l)}$ and $y^{(l)}$ be the input and output of the $l$th layer respectively, and let $y^{(0)}$ be the input of the network. $W^{(l)}$ and $b^{(l)}$ are the weight matrix and bias vector of the $l$th layer, respectively. The standard feed-forward operation without dropout is
\begin{equation} \label{eq:dropout 1}
\begin{aligned}
z_i^{(l+1)} = \mathbf{w}_i^{(l+1)}\mathbf{y}^{(l)} + b_i^{(l+1)},\quad
y_i^{(l+1)} = f(z_i^{(l+1)})
\end{aligned}
\vspace{-2pt}
\end{equation}
Dropout sets a proportion $p$ of the neuron outputs to zero during training, where $p\in (0, 1)$ is the dropout ratio, so each output is kept with probability $1-p$. When performing dropout, the feed-forward operation becomes
\begin{equation} \label{eq:dropout 2}
\begin{aligned}
r_j^{(l)} &\sim \mathrm{Bernoulli}(1-p),\quad
\Tilde{\mathbf{y}}^{(l)} = \mathbf{r}^{(l)} * \mathbf{y}^{(l)} \\
z_i^{(l+1)} &= \mathbf{w}_i^{(l+1)}\Tilde{\mathbf{y}}^{(l)} + b_i^{(l+1)},\quad
y_i^{(l+1)} = f(z_i^{(l+1)})
\end{aligned}
\vspace{-2pt}
\end{equation}
During testing, no dropout is performed, so all neurons produce output. To keep training and testing consistent, all weights are multiplied by the retain probability $1-p$ during testing, i.e., $\mathbf{W}_{test}^{(l)} = (1-p)\mathbf{W}^{(l)}$; alternatively, all outputs are multiplied by $1/(1-p)$ during training so that the expected output is consistent with testing time. Srivastava et al. \cite{srivastava2014dropout} also mentioned that besides generating the dropout mask from a Bernoulli distribution, it is possible to generate it from a continuous distribution with the same expectation and variance, such as the Gaussian distribution $\mathcal{N}(1, \alpha)$ with $\alpha=p/(1-p)$. Standard dropout has achieved good performance since it was proposed, and its effectiveness was demonstrated on image classification tasks \cite{krizhevsky2012imagenet}.
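For concreteness, a minimal NumPy sketch of the second (``inverted'') variant is given below; the function name is ours and the code is illustrative rather than a reference implementation:
\begin{verbatim}
import numpy as np

def dropout_forward(y, p, training=True, rng=np.random):
    # Each output is zeroed with probability p; survivors are
    # scaled by 1/(1-p), so no rescaling is needed at test time.
    if not training:
        return y
    mask = (rng.random(y.shape) >= p).astype(y.dtype)
    return y * mask / (1.0 - p)
\end{verbatim}
The inverted form is often preferred in practice precisely because the test-time forward pass then needs no modification.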
Following the standard dropout, a series of methods that randomly drop individual neurons have been proposed.
Ba et al. \cite{ba2013adaptive} proposed Standout in 2013. The method treats dropout as a Bayesian learning process: a belief network is added on top of the original network to control the dropout ratio:
\begin{equation} \label{eq:standout 1}
\begin{aligned}
y = f(\mathbf{Wx})\circ m, \quad m\sim \mathrm{Bernoulli}(g(\mathbf{W}_s \mathbf{x}))
\end{aligned}
\vspace{-4pt}
\end{equation}
where $\mathbf{W}_s$ and $g(\cdot)$ are the weights and the activation function of each layer of the belief network. In practice, the authors find that $\mathbf{W}_s$ can be chosen as an affine transformation of $\mathbf{W}$, and the test output $y$ is computed as
\begin{equation} \label{eq:standout 2}
\begin{aligned}
\mathbf{W}_s = \alpha \mathbf{W} + \beta,\quad
y = f(\mathbf{Wx})\circ g(\mathbf{W}_s \mathbf{x})
\end{aligned}
\vspace{-4pt}
\end{equation}
Wang and Manning \cite{wang2013fast} proposed Fast Dropout in 2013. In standard dropout, only one of the possible network structures is sampled at a time, and a proportion of the neurons is not trained in each step, which slows down training. Fast Dropout instead explains dropout from a Bayesian perspective, showing that the output of a layer under dropout can be viewed as a sample from an approximate Gaussian distribution. One can then sample directly from this distribution, or use its parameters to propagate information about the whole ensemble of dropout networks. This allows faster training than standard dropout; the technique is also known as Gaussian Dropout.
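Under the same conventions as above, the multiplicative Gaussian-noise view can be sketched as follows (our illustration; the function name is ours):
\begin{verbatim}
import numpy as np

def gaussian_dropout_forward(y, p, training=True, rng=np.random):
    # Noise with mean 1 and variance p/(1-p) matches the first two
    # moments of the rescaled Bernoulli dropout mask of ratio p.
    if not training:
        return y
    alpha = p / (1.0 - p)
    return y * rng.normal(1.0, np.sqrt(alpha), size=y.shape)
\end{verbatim}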
\begin{table*}[ht]
\centering
\newcolumntype{Y}{>{\raggedleft\arraybackslash}X}
\newcolumntype{Z}{>{\centering\arraybackslash}X}
\caption{Table of methods that drop model structures.}
\vspace{-6pt}
\begin{threeparttable}
\begin{tabular}{llllll}
\toprule
Method & Year & Category & Brief Description & \makecell[l]{Original\\Scenario} & Source \\
\midrule
Dropout\cite{hinton2012improving, srivastava2014dropout} & 2012 & 1.1\dag & Randomly drop neurons & FCL* & JMLR \\
Standout\cite{ba2013adaptive} & 2013 & 1.1 & Add a Bayesian NN to control the dropout ratio & FCL & NeurIPS \\
Fast Dropout\cite{wang2013fast} & 2013 & 1.1 & Sample outputs directly from a distribution & FCL & ICML \\
DropConnect\cite{wan2013regularization} & 2013 & 1.1 & Drop weights instead of neurons & FCL & ICML \\
Maxout\cite{goodfellow2013maxout} & 2013 & 1.1 & Computes several outputs for each input & FCL & ICML \\
Annealed Dropout\cite{rennie2014annealed} & 2014 & 1.1 & Dropout ratio decreases with training epochs & FCL & SLT \\
Variational Dropout\cite{kingma2015variational} & 2015 & 1.1 & Dropout ratio can be learned in training & FCL & NeurIPS \\
\rule{-2.3pt}{12pt}
Monte Carlo Dropout\cite{gal2016dropout} & 2016 & 1.1 & \makecell[l]{Intepret dropout as a Bayesian approximation\\of deep Gaussian process} & FCL & ICML \\
\rule{-2.3pt}{7pt}
DropIn\cite{smith2016gradual} & 2016 & 1.1 & Pass dropped values directly to the next layer & FCL & CVPR \\
Evolutional Dropout\cite{li2016improved} & 2016 & 1.1 & Calculate dropout ratio from input & FCL & NeurIPS \\
\rule{-2.3pt}{12pt}
Concrete Dropout\cite{gal2017concrete} & 2017 & 1.1 & \makecell[l]{Automatically adjust dropout ratio compared\\to Monte Carlo Dropout} & FCL & NeurIPS \\
\rule{-2.3pt}{7pt}
Curriculum Dropout\cite{morerio2017curriculum} & 2017 & 1.1 & Dropout ratio increases with training epochs & FCL & ICCV \\
Targeted Dropout\cite{gomez2018targeted, gomez2019learning} & 2018 & 1.1 & Dropout for neural pruning & FCL & NeurIPS \\
Ising-Dropout\cite{salehinejad2019ising} & 2019 & 1.1 & Incorporate Ising model & FCL & ICASSP \\
EDropout\cite{salehinejad2021edropout} & 2021 & 1.1 & Use EBM to decide pruning state & FCL & TNNLS \\
LocalDrop\cite{lu2021localdrop} & 2021 & 1.1 & Based on local Rademacher complexity & FCL & TPAMI \\
SimCSE\cite{gao2021simcse} & 2021 & 1.1 & Data augmentation by dropout twice & FCL & EMNLP \\
Child-Tuning\cite{xu2021raise} & 2021 & 1.1 & Mask gradient when back-propagation & FCL & EMNLP \\
R-Drop\cite{liang2021r} & 2021 & 1.1 & Dropout twice to regularize & FCL & arXiv \\
AS-Dropout\cite{chen2021adaptive} & 2021 & 1.1 & Adaptive sparse dropout & FCL & Neurocomput. \\
SpatialDropout\cite{tompson2015efficient} & 2015 & 1.2.1\dag & Drop feature maps in CNN & CNN & CVPR \\
Max-pooling Dropout\cite{wu2015towards} & 2015 & 1.2.1 & Drop neurons before pooling layer & CNN & NN \\
Convolutional Dropout\cite{wu2015towards} & 2015 & 1.2.1 & Drop neurons before convolutional layer & CNN & NN \\
Max-drop\cite{park2016analysis} & 2016 & 1.2.1 & Drop feature maps with high activations & CNN & ACCV \\
Stochastic Dropout\cite{park2016analysis} & 2016 & 1.2.1 & Dropout ratio sampled from normal distribution & CNN & ACCV \\
DropBlock\cite{ghiasi2018dropblock} & 2018 & 1.2.1 & Drop contiguous regions on each feature map & CNN & NeurIPS \\
Spectral Dropout\cite{khan2018regularization} & 2018 & 1.2.1 & Dropout in the frequency domain & CNN & NN \\
Drop-Conv2d\cite{cai2019effective} & 2019 & 1.2.1 & Dropout before convolution instead of BN & CNN & arXiv \\
Weighted Channel Dropout\cite{hou2019weighted} & 2019 & 1.2.1 & Drop weighted feature channels & CNN & AAAI \\
CorrDrop\cite{zeng2021correlation} & 2021 & 1.2.1 & Drop neurons according to feature correlation & CNN & PR \\
LocalDrop\cite{lu2021localdrop} & 2021 & 1.2.1 & Based on local Rademacher complexity & CNN & TPAMI \\
AutoDropout\cite{pham2021autodropout} & 2021 & 1.2.1 & Optimize dropout patterns by RL & CNN & AAAI \\
Vanilla drop for RNN\cite{pham2014dropout, zaremba2014recurrent} & 2014 & 1.2.2\dag & Drop feed-forward connections only & RNN & ICFHR \\
RNNDrop\cite{moon2015rnndrop} & 2015 & 1.2.2 & One dropping mask for each layer & RNN & ASRU \\
Variational RNN Dropout\cite{gal2016theoretically} & 2015 & 1.2.2 & Variational inference based dropout & RNN & NeurIPS \\
Recurrent Dropout\cite{semeniuta2016recurrent} & 2016 & 1.2.2 & Drop only the vectors generating hidden states & RNN & COLING \\
Zoneout\cite{krueger2016zoneout} & 2016 & 1.2.2 & Residual connections between timestamps & RNN & ICLR \\
Weighted-dropped LSTM\cite{merity2018regularizing} & 2017 & 1.2.2 & Drop weights like DropConnect & RNN & ICLR \\
\rule{-2.3pt}{12pt}
Fraternal Dropout\cite{zolna2018fraternal} & 2018 & 1.2.2 & \makecell[l]{Train two identical RNNs with different\\dropout masks} & RNN & ICLR \\
\rule{-2.3pt}{7pt}
Stochastic Depth\cite{huang2016deep} & 2016 & 1.2.3\dag & Drop blocks and retain only residual connections & ResNet & ECCV \\
Shakeout\cite{kang2016shakeout} & 2016 & 1.2.3 & Assign new weights to neurons & ResNet & AAAI \\
Whiteout\cite{li2016whiteout} & 2016 & 1.2.3 & Introduce Gaussian noise compared to Shakeout & ResNet & arXiv \\
\rule{-2.3pt}{12pt}
Swapout\cite{singh2016swapout} & 2016 & 1.2.3 & \makecell[l]{A synthesis of standard Dropout and Stochastic\\Depth} & ResNet & NeurIPS \\
\rule{-2.3pt}{7pt}
DropPath\cite{larsson2016fractalnet} & 2016 & 1.2.3 & Drop subpaths in Fractalnet & DNN & ICLR \\
Shake-Shake\cite{gastaldi2017shake} & 2017 & 1.2.3 & Assign weights in 3-way ResNet & ResNet & arXiv \\
ShakeDrop\cite{yamada2019shakedrop} & 2018 & 1.2.3 & Improve Shake-Shake to other form of ResNet & ResNet & IEEE Access \\
Scheduled DropPath\cite{zoph2018learning} & 2018 & 1.2.3 & Dropout ratio increases linearly & DNN & CVPR \\
DropHead\cite{zhou2020scheduled} & 2020 & 1.2.3 & Drop attention heads of Transformer & Transformer & EMNLP \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[\dag] 1.1 refers to dropping individual neurons, 1.2.1 dropping 2D neuron groups, 1.2.2 dropping recurrent connections, and 1.2.3 dropping residual connections or others.
\item[*] FCL refers to Fully Connected Layers.
\end{tablenotes}
\end{threeparttable}
\label{tab:drop neuron groups}
\vspace{-12pt}
\end{table*}
Wan et al. \cite{wan2013regularization} proposed DropConnect in 2013. Compared to standard dropout which randomly zeroes the output of neurons, DropConnect randomly zeroes elements of the weight matrix of each layer:
\begin{equation} \label{eq:dropconnect 1}
\begin{aligned}
\mathbf{y} = f((\mathbf{W}\circ \mathbf{M})\mathbf{x}), \quad m_{ij}\sim \mathrm{Bernoulli}(1-p)
\end{aligned}
\vspace{-2pt}
\end{equation}
This approach removes ``connections'' in fully connected layers, hence the name ``DropConnect''.
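A minimal sketch of this operation for a single layer follows (ours; the nonlinearity \texttt{f} is passed in as a function, and the bias term is omitted to match the equation above):
\begin{verbatim}
import numpy as np

def dropconnect_forward(x, W, p, f, rng=np.random):
    # m_ij ~ Bernoulli(1-p): each weight is dropped with probability p.
    M = (rng.random(W.shape) >= p).astype(W.dtype)
    return f((W * M) @ x)
\end{verbatim}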
Goodfellow et al. \cite{goodfellow2013maxout} proposed Maxout in 2013. Maxout is an improvement of standard dropout \cite{hinton2012improving}. Specifically, the output of each hidden layer is computed as
\begin{equation} \label{eq:maxout 1}
\begin{aligned}
h_i(\mathbf{x}) = \max_{j\in [1, k]}{z_{ij}}
,\ \mathrm{where}\ z_{ij} = \mathbf{x}^T \mathbf{W}_{:ij} + b_{ij}
\end{aligned}
\vspace{-2pt}
\end{equation}
where the weight matrix $\mathbf{W}\in \mathbb{R}^{d\times m\times k}$ and the bias vector $\mathbf{b}\in \mathbb{R}^{m\times k}$ are training parameters, and $\mathbf{x}$, $d$, $m$, and $k$ are the input, the input dimension, the output dimension, and the Maxout parameter (the number of linear pieces), respectively. As can be seen, unlike standard dropout \cite{hinton2012improving}, where only one output is computed for each input at each layer, Maxout computes $k$ candidate outputs per unit and takes their maximum as the output of the layer. This operation makes Maxout essentially a nonlinear activation function, which gives the method its name. Maxout has been applied in computer vision tasks including object detection \cite{zhou2017cad}.
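The following NumPy sketch (ours) makes the shape conventions of the equation above explicit:
\begin{verbatim}
import numpy as np

def maxout_forward(x, W, b):
    # x: (d,), W: (d, m, k), b: (m, k).
    # z[i, j] = x^T W[:, i, j] + b[i, j]; output h[i] = max_j z[i, j].
    z = np.einsum('d,dmk->mk', x, W) + b
    return z.max(axis=1)
\end{verbatim}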
Kingma et al. \cite{kingma2015variational} proposed Variational Dropout in 2015. This work studies Stochastic Gradient Variational Bayes (SGVB) and finds its connection with dropout: the Gaussian Dropout proposed in \cite{srivastava2014dropout} is a local reparameterization of SGVB. The paper thus proposes Variational Dropout, in which the dropout ratio $p$ is not a pre-set hyperparameter requiring manual tuning but a parameter learned during training. In \cite{molchanov2017variational} the authors show that Variational Dropout is an efficient way to perform model compression, significantly reducing the number of parameters of neural networks with a negligible decrease in accuracy.
Gal and Ghahramani proposed Monte Carlo Dropout \cite{gal2016dropout} in 2016. The authors interpret dropout as a Bayesian approximation of deep Gaussian processes. The output of a deep Gaussian process is a probability distribution, and using dropout in the testing phase allows estimating properties of this underlying distribution; for example, the estimated variance can be used to characterize the uncertainty of the model output. This estimation procedure is called Monte Carlo Dropout. It has been applied in a series of works \cite{gal2016uncertainty, zhu2017deep, jungo2017towards, lakshminarayanan2017simple} and can mitigate the problem of representing uncertainty in deep learning more efficiently without sacrificing test accuracy.
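As a sketch, estimating a predictive mean and an uncertainty proxy with Monte Carlo Dropout only requires keeping dropout active at test time; here \texttt{forward\_fn} is a hypothetical stochastic model function of our own naming:
\begin{verbatim}
import numpy as np

def mc_dropout_predict(forward_fn, x, n_samples=50):
    # Several stochastic passes; the sample mean is the prediction
    # and the sample variance is a proxy for model uncertainty.
    preds = np.stack([forward_fn(x, training=True)
                      for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)
\end{verbatim}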
Smith et al. \cite{smith2016gradual} proposed DropIn in 2016. The feed-forward operation for each layer performing DropIn is:
\begin{equation} \label{eq:dropin 1}
\begin{aligned}
\Tilde{\mathbf{y}}^{(l)} &= \mathbf{r}^{(l)}\circ \mathbf{y}^{(l)},\quad
\mathbf{z}^{(l+1)} = \mathbf{W}^{(l+1)} \Tilde{\mathbf{y}}^{(l)} + \mathbf{b}^{(l+1)} \\
\mathbf{y}^{(l+1)} &= f(\mathbf{z}^{(l+1)}) + (1 - \mathbf{r}^{(l)})\circ \mathbf{y}^{(l)}
\end{aligned}
\vspace{-2pt}
\end{equation}
As can be seen, in addition to passing the kept outputs ($\mathbf{r}^{(l)}\circ \mathbf{y}^{(l)}$) to the next layer, DropIn also passes the values at the dropped positions ($(1 - \mathbf{r}^{(l)})\circ \mathbf{y}^{(l)}$) directly to the next layer without going through the nonlinear activation function. This increases the depth of the network while avoiding the vanishing-gradient problem, and provides the same regularization effect as standard dropout.
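A sketch of the DropIn forward pass (ours; it assumes equal layer widths so that the skip term is well defined):
\begin{verbatim}
import numpy as np

def dropin_forward(y_l, W, b, p, f, rng=np.random):
    # Kept activations pass through the layer as usual; dropped
    # activations skip the nonlinearity and are added back.
    r = (rng.random(y_l.shape) >= p).astype(y_l.dtype)
    z = W @ (r * y_l) + b
    return f(z) + (1.0 - r) * y_l
\end{verbatim}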
Li et al. \cite{li2016improved} proposed Evolutional Dropout in 2016. Intuitively, the importance of neurons corresponding to different features in a neural network is different, so their corresponding dropout probabilities should be different. This paper applies this idea to both shallow and deep neural networks: for shallow networks, the dropout ratio is calculated from the second-order statistics of the input data features; for deep networks, the dropout ratio of each layer is calculated in real-time from the output of that layer of each batch. Compared with the standard dropout, Evolutional Dropout improves the accuracy of the results while greatly increasing the convergence speed.
Gal et al. \cite{gal2017concrete} proposed Concrete Dropout in 2017 as an improvement on Monte Carlo Dropout \cite{gal2016dropout}. Monte Carlo Dropout estimates model uncertainty via a grid search over the dropout ratio, which is infeasible for deeper models (e.g., those in computer vision tasks) and for reinforcement learning models because of the excessive computational time and resources required. Building on developments in Bayesian learning, Concrete Dropout uses a continuous relaxation of the discrete dropout mask and proposes a new objective function that allows automatic adjustment of the dropout parameters on large models, reducing the time required for experiments. It also allows the agent in reinforcement learning to dynamically adjust its uncertainty as training proceeds and more training data is exposed. Experiments show that, by automatically learning the dropout probabilities, Concrete Dropout can shorten model training by weeks compared to conventional dropout methods.
Rennie et al. \cite{rennie2014annealed} proposed Annealed Dropout in 2014. As the name implies, ``annealed'' dropout is the decrease of dropout ratio as the number of training epochs increases:
\begin{equation} \label{eq:annealed dropout 1}
\vspace{-2pt}
\begin{aligned}
p[t] = p[t-1] + \alpha_t(\theta)
\end{aligned}
\vspace{-2pt}
\end{equation}
$\alpha_t(\theta)$ is the parameter that controls the dropout ratio. A simple approach is to decrease linearly: the initial dropout ratio is $p[0]$ and decreases to $0$ after $N$ rounds:
\begin{equation} \label{eq:annealed dropout 2}
\vspace{-2pt}
\begin{aligned}
p[t] = \max(0, 1 - \dfrac{t}{N}) p[0]
\end{aligned}
\vspace{-2pt}
\end{equation}
The explanation for this approach is that at the beginning, when little training data has been seen, we only need to ``explain'' the data with a simple model, i.e., we make fewer neurons work and drop more neurons. Later, when more training data is exposed, we can allow a more ``complex'' model to ``explain'' the data, reducing the dropout ratio and making more neurons work.
Morerio et al. \cite{morerio2017curriculum} proposed Curriculum Dropout in 2017. Inspired by curriculum learning \cite{bengio2009curriculum}, in contrast to Annealed Dropout \cite{rennie2014annealed}, Curriculum Dropout increases dropout ratio as the number of training epochs increases. The explanation for this approach is to simulate the learning process from easy to difficult in human learning: the dropout ratio is small at the beginning, introducing less noise and analogous to the ``easy'' task; then the dropout ratio increases, introducing more noise and making the task ``harder''.
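Both kinds of schedule are one-liners. The decreasing form below follows the linear annealing equation above; the increasing form is one common illustrative choice of ours (the exact schedule in \cite{morerio2017curriculum} differs in detail):
\begin{verbatim}
import math

def annealed_dropout_ratio(t, p0, N):
    # Linear decay from p0 to 0 over N epochs.
    return max(0.0, 1.0 - t / N) * p0

def curriculum_dropout_ratio(t, p_bar, gamma=0.01):
    # Ratio grows from 0 toward the target p_bar as training proceeds.
    return p_bar * (1.0 - math.exp(-gamma * t))
\end{verbatim}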
Gomez et al. \cite{gomez2018targeted, gomez2019learning} proposed Targeted Dropout in 2018. Neural pruning is a network compression method \cite{han2015deep} that reduces the number of network parameters and improves efficiency. Targeted Dropout selects and drops those neurons whose absence makes the model most suitable for pruning, thereby facilitating model compression. It is able to achieve better performance with only half of the total number of parameters compared to the original networks without dropout.
Salehinejad and Valaee \cite{salehinejad2019ising} proposed Ising-Dropout in 2019. Borrowing the concept of the Ising model from physics, Ising-Dropout adds an Ising model on top of a neural network to detect and drop the least useful neurons. It can compress the number of parameters by up to 41.18\% and 55.86\% for the classification task on the MNIST and Fashion-MNIST datasets, respectively. The same authors proposed EDropout \cite{salehinejad2021edropout} for neural pruning in 2021, which utilizes an Energy-Based Model (EBM) to decide the pruning state.
In 2020, {\.I}rsoy and Alpayd{\i}n \cite{irsoy2021dropout} proposed a dropout method for hierarchically gated models \cite{titsias2002mixture, qi2016hierarchically}, to prevent overfitting in decision-tree-like models. Ragusa et al. \cite{ragusa2021random} employ dropout on Internet of Things (IoT) models.
Gao et al. \cite{gao2021simcse} proposed SimCSE in 2021. SimCSE is a contrastive learning method for NLP. Specifically, it performs dropout twice on the same instance to obtain two positive samples and treats all other in-batch instances as negatives. Performing data augmentation by applying dropout twice achieves good results on this contrastive learning task. Child-Tuning \cite{xu2021raise} is another application of dropout in NLP: it randomly masks gradients during back-propagation.
Liang et al. \cite{liang2021r} proposed R-Drop (``R'' for ``Regularized'') in 2021. R-Drop generalizes the idea of ``dropout twice'' \cite{gao2021simcse} from contrastive learning to general tasks. A training instance goes through the network twice with random dropout. On the one hand, we want the two predictions to be as close as possible to the label, from which we compute the cross-entropy loss $\mathcal{L}^{(CE)}$; on the other hand, we want the two predictions to be as close as possible to each other, from which we compute the Kullback--Leibler divergence loss $\mathcal{L}^{(KL)}$. The final loss function thus consists of two parts:
\begin{equation} \label{eq:r-drop 1}
\begin{aligned}
\mathcal{L} = \mathcal{L}^{(CE)} + \alpha \mathcal{L}^{(KL)}
\end{aligned}
\vspace{-4pt}
\end{equation}
where the second part ``regularizes'' the first.
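A minimal NumPy sketch of this loss for a single instance follows (ours; the two logit vectors come from two stochastic forward passes, the cross-entropy terms are averaged, and the KL term is taken in its symmetric form):
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def r_drop_loss(logits1, logits2, label, alpha):
    p, q = softmax(logits1), softmax(logits2)
    # Cross-entropy of both passes against the label...
    ce = -0.5 * (np.log(p[label]) + np.log(q[label]))
    # ...plus a symmetric KL term pulling the two passes together.
    kl = 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
    return ce + alpha * kl
\end{verbatim}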
Chen and Yi \cite{chen2021adaptive} proposed AS-Dropout (Adaptive Sparse Dropout) in 2021. AS-Dropout calculates dropout probability adaptively according to the neuron's activation function, such that only a small proportion of neurons are active in each training epoch.
Dropout methods that drop individual neurons are summarized in Table \ref{tab:drop neuron groups}. The typical application scenario for dropping individual neurons is fully connected layers. The basic operation of this type of method is easy to implement and can be applied to a wide range of neural models, and it is likely to yield a stable improvement in model performance in most cases, as will be shown in Section \ref{sec:experiments}. This generality and effectiveness have made it the most popular type of dropout method. However, it is less suitable for models with specific structures (e.g., CNNs, RNNs, Transformers), for which dropout methods usually drop neuron groups.
\subsubsection{Drop Neuron Groups}\label{subsubsec:drop neuron groups}
The dropout methods in \ref{subsubsec:drop single neurons} are mainly performed on fully connected layers. In neural networks with special structures, such as convolutional, residual, and recurrent networks, neurons are aggregated into specific structures to perform certain functions. Randomly dropping individual neurons may not have the expected effect on these networks, so a series of dropout methods specified for them have been proposed.
\paragraph{\textbf{CNNs}} \label{para:drop cnn}
Tompson et al. \cite{tompson2015efficient} first proposed SpatialDropout, specifically for convolutional neural networks, in 2014. In CNNs, for a given feature map, all pixel features within the coverage of the same convolutional kernel are used to compute the output of the next layer, resulting in a strong gradient correlation between adjacent pixels. Standard dropout removes individual pixel features at random, which does little to reduce this interdependence between neurons and is therefore ineffective. In contrast, for a feature tensor of size $n_{\mathrm{feats}}\times \mathrm{height}\times \mathrm{width}$, SpatialDropout samples only $n_{\mathrm{feats}}$ dropout values, i.e., each feature map is either dropped entirely or kept entirely. This reduces the interdependence between neurons in CNNs and has a good regularization effect.
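A sketch of SpatialDropout on a single example (ours; inverted rescaling is used for convenience):
\begin{verbatim}
import numpy as np

def spatial_dropout_forward(x, p, rng=np.random):
    # x: (n_feats, height, width); one Bernoulli draw per feature
    # map, so each map is dropped or kept as a whole.
    mask = (rng.random((x.shape[0], 1, 1)) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)
\end{verbatim}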
Wu and Gu \cite{wu2015towards} proposed Max-pooling Dropout in 2015. For a neural network containing convolutional layers and pooling layers, if the $l$th layer is immediately followed by a pooling layer, the feed-forward operation is expressed as
\begin{equation} \label{eq:max-pooling dropout 1}
\begin{aligned}
a_j^{(l+1)} = pool(a_1^{(l)}, \dots, a_i^{(l)}, \dots, a_n^{(l)}), \quad i\in R_j^{(l)}
\end{aligned}
\vspace{-3pt}
\end{equation}
where $R_j^{(l)}$ is the $j$th pooling region of the $l$th layer, $a_i^{(l)}$ is the activation value of each neuron, and $pool()$ is the pooling function. Two common choices are average pooling, which averages the outputs of all neurons, and max-pooling, which takes the maximum of all neuron outputs; the authors take the latter. Max-pooling Dropout randomly drops the outputs $a_i^{(l)}$ of individual neurons during the training phase, and the result is then passed into the pooling layer:
\begin{equation} \label{eq:max-pooling dropout 2}
\begin{aligned}
\hat{a}_i^{(l)} &= m_i^{(l)} * a_i^{(l)}, \quad m_i^{(l)} \sim \mathrm{Bernoulli}(p) \\
a_j^{(l+1)} &= pool(\hat{a}_1^{(l)}, \dots, \hat{a}_i^{(l)}, \dots, \hat{a}_n^{(l)}), \quad i\in R_j^{(l)}
\end{aligned}
\vspace{-2pt}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.55\linewidth]{figs/max-pooling.png}
\vspace{-6pt}
\caption{Max-pooling Dropout \cite{wu2015towards} drops neuron outputs before they are passed to pooling layer.}
\label{fig:max-pooling}
\vspace{-10pt}
\end{figure}
as shown in Figure \ref{fig:max-pooling}. This is therefore equivalent to selecting the pooled output from a multinomial distribution over the ascending-sorted activations $a^{\prime (l)}_1 \le \dots \le a^{\prime (l)}_n$, with $q = 1 - p$ the dropout probability:
\begin{equation} \label{eq:max-pooling dropout 3}
\begin{aligned}
\mathrm{Pr}(a_j^{(l+1)} = a^{\prime (l)}_i) = p_i = pq^{n-i}, \quad i = 1, 2, \dots, n
\end{aligned}
\vspace{-2pt}
\end{equation}
If the $l$th layer is immediately followed by a convolutional layer, the authors also propose Convolutional Dropout. The feed-forward operation in the training phase is
\begin{equation} \label{eq:max-pooling dropout 5}
\begin{aligned}
m_k^{(l)}(i) &\sim \mathrm{Bernoulli}(p),\quad
\hat{a}_k^{(l)} = a_k^{(l)} \ast m_k^{(l)}, \\
z_j^{(l+1)} &= \sum_{k=1}^{n^{(l)}} \mathrm{conv}(W_j^{(l+1)}, \hat{a}_k^{(l)}),\\
a_j^{(l+1)} &= f(z_j^{(l+1)}).
\end{aligned}
\vspace{-4pt}
\end{equation}
where $a_k^{(l)}$ is the $k$th feature map of the $l$th layer. No dropout is performed during testing, and all neuron outputs are multiplied by the retention probability used in the training phase.
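The max-pooling variant above can be sketched in a few lines of numpy. The function below is a hypothetical illustration of Eq. \ref{eq:max-pooling dropout 2} for 1-D activations, not the authors' implementation; $p$ here is the retention probability, matching the Bernoulli mask in the equation.
\begin{verbatim}
import numpy as np

def max_pooling_dropout(a, p_retain, pool_size=2,
                        rng=np.random.default_rng()):
    # Mask unit activations BEFORE max-pooling (training phase).
    # a: 1-D activations, length divisible by pool_size.
    mask = rng.binomial(1, p_retain, size=a.shape)   # m ~ Bernoulli(p)
    a_hat = a * mask                                 # masked activations
    return a_hat.reshape(-1, pool_size).max(axis=1)  # pooled outputs
\end{verbatim}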
In 2016, Park and Kwak \cite{park2016analysis} made two improvements to SpatialDropout \cite{tompson2015efficient}. The first is to select and drop high activation values on the feature maps or channels; the second is that the dropout ratio is not fixed but sampled from a normal distribution. The authors refer to these two improved dropout methods as Max-drop and Stochastic Dropout, respectively.
Ghiasi et al. \cite{ghiasi2018dropblock} proposed DropBlock in 2018. For each layer of the feature map, DropBlock randomly drops multiple contiguous regions of size $block\_size\times block\_size$. When $block\_size = 1$, DropBlock is reduced to standard dropout; when $block\_size = feature\_map\_size$, i.e., one block can cover the whole layer of feature map, DropBlock is equivalent to SpatialDropout \cite{tompson2015efficient}.
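A simplified sketch of the block-wise masking follows. The real DropBlock derives the number of seed points from a target drop rate and normalizes the kept activations, which we omit here; names and parameters are illustrative.
\begin{verbatim}
import numpy as np

def drop_block(fmap, block_size, n_blocks, rng=np.random.default_rng()):
    # Zero n_blocks contiguous block_size x block_size regions of a
    # single 2-D feature map (simplified DropBlock).
    h, w = fmap.shape
    out = fmap.copy()
    for _ in range(n_blocks):
        i = rng.integers(0, h - block_size + 1)
        j = rng.integers(0, w - block_size + 1)
        out[i:i + block_size, j:j + block_size] = 0.0
    return out
\end{verbatim}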
Khan et al. \cite{khan2018regularization} proposed Spectral Dropout in 2018. A Spectral Dropout operation is added between two layers of a CNN. The operation has three steps: transforming activation values of the previous layer to frequency domain; dropping the components below a certain threshold in frequency domain; and changing back to the original value domain. The feed-forward operation between two layers without Spectral Dropout is expressed in the following equation:
\begin{equation} \label{eq:spectral dropout 1}
\begin{aligned}
\mathbf{A}_l^{\prime} = f(\mathbf{F}_l\otimes \mathbf{A}_{l-1} + \mathbf{b}_l)
\end{aligned}
\vspace{-2pt}
\end{equation}
When performing Spectral Dropout, the operation becomes
\begin{equation} \label{eq:spectral dropout 2}
\begin{aligned}
\mathbf{A}_l = \mathcal{T}^{-1}( \mathbf{M}\circ \mathcal{T}( f(\mathbf{F}_l\otimes \mathbf{A}_{l-1} + \mathbf{b}_l) ) )
\end{aligned}
\vspace{-2pt}
\end{equation}
where $\mathcal{T}$ denotes the frequency transformation and $\mathbf{M}$ is the masking matrix of dropout in frequency domain. This approach serves to filter input noises and effectively speeds up the convergence of network training.
In 2019, Cai et al. \cite{cai2019effective} analyzed why standard dropout does not work well in convolutional neural networks: it conflicts with Batch Normalization \cite{ioffe2015batch}. They verified experimentally that placing the dropout operation before the convolution operation, rather than before the batch normalization operation, effectively improves the effect of dropout. The authors then proposed the Drop-Conv2d method, which improves training by combining feature-channel-level dropout and forward-path-level dropout in CNNs.
Hou and Wang \cite{hou2019weighted} proposed Weighted Channel Dropout for feature channel dropout in 2019. The operation steps are in two stages: scoring feature channels and selecting feature channels. In the first stage, feature channels are scored using Global Average Pooling (GAP) method:
\begin{equation}
\label{eq:weighted channel dropout 1}
\begin{aligned}
\mathrm{score}_i = \dfrac{1}{W\times H} \sum_{j=1}^W \sum_{k=1}^H x_i(j, k)
\end{aligned}
\vspace{-4pt}
\end{equation}
In the second stage, the weighted random selection (WRS) and random number generation (RNG) steps are used to select feature channels for dropout and retention.
Zeng et al. \cite{zeng2021correlation} proposed CorrDrop in 2021. CorrDrop drops out CNN neurons based on their feature correlation and can do it in both spatial manner and channel manner.
Lu et al. \cite{lu2021localdrop} proposed LocalDrop in 2021. This regularization method is based on theoretical analysis of local Rademacher complexity and can be applied to both fully connected layers and convolutional layers.
Pham and Le \cite{pham2021autodropout} proposed AutoDropout in 2021. AutoDropout uses reinforcement learning to train a controller selecting the optimal dropout pattern to train the model. The controller eliminates the need of manually adjusting dropout patterns as in previous methods such as DropBlock.
\paragraph{\textbf{RNNs}}\label{para:drop rnn}
Pachitariu et al. \cite{pachitariu2013regularization} applied standard dropout directly to RNNs in 2013, randomly dropping the outputs of neurons. In the same year, Bayer et al. \cite{bayer2013fast} applied Fast Dropout \cite{wang2013fast} directly to RNNs.
\begin{figure}
\centering
\includegraphics[width=0.37\linewidth]{figs/rnn_regularization.png}
\vspace{-6pt}
\caption{Vanilla dropout for RNNs, which only drops feed-forward connections but not recurrent connections. \cite{zaremba2014recurrent}}
\label{fig:rnn_regularization}
\vspace{-10pt}
\end{figure}
Pham et al. \cite{pham2014dropout} first proposed a dropout method specific to the RNN structure in 2014, rather than just applying random dropout directly to RNNs. Instead of dropping connections between hidden states at different timestamps (recurrent connections), only the connections from input to output direction (feed-forward connections) are dropped. The dropout is performed in the same way as standard dropout: during training, the hidden state vector of each layer is multiplied element-wise by a Bernoulli mask $\mathbf{m}$; during testing, all neurons are active and the output is multiplied by the retention probability $p$:
\begin{equation} \label{eq:rnn dropout 1}
\begin{aligned}
\mathbf{h}_{\mathrm{train}} = \mathbf{m}\odot\mathbf{h},\quad
\mathbf{h}_{\mathrm{test}} = p\mathbf{h}
\end{aligned}
\vspace{-2pt}
\end{equation}
Zaremba et al. \cite{zaremba2014recurrent} also elaborated this method more systematically in 2014. For multilayer LSTMs \cite{hochreiter1997long}, only the connections between layers are dropped, not the connections between different timestamps within the same layer. Let $h_t^l$ be the hidden state at moment $t$ of the $l$th layer, $c_t^l$ be the memory unit at moment $t$ of the $l$th layer, and $T_{n, m}: \mathbb{R}^n \rightarrow\mathbb{R}^m$ be an affine transformation; then dropout applied only to the connections between layers is:
\begin{equation} \label{eq:rnn dropout 3}
\begin{aligned}
\begin{pmatrix}
i\\f\\o\\g
\end{pmatrix} &= \begin{pmatrix}
\mathrm{sigm}\\\mathrm{sigm}\\\mathrm{sigm}\\\tanh
\end{pmatrix} T_{2n,4n} \begin{pmatrix}
\mathbf{D}(h_t^{l-1})\\h_{t-1}^l
\end{pmatrix}, \\
c_t^l &= f\odot c_{t-1}^l + i\odot g,\quad
h_t^l = o\odot \tanh(c_t^l)
\end{aligned}
\vspace{-2pt}
\end{equation}
where $\mathbf{D}$ is dropout operation matrix, which acts only on the output $h_t^{l-1}$ of previous layer at the same timestamp, and not on the output $h_{t-1}^l$ at the previous timestamp of this layer, as shown in Figure \ref{fig:rnn_regularization}\cite{zaremba2014recurrent}.
The dashed lines in Figure \ref{fig:rnn_regularization} indicate the connections to which dropout is applied and may be dropped, while the solid lines indicate the connections retained.
The motivation of both of the above articles \cite{pham2014dropout, zaremba2014recurrent} is that the strength of an RNN is its memory capacity, but if recurrent connections are dropped, that memory capacity will be impaired. To test this idea, the authors of \cite{pham2014dropout} conducted a series of experiments in 2015 \cite{bluche2015apply} to discuss dropout for RNNs and where in the network the dropout operation should be performed. The authors examined the effect of adding dropout layers before the LSTM input, on the LSTM recurrent connections, and after the LSTM output, respectively. They found that in most cases, adding dropout layers on the LSTM input and output directions is better than adding them on the recurrent connections, which experimentally verifies the ideas in the previous two papers.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figs/rnndrop.png}
\vspace{-6pt}
\caption{RNNDrop (right) outperforms standard dropout (left) by generating one mask for each layer in RNN and keeping it throughout the sequence. \cite{moon2015rnndrop, labach2019survey}}
\label{fig:rnndrop}
\vspace{-10pt}
\end{figure}
Moon et al. \cite{moon2015rnndrop} proposed RNNDrop in 2015, which gives a method for dropping recurrent connections between different timestamps of the same layer. This is done by generating only one dropout mask for each layer and then using this one mask at all timestamps of that layer. In this way, elements that are dropped at the first timestamp will not be used at subsequent timestamps, and elements that are kept at the first timestamp will be passed through to the last timestamp. The schematic is shown in Figure \ref{fig:rnndrop}\cite{labach2019survey}. The left side of the figure shows an RNN with standard random dropout, and the right side shows how RNNDrop works, with the same colors indicating the same dropout masks. In this way, the model is regularized with dropout while retaining the RNN's memory capability.
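The shared-mask idea can be illustrated with a toy vanilla-RNN loop in numpy. RNNDrop itself drops the LSTM memory cell, as Eq. \ref{eq:recurrent dropout 3} later shows; the weights and sizes below are arbitrary illustrations of the mask reuse only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
hidden, n_in, p_drop = 8, 4, 0.25
W_x = rng.standard_normal((hidden, n_in))
W_h = rng.standard_normal((hidden, hidden))
sequence = rng.standard_normal((10, n_in))         # 10 timestamps

mask = rng.binomial(1, 1.0 - p_drop, size=hidden)  # sampled ONCE per sequence
h = np.zeros(hidden)
for x_t in sequence:                               # the SAME mask at every t
    h = np.tanh(W_x @ x_t + W_h @ (h * mask))
\end{verbatim}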
Gal and Ghahramani \cite{gal2016theoretically} proposed Variational RNN Dropout in 2015. The authors view dropout as an approximate inference process in Bayesian neural networks, which can also perform dropout of both feed-forward connections and recurrent connections. In this regard, Variational RNN Dropout can be seen as a variant of RNNDrop \cite{moon2015rnndrop}.
Recurrent Dropout proposed by Semeniuta et al. \cite{semeniuta2016recurrent} in 2016 is another dropout method that preserves memory capacity of RNNs and generates random dropout masks at each step, just like standard dropout does \cite{hinton2012improving, srivastava2014dropout}. This is done by dropping only the vectors used to generate hidden state vectors, but not dropping hidden state vectors themselves. Recurrent Dropout is specialized for gated RNNs, such as LSTM\cite{hochreiter1997long} and GRU\cite{chung2014empirical}. For LSTM in Eq. \ref{eq:recurrent dropout 1},
\begin{equation} \label{eq:recurrent dropout 1}
\begin{aligned}
\begin{pmatrix}
\mathbf{i}_t\\\mathbf{f}_t\\\mathbf{o}_t\\\mathbf{g}_t
\end{pmatrix} &= \begin{pmatrix}
\sigma(\mathbf{W}_i[\mathbf{x}_t,\mathbf{h}_{t-1}] + \mathbf{b}_i)\\
\sigma(\mathbf{W}_f[\mathbf{x}_t,\mathbf{h}_{t-1}] + \mathbf{b}_f)\\
\sigma(\mathbf{W}_o[\mathbf{x}_t,\mathbf{h}_{t-1}] + \mathbf{b}_o)\\
f(\mathbf{W}_g[\mathbf{x}_t,\mathbf{h}_{t-1}] + \mathbf{b}_g)
\end{pmatrix}, \\
\mathbf{c}_t = &\mathbf{f}_t\ast \mathbf{c}_{t-1} + \mathbf{i}_t\ast \mathbf{g}_t,\quad
\mathbf{h}_t = \mathbf{o}_t\ast f(\mathbf{c}_t)
\end{aligned}
\vspace{-2pt}
\end{equation}
Variational RNN Dropout\cite{gal2016theoretically} performs dropout as Equation \ref{eq:recurrent dropout 2},
\begin{equation} \label{eq:recurrent dropout 2}
\begin{aligned}
\begin{pmatrix}
\mathbf{i}_t\\\mathbf{f}_t\\\mathbf{o}_t\\\mathbf{g}_t
\end{pmatrix} &= \begin{pmatrix}
\sigma(\mathbf{W}_i[\mathbf{x}_t,d(\mathbf{h}_{t-1})] + \mathbf{b}_i)\\
\sigma(\mathbf{W}_f[\mathbf{x}_t,d(\mathbf{h}_{t-1})] + \mathbf{b}_f)\\
\sigma(\mathbf{W}_o[\mathbf{x}_t,d(\mathbf{h}_{t-1})] + \mathbf{b}_o)\\
f(\mathbf{W}_g[\mathbf{x}_t,d(\mathbf{h}_{t-1})] + \mathbf{b}_g)
\end{pmatrix}
\end{aligned}
\vspace{-2pt}
\end{equation}
RNNDrop\cite{moon2015rnndrop} performs dropout as Equation \ref{eq:recurrent dropout 3}.
\begin{equation} \label{eq:recurrent dropout 3}
\begin{aligned}
\mathbf{c}_t = d(\mathbf{f}_t\ast \mathbf{c}_{t-1} + \mathbf{i}_t\ast \mathbf{g}_t)
\end{aligned}
\vspace{-2pt}
\end{equation}
Recurrent Dropout, on the other hand, performs dropout as Equation \ref{eq:recurrent dropout 4}.
\begin{equation} \label{eq:recurrent dropout 4}
\begin{aligned}
\mathbf{c}_t = \mathbf{f}_t\ast \mathbf{c}_{t-1} + \mathbf{i}_t\ast d(\mathbf{g}_t)
\end{aligned}
\vspace{-2pt}
\end{equation}
Dropout operations performed by RNNDrop\cite{moon2015rnndrop}, Variational RNN Dropout\cite{gal2016theoretically} and Recurrent Dropout are shown in Figure \ref{fig:recurrent_dropout}\cite{semeniuta2016recurrent}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/recurrent_dropout.png}
\vspace{-12pt}
\caption{Comparison of RNNDrop, Variational RNN Dropout and Recurrent Dropout \cite{semeniuta2016recurrent}.}
\label{fig:recurrent_dropout}
\vspace{-7pt}
\end{figure}
Krueger et al. \cite{krueger2016zoneout} proposed Zoneout in 2016. RNNs sometimes face the vanishing gradient problem \cite{hochreiter1991untersuchungen, bengio1994learning}. Inspired by residual-network dropout methods such as Stochastic Depth \cite{huang2016deep} and Swapout \cite{singh2016swapout}, Zoneout randomly replaces the output at a certain timestamp in an RNN with the output of the previous timestamp. Denote the hidden state transfer operation as $h_t = \mathcal{T}(h_{t-1},x_t)$, where $\mathcal{T}$ is often an affine transformation. Performing dropout amounts to replacing the original transfer operation $\mathcal{T}$ with a new operation $\Tilde{\mathcal{T}}$. Standard dropout and Zoneout can be expressed as follows, respectively:
\begin{equation} \label{eq:zoneout 1}
\begin{aligned}
\mathrm{Dropout:}\quad \Tilde{\mathcal{T}} &= d_t\odot \mathcal{T} + (1-d_t)\odot 0 \\
\mathrm{Zoneout:}\quad \Tilde{\mathcal{T}} &= d_t\odot \mathcal{T} + (1-d_t)\odot 1 \\
\end{aligned}
\vspace{-2pt}
\end{equation}
where $d_t$ is a mask vector obeying a Bernoulli distribution. Zoneout thus passes the output of the previous timestamp on to the next timestamp with probability $p$ instead of dropping it, alleviating the vanishing gradient problem while still acting as a regularizer.
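A one-function numpy sketch of the per-unit choice in Eq. \ref{eq:zoneout 1} follows; the function name and the convention that $d_t=1$ selects the newly computed value are our own.
\begin{verbatim}
import numpy as np

def zoneout(h_prev, h_new, p_zoneout, rng=np.random.default_rng()):
    # Each hidden unit keeps its previous value with prob. p_zoneout,
    # otherwise it takes the newly computed value (never zeroed).
    d = rng.binomial(1, 1.0 - p_zoneout, size=h_prev.shape)
    return d * h_new + (1 - d) * h_prev
\end{verbatim}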
Merity et al. \cite{merity2018regularizing} proposed the Weight-dropped LSTM in 2017. Borrowing the idea of DropConnect \cite{wan2013regularization}, instead of dropping neuron activations, the Weight-dropped LSTM drops elements of the weight matrices. That is, for the LSTM in Eq. \ref{eq:recurrent dropout 1}, the Weight-dropped LSTM drops the weight matrices $[\mathbf{W}_i, \mathbf{W}_f, \mathbf{W}_o]$ instead of the hidden state $\mathbf{h}_{t-1}$.
In 2017, Melis et al. \cite{melis2018state} carried out a comprehensive evaluation of the effectiveness of RNN-based language models. The dropout methods covered operate on the RNN input, the feed-forward connections, the recurrent connections, and the output of the last layer.
Zolna et al. \cite{zolna2018fraternal} proposed Fraternal Dropout in 2018. According to Ma et al.'s analysis \cite{ma2016dropout}, for networks trained with dropout, the expected outputs of training and prediction differ. Using different dropout masks produces different outputs, i.e., the outputs depend on the dropout masks, which is not what we want. We want model outputs to be invariant to dropout masks: outputs under different masks should be as similar as possible, with variance as small as possible. Following this idea, Fraternal Dropout trains two neural networks with the same structure and shared parameters, differing only in their dropout masks. It optimizes both networks' objective functions together with the difference between their outputs.
The authors also prove that the upper bound of the regularization term here is the objective function of expected linear dropout in \cite{ma2016dropout}.
\paragraph{\textbf{ResNets and others}}\label{para:drop resnet}
Residual network (ResNet) \cite{he2016deep} is a structure designed to solve problems such as vanishing gradients \cite{hochreiter1991untersuchungen, bengio1994learning} and long training times caused by excessive network depth. Let the output of the $l$th layer of the network be $h_l$ and the transfer function from the $(l-1)$th layer to the $l$th layer be $f_l()$ (which may contain one or more convolution functions, batch normalization functions, and activation functions); then the feed-forward operation containing a residual block is
\begin{equation} \label{eq:resnet 1}
\begin{aligned}
h_l = \mathrm{ReLU}\big(f_l(h_{l-1}) + \mathrm{id}(h_{l-1})\big)
\end{aligned}
\vspace{-2pt}
\end{equation}
where $\mathrm{id}$ denotes identity function, i.e., $h_{l-1}$ is passed to the $l$th layer directly. A series of dropout methods are also proposed for models with ResNet structure.
Huang et al. \cite{huang2016deep} proposed Stochastic Depth in 2016. Stochastic Depth randomly drops some operation blocks and retains only residual connections of that layer:
\begin{equation} \label{eq:stochastic depth 1}
\begin{aligned}
h_l = \mathrm{ReLU}\big(b_l f_l(h_{l-1}) + \mathrm{id}(h_{l-1})\big)
\end{aligned}
\vspace{-2pt}
\end{equation}
where $b_l\sim \mathrm{Bernoulli}(p_l)$ and $p_l$ is the retention probability. When $b_l=0$, the layer reduces to the identity function, which is equivalent to directly copying the result of the previous layer, i.e., the network becomes shallower. In this way, an effectively shallower network is trained while the whole deep network is used during testing, alleviating the vanishing gradient problem \cite{hochreiter1991untersuchungen, bengio1994learning} and the problem of long training times.
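A minimal numpy sketch of one such block follows; \texttt{f} stands for the residual branch $f_l$, and the test-time scaling by $p_l$ reflects the expected value of the Bernoulli gate, a common formulation of the method.
\begin{verbatim}
import numpy as np

def stochastic_depth_block(h, f, p_l, training,
                           rng=np.random.default_rng()):
    # Training: skip the residual branch entirely with prob. 1 - p_l.
    # Testing: keep the branch but scale it by its survival prob. p_l.
    if training:
        b = rng.binomial(1, p_l)
        return np.maximum(b * f(h) + h, 0.0)   # ReLU(b*f(h) + id(h))
    return np.maximum(p_l * f(h) + h, 0.0)
\end{verbatim}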
Kang et al. \cite{kang2016shakeout} proposed Shakeout in 2016. Different from standard dropout, Shakeout acts on the neurons not by making them choose between 0 and the original value, but between two new weights.
The authors also show that Shakeout regularization combines three regularization terms, $L_0$, $L_1$ and $L_2$.
Li and Liu \cite{li2016whiteout} proposed Whiteout in 2016. Both standard dropout \cite{hinton2012improving, srivastava2014dropout} and Shakeout \cite{kang2016shakeout} only introduce Bernoulli noise, while Whiteout introduces Gaussian noise into training process. This is done by adding additive or multiplicative Gaussian noise to the output of each neuron.
Whiteout is the first noise injection regularization technique (NIRT) that imposes an extensive $L_\gamma$, $\gamma\in (0, 2)$ sparse regularization without involving $L_2$ regularization.
Gastaldi \cite{gastaldi2017shake} proposed Shake-Shake in 2017. Shake-Shake is used for three-way ResNet. The original feed-forward operation is
\begin{equation} \label{eq:shake-shake 1}
\begin{aligned}
x_{i+1} = \sigma(x_i + \mathcal{F}(x_i, \mathcal{W}_i^{(1)}) + \mathcal{F}(x_i, \mathcal{W}_i^{(2)}))
\end{aligned}
\vspace{-2pt}
\end{equation}
Shake-Shake introduces random variables $\alpha_i\sim U(0, 1)$ to assign weights to the two-way transfer function:
\begin{equation} \label{eq:shake-shake 2}
\begin{aligned}
x_{i+1} = \sigma(x_i + \alpha_i\mathcal{F}(x_i, \mathcal{W}_i^{(1)}) + (1-\alpha_i)\mathcal{F}(x_i, \mathcal{W}_i^{(2)}))
\end{aligned}
\vspace{-2pt}
\end{equation}
It takes a weight of 0.5 for each path when testing:
\begin{equation} \label{eq:shake-shake 3}
\begin{aligned}
x_{i+1} = \sigma(x_i + 0.5\mathcal{F}(x_i, \mathcal{W}_i^{(1)}) + 0.5\mathcal{F}(x_i, \mathcal{W}_i^{(2)}))
\end{aligned}
\vspace{-2pt}
\end{equation}
Yamada et al. \cite{yamada2019shakedrop} proposed ShakeDrop in 2018. Stochastic Depth \cite{huang2016deep} simply drops or retains a layer, while Shake-Shake \cite{gastaldi2017shake} can assign weights to different pathways but applies only to a three-branch ResNet. ShakeDrop combines both functions. It has two parameters $\alpha$ and $\beta$ that control the assigned weights, and a gate $b_l\sim \mathrm{Bernoulli}(p_l)$ that decides whether the layer is perturbed. ShakeDrop is expressed as
\begin{equation} \label{eq:shakedrop 1}
G(x) = \begin{cases}
x + (b_l + \alpha - b_l\alpha)F(x),\quad \text{in train-fwd} \\
x + (b_l + \beta - b_l\beta)F(x),\quad \text{in train-bwd} \\
x + \mathbb{E}[b_l + \alpha - b_l\alpha]F(x),\quad \text{in test}
\end{cases}
\vspace{-2pt}
\end{equation}
Larsson et al. \cite{larsson2016fractalnet} proposed DropPath in 2016. The authors first proposed a network structure, FractalNet, which achieves extremely deep neural networks based on structural self-similarity.
It is shown that FractalNet can also alleviate the vanishing gradient problem in deep neural networks, just as ResNet does. The authors then proposed a regularization approach for FractalNet that randomly drops sub-paths between input and output within each fractal block. Just as standard dropout reduces dependencies between neurons, this operation reduces dependencies between sub-paths and acts as a regularizer \cite{larsson2016fractalnet}.
Zoph et al. \cite{zoph2018learning} proposed Scheduled DropPath in 2018 as an improvement to DropPath \cite{larsson2016fractalnet}. The dropout ratio increases linearly with the number of training epochs rather than staying fixed, and Scheduled DropPath achieves better performance than the original DropPath.
Singh et al. \cite{singh2016swapout} proposed Swapout in 2016, a synthesis of standard dropout and Stochastic Depth. Let $X$ be the input of a block in a neural network, and let the block compute the output $Y = F(X)$. The output of the $u$th neuron in the block is noted as $F^{(u)}(X)$. $\Theta$ is a tensor with the same shape as $F(X)$ whose elements obey a Bernoulli distribution. Standard dropout makes the output of each neuron be chosen from $\{0$, $F^{(u)}(X)\}$. Stochastic Depth makes the output of each neuron be chosen from $\{X^{(u)}$, $X^{(u)} + F^{(u)}(X)\}$. Swapout, on the other hand, extends the range of possible output values. For a layer with $N$ blocks $F_1, \dots, F_N$, define $N$ independent Bernoulli tensors $\Theta_1, \dots, \Theta_N$ such that the output is computed as
\begin{equation} \label{eq:swapout 2}
\begin{aligned}
Y = \sum_{i=1}^N \Theta_i \odot F_i(X)
\end{aligned}
\vspace{-2pt}
\end{equation}
In this way, the output of each neuron has $2^N$ possible values. Consider the simplest case of $N=2$ in ResNet,
\begin{equation} \label{eq:swapout 3}
\begin{aligned}
Y = \Theta_1 \odot X + \Theta_2 \odot F(X)
\end{aligned}
\vspace{-2pt}
\end{equation}
The output of each neuron can take 4 values: $\{0$, $X^{(u)}$, $F^{(u)}(X)$, $X^{(u)} + F^{(u)}(X)\}$. Each value corresponds to a neuron state:
\begin{enumerate}
\vspace{-4pt}
\item $0$: dropped
\item $X^{(u)}$: skipped by the residual connection
\item $F^{(u)}(X)$: normal
\item $X^{(u)} + F^{(u)}(X)$: a complete residual unit
\vspace{-4pt}
\end{enumerate}
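For the $N=2$ residual case of Eq. \ref{eq:swapout 3}, the per-unit sampling can be sketched as follows; the parameter names are illustrative and the two retention probabilities need not be equal.
\begin{verbatim}
import numpy as np

def swapout(x, fx, p1, p2, rng=np.random.default_rng()):
    # Y = Theta1 (*) X + Theta2 (*) F(X): each unit independently keeps
    # the skip term, the residual term, both, or neither.
    theta1 = rng.binomial(1, p1, size=x.shape)
    theta2 = rng.binomial(1, p2, size=x.shape)
    return theta1 * x + theta2 * fx
\end{verbatim}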
Zhou et al. proposed DropHead \cite{zhou2020scheduled} in 2020, dropping attention heads in the multi-head attention mechanism, a core component of the Transformer. It also adaptively adjusts the dropout ratio during training to achieve better performance.
Dropout methods that drop neuron groups are summarized in Table \ref{tab:drop neuron groups}. The application scenarios of dropping neuron groups are usually models with particular structures such as CNNs, RNNs, ResNets, or Transformers. This type of dropout method can boost the performance of these models, while its implementation is also constrained by the model structure.
\subsection{Drop Embeddings}\label{subsec:drop embeddings}
In some machine learning tasks, the input data is first converted to embeddings then goes through the model. Dropout methods introduced in this section drop embeddings during training. For example, in recommendation, some dropout methods drop embeddings of user and item interaction histories during training to cope with the cold-start problem, while others randomly drop user and item feature embeddings to handle the possible missing information problem in real scenarios.
Volkovs et al. \cite{volkovs2017dropoutnet} proposed DropoutNet in 2017. The embedding matrices of users and items are denoted by $\mathbf{U}$ and $\mathbf{I}$. The embedding vectors of the $u$th user and $i$th item are $\mathbf{U}_u$ and $\mathbf{I}_i$, respectively.
The context information matrices of users and items are $\mathbf{\Phi}^{\mathcal{U}}$ and $\mathbf{\Phi}^{\mathcal{I}}$, respectively. DropoutNet proposes a new form of objective that deals with the problem of missing interaction history. Previous models would add extra context information terms to the objective, hoping these newly added terms would be useful when the interaction history terms are not available. However, it is difficult to determine the weights of these two components. DropoutNet's objective handles this problem automatically:
\begin{equation} \label{eq:dropoutnet 1}
\begin{aligned}
L &= \sum_{u, i}\big(\mathbf{U}_u\mathbf{I}_i^T - f_{\mathcal{U}}(\mathbf{U}_u, \mathbf{\Phi}_u^{\mathcal{U}})f_{\mathcal{I}}(\mathbf{I}_i, \mathbf{\Phi}_i^{\mathcal{I}})^T \big)^2
\end{aligned}
\vspace{-2pt}
\end{equation}
During training, DropoutNet randomly drops a portion of the inputs $\mathbf{U}_u$ or $\mathbf{I}_i$ by setting them to zero. For training instances that are kept, the objective lets the model ignore the context information part ($\mathbf{\Phi}^{\mathcal{U}}$ and $\mathbf{\Phi}^{\mathcal{I}}$) as much as possible, as Equation \ref{eq:dropoutnet 1} shows. For training instances with user or item inputs dropped, the objective makes the model rely as much as possible on the context information part, as shown in Equation \ref{eq:dropoutnet 2}.
\begin{equation} \label{eq:dropoutnet 2}
\begin{aligned}
\textit{u}\text{ cold start: }
L_{ui} &= \big(\mathbf{U}_u\mathbf{I}_i^T - f_{\mathcal{U}}(\mathbf{0}, \mathbf{\Phi}_u^{\mathcal{U}}) f_{\mathcal{I}}(\mathbf{I}_i, \mathbf{\Phi}_i^{\mathcal{I}})^T \big)^2 \\
\textit{i}\text{ cold start: }
L_{ui} &= \big(\mathbf{U}_u\mathbf{I}_i^T - f_{\mathcal{U}}(\mathbf{U}_u, \mathbf{\Phi}_u^{\mathcal{U}}) f_{\mathcal{I}}(\mathbf{0}, \mathbf{\Phi}_i^{\mathcal{I}})^T \big)^2
\end{aligned}
\vspace{-2pt}
\end{equation}
\begin{table*}[ht]
\centering
\caption{Table of methods that drop embeddings or input information.}
\vspace{-6pt}
\begin{threeparttable}
\begin{tabular}{llllll}
\toprule
Method & Year & Category & Brief Description & \makecell[l]{Original\\Scenario} & Source \\
\midrule
DropoutNet\cite{volkovs2017dropoutnet} & 2017 & 2\dag & Randomly drop interactions & Recom. & NeurIPS \\
ACCM\cite{shi2018attention} & 2018 & 2 & Drop interactions \& use attention mechanism & Recom. & CIKM \\
AFS\cite{shi2019adaptive} & 2019 & 2 & Drop interactions and attribute values & Recom. & CIKM \\
WordDropout\cite{sennrich2016edinburgh} & 2016 & 3.1\dag & Drop words in machine translation & NLP & WMT16 \\
BERT\cite{devlin2019bert} & 2018 & 3.1 & Mask and predict tokens in pre-training phase & NLP & NAACL \\
Mask-Predict\cite{ghazvininejad2019mask} & 2019 & 3.1 & Mask and regenerate words in machine translation & NLP & EMNLP \\
ERNIE\cite{sun2019ernie} & 2019 & 3.1 & Incorporate human knowledge into pre-training & NLP & arxiv \\
Whole Word Masking\cite{cui2019pre} & 2019 & 3.1 & Randomly mask chinese words & NLP & arxiv \\
Mask and Infill\cite{wu2019mask} & 2019 & 3.1 & Mask and infill tokens in pre-training & NLP & IJCAI \\
AMS\cite{ye2019align} & 2019 & 3.1 & Incorporate general knowledge using ConceptNet & NLP & arxiv \\
PEGASUS\cite{zhang2020pegasus} & 2019 & 3.1 & Mask sentences for summary generation & NLP & ICML \\
Token Drop\cite{zhang2020token} & 2020 & 3.1 & Drop tokens instead of words & NLP & COLING \\
Selective Masking \cite{gu2020train} & 2020 & 3.1 & Introduce a task-guided pre-training stage & NLP & EMNLP \\
S3-Rec\cite{zhou2020s3} & 2020 & 3.1 & Enhance interaction sequence like BERT & Recom. & CIKM \\
CutOut\cite{devries2017improved} & 2017 & 3.2\dag & Drop a square region on the input image & CV & arxiv \\
Random Erasing\cite{zhong2020random} & 2017 & 3.2 & Drop a rectangular region on the input image & CV & AAAI \\
Hide-and-Seek\cite{singh2017hide} & 2017 & 3.2 & Drop several square regions & CV & ICCV \\
Mixup\cite{zhang2017mixup} & 2017 & 3.2 & Take linear interpolations of training instances as input & CV & ICLR \\
Manifold Mixup\cite{verma2019manifold} & 2019 & 3.2 & Generalize Mixup to feature level & CV & ICML \\
CutMix\cite{yun2019cutmix} & 2019 & 3.2 & Replace regions of one image with another's & CV & ICCV \\
GridMask\cite{chen2020gridmask} & 2020 & 3.2 & Drop regularly tiled square regions & CV & arxiv \\
Attentive CutMix\cite{walawalkar2020attentive} & 2020 & 3.2 & Improve CutMix with attention mechanism & CV & ICASSP \\
MAE\cite{he2021masked} & 2021 & 3.2 & Mask and reconstruct patches of input images & CV & arxiv \\
GraphSAGE\cite{hamilton2017inductive} & 2017 & 3.3\dag & Randomly sample nodes & Graph & NeurIPS \\
FastGCN\cite{chen2018fastgcn} & 2018 & 3.3 & Sample nodes from the whole graph & Graph & ICLR \\
AS-GCN\cite{huang2018adaptive} & 2018 & 3.3 & Node sampling layer by layer & Graph & NeurIPS \\
GAT\cite{velivckovic2017graph} & 2018 & 3.3 & Attention on edges & Graph & ICLR \\
LADIES\cite{zou2019layer} & 2019 & 3.3 & Adaptively sample nodes by layer & Graph & NeurIPS \\
GRAND\cite{feng2020graph} & 2020 & 3.3 & Random propagation on graph & Graph & NeurIPS \\
SGAT\cite{ye2021sparse} & 2020 & 3.3 & Learn sparse attention coefficients on graph & Graph & TKDE \\
DropEdge\cite{rong2019dropedge} & 2020 & 3.3 & Randomly drop edges & Graph & ICLR \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[\dag] 2 refers to dropping embeddings, 3.1 dropping one-dimensional information, 3.2 dropping two-dimensional information, and 3.3 dropping graph information.
\end{tablenotes}
\end{threeparttable}
\label{tab:drop embeddings}
\vspace{-12pt}
\end{table*}
Shi et al. \cite{shi2018attention} proposed the ACCM model in 2018. DropoutNet \cite{volkovs2017dropoutnet}, while automatically processing both interaction history information and context information, is implemented in such a way that its objective pushes the model entirely to one side. When interaction history is available, the model tends to rely completely on it and ignore attribute information; when interaction history is missing, the model tends to rely completely on attribute information. The ACCM model, on the other hand, achieves flexible control of the weights of the two components by using an attention mechanism \cite{vaswani2017attention}.
The model contains a user part and an item part. Each part computes embeddings from both interaction history and context information. For the user part, the attention network computes attention weights for the two kinds of information separately, and then obtains the final embedding $\mathbf{u}$:
\begin{equation} \label{eq:accm 1}
\begin{aligned}
h_{CF}^u = \mathbf{h}^T \tanh&(\mathbf{W}\mathbf{u}_{CF} + \mathbf{b}),\
h_{CB}^u = \mathbf{h}^T \tanh(\mathbf{W}\mathbf{u}_{CB} + \mathbf{b}) \\
a_{CF}^{u} &= \dfrac{\exp(h_{CF}^u)}{\exp(h_{CF}^u) + \exp(h_{CB}^u)} = 1 - a_{CB}^{u} \\
\mathbf{u} &= a_{CF}^{u}\mathbf{u}_{CF} + a_{CB}^{u}\mathbf{u}_{CB}
\end{aligned}
\vspace{-2pt}
\end{equation}
Item embedding $\mathbf{v}$ is generated in the same way as $\mathbf{u}$. The model prediction is
\begin{equation} \label{eq:accm 2}
\begin{aligned}
y = b_g + b_u + b_v + \mathbf{u}\mathbf{v}
\end{aligned}
\vspace{-2pt}
\end{equation}
Like DropoutNet, the interaction history embeddings are randomly dropped during training, replaced with random vectors:
\begin{equation} \label{eq:accm 3}
\begin{aligned}
\mathbf{u} &= a_{CF}^{u}[(1-c^u)\mathbf{u}_{CF} + c^u\mathbf{u}_r] + a_{CB}^{u}\mathbf{u}_{CB} \\
\mathbf{v} &= a_{CF}^{v}[(1-c^v)\mathbf{v}_{CF} + c^v\mathbf{v}_r] + a_{CB}^{v}\mathbf{v}_{CB} \\
y &= b_g + c^u b_u + c^v b_v + \mathbf{u}\mathbf{v}
\end{aligned}
\vspace{-2pt}
\end{equation}
where $c^u, c^v \sim \mathrm{Bernoulli}(p)$, with $p$ the dropout probability. $\mathbf{u}_r, \mathbf{v}_r$ are random vectors with the same initial distribution as the user and item vectors. By using the attention mechanism and randomly dropping embeddings, ACCM better solves the cold-start problem in recommendation.
Similar to missing interaction history, missing content information is sometimes encountered in recommendation. Since the cold-start problem can be better solved by embedding dropout and the attention mechanism together, the missing-attribute problem can be solved in a similar way, which is the idea of AFS (Adaptive Feature Sampling) \cite{shi2019adaptive}. AFS randomly drops a portion of user and item context information during training to simulate missing attribute values, making the model more robust at test time.
Dropout methods that drop embeddings are summarized in Table \ref{tab:drop embeddings}. Recommendation models are usually the application scenarios of dropping embeddings, whose input information usually needs to be converted into vector representations for model operations. Dropping embeddings can be effective in such scenarios, while it can only be used when there are embeddings of input data.
\subsection{Drop Input Information}\label{subsec:drop inputs}
Some dropout methods drop part of input information directly during training, which serves for various purposes under different scenarios, such as regularization, data augmentation, or data representation enhancement of pre-training stage.
\subsubsection{One-dimensional Information}\label{subsubsec:drop 1d info}
Sennrich et al. \cite{sennrich2016edinburgh} used WordDropout in machine translation in 2016 to drop words from the input data.
Ghazvininejad et al. \cite{ghazvininejad2019mask} proposed Mask-Predict in 2019. While most machine translation systems generate text from left to right, Mask-Predict uses a masking approach to train the model. It first predicts all target words and then iteratively masks and regenerates the subset of words in which the model has the least confidence. Unlike BERT \cite{devlin2019bert}, this paper does not use a masked language model for pre-training but uses it directly with Mask-Predict to generate text.
Zhang et al. \cite{zhang2020token} in 2020 proposed Token Drop mechanism for neural network machine translation. WordDropout \cite{sennrich2016edinburgh} randomly drops words from sentences, while Token Drop method drops tokens.
Devlin et al. \cite{devlin2019bert} proposed BERT in 2019. In the pre-training phase, 15\% of the tokens are randomly masked. These masked tokens are then predicted in both directions using a self-attention Transformer to enhance the data representation in the pre-training phase. After BERT came out, many BERT-like or BERT-based methods were proposed for enhancing data representation in NLP pre-training.
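The core masking step can be sketched as below. Note that this is a simplification: BERT actually replaces 80\% of the selected tokens with the mask token, keeps 10\%, and randomizes 10\%; the label value $-100$ as a loss-ignore marker is a common convention we adopt here, not part of the paper.
\begin{verbatim}
import numpy as np

def mask_tokens(token_ids, mask_id, p=0.15, rng=np.random.default_rng()):
    # Select a fraction p of positions, replace them with [MASK], and
    # keep the original ids as prediction targets at those positions.
    ids = np.array(token_ids)
    chosen = rng.random(ids.shape) < p
    labels = np.where(chosen, ids, -100)   # -100: ignored by the loss
    ids[chosen] = mask_id
    return ids, labels
\end{verbatim}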
Cui et al. \cite{cui2019pre} in 2019 proposed Whole Word Masking for Chinese language models. BERT randomly masks words for English language models, but randomly masking Chinese characters is less appropriate because Chinese characters may not be a complete semantic unit. Whole Word Masking masks words instead of Chinese characters when training Chinese language models.
Sun et al. \cite{sun2019ernie} proposed ERNIE in 2019. ERNIE introduces human knowledge into word vector training. It achieves this by considering masking operations of three levels. The first level, like BERT, randomly masks English or Chinese words. The second level randomly masks phrases identified by existing toolkits, incorporating phrase information into training. The third level randomly masks entities pre-defined by human, incorporating prior human knowledge into the training of word vectors.
Wu et al. \cite{wu2019mask} proposed Mask and Infill in 2019. The pre-training phase is divided into a masking phase and a filling phase used to accomplish the task of sentiment transfer.
Ye et al. \cite{ye2019align} proposed AMS method in 2019. A general knowledge Q\&A dataset is generated through ConceptNet \cite{speer2017conceptnet}, and the general knowledge concepts in each utterance of this dataset are masked and predicted so that the model learns general knowledge through this process.
Zhang et al. \cite{zhang2020pegasus} in 2019 proposed PEGASUS for summary generation tasks. During pre-training, not only word tokens are randomly masked, but also important sentences. These sentences are part of the summary to be generated and need to be predicted by the model.
Wang et al. \cite{wang2020semantic} in 2019 imitated BERT to introduce dropout into speech recognition, randomly masking acoustic signals and features in the input audio.
Selective Masking \cite{gu2020train} proposed by Gu et al. in 2020 introduced a task-guided pre-training stage between general pre-training and fine-tuning stage.
There are similarities between the input of sequential recommendation and the input of NLP tasks, since both are temporal one-dimensional sequences. So there are also recommendation models that borrow the idea of BERT: Zhou et al. \cite{zhou2020s3} proposed S3-Rec in 2020. It divides the recommendation task into a pre-training stage and a fine-tuning stage just like BERT. S3-Rec randomly masks a portion of item ids and attributes in the pre-training phase to enhance the representations of item ids, item attributes, and item sequences.
Dropout methods that drop one-dimensional input information are summarized in Table \ref{tab:drop embeddings}. They are mainly applied in NLP or sequential recommendation tasks, where the input data is organized as temporal one-dimensional sequences.
\subsubsection{Two-dimensional Information}\label{subsubsec:drop 2d info}
Existing data augmentation methods can be broadly classified into three categories: spatial transformation, color distortion, and information dropping. Dropping two-dimensional input information is generally regarded as an information-dropping form of data augmentation.
DeVries and Taylor \cite{devries2017improved} proposed CutOut in 2017. For every training image, CutOut randomly selects a square region and sets the pixel values within this region to zero. The difference between CutOut and methods in Section \ref{subsubsec:drop neuron groups} such as SpatialDropout \cite{tompson2015efficient} or DropBlock \cite{ghiasi2018dropblock} is that CutOut operates at the level of input information. Compared to dropping model structure, directly dropping a part of the input image is easier to implement. In addition, dropping input information is equivalent to generating a new training sample, so there is no need to multiply neuron outputs by a factor to eliminate the bias during testing as the methods in Section \ref{subsec:drop structures} do.
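A minimal sketch of the operation, clipping the square at the image border; the function and parameter names are ours.
\begin{verbatim}
import numpy as np

def cutout(img, size, rng=np.random.default_rng()):
    # Zero a size x size square at a random position of an H x W image.
    h, w = img.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = img.copy()
    out[y0:y1, x0:x1] = 0.0
    return out
\end{verbatim}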
Zhong et al. \cite{zhong2020random} proposed Random Erasing in 2017. Similar to CutOut \cite{devries2017improved}, the training image is covered with a rectangular box with random position and random size. The pixel values within the rectangular box are also random.
Singh et al. \cite{singh2017hide} proposed Hide-and-Seek in 2017. CutOut \cite{devries2017improved} and Random Erasing \cite{zhong2020random} drop only one rectangular region from each input image, while Hide-and-Seek divides the image into $S\times S$ small squares, each of which is dropped with probability $p_{hide}$. The purpose of Hide-and-Seek is to let the model extract features from other parts of the image after the most discriminative part has been dropped, preventing the model from relying too much on certain parts.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/GridMask1.png}
\vspace{-16pt}
\caption{Comparison of dropout patterns of CutOut \cite{devries2017improved}, Random Erasing \cite{zhong2020random}, Hide-and-Seek \cite{singh2017hide} and GridMask \cite{chen2020gridmask}.}
\label{fig:gridmask}
\vspace{-10pt}
\end{figure}
Chen et al. \cite{chen2020gridmask} proposed GridMask in 2020. Dropout pattern of GridMask is a number of equally spaced square regions tiled on a plane, determined by four parameters $(r, d, \delta_x, \delta_y)$. The dropout pattern of GridMask is more regular compared to CutOut, Random Erasing and Hide-and-Seek. Compared to AutoAugment \cite{cubuk2019autoaugment} which employs reinforcement learning to search for dropout patterns, GridMask consumes much less training cost. The above four methods are schematically shown in Figure \ref{fig:gridmask}.
\emph{Dropping} input data can also be seen as \emph{introducing noise} into input data, where the noise is Bernoulli noise. Some of the following data augmentation methods do not necessarily \emph{drop} input data, but \emph{introduce noise} into it.
Zhang et al. \cite{zhang2017mixup} proposed Mixup in 2017. Mixup augments data in a simple way: the linear interpolations of training instances are also taken as training instances. Specifically, for training instances $(\mathbf{x}_i, y_i)$ and $(\mathbf{x}_j, y_j)$,
\begin{equation} \label{eq:mixup 1}
\begin{aligned}
\Tilde{\mathbf{x}} &= \lambda \mathbf{x}_i + (1 - \lambda)\mathbf{x}_j \\
\Tilde{y} &= \lambda y_i + (1 - \lambda)y_j
\end{aligned}
\vspace{-4pt}
\end{equation}
Mixup takes $(\Tilde{\mathbf{x}}, \Tilde{y})$ as a training instance as well.
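Eq. \ref{eq:mixup 1} in code; the paper samples $\lambda$ from a Beta$(\alpha,\alpha)$ distribution, and the labels are assumed to be one-hot vectors.
\begin{verbatim}
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng()):
    # Linear interpolation of two training instances and their labels.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
\end{verbatim}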
Verma et al. \cite{verma2019manifold} proposed Manifold Mixup in 2019. Manifold Mixup generalizes the Mixup \cite{zhang2017mixup} operation to the feature level. The motivation is that features carry higher-order semantic information, and interpolation at the feature level could yield more meaningful samples.
Yun et al. \cite{yun2019cutmix} proposed CutMix in 2019, which is an improvement on Mixup \cite{zhang2017mixup} and CutOut \cite{devries2017improved}. CutOut fills part of the image with meaningless regions, which is not conducive to the model making full use of the training data. Mixup uses linear interpolation to augment the data; however, the newly produced images are not natural images. CutMix, on the other hand, randomly selects rectangular regions of an image $\mathbf{x}_A$ and replaces them with the regions at the same locations of another image $\mathbf{x}_B$. The corresponding label is replaced by a combination of the two labels:
\begin{equation} \label{eq:cutmix 1}
\begin{aligned}
\Tilde{\mathbf{x}} &= \mathbf{M}\odot \mathbf{x}_A + (1 - \mathbf{M})\odot \mathbf{x}_B \\
\Tilde{y} &= \lambda y_A + (1 - \lambda)y_B
\end{aligned}
\vspace{-4pt}
\end{equation}
where $\mathbf{M}$ is the masking matrix.
Walawalkar et al. \cite{walawalkar2020attentive} proposed Attentive CutMix in 2020, which further improved CutMix by using an attention mechanism to select the most discriminative regions for replacement. Operations of Mixup, CutOut, CutMix and Attentive CutMix are shown in Figure \ref{fig:attentivecutmix}\cite{walawalkar2020attentive}.
He et al. \cite{he2021masked} proposed Masked Autoencoders (MAE) in 2021. It randomly masks patches of the input image and reconstructs the missing pixels during pre-training.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figs/Attentive_CutMix.png}
\vspace{-6pt}
\caption{Comparison of dropout patterns of Mixup \cite{zhang2017mixup}, CutOut \cite{devries2017improved}, CutMix \cite{yun2019cutmix} and Attentive CutMix. \cite{walawalkar2020attentive}}
\label{fig:attentivecutmix}
\vspace{-10pt}
\end{figure}
Dropout methods that drop two-dimensional information are summarized in Table \ref{tab:drop embeddings}. They are mainly applied in computer vision tasks, where input data is organized as pixel matrices.
\subsubsection{Graph Information}\label{subsubsec:drop graph info}
Graph neural networks (GNNs) have a wide range of applications in various tasks such as node classification, cluster detection, and recommender systems \cite{weng2020gain, huan2021search, wang2021forecasting}. In the training of GNNs, some methods randomly drop nodes or edges and use only part of graph information for training, serving as a regularization technique.
\paragraph{\textbf{Drop Nodes}}
Hamilton et al. \cite{hamilton2017inductive} proposed GraphSAGE (SAmple and AGGreGatE) in 2017. Before this, GCNs were generally trained in the way of \emph{transductive learning}, which requires all nodes to be visible at training time and needs to calculate a node's embedding by all its neighbors. GraphSAGE, on the other hand, adopts \emph{inductive learning}, which requires only some of the neighbors to predict a node's embedding. To achieve this, GraphSAGE does not directly train node embeddings but trains aggregation functions, which compute node embeddings from its neighbors. When a new node is added to the graph during testing, the trained aggregation functions can predict its embedding from its neighbors. Parameters of the aggregation functions are trained with an unsupervised learning objective that makes the representations of closer nodes more similar and farther nodes less similar,
\begin{equation} \label{eq:graphsage 1}
\begin{aligned}
J_{\mathcal{G}}(\mathbf{z}_u) = &-\log\big(\sigma(\mathbf{z}_u^T\mathbf{z}_v)\big) \\
&- Q\cdot \mathbb{E}_{v_n\sim P_n(v)}\log\big(\sigma(-\mathbf{z}_u^T\mathbf{z}_{v_n}) \big)
\end{aligned}
\vspace{-2pt}
\end{equation}
where $v$ is a node that co-occurs near $u$ within a fixed number of steps, $P_n$ is the negative sampling distribution, and $Q$ is the number of negative samples. This objective can be replaced with any task-specific objective for other downstream tasks. This training method of sampling only some nodes makes the GCN more generalizable, reduces the computational complexity of training, and improves model performance.
A series of node-dropout training methods have been proposed after GraphSAGE. Chen et al. \cite{chen2018fastgcn} proposed FastGCN in 2018, whose idea is similar to GraphSAGE \cite{hamilton2017inductive}. The difference is that FastGCN randomly samples all nodes in the whole graph, not only the neighbors of a certain node. Considering node sampling efficiency, FastGCN is significantly faster than the original GCN as well as GraphSAGE, while maintaining comparable prediction performance.
Huang et al. \cite{huang2018adaptive} proposed AS-GCN in 2018. Again, the nodes are sampled during training; its differences with GraphSAGE \cite{hamilton2017inductive} and FastGCN \cite{chen2018fastgcn} are threefold: AS-GCN samples nodes layer by layer instead of independently; the sampler is adaptive; and AS-GCN skips some edges when transferring information between two nodes with long distances, enhancing the efficiency of information propagation. The experiments on running time show that AS-GCN is faster than the original GCN and the former node-wise sampling methods.
Zou et al. \cite{zou2019layer} proposed LADIES in 2019. The previous single-node-based sampling methods suffer from the problem of too many neighbors, while the layer-based sampling methods suffer from too-sparse connections. LADIES also samples layer by layer, but it calculates the importance of each node in the next layer and samples the most critical nodes among them. This alleviates the problem of sparse connections while limiting the number of neighbors.
Feng et al. \cite{feng2020graph} proposed GRAND in 2020. It performs data augmentation using a Random Propagation method. For the graph feature matrix, the authors compare two dropout methods: random dropout of matrix elements and random dropout of matrix rows. The former is equivalent to standard dropout at the feature level, while the latter drops nodes. The latter performs better in their experimental comparison.
\paragraph{\textbf{Drop Edges}}
Veli{\v{c}}kovi{\'c} et al. \cite{velivckovic2017graph} proposed GAT (Graph ATtention Networks) in 2017. GAT uses attention mechanism to compute the importance of different edges and train GNN using more important information. In 2021 Ye and Ji \cite{ye2021sparse} improved on GAT and proposed SGAT. SGAT learns sparse attention coefficients on the graph and produces an edge-sparsified graph.
Rong et al. \cite{rong2019dropedge} proposed DropEdge in 2020. The GCN training process is prone to overfitting and over-smoothing. Over-smoothing is the phenomenon in which the representations of all nodes on the graph tend to become the same, occurring when the GCN is too deep. The authors address these two problems by randomly dropping edges during training: dropping edges can be seen as a data augmentation method that introduces noise into the input data to prevent overfitting; it also reduces information propagation through edges, thus preventing over-smoothing.
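Edge dropping itself is a one-liner over an edge list. The sketch below assumes the graph is stored as a $2\times E$ array of (source, target) pairs and that the kept set is resampled every training epoch.
\begin{verbatim}
import numpy as np

def drop_edge(edge_index, p, rng=np.random.default_rng()):
    # Randomly remove a fraction p of the edges each training epoch.
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

edges = np.array([[0, 0, 1, 2, 3],
                  [1, 2, 2, 3, 0]])       # toy graph with 5 edges
print(drop_edge(edges, p=0.4))
\end{verbatim}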
Dropout methods that drop graph information are widely used in GCN, and we summarize them in Table \ref{tab:drop embeddings}.
Dropping input information can be an effective way of data augmentation or regularization. As will be shown in Section \ref{sec:experiments}, it is a good way of augmenting sequences in recommendation. It can be applied to a wide range of scenarios, since all machine learning tasks have input information. Meanwhile, it is performed on the input data rather than the model parameters, so its regularization effect is not as stable as that of dropping model structures.
\subsection{Summary and Interconnections between Dropout Methods}\label{subsec:interconnections}
Based on where in a machine learning task the dropout operation performs, we classify commonly used dropout methods into three major categories: dropping model structures, dropping embeddings and dropping input information. Dropping model structures is divided into two subcategories of dropping individual neurons and dropping neuron groups, according to the granularity of dropout operation. Dropping input information is divided into three subcategories of dropping one-dimensional information, two-dimensional information, and graph information according to the form of input information.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/discussion.png}
\vspace{-16pt}
\caption{Training procedure and dropout position in neural models. We classify dropout methods into three major category based on where their operations perform in a training process: dropping input information (\ding{172}), dropping embeddings (\ding{173}), and dropping model structures (\ding{174}).}
\label{fig:discussion}
\vspace{-10pt}
\end{figure}
The three types of dropout methods, namely dropping input information, dropping embeddings, and dropping model structures, are marked as \ding{172}, \ding{173}, and \ding{174} in Figure \ref{fig:discussion}, respectively. They are performed at three different stages of the training process, which is our classification criterion.
Within each block, there may also be different layers. For example, for the model part, if we use a deep network, we can perform dropout in every layer or only in some layers. If we use a recurrent neural network, we can perform dropout in its feed-forward direction, its recurrent direction, or part of the layers. Since a common implementation of dropout is to zero the neuron outputs, if the output of layer $l$ is considered the input data of layer $l+1$, then dropping neurons of layer $l$ can be considered dropping the input data of layer $l+1$. Thus dropping model structure and dropping input data are not clear-cut; they are highly correlated. This is clearer in computer vision tasks where convolutional neural networks are used: methods in Section \ref{para:drop cnn} and Section \ref{subsubsec:drop 2d info} operate at different levels, but their actual implementations are highly similar. Performing the dropout operations of Section \ref{para:drop cnn} at the input-image level yields the operations of Section \ref{subsubsec:drop 2d info}.
\section{Contributions of Dropout Methods}\label{sec:discussion}
In Section \ref{sec:survey}, we classify commonly used dropout methods into three major categories according to the stage at which the dropout operation is performed, discuss their applications in neural models and analyze their interconnections. In this section, we discuss the contributions of dropout methods from the perspective of effectiveness and efficiency.
\subsection{Improving Effectiveness}\label{subsec:improving_effectiveness}
Dropout enables models to better utilize training data and promotes model effectiveness in many ways.
\noindent $\bullet$ \textbf{Preventing overfitting.} Most of the dropout methods are used as regularization methods \cite{wager2013dropout, helmbold2015inductive} to prevent overfitting. For the methods of dropping model structure, during the training process, a part of neurons is dropped randomly to reduce the interdependence between neurons and prevent overfitting. The method of dropping input information enhances the robustness of the model by introducing noise into the input data. Meanwhile, many dropout methods have contributions other than preventing overfitting.
\noindent $\bullet$ \textbf{Simulating testing phase.} Some methods are used to simulate the possible situations in the testing phase \cite{volkovs2017dropoutnet, shi2018attention, shi2019adaptive}. The testing phase may face an information deficit, and the model needs to give predictions under the absence of information. Therefore, these methods drop part of the information at training time so that the model does not rely excessively on this possibly missing information and improves the performance of the testing phase.
\noindent $\bullet$ \textbf{Data augmentation.} Some methods are used for data augmentation \cite{bouthillier2015dropout, devries2017improved, zhong2020random, chen2020gridmask, yun2019cutmix}. Noise is introduced into the input data to create more training samples and improve the training effect of the model. Dropping training data can be seen as introducing Bernoulli noise into data.
\noindent $\bullet$ \textbf{Enhancing data representation.} Some methods are used to enhance data representation in pre-training phase \cite{sun2019ernie, wu2019mask, zhang2020pegasus, zhou2020s3}. These BERT \cite{devlin2019bert} based methods randomly mask part of the input data and use the unmasked part to predict the masked part to enhance data representation.
\noindent $\bullet$ \textbf{Preventing over-smoothing.} Dropout methods in graph neural networks can also solve the over-smoothing problem \cite{rong2019dropedge}. Over-smoothing occurs when a GCN is so deep that the representations of all nodes on the graph tend to become the same. Randomly dropping edges during training reduces information propagation through edges and prevents over-smoothing.
\subsection{Improving Efficiency}\label{subsec:improving_efficiency}
Besides improving model effectiveness, some dropout methods can also improve model efficiency for certain tasks.
\noindent $\bullet$ \textbf{Accelerating GCN training.} In GCN scenarios, node sampling technique proposed by GraphSAGE \cite{hamilton2017inductive} efficiently accelerates GCN training. It only needs some of the nodes to perform the training process instead of needing all node neighbors. Later works like FastGCN\cite{chen2018fastgcn} and AS-GCN \cite{huang2018adaptive} improve this sampling technique, making it faster or sample in an adaptive way.
\noindent $\bullet$ \textbf{Model compression.} Some methods are used for model compression \cite{molchanov2017variational, neklyudov2017structured, gomez2018targeted, gomez2019learning, salehinejad2019ising, salehinejad2021edropout}. These methods make the model structure easier to compress after random dropout of neurons, e.g., easier to perform neural pruning. Model compression reduces model parameters, which can improve training efficiency and prevent overfitting.
\noindent $\bullet$ \textbf{Model uncertainty measurement.} Some methods are used to measure the model uncertainty \cite{gal2016dropout, gal2016uncertainty, gal2017concrete, lakshminarayanan2017simple}. These methods view dropout as a Bayesian learning process. For example, in Monte Carlo Dropout \cite{gal2016dropout}, the authors interpret dropout as a Bayesian approximation of a deep Gaussian process.
Monte Carlo Dropout estimates the uncertainty of the model output by performing a grid search over the dropout rate, which is almost unusable for deeper models (e.g., those in computer vision tasks) and reinforcement learning models because of the excessive computational time and resources consumed. Thanks to the development of Bayesian learning, Concrete Dropout \cite{gal2017concrete} uses a continuous relaxation of dropout's discrete mask. A new objective function is proposed to automatically adjust the dropout rate on large models, reducing the time required for experiments. It also allows the agent in reinforcement learning to dynamically adjust its uncertainty as the training process proceeds and more training data is observed.
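At prediction time, Monte Carlo Dropout simply keeps dropout active and aggregates several stochastic forward passes. In the sketch below, \texttt{stochastic\_forward} stands for any model whose forward pass applies dropout; that callable is an assumption of this illustration.
\begin{verbatim}
import numpy as np

def mc_dropout_predict(stochastic_forward, x, n_samples=50):
    # Run T stochastic forward passes with dropout still active;
    # the mean is the prediction, the spread estimates uncertainty.
    preds = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
\end{verbatim}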
\section{Dropout Experiments in Recommendation Models}\label{sec:experiments}
We have reviewed multiple types of dropout methods and discussed their interconnections and contributions. However, each work has its own experiments to verify the effect of its dropout method, so the methods' effectiveness has not been investigated under a unified framework and evaluation system. In this section, we experimentally investigate four classes of dropout methods on recommendation models. We choose recommendation models as our experimental scenario because they utilize various kinds of heterogeneous information, which is transformed into different forms (from input data, to embeddings, to model structures), covering the range of dropout operations we reviewed in Section \ref{sec:survey}. Such a variety of information sources and forms provides a suitable environment for our comparison and verification of different dropout methods.
We first introduce the selected recommendation models, the implementations of the four dropout methods on each of them (Section \ref{subsec:implementations}), and the datasets and experimental settings (Section \ref{subsec:exp setup}). Then, we analyze the experimental results and present comparisons to evaluate the effectiveness of each dropout method (Section \ref{subsec:results_and_analysis}). Finally, we explore the effect of the dropout ratio on model performance (Section \ref{subsec:effect_of_dropout_ratio}).
\subsection{Recommendation Models and Implementations of Dropout Methods}\label{subsec:implementations}
We choose five recommendation models belonging to four classes:
\begin{itemize}
\vspace{-4pt}
\item Traditional recommendation model: BPRMF \cite{rendle2009bpr}
\item Neural recommendation model utilizing context information: NFM \cite{he2017neural}
\item Sequential recommendation models: GRU4Rec \cite{hidasi2015session} and SASRec \cite{kang2018self}
\item Graph recommendation model: LightGCN \cite{he2020lightgcn}
\vspace{-4pt}
\end{itemize}
The four dropout methods are dropout of model structure, dropout of input information, dropout of embeddings, and dropout of graph information. Since there are significant differences between graph information and other input information in recommender systems, we treat them as different methods. We elaborate on the implementations of the four dropout methods on each recommendation model in Appendix \ref{append:implementation}.
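To give a flavor of what such an implementation looks like, here is a minimal sketch of field-level embedding dropout; the per-field masking and the inverted scaling are our own assumptions, and the exact per-model variants are those described in the appendix.
\begin{verbatim}
import torch

def drop_embeddings(emb, drop_rate, training=True):
    """Zero out whole feature-field embeddings at random.

    emb: (batch, num_fields, dim) stacked field embeddings.
    """
    if not training or drop_rate == 0.0:
        return emb
    mask = (torch.rand(emb.shape[0], emb.shape[1],
                       device=emb.device) >= drop_rate).float()
    return emb * mask.unsqueeze(-1) / (1.0 - drop_rate)
\end{verbatim}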
\subsection{Datasets and Experiment Settings}\label{subsec:exp setup}
\begin{table}[ht]
\vspace{-4pt}
\centering
\caption{Dataset Statistics}
\vspace{-4pt}
\begin{tabular}{lllll}
\toprule
& \#user & \#item & \#interaction & density(\%) \\
\midrule
MovieLens-1M & 6040 & 3883 & 1000209 & 4.26 \\
ml1m-cold & 6040 & 3883 & 797675 & 3.40 \\
Amazon Baby 5-core & 19445 & 7050 & 160792 & 0.117 \\
\bottomrule
\end{tabular}
\label{tab:dataset_attributes}
\vspace{-2pt}
\end{table}
We use two datasets for experiments: MovieLens-1M-cold and Amazon Baby 5-core
\cite{mcauley2015image, he2016ups}.
MovieLens-1M-cold is obtained by artificially creating a cold-start condition based on MovieLens-1M
\cite{harper2015movielens}. Each user in MovieLens-1M has at least 20 interactions, so there is no cold-start scenario for users. To make our experimental environment closer to real-world recommendation, we randomly select 10\% of users and 10\% of items and remove all of their interactions from the training set, constructing a group of cold-start users and items. We also randomly select user and item attributes and set their values to zero (unknown), so that unknown attribute values account for 10\% of all attribute values in the dataset.
After this process, the resulting dataset is named MovieLens-1M-cold, or ml1m-cold for short.
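The construction can be sketched as follows; the column names and the pandas-style interface are assumptions for illustration, not our exact preprocessing code.
\begin{verbatim}
import numpy as np

def make_cold_start(train_df, user_ids, item_ids, frac=0.1, seed=0):
    """Remove all training interactions of a random 10% of users and
    items, creating artificial cold-start users and items."""
    rng = np.random.default_rng(seed)
    cold_u = set(rng.choice(user_ids, int(frac * len(user_ids)),
                            replace=False))
    cold_i = set(rng.choice(item_ids, int(frac * len(item_ids)),
                            replace=False))
    keep = ~(train_df["user"].isin(cold_u) | train_df["item"].isin(cold_i))
    return train_df[keep]
\end{verbatim}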
The statistics of the datasets are shown in Table \ref{tab:dataset_attributes}. The density of Amazon Baby 5-core is only about 1/30 of that of ml1m-cold. We choose one dense and one sparse dataset to make our experimental results more general.
We consider the Top-N recommendation task. The parameter values common to all models are shown in Table \ref{tab:ExperimentSettings} in Appendix \ref{append:model_param}.
This ensures a consistent evaluation environment and comparability of evaluation results. According to the analysis of Krichene et al. \cite{krichene2020sampled}, negative sampling during testing can bias the results. Therefore, we conduct non-sampling evaluation for all our experiments.
Parameters specific to each model are shown in Table \ref{tab:ExperimentSettings2} in Appendix \ref{append:model_param}. The training batch size is set to 1024 on ml1m-cold for faster training.
We searched over the $L_2$ regularization coefficient and chose $10^{-6}$ for all models on the ml1m-cold dataset. On Amazon Baby 5-core, we chose $10^{-5}$ for BPR, NFM, and GRU4Rec; $10^{-6}$ for SASRec; and $10^{-4}$ for LightGCN.
\subsection{Results and Analysis} \label{subsec:results_and_analysis}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/Baby_results.png}
\vspace{-6pt}
\caption{NDCG@10 on Amazon Baby 5-core}
\label{fig:Baby_results}
\vspace{-10pt}
\end{figure}
\subsubsection{Overall Results} \label{subsubsec:overall_results}
\begin{table}[ht]
\vspace{-2pt}
\centering
\caption{Overall NDCG@10 results}
\vspace{-4pt}
\begin{threeparttable}
\begin{tabular}{cllll}
\toprule
& Origin & \makecell[c]{Drop Model\\Structure} & \makecell[c]{Drop Input\\Info} & \makecell[c]{Drop\\Embedding} \\
\midrule
\multicolumn{5}{c}{Amazon Baby 5-core}\\
\multicolumn{1}{l|}{BPR} & 0.00969 & 0.00973 & 0.00888** & 0.00916* \\
\multicolumn{1}{l|}{NFM} & 0.00657 & 0.00737** & 0.00654 & 0.00647 \\
\multicolumn{1}{l|}{GRU4Rec} & 0.01071 & 0.01146 & 0.01777** & 0.01096 \\
\multicolumn{1}{l|}{SASRec} & 0.01428 & 0.01562* & 0.02143** & 0.01502 \\
\multicolumn{1}{l|}{LightGCN\dag} & 0.01392 & 0.01392 & 0.01275 & 0.01420 \\ \hline \\
\multicolumn{5}{c}{ml1m-cold}\\
\multicolumn{1}{l|}{BPR} & 0.0339 & 0.0364** & 0.0331 & 0.0332 \\
\multicolumn{1}{l|}{NFM} & 0.0335 & 0.0353* & 0.0334 & 0.0354** \\
\multicolumn{1}{l|}{GRU4Rec} & 0.0964 & 0.1010** & 0.1084** & 0.1013** \\
\multicolumn{1}{l|}{SASRec} & 0.1064 & 0.1092** & 0.1093 & 0.1063 \\
\multicolumn{1}{l|}{LightGCN\dag} & 0.0377 & 0.0388 & 0.0361** & 0.0376 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[]*: $p < 0.05$, **: $p < 0.01$, compared to the origin value (not using any dropout methods)
\item[]\dag: Drop graph info for LightGCN on Amazon Baby 5-core is 0.01403, on ml1m-cold is 0.0383
\end{tablenotes}
\end{threeparttable}
\label{tab:results_all}
\vspace{-2pt}
\end{table}
We present the overall NDCG@10 results in Table \ref{tab:results_all}; each value in the table is the best result for one dropout method on one model over dropout ratios in $\{0.1, 0.2, 0.3\}$. We plot the results as curves in Figures \ref{fig:Baby_results} and \ref{fig:ml1m-cold_results}. Detailed experimental results for the other evaluation metrics (NDCG@5, 20, 50; HR@10, 20) and each dropout ratio are in Appendix \ref{append:exp data}.
According to the experimental results, we summarize the effects of the different dropout methods on the five recommendation models.
For traditional recommendation models that do not use neural networks, dropping model structure has a regularizing effect and improves model performance. Dropping input information and dropping embeddings can be detrimental to the performance of the model.
For neural network models using contextual information, both dropping model structure and dropping embeddings improve model performance. This may be because the former acts as a regularizer, while the latter allows the model to take advantage of multiple aspects of information to better cope with cold-start situations. Dropping input information does not affect model effectiveness.
For sequential models, all three dropout methods improve model performance. Among them, dropping input information yields the most significant improvement if the dropout ratio is chosen properly, because dropping items from input sequences can be viewed as a type of sequence augmentation \cite{liu2021augmenting}. Dropping model structure gives the most stable improvement. Dropping embeddings also leaves the model improved or, at least, unchanged.
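A minimal sketch of this kind of augmentation follows; it is our own illustration, not the exact implementation used in the experiments.
\begin{verbatim}
import numpy as np

def drop_sequence_items(seq, drop_rate, rng=np.random.default_rng()):
    """Randomly delete items from an interaction sequence, which acts
    as a simple sequence augmentation for sequential recommenders."""
    keep = rng.random(len(seq)) >= drop_rate
    kept = [item for item, flag in zip(seq, keep) if flag]
    return kept if kept else list(seq[-1:])  # never return an empty sequence
\end{verbatim}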
For the graph recommendation model, this paper only explores the lightweight model LightGCN, which contains a small number of parameters and thus does not require much regularization. Accordingly, all four dropout methods either do not affect the model performance or even have a detrimental effect.
\subsubsection{Discussion on the Applications of the Four Dropout Methods}\label{subsubsec:conclude 2}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/ml1m-cold_results.png}
\vspace{-6pt}
\caption{NDCG@10 on ml1m-cold}
\label{fig:ml1m-cold_results}
\vspace{-10pt}
\end{figure}
As described in Section \ref{sec:survey}, the four dropout methods operate at different levels and may make different contributions, and Section \ref{subsubsec:overall_results} shows that their performance varies across scenarios. Therefore, it is not feasible to simply determine which of them is generally better or worse, but we can analyze their features. This section provides some analysis and discussion of the properties of the four dropout methods.
Dropping model structure is the most stable dropout method in our experiments. It improves performance, or at least leaves it unchanged, for all models, with significant improvement for most of them. This reveals that this classical dropout method is still the most effective form of regularization, and that dropout designed according to the structural properties of the model can achieve good results, except for models with very few parameters (like LightGCN) that do not need much regularization.
Dropping input information has significant side effects on the performance of traditional models and graph recommendation models, and has no effect on the neural model utilizing contextual information. This indicates that recommendation models utilizing less information and having fewer parameters, such as BPRMF and LightGCN in this paper, do not need much regularization, and dropping input information harms their performance. However, it significantly improves sequential recommendation models and is the most effective of the four methods there, because dropping the input information of sequential models can be viewed as a form of sequence augmentation.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/drop_rate_Baby.png}
\vspace{-14pt}
\caption{Effect of dropout ratio on Amazon Baby 5-core}
\label{fig:drop_rate_Baby}
\vspace{-10pt}
\end{figure}
Dropping embeddings has no significant effect on model performance in most cases, while it slightly improves NFM. The embedding dropout method is from \cite{shi2018attention, shi2019adaptive}, which use an attention mechanism to let the model automatically select the information to be exploited, so that the model can rely on the information that is kept after dropping part of the embeddings. In contrast, the models used in this experiment do not have this attention mechanism, so the improvement is limited.
For dropping edges in the graph, there is neither an improvement nor a decrease in model performance. This is determined by the nature of LightGCN \cite{he2020lightgcn}, which has only a small number of parameters. Dropping edges or nodes may be effective for models with larger numbers of parameters, such as NGCF \cite{wang2019neural}.
\subsection{Effect of Dropout Ratio}\label{subsec:effect_of_dropout_ratio}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/drop_rate_ml1m-cold.png}
\vspace{-14pt}
\caption{Effect of dropout ratio on ml1m-cold}
\label{fig:drop_rate_ml1m-cold}
\vspace{-10pt}
\end{figure}
In this section, we analyze parameter sensitivity: how does the dropout ratio affect model performance?
For each dataset, each model, and each evaluation metric, we test dropout ratios of $\{0.1, 0.2, 0.3\}$. We plot the NDCG@10 values in Figures \ref{fig:drop_rate_Baby} and \ref{fig:drop_rate_ml1m-cold}.
As can be seen in Figures \ref{fig:drop_rate_Baby} and \ref{fig:drop_rate_ml1m-cold},
the orange line stays above the blue line, indicating that dropping model structure almost always leads to a stable improvement in model performance. For the two sequential recommendation models, GRU4Rec and SASRec, the red line is higher than the other three colored lines, indicating that dropping input information has the most significant improvement effect on sequential recommendation models when the dropout ratio is chosen properly. The appropriate range of the dropout ratio for SASRec on ml1m-cold is low; 0.2 and 0.3 are too large and cause its performance to decline. The purple line improves NFM on ml1m-cold because ml1m-cold contains rich contextual information, and dropping embeddings allows NFM to utilize multifaceted information more robustly.
\section{Future Directions}\label{sec:discussion2}
Based on our review and experiments, in this section we further discuss several topics about dropout and analyze some potential research directions in this field.
\subsection{Selection of Dropout Methods}
Though specific dropout methods achieve impressive improvements on certain neural models, deciding which type of dropout method to use is not an easy task. Based on our experimental results in Section \ref{sec:experiments}, distinct dropout methods work best for different types of models: dropping input information for sequential recommendation models, dropping embeddings for recommendation models that utilize contextual information, and dropping model structure for all models. But can we determine which dropout method should be used according to the characteristics of the model before conducting experiments? If so, much of the time spent enumerating dropout methods during training could be saved.
\subsection{Hyperparameter Optimization of Dropout}
Early dropout methods require manually set dropout ratios and patterns. Standard Dropout \cite{hinton2012improving} and DropConnect \cite{wan2013regularization} need a manually set dropout ratio, and the dropout patterns of methods like DropBlock \cite{ghiasi2018dropblock} and GridMask \cite{chen2020gridmask} need to be deliberately designed.
Some later methods make progress in automating this process to some extent. As more data is exposed throughout the training process, Variational Dropout \cite{kingma2015variational}, Concrete Dropout \cite{gal2017concrete}, and Curriculum Dropout \cite{morerio2017curriculum} can automatically adjust the dropout ratio towards more suitable values. Instead of randomly dropping neurons, Targeted Dropout \cite{gomez2018targeted} and Ising-Dropout \cite{salehinejad2019ising} compute the most suitable neurons to drop, making dropout pattern design more automatic.
AutoDropout \cite{pham2021autodropout} uses reinforcement learning to train a controller, which decides the dropout patterns in CNN and Transformers. In the future, more efficient ways of optimizing dropout hyperparameters could be explored.
\subsection{Efficient Dropout}
Besides effectiveness, efficiency also needs to be considered when using dropout, since the dropout operation itself takes time. Some of the methods in Section \ref{sec:survey} add attention components to calculate dropout patterns, which may prolong the training time; the edge-dropping operation in Section \ref{sec:experiments} slows down our training; and the methods that use reinforcement learning \cite{pham2021autodropout} require excessive computational time and resources.
Fast Dropout \cite{wang2013fast} took the first step toward improving the efficiency of the dropout operation itself, and many methods have made progress in this direction: Concrete Dropout optimizes the model-uncertainty estimation process of Monte Carlo Dropout, and the series of GCN-based dropout methods \cite{hamilton2017inductive, huang2018adaptive, feng2020graph} has kept improving node-dropping training. Since more advanced techniques like reinforcement learning have now been adopted for dropout, improving their efficiency to speed up the dropout operation could be another research direction.
\subsection{Understanding Dropout Theoretically}
The effectiveness of dropout has been irrefutably verified by the experiments of hundreds of works. However, mathematical proofs of the validity of dropout have been rare. Baldi and Sadowski in 2013 gave a mathematical formalization of dropout and used it to analyze the averaging and regularization properties of dropout \cite{baldi2013understanding}. Gal and Ghahramani in 2016 cast dropout network training as approximate inference in Bayesian neural networks, achieving a significant improvement in experimental results without increasing time complexity \cite{gal2015bayesian}. Gal and Ghahramani's other works \cite{gal2016dropout, gal2016theoretically} also give mathematical derivations of the validity of the proposed dropout methods. In the future, establishing the effectiveness of dropout not only by intuitive explanation and experimental verification but also by mathematical proof could be a challenging research direction.
\subsection{Unified Evaluations under Different Scenarios}
An analysis of the broader applicability of dropout is also needed. We evaluate the effectiveness of dropout methods in recommendation scenarios, but how well does each of the aforementioned three types of dropout methods work in other scenarios? In the future, unified evaluations under other scenarios, such as natural language processing, computer vision, and graph neural networks, are important for analyzing the general applicability of the different types of dropout methods.
\subsection{Dropout Applications in Different Domains}
Most dropout methods that drop input information are domain-specific, designed for a certain domain like NLP or CV, as we reviewed in Section \ref{subsec:drop inputs}. Recently, however, MAE \cite{he2021masked} brought the random masking strategy from NLP pre-training to CV and achieved good results. This indicates that the applications of dropout methods in different domains could be embarking on similar trajectories.
\vspace{-6pt}
\section{Conclusions}\label{sec:conclusion}
In this paper, we investigate more than seventy dropout methods in neural network models and classify them into three major categories and six subcategories according to the stage at which the dropout operation is performed. We discuss their applications in neural models, their contributions, and their interconnections.
We conduct experiments on five recommendation models to verify the effectiveness of each type of dropout method under our classification framework, and find that dropping model structure has the most general and stable improvement effect on the models, while dropping input information and dropping embeddings are model-specific.
Finally, we present some open problems and potential research directions, hoping to promote research in this field. Dropout methods are basic and universally used training techniques in today's neural models, helping with our steps towards better machine learning and artificial intelligence. We hope this survey can help readers better understand the works in this research area.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work is supported by the National Key Research and Development Program of China (2018YFC0831900), Natural Science Foundation of China (Grant No. 62002191, 61672311, 61532011) and Tsinghua University Guoqiang Research Institute.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The dangerous dynamics of the coronavirus spread throughout the world has given rise to numerous studies across a wide scientific spectrum. Improving known epidemic models and developing new ones are complicated tasks because of the lack of verified statistics on the infection spread and disease dynamics. Unstable predictability of the infection, ambiguity with drugs \cite{Corey:2020}, uncertainty regarding immunity to the disease \cite{Nature:2020}, and other factors (such as viral mutation) make it difficult to predict the dynamics of the pandemic.
These difficulties relate, in particular, to mathematical models that depend on a significant number of free (statistically driven) parameters. The known models of the SIR family give solutions in the form of smooth functions \cite{Choisy:2007} (solutions of differential equations) that include important random factors only indirectly.
Naturally, in such a situation, the most statistically reliable forecasts are obtained by methods based on the direct application of the central limit theorem, with a predicted error of $1/\sqrt N$; see \cite{Barmparis:2020a},\cite{Eugene:2020a},\cite{Fandaou:2020a} and references therein.
In this paper, we propose the use of the dynamic Monte Carlo (DMC) method, which self-consistently includes various dynamic random factors. Such a technique was previously used to study processes associated with aggregation, viscous flow properties, and the formation of biological structures, and it allows scaling of the associated geometric and dynamic quantities that characterize these phenomena \cite{Vicsek:1995},\cite{Solon:2015},\cite{Solon:2015a}.
In our study, as a control (free) parameter, we use the generalized risk factor $\beta$, which includes some of the factors mentioned above in an integral form. In our 2D toy model, the transmission of infection occurs through contacts of randomly moving individuals, which determines the complex dynamics of the infection spread and various critical aspects of such dynamics.
The paper is organized as follows. In Section 2, we formulate our approach and examine the behavior of infected individuals (order parameter $A(t,\beta)$), which, as it turns out later, depends critically on the value of $\beta$. We also discuss the similarity of the asymptotic behavior of the infection dynamics to the critical phase transition in a two-dimensional (2D) percolation system. In the next Section, we analyze the dynamic properties of the extended system, where we deal with two additional parameters that allow turning the quarantine state on and off. The following Section contains the study of the dynamics of the infection spread and the formation of immunity for infected individuals. The last Section summarizes our conclusions.
\section{Dynamic Monte Carlo simulations}
First we explain the properties of the dynamic Monte Carlo (DMC) approach that we employ. To study the infection dynamics in an epidemiological system (which is far from equilibrium), we use the DMC method, which allows the investigation of both temporal and spatial properties by numerical simulation.
As a toy model, we choose a bounded 2D $L\times L$ system (where $L$ is the system size) that contains a disordered population of $N$ individuals.
Following the classification commonly known from the SIR model \cite{Choisy:2007}, in our DMC model we divide the host population into distinct categories according to epidemiological status: susceptible (S), currently infectious (I), and recovered (R). The total size of the host population is then $N = S + I + R$, and all individuals are born in the susceptible category \cite{Choisy:2007}. Following actual situations, we assume that initially there is no maternally derived immunity. (The effect of immunity is studied in the following sections.) Upon contact with infectious individuals, the susceptibles may get infected and move into the infectious category. To apply the DMC approach, we construct a Person class (individual, alias object) that encapsulates the properties of a randomly placed and moving individual and contains the following significant attributes
\begin{equation}
\{x,y,v_{x},v_{y},I,M\},\label{attrib}
\end{equation}
where $x,y$ are the components of the position, $v_{x},v_{y}$ are the components of the velocity, and the parameters $I,M$ describe the states infected/uninfected and immunized/non-immunized, respectively. The list of Persons representing the total host population is used in our DMC simulations.
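A minimal Python rendering of the Person class of Eq.~(\ref{attrib}) could read as follows; the attribute names are illustrative.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Person:
    """One individual of the host population (attributes as in the text)."""
    x: float          # position components
    y: float
    vx: float         # velocity components
    vy: float
    infected: bool    # state I: infected/uninfected
    immune: bool      # state M: immunized/non-immunized
\end{verbatim}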
One of the underlying reasons why epidemiological systems exhibit variation is the different ways that individuals in a population come into contact with each other.
In our DMC simulation we assume that the spreading (transmission) of the infection occurs through random contacts between moving individuals.
To implement this in the DMC simulations, we use the following strategy. (i) A contact can occur only between two nearest individuals. (ii) At any contact, the infected state is transmitted to the other contact person, but the infected individual can still recover with probability $1-\beta$ (recall that $\beta \in [0,1]$ is the risk factor). This means that if $\beta \lesssim 1$, the probability of recovery is small.
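One possible reading of rules (i)--(ii) as code is the following sketch; the pairwise loop and the exact update order are our simplifying assumptions, and immunity is ignored here (it is added in a later section).
\begin{verbatim}
import numpy as np

def contact_step(people, beta, r=6.0, rng=np.random.default_rng()):
    """Transmit infection between nearest pairs within radius r; after
    a contact, each involved individual stays infected with probability
    beta, i.e., recovers with probability 1 - beta."""
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            if not (a.infected or b.infected):
                continue
            if (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= r ** 2:
                a.infected = rng.random() < beta
                b.infected = rng.random() < beta
\end{verbatim}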
To take advantage of visualization when applying the DMC technique, we let each object have a visual representation: a yellow circle (non-immunized individual), a green circle (immunized but not infected individual), or a red circle (infected individual); see Fig.~\ref{Pic_Fig1a}.
We use the interaction radius $r$ as the unit scale to measure distances between individuals (we used $r = 6$; see Fig.~\ref{Pic_Fig1a}), while the time unit $\Delta t = 1$ is the time interval between two updates of the directions and positions.
In our simulations we used the simplest initial conditions: at time $t=0$ the positions and velocities of all $N$ individuals are randomly distributed.
We use a velocity scale with random $v_x, v_y \in [2,10]$, for which the individuals always interact with their actual neighbors and move fast enough to change the configuration after a few updates. According to our simulations, variation of the actual range of values of $v_x, v_y$ does not affect the results.
We also investigated cases where the basic parameter of the model, the density $\rho = N/L^2$, is slightly varied.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.45\textwidth]{Fig1a.png}
\caption{(Color online.) The snapshots ($N=1000$, $\beta=0.95$, $L=400$) show the dynamics of the infection spread at: (a) $t=10$, (b) $t=30$, (c) $t=50$, (d) $t=70$. We observe that in the case shown, at $t=70$ nearly all the individuals are infected.}
\label{Pic_Fig1a}
\end{figure}
As the simulation time runs, many contacts occur between nearest randomly moving persons, which leads to fast and uncontrollable transfer of infection between many individuals; see Fig.~\ref{Pic_Fig1a}.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig2a.png}
\caption{(Color online.) The dynamics of the infection spreading coefficient (order parameter) $A=I/N$ for times $t<8$ at different values of the risk factor (control parameter) $\beta$. [In this figure the abscissa axis shows the fitting time $0.01t$.] The blue lines (arrows A) show the numerical simulation (DMC) data, while the red lines (arrows B) display the fitting function Eq.~(\ref{fitting}), where only the coefficient $a_{0}$ changes considerably under variation of $\beta$: (a) shows the case $\beta=0.99$, (b) $\beta=0.90$, (c) $\beta=0.80$, (d) $\beta = 0.60$. At $\beta<0.60$ the DMC solution rapidly converges to 0 ($A=0$). This means that for $\beta<\beta_{c}\approx0.60$ all the infected individuals will have recovered by $t=8$.}
\label{Pic_Fig2a}
\end{figure}
It is of great interest to investigate the temporal infection dynamics at various risk factors $\beta$. The dynamics of the infection spread (coefficient $A(t)=I(t)/N$) as a function of time $t$ is displayed in Fig.~\ref{Pic_Fig2a}.
Since $A(t)$ is a random-valued function, we fit (see \cite{Press:2002}) $A(t)$ by a suitable fitting function chosen as
\begin{equation}
f(t)=a_{0}t^{a_{1}}\tanh(t^{a_{2}}),\label{fitting}
\end{equation}
where $a_{0,1,2}$ are the fitting coefficients. We found that $a_{1}$ is very small, $\sim 10^{-5}$, and $a_{2}\simeq2$ in all cases, but the amplitude $a_{0}$ changes considerably under variation of $\beta$. In Fig.~\ref{Pic_Fig2a} the blue lines show the numerical simulation (DMC) data, while the red lines display the fitting function $f(t)$. Fig.~\ref{Pic_Fig2a}(a) shows the case $\beta=0.99$, (b) $\beta=0.90$, (c) $\beta=0.80$, (d) $\beta=0.60$. We note the remarkable observation that for $\beta<0.60$ the system asymptotically converges to the trivial solution $A\simeq a_0=0$ already for $t\simeq 6$.
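The fit can be reproduced with a standard least-squares routine; a minimal sketch (the variable names are ours) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f(t, a0, a1, a2):
    """Fitting function f(t) = a0 * t**a1 * tanh(t**a2)."""
    return a0 * t**a1 * np.tanh(t**a2)

# t_data, A_data: simulation time grid and infection fraction A(t)
# coeffs, cov = curve_fit(f, t_data, A_data, p0=[0.5, 1e-5, 2.0])
\end{verbatim}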
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig3a.png}
\caption{(Color online.) Comparison of the order parameter for the infection spread $A(\beta)$ and the order parameter for 2D percolation $P(p)$, where $p$ is the occupation probability of the defect state \cite{Grimmett:1999a}. The red line shows the dependence of the (slightly tailored) amplitude parameter $a_0$ of the fitting function, corresponding to $A=I/N$ at fixed time $t=8$, as a function of the risk factor $\beta$. The blue line shows the dependence of the order parameter $P(p)$ for a 2D percolating material as a function of the occupation probability $p$. We observe that both curves are very close and that the phase transition to the infected/percolating state occurs similarly, at $\beta_{c}\simeq p_{c}\simeq0.6$, in both cases.}
\label{Pic_Fig3a}
\end{figure}
Such an observation leads to an interesting assumption: the studied dynamics of the infection spread can (asymptotically) be associated with the critical transition in a two-dimensional (2D) percolation system, which occurs when the occupation probability of defects is $p_{c}=0.594$ \cite{Isichenko:1992a},\cite{Grimmett:1999a},\cite{Stauffer:1992},\cite{Burlak:2009a},\cite{Burlak:2015}; see Fig.~\ref{Pic_Fig3a}. This assumption is studied in the following Section.
\section{Critical value of the risk factor}
Fig.~\ref{Pic_Fig3a} displays a comparison of the above-mentioned dependencies. In Fig.~\ref{Pic_Fig3a} the red line shows the dependence $a_0(\beta)$ (see Fig.~\ref{Pic_Fig2a}) associated with the infection parameter $A(\beta)=I/N$, and the blue line shows the percolation order parameter $P(p)$ as a function of the occupation defect probability $p$. We observe that both dependencies are in excellent agreement, and at $\beta_{c}\simeq p_{c}\simeq0.6$ the phase transition to the infected/percolating state occurs. From Fig.~\ref{Pic_Fig3a} we can conclude that the parameter $A(\beta)$ can be referred to in what follows as an order parameter (similarly to $P(p)$).
This implies that the formalism of the critical percolation phase transition \cite{Isichenko:1992a},\cite{Grimmett:1999a},\cite{Stauffer:1992} can be applied to investigate the asymptotics of the infection spread at various $\beta$.
On the other hand, the good agreement between the results of the DMC modeling and the critical transition in the 2D percolation system shows the general applicability of the DMC approach to analyzing the dynamics of infection spread in an epidemiological system.
\section{The extension of the model}
\subsection{Quarantine regime}
Mass infection shown in Fig.~\ref{Pic_Fig1a} is an extremely dangerous and highly undesirable scenario for the development of the epidemiological situation. This Section discusses an extension of the model that, in principle, allows suspending this trend. One of the simple solutions proposed recently is to introduce quarantine by localizing infected individuals in order to significantly reduce the number of contacts that lead to the transmission of infection. In our approach this can be modeled by setting $v_x=v_y=0$ for infected individuals and ignoring all contacts with them.
We call such a simulation regime the quarantine mode. To implement it, we introduce two new parameters into the model: $A_{\max}$ (the infection level at which the quarantine is automatically turned on) and $A_{\min}$ (the infection level at which the quarantine is turned off).
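In code, the switching logic can be sketched as a simple hysteresis loop; restoring the velocities of recovered individuals when quarantine is lifted is omitted for brevity, and the function is our own illustration.
\begin{verbatim}
def update_quarantine(people, quarantine_on, A_max, A_min):
    """Turn quarantine on when the infection fraction A exceeds A_max
    and off when A falls below A_min; quarantined infected individuals
    are immobilized and their contacts are ignored."""
    A = sum(p.infected for p in people) / len(people)
    if not quarantine_on and A >= A_max:
        quarantine_on = True
    elif quarantine_on and A <= A_min:
        quarantine_on = False
    if quarantine_on:
        for p in people:
            if p.infected:
                p.vx = p.vy = 0.0
    return quarantine_on
\end{verbatim}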
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig4a.png}
\caption{(Color online.) The dynamics of the infection parameter $A$ in the quarantine mode with $A_{\max}=0.7$, $A_{\min}=0.36$ at moderate values of $\beta$: (a) $\beta=0.78$, (b) $\beta=0.80$, and (c) $\beta=0.82$; panel (d) shows the typical dynamics of $A$ at initial times. We observe the generation of irregular oscillations of $A$ with large amplitudes between $A_{\max}$ and $A_{\min}$. We calculated (by the method of \cite{Alanwolf:1985a}) that the Lyapunov exponent of such irregular oscillations is about $0.3$.}
\label{Pic_Fig4a}
\end{figure}
Fig.~\ref{Pic_Fig4a} shows the dynamics of infections $A(t)$ in the quarantine mode with $A_{\max}=0.7$, $A_{\min}=0.36$ at moderate values of $\beta$: (a) $\beta=0.78$, (b) $\beta=0.80$, and (c) $\beta=0.82$; panel (d) shows the typical dynamics of $A$ at initial times. For such parameters we observe from Fig.~\ref{Pic_Fig4a} that the system transitions to an unexpected dynamic state: irregular oscillations of $A$ with large amplitudes between $A_{\max}$ and $A_{\min}$ are generated. We have calculated (by the method of \cite{Alanwolf:1985a}) that the Lyapunov exponent of such irregular oscillations is about $0.3$.
This means that if the quarantine is turned off too early, the growth of infections is suppressed, but the system goes into a dynamic mode with irregular oscillations. In this case, a significant number of infected and recovered individuals can be re-infected; therefore, a full recovery does not occur.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig5a.png}
\caption{(Color online.) The same quarantine case as in Fig.~\ref{Pic_Fig4a} but for large risk factors $\beta$: (a) $\beta=0.88$, (b) $\beta=0.90$, (c) $\beta=0.92$, (d) $\beta=0.88$ for small times. We observe that for large $\beta$ the evolution of the infections has a monotonic shape (with small random variations), without the large oscillations shown in Fig.~\ref{Pic_Fig4a}. One can see that the dynamics $A(t)$ in (b) for $\beta=0.90$ is already suppressed and differs considerably from the situation without quarantine shown in Fig.~\ref{Pic_Fig2a}(b).}
\label{Pic_Fig5a}
\end{figure}
Fig.~\ref{Pic_Fig5a} shows the infection dynamics $A(t)$ in the quarantine mode but for large risk factors $\beta$: (a) $\beta=0.88$, (b) $\beta=0.90$, (c) $\beta=0.92$, (d) $\beta=0.88$. We observe that for large $\beta$ the evolution of the infections has a monotonic shape (with small random variations), without the oscillations of Fig.~\ref{Pic_Fig4a}. However, the dynamics $A(t)$ in (b) for $\beta=0.90$ is already suppressed and differs strongly from the case without quarantine shown in Fig.~\ref{Pic_Fig2a}(b).
\subsection{The immunity}
Although there are currently no reliable statistics on congenital or acquired immunity for humans (for animals see Ref.~\cite{Chandrashekar:2020}), in this Section we analyze this aspect in the framework of our model. Following Ref.~\cite{Choisy:2007}, we assume that in the host population there is no innate immunity to the virus, but we suppose that the individuals (at least a large majority) will acquire this immunity, as is usually the case. To this end, in our model we use the parameter $M$; see Eq.~(\ref{attrib}). Following \cite{Choisy:2007}, we assume that this parameter acquires a non-zero value (the presence of immunity) only after the first infection and recovery. Re-infection then no longer occurs, even at contacts with infected persons.
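In terms of the Person attributes, the rule amounts to the following sketch (our own illustration):
\begin{verbatim}
def recover(person):
    """Immunity (M > 0) is acquired after the first infection and
    recovery; immune individuals ignore further infectious contacts."""
    person.infected = False
    person.immune = True

def try_infect(person, beta, rng):
    if person.immune:
        return                     # re-infection no longer occurs
    person.infected = rng.random() < beta
\end{verbatim}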
Fig.~\ref{Pic_Fig6a} shows the dynamics of recovery in the presence of immunity in the quarantine mode for fixed parameters $\beta=0.94$ and $A_{\max}=0.24$ and different $A_{\min}=0.17, 0.1, 0.05, 0.02, 0.01$. One can see that now the oscillations shown in Fig.~\ref{Pic_Fig6a} acquire the shape of strongly damped peaks, which causes the number of infected (order parameter $A$) to decrease rapidly. This allows us to predict that after the first high peak of infection (which has a nearly fixed amplitude in all cases), a second peak with a smaller amplitude may occur, and then a complete recovery may become possible.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig6a.png}
\caption{(Color online.) The dynamics of recovery in the presence of immunity in the quarantine mode for the parameters $\beta = 0.94$, $A_{\max} = 0.24$ and different $A_{\min} = 0.17, 0.1, 0.05, 0.02, 0.01$. One can see that after the high peak, the oscillations rapidly decay, which leads to a decrease of infections (the order parameter $A(t)$ rapidly decreases).}
\label{Pic_Fig6a}
\end{figure}
Now we compare the effects of the quarantine and immunity factors on recovery. Fig.~\ref{Pic_Fig7a} shows the dynamics of infections (order parameter $A$) for different values of the risk factor $\beta=0.99, 0.94, 0.9, 0.8$ in the situation without quarantine, when only personal immunity $M>0$ is present (see Eq.~(\ref{attrib})). This simulation shows that in such a case complete recovery can occur in an even shorter time compared to Fig.~\ref{Pic_Fig6a}.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig7a.png}
\caption{(Color online.) The dynamics of infections (order parameter $A$) for different values of the risk factor $\beta=0.99, 0.94, 0.9, 0.8$ in the situation when only effective personal immunity $M>0$ is present (without quarantine); see Eq.~(\ref{attrib}). This simulation shows that in such a case complete recovery can occur in an even shorter time compared to Fig.~\ref{Pic_Fig6a}.}
\label{Pic_Fig7a}
\end{figure}
\section{Discussion and Conclusion}
We studied the dynamics of the infection spread at various values of the risk factor $\beta$ (control parameter) using the dynamic Monte Carlo (DMC) method. In our model, it is assumed that the infection is transmitted through the contacts of randomly moving individuals. We show that the behavior of recovered individuals depends critically on the value of $\beta$. For sub-critical values $\beta<\beta_{c}\sim 0.6$, the number of infected cases (the order parameter $A(t)$) asymptotically converges to zero, so that at moderate risk factors the infection can quickly disappear.
\begin{figure}[ptb]
\centering
\includegraphics[width=0.5\textwidth]{Fig8a.png}
\caption{(Color online.) The fraction of infections $A(t,\beta)$ as a function of time $t$ at different risk factors $\beta$ near the critical transition at $\beta_{c} \sim 0.6$ for $N=1000$ and initial number of infections $I_0=100$, with $\beta$: (a) 0.56, (b) 0.57, (c) 0.58, (d) 0.581. We observe that for $\beta \lesssim 0.58$ the number of infections rapidly reaches zero. However, for $\beta>0.8$ the process of recovery may be long.}
\label{Pic_Fig8a}
\end{figure}
However, such nontrivial behavior has to be confirmed by direct calculation. Fig.~\ref{Pic_Fig8a} shows the dynamics of the infection fraction $A(t,\beta)$ with time for different risk factors $\beta$ near the critical transition $\beta \sim \beta_{c}=0.6$ for $N=1000$ and a rather large initial number of infections $I_0=100$. We observe that indeed for $\beta \lesssim 0.58$ the number of infections rapidly reaches zero. We also analyzed the extended system, which is currently widely used to prevent the spread of the virus. In our approach, such a system includes two additional parameters that turn the quarantine state on and off. It was revealed that an early exit from the quarantine leads to irregular oscillating dynamics (with positive Lyapunov exponent) of the infection. However, when the lower limit for turning the quarantine off is sufficiently small, the infection dynamics acquires a characteristic nonmonotonic shape with several damped peaks. The dynamics of the infection spread in the case of individuals with immunity was studied too. Our comparison of the effects of quarantine and immunity on recovery shows that in the case of stable immunity a complete recovery occurs faster than in the quarantine mode.
\section{Acknowledgment}
This work was supported in part by CONACYT (M\'{e}xico) under the grant No. A1-S-9201.
\subsection{Innovations representation and estimation} We refer the readers to \cite{Kailath:70Proc} for an exposition of the historical developments of the innovations approach. In general, an innovations representation of a stationary and ergodic process may not exist. The existence of a causal encoder that maps $(x_t)$ to a uniform {\it i.i.d.} sequence holds quite generally for a large number of popular nonlinear time series models \citep{Wiener:58Book, Rosenblatt:59,Rosenblatt:09,Wu:05PNAS,Wu11:SI}. The existence of a causal decoder that recovers the original sequence from an innovations sequence requires additional assumptions \citep{Rosenblatt:59,Rosenblatt:09,Wu11:SI}. Whereas general conditions for the existence of an innovations representation are elusive, relaxing the requirement that the decoder produce a random sequence $(\hat{x}_t)$ with the same conditional (on past observations) distributions as that of $(x_t)$ makes the innovations representation applicable to a significantly larger class of practical applications, as shown in \cite{Wu:05PNAS,Wu11:SI}. In this paper, we shall side-step the question of the existence of an innovations representation and focus on learning the innovations representation $(H, G)$ in (\ref{eq:G}-\ref{eq:H}) when such a representation does exist.
Although there are no known ways to extract (or estimate) innovations when the underlying probability model is unknown, several existing techniques can be tailored for this purpose. One way is to estimate $(\nu_t)$ by the error sequence of a linear or nonlinear MMSE predictor, which can be implemented and trained using a causal convolutional neural network (CNN). Such an approach can be motivated by viewing $(x_t)$ as a sampled process from a {\em continuous-time process} $\tilde{x}(t) = \tilde{z}(t) + w(t)$ in some interval, where $w(t)$ is a white Gaussian noise and $\tilde{z}(t)$ a possibly non-Gaussian but strictly stationary process. Under mild conditions \citep{Kailath:71BJST}, the continuous-time innovations process $\tilde{\nu}(t)$ of $\tilde{x}(t)$ turns out to be the MMSE prediction error process. Unfortunately, there is no guarantee that the discrete-time version of the nonlinear MMSE predictor will produce an innovations sequence for $(x_t)$.
If we ignore the requirement that the innovation process $(\nu_t)$ needs to be a {\em causally invertible transform} of $(x_t)$, one can view the innovations sequence $(\nu_t)$ as independent components of $(x_t)$, for which there is an extensive literature since the seminal work of \cite{Jutten&Herault:91} and \cite{Comon:94} on independent component analysis (ICA). Originally proposed for linear models, ICA is a generalization of the principal component analysis (PCA) by enforcing statistical independence on the latent variables. A line of approaches akin to modern machine learning is to pass $x_t$ through a nonlinear transform to obtain an estimate of an {\it i.i.d.} sequence $(\nu_t)$, where the nonlinear transform can be updated based on some objective function that enforces independence conditions. Examples of objective functions include information-theoretic and higher-order moment based measures \citep{Comon:94,Karhunen&etal:97NN,Naik&Kumar}.
The ICA approach most related to this paper is ANICA proposed by \cite{Brakel&Bengio:17}, where a deep learning approach to nonlinear ICA via an autoencoder is trained by the Wasserstein GAN \citep{Arjovsky17} technique. The main difference between IAE and ANICA lies in how causality and statistical independence are enforced in the learning process. Different from IAE, ANICA does not enforce causality in training and its implementation. It achieves statistical independence among extracted components through repeated re-samplings.
Also relevant is NICE \citep{DinhKruegerBengio2015} where a class of bijective mappings with unity Jacobian is proposed to transform blocks of $(x_t)$ to Gaussian {\it i.i.d.} components. The property of unity Jacobian makes NICE a particularly attractive architecture capable of evaluating relevant likelihood functions for real-time decisions. However, the special form of bijective mappings destroys the causality of the data sequence, and the requirement of {\it i.i.d.} training samples in the maximum-likelihood-based learning is difficult to satisfy for time series models.
It is natural to cast the problem of extracting independent components as one of designing an autoencoder where the latent variables are constrained to be statistically independent. Several variational auto-encoder (VAE) techniques \citep{KingmaWelling14,Kingma17,Tucker2018,MaaloeEtal19,VahdatKautz21} have been proposed to produce generative models for the observation process using independent latent variables. Without requiring that the latent variables are directly encoded from and capable of reproducing the original process, these techniques do not guarantee that the latent independent components are part of an innovations sequence.
Unlike the non-parametric approach to obtaining innovations representations considered in this work, there is a literature on parametric techniques to extract innovations by assuming that $(x_t)$ is a causal transform of an innovations sequence $(\nu_t)$. By identifying the transform parameters, one can construct a causal inverse of the transform (if it exists). From this perspective, extracting innovations can be solved in two steps: (i) estimating the parameters of a time series model, and (ii) constructing a causal inverse of the time series model. Under relatively general conditions, parameters of multivariate moving-average and auto-regressive moving-average models can be learned by moment methods involving higher-order statistics \citep{Cardoso:89ICASSP,Swami&Mendel:92TAC, Tong:96SP}.
\subsection{One-class anomalous sequence detection}
The one-class anomalous sequence detection problem\footnote{In this paper, we do not consider the broader class of one-class anomaly detection problems where unlabeled training data or scarcely labeled anomalous training data are also used.} is a special instance of semi-supervised anomaly detection that classifies a data sequence as anomalous or anomaly-free
when training samples are given only for the anomaly-free model.
To the best of our knowledge, there are no machine learning techniques specifically designed for time series, although there is an extensive literature on the related problem of detecting outlier or OoD samples.
A well-known technique for one-class anomaly detection is the one-class support vector machine (OCSVM) \citep{Scholkopf:99NIPS} and its many variants for different applications \citep{Khan&Madden:04}. OCSVM finds the decision region for anomaly-free samples by fencing in training samples with a certain margin that accounts for unobserved anomaly-free samples. A related idea is to separate the anomaly and anomaly-free models in the latent variable space of an autoencoder. One such technique is f-AnoGAN proposed by \cite{Schlegl&Seebock:19}, where the OoD detection is made by fencing out all samples that result in a large decoding error by an autoencoder trained with anomaly-free samples. Other similar techniques include \citep{Bergmann19,GongEtal2019}.
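For concreteness, a minimal OCSVM baseline in scikit-learn is sketched below; how features are extracted from the time series is left unspecified and the hyper-parameter values are illustrative.
\begin{verbatim}
from sklearn.svm import OneClassSVM

# X_train: anomaly-free feature vectors; X_test: vectors to classify.
# nu upper-bounds the fraction of training points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(X_train)
labels = ocsvm.predict(X_test)  # +1: anomaly-free region, -1: anomaly
\end{verbatim}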
These discriminative methods rely, {\em implicitly}, on the assumption that the anomaly distribution complements that of the anomaly-free model, {\it i.e.,\ \/} the anomaly distribution concentrates in regions where the anomaly-free training samples do not, or are less likely to, appear.
Therefore, they often under- or over-generalize the anomaly-free model and perform poorly when the domains of the anomaly and anomaly-free models overlap completely.
Another set of techniques attaches confidence scores to samples in the feature space, leveraging the neural network's ability to learn the posterior distribution of the anomaly-free model \citep{Hendrycks&Gimpel:17ICLR,Lakshminarayanan&Pritzel&Blundell:17NIPS,Liang&Li&Srikant:18ICLR,Lee&etal:18NIPS}.
These techniques construct confidence scores from the learned anomaly-free model without attempting to learn or infer the anomaly model. As shown in \citep{Lan&Dinh:21}, even with a perfect density estimate, OoD detection may still perform poorly.
The third type of techniques simulates OoD samples in some way, often around the anomaly-free models, as proposed in \citep{Lee&etal:18ICLR,Hendrycks&etal:18ICLR, Ren&etal:19ICLR}. With simulated OoD samples, it is possible, in principle, to capture fully the difference between the anomaly and anomaly-free distributions and derive a likelihood ratio test, as proposed by \cite{Ren&etal:19ICLR}. In practice, however, there can be uncountably many OoD models, and simulating OoD samples is highly nontrivial. A heuristic solution is to perturb training samples from the anomaly-free model and use them as a proxy for OoD samples.
Existing OoD detection techniques, even under the most favorable conditions when training samples are unlimited, the learning algorithm most powerful, and the complexity of the neural network unbounded, are fundamentally limited in two aspects. First, in general, there does not exist a uniformly most powerful test (even asymptotically) for all possible anomaly models. This means that, for every detection rule, there are anomaly cases for which the power of detection is suboptimal. Second, they do not provide Chernoff consistency \citep{Shao:03book}, defined as the type I (false positive) and type II (false negative) errors approaching zero as the number of observations increases. For detecting anomalies in time series, Chernoff consistency is essential.
The source of such apparently fundamental limits arises, perhaps, from the lack of a clear characterization of the OoD model; the standard notion of OoD being something other than the anomaly-free distribution is simply not precise enough to provide a theoretical guarantee. Indeed, \cite{Lan&Dinh:21} argue that even perfect density models for the anomaly-free data cannot guarantee reliable anomaly detection, and there are ample examples demonstrating that OoD samples can easily fool standard OoD detectors \citep{Goodfellow&Shlen&Szegedy:15}. To achieve Chernoff consistency or asymptotically uniformly most powerful performance, there needs to be a positive ``distance'' between the distributions of the anomaly-free model and those of the anomaly. It is the constraint in our formulation that anomaly models are an $\epsilon$-distance away that makes it possible, under ideal training, implementation, and sampling conditions, to achieve Chernoff consistency. See Sec.~\ref{sec:detection} for one such approach, building on an earlier result on universal anomaly detection \citep{Mestav&Tong:20SPL}.
\section{Introduction}
\label{sec:intro}
\input{intro_final}
\section{Background and Related Work}
\label{sec:background}
\input{Background_final}
\section{Innovations Autoencoder}
\label{sec:learning}
\input{learning_final}
\section{Anomalous Sequence Detection via Innovations Autoencoder}
\label{sec:detection}
\input{detection_final}
\section{Performance Evaluation}
\input{performance_eval_final}
\section{Conclusion}
\label{sec:conclusion}
IAE is a machine learning technique that extracts an innovations sequence from real-time observations. When properly trained, IAE can serve as a front-end processing unit that transforms processes of unknown temporal dependency and probability structure into a standard uniform {\it i.i.d.} sequence. (Extension to other marginal distributions is trivial.) IAE is, in some way, an attempt to realize Wiener's original vision of encoding stationary random processes in the simplest possible form, although the existence of such an autoencoder is not guaranteed \citep{Rosenblatt:59}. From an engineering perspective, however, the success of Wiener and Kalman filtering in practice is a powerful testament that many applications can be approximated by innovations representations. It is under such an assumption that IAE serves to remove the modeling assumptions in Wiener and Kalman filtering and pursues a data-driven machine learning solution. As an example, the IAE-based anomaly detection is shown to be quite effective for the one-class anomalous time series detection problem, for which there are few solutions.
\acks{We would like to acknowledge support for this project
from the National Science Foundation (NSF grant 1932501 and 1816397). }
\newpage
\vskip 0.2in
\section{Proof of Theorem 1}
\label{app:theorem}
{\em Proof:} By A2, there exists a sequence of IAEs $\tilde{{\cal A}}_m=(G_{\tilde{\theta}_m}, H_{\tilde{\eta}_m})$ of dimension $m$ that converges to ${\cal A}$. Let $(\tilde{\nu}_{m,t})$ be the output sequence of $G_{\tilde{\theta}_m}$ and $(\tilde{x}_{m,t})$ the output sequence of the decoder $H_{\tilde{\eta}_m}$. Similarly defined are the vector outputs of the encoder and decoder, $\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)}$ and $\tilde{{\bf x}}_{m,t}^{(N)}$.
We prove Theorem 1 in two steps:
\begin{enumerate}
\item {\bf Finite-block convergence of $\tilde{{\cal A}}_m$:} By assumption A2, the uniform convergence of
$G_{\tilde{\theta}_m} \rightarrow G$ implies that, for every $\epsilon_1>0$, there exists $M_1\in\mathbb{N}^+$ such that, for all realizations ${\bf x}^{(\infty)}_t$, $m>M_1$ and $t \in \mathbb{N}$,
\[
|\tilde{\nu}_{m,t}-\nu_t|< \epsilon_1~~\Rightarrow~~ \mathbb{E}\left(\|\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)}-\hbox{\boldmath$\nu$\unboldmath}^{(N)}_t\|^2\right) \le N\epsilon_1^2.
\]
Therefore, for all finite $N$, we have the following uniform convergence as $m\rightarrow \infty$:
\begin{equation} \label{eq:nutilde}
\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)} \xrightarrow{L_2} \hbox{\boldmath$\nu$\unboldmath}_t^{(N)}~~\Rightarrow~~ \tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)} \xrightarrow{d} \hbox{\boldmath$\nu$\unboldmath}_t^{(N)},~~\Rightarrow~~W(\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)},\hbox{\boldmath$\nu$\unboldmath}_t^{(N)}) \rightarrow 0.
\end{equation}
Next we consider the decoder convergence. Fix $\epsilon_2>0.$
\begin{align*}
|x_t-\tilde{x}_{m,t}| &= |H\circ G({\bf x}_t^{(\infty)}) - H_{\tilde{\eta}_m} \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)})|\\
&\le |H\circ G({\bf x}_t^{(\infty)}) - H \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)})|\ + |H \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)}) - H_{\tilde{\eta}_m} \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)})|.
\end{align*}
Because $H$ is uniformly continuous, there exists an $M_2(\epsilon_2)$ such that for all $m > M_2(\epsilon_2)$ and ${\bf x}_t^{(\infty)}$,
\[
|H\circ G({\bf x}_t^{(\infty)}) - H \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)})|\ < \epsilon_2/2.
\]
Because $H_{\tilde{\eta}_m}$ converges to $H$ uniformly, there exists an $M_2'(\epsilon_2)$ such that
\[
|H \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)}) - H_{\tilde{\eta}_m} \circ G_{\tilde{\theta}_m}({\bf x}_t^{(m)})| < \epsilon_2/2,
\]
for all $m > M_2'(\epsilon_2)$ and ${\bf x}_t^{(\infty)}$. Therefore, for all $m > \max\{ M_2(\epsilon_2),M_2'(\epsilon_2)\}$,
\[
|\tilde{x}_{m,t}-x_t|< \epsilon_2~~\Rightarrow~~ \mathbb{E}\left(\|\tilde{{\bf x}}_{m,t}^{(N)}-{\bf x}^{(N)}_t\|^2\right) \le N\epsilon_2^2~~\Rightarrow~~ \tilde{{\bf x}}_{m,t}^{(N)} \underset{m\rightarrow \infty}{\xrightarrow{L_2}} {\bf x}^{(N)}_t.
\]
The risk $\tilde{L}_m^{(N)} $ converges uniformly:
\begin{equation} \label{eq:Ltilde}
\tilde{L}_m^{(N)} := W(\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)},\hbox{\boldmath$\nu$\unboldmath}_t^{(N)}) + \mathbb{E}(\|\tilde{{\bf x}}^{(N)}_{m,t}-{\bf x}_t^{(N)}\|_2^2) \xrightarrow{m \rightarrow \infty} 0.
\end{equation}
\item {\bf Finite-block convergence of ${\cal A}_m^{(N)}$:} Fix the dimension of training at $N$. From the finite-block convergence of $\tilde{{\cal A}}_m$, $\forall \epsilon$, there exists $M_\epsilon$ such that, for all $m> M_\epsilon$,
\begin{equation}
\tilde{L}_m^{(N)} = W(\tilde{\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_{m,t},\hbox{\boldmath$\nu$\unboldmath}_t^{(N)})+ \mathbb{E}\left(\|\tilde{{\bf x}}_{m,t}^{(N)}-{\bf x}^{(N)}_t\|_2^2\right) \leq \epsilon.
\end{equation}
By the Kantorovich-Rubinstein duality theorem,
\[
W(\tilde{\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_{m,t},\hbox{\boldmath$\nu$\unboldmath}_t^{(N)})
= \max_\gamma \mathbb{E}(D_\gamma (\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(N)}, \hbox{\boldmath$\nu$\unboldmath}_{t}^{(N)})).
\]
From (\ref{eq:IGAN}), let (without loss of generality assuming $\lambda=1$)
\[
L_m^{(N)} :=\min_{\theta,\eta}\max_\gamma \bigg(
\mathbb{E}(D_\gamma (\hbox{\boldmath$\nu$\unboldmath}_{m,t}^{(N)},\hbox{\boldmath$\nu$\unboldmath}_{t}^{(N)}))
+ \mathbb{E}(\|{\bf x}^{(N)}_{m,t}-{\bf x}_t^{(N)}\|_2^2)\bigg). \nonumber
\]
We therefore have, for all $m \ge M_\epsilon$,
\[
L_m^{(N)} \le \tilde{L}_m^{(N)} \le \epsilon~~\Rightarrow~~
\left\{\begin{array}{l}
W(\hbox{\boldmath$\nu$\unboldmath}_{m,t}^{(N)}, \hbox{\boldmath$\nu$\unboldmath}_t^{(N)}) \le \epsilon,\\
\mathbb{E}(\|{\bf x}^{(N)}_{m,t}-{\bf x}_t^{(N)}\|^2_2) \le \epsilon,
\end{array}
\right.
\]
which completes the proof. \hfill $\Box$
\end{enumerate}
\section{Pseudocode}\label{sec:code}
\begin{algorithm}[h]
\caption{Training the Innovations Autoencoder}
\label{alg:IGAN}
\begin{algorithmic}
\STATE {\bfseries Input:} data $(x_t)$, encoder $H_\eta$, generator $G_\theta$, discriminator $D_\gamma$. $\lambda$ is the gradient penalty coefficient, $\mu$ the weight of the autoencoder reconstruction term, and $\alpha,\beta_1,\beta_2$ are hyper-parameters for the Adam optimizer.
\WHILE{Not converged}
\FOR{$t=1,\cdots,n_{c}$}
\FOR{$i=1,\cdots,B$}
\STATE Sample ${\bf x}_i$ from the input matrix $(x_t)$
\STATE Sample $\mathbf{u}=[u_1,\cdots,u_{n-m+1}]^T\stackrel{\rm i.i.d.}{\sim}\mathcal{U}[-1,1]$
\STATE Sample $\epsilon\sim\mathcal{U}[0,1]$
\STATE $\hat{\hbox{\boldmath$\nu$\unboldmath}}\leftarrow \mathbf{G}_\theta(\mathbf{x}_i)$
\STATE $\Bar{\hbox{\boldmath$\nu$\unboldmath}}\leftarrow \epsilon \mathbf{u}+(1-\epsilon)\hat{\hbox{\boldmath$\nu$\unboldmath}}$
\STATE $L^{(i)}\leftarrow \mathbf{D}_\gamma(\hat{\hbox{\boldmath$\nu$\unboldmath}})-\mathbf{D}_\gamma(\mathbf{u})+\lambda(\lVert \nabla_{\Bar{\hbox{\boldmath$\nu$\unboldmath}}} \mathbf{D}_{\gamma}(\Bar{\hbox{\boldmath$\nu$\unboldmath}})\rVert_2-1)^2$
\ENDFOR
\STATE $\gamma\leftarrow Adam(\nabla_\gamma\frac{1}{B}\sum_{i=1}^B L^{(i)},\alpha,\beta_1,\beta_2)$
\ENDFOR
\STATE Sample a batch of $\{\mathbf{x}_i\}_{i=1}^B$ from the input matrix $(x_t)$
\STATE $\theta\leftarrow Adam\bigg(\nabla_\theta\frac{1}{B}\sum_{i=1}^B\Big[-\mathbf{D}_\gamma(\mathbf{G}_\theta(\mathbf{x}_i))+$
$\mu\lVert \mathbf{H}_{\eta}(\mathbf{G}_\theta(\mathbf{x}_i))- \mathbf{x}_i\rVert_2\Big],\alpha,\beta_1,\beta_2\bigg)$
\STATE $\eta\leftarrow Adam\bigg(\nabla_\eta\frac{1}{B}\sum_{i=1}^B\left[\mu\lVert \mathbf{H}_{\eta}(\mathbf{G}_\theta(\mathbf{x}_i))- \mathbf{x}_i\rVert_2\right],$ $\alpha,\beta_1,\beta_2\bigg)$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
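For concreteness, the discriminator update of Algorithm~\ref{alg:IGAN} can be sketched in PyTorch as follows. This is a minimal illustration under assumed shapes; \texttt{G}, \texttt{D}, \texttt{opt\_D} and the batch layout are our notation rather than the authors' code, and the gradient penalty is taken with respect to the critic input, the standard WGAN-GP choice.
\begin{verbatim}
import torch

def critic_step(G, D, x_batch, opt_D, lam=5.0):
    # One critic update: Wasserstein loss plus gradient penalty.
    B = x_batch.size(0)
    nu_hat = G(x_batch).detach()          # estimated innovations, [B, N]
    u = 2 * torch.rand_like(nu_hat) - 1   # i.i.d. U[-1,1] reference
    eps = torch.rand(B, 1)                # per-sample interpolation weight
    nu_bar = (eps * u + (1 - eps) * nu_hat).requires_grad_(True)
    grads = torch.autograd.grad(D(nu_bar).sum(), nu_bar,
                                create_graph=True)[0]
    gp = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
    loss = D(nu_hat).mean() - D(u).mean() + lam * gp
    opt_D.zero_grad(); loss.backward(); opt_D.step()
    return float(loss)
\end{verbatim}
The encoder and decoder updates follow the same pattern, using gradients of the loss in (\ref{eq:IGAN}).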
\section{Neural Network Parameter}
All the neural networks (encoder, decoder and discriminator) in the paper had three hidden layers, with 100, 50, and 25 neurons respectively. The input dimension for the generator was chosen such that $n=3m$. In the paper, $m=20$ was used for the synthetic cases, and $m=100$ for the real data cases. The encoder and decoder both used hyperbolic tangent activation. The first two layers of the discriminator adopted hyperbolic tangent activation, and the last one linear activation.
The tuning parameter was chosen to be the same for all synthetic cases, with $\mu=0.1$, $\lambda=5$, $\alpha=0.0002$, $\beta_1=0.9$, $\beta_2=0.999$. For the two real data cases, the hyper-parameters were set to be $\mu=0.01$, $\lambda=3$, $\alpha=0.001$, $\beta_1=0.9$, $\beta_2=0.999$.
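For concreteness, a hedged sketch of the stated architectures follows; the PyTorch framing and the exact input widths shown are illustrative assumptions, not the exact implementation.
\begin{verbatim}
import torch.nn as nn

def mlp(in_dim, out_dim, acts=("tanh", "tanh", "tanh")):
    # Three hidden layers with 100/50/25 units, as described above;
    # acts[i] selects tanh or linear activation for hidden layer i.
    dims, layers = [in_dim, 100, 50, 25], []
    for i, a in enumerate(acts):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if a == "tanh":
            layers.append(nn.Tanh())
    layers.append(nn.Linear(25, out_dim))
    return nn.Sequential(*layers)

m = 20                                    # synthetic-data setting
encoder = mlp(3 * m, 1)                   # generator input n = 3m
decoder = mlp(3 * m, 1)                   # decoder width, assumed
critic  = mlp(60, 1, acts=("tanh", "tanh", "linear"))
\end{verbatim}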
\subsection{A Nonparametric Anomaly Model}
We consider the problem of real-time detection of anomalies in a time series $(x_t)$ modeled as a random process with unknown temporal dependencies and probability distributions.
Let ${\bf x}_t$ be a vector consisting of a finite block of current and past sensor measurements. Under the null hypothesis ${\cal H}_0$ that models the anomaly-free measurements and the alternative ${\cal H}_1$ for the anomalies, we consider the following hypothesis testing problem
\begin{equation} \label{eq:H0H1}
{\cal H}_0: {\bf x}_t \sim f_0~~{\rm\it vs.}~~{\cal H}_1: {\bf x}_t \sim f_1 \in \mathscr{F}_1 = \{f: ||f-f_{0}|| \ge \epsilon\}
\end{equation}
with unknown probability distribution $f_0$ under the anomaly-free hypothesis and a collection $\mathscr{F}_1$ of unknown anomaly distributions under ${\cal H}_1$. The parameter $\epsilon>0$ represents the degree of separation between the anomaly and anomaly-free models, where $\|\cdot\|$ is the total variation distance; other measures,
such as the Jensen-Shannon distance and the Kullback-Leibler divergence, are equally applicable. We assume that an anomaly-free dataset $\mathscr{X}_0$ is available for offline or online training.
Note that ${\cal H}_0$ is a simple hypothesis with a single distribution $f_0$ whereas ${\cal H}_1$ is a composite hypothesis that captures all possible anomalies in $\mathscr{F}_1$. Prescribing a positive distance $\epsilon>0$ between the anomaly-free model and the collection of anomaly models is crucial to establishing Chernoff consistency, which drives the false positive and false negative rates to zero as the dimension of ${\bf x}_t$ increases.
\subsection{Anomaly detection via IAE and Uniformity Test}
A defining feature of IAE is that, under ${\cal H}_0$, the innovations encoder transforms the measurement time series with an unknown probability model to the standard uniform {\it i.i.d.} sequence. Through an IAE trained with anomaly-free data, the anomaly detection problem is transformed to testing whether the
transformed sequence is {\it i.i.d.} uniform, for which Chernoff-consistent detectors can be constructed. Implicitly assumed in this approach is that an anomaly process $\epsilon$-distance away from the anomaly-free one will not be mapped to a uniform {\it i.i.d.} process, which is reasonable in practice.
\begin{figure}[h]
\center
\scalefig{0.8}\epsfbox{figs/uniformity.eps}
\caption{\small IAE Uniformity Test. Top: Implementation schematic and test statistics under ${\cal H}_0$. Bottom left: Histogram of $(x_t)$ and $(\nu_t)$ under ${\cal H}_0$ and ${\cal H}_1$. Bottom right: an example of coincidence statistics $(T_i)$ with 30 quantization levels and 15 samples.}
\label{fig:uniformity}
\end{figure}
Fig.~\ref{fig:uniformity} shows a schematic of the proposed IAE anomaly detection: the sensor measurements $(x_t)$ are passed through an innovations encoder that, under the anomaly-free hypothesis ${\cal H}_0$, generates a uniform ${\cal U}(-1,1)$ {\it i.i.d.} innovations sequence $(\nu_t)$. The innovations sequence is then passed through a uniform quantizer that puts $\nu_t$ in one of $Q$ equal-length quantization bins to produce a sequence of discrete random variables $\tilde{\nu}_t \in \{1,\cdots, Q\}$:
\[
\tilde{\nu}_t = \left\{\begin{array}{ll}
1, & \nu_t \le -1 + 2/Q,\\
i, & -1 + 2(i-1)/Q < \nu_t \le -1+ 2i/Q,~~ i=2, \cdots, Q-1,\\
Q, & \nu_t > 1-2/Q.\\
\end{array}
\right.
\]
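This quantizer can be implemented directly; a minimal sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def quantize(nu, Q):
    # Map nu in [-1,1] to bin indices {1,...,Q}; out-of-range values
    # fall into the end bins, matching the cases above.
    idx = np.ceil((np.asarray(nu) + 1.0) * Q / 2.0).astype(int)
    return np.clip(idx, 1, Q)
\end{verbatim}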
Under ${\cal H}_0$, we thus have a uniform $Q$-ary {\it i.i.d.} sequence $\tilde{\nu}_t$, transforming the original hypothesis testing problem to the following derived hypotheses from (\ref{eq:H0H1}):
\begin{equation} \label{eq:H0'H1'}
{\cal H}_0': \tilde{\nu}_t \sim P_0 = (\frac{1}{Q}, \cdots, \frac{1}{Q})~~\mbox{\rm\it vs.}~~{\cal H}_1': \tilde{\nu}_t \sim P_1 \in \mathscr{P}_1=\{ P_1=(p_1, \cdots, p_Q): ||P_1-P_0||_1 \ge \epsilon'\},
\end{equation}
where $P_0$ and $P_1$ are $Q$-ary probability mass functions. Testing ${\cal H}_0'$ against ${\cal H}_1'$ is a classic problem \citep{David:50Biometrika,Viktorova:64TPA,Paninski:08TIT,Goldreich:17book}.
Consider (\ref{eq:H0'H1'}) with $N$ samples $\tilde{\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_t = (\tilde{\nu}_t,\cdots, \tilde{\nu}_{t-N+1})$. A sufficient statistic equivalent to the histogram is the {\em coincidence statistic} $T=(T_0,\cdots, T_N)$, where $T_i$ is the number of quantization bins containing exactly $i$ samples. See Fig.~\ref{fig:uniformity} (bottom right) for an example with $Q=30$ and $N=15$. The coincidence statistics such as $T_0$ and $T_1$ characterize the uniformity property particularly well when samples are ``sparse'' relative to the quantization level. For instance, when $Q$ is considerably larger than $N$, the $T_1$ value of a uniformly distributed $\tilde{\nu}_t$ tends to be relatively large, and $T_0$ relatively small.
It is shown in \cite{Paninski:08TIT} that using $T_1$ alone achieves Chernoff consistency, with a sample complexity roughly on the order of $\sqrt{Q}$.
A general form of a coincidence test is a linear test given by
\begin{equation}
\sum_{i=1}^N \zeta_i T_i \begin{array}{c}{\cal H}_1\\\gtrless\\{\cal H}_0\\\end{array} \tau_{\epsilon'},
\label{eq:decision}
\end{equation}
where the threshold parameter, a function of $\epsilon'$ (and $N$), controls the false positive rate. Paninski gives $\tau_{\epsilon'}$ for the sparse-sample case when only $T_1$ is used, whereas \cite{Viktorova:64TPA} derived the coefficients of the asymptotically most powerful linear detector.
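In code, the coincidence statistic and a $T_1$-based decision can be sketched as follows; the names are ours and the threshold \texttt{tau} is a design parameter tied to the target false positive rate.
\begin{verbatim}
import numpy as np

def coincidence_statistic(bins, Q):
    # T[i] = number of the Q bins containing exactly i of the N samples.
    N = len(bins)
    occupancy = np.bincount(np.asarray(bins) - 1, minlength=Q)
    return np.bincount(occupancy, minlength=N + 1)

def t1_test(bins, Q, tau):
    # Declare H1 when T_1 is unusually small for a uniform source.
    return coincidence_statistic(bins, Q)[1] < tau
\end{verbatim}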
\subsection{Summary of results}
We develop a deep learning approach, referred to as Innovations Auto-Encoder (IAE), that provides a practical way to extract innovations of discrete-time stationary random processes with unknown probability structures, assuming that historical training samples are available. To this end, we propose a causal convolutional neural network, {\it a.k.a.} time-delayed neural network \citep{Waibel&etal:89}, and a Wasserstein generative adversary network (GAN) learning algorithm for extracting innovations sequences \citep{Goodfellow&Shlen&Szegedy:15,Arjovsky17}. Because the implementation and training of an IAE involve only finite-dimensional data vectors, a convergence property is needed that ensures that a sufficiently high-dimensional implementation will lead to a close approximation of the actual innovations representation. Under ideal training and implementation conditions, we establish a finite-block convergence property in Theorem~\ref{thm:converge}, which ensures that a sufficiently high-dimensional implementation of an IAE, trained with finite dimensional historical data vectors, produces a close approximation of the ideal innovations representation.
Next, we apply the idea of IAE to a ``one-class'' anomalous sequence detection problem where neither the anomaly-free nor the anomaly model is known, but anomaly-free training samples are given. By {\em anomaly sequence detection}, we mean to distinguish the underlying probability distributions of the anomaly-free model and that of the anomaly. Although there are many practical machine learning techniques for outlier and out-of-distribution (OoD) detection, to our best knowledge, the result presented here is the first one-class anomalous sequence detection approach for time series models with unknown underlying probability and dynamic models.
The problem of detecting anomalous sequences poses considerable computational and learning-theoretic challenges. Although one expects that taking a large block of consecutive samples can reasonably approximate statistical properties of a random process, applying existing detection schemes
to such high-dimensional vectors with unknown (sequential) dependencies among their components is nontrivial. The main contribution of this work is the leveraging of the innovations representation to transform the anomaly-free time series into a sequence of (approximately) uniform {\it i.i.d.} innovations, thus reducing the anomalous sequence detection problem to the classic problem of testing uniformity, for which we apply versions of the coincidence test \citep{David:50Biometrika,Viktorova:64TPA,Paninski:08TIT,Goldreich:17book}. We then demonstrate, using field-collected and synthetic datasets, the effectiveness of the proposed approach on detecting system anomalies in a microgrid \citep{Pignati&etal:15PESISG}.
\subsection{Notations}
Notations used are standard. All variables and functions are real. We use $\mathbb{R}^m$ and $\mathbb{N}$ for the $m$-dimensional real vector space and the set of natural numbers, respectively. Vectors are in boldface, and ${\bf x}=(x_1,\cdots, x_m) \in \mathbb{R}^m$ is a {\em column vector}. A time series is denoted as $(x_t)$ with $t \in \mathbb{N}$. Denote by ${\bf x}_t^{(m)}:=(x_t, \cdots, x_{t-m+1})$ the column vector of the current and $(m-1)$ past samples of $(x_t)$.
Suppose that $F$ is an $m$-variate scalar function. Let ${\bf F}^{(n)}: \mathbb{R}^{m+n-1} \rightarrow \mathbb{R}^n$ be the $n$-fold time-shifted mapping of $F$ defined by ${\bf F}^{(n)}({\bf x}_t^{(n+m-1)}) := (F({\bf x}_t^{(m)}), \cdots, F({\bf x}_{t-n+1}^{(m)}))$. We drop the superscripts when the dimensionality is immaterial or obvious from the context.
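As a concrete illustration, the time-shifted mapping can be coded directly (a sketch; samples are ordered newest first, as in ${\bf x}_t^{(m)}$):
\begin{verbatim}
import numpy as np

def time_shift_map(F, x, m, n):
    # x = (x_t, ..., x_{t-n-m+2}), newest first, length n+m-1;
    # returns (F(x_t^{(m)}), ..., F(x_{t-n+1}^{(m)})).
    assert len(x) == n + m - 1
    return np.array([F(x[k:k + m]) for k in range(n)])
\end{verbatim}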
\subsection{Parameterization and Dimensionality of Innovations Autoencoder}
A parameterized innovations autoencoder (IAE), denoted by ${\cal A}_{(\theta,\eta)}=(G_\theta, H_\eta)$, is defined by an {\em innovations encoder} $G_\theta$ and an {\em innovations decoder} $H_\eta$ shown in Figure~\ref{fig:Causal}(left), both implemented by causal CNNs \citep{Waibel&etal:89} with parameters $\theta$ and $\eta$ respectively, as shown in Figure~\ref{fig:Causal}(right). Once trained, the innovations sequence $(\nu_t)$ is produced by the encoder $G_\theta$, and the decoded time series $(\hat{x}_t)$ by $H_\eta$. We note here that the causal encoder and decoder can also be implemented using causal recurrent neural networks with suitably defined internal states.
\def0.5\linewidth{0.5\linewidth}
\begin{figure}[h]
\centering
\scalefig{0.45}\epsfbox{figs/IAE.eps}
\fontsize{8pt}{3pt}
\input{figs/Filter}
\caption{Structure of IAE (left) and a causal CNN implementation (right).}
\label{fig:Causal}
\end{figure}
In a practical implementation, at time $t$, only the current and a finite number of past samples are used to generate the output. We define the {\em dimension} of an IAE by the input dimension of its encoder. For an $m$-dimensional IAE ${\cal A}_{(\theta_m,\eta_m)}$, or ${\cal A}_m$ in abbreviation, the input of the encoder\footnote{ Note that the index $m$ of $\theta_m$ is associated with the dimension of the autoencoder. The dimension of $\theta_m$ can be arbitrarily large. } $G_{\theta_m}$ is ${\bf x}^{(m)}_t:=(x_t,\cdots, x_{t-m+1})$ and the output is a scalar $\nu_t$ causally produced by $G_{\theta_m}$. The decoder $H_{\eta_m}$ also takes a finite dimensional input vector $\hbox{\boldmath$\nu$\unboldmath}^{(n(m))}_t = (\nu_t, \cdots, \nu_{t-n(m)+1})$ and causally produces a scalar output $\hat{x}_t$ as an estimate of $x_t$. Herein, we assume that $n(m) = \kappa_\nu m$ for some design parameter $\kappa_\nu$.
The structure of IAE is similar to some of the existing VAEs in modeling (causally) sequential data \citep{Bayer&Osendorfer:14arxiv,Chung&etal:15NIPS,Goyal&etal:17NIPS}. The main difference between IAE and these VAEs is the way IAE is trained and the objective of training. Unlike IAE, these VAEs aim at obtaining a generative model that does not enforce matching encoder input realizations with those of the decoder output; their objective is to produce the underlying stochastic representations in the form of probability distributions.
\subsection{Training of IAE: Dimensionality and the Training Objective}
The shaded boxes in Figure~\ref{fig:Causal}(left) represent algorithmic functionalities used in the training process, and the red lines represent input variables from the data flow and output variables used in adapting neural network coefficients. Two discriminators are used for acquiring the encoder-decoder neural networks. The {\em innovations discriminator} is trained via a Wasserstein GAN that evaluates the Wasserstein distance between the estimated innovations $(\nu_t)$ and the standard (uniform {\it i.i.d.}) sequence. The {\em decoding error discriminator} evaluates the Euclidean distance between the input $(x_t)$ and the decoder output $(\hat{x}_t)$. The two discriminators generate stochastic gradients in updating the encoder and decoder neural networks.
In learning an $m$--dimensional IAE ${\cal A}_m$, the two discriminators can only take finite-dimensional samples as their inputs. In practice, the two discriminators may have different dimensions. For presentation convenience, we assume both discriminators have the same dimension. We define the {\em dimension of training} as the dimension of the training vectors used by the discriminators to derive updates of the encoder and decoder coefficients.
For an $N$-dimensional training of an $m$-dimensional IAE, the innovations discriminator ${\cal D}_{\gamma_m}$ compares a set of $N$-dimensional encoder output samples $\{\hbox{\boldmath$\nu$\unboldmath}^{(N)}_{m,t}:={\bf G}^{(N)}_{\theta_m}({\bf x}^{(N+m-1)}_t)\}$ with a set of uniformly distributed i.i.d. samples $\{{\bf u}_t^{(N)}\}\subset[-1,1]^N$ and
produces the empirical gradients of the Wasserstein distance $W(\hbox{\boldmath$\nu$\unboldmath}^{(N)}_{m,t},{\bf u}^{(N)}_t)$. The $N$-dimensional decoding error discriminator takes decoder outputs $\hat{{\bf x}}^{(N)}_{m,t}:= {\bf H}_{\eta_m}^{(N)}(\hbox{\boldmath$\nu$\unboldmath}_{m,t}^{(N+\kappa_\nu m-1)})$ and computes the decoding error $||\hat{{\bf x}}^{(N)}_{m,t}-{\bf x}^{(N)}_t||_2$. The two discriminators compute stochastic gradients and update encoder, decoder, and discriminator parameters $(\theta_m, \eta_m,\gamma_m)$ jointly. See the pseudocode in Section~\ref{sec:code}.
The learning objective of IAE is minimizing a weighted sum of the Wasserstein distance between the probability distributions of $\hbox{\boldmath$\nu$\unboldmath}^{(N)}_{m,t}$ and ${\bf u}^{(N)}_t$ and the
decoding error of the autoencoder. By the Kantorovich-Rubinstein Duality, the training algorithm can be derived from the min-max optimization:
\begin{equation} \label{eq:IGAN}
\min_{\theta,\eta} \max_{\gamma} \bigg(L_m^{(N)}(\theta,\eta,\gamma) := \mathbb{E}[D_\gamma({\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_{m,t},{\bf u}^{(N)}_t)]+\lambda \mathbb{E}[ ||\hat{{\bf x}}^{(N)}_{m,t} - {\bf x}^{(N)}_t||_2]\bigg),
\end{equation}
where the first term measures how close the estimated innovations ${\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_{m,t} = {\bf G}_{\theta_m}^{(N)}({\bf x}^{(N+m-1)}_t)$ is
to the standard reference, a uniformly distributed {\it i.i.d.} random vector ${\bf u}_t^{(N)}$. The second term measures how well ${\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_{m,t}$ serves as an innovations sequence in reproducing ${\bf x}^{(N)}_t$. A pseudocode that implements the IAE learning is shown in Section~\ref{sec:code}.
\subsection{Convergence Analysis}
We will not deal with the convergence of the learning algorithms, which is more or less standard; we shall assume that the learning algorithm converges to its global optimum. Here we address a ``structural'' convergence issue of some theoretical significance.
A practical implementation of IAE can only be of a finite dimension $m$. So is the dimension $N$ of the training process. Such a finite dimensional training can only enforce properties of a finite set of variables of the random process. Let ${\cal A}_{m}^{(N)} = (G_{\theta_m^{\tiny (N)}},H_{\eta_m^{\tiny (N)}})$ be the $m$-dimensional autoencoder optimally trained with an $N$-dimensional training process according to (\ref{eq:IGAN}). Let ${\cal A}=(G,H)$ be the ideal IAE with encoder $G$ and decoder $H$. Let $(\nu_t)$ and $(x_t)$ be the output sequences of $G$ and $H$, respectively. We are interested in how ${\cal A}_m^{(N)}$ converges to ${\cal A}$ in a suitable sense.
Ideally, we would like to have $\nu_{m,t} \xrightarrow{d} \nu_t$ and $x_{m,t}\xrightarrow{L_2} x_t$, which, unfortunately, is not achievable with finite dimensional training.
Our goal, therefore, is to achieve a {\em finite-block convergence} defined as follows:
\begin{Definition}[Finite training-block convergence]
An $m$-dimensional IAE ${\cal A}_{m}^{(N)}$ trained with $N$-dimensional training samples converges in training block size $N$ to ${\cal A}=(G,H)$ if, for all $t$,
\begin{equation}
\hbox{\boldmath$\nu$\unboldmath}_{m,t}^{(N)} \xrightarrow{d} \hbox{\boldmath$\nu$\unboldmath}_t^{(N)},~~{\bf x}_{m,t}^{(N)} \xrightarrow{L_2} {\bf x}_{t}^{(N)},~~\mbox{as $m\rightarrow \infty$.}
\end{equation}
\end{Definition}
Note that, even though the dimension $N$ of the learning process can be arbitrarily large, the finite-block convergence does not guarantee that the innovations vector of block size greater than $N$ consists of uniform {\it i.i.d.} entries, nor does it ensure that a block of the decoder output of size greater than $N$ can approximate the block of encoder inputs probabilistically, unless the process $(x_t)$ has a short memory. In practice, unless there is an adaptive procedure to train IAE with increasingly higher dimensions, one may have to be content with the weaker measure of convergence defined above.
Note also that, by the definition of innovations sequence, it suffices to require that the encoded vector ${\hbox{\boldmath$\nu$\unboldmath}}_{m,t}$ converges in distribution to a vector of uniform {\it i.i.d.} random variables. A stronger mode of convergence of the decoded sequence is necessary, however. Herein, we restrict ourselves to the $L_2$ (mean-square) convergence.
We make the following assumptions on ${\cal A}_m^{(N)}$ and ${\cal A}$:
\begin{enumerate}
\item[A1] {\bf Existence:} The random process $(x_t)$ has an innovations representation defined in (\ref{eq:G}-\ref{eq:H}), and there exists a causal encoder-decoder pair $(G, H)$ satisfying (\ref{eq:G}-\ref{eq:H}) with $H$ being uniform continuous.
\item[A2] {\bf Feasibility:} There exists a sequence of finite-dimensional IAE encoding-decoding functions $(G_{\tilde{\theta}_m}, H_{\tilde{\eta}_m})$ that converges uniformly to $(G,H)$ as $m \rightarrow \infty.$
\item[A3] {\bf Training:} The training sample sizes are infinite. The training algorithm for all finite dimensional IAE using finite dimensional training samples converges almost surely to the global optimal.
\end{enumerate}
With these assumptions, we have the following structural convergence. See Appendix A for a proof.
\begin{theorem} \label{thm:converge}
Let ${\cal A}_{m}^{(N)} = (G_{\theta_m^{\tiny (N)}},H_{\eta_m^{\tiny (N)}})$ be the $m$-dimensional autoencoder optimally trained with training sample dimension $N$ according to (\ref{eq:IGAN}). Under (A1-A3), ${\cal A}_{m}^{(N)}$ converges (in finite block size $N$) to ${\cal A}$.
\end{theorem}
We now consider the special case of an autoregressive process of finite order $K$ to gain insights into assumptions A1-A3 and Theorem~\ref{thm:converge}.
It is sufficient to demonstrate the case for the AR(1) process defined by
\[
x_t = \alpha x_{t-1} + \nu_t, ~~\alpha \in (0,1),
\]
where $(\nu_t)$ is an {\it i.i.d.} sequence uniformly distributed on $[-1,1]$, $\nu_t \sim {\cal U}(-1,1)$. A natural IAE ${\cal A}=(G_\theta,H_\eta)$ is given by
\begin{align}
G_\theta:~~& \nu_t = G_\theta ( {\bf x}_t^{(\infty)}) = \hbox{$\bf{\theta}$}^{\mbox{\tiny T}} {\bf x}_t^{(\infty)},~~\hbox{$\bf{\theta}$}=(1, -\alpha, 0, 0, \cdots), \\
H_\eta:~~ & x_t = H_\eta( \hbox{\boldmath$\nu$\unboldmath}_t^{(\infty)}) = \hbox{\boldmath$\eta$\unboldmath}^{\mbox{\tiny T}} \hbox{\boldmath$\nu$\unboldmath}^{(\infty)}_{t},~~\hbox{\boldmath$\eta$\unboldmath} = (1, \alpha, \alpha^2,\cdots).
\end{align}
It is readily verified that both $H$ and $G$ are uniformly continuous. Assumption A1 is satisfied.
Now consider the $m$-dimensional IAE $\tilde{A}_m=(G_{\tilde{\theta}_m},H_{\tilde{\eta}_m})$ defined by
\begin{align}
G_{\tilde{\theta}_m}:~~& \tilde{\nu}_{m,t} = G_{\tilde{\theta}_m} ({\bf x}_t^{(m)}) = \tilde{\hbox{$\bf{\theta}$}}_m^{\mbox{\tiny T}} {\bf x}_t^{(m)},~~\tilde{\hbox{$\bf{\theta}$}}_m=(1, -\alpha, 0, \cdots, 0), \\
H_{\tilde{\eta}_m}:~~ & \tilde{x}_{m,t} = H_{\tilde{\eta}_m}(\tilde{\hbox{\boldmath$\nu$\unboldmath}}_{m,t}^{(\kappa_\nu m)}) = \tilde{\hbox{\boldmath$\eta$\unboldmath}}_m^{\mbox{\tiny T}} \tilde{\hbox{\boldmath$\nu$\unboldmath}}^{(\kappa_\nu m)}_{m,t},~~\tilde{\hbox{\boldmath$\eta$\unboldmath}}_m = (1, \alpha, \alpha^2,\cdots, \alpha^{\kappa_\nu m-1}).
\end{align}
It is immediate that $G_{\tilde{\theta}_m} \rightarrow G_\theta$ and $H_{\tilde{\eta}_m} \rightarrow H_\eta$ uniformly as $m\rightarrow \infty$. Therefore assumption A2 is met.
From (\ref{eq:IGAN}), we have, for all $N$, $m>2$ and $\gamma$,
\[
L_m^{(N)}(\tilde{\theta}_m,\tilde{\eta}_m,\gamma) := \mathbb{E}[D_\gamma(\tilde{\hbox{\boldmath$\nu$\unboldmath}}^{(N)}_{m,t},{\bf u}^{(N)}_t)]+\lambda \mathbb{E}[ ||\tilde{{\bf x}}^{(N)}_{m,t} - {\bf x}^{(N)}_t||_2] =\lambda \mathbb{E}[ ||\tilde{{\bf x}}^{(N)}_{m,t} - {\bf x}^{(N)}_t||_2].
\]
Since $H_{\tilde{\eta}_m}$ is the best $l_2$ approximation of $H$, $\mathbb{E}( ||\tilde{{\bf x}}^{(N)}_{m,t} - {\bf x}^{(N)}_t||_2)=\min_{\theta,\eta} \mathbb{E}(||\hat{{\bf x}}^{(N)}_{m,t} - {\bf x}^{(N)}_t||_2)$. Therefore, $\tilde{A}_m=(G_{\tilde{\theta}_m},H_{\tilde{\eta}_m})$ is a global optimum of (\ref{eq:IGAN}), and Theorem~\ref{thm:converge} is verified. Further, with $\tilde{A}_m$, we have $\tilde{\nu}_{m,t} = \nu_t$ exactly for all $m \ge 2$ and $(\tilde{x}_{m,t}) \xrightarrow{L_2} (x_t)$ as $m\rightarrow \infty$.
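These claims are easy to check numerically; a short simulation sketch (all constants illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, T = 0.5, 200000
nu = rng.uniform(-1, 1, T)
x = np.zeros(T); x[0] = nu[0]
for t in range(1, T):               # AR(1): x_t = alpha x_{t-1} + nu_t
    x[t] = alpha * x[t - 1] + nu[t]

nu_hat = x[1:] - alpha * x[:-1]     # truncated encoder, exact for m >= 2
assert np.allclose(nu_hat, nu[1:])

for K in (4, 8, 16):                # truncated decoder of length K
    xr = np.convolve(nu, alpha ** np.arange(K))[:T]
    print(K, np.mean((xr[K:] - x[K:]) ** 2))
\end{verbatim}
The printed mean-squared errors shrink geometrically in the decoder length $K=\kappa_\nu m$, at the rate $\alpha^{2K}$, as expected.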
\subsection{Training and testing datasets}\label{sec:data}
We used two field-collected datasets of continuous point-on-wave (CPOW) measurements from two actual power systems.
The BESS dataset contained direct bus voltage measurements sampled at 50 kHz at a medium-voltage (20kV) substation collected from the
EPFL campus smart grid as described by \cite{SossanFabrizio&Namor_2016}. As shown in Fig.~\ref{fig:BESS System} (left), several circuits were connected at a bus via a medium voltage switchgear. Also connected to the same bus was a battery energy storage system (BESS) used to physically emulate anomalous power injections. The BESS dataset captured anomaly-free measurements and anomaly power injections that varied from 0 to 500 kW.
Fig.~\ref{fig:BESS System} (right) shows segments of anomaly and anomaly-free measurements of a single phase CPOW voltage waveforms. The CPOW samples exhibited narrow-band (sinusoidal-like) characteristics with strong temporal correlations.
Because of the frequency and voltage regulation mechanisms in a power system, the voltage
magnitudes and frequencies were tightly controlled such that both anomaly-free and anomaly voltage CPOW data were very similar, although zoomed-in plots exhibited differences in high-order harmonics, as shown in Fig.~\ref{fig:BESS System} (right). The detection of anomalies in such voltage CPOW measurements was quite challenging.
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{figs/BESS.eps}
\caption{Battery Energy System at EPFL. See \cite{SossanFabrizio&Namor_2016} for detailed description.}
\label{fig:BESS System}
\end{figure}
The second field-collected dataset (UTK) contained direct samples of voltage waveform at 6 kHz collected at the University of Tennessee. Only anomaly-free CPOW measurements were available. Similar to the BESS dataset, the UTK dataset contained strongly correlated narrow-band samples.
Besides the two field datasets (BESS and UTK), we also designed several synthetic datasets to evaluate specific properties of IAE and IAE-based anomaly detections. These datasets are described in Sec.~\ref{sec:performance} and Sec.~\ref{sec:Anomaly}.
\subsection{IAE performance}\label{sec:performance}
We evaluated the performance of IAE and several benchmarks for extracting innovations sequences. In particular, we examined whether the estimated innovations sequences passed the test of being statistically independent and identically distributed. We also evaluated the mean-squared error (MSE) of the
reconstructed signal.
\subsubsection{Benchmarks}
Since there were very few techniques specifically designed for extracting innovation sequences, we compared IAE with four benchmarks adapted from existing techniques aimed at extracting independent components.
Among these benchmarks, three were deep learning solutions (NLLS, ANICA, f-AnoGAN) and two of those (ANICA, f-AnoGAN) were autoencoder based.
\begin{itemize}
\item {\em LLS:} LLS was a linear least-squares prediction error filter that generated the one-step prediction error sequence. For stationary Gaussian time series, a perfectly trained LLS predictor would produce a true innovations sequence.
\item {\em NLLS:} NLLS was a nonlinear least-squares prediction error filter that generated the one-step prediction error time series. If the measurement time series was obtained from a sampled (possibly) non-Gaussian process with additive Gaussian noise, the NLLS prediction error sequence would be a good approximation of an innovations process.
\item {\em ANICA:} ANICA was an adaption of the nonlinear ICA autoencoder proposed in \citep{Brakel&Bengio:17}. Aimed to extract independent components from a block of measurements, the original design did not enforce causality and was not intended to generate an innovations sequence.
\item {\em f-AnoGAN:} f-AnoGAN proposed by \cite{Schlegl&Seebock:19} was an autoencoder technique involving convolutional neural networks. The goal was to extract low-dimensional latent variables as features from which the decoder could recover the original. Since the autoencoder was trained with anomaly-free samples, the intuition was that anomaly data would have high reconstruction errors. Such a construction could be viewed as a nonlinear principal component analysis (PCA).
\end{itemize}
IAE was implemented by adapting the Wasserstein GAN with a few modifications\footnote{\url{https://keras.io/examples/generative/wgan_gp/}}. For all cases in this paper, similar neural network structures were used: the encoder and decoder both contained three hidden layers with 100, 50, and 25 neurons respectively, with hyperbolic tangent activation. The discriminator contained three hidden layers with 100, 50, and 25 neurons, of which the first two used hyperbolic tangent activation and the last one linear activation. The tuning parameters used for each case are presented in the Appendix.
\begin{table}[h]
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{ll}
\toprule
Dataset & Model \\
\midrule
Moving Average (MA) &$x_t=\frac{1}{10}\sum_{i=1}^{10} \nu_{t-i}$ \\
Linear Autoregressive (LAR) &$x_t=0.5 x_{t-1}+\nu_t$ \\
Nonlinear Autoregressive (NLAR) &$x_t=0.5 x_{t-1}+0.4\mathbbm{1}(x_{t-2}<0.7) +\nu_t$ \\
\bottomrule
\end{tabular}
\end{sc}
\caption{Test Synthetic Datasets. $\nu_t\stackrel{\tiny\rm i.i.d}{\sim}\mathcal{U}[0,1]$. $\mathbbm{1}(\cdot)$ is the indicator function.}
\label{tb:Synthetic dataset}
\end{small}
\end{center}
\end{table}
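The recursions of Table~\ref{tb:Synthetic dataset} can be sketched as follows (initial transients are ignored and the names are ours):
\begin{verbatim}
import numpy as np

def synth(model, T, seed=0):
    # Generate a series from the table above; nu_t i.i.d. U[0,1].
    rng = np.random.default_rng(seed)
    nu = rng.uniform(0.0, 1.0, T)
    x = np.zeros(T)
    for t in range(1, T):
        if model == "MA":
            x[t] = nu[max(t - 10, 0):t].sum() / 10.0
        elif model == "LAR":
            x[t] = 0.5 * x[t - 1] + nu[t]
        elif model == "NLAR":
            x[t] = 0.5 * x[t - 1] + 0.4 * (x[t - 2] < 0.7) + nu[t]
    return x
\end{verbatim}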
\subsubsection{Test datasets}
Besides the two field datasets (BESS and UTK) described in Sec.~\ref{sec:data}, we included three synthetic datasets shown in Table~\ref{tb:Synthetic dataset} to produce different levels of temporal dependencies and probability distributions in the test data.
In particular, the linear autoregressive (LAR) dataset was chosen such that a properly trained LLS approach
would produce an innovations sequence. For the moving average (MA) and nonlinear autoregressive (NLAR) datasets, sufficiently complex
neural network implementation of NLLS and ANICA could produce approximations of the innovations sequences.
For all the synthetic cases, we used an IAE of memory size $m=20$ and training sample dimension $n=60$ in the neural network training, with 100,000 samples used for training in all cases.
The neural network memory sizes for the real data cases were chosen to be $m=100$ and $n=250$, due to the stronger temporal dependency.
\subsubsection{Performance and discussion}
In evaluating the performance of the benchmarks, we adopted the runs up and down test, which uses the numbers of consecutively ascending or descending samples as test statistics of statistical independence.
It was shown (empirically) to have the best performance in \citep{Gibbons03}. We also evaluated the empirical mean-squared error of the reconstruction.
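A minimal sketch of this test under the usual normal approximation, with mean $(2N-1)/3$ and variance $(16N-29)/90$ for the number of runs (ties in the data are assumed absent):
\begin{verbatim}
import numpy as np
from math import erf, sqrt

def runs_up_down_pvalue(z):
    # Two-sided p-value of the runs up-and-down test for independence.
    z = np.asarray(z, dtype=float)
    N = len(z)
    s = np.sign(np.diff(z))                    # up/down pattern
    R = 1 + np.count_nonzero(s[1:] != s[:-1])  # number of runs
    mu, var = (2 * N - 1) / 3.0, (16 * N - 29) / 90.0
    stat = abs(R - mu) / sqrt(var)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(stat / sqrt(2.0))))
\end{verbatim}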
\begin{table}[h]
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccccc}
\toprule
Method &MA &LAR &NLAR &UTK &BESS \\
\midrule
IAE (p-value) &0.9492 &0.8837 &0.7498 &0.9990 &0.8757\\
LLS (p-value) &0.3222 &0.9697 &0.0186 &$<$0.0001 &$<$0.0001\\
NLLS (p-value) &0.2674 &N/A &0.5116 &$<$0.0001 &$<$0.0001\\
ANICA (p-value) &$<$0.0001 &0.1502 &$<$0.00019 &$<$0.0001 &$<$0.0001\\
f-AnoGAN (p-value) &$<$0.001 &0.0106 &$<$0.0001 &$<$0.0001 &$<$0.0001 \\\midrule
IAE (MSE) &6.3849 &8.5366 &9.3984 &14.5641 &21.0144\\
ANICA (MSE) &137.2839 &274.3765 &283.3125 &315.6521 &319.9284\\
f-AnoGAN (MSE) &6.7421 &12.4379 &11.6458 &11.8630 &11.8821\\
\bottomrule
\end{tabular}
\end{sc}
\caption{p-value of the runs test and the mean-squared error (MSE) of the reconstruction.}
\label{tb:p-value}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Table~\ref{tb:p-value} shows the p-values of the runs up and down test. The NLLS prediction method was not implemented for the LAR case because linear least-squares was sufficient for demonstration purposes. As autoencoder-based methods, f-AnoGAN and IAE achieved comparable reconstruction errors, with f-AnoGAN performing better. ANICA failed to obtain a competitive reconstruction error.
As for the independence test, IAE achieved the highest p-values in all scenarios except the synthetic LAR dataset designed specifically for the LLS algorithm. For the synthetic datasets, LLS and NLLS produced sequences for which the runs tests could not easily reject the independence hypothesis. For the field datasets, LLS and NLLS failed the runs tests. Not specifically designed for extracting innovations, ANICA failed the runs tests for statistical independence.
\subsection{Detection of anomalies in power systems}\label{sec:Anomaly}
We evaluated the performances of several benchmarks in detecting system anomalies in field-collected dataset BESS, the UTK dataset with synthetically generated anomalies, and two synthetic time series datasets. We compared benchmark techniques using
their receiver operating characteristic (ROC) curves, which plot the true positive rate (TPR) over a range of false positive rates (FPR). The area under the ROC curve (AUROC) was also calculated for all techniques.
\subsubsection{Test datasets}
In addition to the BESS dataset that included both anomaly and anomaly-free measurements, we also considered three additional datasets shown in Table~\ref{tb:Synthetic Test}, two synthetic datasets ({\tt SYN1, SYN2}) and one semi-synthetic dataset with anomaly waveforms added to the field-collected anomaly-free samples \citep{Wang&Liu&Tong:21TPS}. {\tt SYN1} and {\tt SYN2} had the identical anomaly-free model of AR(1) Gaussian. Under the anomaly hypothesis, {\tt SYN1} was AR(2) Gaussian, whereas {\tt SYN2} was an AR(1) with uniform innovations. Because only the anomaly-free training samples were assumed and the anomaly waveforms and probability distributions were arbitrary, the same anomaly detector trained on the anomaly-free samples was tested under {\tt SYN1} and {\tt SYN2}.
\begin{table}[h]
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccl}
\toprule
Test Case & Anomaly Free Samples & Anomaly Samples &Block Size ($N$) \\
\midrule
Syn1 &$x_t=0.5 x_{t-1}+\nu_t$ &$x_t=0.3x_{t-1}+0.3x_{t-2}+\nu_t$ &1000 \\
Syn2 &$x_t=0.5 x_{t-1}+\nu_t$ &$x_t=0.5 x_{t-1}+\nu'_t$ &1000\\
UTK &Real Data &GMM Noise &200\\
BESS &Real Data &Real Data &500\\
\bottomrule
\end{tabular}
\end{sc}
\caption{Data Detection Test Cases. $\nu_t\stackrel{i.i.d}{\sim}\mathcal{N}(0,1)$,~~$\nu'_t\stackrel{i.i.d}{\sim}\mathcal{U}[-1.5,1.5]$}
\label{tb:Synthetic Test}
\end{small}
\end{center}
\end{table}
\subsubsection{Benchmarks}
Few benchmark techniques were suitable for the anomaly sequence
detection problem considered here. The most relevant prior techniques that could be applied directly were the one-class support vector machine (OCSVM) proposed in \citep{Scholkopf:99NIPS} and f-AnoGAN in \citep{Schlegl&Seebock:19}. OCSVM, a semisupervised classification technique, was implemented with radial basis functions as its kernel and was trained with anomaly-free samples. Although not designed as an anomaly detection solution, ANICA \citep{Brakel&Bengio:17} was adapted to be a preprocessing algorithm (similar to IAE) before applying the uniformity test described in Sec.~\ref{sec:detection}.
We also included the Quenouille test \citep{Priestley:81Book} designed to test the goodness of fit of an AR($k$) model (with the {\tt SYN1} dataset). Because of the asymptotic equivalence of the Quenouille test and the maximum likelihood test of \cite{Whittle51:thesis}, we used the Quenouille test as a way to calibrate how well IAE and other nonparametric tests would perform under AR($k$) time series models with dataset {\tt SYN1}, for which the Quenouille test is asymptotically optimal.
\subsubsection{Performance on the BESS dataset}
The BESS dataset was used to test the proposed technique's ability to detect system anomalies. As shown in Fig.~\ref{fig:Real-BESS} (right), the anomaly and anomaly-free voltage signals were very similar due to the voltage regulation of the bus voltage in the power system. Detection based solely on the raw voltage signal was therefore very challenging. Fig.~\ref{fig:Real-BESS} (left) shows the ROC curves obtained using 500-sample blocks. Since the anomaly and anomaly-free samples were hard to distinguish, none of the methods apart from IAE worked well in this case.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{figs/ROC-BESS.eps}
\includegraphics[scale=0.55]{figs/t-BESS.eps}
\caption{\small Detection performance for the BESS dataset. Left: ROC curves. (AUROC: IAE:0.8354, ANICA:0.5027 OCSVM:0.4903 F-AnoGAN:0.4993) Right: Anomaly and anomaly-free traces.}
\label{fig:Real-BESS}
\end{figure}
\subsubsection{Performance on the UTK dataset}
We evaluated benchmark performance on the UTK dataset with synthetic anomaly test samples. To construct the anomaly samples, we added comparably small Gaussian mixture noise to the anomaly-free measurements. The ratio of the anomaly-free signal power to the Gaussian mixture noise power was roughly 40 dB, and the time-domain trajectories of the anomaly and anomaly-free signals shown in Fig.~\ref{fig:Real-UTK} (right) demonstrate their level of similarity. As seen from Fig.~\ref{fig:Real-UTK} (left), IAE was the only detection method able to make reliable decisions, with ANICA performing slightly better than the rest.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{figs/ROC-UTK.eps}
\includegraphics[scale=0.4]{figs/UTK-t.eps}
\caption{\small Detection performance for the UTK dataset. Left: ROC curves. (AUROC: IAE:0.9135, ANICA:0.5967, OCSMV:0.2978, F-AnoGAN:0.4393) Right: Anomaly and anomaly-free traces.}
\label{fig:Real-UTK}
\end{figure}
\subsubsection{Performance on synthetic datasets {\tt SYN 1} and {\tt SYN 2}}
We also conducted anomaly detection based on synthetic data generated by auto-regressive models ({\tt SYN1} and {\tt SYN2}). For both {\tt SYN1} and {\tt SYN2}, the anomaly-free datasets were the same, and the two anomaly datasets were designed to highlight the performance of the detectors facing different anomalies.
In {\tt SYN1}, we designed the anomaly to have the same marginal distribution as the anomaly-free data ($x_t\sim\mathcal{N}(0,4/3)$), as shown in Fig.~\ref{fig:Syn1} (right), intentionally making detection based on a single sample ineffective. As shown by Fig.~\ref{fig:Syn1} (left), IAE performed comparably to the asymptotically optimal detector (Quenouille). ANICA performed better (with AUROC above 0.5) than the other two machine learning-based detection methods. Because the marginal distributions of the measurements were the same under both hypotheses, the other two machine learning-based techniques were not competitive in this setting.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{figs/ROC-Syn1.eps}
\includegraphics[scale=0.8]{figs/AR_hist_syn1.eps}
\caption{Anomaly Detection for {\tt SYN1}. Left: ROC curves. (AUROC: IAE:0.9021, ANICA:0.6337, OCSVM:0.4455, F-AnoGAN:0.4881, Quenouille:0.9112) Right: Anomaly and anomaly-free histograms.}
\label{fig:Syn1}
\end{figure}
{\tt SYN2} adopted two auto-regressive models with the same parameters in temporal dependencies. The marginal distributions of the measurements, however, were slightly different under the anomaly and anomaly-free models (see Fig.~\ref{fig:Syn2} (right)), which made it very challenging for OCSVM and f-AnoGAN. ANICA was also not effective in extracting independent components, causing failures in the uniformity test. Only IAE was able to capture the difference between the two datasets through the extraction of innovations and made reasonably reliable decisions, as shown in Fig.~\ref{fig:Syn2} (left).
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{figs/ROC-Syn2.eps}
\includegraphics[scale=0.8]{figs/AR_hist_syn2.eps}
\caption{Anomaly Detection for {\tt SYN2}. Left: ROC curves. (AUROC: IAE:0.9635, ANICA:0.6337, OCSVM:0.5407, F-AnoGAN:0.5020, Quenouille:0.5026) Right: Anomaly and anomaly-free histograms.}
\label{fig:Syn2}
\end{figure} |
\section{Introduction}
Since we celebrate today 20 years of beauty physics it may be
appropriate to start the discussion of hadronic weak interactions
by briefly recalling what was known about this subject in the seventies.
In spite of many years of intense research on $K$- and hyperon decays,
there was no coherent understanding of non-leptonic decays. For
example, the empirically found dominance of $|\Delta \vec I|=1/2$
transitions over $|\Delta \vec I|=3/2$ transitions by a factor 500 was
a complete mystery. Moreover, the strongest of all
weak decay amplitudes -- the $K\to 2\pi$ amplitude --
was found to have to vanish in the $SU3$ symmetry limit (Gell-Mann's
theorem) and no close relation between $K$-decays and hyperon decays
could be seen. In 1974 an important step forward was made:
the construction of an effective Hamiltonian which incorporates
the effects of hard gluon exchange processes\cite{one}. Still, a factor
20 out of the factor 500 could not be explained, nor could
the specific pattern of hyperon
decays. The physics at this time
dealing with $u,d$ and $s$-quarks was not rich enough. In the
corresponding decay processes too few fundamentally different
decay channels are open.
The discovery of open charm in 1976 brought hope for enlightenment.
Many decay channels could now be studied. But also new puzzles
showed up. Unexpectedly, the non-leptonic widths of $D^0$ and $D^+$
turned out to differ by a factor 3 and a strong destructive
amplitude interference in exclusive decays was found. While
$D$-decays occur in a resonance region of the final
particles which complicates the analysis, the discovery of beauty
precisely 20 years ago gave us particles -- the $B$-mesons --
which are ideally suited for the study of non-leptonic decays.
Again, new interesting effects showed up, in particular and
contrary to the case in $D$-decays, a constructive amplitude
interference in charged $B$-decays. Recent results\cite{two} of large
Penguin-type contributions and sizeable transitions to the $\eta'$
particle have still to be understood. Moreover,
$B$-meson decays give
the first realistic possibility to find CP-violating
effects outside the $K$-system.
The dramatic effects observed in hadronic weak decays gave rise
to many speculations. It was a great challenge to find the correct
explanation. Today we know that the strong confining colour forces
among the quarks are the decisive factor. These forces are enormously
effective in low energy processes and still sizeable even in energetic
$B$-decays. Although a strict theoretical treatment of the intricate
interplay of weak and strong forces is not yet possible, a
semi-quantitative understanding of exclusive two-body decays
from $K$-decays to $D$- and $B$-decays has been achieved. The consequences
of the QCD-modified weak Hamiltonian can be explored by
relating the complicated matrix elements of 4-quark operators to
better known objects, to form factors and decay constants.
\newpage
In the present talk I will describe the generalized factorization
method developed recently\cite{three},
which also takes non-factorizeable contributions into
account and has been quite successful so far. It allows the prediction
of many exclusive $B$-decays. I will also show that the
interesting and so far puzzling pattern of amplitude interference in
$B$-, $D$- and $K$-decays is caused by the different values
of $\alpha_s$ acting in these cases.
\section{The effective Hamiltonian}
At the tree level non-leptonic weak decays are mediated by single
$W$-exchange. Hard gluon exchange between the quarks
can be accounted for by using the renormalization group
technique. One obtains an effective Hamiltonian incorporating
gluon exchange processes down to a scale $\mu$ of the order
of the heavy quark mass. For the case of $b\to c\bar ud$ transitions,
e.g., the effective Hamiltonian is
\begin{equation}\label{1}
H_{eff}=\frac{G_F}{\sqrt 2}V_{cb}V^\star_{ud}\left\{
c_1(\mu)(\bar du)(\bar cb)+c_2(\mu)(\bar cu)(\bar db)
\right\}\end{equation}
where $(\bar du)=(\bar d\gamma^\mu(1-\gamma_5)u)$ etc. are
left-handed, colour singlet quark currents. $c_1(\mu)$
and $c_2(\mu)$ are scale-dependent QCD coefficients known up to
next-to-leading order\cite{four}. Depending on the process considered,
specific forms of the four-quark operators in the effective
Hamiltonian can be adopted. Using Fierz identities one
can put together those quark fields which match the constituents
of one of the hadrons in the final state of the decay process.
Let us consider, as an example, the decays $B\to D\pi$. The
corresponding amplitudes are -- apart from a common factor --
\begin{eqnarray}\label{2}
{\cal A}_{\bar B^0\to D^+\pi^-}&=&(c_1+\frac{c_2}{N_c})\langle
D^+\pi^-|(\bar du)(\bar cb)|\bar B^0\rangle,\nonumber\\
&&+c_2\langle D^+\pi^-|\frac{1}{2}(\bar d t^au)(\bar ct^ab)|\bar B^0\rangle
\nonumber\\
{\cal A}_{\bar B^0\to D^0\pi^0}&=&(c_2+\frac{c_1}{N_c})\langle
D^0\pi^0|(\bar cu)(\bar db)|\bar B^0\rangle\nonumber\\
&&+c_1\langle D^0\pi^0|\frac{1}{2}(\bar c t^au)(\bar dt^ab)|\bar B^0\rangle
\nonumber\\
{\cal A}_{B^-\to D^0\pi^-}&=&{\cal A}_{\bar B^0\to D^+\pi^-}
-\sqrt2 {\cal A}_{\bar B^0\to D^0\pi^0}\quad.\end{eqnarray}
$N_c$ denotes the number of quark colours and $t^a$ the Gell-Mann
colour $SU(3)$ matrices. The last relation in (\ref{2})
follows from isospin symmetry of the strong interactions.
The three classes of decays illustrated in eq. (\ref{2}) are referred
to as class I, class II, and class III respectively.
\section{Generalized Factorization}
How shall we deal with the complicated and scale-dependent
four-quark operators?
Because the $(\bar du)$ and the $(\bar cu)$ currents
in (\ref{2}) can generate the $\pi^-$ and $D^0$ mesons,
respectively, the above amplitudes contain the scale-independent
factorizeable parts
\begin{eqnarray}\label{3}
{\cal F}_{(\bar BD)\pi}&=&\langle \pi^-|(\bar du)|0\rangle \langle D^+|(\bar cb)|\bar B^0\rangle,
\nonumber\\
{\cal F}_{(\bar B\pi)D}&=&\langle D^0|(\bar cu)|0\rangle \langle \pi^0|(\bar db)|\bar B^0\rangle
\end{eqnarray}
which can be expressed in terms of the decay constants
$f_\pi$ and $f_D$, and the single current transition form
factors $B\to D$ and $B\to\pi$, respectively. For the non-factorizeable
contributions we define hadronic parameters $\epsilon_1(\mu)$ and
$\epsilon_8(\mu)$ such that the amplitudes (\ref{2}) take the
form\cite{five,three}
\begin{eqnarray}\label{4}
&&{\cal A}_{\bar B^0\to D^+\pi^-}=a_1{\cal F}_{(BD)\pi}\nonumber\\
&&{\cal A}_{\bar B^0\to D^0\pi^0}=a_2{\cal F}_{(B\pi)D}\nonumber\\
&&a_1=(c_1(\mu)+\frac{c_2(\mu)}{N_c})(1+\epsilon_1^{(BD)\pi}
(\mu))+c_2(\mu)\epsilon_8^{(BD)\pi}(\mu)\nonumber\\
&&a_2=(c_2(\mu)+\frac{c_1(\mu)}{N_c})(1+\epsilon_1^{(B\pi)D}
(\mu))+c_1(\mu)\epsilon_8^{(B\pi)D}(\mu)\;.
\end{eqnarray}
The effective coefficients $a_1$ and $a_2$ are scale-independent.
$\epsilon_1$ and $\epsilon_8$ obey renormalization-group
equations and their scale dependence compensates the
scale dependence of the QCD coefficients $c_1$ and $c_2$ \cite{three}.
$a_1$ and $a_2$ are process-dependent quantities because
of the process dependence of the hadronic parameters $\epsilon_1$
and $\epsilon_8$. So far, then, Eq. (\ref{4}) provides
a parametrization of the amplitudes only and allows no predictions
to be made.
To get predictions, non-trivial properties of QCD have to be
taken into account. We employ at this point the $1/N_c$ expansion of QCD.
The large $N_c$ counting rules tell us that
$\epsilon_1=O(1/N_c^2)$ and $\epsilon_8=O(1/N_c)$.
Thus one obtains for $a_1$ and $a_2$ in (\ref{4})
\begin{eqnarray}\label{5}
a_1&=&c_1(\mu)+c_2(\mu)(\frac{1}{N_c}+\epsilon_8^{(BD)\pi}
(\mu))+O(1/N_c^2)\nonumber\\
a_2&=&c_2(\mu)+c_1(\mu)(\frac{1}{N_c}+\epsilon_8^{(B\pi)D}
(\mu))+O(1/N_c^2)\quad.\end{eqnarray}
For $B$-decays using $c_1(m_b)=1+O(1/N^2_c)$ and $c_2(m_b)=O(1/N_c)$
one finally gets\cite{three}
\newpage
\begin{eqnarray}\label{6}
&&a_1=c_1(m_b)+O(1/N_c^2)\nonumber\\
&&a_2=c_2(m_b)+\zeta^Bc_1(m_b)+O(1/N_c^3)\end{eqnarray}
with
\[c_1(m_b)\approx1\quad {\rm and}\quad \zeta^B=\frac{1}{N_c}
+\epsilon_8^{(B\pi)D}(m_b)\quad.\]
Now, neglecting $O(1/N^2_c)$ terms, we are left with a single
parameter $(\zeta^B)$ only. It should be emphasized that putting
this parameter equal to $1/N_c$ does not correspond to any consistent
limit of QCD. For $a_2$ the more general expression (\ref{6})
must be used\cite{7,six}.
$\zeta^B$ is a dynamical parameter: In general, it will take different
values for different decay channels. To deal with this, let us
introduce a process-dependent factorization scale $\mu_f$
defined by $\epsilon_8(\mu_f)=0$. The renormalization-group
equation then gives\cite{three}
\begin{equation}\label{7}
\epsilon_8(\mu)=-\frac{4\alpha_s}{3\pi}\ln \frac{\mu}{\mu_f}
+O(\alpha^2_s)\quad.\end{equation}
For different processes the variation of the factorization
scale $\mu_f$ is expected to scale with the energy release
to the outgoing hadrons in the decay process. With
$\mu_f\approx O(m_b)$ one gets from (\ref{6}) and (\ref{7})
\begin{equation}\label{8}
\Delta\zeta^B\approx\frac{4\alpha_s}{3\pi}\frac{\Delta\mu_f}{m_b}
\approx \ {\rm few}\ \%\quad.\end{equation}
Thus, the process dependence of $\zeta^B$ is expected to be
very mild. To a good approximation a single value
appears sufficient for the description of two-body $B$-decays.
One finds (see section 4) $\zeta^B=0.45\pm0.05$.
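As a rough numerical illustration, take representative Wilson coefficients $c_1(m_b)\simeq 1.13$ and $c_2(m_b)\simeq -0.29$ (the precise values depend on scheme and order) together with the fitted $a_2\simeq 0.23$ of section 4; inverting (\ref{6}) then gives
\[
\zeta^B \simeq \frac{a_2-c_2(m_b)}{c_1(m_b)}\simeq\frac{0.23+0.29}{1.13}\simeq 0.46,
\]
consistent with the quoted range.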
A similar discussion also holds for $D$-decays. There
one is led to\cite{three}
\begin{eqnarray}\label{9}
a_1&\approx& c_1(m_c)+\zeta^{'D}c_2(m_c)\nonumber\\
a_2&\approx& c_2(m_c)+\zeta^Dc_1(m_c)\nonumber\\
\zeta^{'D}&\approx& \zeta^D \end{eqnarray}
and again expects only a mild process dependence of $\zeta^D$.
Indeed, the corresponding description of exclusive $D$-decays
brought reasonable success. $\zeta^D$ turned out to be
very small or zero. There is also theoretical support
(using QCD sum rule methods) for a
partial or full cancellation of the $1/N_c$ term by non-factorizeable
contributions\cite {seven}.
On the other hand, the
corresponding calculation of $\zeta^B$ is more involved\cite{eight}
and was so far not successful.
\section{Determination of \boldmath $\lowercase {a_1}$ and
\boldmath $\lowercase{a_2}$}
The most direct way to determine the effective constant $a_1$
consists in comparing non-leptonic decay rates with
the corresponding differential semi-leptonic rates at
momentum transfers equal to the masses of the current
generated particles\cite{nine}. One gets, for example,
\begin{equation}\label{10}
\frac{\Gamma(\bar B^0\to D^{(*)+}\rho^-)}{d\Gamma
(\bar B^0\to D^{(*)+}\ell^-\bar\nu)/dq^2
|_{q^2=m^2_\rho}}=6\pi^2|V_{ud}|^2f^2_\rho|a_1|^2\quad.\end{equation}
Because the generated particle is a vector particle
like the lepton pair, the form factor combinations occurring
in the numerator and denominator cancel precisely. Thus,
the ratio (\ref{10}) is solely determined by $|a_1|$
and the $\rho$-meson decay constant $f_\rho$.
Taking by convention $a_1$ real and positive, the measured
rates\cite{ten} give\cite{three}
$a_1=1.09\pm0.13$
in agreement with the expectation (\ref{6}). $a_1$ values
obtained from several other processes are in full agreement
with the above number. In transitions to pseudoscalar particles
the form factor combinations in the equations replacing (\ref{10})
do not cancel. But for $B\to D,D^*$ matrix elements all
form factors are well determined using experimental data and
the heavy quark effective theory\cite{eleven}. The latter relates
in particular longitudinal form factors to the
transversal ones.
Values for $|a_2|$ can be obtained from the analysis of class II
transitions. The decays $\bar B^0\to D^{0(*)}h^0$
$(h^0:\pi^0, \rho^0, a^0_1)$ have not yet been observed, but the
branching ratios for $\bar B\to K^{(*)} J/\psi$ and
$\bar B\to K^{(*)}\psi(2S)$ are available\cite{ten}. The analysis
requires model estimates for the heavy-to-light form factors,
which enter here. We use the NRSX model\cite{twelve} which is
based on the extrapolation of the BSW form factors\cite{six}
at $q^2=0$ by appropriate pole and dipole formulae.
Where available, more sophisticated calculations agree with
these results (see, e.g., Ref.~14). We find\cite{three}
$|a_2|=0.21\pm0.01\pm0.04$, where the second error accounts
for the model dependence.
The relative phase between $a_2$ and $a_1$ together with
the magnitude of $a_2$ can be obtained from the decays
$B^-\to D^{(*)0}h^-$ where, as seen from (\ref{2}) and
(\ref{4}), the two amplitudes interfere.
The data for the ratios $\Gamma(B^-\to D^{(*)0}h^-)
/\Gamma(\bar B^0\to D^{(*)+}h^-)$ give conclusive
evidence for constructive interference\cite{ten}. Taking
$a_2$ to be a real number (vanishing final state interaction),
we find\cite{three} $a_2/a_1=+0.21\pm0.05\pm0.04$.
Combined with the value for $a_1$ this gives $a_2=+0.23\pm0.05\pm0.04$.
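Schematically, and neglecting the small kinematic differences between the charged and neutral modes, the measured ratios probe the interference through
\[
\frac{\Gamma(B^-\to D^{(*)0}h^-)}{\Gamma(\bar B^0\to D^{(*)+}h^-)}
\approx \left(1+x_h\,\frac{a_2}{a_1}\right)^2,
\qquad x_h:=\frac{{\cal F}_{(Bh)D}}{{\cal F}_{(BD)h}},
\]
where $x_h$ (our shorthand) denotes the ratio of factorizeable matrix elements defined as in (\ref{3}); a ratio in excess of unity thus directly signals a positive $a_2/a_1$.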
The nice agreement between the two determinations of $|a_2|$ shows
that the process dependence of this quantity cannot be
large. There is no evidence for it. An analysis with an alternative
and very simple form factor model gives slightly larger
values for $a_2$ but the results from different processes are
again consistent with each other\cite{three}.
The positive value for $a_2/a_1$ in exclusive $B$-decays
is remarkable. It is different from the value
of the same
ratio in exclusive $D$-decays. There $a_2/a_1$ is negative
causing a sizeable destructive amplitude interference. The change
of $a_2/a_1$ by
going from $B$- to $D$-and $K$-decays will be discussed
in section 6.
\section{Tests and Results}
The $B$-meson, because of its large mass, has many decay
channels. We learned from important examples the values
of $a_1$ and $a_2$ and their near process-independence in
energetic two-body decays. Thus numerous tests and
predictions for branching ratios and for the polarizations
of the outgoing particles can be made. I will be very brief
here and simply refer to Ref. 3 for the compilation of
branching ratios in tables, for a detailed discussion and
for comparison with the data. Also discussed there is the
possible influence of final state interactions. Limits
on the relative phases of isospin amplitudes are given.
In contrast to $D$-decays final state interactions do not
seem to play an important role for the dominant
exclusive $B$-decay modes.
For the much weaker Penguin-induced transitions,
$\bar B\to K^{(*)}\pi$ for example, this statement
does not hold. Small amplitudes can get an additional
contribution from stronger decay
channels\cite{six,fourteen}. In the $\bar B\to
K^{(*)}\pi$ case the decay can proceed via virtual
intermediate $D^{(*)}\bar D^{(*)}_s$ like channels
generated by the $b\to c\bar cs$
interaction. The colour octet $c\bar c$ pair, if at low
invariant mass, may then turn into a pair of light
quarks by gluon exchange. This gives rise to a
"long range Penguin" contribution\cite{fourteen} in
addition to the short distance Penguin amplitude.
In future applications of our generalized factorization
method
to rare decays this should be kept in mind. Here,
however, I will not discuss this subject further.
Non-leptonic decays to two spin-1 particles also need a
separate discussion. Here one has 3 invariant amplitudes
corresponding to
outgoing $S$, $P$, and $D$-waves. Non-factorizeable
contributions to these amplitudes may, in general,
have an amplitude composition different from the factorizeable
one, which cannot be dealt with by introducing effective $a_1$, $a_2$ parameters.
Whether or not and to what
extent factorization also holds in these more complicated
circumstances can be learned from the polarization of the final
particles. In class I decays the factorization approximation
predicts a polarization identical to
the one occurring in the corresponding
semi-leptonic decays at the appropriate $q^2$ value.
For $B\to D^*V$ decays the theoretical predictions
have very small errors only\cite{three}. Another case of particular
interest is the
polarization of the $J/\psi$ particle in the decay
$B\to K^*\ J/\psi$. Form factor models predict
a longitudinal polarization of around 40\%.
A recent CLEO measurement\cite{fifteen} gives $(52\pm7\pm4)\%$ .
It can be shown\cite{sixteen} that small changes of the ratios
of form factors obtained in the NRSX model at $q^2=m^2_{J/\psi}$
are sufficient to get full agreement with the measurements
of the longitudinal as well as both
transverse polarizations. At present,
even with respect to polarization measurements, the generalized
factorization approximation is in agreement with
the data.
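For reference, the longitudinal polarization fraction quoted here is the standard one, defined in terms of the three helicity amplitudes of the two-body decay:
$$\frac{\Gamma_L}{\Gamma}=\frac{|H_0|^2}{|H_0|^2+|H_{+1}|^2+|H_{-1}|^2}\;,$$
so that the quoted numbers compare the predicted and measured weight of the longitudinal amplitude $H_0$ in the total rate.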
Because of its success, the generalized factorization method,
besides allowing many predictions for yet unmeasured decays,
can also be used to determine unknown decay constants.
A case in point is the determination of the decay constant
of the $D_s$ and $D^*_s$ particles.
Comparing non-leptonic decays to $D_s, D^*_s$ with those to
light mesons, we find\cite{three}
\begin{equation}\label{11}
f_{D_s}=(234\pm25)\ {\rm MeV},\quad f_{D^*_s}=
(271\pm33)\ {\rm MeV}\;.\end{equation}
In this determination $a_1$ cancels and, presumably, also
some of the experimental systematic errors. The value for
$f_{D_s}$ is in excellent agreement with the value $f_{D_s}
=(241\pm37)$ MeV obtained from the leptonic decay of the
$D_s$ meson\cite{seventeen}. There are several other decay constants
which can be measured this way. Of particular interest are the
decay constants of $P$-wave mesons like the
$a_0,a_1,K^*_0,K_1$ particles.
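To make the cancellation of $a_1$ explicit, note that one compares class I rates; schematically (an illustration only, the complete expressions and numerical factors are given in Ref. 3),
$$\frac{\Gamma(\bar B\to D\,D^-_s)}{\Gamma(\bar B\to D\,\pi^-)}\;\propto\;\frac{f^2_{D_s}}{f^2_\pi}\;,$$
where the proportionality factor contains form factors, phase space and known CKM factors, but not $a_1$.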
\section{From \boldmath $B$- to \boldmath $D$- to \boldmath $K$-Decays}
The process dependence of the coefficients $a_1$ and $a_2$
governing exclusive $B$-decays turned out to be very mild.
In fact, it is not seen within the errors of the present data.
But $a_1$ and $a_2$ change strongly by going from $B$-decays
to $D$-decays or even down to $K$-decays. In the generalized
factorization scheme this is expected because of the different
factorization scales and the corresponding $\alpha_s(\mu_f)$
values controlling the strength of the colour forces between
the quarks. In Fig. 1
the ratio $a_2/a_1$ is plotted as a function of
$\alpha_s(\mu_f)$. We used for the Wilson coefficients
the renormalization group invariant definitions of Ref. 19.
These definitions appear appropriate for describing the changes of
the scale-independent coefficients $a_1$ and $a_2$ with
the particle energy. As seen from the figure, the positive
value of $a_2/a_1$ found for exclusive
$B$-decays indicates that here small values of $\alpha_s$ govern
the colour forces in the first instant of the decay process. This
is an impressive manifestation of the colour
transparency argument put forward by Bjorken\cite{nine}.
In $D$-decays the stronger gluon interactions redistribute
the quarks: the induced neutral current interaction is already
sizeable. We took the corresponding values of $a_1$ and $a_2$
from the measured isospin amplitudes. They are less affected
by final state interactions than the individual amplitudes.
The ratio $|A_{1/2}|/|A_{3/2}|$ is already rather large
$(\approx 4)$, leading to $a_2/a_1 \approx -0.45$. According
to the figure this corresponds to an effective value
$\alpha_s \approx 0.7$. The negative value of $a_2$,
and the corresponding destructive amplitude
interference in charged $D$-decays, has been known for many
years\cite{six,nineteen}. Since
the bulk of $D$-decays are two-body or quasi two-body decays,
it is the main cause for the lifetime difference of $D^+$
and $D^0$ in full accord with estimates of the relevant partial
inclusive decay rates\cite{twenty}.
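For orientation, recall the parametrization commonly used in generalized factorization (our shorthand here; the precise renormalization group invariant definitions are those of Ref. 19):
$$a_1=c_1(\mu_f)+\zeta\,c_2(\mu_f)\;,\qquad a_2=c_2(\mu_f)+\zeta\,c_1(\mu_f)\;,\qquad c_\pm=c_1\pm c_2\;,$$
with an effective colour factor $\zeta$. Since $a_1+a_2=(1+\zeta)\,c_+$ and $a_1-a_2=(1-\zeta)\,c_-$, a decreasing ratio $c_+(\mu_f)/c_-(\mu_f)$ drives $a_2/a_1$ towards $-1$.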
\begin{figure}
\epsfxsize=7cm
\centerline{\epsffile{fig3.eps}}
\caption{\label{Fig:a2a1}
The ratio $a_2/a_1$ as a function of the running coupling constant
evaluated at the factorization scale. The bands indicate the
phenomenological values of $a_2/a_1$ extracted from $\bar B\to D\pi$
and $D\to K\pi$ decays.}
\end{figure}
Because of the onset of non-perturbative effects
one cannot extend Fig. 1 down to
larger $\alpha_s$ values. However, the trend to smaller
and smaller values of
the ratio of the Wilson coefficients $c_+(\mu_f)/c_-(\mu_f)$,
which is already down to
$\approx 0.17$ for $D$-decays, is visible. It indicates
a strong and, presumably, non-perturbative force in the
colour $3^*$ channel of two quarks, i.e. in the scalar diquark
channel\cite{twentyone}. In $K$-decays one is very close to the limiting
case $a_2/a_1=-1$ for which the $|\Delta \vec I|=1/2$ rule
would hold strictly.
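To see why this limit enforces the rule, recall that in factorization the pure $|\Delta \vec I|=3/2$ amplitude is proportional to the combination $a_1+a_2$; schematically,
$$A(K^+\to\pi^+\pi^0)\;\propto\;(a_1+a_2)\,f_\pi\,F^{K\to\pi}(m^2_\pi)\;,$$
which vanishes for $a_2/a_1\to-1$, leaving only the $|\Delta \vec I|=1/2$ transitions.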
\section{Conclusions}
The matrix elements of non-leptonic exclusive decays are
notoriously difficult to calculate. Factorization provides
for a connection with better known objects. If combined
with the $1/N_c$ expansion method and properly
applied and interpreted, it turns out to be very useful,
at least for energetic $B$-decays, and has passed
many tests. Thus it enables reliable predictions for many
decay channels to be made and also permits the determination of decay
constants which are difficult to measure otherwise. Factorization
does not necessarily hold to the same degree for
transitions to two vector particles. These are more sensitive
to non-factorizable contributions and final state interactions.
The constant $a_1$ is predicted to be one
apart from $1/N^2_c$ corrections in exclusive $B$-decays and to
be practically
process-independent. The analysis confirmed these expectations.
The particularly interesting parameter $a_2$, within
errors, also does not show a process dependence.
The positive value of $a_2/a_1$ extracted from
exclusive $B$-decays is remarkable. The obvious interpretation
is that a fast-moving colour singlet quark pair interacts
little with soft gluons.
The constructive interference in energetic two-body $B^-$-decays
does not imply that the lifetime of the $B^-$-meson should be
shorter than the lifetime of the $\bar B^0$ meson: The majority
of transitions proceed into multi-body final states. For these
the relevant scale may be lower than $m_b$ leading to destructive
interference. Also, there are many decay channels for which
interference cannot occur.
The running of $a_1$ and $a_2$ with $\alpha_s(\mu_f)$, which in
turn depends on the energy release to the final particles,
is very interesting. It causes the change from constructive
amplitude interference in $B^-$-decays to strong destructive
interferences in $D$- and $K$-decays. Since exclusive two-body
and quasi two-body decays are dominant in $D$-decays this
destructive interference is the main cause of the lifetime
difference between $D^0$ and $D^+$. By going to low energies
the lowest isospin amplitude is seen to become more and more dominant.
Strange particle decays are the most spectacular manifestation
of the dramatic changes occurring when the effective $\alpha_s$
gets large. A unified picture of exclusive non-leptonic decays
emerges which ranges from very low scales to the large energy
scales relevant for $B$-decays.
\section{Acknowledgement}
The work reported here was performed in a fruitful and most
enjoyable collaboration with Matthias Neubert which is gratefully
acknowledged. The author would also like to thank Dan Kaplan and the
other organizers of the b20 symposium for the very pleasant meeting
and Matthias Jamin for a useful discussion.
\section{\bf Introduction}
In their much acclaimed work Lacey and Thiele \cite{lt1}, \cite{lt2} proved the boundedness of the bilinear Hilbert
transform and established a long standing conjecture of A.P. Calder\'on. Since then the study of bilinear multiplier
operators which commute with simultaneous translations has attracted a great deal of attention. For a
comprehensive survey we would like to refer the interested
reader to the article of Grafakos and Torres \cite{gtor}.
\medskip
One of the important themes of study of $L^p$ multipliers is about
the relationship between multipliers on the classical Euclidean groups
${\mathbb R}^n,\;{\mathbb T}^n,\;{\mathbb Z}^n$. de Leeuw \cite{dlw} studied the restrictions
of $L^p$ multipliers on ${\mathbb R}^n$ to ${\mathbb T}^n$. These kind of relations
between bilinear multiplier operators
defined on ${\mathbb R}$ and ${\mathbb T}$ have appeared in the work of Fan and Sato
\cite{fs}, Blasco, Carro and Gillespie \cite{bcg} and Grafakos
\cite{ghoz}.
Also, in the same paper de Leeuw observed that periodic multipliers on
${\mathbb R}$ are precisely the ones which are multipliers on ${\mathbb Z}$ and vice
versa. In section 3 we investigate the bilinear analogue of these
results.
The extension question from ${\mathbb T}^n$ to ${\mathbb R}^n$ was not fully
explored in de Leeuw's paper. However, in \cite{jod} Jodeit
addressed some natural extensions. A function on ${\mathbb Z}^n$ is extended to
${\mathbb R}^n$ by forming the sum of integer translates of a suitable
function
$\Lambda$, i.e. $$\Psi(\xi)=\sum\limits_{k\in{\mathbb Z}^n}\phi(k)\Lambda
(\xi-k).$$
(For $n=1$ if $\Lambda$ takes the value $1$ at zero and has support in $[0,1)$
then $\Psi$ and $\phi$ agree at integers). In \cite{sp} Madan and Mohanty addressed the bilinear analogue of
this using transference techniques. They have shown that the
piecewise constant extension of a bilinear multiplier symbol
$\phi(n,m)$ of an operator
${\mathcal {P}}_\phi:L^{p_1}({\mathbb T}) \times L^{p_2}({\mathbb T}) \rightarrow L^{p_3}({\mathbb T})$, where $\frac{1}{p_1}+ \frac{1}{p_2}=
\frac{1}{p_3}<1$, gives a bilinear multiplier on ${\mathbb R}$. In section \ref{jext}, we give other examples of $\Lambda$
as in Jodeit which complements the results of \cite{sp}.
Further the results hold for the entire admissible range of exponents $p_1, p_2$ and $p_3$.
\medskip
In Section \ref{prelim}, we give
basic definitions and notation.
\section{\bf Preliminaries}\label{prelim}
\medskip
Let $\mathcal{S}({\mathbb R})$ be the space of Schwartz class functions with
the usual topology and let $\mathcal{S'}({\mathbb R})$ be
its dual space. We say
that a triplet $(p_1, p_2, p_3)$ is H\"{o}lder related if
$\frac{1}{p_1}+ \frac{1}{p_2}= \frac{1}{p_3}$, where $p_1,p_2 \geq
1$ and $p_3 \geq \frac{1}{2}$.
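For instance (an illustration of the admissible range), the triplets
$$(p_1,p_2,p_3)=(2,2,1)\,,\qquad (4,4,2)\,,\qquad \left(1,1,\tfrac{1}{2}\right)$$
are all H\"{o}lder related; the last one shows why values $p_3<1$, and hence quasi Banach spaces, must be admitted.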
\medskip
For $f, g \in \mathcal{S}({\mathbb R})$ the Bilinear Hilbert transform is given by
$$ H(f,g)(x):= p.v. \int_{\mathbb R} f(x-t) g(x+t) \frac{dt}{t} $$
and has the following alternative expression: $$H(f, g) (x)=
\int_{{\mathbb R}}\int_{{\mathbb R}}\hat{f}(\xi) \hat{g}(\eta)(-i )sgn(\xi - \eta)e^{2
\pi i x(\xi+\eta)} d\xi d\eta . $$
In \cite{lt1}, \cite{lt2}, Lacey and Thiele proved the boundedness of the above
operator $H$ from $L^{p_1}({\mathbb R}) \times L^{p_2}({\mathbb R}) \rightarrow
L^{p_3}({\mathbb R})$ for the H\"{o}lder related triplet $(p_1, p_2,
p_3)$, where $1 < p_1, p_2 \leq \infty $ and $\frac {2}{3}< p_3
<\infty $.
\medskip
It is known that for any continuous bilinear operator ${\mathcal {C}} : \mathcal{S}({\mathbb R}) \times \mathcal{S}({\mathbb R}) \rightarrow \mathcal{S'}({\mathbb R})$,
which commutes with simultaneous translations there exists a symbol $\psi_{{\mathcal {C}}}(\xi,\eta)$ such that for $f, g \in \mathcal{S}({\mathbb R})$
$${\mathcal {C}}(f, g) (x)= \int_{{\mathbb R}}\int_{{\mathbb R}}\hat{f}(\xi) \hat{g}(\eta)\psi_{{\mathcal {C}}}(\xi,\eta)e^{2 \pi i x(\xi+\eta)} d\xi d\eta $$
In the distributional sense we can write
$${\mathcal {C}}(f, g) (x)= \int_{{\mathbb R}}\int_{{\mathbb R}} f(x - u) g(x-v) K_{{\mathcal {C}}}(u, v) du dv $$
where $\hat{K_{{\mathcal {C}}}}= \psi_{{\mathcal {C}}}$ (in the sense of distributions).
\medskip
Unlike in the linear case, the boundedness of the symbol $\psi_{{\mathcal {C}}}$ is not known. In this article we will be
dealing with bounded symbols only.
\medskip
For $\psi \in L^{\infty}({\mathbb R}^2)$ and $f, g \in \mathcal{S}({\mathbb R})$, we write
\begin {eqnarray}\label{symbol side}
{\mathcal {C}}_{\psi}(f, g) (x)= \int_{{\mathbb R}}\int_{{\mathbb R}}\hat{f}(\xi) \hat{g}(\eta)\psi(\xi,\eta)e^{2 \pi i x(\xi+\eta)} d\xi d\eta
\end {eqnarray}\\
If for all $ f,g \in \mathcal{S}({\mathbb R})$ the bilinear operator ${\mathcal {C}}_{\psi}$ satisfies
\begin {eqnarray}\label{bilinear}||{\mathcal {C}}_{\psi}(f,g)||_{p_3} \leq c ||f||_{p_1} ||g||_{p_2},\end {eqnarray}
where $c$ is a constant independent of $f$ and $g$, then we say that
${\mathcal {C}}_{\psi}$ is a bilinear multiplier operator associated with the
symbol $\psi$ for the triplet $(p_1, p_2, p_3)$. The set of all
bounded bilinear multipliers for the triplet $(p_1, p_2, p_3)$ will
be denoted by $M_{p_1,p_2}^{p_3}({\mathbb R})$. For $p_3 \geq 1$,
$M_{p_1,p_2}^{p_3}({\mathbb R})$ becomes a Banach space under the operator
norm, whereas for $p_3 < 1$ it forms a quasi Banach space. We will use the notation $\|\cdot\|$ for the operator norm and,
for convenience, we will not attach the exponents to it; they will be understood from the context.
\medskip
Bilinear multiplier operators on ${\mathbb T}$ and ${\mathbb Z}$ can be defined similarly. We say that the operator ${\mathcal {P}}$ defined by
$${\mathcal {P}}(F, G)(x):=\sum\limits_n \sum\limits_m \hat{F}(n)\hat{G}(m)\phi(n,m) e^{2\pi i x(n+m)}$$
is bounded from $L^{p_1}({\mathbb T})\times L^{p_2}({\mathbb T}) \rightarrow$
$L^{p_3}({\mathbb T})$ if for some constant $c$ and for all trigonometric
polynomials $F, G $ we have
$$||{\mathcal {P}}(F,G)||_{p_3} \leq c ||F||_{p_1} ||G||_{p_2}$$
Similarly on ${\mathbb Z}$ the operator
$${\mathcal {D}}(a, b)(l)= \int_{{\mathbb T}}\int_{{\mathbb T}}\hat{a}(\theta) \hat{b}(\rho)\psi(\theta,\rho)e^{2 \pi i l(\theta+\rho)} d\theta d\rho$$
is said to be bounded from $l^{p_1}({\mathbb Z})\times l^{p_2}({\mathbb Z}) \rightarrow l^{p_3}({\mathbb Z})$, if for some constant $c$ and for all
finite sequences $a, b$ we have
$$||{\mathcal {D}}(a,b)||_{l^{p_3}} \leq c ||a||_{l^{p_1}} ||b||_{l^{p_2}}$$
The space of bounded bilinear multipliers on ${\mathbb T}$ and ${\mathbb Z}$ for the
triplet $(p_1, p_2, p_3)$ will be denoted by $M_{p_1,p_2}^{p_3}({\mathbb T})$
and $M_{p_1,p_2}^{p_3}({\mathbb Z})$ respectively.
\section {\bf Periodic bilinear multipliers}\label{periodic}
\medskip
Let $ \psi({\xi,\eta}) \in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$ be a periodic
function with period one in both variables, i.e.
$\psi({\xi,\eta})=\psi({\xi+1,\eta})=\psi({\xi,\eta+1}).$ A natural
question that arises is whether $\psi({\xi,\eta}) \in
M_{{p_1},{p_2}}^{p_3}({\mathbb Z})$. In \cite{blasco} Blasco proved a
partial result in this direction. Conversely, given $\psi \in
M_{{p_1},{p_2}}^{p_3}({\mathbb Z})$ one can ask whether $ \psi({\xi,\eta})
\in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$. For linear multipliers see \cite{auscher}. We address these questions here for the entire admissible range of exponents. In particular, we show that
\begin{theorem}\label{periodic symbols} Let $\psi \in \, L^\infty({\mathbb R}^2)$ be
a $1$-periodic function in both variables. Then $\psi \in
M_{{p_1},{p_2}}^{p_3}({\mathbb R})$ if and only if $\psi \in
M_{{p_1},{p_2}}^{p_3}({\mathbb Z})$, where the triplet $(p_1, p_2, p_3)$ is
H\"{o}lder related.\end{theorem}
\noindent
{\bf Proof}: First we will prove that if $\psi\in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$ then
$\psi\in M_{{p_1},{p_2}}^{p_3}({\mathbb Z})$. Let $\Phi\in \mathcal{S}({\mathbb R})$
be such that $supp(\Phi)\subseteq[0,1], \, 0\leq\Phi(x)\leq1$ and
$\Phi(x)=1, \forall \, x\in[\frac{1}{4},\frac{3}{4}]$. If $\{a_k\}$
and $\{b_l\}$ are two sequences with finitely many non-zero terms,
we define two functions $f_a,g_b$ as follows
$$f_a(x):= \sum\limits_k a_k \Phi(x-k)$$ and $$g_b(x):= \sum\limits_l b_l \Phi(x-l).$$
Since the translates $\Phi(\cdot-k)$ have disjoint supports and $0\leq\Phi\leq1$, we have $||f_a||_{L_{p_1}({{\mathbb R}})} \leq ||a||_{l_{p_1}({{\mathbb Z}})}$
and $||g_b||_{L_{p_2}({{\mathbb R}})} \leq ||b||_{l_{p_2}({{\mathbb Z}})}$. Then
$${\mathcal {C}}_\psi(f_a,g_b)(x)= \int_{{\mathbb R}}\int_{{\mathbb R}} \psi(\xi,\eta) \hat{f_a}(\xi)
\hat{g_b}(\eta) e^{2\pi i (\xi+\eta)x} d\xi d\eta$$
$$= \int_0^1\int_0^1 \psi(\xi,\eta)e^{2\pi i (\xi+\eta)x} \sum\limits_m \sum\limits_n\hat{f_a}(\xi+m)\hat{g_b}(\eta+n)
e^{2\pi i (m+n)x} d\xi d\eta $$
$$= \int_0^1\int_0^1 \psi(\xi,\eta)e^{2\pi i (\xi+\eta)x} \sum\limits_m\hat{f_a}(\xi+m)e^{2\pi i mx}
\sum\limits_n\hat{g_b}(\eta+n)e^{2\pi i nx} d\xi d\eta $$
$$= \int_0^1\int_0^1 \psi(\xi,\eta)\left(\sum\limits_m f_a(x+m)e^{-2\pi i \xi m}\right)\left(\sum\limits_n g_b(x+n) e^{-2\pi i \eta n}\right) d\xi d\eta $$
where we have used the Poisson summation formula in the last step.
\medskip
For $x\in[j+\frac{1}{4},j+\frac{3}{4}] = I_j$, we can write $x = [x]+
(x')$, where $(x')$ is the fractional part of $x$. Then
\begin{eqnarray*}
\sum\limits_m f_a(x+m)e^{-2\pi i \xi m}
&=& \sum\limits_m\sum\limits_k a_k \Phi(j+(x')+m-k)e^{-2\pi i \xi m}\\
&=& \sum\limits_m\ a_{j+m}e^{-2\pi i \xi m}\\
&=&\sum\limits_m\ a_m e^{-2\pi i \xi (m-j)}\\
&=& \hat{a}({\xi}) e^{2\pi i \xi j}
\end{eqnarray*}
Thus $$\sum\limits_m f_a(x+m)e^{-2\pi i \xi m}\chi_{I_j}(x) = \hat{a}({\xi}) e^{2\pi i \xi j}\chi_{I_j}(x) $$
Similarly $$\sum\limits_n g_b(x+n)e^{-2\pi i \eta n}\chi_{I_j}(x) = \hat{b}({\eta}) e^{2\pi i \eta j}\chi_{I_j}(x)$$
Substituting these we get,
\begin{eqnarray*}
{\mathcal {C}}_\psi(f_a,g_b)(x)\chi_{I_j}(x) &=& \int_0^1\int_0^1 \psi(\xi,\eta) \hat{a}(\xi) \hat{b}(\eta)
e^{2\pi i (\xi+\eta) j} d\xi d\eta \\
&=& {\mathcal {D}}_\psi(a,b)(j),
\end{eqnarray*}
where ${\mathcal {D}}_\psi$ is the bilinear operator
defined on $l_{p_1}({\mathbb Z}) \times l_{p_2}({\mathbb Z})$. Now
\begin{eqnarray*}
|{\mathcal {D}}_\psi(a,b)(j)|^{p_3} &=& |{\mathcal {C}}_\psi(f_a,g_b)(x)|^{p_3} \quad \mbox{for } x\in I_j \\
&=& 2 \int_{j+\frac{1}{4}}^{j+\frac{3}{4}} |{\mathcal {C}}_\psi(f_a,g_b)(x)|^{p_3} dx
\end{eqnarray*}
\medskip
Summing over $j \in {\mathbb Z}$,
\begin{eqnarray*}
\sum\limits_j |{\mathcal {D}}_\psi(a,b)(j)|^{p_3} &=& 2 \sum\limits_j \int_{j+\frac{1}{4}}^{j+\frac{3}{4}} |{\mathcal {C}}_\psi(f_a,g_b)(x)|^{p_3} dx \\
&\leq& 2 \int_{{\mathbb R}}|{\mathcal {C}}_\psi(f_a,g_b)(x)|^{p_3} dx \\
&\leq& 2\, ||{\mathcal {C}}_\psi||^{p_3} ||f_a||_{p_1}^{p_3} ||g_b||_{p_2}^{p_3}
\end{eqnarray*}
Therefore
$$||{\mathcal {D}}_\psi(a,b)||_{p_3} \leq 2^{1/p_3}\ ||{\mathcal {C}}_\psi|| \ ||a||_{p_1} \ ||b||_{p_2} .$$
\medskip
For the converse:-
\medskip
Let $\psi\in M_{{p_1},{p_2}}^{p_3}({\mathbb Z})$. Define
$$K_{n,m}:= \int_I \int_I \psi(\xi,\eta)e^{2 \pi i (\xi
n+\eta m)} d\xi d\eta$$ where $I = [0,1]$.
Let $f,g \in C_c^\infty({\mathbb R})$. Then
\begin{eqnarray*}
{\mathcal {C}}_\psi(f,g)(x) &=& \int_{{\mathbb R}}\int_{{\mathbb R}}\hat{f}(\xi) \hat{g}(\eta)\psi(\xi,\eta)e^{2 \pi i x(\xi+\eta)} d\xi
d\eta\\
&=& \sum\limits_{n,m}
\int_I\int_I\hat{f}(\xi+n) \hat{g}(\eta+m)\psi(\xi+n,\eta+m)e^{2 \pi
i x(\xi+n+\eta+m)} d\xi d\eta\\
&=&\int_I \int_I \sum\limits_n \hat{f}(\xi+n) e^{2 \pi i x(\xi+n)}\sum\limits_m \hat{g}(\eta+m)
e^{2 \pi i x(\eta+m)}\psi(\xi,\eta)e^{2 \pi i x(\xi+\eta)} d\xi d\eta\\
&=&\int_I\int_I \left( \sum\limits_n f(x+n) e^{-2 \pi i
\xi n}\right) \left(\sum\limits_m g(x+m)e^{-2 \pi i \eta
m}\right) \psi(\xi,\eta) d\xi d\eta\\
& = & \sum\limits_n \sum\limits_m f(x+n) g(x+m)
\int_I\int_I\psi(\xi,\eta)e^{-2 \pi i (\xi n+\eta m)}d\xi
d\eta\\
& =& \sum\limits_n \sum\limits_m K_{n,m}f(x-n) g(x-m)
\end{eqnarray*}
Hence,
$$ \int_{{\mathbb R}}|{\mathcal {C}}_\psi(f,g)(x)|^{p_3} dx = \int_{{\mathbb R}}|\sum\limits_n \sum\limits_m K_{n,m}f(x-n) g(x-m)|^{p_3} dx $$
$$ = \int_I\sum\limits_l |\sum\limits_n \sum\limits_m K_{n,m}f(x+l-n) g(x+l-m)|^{p_3} dx$$
$$ \leq \int_I ||{\mathcal {D}}_\psi||^{p_3} \left(\sum\limits_n |f(x-n)|^{p_1} \right)^{\frac{p_3}{p_1}} \left(\sum\limits_m |g(x-m)|^{p_2} \right)^{\frac{p_3}{p_2}} dx $$
Using H\"{o}lder's inequality with the exponents
$\frac{p_1}{p_3},\frac{p_2}{p_3}$, we obtain
$$||{\mathcal {C}}_\psi(f,g)||_{p_3} \leq ||{\mathcal {D}}_\psi|| \ ||f||_{p_1} ||g||_{p_2}.$$
\qed
\medskip
We now turn our attention to periodic extensions of compactly supported bilinear multipliers.
We will prove the following result.
\medskip
\begin{theorem}\label{compact support}
Let $\psi \in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$ be such that $supp (\psi)
\subseteq I \times I$, where $I=[-1/2 ,1/2]$. Consider $\psi^\sharp$
the periodic extension of $\psi$ given by
$$\psi^\sharp (\xi,\eta) = \sum\limits_n \sum\limits_m \psi(\xi-n,\eta-m)$$
Then $\psi^\sharp \in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$. Moreover, $||\psi^\sharp|| \leq c ||\psi||$,
where $c$ is a constant independent of $\psi$.
\end{theorem}
As a consequence of Theorem (\ref{periodic symbols}), it suffices to prove the following.
\begin{proposition}\label{1}
Let $\psi \in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$ be such that $supp(\psi) \subseteq I \times I$,
where $I=[-1/2 ,1/2]$. Then $\psi^\sharp \in M_{{p_1},{p_2}}^{p_3}({\mathbb Z})$.
Moreover, $||\psi^\sharp|| \leq c ||\psi||$,
where $c$ is a constant independent of $\psi$.
\end{proposition}
We will need the following two lemmas.
\begin{lemma}
Let $\psi \in M_{{p_1},{p_2}}^{p_3}({\mathbb R})$ be such that $supp(\psi) \subseteq I \times I$.
Then for $f,g \in \mathcal{S}({\mathbb R})$, $supp \ \widehat{{\mathcal {C}}_\psi(f,g)} \subset [-2,2]$.
\end{lemma}
\noindent
{\bf Proof:} Let $h \in \mathcal{S}({\mathbb R})$ be such that $supp \ \hat{h}\subset[-2,2]^c$.
Then
\begin{eqnarray*}
\left<\widehat{{\mathcal {C}}_\psi(f,g)},h \right> &=& \left<{\mathcal {C}}_\psi(f,g),\hat{h} \right>\\
&=& \int_{{\mathbb R}}\int_{{\mathbb R}}\int_{{\mathbb R}} \hat{f}(\xi) \hat{g}(\eta)\psi(\xi,\eta)e^{2 \pi i x(\xi+\eta)} d\xi
d\eta \hat{h}(x)dx \\
&=& \int_I\int_I \hat{f}(\xi) \hat{g}(\eta)\psi(\xi,\eta)\int_{{\mathbb R}}\hat{h}(x)e^{2 \pi i x(\xi+\eta)}dx d\xi d\eta \\
&=& \int_I\int_I \hat{f}(\xi) \hat{g}(\eta)\psi(\xi,\eta) h(\xi+\eta)d\xi d\eta = 0
\end{eqnarray*}
Thus
$$\left<\widehat{{\mathcal {C}}_\psi(f,g)},h \right> = 0$$
This proves the lemma.\qed
\medskip
For the proof of Proposition (\ref{1}) we will use the following result.
\begin{lemma}\label{AC}\cite{boas}
Let $0 < p \leq \infty$ and $g$ be a slowly increasing $C^{\infty} $ function such that
$supp(\hat{g})\subset [-R,R]$, then there exists a constant
$C> 0$, depending on $p$ such that
$$\sum\limits_n |g(n)|^p \leq C^p \max(1,R)\int_{{\mathbb R}} |g(x)|^p dx $$
\end{lemma}
This is a well known sampling lemma.
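For instance (an illustrative special case), for $p=2$ and $R\leq 1$ it reduces to a Plancherel--P\'olya type bound for functions band-limited to $[-1,1]$:
$$\sum\limits_n |g(n)|^2 \leq C^2 \int_{{\mathbb R}} |g(x)|^2 dx\;.$$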
\medskip
Now we prove Proposition (\ref{1}).
\medskip
\noindent
{\bf Proof:} Let $\Phi \in \mathcal{S}({\mathbb R})$.
Let $a = \{a_k\} $ and $b = \{b_l\} $ be two sequences with finitely many non-zero terms.
We define $f_a(x):= \sum\limits_k a_k \Phi(x-k) $ and $g_b(x):= \sum\limits_l b_l \Phi(x-l) $. It is easy to see that
$$||f_a||_{L^{p_1}({\mathbb R})} \leq c ||a||_{l^{p_1}({\mathbb Z})}$$
$$||g_b||_{L^{p_2}({\mathbb R})} \leq c ||b||_{l^{p_2}({\mathbb Z})}$$
where $c = \sup\limits_{x}\sum\limits_l |\Phi(x-l)|$, which is finite since $\Phi\in \mathcal{S}({\mathbb R})$.
Also, $\hat{f_a}(\xi) = \hat{\Phi}(\xi) \hat{a}(\xi) $ and, similarly, $\hat{g_b}(\eta)= \hat{\Phi}(\eta) \hat{b}(\eta)$.
We write the operator
\begin{eqnarray*}
{\mathcal {C}}_\psi(f_a,g_b)(x)&=&\int_{{\mathbb R}}\int_{{\mathbb R}} \hat{f_a}(\xi) \hat{g_b}(\eta)\psi(\xi,\eta)e^{2 \pi i x(\xi+\eta)}
d\xi d\eta \\
&=&\int_{I}\int_{I} \hat{\Phi}(\xi) \hat{a}(\xi) \hat{\Phi}(\eta) \hat{b}(\eta)\psi(\xi,\eta)
e^{2 \pi i x(\xi+\eta)} d\xi d\eta
\end{eqnarray*}
Choose $\Phi$ such that $\hat{\Phi}\equiv 1$ on $I$. Then at the integer points we get
$${\mathcal {C}}_\psi(f_a,g_b)(n) = \int_{I}\int_{I} \hat{a}(\xi) \hat{b}(\eta)\psi(\xi,\eta)e^{2 \pi i n(\xi+\eta)}
d\xi d\eta ={\mathcal {D}}_\psi(a,b)(n) $$
Using Lemma (\ref{AC}) we get,
\begin{eqnarray*}
\sum\limits_n |{\mathcal {D}}_\psi(a,b)(n)|^{p_3} = \sum\limits_n |{\mathcal {C}}_\psi(f_a,g_b)(n)|^{p_3}& \leq& C_{p_3}^{p_3} 2^{p_3} \int_{{\mathbb R}}|{\mathcal {C}}_\psi(f_a,g_b)(x)|^{p_3} dx\\
&\leq & C_{p_3}^{p_3} 2^{p_3} ||{\mathcal {C}}_\psi||^{p_3} ||f_a||_{p_1}^{p_3} ||g_b||_{p_2}^{p_3}\\
\end{eqnarray*}
i.e. $$||{\mathcal {D}}_\psi(a,b)||_{p_3} \leq C'_{p_3}||{\mathcal {C}}_\psi|| \ ||a||_{p_1} ||b||_{p_2} $$
\qed
\section {\bf Jodeit type extension theorems}\label{jext}
In this section we will explore some extensions of bilinear multipliers
on ${\mathbb T}$ to bilinear multipliers on ${\mathbb R}$. Essentially our results
are analogues of Jodeit type of extensions in the linear case. Our
proofs are refinements of Jodeit's original proofs. For the sake of
completeness we include the proofs here. We will need the following
lemmas which may be of independent interest.
\medskip
In what follows $J$ will denote the interval $[-1/2, 1/2)$.
\begin{lemma}\label{p3} Let $\phi \in M_{p_1,p_2}^{p_3}({\mathbb T})$.
\begin{enumerate}
\item[(i)] If $p_3\geq1$ and $a \in l^1({\mathbb Z}^2)$ then
$a*\phi \in M_{p_1,p_2}^{p_3}({\mathbb T})$ and $||a*\phi|| \leq ||a||_1
||\phi||$.
\item[(ii)]If $p_3<1$ and $a \in l^{p_3}({\mathbb Z}^2)$ then
$a*\phi \in M_{p_1,p_2}^{p_3}({\mathbb T})$ and $||a*\phi|| \leq ||a||_{p_3}
||\phi||$.
\end{enumerate}
\end{lemma}
\noindent
{\bf Proof:} For $p_3 \geq 1$, this is an immediate consequence of Minkowski's inequality.
Assume $p_3< 1$ and let $T'$ be the operator corresponding to $a*\phi$.
For $f\in L^{p_1}({\mathbb T})$ and $g\in L^{p_2}({\mathbb T})$,
$$||T'(f,g)||_{p_3}^{p_3} = \int_J |\sum\limits_{n,m} a* \phi(n,m) \hat{f}(n) \hat{g}(m) \ e^{2 \pi i x (n+m)}|^{p_3} dx$$
$$= \int_{J} |\sum\limits_{l,k}\sum\limits_{n,m} a(l,k)\phi(n-l,m-k) \hat{f}(n) \hat{g}(m) \ e^{2 \pi i x (n+m)}|^{p_3} dx$$
$$\leq \int_{J}\sum\limits_{l,k}|a(l,k)|^{p_3}\ \ |\sum\limits_{n,m}\phi(n-l,m-k) \hat{f}(n) \hat{g}(m) \ e^{2 \pi i x (n+m)}|^{p_3} dx$$
$$\leq \|T\|^{p_3} \|a\|_{p_3}^{p_3} \|f\|_{p_1}^{p_3} \|g\|_{p_2}^{p_3}$$
The first inequality follows from $|\sum\limits_i \alpha_i|^p \leq \sum\limits_i|\alpha_i|^p$, $ 0<p<1$
and the second from the assumption that the operator $T$ is bounded. Hence we obtain
$$\|T'(f,g)\|_{p_3} \leq \|T\| \|a\|_{p_3} \|f\|_{p_1} \|g\|_{p_2} $$\qed
\begin{lemma}\label{dilation} Let $\phi \in M_{p_1,p_2}^{p_3}({\mathbb T})$. For a positive integer $k$, we define $\phi_k$ as follows:
$\phi_k(n,m):= \phi (n/k,m/k)$ if $k$ divides both $n,m $, and $\phi_k(n,m):=0$
otherwise. Then $\phi_k\in M_{p_1,p_2}^{p_3}({\mathbb T})$ with norm not exceeding that of $\phi$.
\end{lemma}\
\noindent {\bf Proof:} For $f\in L^\infty({\mathbb T})$, let $F(x)= \frac{1}{k}\sum\limits_{j=0}^{k-1}
f(\frac{x+j}{k})$.
\begin{eqnarray*}
\hat{F}(n)&=& \int_J \frac{1}{k}\sum\limits_{j=0}^{k-1} f(\frac{x+j}{k})e^{-2 \pi i xn}dx\\
&=& \frac{1}{k}\sum\limits_{j=0}^{k-1}\int_J f(\frac{x+j}{k})e^{-2 \pi i xn}dx\\
&=& \int_J f(x)e^{-2 \pi i knx}dx\\
&=& \hat{f}(kn) \end{eqnarray*}
Also note that
$\|F\|_1 \leq \|f\|_1$ and $\|F\|_{\infty} \leq \|f\|_{\infty}$.
Hence $$\|F\|_p \leq \|f\|_p, \ \ 1 \leq p \leq \infty$$
Similarly for $g\in L^\infty({\mathbb T})$, we define $G$. Let $T_k$
be the operator corresponding to $\phi_k$. Then
\begin{eqnarray*}
T_k(f,g)(x) &=& \sum\limits_n \sum\limits_m \hat{f}(n)\hat{g}(m)\phi_k(n,m)
e^{2\pi i
x(n+m)}\\
&=& \sum\limits_n \sum\limits_m \hat{f}(kn)\hat{g}(km)\phi(n,m) e^{2\pi i
x(kn+km)}\\
&=& \sum\limits_n \sum\limits_m \hat{F}(n)\hat{G}(m)\phi(n,m) e^{2\pi i
x(kn+km)}\\
&=& T(F,G)(kx)
\end{eqnarray*}
Hence \begin{eqnarray*}
\|T_k(f,g)\|_{p_3}^{p_3} &=& \int_J |T(F,G)(kx)|^{p_3}dx\\
&=& \frac{1}{k} \int_{-k/2}^{k/2} |T(F,G)(x)|^{p_3}dx\\
&=& \int_J |T(F,G)(x)|^{p_3}dx\\
&\leq& \|T\|^{p_3} \|F\|_{p_1}^{p_3} \|G\|_{p_2}^{p_3}\\
&\leq& \|T\|^{p_3} \|f\|_{p_1}^{p_3} \|g\|_{p_2}^{p_3}
\end{eqnarray*}
Thus we obtain that $\phi_k$ is in $M_{p_1,p_2}^{p_3}({\mathbb T})$.\qed
\medskip
Our first extension result is the following theorem.
\begin{theorem}\label{extension1} Let $\phi$ be in $M_{p_1,p_2}^{p_3}({\mathbb T})$ and $S$ be a function supported in $\frac{1}{2} J \times \frac{1}{2} J $ such that its periodic extension $S^{\sharp}$ from $J \times J$ satisfies $\sum\limits_{n,m}|\hat{S^{\sharp}}(n,m)|^p< \infty $, where $p = \min (1, p_3)$.\\
Then $\psi(\xi,\eta):= \sum\limits_{n,m} \phi (n,m) \hat{S}(\xi-n,\eta-m)\in M_{p_1,p_2}^{p_3}({\mathbb R})$. Moreover, $\|\psi\| \leq c \|\phi\|$, where $c = 2^{1/p}\sum\limits_{n,m} |\hat{S^{\sharp}}(n,m)|^p$.
\end{theorem}
\noindent {\bf Proof:} It is enough to prove that
$\psi_{r}(\xi,\eta) = \sum\limits_{l,k} \phi (l,k)\ r^{|l|+|k|}
\hat{S}(\xi-l,\eta-k)$ belongs to $M_{p_1,p_2}^{p_3}({\mathbb R})$ for
$0<r<1$ with $\|C_{\psi_r}\|\leq c\|\phi\|.$ Let $K_{\phi_r}$ be
the kernel corresponding to the bilinear multiplier
$\phi(n,m)r^{|n|+|m|}$. Clearly, $\widehat{K_{\phi_r}S}=\psi_r$, considered as a function on ${\mathbb R}^2$. From Lemma (\ref{p3})
$\widehat{K_{\phi_r}S^{\sharp}}=\hat K_{\phi_r}* \hat{S^{\sharp}}$ belongs to
$M_{p_1,p_2}^{p_3}({\mathbb T}).$
Let $f\in{\mathcal S}({\mathbb R})$ and for each $n\in{\mathbb Z}$, let $f_n$ denote the 1-periodic extension of the function
$f(x+n/2)\chi_J (x)$ from $J$.
Then it can be easily verified that
\begin{eqnarray*}\sum\limits_n\|f_n\|_{L^p({\mathbb T})}^p &\leq& 2\|f\|_p^p
\end{eqnarray*}
Now for $x\in\frac{1}{2}J$ we have,
\begin{eqnarray*}
C_{\psi_r}(f,g)(x+\frac{n}{2})&=&\int_{\frac{1}{2}J}\int_{\frac{1}{2}J}f(x-t+\frac{n}{2})g(x-s+\frac{n}{2})(K_{\phi_r}S)(t,s)
dt ds\\
&=& \int_{J}\int_{J}f_n(x-t)g_n(x-s)(K_{\phi_r}S^{\sharp})(t,s) dt ds
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\|C_{\psi_r}(f,g)\|_{p_3}^{p_3} & \leq & \sum\limits_n
(c\|\phi\|)^{p_3}\|f_n\|_{p_1}^{p_3}\|g_n\|_{p_2}^{p_3}\\
&\leq & 2(c\|\phi\|)^{p_3}\|f\|_{p_1}^{p_3}\|g\|_{p_2}^{p_3}
\end{eqnarray*}
The last inequality follows as an application of H\"{o}lder's
inequality.
\qed
\medskip
Our next result is the piecewise linear extension of $\phi\in M_{p_1,p_2}^{p_3}({\mathbb T})$ to $\psi\in M_{p_1,p_2}^{p_3}({\mathbb R})$.
\begin{theorem}\label{thm} Let $\phi \in M_{p_1,p_2}^{p_3}({\mathbb T})$ and $p_3>\frac{1}{2}$. Let
$\Lambda(x_1,x_2) = (1-|x_1|)(1-|x_2|)$ in $[-1,1)\times [-1,1)$
and $0$ otherwise. Then the function $\psi$ defined as
$\psi(\xi,\eta):= \sum\limits_{n,m} \phi (n,m) \Lambda(\xi-n,\eta-m) \in
M_{p_1,p_2}^{p_3}({\mathbb R})$ and $||\psi|| \leq c ||\phi|| $.
\end{theorem}
\noindent {\bf Proof:} Consider $S(x,y)=\frac {\sin^2 4\pi x}{(4\pi
x)^2}\frac {\sin^2 4\pi y}{(4\pi y)^2}$. Let $S_{k,l}(x,y)$ be the 1-periodic extension of $\chi_{\frac{k}{2}+{\frac{1}{2}J}}(x)\chi_{\frac{l}{2}+{\frac{1}{2}J}}(y) S(x,y)$ from $(\frac{k}{2}+J)\times (\frac{l}{2}+J)$.
An easy computation using integration by parts gives that for $n,m \neq 0$ $$|\hat
S_{k,l}(n,m)|\leq \frac{C}{(1+k^2)(1+n^2)(1+l^2)(1+m^2)}.$$ Hence,
by Theorem \ref{extension1} we have $\sum\limits_{n,m} \phi (n,m) \hat
S_{k,l}(\xi-n,\eta-m)$ belongs to $M_{p_1,p_2}^{p_3}({\mathbb R})$ with
bounds not exceeding $\frac{C}{(1+k^2)(1+l^2)}\|\phi\|$.\\
Thus $\sum\limits_{n,m} \phi (n,m) \hat
S(\xi-n,\eta-m)= \sum\limits_{n,m} \phi (n,m)
\Lambda(\frac{\xi-n}{4},\frac{\eta-m}{4})$ is in
$M_{p_1,p_2}^{p_3}({\mathbb R})$. By applying Lemma \ref{dilation} we get
$\psi\in M_{p_1,p_2}^{p_3}({\mathbb R})$ with the required bound.\qed
\medskip
As a consequence of this we get the desired piecewise constant extension result i.e.,
\begin{theorem}\label{piecewise}Let $\phi$ be in $M_{p_1,p_2}^{p_3}({\mathbb T})$, where $p_1, p_2 > 1$.
Then $\psi(\xi,\eta):= \sum\limits_{n,m} \phi (n,m) \chi_{J \times
J}(\xi-n,\eta-m) \in M_{p_1,p_2}^{p_3}({\mathbb R})$.
\end{theorem}
\noindent {\bf Proof:} Define $\phi_2(n,m):=\phi(n/2, m/2)$
if $n,m$ both are even and $\phi_2(n,m):=0$ otherwise. By Lemma (\ref{dilation}),
$\phi_2(n,m) \in M_{p_1,p_2}^{p_3}({\mathbb T})$ and $||\phi_2|| \leq
||\phi||$. Consider $\theta_2(n,m) = \phi_2(n,m) + \phi_2(n-1,m)+\phi_2(n,m-1) + \phi_2(n-1,m-1)$.
Clearly, $\theta_2 \in M_{p_1,p_2}^{p_3}({\mathbb T})$ and $||\theta_2|| \leq
4 ||\phi||$.
Let $\Lambda(\xi, \eta) = (1-|\xi|)(1-|\eta|)$ in $[-1,1)\times
[-1,1)$, and $0$ elsewhere. Then by Theorem \ref{thm},
$\Theta_2(\xi,\eta)= \sum\limits_{n,m} \theta_2(n,m)\Lambda(\xi-n,\eta-m)$
is in $ M_{p_1,p_2}^{p_3}({\mathbb R})$. Note that $\Theta_2(\xi,\eta)=
\phi(n,m)$ if $(\xi, \eta)\in [2n,2n+1]\times [2m, 2m+1].$ Let
$\tilde{\chi}$ be the 2-periodic extension of a function which is $1$ for
$0<x<1$ and $0$ for $-1<x<0$. We know that $ \tilde{\chi} \in M_p({\mathbb R})$ for
$p>1$. Hence $\Psi_2(\xi, \eta) = \tilde{\chi}(\xi)\tilde{\chi} (\eta) \Theta_2(\xi,
\eta) \in M_{p_1,p_2}^{p_3}({\mathbb R})$. Now $\Psi_2(\xi, \eta) =
\phi(n,m)$ for $2n<\xi<2n+1$, $2m<\eta<2m+1$ and $\Psi_2(\xi,
\eta)=0$ otherwise. Since $\psi(\xi, \eta)= \Psi_2(2\xi, 2\eta)+\Psi_2(2\xi+1,
2\eta)+\Psi_2(2\xi, 2\eta+1)+\Psi_2(2\xi+1,2\eta+1)$, the result
follows. \qed
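\noindent ({\it Remark.} To verify the identity used in the last step, note that $\Psi_2(2\xi,2\eta)=\phi(n,m)$ precisely on the quarter cell $n<\xi<n+\frac{1}{2}$, $m<\eta<m+\frac{1}{2}$, while the three shifted terms reproduce $\phi(n,m)$ on the remaining three quarters of $(n-\frac{1}{2},n+\frac{1}{2})\times(m-\frac{1}{2},m+\frac{1}{2})$, so that the four terms together give $\psi$.)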
\medskip
{\bf Remark}: The above theorem does not hold if either of $p_1,
p_2$ is $1$. This is very easy to verify. Without loss of generality, we can assume that $p_1 = 1$.
Let $\tilde{\phi} \in M_1({\mathbb T})$ and let $T$ be the operator on ${\mathbb R}$ corresponding to the piecewise constant extension of $\tilde{\phi}$. Put $\phi (n,m)= \tilde{\phi} (n)$. Then by H\"{o}lder's
inequality $\phi\in M_{1,p_2}^{p_3}({\mathbb T})$. Suppose the piecewise constant extension $\psi(\xi,\eta)= \sum\limits_{n,m} \phi (n,m) \chi_{J \times J}(\xi-n,\eta-m)= \sum\limits_n \tilde{\phi} (n) \chi_{J}(\xi-n)$ is in $M_{1,p_2}^{p_3}({\mathbb R})$. Then for $f \in L^1({\mathbb R})$ and $g\in L^{p_2}({\mathbb R})$, we have $\|C_\psi(f,g)\|_{p_3}= \|T(f)\, g \|_{p_3}\leq c \|f\|_1 \|g\|_{p_2}$. Now notice that $\frac{1}{p_3}$ and $\frac{p_2}{p_3}$ are conjugate indices and $\|f\|_1^{p_3} = \| |f|^{p_3} \|_{\frac{1}{p_3}}$. Hence by using duality we get $\|Tf\|_{1} \leq c \|f\|_{1}$, i.e., the piecewise constant extension of $\tilde{\phi}$ would be a multiplier on $L^1({\mathbb R})$. But an $L^1({\mathbb R})$ multiplier is the Fourier transform of a finite measure, hence continuous, and this fails for any nonconstant $\tilde{\phi}$.
\section{Introduction}
Various supergravity theories are known to arise as low energy effective
field theory limits of an underlying superstring or super $p$-brane
theory. For example, all supergravity theories in $D=10$ and $D=11$
are associated with certain superstring or super $p$-brane theories.
Supergravity theories can also serve as worldvolume field theories for
a suitable super $p$-brane theory, the most celebrated example of this
being the spinning string theory.
Of course, not all supergravity theories have been associated so far
with superstrings or super $p$-branes. An outstanding example is the
self-dual supergravity in $2+2$ dimensions \cite{bs,kgn,ws1}. There are a
number of reasons why this is a rather important example. For one thing,
the dimensional reduction to $1+1$ dimensions can give rise to a large
class of integrable models. Secondly, it can teach us a great deal about
quantum gravity. Furthermore, and perhaps more interestingly, a suitable
version of self-dual supergravity in $2+2$ dimensions may in principle
serve as the worldvolume theory of an extended object propagating in $10+2$
dimensions, as has been suggested recently by Vafa \cite{vafa}. Further
tantalizing hints at the relevance of a worldvolume theory in $2+2$ dimensions
have been put forward recently \cite{km}.
Since the well known $N=2$ string theory has the critical dimension of
four, it is natural to examine this theory, or its variants, in search of
a stringy description of self-dual supergravity. It turns out that this
theory actually describes self-dual gravity in 2+2 dimensions, as was
shown by Ooguri and Vafa \cite{ov} sometime ago. Interestingly enough,
and contrary to what one would naively expect, the fermionic partner of
the graviton does not arise in the spectrum, and therefore self-dual
supergravity does not emerge \cite{ov}. This intriguing result led us to
look for a variant of the $N=2$ string theory where spacetime
supersymmetry is kept manifest from the outset, thereby providing a
natural framework for finding a stringy description of self-dual
supergravity. We have constructed two such variants \cite{us1,us2}, in
which (a) we use the basic variables of the $2+2$ superspace, and (b) we
consider constraints that are quadratic in these variables.
Surprisingly enough, we find that neither one of the two models describes
the self-dual supergravity, suggesting that we probably need to
introduce extra world-sheet variables and/or consider higher order
constraints. Nonetheless, we believe that our results may be of interest
in their own right, and with that in mind, we shall briefly describe
them in this note.
Both of the models mentioned above can be constructed by making use of
bilinear combinations of the bosonic coordinates $X^{\alpha\dot\alpha}$, fermionic
coordinates $\theta^\alpha$, and their conjugate momenta $p_\alpha$, to build the
currents of the underlying worldsheet algebras. The indices $\alpha$ and
$\dot\alpha$ label the two dimensional spinor representations of $SL(2)_R\times
SL(2)_L \approx SO(2,2)$. In terms of these variables, it is useful to
recall the currents of the small $N=4$ superconformal algebra, namely
\begin{eqnarray}
T&=& -\ft12 \partial
X^{\alpha\dot\alpha}\partial X_{\alpha\dot\alpha}
-p_\alpha \partial \theta^\alpha \ , \nonumber\\
G^{\dot \alpha} &=& p_{\alpha}\partial X^{\alpha\dot \alpha}\ , \quad
{\widetilde G}^{\dot \alpha} = \theta_{\alpha}\partial X^{\alpha\dot \alpha}
\ , \label{n4alg}\\
J_0 &=& p_{\alpha}\theta^{\alpha}\ , \quad J_+= p_\alpha p^\alpha\ ,
\quad J_-=\theta_\alpha\theta^\alpha\ . \nonumber
\end{eqnarray}
\noindent This is the twisted version of the usual realization, since here the
$(p,\theta)$ system has dimension $(1,0)$. An $N=2$ truncation of this
algebra is given by \cite{us1}
\begin{eqnarray}
T&=& -\ft12 \partial X^{\alpha\dot\alpha}\partial X_{\alpha\dot\alpha}
-p_\alpha \partial \theta^\alpha \ , \qquad
G^{\dot \alpha} = p_{\alpha}\partial X^{\alpha\dot\alpha} \ ,
\qquad J = p_\alpha p^\alpha \ . \label{tn4alg}
\end{eqnarray}
\noindent Naively, this system appears to be non-critical. However,
the currents are reducible, and a proper quantization requires the
identification of the irreducible subsets. Assuming that
\begin{enumerate}
\item[(a)]
the worldsheet field content is $(p_\alpha, \theta^\alpha, X^{\alpha\dot\alpha})$,
\item[(b)]
the constraints are quadratic in worldsheet fields,
\item[(c)]
the constraints are irreducible,
\end{enumerate}
\noindent we have found that there exist three possible $N=2$ string
theories. One of them is the old model shown by Ooguri and Vafa
\cite{ov} to describe pure self-dual gravity. We will refer to this model
as the ``$n=0$ model''. The other two models were studied in refs.
\cite{us1,us2}. One of them, which we will refer to as the ``$n=1$
model'', has {\it spacetime} $N=1$ supersymmetry, and the other one,
which we will refer to as the ``new $n=0$ model'', has no spacetime
supersymmetry. In what follows, we shall give a very brief description
of these models.
\section{The $N=2$ String Models}
\subsection{The $n=0$ Model}
This is the usual $N=2$ string which has worldsheet $N=2$
supersymmetry, but lacks spacetime supersymmetry. The underlying $N=2$
superconformal algebra, in the twisted basis described above, is given by
\begin{eqnarray}
T&=& -\ft12 \partial X^{\alpha\dot\alpha}\partial X_{\alpha\dot\alpha}
-p_\alpha \partial \theta^\alpha \ , \qquad J=p_{\alpha}\theta^{\alpha}\ ,
\nonumber\\
G^{\dot 1} &=& \theta_{\alpha}\partial X^{\alpha\dot 1}\ , \qquad
G^{\dot 2} = p_{\alpha}\partial X^{\alpha\dot 2} \ . \label{n2alg}
\end{eqnarray}
The striking feature of this model is that the only continuous degree of
freedom it describes is that of the self-dual graviton \cite{ov}.
This model has been studied extensively in the literature. See, for
example, refs.~\cite{lp,l}, where a BRST analysis of the spectrum is
given, and various twists and GSO projections leading to massless
bosonic and fermionic vertex operators are considered. We now turn our
attention to the remaining two models, which we have constructed in
\cite{us1,us2}.
\subsection{The $n=1$ Model}
This model can covariantly be described by the set of currents given in
\eqn{tn4alg}
%
\footnote{ In \cite{ws2}, Siegel proposed to build a string theory
implementing the set of constraints given by $\Big\{\partial
X^{\alpha\dot\alpha}\partial X_{\alpha\dot\alpha},\,
p_\alpha\,\partial\theta^\alpha, \,p_\alpha p^\alpha,\, \partial\theta_\alpha\,
\partial\theta^\alpha,\, p_\alpha\,\partial X^{\alpha\dot\alpha},\,
\partial\theta_\alpha\, \partial X^{\alpha\dot\alpha}\Big\}$. However, we have
checked that the algebra of these constraints does not close \cite{us1}.
Actually, this non-closure occurs even at the classical level of Poisson
brackets, or single OPE contractions \cite{us1}.}.
Notice that all currents have spin two, and that the system
is critical. Nonetheless, this set of constraints is reducible. All the
relations among the constraints can be described in a concise form by
introducing a pair of spin-0 fermionic coordinates $\zeta^{\dot\alpha}$ on
the worldsheet. We can then define
\begin{equation}
{\cal P}^\alpha= p^\alpha +\zeta_{\dot\alpha}\, \partial X^{\alpha\dot\alpha} + \zeta_{\dot\alpha}\,
\zeta^{\dot\alpha}\, \partial\theta^\alpha\ ,\label{psuper}
\end{equation}
\noindent in terms of which the currents may be written as ${\cal T}={\cal P}_\alpha\,
{\cal P}^\alpha$, where
\begin{equation}
{\cal T}= J + \zeta_{\dot\alpha}\, G^{\dot\alpha} +
\zeta_{\dot\alpha}\, \zeta^{\dot\alpha}\, T\ .\label{tsuper}
\end{equation}
\noindent The reducibility relations among the constraints can now be written
in the concise form \cite{us1}
\be
{\cal P}_\alpha {\cal T}=0\ . \label{reduceq}
\ee
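\noindent At the classical level these relations are easy to see (our paraphrase; at the quantum level normal ordering has to be taken into account): the functions ${\cal P}^\alpha$ are anticommuting, so that ${\cal T}={\cal P}_\beta\,{\cal P}^\beta\propto {\cal P}_1\,{\cal P}_2$, and hence
\be
{\cal P}_\alpha\,{\cal T}\;\propto\;{\cal P}_\alpha\,{\cal P}_1\,{\cal P}_2=0\ ,
\ee
since $({\cal P}_1)^2=({\cal P}_2)^2=0$.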
\noindent In fact, the system has
infinite order reducibility. This can be easily seen from the form
${\cal P}_\alpha\, {\cal T}=0$ for the reducibility relations, owing to the
fact that the functions ${\cal P}_\alpha$ are themselves reducible, since
${\cal P}_\alpha\, {\cal P}^\alpha$ gives back the constraints ${\cal T}$. This
infinite order of reducibility implies that a proper BRST treatment
requires an infinite number of ghosts for ghosts
\footnote{Note that this situation is very similar to the case of
systems with $\kappa$-symmetry.}.
The construction of the covariant BRST operator is a rather cumbersome
problem. Some progress on this problem was made in \cite{us1}, thanks
to the fact that the covariant system is critical.
To have an insight into the physical spectrum of the theory, and its
basic interactions, it is sufficient to consider the independent subset
of constraints, at the expense of sacrificing manifest target space
supersymmetry. For example, we can choose the following set of
independent constraints \cite{us1}
\begin{equation}
T = -\ft12 \partial X^{\alpha\dot\alpha}\, \partial X_{\alpha\dot\alpha} -
p_\alpha\, \partial\theta^\alpha\ , \qquad G^{\dot1} = - p_\alpha\, \partial
X^{\alpha\dot1}\ ,\label{n1matcur}
\end{equation}
\noindent which in fact generate a subalgebra of the twisted $N=2$
superconformal algebra. Using \eqn{reduceq}, we can write the remaining
constraints, {\it i.e.~}the dependent ones, as linear functions of the
independent constraints
\footnote{Although the massless states can be shown to be annihilated by
the dependent constraints as well, it turns out that there are massive
operators with standard ghost structure which do not seem to be
annihilated by them \cite{us1}. Establishing the equivalence of the
massive spectra of the reducible and the irreducible systems
would require the analysis of the full cohomology and interactions,
including the physical states with non-standard ghost structure.}.
The BRST operator for the reducible system $(T, G^{\dot 1})$ can be easily
constructed. We introduce the anticommuting ghosts $(b,c)$
and the commuting ghosts $(r,s)$ for $T$ and $G^{\dot1}$ respectively.
The commuting ghosts $(r, s)$ are bosonized, {\it i.e.} $r=\partial\xi\,
e^{-\phi}$, $s=\eta\, e^{\phi}$. In terms of these fields, the BRST
operator $Q$ is given by \cite{us1}
\begin{eqnarray}
Q&=& c \Big(-\ft12 \partial
X^{\alpha\dot\alpha}\, \partial X_{\alpha\dot\alpha} - p_\alpha\, \partial\theta^\alpha - b\, \partial c
-\ft12 (\partial\phi)^2 -\ft32 \partial^2\phi - \eta\, \partial \xi\Big) \nonumber\\
&+& \eta\, e^{\phi}\, p_\alpha\,\partial X^{\alpha\dot1}\ .\label{n1brst}
\end{eqnarray}
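As a quick consistency check (standard central charge bookkeeping, included here as an aside), nilpotency of $Q$ requires the total conformal anomaly to cancel, and indeed
\be
c_{\rm tot}=\underbrace{4}_{X^{\alpha\dot\alpha}}\;\underbrace{-\,4}_{(p_\alpha,\theta^\alpha)}\;\underbrace{-\,26}_{(b,c)}\;\underbrace{+\,26}_{(r,s)}=0\ ,
\ee
using $c=1-3(2\lambda-1)^2$ for a fermionic first-order pair of weights $(\lambda,1-\lambda)$ and $c=-1+3(2\lambda-1)^2$ for a bosonic one, with $\lambda=1$ for $(p_\alpha,\theta^\alpha)$ and $\lambda=2$ for the ghost pairs.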
The theory has spacetime supersymmetry, generated by \cite{us1}
\begin{eqnarray}
q^\alpha &=& \oint p^\alpha\ ,\nonumber\\
q^{\dot1} &=&\oint \theta_\alpha\, \partial X^{\alpha\dot1}\ ,\qquad
q^{\dot2} =\oint \theta_\alpha\, \partial X^{\alpha\dot2} + b\,\eta\,e^{\phi}\ .
\label{n1susygen}
\end{eqnarray}
\noindent The somewhat unusual ghost terms in $q^{\dot2}$ are necessary for the
generator to anti-commute with the BRST operator. It is straightforward to
verify that these supercharges generate the usual $N=1$ spacetime
superalgebra
\begin{equation}
\{q_\alpha,q_\beta\}=0= \{q^{\dot\alpha},q^{\dot\beta} \},\qquad
\{q^\alpha,q^{\dot\alpha} \}= P^{\alpha\dot\alpha} \ ,\label{n1susyalg}
\end{equation}
\noindent where $P^{\alpha\dot\alpha}=\oint \partial X^{\alpha\dot\alpha}$ is the
spacetime translation operator.
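\noindent For instance, the mixed bracket follows from a single contraction, using the basic free-field OPE $p_\alpha(z)\,\theta^\beta(w)\sim \delta_\alpha^\beta/(z-w)$ (our normalization):
\be
\{q^\alpha,q^{\dot\alpha}\}=\oint \partial X^{\alpha\dot\alpha}=P^{\alpha\dot\alpha}\ ,
\ee
the ghost terms in $q^{\dot2}$ dropping out of this particular bracket since $q^\alpha=\oint p^\alpha$ has no contraction with $b\,\eta\,e^{\phi}$.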
Since the zero mode of $\xi$ is not included in the Hilbert space of
physical states, there exists a BRST non-trivial picture-changing operator
$Z=\{Q, \xi\}$ which can give new BRST non-trivial physical operators when
normal ordered with others. Explicitly, it takes the form \cite{us1}
\begin{equation}
Z=c\,\partial \xi + p_\alpha\, \partial X^{\alpha\dot1} e^{\phi}\ . \label{n1pic}
\end{equation}
\noindent Unlike the picture-changing operator in the usual $N=1$ NSR
superstring, this operator has no inverse.
Let us now consider the physical spectrum with standard ghost structure.
There are two massless operators \cite{us1}
\begin{equation}
V=c\, e^{-\phi}\, e^{ip\cdot X}\ ,\qquad
\Psi = h_\alpha\, c\, e^{-\phi}\, \theta^\alpha\, e^{ip\cdot X}\ ,\label{n1mass0}
\end{equation}
\noindent which are physical provided the mass-shell condition $p^{\alpha\dot\alpha}\,
p_{\alpha\dot\alpha} = 0$ and the spinor polarisation condition $p^{\alpha\dot1}\, h_\alpha =0$ are satisfied.
The non-triviality of these operators can be established by the fact that
the conjugates of these operators with respect to the following
non-vanishing inner product
\begin{equation}
\Big\langle \partial^2c\,\partial c\, c\, e^{-3\phi}\, \theta^2 \Big\rangle
\label{n1innpro}
\end{equation}
\noindent are also annihilated by the BRST operator. The bosonic operator $V$
and the fermionic operator $\Psi$ form a supermultiplet under the $N=1$
spacetime supersymmetric transformation. The associated spacetime
fields $\phi$ and $\psi_\alpha$ transform as
\begin{equation}
\delta\phi = \epsilon_\alpha\, \psi^\alpha\, \qquad
\delta\psi_\alpha = \epsilon^{\dot\alpha}\, \partial_{\alpha\dot\alpha}\, \phi\ .
\end{equation}
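As a quick check (our own, schematic), these transformations close onto spacetime translations on $\phi$:
\be
[\delta_1,\delta_2]\,\phi=\epsilon_{2\,\alpha}\,\epsilon_1^{\dot\alpha}\,\partial^{\alpha}{}_{\dot\alpha}\,\phi-(1\leftrightarrow2)\ ,
\ee
a translation with parameter bilinear in the two supersymmetry parameters, consistent with the algebra \eqn{n1susyalg}.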
We can build only one three-point amplitude among the massless
operators, namely \cite{us1}
\begin{equation}
\Big\langle V(z_1)\,\, \Psi(z_2)\,\, \Psi(z_3)\Big\rangle =b_{23}\ ,
\label{n13pf}
\end{equation}
\noindent where $b_{ij}$ is defined by
\begin{equation}
b_{ij} = h_{(i)\alpha}\,h^\alpha_{(j)}\ .\label{n13pf1}
\end{equation}
\noindent From this, we can deduce that the $V$ operator describes a spacetime
scalar whilst the $\Psi$ operator describes a spacetime chiral
spin-$\ft12$ fermion. Note that this is quite different from the case
of the $N=2$ string where there is only a massless boson and although it
is ostensibly a scalar, it is in fact, as emerges from the study of the
three-point amplitudes, a prepotential for self-dual Yang-Mills or
gravity.
With one insertion of the picture-changing operator, we
can build a four-point function which vanishes for kinematic reasons
\cite{us1}:
\begin{equation}
\Big\langle ZV\, \Psi\, \oint b\Psi\, \Psi \Big\rangle =
(u\, b_{12}\, b_{34} + s\, b_{13}\, b_{24}){\Gamma(-\ft12 s)
\, \Gamma(-\ft12 t)\over \Gamma(\ft12 u)}\ ,\label{4pfmass0}
\end{equation}
\noindent where $s$, $t$, and $u$ are the Mandelstam variables and $h_{(1)}^\alpha
= p^{\alpha\dot1}_{(1)}$. The vanishing of the kinematic term, {\it i.e.}
$u\, b_{12}\, b_{34} + s\, b_{13}\, b_{24}=0$, is a straightforward
consequence of the mass-shell condition of the \noindent operators and
momentum conservation of the four-point amplitude \cite{lp}. It might
seem that the vanishing of the this four-point amplitude should be
automatically implied by the statistics of the operators since there is
an odd number of fermions. However, as shown in \cite{us1}, the
picture-changing operator has spacetime fermionic statistics. In fact,
that the four-point amplitude \eqn{4pfmass0} vanishes only on-shell, for
kinematic reasons, already implies that the picture changer $Z$ is a
fermion. Thus the picture changing of a physical operator changes its
spacetime statistics and hence does not establish the equivalence
between the two. On the other hand, since $Z^2=(ZZ)$ becomes a spacetime
bosonic operator, we can use $Z^2$ to identify the physical states with
different pictures.
Thus, we have a total of four massless operators,
namely $V$, $ZV$ and their supersymmetric partners. $V$ and its
superpartner $\Psi$ have standard ghost structure; $ZV$ and its
superpartner $Z\Psi$ have non-standard ghost structures.
So far we have discussed the massless physical operators. There are also
infinitely many massive states. The tachyonic type massive operators,
{\it i.e.~} those that become pure exponentials after bosonizing the
fermionic fields, are relatively easy to obtain, and they have been
discussed at length in \cite{us1}. An example of such massive operators
is as follows
\be
V_n= c(\partial^np)^2\cdots p^2\, e^{n\phi}\, e^{ip\cdot X}\ ,\qquad
{\cal M}^2 =(n+1)(n+2)\ , \label{n1masstats}
\ee
\noindent where $p^2 = p_\alpha\, p^\alpha$. These operators correspond to physical
states, provided the mass-shell condition is satisfied. Furthermore, they
all have non-standard ghost structures. From these operators, we can
build non-vanishing four-point amplitudes, which implies the existence
of further massive operators in the physical spectrum.
In summary, we emphasize that the model has $n=1$ supersymmetry in the
critical $2+2$ dimensional spacetime. It describes two massless scalar
supermultiplets, in addition to an infinite tower of massive states.
Examining their interactions, however, we find that they do not
correspond to those of self-dual supergravity.
\vspace{0.8cm}
\subsection{The New $n=0$ Model}
This model is described by the following set of currents \cite{us2}
\begin{eqnarray}
T&=& -\ft12 \partial X^{\alpha\dot\alpha}\partial X_{\alpha\dot\alpha}
-p_\alpha \partial \theta^\alpha \ , \qquad J =\partial (\theta_\alpha
\theta^\alpha) \ , \nonumber\\
G^{\dot 1} &=& p_{\alpha}\partial X^{\alpha\dot 1} \ , \qquad
{\widetilde G}^{\dot 1}=\theta_\alpha \partial X^{\alpha\dot 1}\ .
\label{newalg}
\end{eqnarray}
\noindent It is easy to see that the currents $(T, G^{\dot 1}, {\widetilde
G}^{\dot 1}, J)$ have spins $(2,2,1,1)$. In addition to the standard
OPEs of $T$ with $(T,J,G^{\dot 1},{\widetilde G}^{\dot 1})$, the only
non-vanishing OPE is
\begin{equation}
J(z)\, G^{\dot 1} (w) \sim {2{\widetilde G}^{\dot 1}\over (z-w)^2}+
{\partial {\widetilde G}^{\dot 1}\over (z-w)} \ .
\end{equation}
\noindent This algebra is related to the small $N=4$ superconformal algebra,
not directly as a subalgebra, but in the following way. The subset of
currents $T, G^{\dot 1}, \widetilde G^{\dot 1}$ and $J_-$ in
(\ref{n4alg}) forms a critical closed algebra. However, these currents
form a reducible set. To achieve irreducibility, we simply differentiate
the current $J_-$, thereby obtaining the set of currents given in
(\ref{newalg}). Note that taking the derivative of $J_-$ still gives a
primary current with the same anomaly contribution, since $12s^2-12s+2$
takes the same value for $s=0$ and $s=1$.
To proceed with the BRST quantisation of the model, we introduce the
fermionic ghost fields $(c,b)$ and $(\gamma,\beta)$ for the currents $T$
and $J$, and the bosonic ghost fields $(s,r)$ and $({\tilde s},{\tilde
r})$ for $G^{\dot 1}$ and ${\widetilde G}^{\dot 1}$. It is necessary to
bosonize the commuting ghosts, by writing $s=\eta e^\phi$, $r=\partial \xi
e^{-\phi}$, $\tilde s=\tilde \eta e^{\tilde \phi}$ and $\tilde r=\partial
\tilde\xi e^{-\tilde\phi}$. The BRST operator for the model is then
given by \cite{us2}
\begin{eqnarray}
Q&=& \oint c\Big( -\ft12 \partial X_{\alpha\dot\alpha}\partial
X^{\alpha\dot\alpha} -p_\alpha \partial \theta^\alpha-\ft12 ( \partial \phi)^2
-\ft12 (\partial\tilde \phi)^2 -\ft32 \partial^2 \phi -\ft12 \partial^2 \tilde\phi
\nonumber \\ && -\eta\partial\xi -\tilde\eta \partial \tilde \xi -b\partial c
-\beta\partial\gamma\Big)
\label{brst} \\
&&+\eta e^\phi p_\alpha \partial
X^{\alpha \dot 1} + \tilde \eta e^{\tilde\phi} \theta_\alpha \partial
X^{\alpha \dot 1} + \partial \gamma \Big( \ft12 \theta^\alpha\theta_\alpha
-\partial\tilde\xi\eta e^{\phi-\tilde\phi}\Big)\ . \nonumber
\end{eqnarray}
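\noindent As in the previous model, one can check (the same standard central charge bookkeeping, our aside) that the total conformal anomaly cancels, as required for the nilpotency of $Q$:
\be
c_{\rm tot}=\underbrace{4-4}_{X,\,(p,\theta)}\;\underbrace{-\,26}_{(b,c)}\;\underbrace{+\,26}_{(s,r)}\;\underbrace{+\,2}_{(\tilde s,\tilde r)}\;\underbrace{-\,2}_{(\gamma,\beta)}=0\ ,
\ee
the spin-1 currents ${\widetilde G}^{\dot 1}$ and $J$ contributing ghost pairs of weights $(1,0)$ with central charges $+2$ and $-2$, respectively.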
\noindent Since the zero modes of $\xi$ and $\tilde\xi$ do not appear in the
BRST operator, there exist BRST non-trivial picture-changing operators
\cite{us2}:
\begin{eqnarray}
Z_\xi &=& \{ Q, \xi\}=c\partial\xi+e^\phi p_\alpha\partial X^{\alpha\dot 1}
-\partial \gamma \partial\tilde\xi e^{\phi-\tilde\phi}\ , \nonumber\\
Z_{\tilde\xi} &=& \{ Q, \tilde\xi\}=c\partial\tilde\xi + e^{\tilde\phi}
\theta_\alpha\partial X^{\alpha\dot 1}\ . \label{pic}
\end{eqnarray}
\noindent It turns out that these two picture changers are not invertible.
Thus, one has the option of including the zero modes of $\xi$ and
$\tilde\xi$ in the Hilbert space of physical states. This would not be
true for a case where the picture changers were invertible. Under these
circumstances, the inclusion of the zero modes would mean that all
physical states would become trivial, since $|{\rm phys}\rangle =Q(\xi
Z_\xi^{-1} |{\rm phys}\rangle)$. In \cite{us2}, we chose to exclude the
zero modes of $\xi$ and $\tilde\xi$ from the Hilbert space. It is
interesting to note that in this model the zero mode of the ghost field
$\gamma$ for the spin--1 current is also absent in the BRST operator. If
one excludes this zero mode from the Hilbert space, one can then
introduce the corresponding picture-changing operator $Z_\gamma =
\{Q,\gamma\}=c\partial \gamma$. In \cite{us2}, we indeed chose to exclude
the zero mode of $\gamma$.
In order to discuss the cohomology of the BRST operator \eqn{brst}, it
is convenient first to define an inner product in the Hilbert space.
Since the zero modes of the $\xi,\tilde\xi$ and $\gamma$ are excluded,
the inner product is given by
\be
\langle \partial^2 c\,\partial c\, c\, \theta^\alpha\theta_\alpha\,
e^{-3\phi-\tilde\phi}\rangle=1\ . \label{innerp}
\ee
Let us first discuss the spectrum of massless states in the Neveu-Schwarz
sector. The simplest such state is given by \cite{us2}
\be
V=c\, e^{-\phi-\tilde\phi} e^{ip\cdot X} \ . \label{massless}
\ee
\noindent As in the case of the $N=2$ string discussed in \cite{lp}, since the
picture-changing operators are not invertible the massless states in
different pictures cannot necessarily all be identified. In fact, the
picture changers annihilate the massless operators such as
(\ref{massless}) when the momentum $p^{\alpha\dot 1}$ is zero. However,
massless operators in other pictures still exist at momentum
$p^{\alpha\dot 1}=0$. For example, in the same picture as the physical
operator $Z_{\tilde \xi} V$ that vanishes at $p^{\alpha\dot 1}=0$ is a
physical operator that is non-vanishing for all on-shell momenta, namely
\cite{us2}
\be
\Psi=h_\alpha\, c\,\theta^\alpha\, e^{-\phi}\, e^{ip\cdot X}\ ,
\label{psi}
\ee
\noindent which is physical provided that $p^{\alpha\dot 1}\, h_\alpha=0$ and
$p_{\alpha\dot\alpha} p^{\alpha\dot\alpha}=0$. In fact, $Z_{\tilde \xi}
V$ is nothing but $\Psi$ with the polarisation condition solved by
writing $h^\alpha=p^{\alpha\dot 1}$. However, we can choose instead to
solve the polarisation condition by writing $h^\alpha=p^{\alpha\dot 2}$,
which is non-vanishing even when $p^{\alpha\dot 1}=0$. Thus, the
operators $V$ and $\Psi$ cannot be identified under picture changing
when $p^{\alpha\dot1}=0$. In fact when $p^{\alpha\dot1}=0$ there is
another independent solution for $\Psi$, since the polarisation
condition becomes empty in this case. A convenient way to describe the
physical states is in terms of $\Psi$ given in \eqn{psi}, with the
polarisation condition re-written in the covariant form
$p^{\alpha\dot\alpha}\, h_\alpha=0$, together with a further physical
operator which is defined only when $p^{\alpha\dot1}=0$. In this
description, the physical operator $\Psi$ is defined for all on-shell
momenta.
If one adopts the traditional viewpoint that physical operators
related by picture changers describe the same physical degree of
freedom, one would then interpret the spectrum as containing a massless
operator \eqn{massless}, together with an infinite number of massless
operators that are subject to the further constraint $p^{\alpha\dot1}=0$
on the on-shell momentum
\footnote{It should be emphasized that the possibility of having
$p^{\alpha\dot 1}=0$ while $p^{\alpha\dot 2}\ne 0$ is a consequence of
our having chosen a real structure on the $(2,2)$ spacetime
\cite{us1,lp}, rather than the more customary complex structure
\cite{ov}.}.
This viewpoint is not altogether satisfactory in a
case such as ours, where the picture changing operators are not
invertible. An alternative, and moreover covariant, viewpoint is that
the physical operators in different pictures, such as $V$ and $\Psi$,
should be viewed as independent. At first sight one might think that
this description leads to an infinite number of massless operators.
However, as shown in \cite{us2}, the interactions of all the physical
operators can be effectively described by the interaction of just the
two operators $V$ and $\Psi$.
Thus the theory effectively reduces to one with just two massless
operators, one a scalar and the other a spinorial bosonic operator.
As for the massive states, an infinite tower of them exists, and they
have been discussed in \cite{us2}. They all have positive mass, and
non-standard ghost structure
\footnote{By considering the interactions, one can deduce the existence of
an infinite tower of massive states with standard ghost-structure as
well \cite{us2}.}.
A typical such tower is given by \cite{us2}
\be
V_n = c\, (\partial^n p)^2\cdots p^2\,
e^{n\phi-(n+2)\tilde\phi}\,
\partial^{2n+2}\gamma\cdots \partial\gamma\, e^{ip\cdot X}\ ,
\label{massive}
\ee
\noindent where $n> -1$ and the mass is given by ${\cal M}^2=(2n+2)(2n+3)$.
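For instance, the lowest member of the tower, $n=0$, has ${\cal M}^2=2\cdot 3=6$,
while $n=1$ gives ${\cal M}^2=4\cdot 5=20$.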
For subtleties concerning the exclusion of the zero-mode of
the $\gamma$ field in the Hilbert space of physical states, and the nature
of the picture-changing operators in the massless versus massive sectors of
the theory, we refer the reader to \cite{us2}.
As for the interactions, there is one three-point interaction between
the massless operators, namely \cite{us2}
\be
\Big \langle \Psi(z_1, p_{(1)})\, \Psi(z_2, p_{(2)})\, V(z_3, p_{(3)})
\Big\rangle = h_{(1)\alpha}\, h_{(2)}^\alpha \ .\label{3pf2}
\ee
\noindent Note that this three-point amplitude is manifestly Lorentz invariant.
There are also an infinite number of massless physical operators with
different pictures in the spectrum, and they can all be expressed in a
covariant way. As one steps through the picture numbers, the character
of the physical operators alternates between scalar and spinorial. The
three-point interactions of all these operators lead only to the one
amplitude given by \eqn{3pf2}. In view of their equivalent
interactions, all the scalar operators can be identified and all the
spinorial operators can be identified.
The massless spectrum can thus be effectively described by the scalar
operator \eqn{massless} and the spinorial operator \eqn{psi}. All
four-point and higher amplitudes vanish.
Although the theory contains an infinite tower of physical operators,
the massless sector and its interactions are remarkably simple. In
particular, although all the massive physical operators break Lorentz
invariance, the massless operators and their interactions have manifest
spacetime Lorentz invariance. If we associate spacetime fields $\phi$
and $\psi_\alpha$ with the physical operators $V$ and $\Psi$, it follows
from the three-point amplitude \eqn{3pf2} that we can write the field
equations \cite{us2}:
\be
\partial_{\alpha\dot\alpha}\partial^{\alpha\dot\alpha}\phi=\psi^\alpha\,
\psi_\alpha\, \qquad \partial_{\alpha\dot\alpha}\psi^\alpha =
\psi^\alpha\partial_{\alpha\dot \alpha}\phi\ .\label{feom}
\ee
We have suppressed Chan-Paton group theory factors that must be
introduced in order for the three-point amplitude to be non-vanishing in
the open string. It is easy to see even from the kinetic terms in the
field equations \eqn{feom} that there is no associated Lagrangian. Note
that there is no undifferentiated $\phi$ field, owing to the fact that
the theory is invariant under the transformation $\phi\longrightarrow
\phi + {\rm const.}$ It would be of interest to obtain the higher-point
amplitudes from the field equations \eqn{feom}; these should vanish
if they are to reproduce the string interactions.
In summary, the new $n=0$ model has a massless scalar and fermion, in
addition to an infinite tower of massive particles. However, the model
lacks spacetime supersymmetry. Moreover, while the massless fields have
interesting interactions, for which we can write down the field
equations not derivable from a Lagrangian, the model does not seem to
describe the interactions of self-dual gravity.
\vspace{8mm}
\noindent{\bf Acknowledgements}
\vspace{4mm}
The work described in this paper owes greatly to H. L\"u and C.N. Pope,
who are gratefully acknowledged. This work was supported in part by the
National Science Foundation, under grant PHY--9411543.
\vfill\eject
\section{Introduction}
~
The terminology {\it geometry of information} refers to models of databases subject to noise. This connects to quantification, storage, and communication of digital information. Applications of fundamental topics of information theory include source coding, data compression, and channel coding or error detection and correction. In this paper, we consider algebraic structures occurring in geometry of information and we prove surprising connections between the theory of geometry of information and diophantine geometry (see for instance~\cite{Cor} for an introduction).
\smallskip
We unravel tight bridges between objects of information geometry (such as manifolds of probability distributions and codes) and diophantine geometry and algebraic geometry.
In the first part of this paper, we work with the space of probability distributions on finite sets (see~\cite{Am10} for an introduction and ~\cite{CoCoNen21,CoCoNenFICC,CoMa20} for new developments). Probability distributions are used in many problems such as machine learning, vision, statistical inference, and neural networks; this development thus provides a strong tool for many areas. We consider the class of statistical manifolds of exponential type. The aim of the first part of the paper is to consider the structure of the space of probability distributions and the category of these spaces.
Our work is subdivided into three main parts. The first part regards the collection of all probability distributions on the measurable (discrete, with $n$ outcomes) space $Cap_n$ and their embeddings into $Cap_m$ (where $m>n$). It turns out that hidden hypercubic symmetries appear.
The second part proves that the Manin conjecture concerning rational points on a Fano variety can be extended to the case of information geometry. The third part regards a codes/error-correcting-codes aspect of information geometry, and we show a tight relation to the motivic Galois group and to a modified version of parenthesised braids, serving as a way of encoding all possible errors occurring in a given word.
\smallskip
In the first part, we give a proof of the following statement:
\begin{thm-non}(Thm. \ref{T:1})
Let $Cap_n=Cap(\Omega_{n}, \mathcal{A}_{n})$ be the collection of all probability distributions on the measurable (discrete) space $(\Omega_n, \mathcal{A}_n)$, where $\Omega_n$ is formed from $n$ outcomes.
Then, the diagram of all embeddings of the multi-product $\underbrace{Cap_2\times\cdots\times Cap_2}_{n+1\, times}$ in $Cap(\Omega_{2^{n+1}}, \mathcal{A}_{2^{n+1}})$ has the structure of an $n$-cube.
\end{thm-non}
\smallskip
In the second part of this paper, we show an extension of the Manin conjecture to a wider family of objects, thus exhibiting deep connections
between information geometry and arithmetic/algebraic geometry. In particular, we show that Manin's conjecture concerning the diophantine geometry of Fano varieties~\cite{ManConj} holds in the case of exponential statistical manifolds, defined over a discrete sample space. Initially, the Manin conjecture states the following.
{\it ``Let $V$ be a Fano variety (defined over a number field $K$);
let $H$ be a height function, relative to the anticanonical divisor, and assume that $V(K)$
is Zariski dense in $V$. Then there exists a non-empty Zariski open subset
$U\subset V$ such that the counting function of $K$-rational points of bounded height, defined by the set
$N_{{U,H}}(B)=\#\{x\in U(K):H(x)\leq B\}$
for $B\geq 1$, satisfies the relation $N_{{U,H}}(B)\sim cB(\log B)^{{\rho -1}}$,
as $B\to \infty$, where $c$ is a constant.''}
A {\it reformulation} of this statement in terms of the pre-Frobenius statistical manifolds of exponential type (defined on a discrete sample space) is given. We show that this conjecture is {\it true} for those manifolds of information geometry.
\begin{thm-non}(Thm. \ref{T:2})
Consider $S=\{p(q,\theta)\}$ an exponential statistical manifold (over a discrete sample space $(\Omega,\mathcal{A})$) of finite dimension.
\begin{itemize}
\item Let $T=(\mathbb{Q}^*)^m$ be the $\mathbb{Q}$-torus of the exponential statistical manifold given by the probability coordinates.
\item Consider $\mathbb{P}_\Sigma$ the smooth $\mathbb{Q}$-compactification of the torus $T$ i.e. a smooth, projective $\mathbb{Q}$-variety in which $T$ lies as a dense open set and $\Sigma$ is a Galois invariant regular complete fan.
\item Let $k$ be the rank of the Picard group $Pic(\mathbb{P}_\Sigma)$.
\end{itemize}
Then, there is only a finite number $N(T,\mathcal{K}^{-1},B)$ of $\mathbb{Q}$-rational points $x \in T(\mathbb{Q})$ having the anticanonical height $H_{\mathcal{K}^{-1}}(x)\leq B$.
Moreover, as $B\to \infty$:
\[N(T,\mathcal{K}^{-1},B)=\frac{ \Theta(\Sigma)}{(k-1)!}\cdot B(\log B)^{k-1}(1+o(1)),\]
where $\Theta(\Sigma)$ is a constant.
\end{thm-non}
In former works of Manin and collaborators~\cite{Mouf,QuOp}, the existence of Moufang patterns encoding various symmetries, appearing naturally in models related to storing and transmitting information such as information spaces, was shown. By Moufang patterns we have in mind in particular loops (such as Moufang loops). The latter form non-associative analogs of groups. For the case of spaces of probability distributions on finite sets, the symmetries of these spaces have the structure of Commutative Moufang Loops.
Loops and quasigroups turn out to play a central role when it comes to considering geometry of information.
The aspect relating geometry of information and (virtually) non-commutative Moufang Loops appears in the context of error-correcting codes and algebraic-geometry codes~\cite{Mouf}.
In the third part of this work, we consider an aspect of geometry of information directly related to semantics, to the theory of error-correcting codes and to errors~\cite{Err,sem}. Any natural language can be considered as a tool for producing large databases. Communication (or transmission of information) refers to the process by which a sender communicates a message (i.e. a union of sequences of letters forming words, defined in a given alphabet and associated to a given meaning) to a receiver. During a given communication the message can arrive distorted. We investigate the cases where the message is subject to distortion (or is coded) and finally arrives modified.
A modification can take different aspects, such as: permutations of letters, replacement of a letter by another one, removal of a letter or, on the contrary, extension of words by adding letters and sequences of words. In particular, when it comes to coding, Latin squares can be used to decode the message.
\smallskip
We consider the space of all possible modifications of words (indexed by their length) and suppose that our words are parenthesised. Suppose that we have a pair of parenthesised words $w$ and $w'$ of the same length. It turns out that an object which is perfectly suited to describing this situation, and which also offers a geometric vision of the paths of errors made during a transmission, is tightly related to the groupoid of parenthesised braids, first considered by Drinfeld and introduced to study the Grothendieck--Teichm\"uller group.
In order to cover all sorts of error transmissions we introduce an {\it enriched version of the groupoid of parenthesised braids}, denoted ${\bf mPaB}$ for {\bf m}odified {\bf pa}renthesised {\bf b}raids. This modification of the classical parenthesised braid is necessary due to the fact that the words are allowed to have repeating letters and this is strongly related to loops and quasigroups. The braids are consequently impacted. Thus we equip the standard braids with two supplementary operations called {\it pinching} and {\it attaching} operations.
This groupoid of modified parenthesised braids inherits naturally the operations of cabling $d_i$, strand removal $s_i$, extension $d_0$ as well as $\square$, the coproduct functor defined by setting each individual parenthesised braid $B$ to be group-like, i.e. $\square(B)=B\otimes B$ and $\sigma$ the elementary braid on two strands.
\smallskip
It turns out that there is a strong connection between the space of all possible transmission errors and arithmetic. Indeed, the pro-unipotent Grothendieck--Teichm\"uller group (and thus the motivic Galois group) is contained in the automorphism group of the pro-unipotent completion $\widehat{{\bf mPaB}}$.
\begin{thm-non}(Thm. \ref{T:GT} and Cor. \ref{C:mot})
The motivic Galois group is contained in the automorphism group $Aut(\widehat{{\bf mPaB}})$ of the pro-unipotent completion of the modified parenthesised braids.
\end{thm-non}
\smallskip
To conclude, we discuss an open question relating the Commutative Moufang Loop (CML) structure arising in the symmetries of the spaces of probability distributions on a discrete set. The appearance of the simplest CMLs in the algebraic-geometric setup is motivated by smooth cubic curves in a projective plane $\mathbb{P}_K^2$ over a field $K$. It is in particular shown that the set $E$ of $K$-points of such a curve $X$ forms a CML. Regarding the result of the first part of the paper, we conjecture that the set $E$ of $\mathbb{Q}$-points of a pre-Frobenius statistical manifold also forms a CML.
\smallskip
\subsection*{Plan of the paper}
-Sec.~\ref{S:1} of the paper is devoted to considering $Cap_n$, which is the collection of all probability distributions on the measurable space $(\Omega_n, \mathcal{A})$. The sample space is discrete and $\Omega_n$ has $n$ outcomes. There exists an associated monad (the Giry monad) and an algebra over it (an Eilenberg--Moore algebra). We discuss in particular the relations among the products of the $Cap_n$, which turn out to be hypercubic.
Secondly, we prove that the Manin conjecture holds for the (pre-Frobenius) statistical manifolds related to exponential families and defined over a discrete (finite) set (Sec.~\ref{S:ManinConj}).
-Sec.~\ref{S:3} discusses another aspect that geometry of information can take, via codes and error codes. It serves as an intermezzo between Sec.~\ref{S:1} and Sec.~\ref{S:mPaB} and prepares the ground for what follows. After recalling definitions on loops and quasigroups, we study in particular the algebraic properties of the space of modified words and show that quasigroups and loops offer a perfect setup for this.
-Sec.~\ref{S:mPaB} introduces our modified parenthesised braids, which form a key tool in code-correction. We show that the standard parenthesised braids ${\bf PaB}$ form a full subcategory of ${\bf mPaB}$. We discuss the role of the Grothendieck--Teichm\"uller group in relation to the modified parenthesised braids (Sec.~\ref{S:GT}). Finally,
we end the section by showing that the motivic Galois group is contained in the automorphism group $Aut(\widehat{{\bf mPaB}})$. We conclude by presenting an open question concerning rational points, Commutative Moufang Loops and information geometry (Sec.~\ref{S:conj}).
\medskip
\section{Statistical pre-Frobenius manifolds in relation to algebraic geometry and Manin's conjecture}\label{S:1}
\subsection{Categorical introduction of considered objects}
Dealing with classical information theory leads to working in the following framework. Let $(\Omega,\mathcal{A},P_0)$ be a probability space, where $\Omega$ is the space of elementary outputs, $\mathcal{A}$ is the $\sigma$-algebra of events and $P_0$ is a probability measure (usually $P_0\ll \mu$, $P_0[A]=\int_A f_0d\mu$ for some ($\sigma$-finite) measure $\mu$).
\smallskip
-- The algebra $\frak{B}_c$ is the (commutative) algebra of all bounded measurable functions $f$ on the space of elementary outcomes $\Omega$.
It is an algebra with respect to addition and multiplication by scalars of functions $f:\Omega\to \mathbb{R}$, $f^{-1}(x)\in \mathcal{A}$, $x\in \mathbb{R}$.
\smallskip
-- The probability state of an object is determined by a nonnegative, normalized, normal (i.e., ultra-weakly continuous, or what is the same, monotone continuous) linear functional $\Phi_c$ on $\frak{B}_c$, $\Phi_c:\frak{B}_c\to \mathbb{R}$, which is the expectation with respect to some probability $P_0$:
\[ \Phi_c(f) =\mathbb{E}_{P_0}[f].\]
\vspace{5pt}
-- The set $\mathfrak{I}(\frak{B}_c)$ of all states $\Phi_c$ of an object is a convex closed set in the pre-dual space $(\frak{B}_c)_\star$, where $((\frak{B}_c)_\star)^\star=\frak{B}_c$.
\vspace{5pt}
-- The idempotents of $\frak{B}_c$ are just the indicators (characteristic functions) of the measurable sets (elements of $\mathcal{A}$); the corresponding subsets are called events (or ``yes-no'' experiments).
Before we enter a categorical definition, let us mention that the collection of all probability measures on a measurable space $(\Omega,\mathcal{A})$ of elementary outcomes
is a {\it convex subset} of the semi-ordered linear space of measures of bounded variation on $(\Omega,\mathcal{A})$.
In some cases, it is useful to remark that the collection of all probability measures on $(\Omega,\mathcal{A})$ is equipped with a norm, giving rise to a metric space. However, this aspect will not be important to us here.
\smallskip
These measures are endowed with the supplementary property that they are {\it invariant} under maps of the collection of probability measures induced by invertible measurable maps of the sample space $(\Omega,\mathcal{A})$. This means that given a pair of sample spaces $(\Omega_1,\mathcal{A}_1)$ and $(\Omega_2,\mathcal{A}_2)$ equipped with their corresponding collections of probability measures, say $\{P_{\theta}^i, \, \theta\in \Theta \}$ where $i\in \{1,2\}$, and with the (same) parameter set $\Theta\subset \mathbb{R}^n$, one can develop a notion of {\it equivalence}: $\{P_{\theta}^1, \, \theta\in \Theta \}$ and $\{P_{\theta}^2, \, \theta\in \Theta \}$ are said to be equivalent whenever there exist Markov maps $\Pi^{12}$ and $\Pi^{21}$ such that $P^{1}_{\theta}\Pi^{12}= P^{2}_{\theta}$ and $P^{2}_{\theta}\Pi^{21}=P^{1}_{\theta}$, for any $\theta\in\Theta$.
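In the finite setting a Markov map between $Cap_n$ and $Cap_m$ is represented by a row-stochastic $n\times m$ matrix acting on probability row vectors, and the equivalence above amounts to the matrix identity $P^1_\theta\Pi^{12}=P^2_\theta$. The following minimal Python sketch (our illustration; the matrix and the distribution are chosen arbitrarily and are not taken from the text) checks the defining properties:
\begin{verbatim}
import numpy as np

def is_markov(Pi):
    # A Markov map between finite Caps is a row-stochastic matrix:
    # nonnegative entries, each row summing to 1.
    return bool(np.all(Pi >= 0)) and np.allclose(Pi.sum(axis=1), 1.0)

def push(p, Pi):
    # Image of the distribution p (a row vector) under the Markov map Pi.
    return p @ Pi

Pi = np.array([[0.5, 0.5, 0.0],      # an arbitrary Markov map Cap_2 -> Cap_3
               [0.0, 0.25, 0.75]])
p = np.array([0.4, 0.6])             # a distribution in Cap_2

assert is_markov(Pi)
q = push(p, Pi)                      # a distribution on three outcomes
assert np.isclose(q.sum(), 1.0)      # Markov maps preserve total mass
print(q)                             # [0.2  0.35 0.45]
\end{verbatim}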
We call $Cap$ the collection of all probability distributions on the measurable space $(\Omega,\mathcal{A}).$
The discussion above leads to defining a category denoted $CAP$, where objects are isomorphism classes of collections of all probability distributions on measurable spaces $(\Omega,\mathcal{A})$; morphisms are given by the Markov maps. These Markov maps correspond to statistical decision rules in the sense of Wald.
Further algebraic operations on the objects of the category are allowed and are defined as follows. One can define a {\it direct product} of measurable spaces. This construction implies the existence of a tensor product on the collection of all probability measures on those measurable spaces. This multiplication is functorial with respect to the Markov category.
To give an example, let us take $Cap_2$, where $\Omega=\{\omega_1,\omega_2\}$ and the probability distributions $p=\langle p_1,p_2\rangle.$
The product $Cap_2\times Cap_2$ is defined over $\{\omega_1,\omega_2\} \times\{\omega'_1,\omega'_2\}$, which corresponds to $\{\omega_1\omega'_1,\omega_1\omega'_2,\omega_2\omega'_1,\omega_2\omega'_2\}$ and can be rewritten as
$\{\omega^{''}_1,\omega^{''}_2,\omega^{''}_3,\omega^{''}_4\}$. If $p=\langle p_1,p_2\rangle$ and $q=\langle q_1,q_2\rangle$ are the corresponding probability distributions, then the tensor product on the probability distributions is such that:
\[p\otimes q=\langle p_1q_1,p_1q_2, p_2q_1,p_2q_2\rangle.\]
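Computationally, this tensor product is just the outer product of the two probability vectors, flattened in lexicographic order. A minimal Python sketch (our illustration, with arbitrarily chosen distributions):
\begin{verbatim}
import numpy as np

def tensor(p, q):
    # Tensor product of two finite distributions, flattened in
    # lexicographic order: (p x q)_{ij} = p_i q_j.
    return np.outer(p, q).ravel()

p = np.array([0.3, 0.7])             # an element of Cap_2
q = np.array([0.6, 0.4])             # an element of Cap_2
pq = tensor(p, q)                    # an element of Cap_4
assert np.isclose(pq.sum(), 1.0)     # the result is again a distribution
print(pq)                            # [0.18 0.12 0.42 0.28]
\end{verbatim}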
We shall investigate more precisely what happens during this multiplication process in $Cap_n$. In particular, we show, using Segre-type embeddings, that we have a hypercube-type relation among these operations.
However, note that in this paper, we are not interested in the quantum aspect of information theory and we limit ourselves to the case where the algebra is commutative. Hence, we do not consider the quantum information geometry aspect which requires a von Neumann algebra $\frak{b}$ of bounded linear operators acting on Hilbert space.
This algebra $\frak{b}$ corresponds to a (generally non-commutative) analogue of the classical commutative algebra of all bounded measurable functions on the space of elementary outcomes. The Hermitian elements of the algebra $\frak{b}$ are called bounded observables. The probability state of an object is determined by a (nonnegative, normalised, normal, monotone continuous) linear functional $\phi$ on the algebra $\frak{b}$. The set of all states $S(\frak{b})$ of a given object forms a convex closed set in the pre-dual space $\frak{b}_*$, where $(\frak{b}_*)^*=\frak{b}$.
Moreover, analogous constructions concerning the Markov maps can be defined and the system of all Markov maps of all collections $S(\frak{b})$ forms an algebraic category.
To summarise, there exist tight relations between convex sets and the sets of states in probabilistic computation (discrete or continuous) and in quantum computation. We will explore this from a more categorical viewpoint. In particular, we invoke the Giry monad structure and define an Eilenberg--Moore algebra over this monad to consider our convex sets for information geometry.
\smallskip
Consider the category ${\rm Set}$ of finite sets. Given an object $X$ of ${\rm Set}$, we define
\[\Delta_X=\big\{\tilde{f}: X\to [0,1]\, |\, \tilde{f}\, \text{has finite support and} \, \sum_{x\in X} \tilde{f}(x)=1\big\}\]
the simplex over $X$. It is the set of formal finite convex combinations of elements from $X$.
Elements of $\Delta_X$ are the discrete probability distributions over $X$. The mapping from the set $X$ to the set $\Delta_X$ can be made functorial (it is known as the simplex functor): given a morphism of sets $f:X\to Y$ one defines $\Delta(f):\Delta_X\to \Delta_Y$.
One may define, for these convex sets, the algebraic structure of a monad $(\Delta,\mu,\eta)$, where the unit is given by $\eta:X\to \Delta_X$ and the multiplication is defined by $\mu:\Delta_{\Delta_X}\to \Delta_X$. This monad is commutative. Moreover, given an object $X\in Ob({\rm Set})$ and a {\it structure map} $\gamma:\Delta_X\to X$, commutativity of the following diagrams is satisfied:
\begin{center} \begin{tikzcd}
X \arrow[rd,"\eta_X"] \arrow[r,"Id_X"] & X \\
& \Delta_X\arrow[u, "\gamma"]
\end{tikzcd}
\begin{tikzcd}
\Delta_{\Delta_X} \arrow[d,"\mu_X"] \arrow[r, "\Delta_{\gamma}"] & \Delta_X \arrow[d,"\gamma"] \\
\Delta_X \arrow[r, "\gamma"]& X.
\end{tikzcd}
\end{center}
Taking a pair $(X,\gamma)$ allows one to work with an Eilenberg--Moore algebra for the distribution monad $(\Delta,\mu,\eta)$ over the symmetric monoidal category of sets. An algebra morphism $f:(X,\gamma)\to (X',\gamma')$ is a continuous map such that the following diagram commutes:
\begin{center}
\begin{tikzcd}
\Delta_{X} \arrow[d,"\Delta(f)"] \arrow[r, "{\gamma}"] & X \arrow[d,"f"] \\
\Delta_{X'} \arrow[r, "\gamma'"]& X'.
\end{tikzcd}
\end{center}
An Eilenberg--Moore algebra of the monad $(\Delta,\mu,\eta)$ is a map of the form $\gamma: \Delta_X \to X$ s.t. $\gamma \circ \eta=id$ and $\gamma\circ \mu=\gamma \circ \Delta_\gamma.$ Note that each category of algebras for a monad on sets is cocomplete.
The category of Eilenberg--Moore algebras is both complete and cocomplete.
A more global idea hides behind this, in the context of probability distributions and concerning the relation between algebras for the Giry monad~\cite{Giry} and the convex spaces formed by the collection of probability distributions. This comes from the following statement:
The category of algebras for the Giry monad is isomorphic to the category of $G$-partitions. Here, by a $G$-partition we mean that to $X$ (in a fully general setting $X$ is a Polish space, i.e. a separable metric space for which a complete metric exists) corresponds a collection $\{G(x)\, |\, x\in X\}$, which forms a positive convex partition of $\Delta_X$ into closed sets indexed by $X$.
Moreover, $\delta_x\in G(x)$ (where $\delta_x$ is the Dirac measure at $x$) holds for all $x\in X$, and the set-valued map $x\mapsto G(x)$ is $k$-upper-semicontinuous.
Now, algebras over a commutative monad admit a tensor product. By $\star$ we denote the monoidal multiplication of $\Delta$.
So, given $\Delta$-algebras $(A,a)$ and $(B,b)$ their tensor product is the object $A\otimes_\Delta B$ given by:
\[\Delta(\Delta A\otimes \Delta B)\xrightarrow{\star}\Delta \Delta(A\otimes B)\xrightarrow{\mu}\Delta(A\otimes B)\to A\otimes_\Delta B.\]
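To make the structure maps concrete in the finite case: the unit $\eta$ sends $x$ to the Dirac distribution $\delta_x$, the multiplication $\mu$ flattens a distribution over distributions, and a typical Eilenberg--Moore structure map on a convex set is the barycenter (expectation) map. The following Python sketch is our own illustration (finitely supported distributions are encoded as dictionaries); it checks the two algebra axioms for the barycenter on the convex set $X=[0,1]$:
\begin{verbatim}
def eta(x):
    # Unit of the distribution monad: the Dirac distribution at x.
    return {x: 1.0}

def mu(Phi):
    # Multiplication: flatten a distribution over distributions.
    # Phi is a list of (distribution, weight) pairs.
    out = {}
    for dist, w in Phi:
        for x, p in dist.items():
            out[x] = out.get(x, 0.0) + w * p
    return out

def gamma(dist):
    # Structure map on the convex set X = [0,1]: the barycenter
    # (expectation) of a finitely supported distribution.
    return sum(x * p for x, p in dist.items())

# First algebra axiom: gamma . eta = id.
assert gamma(eta(0.25)) == 0.25

# Second axiom: gamma . mu = gamma . Delta(gamma), on a small example.
Phi = [({0.0: 0.5, 1.0: 0.5}, 0.4), ({0.5: 1.0}, 0.6)]
pushed = {}                          # Delta(gamma) applied to Phi
for dist, w in Phi:
    y = gamma(dist)
    pushed[y] = pushed.get(y, 0.0) + w
assert abs(gamma(mu(Phi)) - gamma(pushed)) < 1e-12   # both equal 0.5
\end{verbatim}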
\smallskip
Now that we have explicitly shown the algebraic structure of the convex spaces of probability distributions, and discussed the tensor product operation for algebras over the Giry monad, we are interested in proving the existence of hidden symmetries appearing within the multiplication relations between $Cap_n$'s. In particular, this leads to proving the existence of a hypercube relation.
Let us go back to the previous example $Cap_2\times Cap_2\hookrightarrow Cap_4$. Recall the tensor product relation on the probability distributions ``\`a la Morozova--Chentsov''~\cite{MoCh}, where one considers a probability distribution as a vector in an affine space. Regarding our example, this gives us:
\[p\otimes q=\langle p_0q_0,p_0q_1, p_1q_0,p_1q_1\rangle,\]
so that we can consider $p\otimes q$ as the 4-tuple: $\langle p_0q_0,p_0q_1, p_1q_0,p_1q_1\rangle=\langle p'_0,p'_1,p'_2,p'_3\rangle$ defined in the affine space. This relation can obviously be generalised for any $n$ i.e for $Cap_n$.
Now, from an affine tuple $p=\langle p_1,\cdots, p_n\rangle$ one can easily pass to homogeneous coordinates: $P=[p_1:p_2:\cdots: p_n]$. So, whenever we consider the product $P\otimes Q$ from the projective perspective,
one has $P=[p_0:p_1:\cdots: p_n]$ and $Q=[q_0:q_1:\cdots: q_m]$.
This leads to defining a (real) Segre embedding, which looks as follows:
\[h:\mathbb{P}^n\times \mathbb{P}^m \hookrightarrow\mathbb{P}^{(n+1)(m+1)-1},\]
where $\mathbb{P}^n$ is the {\it real} projective space.
Taking a pair of points $(P,Q)$ in the projective space $\mathbb{P}^n\times \mathbb{P}^m$ (corresponding to elements in $Cap$) one can define the product:
\[([p_0:p_1:\cdots p_n],[q_0:q_1:\cdots q_m])\mapsto [p_0q_0:p_0q_1:\cdots:p_iq_j:\cdots p_nq_m].\]
In particular, going back to the $Cap_2$ case, with $P=[p_0:p_1]$ (resp. $Q=[q_0:q_1]$), where $0\leq p_i\leq 1$ and $p_0+p_1=1$ (resp. $0\leq q_i\leq 1$ and $q_0+q_1=1$) we have the following commutative diagram:
\begin{center}
\begin{tikzcd}
\mathbb{P}^1\times \mathbb{P}^1 \times \mathbb{P}^1 \arrow[d,"Id\times h_{(23)}"] \arrow[r,"h_{(12)}\times Id"] & \mathbb{P}^3 \times \mathbb{P}^1 \arrow[d,"h_{((12)3)}"] \\
\mathbb{P}^1 \times \mathbb{P}^3 \arrow[r,"h_{1(23)}"] & \mathbb{P}^7
\end{tikzcd}
\end{center}
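The commutativity of this square is easy to verify numerically, since in coordinates the Segre embedding is the Kronecker product of the coordinate vectors, which is associative. A minimal Python check (our illustration, with arbitrary sample points):
\begin{verbatim}
import numpy as np

def segre(P, Q):
    # Segre embedding in coordinates: ([p_i],[q_j]) |-> [p_i q_j],
    # i.e. the Kronecker product of the two coordinate vectors.
    return np.kron(P, Q)

P1, P2, P3 = np.array([1., 2.]), np.array([3., 5.]), np.array([7., 11.])

route_a = segre(segre(P1, P2), P3)   # h_{((12)3)} after h_{(12)} x Id
route_b = segre(P1, segre(P2, P3))   # h_{(1(23))} after Id x h_{(23)}
assert np.allclose(route_a, route_b) # the same point of P^7

# Well-definedness on projective classes: rescaling a factor only
# rescales the image.
assert np.allclose(segre(7 * P1, P2), 7 * segre(P1, P2))
\end{verbatim}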
The following embedding
\[\underbrace{\mathbb{P}^1\times \cdots \times\mathbb{P}^1}_{n\ times}\to \mathbb{P}^{2^n-1}\]
corresponds to a generalised Segre embedding. This allows to consider the relation between $Cap_2$ and $Cap_{2^n}$ and leads to the following remark on the geometry of $Cap_{2^n}$.
\begin{rem}
Note that this highlights a new way to show the existence of a paracomplex structure on objects in $Cap$. A paracomplex projective space of dimension $n$ is identified with a product of projective spaces of the same dimension $n$ (defined over the real numbers), i.e. $\mathbb{P}^n\times \mathbb{P}^n$. Since we can use the Segre embedding and the construction above, we see that, for instance, in $Cap_4$ we have an embedded paracomplex projective space. Using the generalised Segre embedding, the statement generalises to $Cap_{2^n}$. The paracomplex structure has been mentioned in the work~\cite{CoMa20}; however, this gives another approach to that result.
\end{rem}
The generalised Segre embedding implies the existence of $(n-1)$-hypercube relations when considering $\underbrace{Cap_2\times \cdots \times Cap_2}_{n\, times}.$
Let us start with the following relation.
\[\underbrace{\mathbb{P}^1\times \cdots \times\mathbb{P}^1}_{n\ times}\to \mathbb{P}^{2^n-1}.\]
We proceed by analogy on $Cap_n$, so that we in fact obtain a relation similar to the Segre embedding:
\[\underbrace{Cap_2\times \cdots \times Cap_2}_{n\, times}\to Cap_{2^n}\]
Take $n=3$. For the product $Cap_2\times Cap_2\times Cap_2$ we have the following commutative square:
\begin{center}
\begin{tikzcd}
\mathbb{P}^1\times \mathbb{P}^1 \times \mathbb{P}^1 \arrow[rr, "h_{(12)}\times Id_3"] \arrow[d,"Id_1\times h_{23}" ] & &
\mathbb{P}^3 \times \mathbb{P}^1 \arrow[d, "h_{(12) 3}" ] \\
\mathbb{P}^1 \times \mathbb{P}^3 \arrow[rr, "h_{1 (23)}"] && \mathbb{P}^7
\end{tikzcd}
\end{center}
\begin{thm}\label{T:1}
The diagram of embeddings of $\underbrace{Cap_2\times \cdots \times Cap_2}_{n+1}$ in $Cap_{2^{n+1}}$ has the structure of an $n$-cube.
\end{thm}
\begin{proof}
The proof is done by exhibiting a bijection between the vertices and edges of the generalised Segre embedding diagram and the vertices and edges of the $n$-cube.
Take a product $\underbrace{\mathbb{P}^1\times\cdots \times \mathbb{P}^1}_{n+1}$. For any pair of (adjacent) projective spaces in this cartesian product to which the Segre embedding is applied, add a pair of parentheses (one open and one closed) such that
\[(\mathbb{P}^1\times\mathbb{P}^1\times \cdots \underbrace{(\mathbb{P}^1\times\mathbb{P}^1)}_{i,i+1}\cdots \times \mathbb{P}^1)\hookrightarrow (\mathbb{P}^1\times\mathbb{P}^1\times \cdots \underbrace{\mathbb{P}^3}_{i}\cdots \times \mathbb{P}^1).\]
The construction goes as follows, by induction on $n\geq1$.
\begin{itemize}
\item To any parenthesised product $\underbrace{(\mathbb{P}^1\times \mathbb{P}^1) \times \mathbb{P}^1\times \cdots\times (\mathbb{P}^1\times \mathbb{P}^1)}_{n+1}$ corresponds a binary word $x=(x_1,\cdots, x_n)$ with $n$ letters $x_i\in\{0,1\}$, written as $\underbrace{\mathbb{P}^1\overbrace{\times}^{(x_1,}\mathbb{P}^1\overbrace{\times}^{x_2,} \mathbb{P}^1\cdots\overbrace{\times}^{\cdots,x_n)}\mathbb{P}^1}_{n+1}$, where:
\end{itemize}
\[\begin{cases}
x_i=0 & \text{if there exists {\bf no} parenthesis for the pair} (i,i+1) \text{of projective spaces:}\\
& \underbrace{\mathbb{P}^1\times\cdots \overbrace{\mathbb{P}^1\times \mathbb{P}^1}^{i,i+1}\times \mathbb{P}^1}_{n+1}. \\
x_j=1 & \text{if there exists a parenthesis for the pair} \underbrace{\mathbb{P}^1\times\cdots \overbrace{(\mathbb{P}^1\times \mathbb{P}^1)}^{j,j+1}\times \mathbb{P}^1}_{n+1}\\
\end{cases}\]
So the number of parentheses is given by the number of 1's in the word. For each 1 added, the remaining zeros of the word correspond to the remaining projective spaces which have not yet been paired.
\begin{itemize}
\item for any $n\geq1$, the product $\underbrace{\mathbb{P}^1\times\cdots \times \mathbb{P}^1}_{n+1}$ corresponds to $\underbrace{(0,\cdots ,0)}_{n}$.
\item $(1,1,\cdots,1)$ corresponds to the projective space $\mathbb{P}^{2^{n+1}-1}$.
\item Each combination of parenthesis being encoded by a binary word $(x_1,\cdots, x_n)$ corresponds to the vertex of the Segre embedding diagram.
\item Add one parenthesis to a given combination. This corresponds to adding a 1 to the word, i.e. $x_j=1$ if we have $\mathbb{P}^1\times\mathbb{P}^1\times \cdots \underbrace{(\mathbb{P}^1\times\mathbb{P}^1)}_{j,j+1}\cdots \times \mathbb{P}^1$ at the $j$-th and $(j+1)$-th positions.
\item An edge of the diagram is drawn whenever two words $x$ and $x'$ differ by exactly one letter, i.e. there exists a unique $j$ such that $x_j\neq x'_j$, while for $i\neq j$ we have $x_i=x'_i$.
\end{itemize}
$\star$ Let us discuss the low-dimensional case. Take $\mathbb{P}^1\times\mathbb{P}^1$. The corresponding word has one letter $(x_1)$. This corresponds to $Cap_2\times Cap_2$. Then, by the Segre embedding, $(\mathbb{P}^1\times\mathbb{P}^1)\hookrightarrow \mathbb{P}^3$.
The new vertex obtained by the Segre embedding modifies the word 0 into the word 1. The diagram is just a segment (so a cube of dimension 1).
$\star$ We have $\mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1$. Let us use our construction, where vertices of the embedding diagram are encoded by the binary words of length 2, i.e. $(x_1,x_2)$ where $x_i\in \{0,1\}.$
The initial vertex is encoded by $\mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1$ which corresponds to the word $(0,0)$.
Two vertices are connected by an edge whenever the corresponding words differ by exactly one character. So, for instance, the vertex $(0,0)$ is directly connected by an edge to the vertices $(0,1)$ and $(1,0)$, but not to the vertex $(1,1)$. The diagram is a square.
$\star$ For the case $Cap_2\times Cap_2\times Cap_2\times Cap_2$, it is easy to check that one has a cube diagram relation.
$\star$ In full generality, the embeddings of $\underbrace{\mathbb{P}^1\times \cdots \times\mathbb{P}^1}_{n+1\ times}$ arising in the generalised Segre embedding have the structure of an $n$-hypercube graph.
In fact, this statement follows from the definition of a hypercube (or $n$-cube), which is a graph of order $2^n$ whose vertices are represented by $n$-tuples $(x_1,\cdots x_n)$ with $x_i\in \{0,1\}$, and whose edges connect vertices which differ in exactly one term.
We use the construction above, where we have established a bijection between the set of vertices indexed by binary words and the parenthesised product of projective spaces; edges of the $n$-cube correspond to applying one Segre embedding to a pair of parenthesised projective spaces.
So, to conclude we have a hypercube graph relation illustrating the diagram of relations in $Cap_n$.
\end{proof}
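The combinatorics underlying the proof can also be checked by machine: vertices are the binary words of length $n$, edges join words at Hamming distance one, and one recovers the $2^n$ vertices and $n\,2^{n-1}$ edges of the $n$-cube. A short Python sketch (our illustration):
\begin{verbatim}
from itertools import product

def hypercube(n):
    # Vertices: binary words of length n (parenthesisation patterns).
    # Edges: pairs of words differing in exactly one letter, i.e. one
    # further application of a Segre embedding.
    V = list(product((0, 1), repeat=n))
    E = [(u, v) for i, u in enumerate(V) for v in V[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]
    return V, E

for n in range(1, 6):
    V, E = hypercube(n)
    assert len(V) == 2 ** n and len(E) == n * 2 ** (n - 1)

V, E = hypercube(4)                  # the tesseract of Figure 1
print(len(V), len(E))                # 16 32
\end{verbatim}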
We illustrate a four dimensional cube (a tesseract) in the figure below (Fig. 1), where vertices are indexed by words of length 4 and letters are in $\{0,1\}$.
Using the above construction, we can exactly illustrate the Segre embedding relations for
\[\underbrace{Cap_2\times \cdots \times Cap_2}_{5\, times} \to Cap_{32},\] which in the projective version corresponds to illustrating $\underbrace{\mathbb{P}^1\times \cdots \times \mathbb{P}^1}_{5\, times} \to \mathbb{P}^{31}.$
\begin{center}
\begin{tikzpicture}[scale=1.0]
\draw[fill] (-0, 5) circle (.07cm); \node (-0,5) at (-0,5.2) {$\scriptstyle (1111)$};
\node (-0,5) at (-0,5.6) {$\scriptstyle \mathbb{P}^{31}$};
\draw[fill] (-0, 1.9) circle (.07cm); \node (-0,19) at (-0,2.2) {$\scriptstyle (0110)$};
\draw[fill] (-0, -2) circle (.07cm); \node (-0,-2) at (-0,-2.4) {$\scriptstyle (1001)$};
\draw[fill] (-0, -5) circle (.07cm); \node (-0,-5) at (-0,-5.2) {$\scriptstyle (0000)$};
\node (-0,-5) at (-0,-5.6) {$\scriptstyle \mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$};
\draw[fill] (-4, 3.5) circle (.07cm); \node (-4,3.5) at (-4.7,3.5) {$\scriptstyle (0111)$};
\draw[fill] (4, 3.5) circle (.07cm); \node (4,3.5) at (4.6,3.5) {$\scriptstyle (1110)$};
\draw[fill] (-4, -3.5) circle (.07cm); \node (-4,5) at (-4.5,-3.5) {$\scriptstyle (0001)$};
\draw[fill] (4, -3.5) circle (.07cm); \node (4,-3.5) at (4.5,-3.5) {$\scriptstyle (1000)$};
\draw[fill] (-5.7, 0) circle (.07cm); \node (-5.7,0) at (-6.4,0) {$\scriptstyle (0011)$};
\draw[fill] (5.7, 0) circle (.07cm); \node (5.7,0) at (6.2,0) {$\scriptstyle (1100)$};
\draw[fill] (-2, 0) circle (.07cm); \node (-2,0) at (-2.5,0) {$\scriptstyle (0101)$};
\draw[fill] (2, 0) circle (.07cm); \node (1.9,0) at (2.5,0) {$\scriptstyle (1010)$};
\draw[fill] (-1.5, 1.5) circle (.07cm); \node (-1.5,1.5) at (-1.99,1.6) {$\scriptstyle (1011)$};
\draw[fill] (1.6, -1.5) circle (.07cm); \node (1.6,-1.5) at (2,-1.8) {$\scriptstyle (0100)$};
\draw[fill] (-1.6, -1.45) circle (.07cm); \node (-1.6,-1.345) at (-2.1,-1.6) {$\scriptstyle (0010)$};
\draw[fill] (1.55, 1.35) circle (.07cm); \node (1.55,1.35) at (2,1.5) {$\scriptstyle (1101)$};
\draw[black] (-0,5) -- (-4,3.5);
\draw[black] (-0,5) -- (4,3.5);
\draw[black] (-4,3.5) -- (-0,1.9);
\draw[black] (-0,1.9) -- (4,3.5);
\draw[black] (-0,-5) -- (-4,-3.5);
\draw[black] (-0,-5) -- (4,-3.5);
\draw[black] (-4,-3.5) -- (-0,-2);
\draw[black] (-0,-2) -- (4,-3.5);
\draw[black] (-5.7,0) -- (-4,-3.5);
\draw[black] (5.7,0) -- (4,3.5);
\draw[black] (5.7,0) -- (4,-3.5);
\draw[black] (0,5) -- (-1.5,1.5);
\draw[black] (2,0) -- (4,3.5);
\draw[black] (2,0) -- (4,-3.5);
\draw[black] (-5.7,0) -- (-4,3.5);
\draw[black] (-2,0) -- (-4,-3.5);
\draw[black] (-2,0) -- (-4,3.5);
\draw[black] (-1.5,1.5) -- (-5.7,0);
\draw[black] (1.6,-1.5) -- (5.7,0);
\draw[black] (-1.5,1.5) -- (-0,-2);
\draw[black] (1.6,-1.5) -- (-0,1.9);
\draw[black] (0,-5) -- (-1.6,-1.45);
\draw[black] (-5.7,0) -- (-1.6,-1.45);
\draw[black] (1.9,0) -- (-1.6,-1.45);
\draw[black] (0,5) -- (1.55, 1.35);
\draw[black] (5.7,0) -- (1.55, 1.35);
\draw[black] (0,-5) -- (1.6, -1.5);
\draw[black] (0,-2) -- (1.55, 1.35);
\draw[black] (-2,0) -- (1.6, -1.5);
\draw[black] (-2,0) -- (1.55, 1.35);
\draw[black] (0,1.9) -- (-1.6, -1.45);
\draw[black] (2,0) -- (-1.5, 1.5);
\end{tikzpicture}
\smallskip
Figure 1: Segre diagram for $\mathbb{P}^1 \times \mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1$ with vertices labeled by binary words (commas have been omitted for simplicity).
\end{center}
\subsection{Manin's conjecture for discrete exponential families}\label{S:ManinConj}
The theory of exponential varieties reveals the existence of a surprisingly strong connection to diophantine geometry. We show that the asymptotic formula conjectured by Manin for Fano varieties, concerning the number of $K$-rational points of bounded height with respect to the anticanonical line bundle, {\it holds} in the case of a smooth projectivisation of an exponential variety (defined for a discrete finite sample space). This statement extends to the framework of information geometry the conjecture of Manin, which was initially stated in the context of algebraic geometry.
In the light of the previous subsection, we are now working with an object of the category $Cap$.
Our statement goes as follows:
\begin{thm}\label{T:2}
Let $S=\{p(q;\theta)\}$ be an exponential statistical manifold (over a discrete sample space $(\Omega,\mathcal{A})$) of finite dimension.
\begin{itemize}
\item Let $T=(\mathbb{Q}^*)^m$ be the $\mathbb{Q}$-torus of the exponential statistical manifold given by the probability coordinates.
\item Consider $\mathbb{P}_\Sigma$ the smooth $\mathbb{Q}$-compactification of the torus $T$ i.e. a smooth, projective $\mathbb{Q}$-variety in which $T$ lies as a dense open set and $\Sigma$ is a Galois invariant regular complete fan.
\item Let $k$ be the rank of the Picard group $Pic(\mathbb{P}_\Sigma)$.
\end{itemize}
Then, there is only a finite number $N(T,\mathcal{K}^{-1},B)$ of $\mathbb{Q}$-rational points $x \in T(\mathbb{Q})$ having the anticanonical height $H_{\mathcal{K}^{-1}}(x)\leq B$.
Moreover, as $B\to \infty$:
\[N(T,\mathcal{K}^{-1},B)=\frac{ \Theta(\Sigma)}{(k-1)!}\cdot B(\log B)^{k-1}(1+o(1)),\]
where $\Theta(\Sigma)$ is a constant.
\end{thm}
\begin{rem}
The exponential statistical manifold is a pre-Frobenius manifold and we may refer to it as the pre-Frobenius statistical manifold (for a definition of a pre-Frobenius manifold we refer, for instance, to~\cite{Man99}).
\end{rem}
The proof of this statement is done in two parts. The first is to state explicitly the relation between exponential varieties (defined as above for a finite, discrete sample space) and toric varieties. The second part is to apply the theorem of Batyrev--Tschinkel in this context.
\subsection{Exponential statistical manifolds for discrete sample space}
A statistical variety (or manifold) can be considered as the parametrized family of probability distributions $S=\{p(x;\theta)\}$, where $p(x;\theta) =\frac{dP_\theta}{d\mu}$ is the Radon--Nikodym derivative of $P_\theta$ w.r.t. the $\sigma$-finite measure $\mu$ (and it is positive $\mu$-almost everywhere).
It comes equipped with the following ingredients:
\begin{itemize}
\item the canonical parameters: $\theta= (\theta^1,\dots, \theta^n)\in \mathbb{R}^n$;
\item the symbol $x$ referring to a family of random variables $\{x_i\}$ on a sample space $\Omega$;
\item $p(x;\theta)$ is the probability distribution parametrized by $\theta$.
\end{itemize}
A family $S=\{p(x;\theta)\}$ of distributions is an {\it exponential family} if the density functions can be written in the following way:
\[p(x;\theta)= \exp(\theta^ix_i-\Psi(\theta)),\] where
\begin{itemize}
\item $\Psi(\theta)$ is a potential function, given by $\Psi(\theta)=\log\int_{\Omega}\exp\{\theta^ix_i\}d\mu$;
\item the parameter $\theta$ and $x=(x_i)_{i\in I}$ (where $I$ is a finite set) have been chosen adequately;
\item the canonical parameter ranges over $\Theta:=\{\theta\in \mathbb{R}^n: \Psi(\theta)<\infty\}$.
\end{itemize}
Whenever $p(x;\theta)$ is smooth enough in $\theta$, one can endow the statistical model with the structure of an $n$-dimensional manifold. We regard the family $S$ as a manifold, using the atlas $\{U_i,\phi_i\}_{i\in I}$.
From now on suppose that the sample space is finite and discrete, i.e. $\Omega=\{ \omega_1,\cdots,\omega_m\}$.
A small change of notation is required for practical reasons. This leads us to consider the exponential family of probability distributions defined by:
\begin{equation}\label{E:exp1}
p(q;\theta) = p_0(\omega) \exp \{\theta^i q_i ( \omega) - \Psi ( \theta) \},\, \text{where}\quad p_{0}(\omega)> 0
\end{equation}
and with canonical distribution parameter $\theta=(\theta^1,\cdots,\theta^n) \in \mathbb{R}^n$. Here $\omega \in \Omega$ ranges over the elements of the sample space, the $q_i :\Omega \to \mathbb{R}$ form a family $q=\{q_i\}$ of random variables, and $\Psi (\theta)$ is a cumulant generating function.
The functions $q_i(\omega)$, $i \in I$ (where $I$ is some index set), define the directions of the coordinate axes and are called statistics (or {\it directional sufficient statistics}).
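As a purely numerical illustration (our sketch; the variable names and the toy values of the matrix of directional statistics are hypothetical), the density \eqref{E:exp1} can be evaluated directly on a finite sample space:
\begin{verbatim}
import numpy as np

def exp_family_density(theta, Q, p0):
    # p(omega_j; theta) = p0_j * exp(theta^i q_{ij} - Psi(theta)),
    # with Psi(theta) fixed so that the masses sum to one.
    weights = p0 * np.exp(theta @ Q)   # unnormalised masses, shape (m,)
    psi = np.log(weights.sum())        # cumulant generating function
    return weights / np.exp(psi)

theta = np.array([0.3, -0.1])          # canonical parameters, n = 2
Q = np.array([[1, 0, 1, 0],            # q_{ij} = q_i(omega_j), m = 4
              [0, 1, 1, 0]])
p0 = np.full(4, 0.25)                  # base measure p_0(omega) > 0
p = exp_family_density(theta, Q, p0)
assert np.isclose(p.sum(), 1.0)
\end{verbatim}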
\begin{prop}\label{P:tor}
The exponential statistical manifolds (defined as above and over a discrete and finite sample space) have the structure of a real toric variety.
\end{prop} We present the construction below.
\smallskip
\begin{proof}
$\bullet$ Let us define $\mathcal{Q}=\{{\bf q}_1,\cdots,{\bf q}_n\}$, where ${\bf q}_i=(q_{i1},\cdots,q_{im})^T \in \mathbb{Z}^m$ and the components satisfy $q_{ij}:=q_i(\omega_j)$, with $\omega_j \in \Omega$. The matrix $\mathcal{Q}$ has size $m\times n$ and its components are integers; its columns are given by the set $\{{\bf q}_1,\cdots, {\bf q}_n\}$. The entries $q_{ij} = q_i(\omega_j)$, together with $q_0(\omega_j)=1$, give the {\it directional statistics.}
\smallskip
$\bullet$ Put $t_i=e^{\theta^i}\in \mathbb{R}_{>0}$. The monomial is then $t_i^{q_i(\omega_j)}=\exp{\{\theta^iq_i(\omega_j)\}}$ and one can write the following equation:
\[
\exp\left\{\sum_{i=1}^n \theta^iq_{ij}\right\}=\prod_{i=1}^n t_i^{q_{ij}}=\boldsymbol{\tau_j}
\]
So, to conclude, we have that $p(q;t)$ can be rewritten as the product $t_1^{{\bf q}_1}\cdots t_n^{{\bf q}_n}$.
Moreover, since we assumed that the $q_{ij}$ are integers, the $\boldsymbol{\tau_j}$ (for $j=1,\cdots,m$) are Laurent monomials in the $t_i$.
Therefore, each vector ${\bf q_i}$ is identified with a monomial $t^{\bf q_i}$ in the Laurent polynomial ring $\mathbb{Q}[t^{\pm}]$, where $\mathbb{Q}[t^{\pm}]:=\mathbb{Q}[t_1,\cdots,t_m,t_1^{-1},\cdots,t_m^{-1}]$.
Statistically speaking, the monomial $t_i^{q_i(\omega_j)}$ can be interpreted as the probability of having the canonical parameter $\theta_i$ in the direction of $q_i(\omega_j)$ for the event $\omega_j$. The $m$-tuple $(t_1,\cdots,t_m)\in(\mathbb{Q}^*)^m$, where $t_i=\exp{\theta^i}$, forms the {\it probability coordinates}.
\smallskip
Going back to the classical construction of the toric ideal, we apply the following.
Take the (semigroup) homomorphism:
\[\pi: \mathbb{N}^n\to \mathbb{Z}^m,\quad \mathbf{u}=(u_1,\cdots,u_n)\mapsto\sum_{i=1}^n u_i{\bf q_i}.\]
The image of $\pi$ is the semigroup:
\[\mathbb{N}\mathcal{Q}=\{\lambda_1{\bf q_1}+\cdots+ \lambda_n{\bf q_n}\, :\, \lambda_1,\cdots, \lambda_n\in \mathbb{N}\}.\]
This map $\pi$ lifts to a homomorphism of semigroup algebras:
\[\hat{\pi}: \mathbb{Q}[{\bf y}]\to \mathbb{Q}[t^{\pm1}],\quad y_i\mapsto t^{\bf q_i},\]
where $\mathbb{Q}[{\bf y}]$ is a polynomial ring in the variables ${\bf y}:=(y_1,\cdots, y_n)$.
The kernel of the homomorphism $\hat{\pi}$ is the toric ideal $\mathcal{I}_{T}$ of $\mathcal{Q}$. The multiplicative group $(\mathbb{Q}^*)^m$ is known as the $m$-dimensional algebraic torus. The variety of the form $V(\mathcal{I}_{T})$ is the affine toric variety. So, we have shown the existence of an $m$-dimensional algebraic torus for the exponential statistical manifolds. This algebraic torus is given by the {\it probability coordinates} $(t_1,\cdots,t_m)\in(\mathbb{Q}^*)^m$, where $t_i=\exp{\theta^i}$.
Note that for $\dim(\mathcal{Q})=m$, one can visualise the dense torus using the fact that the set $V(\mathcal{I}_{T})\cap (\overline{\mathbb{Q}}^*)^m$ is an algebraic group under coordinate-wise multiplication which is isomorphic to the $m$-dimensional torus $T=(\overline{\mathbb{Q}}^*)^m$.
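As a toy verification of this construction (our sketch; the brute-force search is for illustration only, not an efficient algorithm), one can look for pairs of exponent vectors ${\bf u},{\bf v}$ with $\pi({\bf u})=\pi({\bf v})$; each such pair yields a binomial $y^{\bf u}-y^{\bf v}$ in the toric ideal $\mathcal{I}_{T}$:
\begin{verbatim}
import numpy as np
from itertools import product

def binomial_relations(Q, max_deg=2):
    # Pairs u, v with Q u = Q v, i.e. pi(u) = pi(v); each pair
    # gives a binomial y^u - y^v in the toric ideal I_T of Q.
    n = Q.shape[1]
    monos = list(product(range(max_deg + 1), repeat=n))
    return [(u, v) for i, u in enumerate(monos) for v in monos[i + 1:]
            if np.array_equal(Q @ np.array(u), Q @ np.array(v))]

# columns q_1 = (1,0), q_2 = (0,1), q_3 = (1,1)
Q = np.array([[1, 0, 1],
              [0, 1, 1]])
rels = binomial_relations(Q)
assert ((0, 0, 1), (1, 1, 0)) in rels   # the relation y_3 = y_1 y_2
\end{verbatim}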
To each point $P\in U_i$, where $(U_i,\phi_i)$ is a chart, we apply the homomorphism construction above. The coordinate functions $y_i$ on the chart $U_i$ can be expressed as Laurent monomials in the adequate coordinates. In changing from one chart to another the coordinate transformation remains monomial. So, this forms a smooth toric variety, where we have a collection of charts $y_i: U_i\to\mathbb{Q}^n$, such that on the intersections of $U_i$ with $U_j$ the coordinates $y_i$ must be Laurent monomials in $y_j$.
A toric variety with a collection of charts determines a system of cones $\{\sigma_a \}$ in $\mathbb{R}^n$. Putting coordinates $x_1,\cdots, x_n$ on a given fixed chart $U_0$, the coordinate functions $x^{(a)}$ on the remaining charts $U_a$ can be represented as Laurent monomials in $x_1,\cdots, x_n$.
Furthermore, if we have a regular function $f$ on $U_a$, then it can be represented as a Laurent polynomial in $x_1,\cdots, x_n$. The regularity condition for a function $f$ on the chart $U_a$ can be expressed in terms of the support of the corresponding Laurent polynomial $\tilde{f}.$ For $\tilde{f}=\sum_{m\in \mathbb{Z}^n}c_mx^m$ the support of $\tilde{f}$ is the set $\{m\in \mathbb{Z}^n \, |\, c_m\neq 0\}$ and with each chart $U_a$, we associate a cone $\sigma_a$ generated by the exponents of $x_1^{(a)},\cdots, x_n^{(a)}$ as Laurent polynomials in $x_1,\cdots, x_n$.
An arbitrary Laurent polynomial $\tilde{f}$ is regarded as a rational function on $X$. Regularity of this function on the chart $U_a$ is equivalent to $supp(\tilde{f})\subset \sigma_a$. Thus, various questions on the rational function $\tilde{f}$ on the toric variety $X$ reduce to the combinatorics of the positioning of $supp(\tilde{f})$ with respect to the system of cones $\{\sigma_a \}$.
Reciprocally, one can construct a toric variety by specifying a system of cones $\{\sigma_a \}$ satisfying certain properties. These requirements can be most conveniently stated in terms of the system of dual cones and lead to the notion of a fan.
\end{proof}
\begin{cor}
Consider the exponential statistical variety defined for a discrete finite sample space.
If we have a regular function $f$ on $U_a$, then it can be represented as a Laurent polynomial in $x_1,\cdots, x_n$. The regularity condition for a function $f$ on the chart $U_a$ can be expressed in terms of the support of the corresponding Laurent polynomial $\tilde{f}$ and in the exponential variety it is given by the directional statistics. In particular, the cone is generated by the directional statistics.
\end{cor}
\begin{proof}
This follows from the discussion above.
\end{proof}
Now, we argue that the Manin conjecture holds for exponential statistical manifolds. Indeed, following the construction of Batyrev--Tschinkel~\cite{BaT}, the Manin conjecture is true for toric varieties. From the above statement (Prop. \ref{P:tor}) it follows that the exponential statistical manifolds (defined over a finite sample space) have the structure of a (real) toric variety. Therefore, it follows that a smooth projectivised version of an exponential statistical manifold defined over a finite and discrete sample space satisfies the Manin conjecture.
\smallskip
\section{Words, codes and algebraic structures in information transmission}\label{S:3}
\subsection{Motivation}
As was shown in previous works, Moufang loops and quasigroups are central in information geometry.
We focus on the situation of coding or of error making during the transmission of a given information. It turns out that the algebraic structures of loops and quasigroups offer the right language and formalism to deal with this type of problem. This is developed in Sec. 2.3 and the following sections. We recall below some results relating structure codes and not-necessarily-commutative Moufang loops and quasigroups.
Commutative Moufang Loops appear in the symmetries of the space of probabilities: {\it automorphisms of order two} that are boundary limits of the reflections of geodesics about the center, come equipped with a structure of a quasigroup. These {\it automorphisms} define a composition law on the set of points that forms an {\it abelian quasigroup}.
Similarly, not-necessarily-commutative Moufang loops and quasigroups appear in other aspects of information geometry, grouped around codes and structure codes (see~\cite{Mouf,QuOp}). We will mention a few results in relation to this in what follows.
\smallskip
Based on the works in \cite{Err}, families of codes are defined as follows. We choose and fix an integer $q \geq 2$ and a finite set: the alphabet $A$ of cardinality $q$. An (unstructured) {\it code} $C$ is defined as a nonempty subset $C \subset A^n$ of words of length $n\geq 1$.
A sequence $w=(\alpha_i)$, $i = 1,2,\dots,n$, of elements of $A$ is called a word of length $n$. We denote by $n(C)$ the common length of all words in $C$. Such a subset $C$ comes equipped with its code point datum. This is given by a pair $P_C=({\rm R}(C), \delta(C))$, where
${\rm R}(C)$ is called the {\it transmission rate} and $\delta(C)$ is the {\it relative minimal distance} of the code.
\begin{itemize}
\item The relative minimal distance of the code $\delta(C)$ is given by the quotient
$\delta(C):=\frac{d(C)}{n(C)},$ where
$d(C)=\min\{d(a,b)\, |\, a,b \in C, a\neq b\}$ is the minimal distance between two different words in $C$; $n(C) := n$ and $d(a,b)$ is the Hamming distance between two words:
$d((\alpha_i),(\alpha'_i)):= card\{i \in \{1,\cdots, n\}\, | \, \alpha_i \neq \alpha'_i\};$
\item The transmission rate ${\rm R}(C)$ depends on $\log_{q}(Card(C))$, i.e. we have ${\rm R}(C)=\frac{[\log_{q}(Card(C))]}{n(C)}$.
\end{itemize}
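These quantities are straightforward to compute; below is a minimal sketch (ours; the function names are hypothetical, and we drop the integer part appearing in the bracket above):
\begin{verbatim}
from math import log

def hamming(a, b):
    # Hamming distance between two words of equal length
    return sum(x != y for x, y in zip(a, b))

def code_point(C, q):
    # code point (R(C), delta(C)) of a code C over a q-letter alphabet
    words = list(C)
    n = len(words[0])                       # n(C): common word length
    d = min(hamming(a, b) for i, a in enumerate(words)
            for b in words[i + 1:])         # minimal distance d(C)
    return log(len(C), q) / n, d / n        # (R(C), delta(C))

# repetition code of length 3 over the binary alphabet
assert code_point({'000', '111'}, q=2) == (1/3, 1.0)
\end{verbatim}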
Note that for our investigations the code point $P_C=({\rm R}(C), \delta(C))$ will not be directly considered, although it is implicitly present.
As mentioned in earlier works of \cite{Mouf} (section 5.2), Moufang symmetries generally become visible in the so-called {\it structured codes.} The most studied structure codes appear in linear codes and algebraic-geometric codes. Concerning the former (linear codes) one considers the alphabet $S:= \mathbb{F}_q$, the finite field of cardinality $q$, and the codes $C \subset \mathbb{F}_q^{n}$ form $\mathbb{F}_q$-linear subspaces. Concerning the latter (algebraic-geometric codes) one has the same class of alphabets, but the difference is that one considers
$\mathbb{F}_q$-points in an affine (or projective) $\mathbb{F}_q$-scheme with a chosen coordinate system.
Moufang symmetries appear indirectly in this geometric setting. Their existence can be seen using various formalisms, motivated for instance by theoretical physics. Let us recall some definitions on loops and quasigroups.
\subsection{Quasigroups and Moufang symmetries}
For the convenience of the reader, we recall below the algebraic structures of quasigroups, loops, Moufang loops.
\begin{enumerate}
\item Let $A$ be a finite set of cardinality $q$. A {\it binary operation} on a set $A$ is a mapping $\diamond:A\times A\to A$ which associates to every ordered pair $(a,b)$ of elements in $A$ a unique element $a\diamond b$. A set with a binary operation is called a {\it magma}.
\item A quasigroup is a magma (i.e. a set $A$ with a binary multiplication denoted by $\diamond$) such that in the equation $x\diamond y=z$ the knowledge of any two of $x, y, z$ specifies uniquely the third. Latin squares form the multiplication tables of quasigroups.
\item Based upon the set $A$ a Latin square is a $|A| \times |A|$ array in which each element of $A$ occurs exactly once in each row and exactly once in each column. In particular, for all ordered pairs $(a,b)\in A^2$ there exist unique solutions $x,y \in A$ to the equations:
$x\diamond a=b,\quad a\diamond y=b$, and those solutions are precisely given by the Latin squares.
\,
Differently speaking, for each element $r$ of a magma $(A, \diamond)$ one can define the left multiplication:
\[L(r)=L_r:A\to A,\quad x\mapsto r \diamond x\]
and the right multiplication:
\[R(r)=R_r:A\to A,\quad x\mapsto x\diamond r.\]
The operators $L$ and $R$ form bijections of the underlying set $A$. We call them left (resp. right) {\it translation} maps.
In particular, this allows us to reformulate the definition of a quasigroup using the translation maps: a magma is a combinatorial quasigroup iff the left multiplication $L(r)$ and the right multiplication $R(r)$ are bijective for each element $r$ of $A$ (a code sketch of these translation maps is given after this list).
We can add to this structure the possibility of having a unit denoted $e$, i.e. such that $e\diamond a = a\diamond e = a$ holds for any element $a\in A$.
\item A quasigroup $(A, \diamond)$ is a nonempty set $A$ equipped with a binary multiplication $\diamond: A \times A \to A$
and such that, for each $a \in A$, the right and left translation maps $R(a): A \to A$ and $L(a): A \to A$, given by
$R(a): r\mapsto r\diamond a$ and $L(a): r\mapsto a\diamond r$, are permutations of $A$.
If there is a two-sided identity element $1_A = 1_{(A,\diamond)}$ then $A$ is a loop.
\item A loop is called Moufang if it is a unital quasigroup (it has a unit and every element is invertible) with near associativity relations:
\[(a\diamond b)\diamond (c\diamond d)=a\diamond ((b\diamond c)\diamond d),\]
\[a\diamond (a \diamond b) = (a \diamond a) \diamond b,\quad (a \diamond b) \diamond (a \diamond c) = (a \diamond a) \diamond (b \diamond c),\]
where $(a,b,c,d)\in A^4$.
\end{enumerate}
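The following minimal Python sketch (ours; all names are illustrative) encodes a finite quasigroup by its Latin square and exposes the translation maps $L$ and $R$; the table used is the one of the example in the next subsection:
\begin{verbatim}
class Quasigroup:
    # A finite quasigroup given by its Latin-square table:
    # table[a][b] encodes a <> b; rows and columns are permutations.
    def __init__(self, table):
        self.table = table
        elems = set(table)
        assert all(set(row.values()) == elems for row in table.values())
        assert all({table[a][b] for a in elems} == elems for b in elems)

    def L(self, r):                    # left translation  x -> r <> x
        return lambda x: self.table[r][x]

    def R(self, r):                    # right translation x -> x <> r
        return lambda x: self.table[x][r]

rows = {'a': 'bcda', 'b': 'cdab', 'c': 'dabc', 'd': 'abcd'}
Q4 = Quasigroup({r: dict(zip('abcd', rows[r])) for r in 'abcd'})
assert Q4.L('a')('c') == 'd' and Q4.R('b')('b') == 'd'
\end{verbatim}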
\smallskip
Going back to our previous discussion on code loops, the loop $\mathcal{L}$ is, roughly speaking, given by the sequence:
\[0\to R\to \mathcal{L} \to C\to 0,\]
where $R$ is a ring (which will be more precisely defined below); $C\subset \mathbb{F}^n_{2^r}$ is a linear code, equipped with an additional structure which is introduced in the next paragraph: the ``almost-symplectic structure''. In order to give the reader a flavour, we recall this notion and explain in more detail how the loops appear. For further information we refer to~\cite{sem}.
\smallskip
An almost symplectic structure on a finite dimensional vector space $V$ over $\mathbb{F}_q$ ($q$ odd) is a non-degenerate skew-symmetric form $\omega: V\times V \to \mathbb{F}_q$ where $\omega$ satisfies the
anti-symmetry $\omega(u,v) = -\omega(v,u)$, with $\omega(u,0) = \omega(0,u) = 0$,
and for any non-null element $u$ in $V$ there exists some $v\in V$ satisfying $\omega(u,v)\neq0$.
A polarisation of the almost-symplectic form is a function $\beta : V \times V\to R$ satisfying the relation
\[\beta(u,v)-\beta(v,u)=\omega(u,v).\]
\smallskip
Consider the finite field $\mathbb{F}_{2^r}$ and identify it with the residue field: $\mathcal{O}_K/{\bf m}_K$,
where $K$ is an unramified extension of degree $r$ of ${\bf Q}_2$; $\mathcal{O}_K$ is the ring of integers and ${\bf m}_K$ the maximal ideal.
The ring $R$ in the above short exact sequence is given by $R=\mathcal{O}_K/{\bf m}^2_K$.
The construction of the almost-symplectic code loop $\mathcal{L}(V, \beta)$ over $\mathbb{F}_q$ where $q=2^r$ is an extension given by the short exact sequence:
\[0\to R\to \mathcal{L}(V,\beta) \to V\to 0,\]
where $(V,\beta)$ is an almost-symplectic vector space $(V,\omega)$ with polarization $\beta$ over $\mathbb{F}^n_{2^r}$.
\smallskip
This setup motivates our investigations concerning codes and error-codes. In particular, we give a construction allowing one to take into account all possible errors (or error corrections) occurring during the transmission of some information. The framework of quasigroups and loops fits this type of problem adequately.
\subsection{Words, codes and algebraic structures}
The algebraic structures of spaces of words and codes are interesting to study. As soon as one associates to the words of the code $C$ some given meaning, a code $C$ forms a type of dictionary for a given language. A finite combination of code words forms sentences in this language. However, it can happen that an information encoded by such a sentence is distorted during the transmission, so that mistakes may appear in the receiver's message, changing thus its meaning.
The types of mistakes that can possibly occur are listed below:
\begin{enumerate}
\item letters in a word can be permuted,
\item one letter can be replaced by another letter (we say that this letter has been translated or shifted to another one),
\item new letters can be added to the word,
\item letters can be lost in the word,
\item new words can be added to the preexisting word.
\end{enumerate}
In the following part of this section we consider the first two types of mistakes. We argue that quasigroups and loops offer the perfect setting to define these types of operations.
Mistakes of type (3), (4), (5) are considered in the next section where the notion of modified parenthesised braids is introduced.
\smallskip
\begin{ex}
Consider the alphabet $A_4=\{a,b,c,d\}$ and suppose the associated Latin square is as follows.
\begin{center}
\renewcommand\arraystretch{1.3}
\setlength\doublerulesep{0pt}
\begin{tabular}{r||*{4}{c|}}
$\cdot$ & a & b & c & d \\
\hline\hline
$a$ & b & c & d & a \\
\hline
$b$ & c & d & a & b \\
\hline
$ c$ & d & a & b & c \\
\hline
$d$ & a & b &c & d \\
\hline
\end{tabular}
\end{center}
Then the word $w=(c(ab)d)$ can be distorted using the translation maps as $(L_a(c)(aR_b(b))d)$ and the receiver reads $(d(ad)d)$.
\end{ex}
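Using the \texttt{Quasigroup} sketch from the previous subsection, the distortion in this example can be checked mechanically (again a hypothetical illustration of ours):
\begin{verbatim}
# distort w = (c(ab)d): apply L_a to the letter c and R_b to the letter b
w = ['c', 'a', 'b', 'd']
distorted = [Q4.L('a')(w[0]), w[1], Q4.R('b')(w[2]), w[3]]
assert ''.join(distorted) == 'dadd'   # the receiver reads (d(ad)d)
\end{verbatim}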
We do not assume commutativity (unless it is clearly stated), i.e. the word {\it bac} is not equivalent to {\it cab}. When it comes to parenthesised words, associativity is not assumed either, so that $b(ac)$ is not equivalent to $(ba)c$.
\smallskip
We now introduce the following notations and make the corresponding notions explicit.
\begin{itemize}
\item Consider an alphabet $A$ (finite set of cardinality $n$).
\item By $M_{p}(A)$ we denote the parenthesised $p$-words formed from the alphabet $A$. Repetitions of letters are allowed.
\item $M_n(A)$: sum of $M_p(A)\times M_{n-p}(A)$, where $1\leq p\leq n-1$. A word $w\in M_n(A)$ can be written as the concatenation of two smaller words, strictly contained in between an open and a closed parenthesis i.e. we have $w=(w')\circ (w'')$ where $w'$ is of length $p$ and $w''$ of length $n-p$. We call those subwords the {\it blocks} of $w$.
\item The sum of the family $(M_n(A))_{n\geq 1}$ is denoted $M(A)$.
\item $M(A)$: the free magma, with composition law $w,w' \mapsto w\circ w'$.
\item $\mathbb{M}_k(A)$ is used only for parenthesised $k$-words with {\it distinct letters}. Note that for this notation to be consistent it is necessary that $k\leq n$. In particular, $\mathbb{M}_n(A)$ are the parenthesised $n$-words with $n$ distinct letters.
\end{itemize}
There is a clear separation of $M_{n}(A)$ into two subclasses made of those words with distinct letters $\mathbb{M}_n(A)$ and those words with repeating letters.
\begin{rem}
Concerning the last class of parenthesised words, if we take for example $A=\{a,b,c,d\}$, then an element of $M_{5}(A)$ can form a word where letters repeat: the expression $(a(bd))(ba)$ represents such an element of $M_{5}(A)$.
\end{rem}
\smallskip
We now consider the connection between the free magma structure $M(A)$ and the magma $(A,\circ)$ on which the quasigroup $QG=(A, \diamond, L,R)$ acts. Suppose for simplicity that $A$ has cardinality $n$.
Then we have an action on the sequence of letters forming a word $w=(x_1\cdots x_n)$, which we can write as an $n$-tuple $(x_1,\cdots, x_n)$, such that each entry (letter) of the $n$-tuple is translated by a left map $L$ or a right map $R$. To avoid any source of confusion we denote the translation of all letters of the word by $T$. So, we have the following:
\[QG\times M_n(A)\to M_n(A)\]
\[(T, (x_1,\cdots, x_n))\mapsto (T(x_1),\cdots, T(x_n))\]
Recall the construction of $M(A)$. For any $w\in M(A)$, there exists a unique $n\geq 1$ such that $w\in M_n(A).$
For a given pair of words $(w,w')\in M_p(A)\times M_{q}(A)$ of length $p$ and $q$ respectively, we can define a product $w\circ w'$ forming an element of $M_{p+q}(A)$. The set $M(A)$ with the composition law $w,w' \mapsto w\circ w'$ is the free magma. So, we can state the following lemma.
\begin{lem}\label{L:bki}
Let $A_n$ be a finite set of cardinality $n$. Consider the magma $\mathfrak{N}:=(A_n,\circ)$ and suppose that there exists a quasigroup $QG_n=(A_n, \diamond, R, L)$ acting on the letters of the words of length $n$.
Then, there exists a unique morphism $g:M(A)\to \mathfrak{N}$ from the free magma on $A_n$ to $\mathfrak{N}$.
\end{lem}
\begin{proof}
A quasigroup is a magma $\mathfrak{N}$ where every element is invertible. Let us fix a bijection $f: A \to \mathfrak{N}$.
By induction we can construct the morphism $g$ as follows.
\begin{itemize}
\item Let $f_1=f:M_1(A)\to \mathfrak{N}$, where $M_1(A)=A$.
\item For $n\geq 2$, we have $f_n:M_n(A)\to \mathfrak{N}$ and given $w\circ w' \in M_p(A)\times M_{n-p}(A)$,
$f_n(w\circ w')=f_p(w)\circ f_{n-p}(w')$.
\end{itemize}
There exists a unique morphism $g$ inducing $f_n$ on $M_n(A)$ for all $n\geq 1$. So, $g$ is the unique morphism of $M(A)$ into $\mathfrak{N}$ which extends $f$.
\end{proof}
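A recursive sketch of this unique morphism $g$ (ours; parenthesised words are encoded as nested pairs, and we evaluate in the quasigroup \texttt{Q4} of the earlier sketch):
\begin{verbatim}
def evaluate(tree, op):
    # the unique magma morphism extending f on letters:
    # evaluate a parenthesised word (nested pairs) in (A_n, o)
    if isinstance(tree, str):          # a single letter, M_1(A) = A
        return tree
    left, right = tree
    return op(evaluate(left, op), evaluate(right, op))

word = (('a', 'b'), ('c', 'd'))        # the word ((ab)(cd))
assert evaluate(word, lambda x, y: Q4.table[x][y]) == 'b'
\end{verbatim}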
\begin{lem}
Consider the quasigroup $(A,\diamond,L,R)$ acting on $M_n(A)$, $n\geq 2$. Then, any permutation of the letters of a word $w\in M_n(A)$ can be recovered by an adequate combination and choice of translation maps $L$ and $R$.
\end{lem}
\begin{proof}
Any permutation can be obtained by a product of transpositions. Now, the operators $L_x$ and $R_y$ define a transposition iff
\begin{equation}\label{E:equation}x\diamond a=b\quad \text{and}\quad b\diamond y=a,\end{equation}
where $x,y \in A$.
So, given a quasigroup $(A,\diamond,L,R)$ any permutation of the letters of a word $w\in M_n(A)$ can be obtained for every $n\geq 2$ from $L$ and $R$.
\end{proof}
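Condition \eqref{E:equation} can be solved mechanically in any finite quasigroup; a sketch (ours, reusing \texttt{Q4}):
\begin{verbatim}
def transposition_maps(Q, a, b):
    # solve x <> a = b and b <> y = a  (condition (E:equation)),
    # so that L(x) sends a to b and R(y) sends b back to a
    elems = Q.table.keys()
    x = next(e for e in elems if Q.table[e][a] == b)   # x <> a = b
    y = next(e for e in elems if Q.table[b][e] == a)   # b <> y = a
    return x, y

x, y = transposition_maps(Q4, 'a', 'b')
assert Q4.L(x)('a') == 'b' and Q4.R(y)('b') == 'a'
\end{verbatim}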
Restricting our attention to $\mathbb{M}_{n}A$, we have a free magma structure, defined as follows:
\begin{itemize}
\item $\mathbb{M}_{0}A=\emptyset$
\item $\mathbb{M}_{1}A=A$
\item $\mathbb{M}_{n}A=\sqcup_{p+q=n}\mathbb{M}_{p}A\times \mathbb{M}_{q}A.$
\end{itemize}
\begin{rem}
We can interpret $\mathbb{M}_{n}A$ differently as a set of rooted binary planar trees (each internal vertex has exactly two incoming edges) with $n$ leaves labelled by elements of $A$.
\end{rem}
Let $M = \{M(n)\}$ be the symmetric sequence where $M(n)$ is the subset of $\mathbb{M}_n \{1,\cdots, n\}$ consisting of the monomials in $\{1, ..., n\}$ where each element of the set occurs exactly once. The symmetric group $\mathbb{S}_n$ acts from the right on $M(n)$ by permuting the elements of the set $\{1, ..., n\}$. The symmetric sequence $M$ becomes an operad with operadic composition given by replacing letters by monomials (or grafting binary trees). The operad $M$ is called the magma operad.
\begin{cor}
Let $A_n$ be a set of cardinality $n$.
Let $M = \{M(n)\}$ be the symmetric sequence, where $M(n)$ is the subset of $\mathbb{M}_n \{A_n\}$. Then, any permuted $n$-sequence of $M(n)$ in $(A_n)^n$ can be recovered from the action of the quasigroup $(A_n,\diamond,L,R)$ on the elements of $\mathbb{M}_n \{A_n\}$ where for any transposition $(x_ix_j)$ we put the condition that $L(x_i)=x'_i$ and $R(x_j)=x'_j$ for $x'_i=x_j$ (resp. $x'_j=x_i$) and the rest of the letters remain unchanged.
\end{cor}
\begin{proof}
Every $M(n)$ is formed by all words of length $n$, where letters are all distinct.
As was previously shown, any permutation of a pair of letters in a word $w \in M(n)$ is obtained from $L$ and $R$ by applying condition~\eqref{E:equation}. So, any symmetric sequence in $M(n)$ is obtained by taking a word with distinct letters, and one can apply the operators $L$ and $R$ (and condition~\eqref{E:equation}) to any pair of letters so that this defines a transposition. So, the action of the quasigroup $(A_n,\diamond,L,R)$ on $n$-sized words with distinct letters, where $n\geq 1$, allows the construction of any element in $M(n)$.
\end{proof}
\begin{dfn}
The elements in $M(n)=(\mathbb{M}_n(A_n),\mathbb{S}_n)$ are called {\it symmetric sequences} of length $n$; whereas sequences of length $n$ defined from the alphabet $A_n$ and carrying an action of a quasigroup on $A_n$ are {\it translated sequences} of length $n$ and denoted $M_{QG}(n)$.
\end{dfn}
Their relation can be described in the next diagram:
\begin{center}\begin{tikzcd}
M(n)\arrow[hookrightarrow,"i"]{r} & M_{QG}(n)\arrow[hookrightarrow,"j"]{r} \arrow[d,"f_n"] & M(A_n)\arrow[dl,"g"]\\
&(A_n,\circ)&
\end{tikzcd}\end{center}
In short, we can consider two different objects, one being contained in the other one.
The first one is given by the collection of symmetric sequences $\{M(n)\}_{n\geq 1}$. Morphisms between elements of $M(n)$ are given by the right action of the symmetric group $\mathbb{S}_n$. The second object is given by the collection $\{M(A_n)\}_{n\geq 1}$ where morphisms are translation maps, generated by $L$ and $R$ acting componentwise on the $n$-tuples forming $n$-sized words of $M(A_n)$.
Symmetric sequences can be obtained from translated sequences. All these structures can be obtained set-theoretically from the free magma. The free magma allows a decomposition of the magma $(A_n,\circ)$ by word length.
So, if we restrict our considerations to the case where translation maps form permutations, the following diagram appears:
\begin{center}\begin{tikzcd}
\arrow[ddd,"\cong"] \mathbb{M}_n(A_n) \arrow[dr] \arrow[hookrightarrow,"i"]{r}& M_{n}(A_n)\arrow[d,"f_n"] \arrow[hookrightarrow,"j"]{r}&\arrow[ld,"g"] M(A_n)\arrow[ddd,"\cong"] \\
& (A_n,\circ) \arrow[d,"\sigma\in\mathbb{S}_n"] & \\
& (A_n^\sigma,\circ) & \\
\mathbb{M}_n(A_n^\sigma) \arrow[hookrightarrow,"i"]{r}\arrow[ur,"f_n^\sigma"] & \arrow[u] M_{n}(A_n^\sigma) \arrow[hookrightarrow,"j"]{r} & M(A_n^\sigma) \arrow[lu] \arrow[uuu]
\end{tikzcd}\end{center}
where we have the inclusion morphisms $i,j$. The inclusion $j$ goes from the degree $n$ component, generated by all words of length $n$, to the free magma; $A_n^\sigma=\sigma(A_n)$, and $\sigma\in \mathbb{S}_n$ is a permutation obtained from an adequate combination of translation maps $L$ and $R$ such that they satisfy condition \eqref{E:equation} for any transposition.
\begin{lem}
The composition of translation maps acting on a set $A_n$ is associative.
\end{lem}
For a word of length 3 one can, for instance, check this with the following data:
\begin{itemize}
\item Word: $(x_1,x_2,x_3)$
\item $f=((L_a,R_b,L_c),(x_1,x_2,x_3))$
\item $g=((R_a,R_c,L_b),(y_1,y_2,y_3))$
\item $h=((L_b,L_c,R_b),(z_1,z_2,z_3))$
\end{itemize}
\[LHS=((h\circ g)\circ f)(x) =(L_b(R_a(y_1)),L_c(R_c(y_2)),R_b(L_b(y_3)))\circ f(x)=\]\[
(L_b(R_a(L_a(x_1))), L_c(R_c(R_b(x_2))),R_b(L_b(L_c(x_3)))) \]
\[RHS=(h\circ (g\circ f))(x)=h\circ (R_a(L_a(x_1)),R_c(R_b(x_2)),L_b(L_c(x_3)))=\]
\[(L_b(R_a(L_a(x_1))),L_c(R_c(R_b(x_2))),R_b(L_b(L_c(x_3))))\]
So, RHS=LHS.
\begin{proof}
One can easily check that for a word of length $n$ the statement is true using induction.
\end{proof}
\begin{lem}
Consider a (possibly non-reduced) Latin square associated to a quasigroup $(A_n,\diamond,L,R)$, such that the first row (resp. column) corresponds to the sequence of the letters of a word $w\in\mathbb{M}_n(A_n)$. Then the $n-1$ other rows (resp. columns) of the Latin square form $n$-words which are permutations of $w$.
\end{lem}
\begin{proof}
Take an $n$-word with $n$ distinct letters, $w\in \mathbb{M}_n(A_n)$. We use (a possibly non-reduced version of) the Latin square, such that the sequence of letters in the word $w$ forms the first row or first column. The multiplication table, forming the Latin square, gives permutations of the word $w$. Multiplying each letter of the word $w$ by an element $a\in A_n$ gives a new row or column indexed by the element $a$.
\end{proof}
\begin{ex}
Using the previously discussed example, it is easy to check the above lemma by taking on the first row the monomial $(badc)$. It gives a non reduced Latin square, described below.
\begin{center}
\renewcommand\arraystretch{1.3}
\setlength\doublerulesep{0pt}
\begin{tabular}{r||*{4}{c|}}
$\cdot$ & b & a & d & c \\
\hline\hline
$b$ & d & c & b & a \\
\hline
$a$ & c & b & a & d \\
\hline
$ d$ & b & a & d & c \\
\hline
$c$ & a & d & c & b \\
\hline
\end{tabular}
\end{center}
It is easy to see that applying the translation maps to the entire word $w$ gives three other permutations: $cbad$, for $L_a$; $dcba$ for $L_b$ and $adcb$ for $L_c$.
\end{ex}
\section{Modified parenthesised braids as a key to code-correction}\label{S:mPaB}
In this section, we introduce an object that we call the {\it modified parenthesised braids}. This object serves as a model to consider the space of all paths of errors that may occur during the transmission of a given information. A particular advantage of this object is that it helps visualise the distortion process easily (via modified braids) and thus makes the correction process easier.
Previously, we have shown that for all $n\geq1$, each $M_{n}(A)$ comes equipped with a decomposition:
\[M_{n}(A)=\sqcup_{1\leq p\leq n-1} M_{p}(A)\times M_{n-p}(A),\]
indexed by the decompositions $n=p+(n-p)$ of the integer $n$.
This precise procedure allows one to define the parenthesisation in the case of parenthesised words. The number of ways to insert $n$ pairs of parentheses in a word of $n+1$ letters is the celebrated Catalan number $Cat(n)$. For $n=2$ there are 2 ways: $((ab)c)$ or $(a(bc))$; for $n=3$ there are 5 ways: $((ab)(cd)), (((ab)c)d), ((a(bc))d),$ $(a((bc)d)), (a(b(cd)))$.
Among the $\binom{2n}{n}$ paths on the $\mathbb{Z}$ lattice that start at $(0, 0)$ and end at $(2n, 0)$,
where each step is either a $(+1,+1)$ step or a $(+1,-1)$ step, the number of paths that never go below the $x$-axis (also known as the Dyck paths) is $Cat(n)$; the parenthesisations above are in bijection with these Dyck paths.
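Both counts, and the encoding of a bracketing as a Dyck path, are easy to reproduce; a short sketch of ours:
\begin{verbatim}
def parenthesisations(letters):
    # all full bracketings, mirroring M_n(A) = sum of M_p x M_{n-p}
    if len(letters) == 1:
        return [letters]
    return ['(' + w1 + w2 + ')'
            for p in range(1, len(letters))
            for w1 in parenthesisations(letters[:p])
            for w2 in parenthesisations(letters[p:])]

assert len(parenthesisations('abc')) == 2    # Cat(2)
assert len(parenthesisations('abcd')) == 5   # Cat(3)

def dyck_path(word):
    # '(' is a (+1,+1) step, ')' a (+1,-1) step
    return ''.join('U' if c == '(' else 'D' for c in word if c in '()')

assert dyck_path('((ab)c)') == 'UUDD'
\end{verbatim}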
\begin{prop}
Let $n\geq 1$ be an integer.
To every parenthesised word $w\in M_{n}(A_n)$ one can associate a corresponding Dyck path of size $n$ in the real plane, starting at $(0, 0)$ and ending at $(2n, 0)$.
\end{prop}
\begin{proof}
Using the inductive construction on $M_{n}(A_n)$ mentioned above, it is easy to obtain a word with parentheses. Now, concerning the Dyck paths, a step up (i.e. with coordinates $(+1,+1)$) corresponds to an opening parenthesis and a step down (i.e. a step with $(+1,-1)$ coordinates) corresponds to a closing parenthesis. Some vertices of the Dyck path may carry a label which corresponds to the letter(s) of the corresponding block in the word. \end{proof}
Note that some vertices can be labeled by an $r$-tuple accordingly to the corresponding block.
The following Dyck path corresponds to the word $((abc)d)(ef)(g(hi))$
\begin{center}\includegraphics[scale=0.39]{Dyckdgm}.
\end{center}
In what follows, it is important to distinguish the category $\mathbf{PaT}$ of parenthesised translated words, introduced next, from the actual quasigroup providing the left and right translation maps.
\begin{prop}
The parenthesised translated words $\mathbf{PaT}$ form a category.
\begin{itemize}
\item Objects are the collection $M_n(A_n)$ of parenthesised words of length $n$, where $n\geq 1$ formed from the alphabet $A_n$.
\item Morphisms are non-empty only for words of the same length. These morphisms are given by the componentwise action on the $n$-tuple of letters given by the translation maps $L$ and $R$ applied to the letters of the words. Translation maps can permute letters of a word or shift a given letter to another one, and obey the multiplication table given by the Latin square.
\end{itemize}
\end{prop}
\begin{proof}
Objects are the collection of parenthesised words of size $n$, where $n\geq 0$, formed from an alphabet $A_n$ of cardinality $n$ and where letters are allowed to repeat. Morphisms map an $n$-sized word to another $n$-sized word, using the left and right translation maps. Those maps act on the letters of the word according to the corresponding $n\times n$ Latin square. Compositions of the translation maps are clearly allowed. There exists an identity morphism $Id$, so that given a word $w$ we have $Id_w:w\to w$. This is possible since for any $L_r(x):x\mapsto rx=c$ (for $x \in A_n$ a letter of $w$) there exists a divisor of $c$ giving back $x$ (by definition of a quasigroup). Associativity holds: consider a sequence $(x_1,\cdots, x_n)$; one can act on each letter using the left and right translation maps so that $(x_1,\cdots, x_n)$ is mapped to $(T_1(x_1),\cdots, T_n(x_n))$, where $x_1,\cdots, x_n\in A_n$ and the $T_i$ are translation maps obtained as compositions of right and left maps acting on each letter independently. The composition of translation maps is associative. In relation to this, it is important to distinguish the operation of composing translation maps acting on a sequence of letters from the multiplication operation in the quasigroup.
\end{proof}
\medskip
\subsection{Parenthesised modified braid groupoid ${\bf mPaB}$}
In this section, we rely on the construction presented in~\cite{BarN,BrHoRo,Dri90}, in order to show that the same structure applies for the modified parenthesised braids as for the parenthesised braids.
Roughly speaking by parenthesised braid we mean a braid whose ends (i.e. top and bottom lines) correspond to parenthesised ordered points along a line. Let $B$ be such a parenthesised braid with $n$ strands.
\smallskip
In other words, the operad of parenthesised braids ${\bf PaB}$ is the operad in groupoids
defined as follows (see Def. 6.11~\cite{BrHoRo}).
\begin{itemize}
\item The operad of objects is the magma operad, i.e. $Ob({\bf PaB}) = M=\{M(n)\}_{n\geq 0}$.
\item For each $n \geq 0$, the morphisms of the groupoid ${\bf PaB}(n)$ are morphisms in
the (colored) braid groupoid ${\bf CoB}=\{{\bf CoB}(n)\}_{n \geq 0}$, where
$Hom_{{\bf PaB}(n)}(p,q)=Hom_{{\bf CoB}(n)}(u(p),u(q))$ with $p,q\in \mathbb{S}_n$, and the morphisms are braids associated to the permutation $qp^{-1}$.
\end{itemize}
For the reader's convenience we recall the definition of the collection of groupoids ${\bf CoB} = \{{\bf CoB}(n)\}_{n \geq 0}$, following Def. 6.1~\cite{BrHoRo}:
\begin{itemize}
\item ${\bf CoB}(0)$ is the empty groupoid.
\item For $n >0$, the set of objects $Ob({\bf CoB}(n))$ is $\mathbb{S}_n$. For our own purposes, we propose to modify here the classical perspective on this object by defining the generators of $\mathbb{S}_n$ from the point of view of translation maps, i.e. given by some specific combinations of translation maps, lying in the space of all translation maps denoted $T_n$.
\item A morphism in ${\bf CoB}(n)$ from $p$ to $q$ is a braid $\alpha \in B(n)$ whose associated
permutation is $qp^{-1}$.
\end{itemize}
The categorical composition in ${\bf CoB}(n)$ is given by the concatenation operation of braids
\[Hom_{{\bf CoB}(n)}(p, q) \times Hom_{{\bf CoB}(n)}(q, t) \to Hom_{{\bf CoB}(n)}(p, t)\]
inherited from the braid group. We write $a \cdot b$ for the categorical composition of $a$ and $b$.
\smallskip
\begin{rem}
As one can see, this type of object fits the description of errors of type (1) discussed earlier. Those errors are mainly given by a permutation of letters in a word. However, this forms a very restrictive subclass of possible mistakes. Moreover, it is rare to form sentences of words having all letters distinct. Therefore, we add the class (2) of possible errors to our investigations, and so it is necessary to modify the definition of ${\bf PaB}$ slightly so as to obtain a larger range of possible errors.
\end{rem}
The above definitions being settled, we introduce the notion of modified parenthesised braids ${\bf mPaB}$. It is reminiscent of its original version ${\bf PaB}$, in the sense that objects are given by parenthesised words. However, since letters are allowed to repeat in a word, one needs to introduce a modification of the braid. This modification of the braid is given by introducing two supplementary operations: the {\it pinching} operation and the {\it attaching} operation. These operations are a geometric representation of the left (resp. right) translation of a letter into another one, if this letter has already been used in the word.
\begin{dfn}
Consider a pair of strands in a given braid. We say that there exists a {\it pinching operation} whenever those two strands are pinched together at a point. This point lies neither on the top nor on the bottom line of the braid.
We say that there exists an {\it attaching operation} if the pinching lies on either the top or the bottom line of the braid.
\end{dfn}
\begin{figure}
\begin{center}
\includegraphics[scale=0.15]{attach.jpeg}\end{center}
\caption{Pinching and attaching points}\label{F:pinchA}
\end{figure}
See an illustration of this in Fig. \ref{F:pinchA}, where a pinching point is presented between the strands starting at $a$ and $b$, and an attaching point is presented for the strands starting at $b,c,d$. The attaching operation occurs during the transformation of the word $abcd$ into the word $addd$.
\begin{rem}
Note that the pinching/attaching does not imply that the strands have been intertwined. An intertwining of two strands amounts to solving equation~\eqref{E:equation} for a pair of translation maps applied to a pair of letters.
\end{rem}
\begin{ex}
We provide an additional example of a pinched (modified) braid in Fig. \ref{F:pinch}. As one can see, we have a pair of parenthesised words $(ab)(cd)$ on the top line and $a(b(cd))$ on the bottom line. The pinching occurs after $c,d$ and $a,b$ are swapped and the word morphism maps $(ab)(cd)\to (ba)(dc)$.
The following translation map applied to $c$ (given by $L_x(c)=d$) gives the new word $b(a(dd))$, where two strands are pinched at $d$.
\begin{figure} \begin{center}\includegraphics[scale=0.35]{pinch.jpeg}\end{center}\caption{One pinching point}\label{F:pinch}
\end{figure}
\end{ex}
We are now interested in considering the errors of type (3), (4) and (5). This implies adding letters, losing letters or even duplicating letters.
First, we introduce the modified parenthesised braids.
\begin{dfn}
Let ${\bf mPaB}$ be the category whose objects are parenthesised words and whose morphisms are given by a pair $(P_{\star},\sum_{j=1}^k\beta_j\tilde{B}_{j})$, consisting of:
\begin{itemize}
\item a morphism in the category of parenthesised translations, denoted $P_{\star}$;
\item the linear sum of parenthesised modified braids $\sum_{j=1}^k\beta_j\tilde{B}_{j}$, defined such that the skeleton of each $\tilde{B}_{j}$ is $P_{\star}$; the coefficients $\beta_j$ lie in some ``ground algebra''.
\end{itemize}
The composition law in ${\bf mPaB}$ is given by the bilinear extension of the composition law of modified parenthesised braids.
\end{dfn}
\begin{prop}
The modified parenthesised braids ${\bf mPaB}$ form a groupoid where:
\begin{itemize}
\item The objects are the collections of $n$-sized parenthesised translated words.
\item For each $n \geq 0$, the morphisms are in
the modified braid groupoid $\bf{mCoB}(n)$, where: $Hom_{{\bf mPaB}(n)}(p_{\star},q_{\star})=Hom_{\bf{mCoB}(n)}(u(p_{\star}),u(q_{\star})),$
with $p_{\star},q_{\star}\in T_n$ being translations. \end{itemize}
The symbol $\bf{mCoB} = \{\bf{mCoB}(n)\}_{n \geq 0}$ denotes the modified (colored) braids, which consist of a collection of groupoids $\bf{mCoB}(n)$ defined as follows.
\begin{itemize}
\item $\bf{mCoB}(0)$ is the empty groupoid.
\item For $n >0$, the set of objects $Ob(\bf{mCoB}(n))$ are the translations $T_n$ (containing $\mathbb{S}_n$) on a set of $n$ elements, where rules of translating elements are given by the corresponding $n\times n$ Latin squares.
\item A morphism in $\bf{mCoB}(n)$ from the translation $p_{\star}$ to the translation $q_{\star}$ is a modified braid $\alpha_{\star}\in mB(n)$ whose associated
translation is $q_{\star}p_{\star}^{-1}$.
\end{itemize}
\end{prop}
\begin{proof}
By definition, a groupoid is a small category in which every morphism is an isomorphism (i.e. it is invertible). A groupoid is given by a set
of objects; here we take the collection of $n$-sized parenthesised words.
For each pair of objects $w$ and $w'$ in the set of parenthesised words, there exists a (possibly empty) set of morphisms from $w$ to $w'$.
Here those morphisms are given by translating one $n-$word into another one by using the translation maps $L$ and $R$. This morphism can amount to a permutation of the letters of the words (like in the classical ${\bf PaB}$ case) but does not have to. In particular, letters can be shifted into other letters, creating thus a word where letters repeat.
Now, for every word $w$, there is a designated element $\mathrm{id}_w$. This is due to the fact that in a quasigroup every element is invertible and that we can, in addition, add the notion of a neutral element giving the identity (thus forming a loop).
For each triple of objects $w, w'$, and $w''$, one has a composition of translation maps allowing the morphism $f:w\to w'$ to be composed with $g:w'\to w''$ and giving $gf: w\to w''$. Furthermore the morphisms are invertible (and this follows from the definition of translation maps).
This construction then leads to the modified braids. For given translations $p$ and $q$ of a word, one defines an associated modified braid, whose translation is $qp^{-1}$. The domain and range are the parenthesised words corresponding to the translations $p$ and $q$ respectively.
\end{proof}
In order to construct a ``tower'' of modified parenthesised braids, the key setup already exists for the category ${\bf PaB}$, where one has the {\it extension operations, cabling operations, strand removal operations}. The whole point of the next proposition and lemma is first to define rigorously the modified parenthesised braid groupoid, and then to show that the extension, cabling and strand removal operations are inherited by this new object.
\begin{lem}
Let $({\bf PaB},d_i,s_i,d_0)$ be the category of parenthesised braids, equipped with the three operations known as the extension operation $d_0$, the cabling operations $d_i$ and the strand removal operations $s_i$. Then, those operations are inherited by the category ${\bf mPaB}$.
\end{lem}
\begin{proof}
Consider $B$ a parenthesised braid with $n$ strands.
\begin{itemize}
\item {\it Extension operations}. Given a braid $B$, one adds on the left-most (or right-most) side a straight strand, with ends regarded as outer-most. This operation of adding a straight strand does not encounter any obstruction for the modified braids and so it is inherited from ${\bf PaB}$.
\item {\it Cabling operations}. For $1 \leq i \leq n$, let us consider the parenthesised braid obtained from $B$ by doubling its $i$-th strand (counting at the bottom). This cabling operation can be applied in any situation: either when strands are separated as in the classical braid setting or when they are attached/ pinched. So again this operation is inherited from ${\bf PaB}$.
\item {\it Strand removal operations}. For $1 \leq i \leq n$, consider the parenthesised braid
obtained from $B$ by removing its $i$-th strand (counting at the bottom). Removing a strand also holds for the modified braids. \end{itemize}
So, all three operations are well defined for ${\bf mPaB}$.
\end{proof}
\smallskip
We will now prove that ${\bf PaB}$ is a full subcategory of ${\bf mPaB}$.
\begin{cor}
The category ${\bf PaB}$ is a full subcategory of ${\bf mPaB}$ i.e. there exists a full inclusion:
\[ {\bf PaB}\hookrightarrow {\bf mPaB}.\]
\end{cor}
\begin{proof}
To have a full subcategory of ${\bf mPaB}$ it is necessary that for any objects $x,y$ in ${\bf PaB}$ every morphism from $x$ to $y$ in ${\bf mPaB}$ is also in ${\bf PaB}$. This is a true statement since any permutation is given by translation maps $L$ and $R$ (satisfying condition \eqref{E:equation} for any transposition).
\end{proof}
In the setting of the category ${\bf PaB}$, there exists a functor ${\mathbf S}$ called the {\it skeleton functor}, whose image is the category of parenthesised permutations $\mathbf{PaP}$. The operations $d_i, s_i$ of cabling and strand removal are naturally defined on $\mathbf{PaP}$. The skeleton functor ${\mathbf S}$ intertwines the $d_i$'s and the $s_i$'s acting on parenthesised braids and on parenthesised permutations.
The same type of object exists for ${\bf mPaB}$ and parenthesised translations.
Denote the parenthesised translations by $\mathbf{PaT}$. The skeleton functor ${\mathbf S}_{\mathbf{PaT}}$ for ${\bf mPaB}$ is the identity on objects, where objects are parenthesised words in $\mathbf{PaT}$, i.e. words equipped with parentheses and where letters can be permuted or translated, giving thus possibly parenthesised words with non-distinct letters.
\begin{prop}
The category $\mathbf{PaT}$ together with the functor ${\mathbf S}_{\mathbf{PaT}}:{\bf mPaB}\to \mathbf{PaT}$ is a fibered linear category.
\end{prop}
\begin{proof}
The category $\mathbf{PaT}$ together with the functor ${\mathbf S}_{\mathbf{PaT}}:{\bf mPaB}\to \mathbf{PaT}$ forms a fibered linear category, for the following reasons.
First, $\mathbf{PaT}$ has the same objects as ${\bf mPaB}$ and the skeleton functor is the identity on objects. Secondly, the inverse image ${\mathbf S}_{\mathbf{PaT}}^{-1}(P_{\star})$ of every morphism $P_{\star}\in \mathbf{PaT}$ is a linear composition of left and right translation maps $L$ (and $R$) and, similarly to the case of the parenthesised braids, it forms a linear space. The composition maps in ${\bf mPaB}$ are also bilinear in the natural sense.
\end{proof}
\subsection{The Grothendieck--Teichm\"uller group and modified parenthesised braids}\label{S:GT}
We now discuss the following theorem.
\begin{thm}\label{T:GT}
The pro-unipotent Grothendieck--Teichm\"uller group is contained in the groups of structure preserving automorphisms $Aut(\widehat{{\bf mPaB}})$.
\end{thm}
In order to prove this statement, we apply the method of fibered linear categories, as shown in detail in Sec. 2.1.1 of~\cite{BarN}.
Consider ${\bf mPaB}$ and $\mathbf{PaT}$, being respectively the categories of modified parenthesised braids and parenthesised translations. Define a subcategory of the fibered linear category $({\bf mPaB}, {\mathbf S} : {\bf mPaB} \to\mathbf{PaT})$ as follows. Let $P_{\star}$ be a morphism in $\mathbf{PaT}$. Choose {\bf a linear subspace} in each ${\mathbf S}^{-1}(P_{\star})$, so that the system of subspaces chosen is closed under composition. Closed under composition means that two modified braids (such that the bottom line of the first modified braid coincides with the top line of the second one) lie both in the linear subspace generated by ${\mathbf S}^{-1}(P_{\star})$ and defines another modified braid belonging to ${\mathbf S}^{-1}(P_{\star})$. We construct an ideal ${\mathbf I}$ in $({\bf mPaB}, {\mathbf S} : {\bf mPaB} \to\mathbf{PaT})$, which is a subcategory. The quotient ${\bf mPaB}/{\mathbf I}$ of the fibered linear category ${\bf mPaB}$ by the ideal ${\mathbf I}$ is again a fibered linear category.
These fibered linear categories are compatible with further operations allowing the construction of the inverse limit of an inverse system of fibered linear categories (fibered in a compatible way over the same category of skeletons). So, if ${\mathbf I}$ is an ideal in a fibered linear category ${\mathbf B}$, one can form the ${\mathbf I}$-adic completion and this ${\mathbf I}$-adic completion is again a filtered fibered linear category.
\smallskip
\begin{lem}
There exists a tower of modified parenthesised braids \break
$(\widehat{{\bf mPaB}}, \widehat{{\bf mPaB}}\to \mathbf{PaT}, d_i, s_i)$, where
$\widehat{{\bf mPaB}}$ is the unipotent completion of ${\bf mPaB}$.
\end{lem}
\begin{proof}
Define the subcategory of the fibered linear category $({\bf mPaB}, {\mathbf S}_{\mathbf{PaT}} : {\bf mPaB} \to \mathbf{PaT})$ as follows. Let $P_{\star}$ be a morphism in $\mathbf{PaT}$. We choose a linear subspace in each ${\mathbf S}^{-1}(P_\star)$, so that the system of subspaces chosen is closed under composition
(two translations such that the range of the first translation $T_1$ is the domain of the second $T_2$ and both lying in the linear subspace in ${\mathbf S}^{-1}(P_{\star})$ define a translation $T_1\circ T_2$ also in ${\mathbf S}^{-1}(P_{\star})$).
As mentioned earlier, an ideal in $({\bf mPaB}, {\mathbf S}_{\mathbf{PaT}}: {\bf mPaB} \to \mathbf{PaT})$ is a subcategory ${\mathbf I}$ so that if at least one of the two composable morphisms $T_1$ and $T_2$ in $\mathbf{PaT}$ is actually in ${\mathbf I}$, then their composition $T_1 \circ T_2$ is also in ${\mathbf I}$. The ideal ${\mathbf I}^m$ is such that morphisms of ${\mathbf I}^m$ are all those morphisms in ${\bf mPaB}$ that can be presented as compositions of $m$ morphisms in ${\mathbf I}$.
In particular, given that ${\mathbf I}$ is an ideal of a fibered linear category ${\bf mPaB}$, one can form the ${\mathbf I}$-adic completion $\widehat{{\bf mPaB}} = \lim_{m\to \infty}{\bf mPaB}/{\mathbf I}^m$, where the ${\mathbf I}$-adic completion is a filtered fibered linear category.
Take ${\mathbf I}$ to be the augmentation ideal of ${\bf mPaB}$, formed from all pairs $(P,\sum_j\beta_jT_j)$ in which $\sum_j \beta_j=0$. Powers of this ideal define the {\it unipotent filtration} of ${\bf mPaB}$, denoted $\mathcal{F}_m{\bf mPaB}={\mathbf I}^{m+1}$.
Let ${\bf mPaB}^{(m)} ={\bf mPaB}/\mathcal{F}_m {\bf mPaB} = {\bf mPaB}/{\mathbf I}^{m+1}$ be the $m$-th unipotent quotient of ${\bf mPaB}$, and let
$\widehat{{\bf mPaB}}= \lim_{ m\to \infty} {\bf mPaB}^{(m)}$,
be the unipotent completion of ${\bf mPaB}$. The fibered linear categories inherit the operations $d_i$ and $s_i$ and a coproduct and filtration $\mathcal{F}_{*}$.
\end{proof}
\smallskip
Finally this construction leads to considering the automorphism group of the tower of modified braids, $Aut(\widehat{{\bf mPaB}})$,
which is the group of all functors $\widehat{{\bf mPaB}}\to \widehat{{\bf mPaB}}$ covering the skeleton functor, intertwining $d_i, s_i,\square$ (the coproduct functor $\square:{\bf mPaB}\to{\bf mPaB}\otimes {\bf mPaB}$) and fixing the elementary braid $\sigma$ (a crossing of two strands).
\begin{proof}[Proof\, of\, Theorem~\ref{T:GT}]
We have shown that $\widehat{{\bf mPaB}}$ is an enriched version of the construction of $\widehat{PaB}$ in \cite{BarN} (in the sense that it inherits its properties and operations but has some additional structures). We also obtained that ${\bf PaB}$ is a subcategory of ${\bf mPaB}$.
By definition, we have that $\widehat{GT}=Aut(\widehat{{\bf PaB}})$, where $Aut(\widehat{{\bf PaB}})$ is the group of all functors $\widehat{{\bf PaB}}\to \widehat{{\bf PaB}}$ that cover the skeleton functor, intertwine $d_i, s_i$ and $\square$, and fix $\sigma$.
The inclusion of ${\bf PaB}$ in ${\bf mPaB}$ implies that $Aut(\widehat{{\bf PaB}})$ is included in $Aut(\widehat{{\bf mPaB}})$.
So, the pro-unipotent Grothendieck--Teichm\"uller group is contained in the groups of structure preserving automorphisms $Aut(\widehat{{\bf mPaB}})$.
\end{proof}
Finally, using the inclusion theorem of~\cite{Brown} stating that the motivic Galois group is included in $Aut(\widehat{{\bf PaB}})$, we can conclude that:
\begin{cor}\label{C:mot}
The motivic Galois group is contained in the automorphism group $Aut(\widehat{{\bf mPaB}})$.
\end{cor}
We can interpret $\widehat{{\bf mPaB}}$ as modelling a situation where one considers infinitely many errors occurring. It tells us that the pro-unipotent completion of the space of ways of making errors has, among others, the behaviour of the motivic Galois group encapsulated within it.
\subsection{Conjectures and open questions}\label{S:conj}
Moufang loops turn out to be central in the geometry of information, in particular for statistical manifolds (related to exponential families) and codes/error-codes. For instance, symmetries of spaces of probability distributions, endowed with their canonical Riemannian metric of information geometry, have the structure of a commutative Moufang loop.
In a different setting, there exists a connection between Moufang loops algebraic geometry. Recall from \cite{Cu} the relation between Moufang loops and the set of algebraic points of a smooth cubic curve in a projective plane $\mathbb{P}^2_K$ over a field $K$. The set $E$ of $K$-points of such a curve $X$ forms a $CML$ with composition law $x\circ y = u\star (x\star y)$, if $ u + x + y$ is the intersection cycle of $X$ with a projective line $\mathbb{P} ^1_K \subset \mathbb{P}^2_K$.
\smallskip
Given that symmetries of statistical manifolds have the structure of a $CML$ and that Manin's conjecture (coming from algebraic geometry) is true in the case of statistical manifolds, this suggests a stronger connection between the $CML$s coming from algebraic geometry and the $CML$s arising in statistical manifolds. So, an intriguing question following from the properties of statistical manifolds defined above would be to determine whether the set $E$ of $K$-points of a pre-Frobenius statistical manifold has the structure of a $CML$.
\bibliographystyle{alpha}
\section{Tip localization due to particle capturing}
In this work, we investigate a model where the diffusive motion of particles on a filament ceases as soon as they arrive at a reaction site. This feature, which we refer to as particle capturing, is a key element of our model, as it drives the system out of thermal equilibrium. In order to investigate the impact of particle capture on tip localization, we also studied a model where particles are not captured at the tip, but instead hop from the tip into the bulk in a way that does not break detailed balance. In detail, we introduce a release rate $\overline{\epsilon}$, at which particles hop from site $i=1$ to site $i=2$. Then, to implement equilibrium conditions for particle hopping (i.e. with respect to a system without lattice growth or shrinkage), we impose $\epsilon/\overline{\epsilon} = \omega_d/\overline{\omega}_d$. This condition ensures detailed balance in a static system with a constant on-rate along the lattice. Since we also implement lattice growth, detailed balance is still broken, which manifests itself in a net particle drift away from the tip in the comoving frame of reference. In Fig.~\ref{fig:TipLocalozation} we compare density profiles of the hopping-equilibrium model and the one with strict (i.e. irreversible) particle capturing as defined in the main text, with parameters as for XMAP215. In the equilibrium model the density profile is almost constant, whereas in the model with capturing a strong tip localization occurs (a 1-2 orders of magnitude increase in the tip density). Although irreversible capturing is, of course, a simplification, we expect similar effects to occur for release rates much smaller than the equilibrium release rate, $\overline{\epsilon} \ll \overline{\epsilon}_\mathrm{eq}:=(\epsilon\, \overline{\omega}_d)/\omega_d$. In this case, capturing generates a particle current towards the MT tip which, conversely, leads to the spatial correlations that are the subject of this work.
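For concreteness, a minimal random-sequential-update Monte Carlo of the model (diffusion with exclusion, capture at the reaction site, Langmuir kinetics, and processive growth, all in the comoving frame of the tip) can be sketched as follows. This is only an illustration under simplifying assumptions; the variable names and the fixed-time-step scheme (with $\mathrm{d}t$ chosen such that all per-site event probabilities sum to less than one, e.g. $\mathrm{d}t = 1/(2\epsilon+\omega_a c+\omega_d+\delta)$) are ours, and this is not the production code behind the figures.
\begin{verbatim}
import numpy as np

def mc_sweep(n, p, rng):
    """One random-sequential-update sweep over the numpy occupation array
    'n' (0/1 entries, index 0 = reaction site at the tip, comoving frame).
    'p' maps event names to probabilities rate*dt."""
    L = len(n)
    for _ in range(L):
        i = rng.integers(L)
        r = rng.random()
        if n[i] == 0:
            if r < p['att']:                      # attachment, rate omega_a*c
                n[i] = 1
        elif i == 0:
            if r < p['det_tip']:                  # tip detachment
                n[0] = 0
            elif r < p['det_tip'] + p['grow']:    # growth, rate delta: an
                n[1:] = np.r_[0, n[1:-1]]         # empty site appears behind tip
        else:
            if r < p['hop'] and n[i-1] == 0:      # hop towards the tip; for
                n[i-1], n[i] = 1, 0               # i == 1 this is capture
            elif p['hop'] <= r < 2*p['hop'] and i < L-1 and n[i+1] == 0:
                n[i+1], n[i] = 1, 0               # hop away from the tip
            elif 2*p['hop'] <= r < 2*p['hop'] + p['det']:
                n[i] = 0                          # bulk detachment, rate omega_d
    return n
\end{verbatim}
Density and tip-bulk correlation profiles are then obtained by time-averaging $n_i$ and $n_1 n_i$ over many sweeps after equilibration.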
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.65\columnwidth]{FigS1.pdf}
\caption{Diffusion and capture ensures tip-localization. Density profiles from MC simulations (open symbols) of (a) a model where particle hopping obeys detailed balance with respect to a static lattice and (b) the model from the main text. In an ``equilibrium" model no localization occurs and the density profile is almost constant due to fast diffusion. With strict (i.e. irreversible capturing), the tip is highly occupied as compared to its equilibrium occupation (dotted lines). Note that in both models we implement off-rates which differ at the tip and lattice growth which results in non-constant density profiles also when particle hopping obeys detailed balance. Further, also the ``equilibrium" model is out of equilibrium due to lattice growth. In (a) $\overline{\epsilon} = 3.0 \times 10^{3}\, \mathrm{s}^{-1}$, in (b) $\overline{\epsilon} = 0$. Other parameters as for the XMAP215 model, see Table~\ref{tab:MCAK_rates}. \label{fig:TipLocalozation}}
\end{figure}
\section{Mean-field (MF) approximation}
In the mean-field approximation all correlations are neglected; we set $\avg{n_{i} n_{j}} = \avg{n_i} \avg{n_j}$. This closes the hierarchy of equations stated in the main text:
\begin{eqnarray}
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_i }}&=& \epsilon (\avg{n_{i+1}} -2 \avg{n_i} + \avg{n_{i-1}}) + \delta (\avg{ n_1} \avg{ n_{i-1}} - \avg{ n_1} \avg{ n_{i} })+ \omega _\tn{a} c ( 1 - \avg{n_i}) - \omega _\tn{d} \avg{ n_i } \ \mathrm{for\ } i \geq 3 \label{eqn:sup_ni}\\
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_1 } }&=& \epsilon (\avg{n_2}-\avg{n_1 } \avg{n_2}) + \omega _\tn{a} c (1 - \avg{n_1}) - \overline{\omega} _\tn{d} \avg{ n_1} \label{eqn:sup_n1} \\
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_2}} &=& \epsilon (\avg{ n_3} - 2 \avg{n_2} + \avg{n_1} \avg{ n_2 }) - \delta \avg{ n_1 } \avg{n_2 } + \omega _\tn{a} c ( 1 - \avg{n_2}) - \omega _\tn{d} \avg{n_2} \, . \label{eqn:sup_n2}
\end{eqnarray}
Instead of solving the recurrence relation, we use a continuous description for Eq.~\ref{eqn:sup_ni}. At sites $i=1,2$ such an approximation is not valid due to a discontinuity in the density profile. Performing a Taylor expansion for small lattice spacings $a$ up to second order we obtain
\begin{eqnarray}
\partial_t \rho (x,t) &=& \epsilon a^2 \partial_x^2 \rho (x,t) - \delta a \partial_x \rho(x,t) \avg{n_1} + \omega _\tn{a} c (1 - \rho (x,t) ) - \omega _\tn{d} \rho(x,t)\, .
\end{eqnarray}
In the above equation the continuous labeling $x = a (i-1)$ is used for $\rho(x,t) = \avg{n_{i}} $. Further, we use that $\epsilon \gg \delta$ holds true for typical biological systems and neglect the second order term due to the particle drift in the comoving frame, $\tfrac{1}{2} \delta a^2 \partial^2_x \rho(x)$. Since we are only interested in the steady state solution we set the time derivative to zero. As boundary condition, we impose that the density equilibrates at the Langmuir density for large distances to the tip, $\lim_{x \to \infty}\rho(x) = \rho_\mathrm{La} = \omega_a c / (\omega_a c + \omega_d)$. The boundary condition at $x=a$ has to be consistent with the solution of Eqs.~\ref{eqn:sup_n1} and~\ref{eqn:sup_n2}, $\rho(a) = \avg{n_2}$. We can use the continuous solution to express $\avg{n_3} = \rho(2a)$ and solve Eqs.~\ref{eqn:sup_n1} and~\ref{eqn:sup_n2}. This self-consistent solution can be obtained numerically and determines the MF density profile along the whole lattice.
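Since the steady-state continuum equation is linear, its decaying solution is a single exponential, $\rho(x)=\rho_\mathrm{La}+A\,e^{\mu x}$ with $\mu<0$, so the self-consistent matching reduces to a two-dimensional root-finding problem. A minimal Python sketch of this procedure (with illustrative XMAP215-like parameter values at $c=10\,\mathrm{nM}$; all variable names are ours) could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# illustrative XMAP215-like rates (1/s) and lattice constant a = 1
eps, wd, wdt, delta, a = 4.7e3, 0.41, 0.26, 9.5, 1.0
wac = 6e-5 * 10.0                 # omega_a * c at c = 10 nM
rho_La = wac / (wac + wd)

def mu(n1):
    """Decaying root of eps*a^2*mu^2 - delta*a*n1*mu - (wac + wd) = 0,
    the exponent of rho(x) = rho_La + A*exp(mu*x)."""
    return (delta*a*n1 - np.sqrt((delta*a*n1)**2
                                 + 4*eps*a**2*(wac + wd))) / (2*eps*a**2)

def residuals(v):
    n1, n2 = v
    n3 = rho_La + (n2 - rho_La)*np.exp(mu(n1)*a)   # continuum value rho(2a)
    r1 = eps*(n2 - n1*n2) + wac*(1 - n1) - wdt*n1          # d<n1>/dt = 0
    r2 = (eps*(n3 - 2*n2 + n1*n2) - delta*n1*n2
          + wac*(1 - n2) - wd*n2)                          # d<n2>/dt = 0
    return [r1, r2]

n1, n2 = fsolve(residuals, [0.5, rho_La])
\end{verbatim}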
\section{The finite segment mean-field (FSMF) approximation}
The finite segment mean-field approach is based on the idea to account for correlations locally within a small segment. In detail, all correlations within this segment are retained whereas outside the segment correlations are neglected. An efficient implementation is achieved by using the transition matrix corresponding to the master equation for occupations of the segment. Since in our model correlations are strongest close to the tip, we choose to keep correlations with respect to the first $N$ sites. For example, for $N=2$ the corresponding transition matrix $M_{ij}$ with $i,j \in \{0,\dots,3\}$ reads
\begin{eqnarray}
M&=&\left(
\begin{matrix}
-2 \omega _a c - \epsilon \avg{n_3} & \omega_d + \epsilon \avg{\overline{n}_3} & \overline{\omega}_d & 0 \\
\omega _a c + \epsilon \avg{n_3} & - \omega_d -\omega_a c - \epsilon (1 + \avg{\overline{n}_3} ) & 0 & \overline{\omega}_d \\
\omega_a c & \epsilon & -\overline{\omega}_d -\omega_a c -\epsilon \avg{n_3} & \delta + \epsilon \avg{\overline{n}_3} + \omega_d \\
0 & \omega_a c & \omega_a c + \epsilon \avg{n_3} & - \omega_d-\overline{\omega}_d - \epsilon \avg{\overline{n}_3} - \delta \\
\end{matrix} \right) .\nonumber
\end{eqnarray}
Here we introduced $\avg{\overline{n}_3} = (1- \avg{n_3})$. Further, the enumeration of states is chosen such that it corresponds to the respective binary number, e.g. $M_{01}$ describes transitions from state $(n_1=0, n_2=1)$ to state $(n_1=0, n_2=0)$. Note that correlations with respect to $n_{N+1}$ are already neglected. The eigenvector of $M$ with eigenvalue 0 is then computed, which yields steady state occupations within the segment in dependence of $\avg{n_{N+1}}$. A self-consistent solution of these occupations and those for sites $i > N$ is obtained in analogous fashion to the MF procedure: We use the continuous MF solution for densities with $i > N$ and the discrete solutions for sites in the segment to express all densities in terms of $\avg{n_{N+1}}$. The master equation for $\avg{n_{N+1}}$ (given by Eq.~\ref{eqn:sup_ni}) is then solved numerically in the steady state to compute the complete density profile. This procedure is, however, strongly limited by the size of the finite segment as the corresponding transition matrix is of size $2^{N}\times2^{N}$.
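In practice, the steady state of the segment is the kernel of this transition matrix. For the $N=2$ segment displayed above, a schematic Python implementation (variable names ours) reads as follows; larger segments are built analogously from the binary enumeration of states.
\begin{verbatim}
import numpy as np

def fsmf_matrix(n3, eps, wac, wd, wdt, delta):
    """Transition matrix of the N = 2 segment; the state index is the
    binary number (n1 n2): 0 = (0,0), 1 = (0,1), 2 = (1,0), 3 = (1,1)."""
    nb3 = 1.0 - n3
    return np.array([
        [-2*wac - eps*n3, wd + eps*nb3,             wdt,                 0.0],
        [ wac + eps*n3,  -wd - wac - eps*(1 + nb3), 0.0,                 wdt],
        [ wac,            eps,                     -wdt - wac - eps*n3,
          delta + eps*nb3 + wd],
        [ 0.0,            wac,                      wac + eps*n3,
         -wd - wdt - eps*nb3 - delta]])

def segment_occupations(n3, *rates):
    """Kernel of M (eigenvector with eigenvalue closest to zero),
    normalised to a probability vector; returns (<n1>, <n2>, <n1 n2>)."""
    w, v = np.linalg.eig(fsmf_matrix(n3, *rates))
    p = np.real(v[:, np.argmin(np.abs(w))])
    p /= p.sum()
    return p[2] + p[3], p[1] + p[3], p[3]
\end{verbatim}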
\section{The correlated mean-field (CMF) approximation}
In the following we will show how to perform the CMF approximation for the model presented in the main text. This approach systematically includes the relevant correlations arising due to the capturing mechanism.
The CMF calculations can be separated in three steps: a) Computation of the continuous solution for the density $\rho(x)$ and correlation profile $g(x)$ in the bulk, $i\geq2$. b) Computation of the discrete solution for $i=1$. c) Matching of the continuous solution and the discrete solution.
We start with deriving the continuous bulk solutions. The density profile is governed by Eq.~1 of the main text:
\begin{eqnarray}
\label{eq:ni_correlated}
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_i }}&=& \epsilon \bigl(\avg{n_{i+1} (1-n_i)} {-} \avg{n_i (1-n_{i+1})} {+} \avg{n_{i-1}(1-n_i)} -\avg{n_{i}(1-n_{i-1})}\bigr)
+ \delta \bigl( \avg{ n_1 n_{i-1}} {-} \avg{ n_1 n_{i} } \bigr) \nonumber \\
&&+\, \omega _\tn{a} c \bigl( 1 {-} \avg{n_i} \bigr) - \omega _\tn{d} \avg{ n_i } \nonumber \\
&=& \epsilon \bigl(\avg{n_{i+1}} {-} 2 \avg{n_i} {+} \avg{n_{i-1}} \bigr)
+ \delta \bigl( \avg{ n_1 n_{i-1}} {-} \avg{ n_1 n_{i} } \bigr) +\, \omega _\tn{a} c \bigl( 1 {-} \avg{n_i} \bigr) - \omega _\tn{d} \avg{ n_i } \, .
\end{eqnarray}
Here, we account for particle hopping with exclusion (terms $\propto \epsilon$), lattice growth (terms $\propto \delta$), particle attachment (terms $\propto \omega_\tn{a}$), and particle detachment (terms $\propto \omega_\tn{d}$).
In the main text we show that it is essential to account for tip-bulk correlations on a large scale. In the CMF approach this is achieved globally by coupling the evolution of the density with the one for tip-bulk correlations. The discrete equation governing the evolution of correlations with respect to the reaction site reads
\begin{eqnarray}
\tfrac{\tn{d}}{\tn{d} t}{\avg{ n_1 n_i }}&=& \epsilon ( \avg{n_1 n_{i-1}} - 2 \avg{n_1 n_i} + \avg{n_1 n_{i+1}} + \avg{n_2 n_i} -\avg{n_1 n_2 n_i} ) + \delta ( \avg{n_1 n_{i-1}} - \avg{n_1 n_i} ) \nonumber \\
&& \, + \omega _\tn{a} c (\avg{n_1} + \avg{n_i} - \avg{n_1 n_i}) - ( \overline{\omega} _\tn{d} + \omega_\tn{d}) \avg{n_1 n_i} .
\end{eqnarray}
The above equation, which follows from the master equation, describes changes of the joint probability for a simultaneous occupation of the first and the $i$-th site: All probabilities for processes that lead to a simultaneous occupation of both lattice sites multiplied with the respective rate are added and all probabilities for processes where one of the two sites is emptied multiplied with the respective rate are subtracted. Again, contributions arise from particle hopping with exclusion (terms $\propto \epsilon$), lattice growth (terms $\propto \delta$), particle attachment (terms $\propto \omega_\tn{a}$), and particle detachment (terms $\propto \omega_\tn{d}$), respectively. For example, for particle hopping we have contributions from hopping processes with respect to the $i$-th site ($\avg{n_1 n_{i-1}} - 2 \avg{n_1 n_i} + \avg{n_1 n_{i+1}}$) as well as the capturing of a particle at the first site ($\avg{n_2 n_i} -\avg{n_1 n_2 n_i} $). Note that higher order correlators can be obtained in complete analogy.
In order to close the hierarchy of moments, we use the factorization scheme stated in the main text: $\avg{n_1 n_2 n_i} \approx \avg{n_1 n_2} \avg{n_i}$ and $\avg{n_2 n_i} \approx \avg{n_2} \avg{n_i}$ for $i \geq 3$. Fig.~\ref{fig:MomentFactorization} shows that this is justified, as the corresponding correlation coefficients are one to two orders of magnitude lower than $\mathrm{corr}(n_1,n_i)$.
In the continuous limit $a\to 0$ the recurrence relations given by the dynamic equations for $\avg{n_i}$ and $\avg{n_1 n_i}$ translate into a set of coupled differential equations. Up to a second order Taylor expansion we obtain
\begin{eqnarray}
\partial_t \rho (x,t) &=& \epsilon a^2 \partial_x^2 \rho (x,t) - \delta a \partial_x g(x,t) + \omega _\tn{a} c (1 - \rho (x,t) ) - \omega _\tn{d} \rho(x,t)\, \label{eq:rhobulkSup}\\
\partial_t g (x,t) &=& \epsilon ( a^2 \partial_x^2 g(x,t)+ \avg{n_2}(t) \rho(x,t) -\avg{n_1 n_2}(t)\rho(x,t) ) - \delta a \partial_x g(x,t) + \omega_\tn{a} c (\avg{n_1}(t)+ \rho(x,t) - 2 g(x,t) ) \nonumber \\
&&- (\omega_\tn{d} + \overline{\omega}_\tn{d}) g(x,t)\,.\label{eq:CxSup}
\end{eqnarray}
Here, we used again a continuous labeling $x = a (i-1)$ and neglected second order terms due to lattice growth ($\propto \tfrac{1}{2} \delta a^2 \partial^2_x g(x)$) since $\epsilon \gg \delta$ for typical biological situations. In this work, we are interested in the steady state properties of the system, $\partial_t \rho(x,t)=0$ and $\partial_t g(x,t)=0$. Under this condition, Eqs.~\ref{eq:rhobulkSup} and~\ref{eq:CxSup} are solved for the continuous solutions $\rho(x)$ and $g(x)$. Further, we impose the following boundary conditions to obtain a meaningful solution: $\lim_{x\to \infty} \rho(x) = \rho_\tn{La}=\omega_\tn{a} c/ (\omega_\tn{a} c + \omega_\tn{d})$, $\lim_{x\to\infty} g(x) = \avg{n_1} \rho_\tn{La}$, $\rho(a)=\avg{n_2}$ and $g(a) = \avg{n_1 n_2}$. Note that the solutions depend on the yet unknown variables $\avg{n_1}$, $\avg{n_2}$ and $\avg{n_1 n_2}$.
In the second step, we solve the equation for the occupancy of the reaction sites, $i=1$,
\begin{eqnarray}
\label{eq:n1Sup}
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_1 } }&=& 0 = \epsilon (\avg{n_2}-\avg{n_1 n_2}) + \omega _\tn{a} c (1-\avg{ n_1}) - \overline{\omega}_\tn{d} \avg{ n_1} \, ,
\end{eqnarray}
to express $\avg{n_1}$ in terms of $\avg{n_2}$ and $\avg{n_1 n_2}$.
Lastly, we self-consistently match the discrete and continuous solutions in that we determine the values of $\avg{n_2}$ and $\avg{n_1 n_2}$. To this end we employ the ``master equations'' for the latter variables:
\begin{eqnarray}
\label{eq:n2Sup}
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_2}} &=& 0 =\epsilon (\avg{ n_3} - 2 \avg{n_2} + \avg{n_1 n_2}) - \delta \avg{ n_1 n_2 } + \omega _\tn{a} c (1- \avg{n_2 }) - \omega _\tn{d} \avg{n_2} \\
\label{eq:n1n2Sup}
\tfrac{\tn{d}}{\tn{d} t}{\avg{{n}_1 {n}_2 }}&=& 0 =\epsilon (\avg{n_1 n_3} - \avg{n_1 n_2}) + \delta \avg{n_1 n_2} + \omega_\tn{a} c (\avg{n_1} + \avg{n_2} - 2 \avg{n_1 n_2} ) - (\omega_\tn{d} + \overline{\omega}_\tn{d}) \avg{n_1 n_2} .
\end{eqnarray}
We insert the continuous bulk solutions derived in the first step for $\avg{n_3} = \rho(2a)$ and $\avg{n_1 n_3} = g(2 a)$. Finally, the discrete solution for $\avg{n_1}$ is used to express all variables in terms of $\avg{n_2}$ and $\avg{n_1 n_2}$. This allows us to solve Eqs.~\ref{eq:n2Sup} and~\ref{eq:n1n2Sup} numerically which, as a consequence, fixes the entire density and correlation profile.
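Numerically, this matching can be organized as a nested procedure: for trial values of $(\avg{n_2},\avg{n_1 n_2})$ one solves the linear boundary value problem for $\rho(x)$ and $g(x)$, and an outer root finder enforces Eqs.~\ref{eq:n2Sup} and~\ref{eq:n1n2Sup}. The following schematic Python sketch (truncating the half line at a large but finite $X$ and reusing the illustrative parameters of the MF sketch above) is one possible realization; it is not the code used for the figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp
from scipy.optimize import fsolve

a, X = 1.0, 400.0                              # lattice constant, cutoff
eps, wd, wdt, delta, wac = 4.7e3, 0.41, 0.26, 9.5, 6e-4
rho_La = wac / (wac + wd)

def profiles(n1, n2, n12):
    """Steady-state rho(x), g(x) with rho(a) = <n2>, g(a) = <n1 n2> and
    Langmuir values imposed at x = X."""
    def rhs(x, y):                             # y = (rho, rho', g, g')
        rho, drho, g, dg = y
        d2rho = (delta*a*dg - wac*(1 - rho) + wd*rho) / (eps*a**2)
        d2g = (delta*a*dg - eps*(n2 - n12)*rho
               - wac*(n1 + rho - 2*g) + (wd + wdt)*g) / (eps*a**2)
        return np.vstack([drho, d2rho, dg, d2g])
    def bc(ya, yb):
        return np.array([ya[0] - n2, ya[2] - n12,
                         yb[0] - rho_La, yb[2] - n1*rho_La])
    x = np.linspace(a, X, 2000)
    y0 = np.zeros((4, x.size)); y0[0], y0[2] = rho_La, n1*rho_La
    return solve_bvp(rhs, bc, x, y0)

def residuals(v):
    n2, n12 = v
    n1 = (eps*(n2 - n12) + wac) / (wac + wdt)  # from d<n1>/dt = 0
    s = profiles(n1, n2, n12).sol(2*a)
    n3, n13 = s[0], s[2]                       # rho(2a) and g(2a)
    r1 = eps*(n3 - 2*n2 + n12) - delta*n12 + wac*(1 - n2) - wd*n2
    r2 = (eps*(n13 - n12) + delta*n12
          + wac*(n1 + n2 - 2*n12) - (wd + wdt)*n12)
    return [r1, r2]

n2, n12 = fsolve(residuals, [rho_La, 0.5*rho_La])
\end{verbatim}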
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.7\columnwidth]{FigS2.pdf}
\caption{In panel (a) and (b) we show that correlations $\mathrm{corr}(n_2,n_i)$ and $\mathrm{corr}(n_1 n_2, n_i)$ for $i \geq 3$ are negligible since they are one to two orders of magnitude smaller than the tip bulk correlations, $\mathrm{corr}(n_1,n_i)$. Parameters as for the XMAP215 model.\label{fig:MomentFactorization}}
\end{figure}
The behavior of correlations is also demonstrated in Fig.~\ref{fig:CorrelBehavior}: Without a capturing mechanism, correlations are purely negative due to the creation of empty sites resulting from the processive polymerization scheme. Opposed to that, purely positive correlations arise in a static lattice with capturing.
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.65\columnwidth]{FigS3.pdf}
\caption{Tip-bulk correlation profile obtained from stochastic simulations. Without particle capturing (orange data points) correlations are negative due to the processive growth of the lattice and the resulting creation of empty lattice sites. Correlations are positive in a static system with a capturing mechanism (blue points). Parameter values are equal to the ones used for the XMAP215 model; concentrations are $c=10\ \mathrm{nM}$ for the case without polymerization and $c=5000\ \mathrm{nM}$ for the case without capturing.\label{fig:CorrelBehavior}}
\end{figure}
The CMF approach neglects correlations within the diffusive compartment (i.e. we assume $\avg{n_i n_j} = \avg{n_i}\avg{n_j}$ and $\avg{n_1 n_i n_j} = \avg{n_1 n_i} \avg{n_j}$ for $i, j \geq 3$ and $i < j$). As this approximation is a non-perturbative ansatz, it is in general not possible to quantify its error. In order to ensure the validity over a broad and biologically relevant parameter range, we performed extensive MC simulations and compared the result with CMF computations. In detail, we performed parameter sweeps for $\epsilon$ (from $300-10000\ \mathrm{s}^{-1}$), $\omega_d$ (from $0.1-10\ \mathrm{s}^{-1}$), $\delta$ (from $5-95\ \mathrm{s}^{-1}$) and $c$ (for each parameter point at five equidistant values between $c_1$ and $c_5$, such that $\rho_1^\mathrm{CMF}(c_1) = 0.1$ and $\rho_1^\mathrm{CMF}(c_5) = 0.9$). The results are shown in Fig.~\ref{fig:CMFDev}. The CMF approximation delivers good results over this very broad parameter range; the maximum relative deviation for $\rho_1$ over the 1000 different tested parameter sets is 6.5\%.
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.65\columnwidth]{FigS4.pdf}
\caption{Error of CMF approximation. We compared results for the tip density obtained from the CMF approximation ($\rho_1^\mathrm{CMF}$) and MC simulations ($\rho_1^\mathrm{MC}$) for 1000 different parameter sets. For each set $\{\epsilon$, $\delta$, $\omega_a$, $\omega_d$, $\overline{\omega}_d\}$ we determined five equidistant concentrations between $c_1$ and $c_5$, such that $\rho_1^\mathrm{CMF}(c_1) = 0.1$ and $\rho_1^\mathrm{CMF}(c_5) = 0.9$. For these concentrations, we computed the average relative deviation between simulation results and analytic approximation to get an estimate $\Delta_\mathrm{CMF}$ of the error along a $\rho_1-c$ curve (right side). Note that we expect the error to vanish for very low and very high occupations. We performed sweeps with respect to $\epsilon$ and $\delta$ (a), and $\epsilon$ and $\omega_d$ (b). Deviations are small, with the maximal $c$-averaged deviation being 5\% and the maximal relative deviation being 6.5\%. Color encodes the $c$-averaged deviations $\Delta_\mathrm{CMF}$ with white denoting $0$\% deviation and dark blue denoting more significant deviations. As expected, we observe a small trend of increasing errors whenever interactions in the lattice bulk become more frequent, i.e. for high $\epsilon$, small $\delta$ and small $\omega_d$. Opposed to Eq.~\ref{eq:CxSup} we include the second order term that arises due to lattice polymerization, $\tfrac{1}{2} \delta a^2 \partial^2_xg(x)$, as $\epsilon{\gg}\delta$ does not necessarily hold true any more.\label{fig:CMFDev}}
\end{figure}
In a previous publication, we derived an effective theory that allows for the calculation of reaction site occupations that are subject to a diffusion and capture mechanism in a static lattice (i.e. without lattice growth or shrinkage)~\cite{Reithmann2015}. While both approaches consider protein diffusion and capture on filaments, they differ significantly on a conceptual level and with respect to the scope of their predictions:
Whereas the previous approach is based on a heuristic theory and \emph{a priori} only valid in the absence of polymerization and depolymerization, respectively, the CMF approach specifically accounts for lattice growth and shrinkage. Further, the CMF approximation is derived from more conceptual considerations: It assumes that diffusion and capture creates correlations which primarily affect the tip occupation, while the diffusive motion of proteins on the MT depends less significantly on mutual correlations~\cite{Derrida2007}. As a consequence, the CMF approach yields density and tip-bulk correlation profiles for protein occupations along the MT, which are beyond the scope of our previous approach. As shown in the main text, the latter quantities are key to a quantitative understanding of tip-localization due to diffusion and capture and related processes.
\section{Uncatalyzed growth and shrinkage of MTs}
The model described in the main text does not account for MT growth or shrinkage in the absence of depolymerization or polymerization factors like MCAK or XMAP215. The reason for this assumption is twofold: a) In the experiments with XMAP215~\cite{Widlund2011} and MCAK~\cite{Cooper2010}, low concentrations of free tubulin were used such that no spontaneous MT growth was observed. Also, the measurements in Widlund et al.~\cite{Widlund2011} suggest that the rate of tubulin detachment in the corresponding experiments is negligible. b) Concerning MT depolymerization, we aim for a description of protein induced tubulin removal from stabilized MTs in analogy to \emph{in vitro} experiments with MCAK~\cite{Cooper2010,Helenius2006}. In this way, our model neglects the dynamic instability seen for unstabilized MTs~\cite{Antal2007,Padinhateeri2012,Niedermayer2015,Zakharov2015}, but provides a description of how a stabilizing structure at the MT tip (e.g. GTP-tubulin) can be removed by regulatory enzymes.
That being said, let us emphasize that an extension towards uncatalyzed tubulin attachment and detachment is feasible based on the model described in the main text. To this end we include further processes in the model: If the terminal lattice site is unoccupied, a new site can be added at rate $\delta_\mathrm{spont}^\mathrm{poly}$ or removed at rate $\delta_\mathrm{spont}^\mathrm{depoly}$. For completeness, we also include catalyzed (processive) growth \emph{and} shrinkage with corresponding rates $\delta_\mathrm{cat}^\mathrm{poly}$ and $\delta_\mathrm{cat}^\mathrm{depoly}$, respectively. The resulting equations for the CMF framework then read
\begin{eqnarray}
\partial_t \rho (x,t) &=& 0 = (\delta_\mathrm{spont}^\mathrm{depoly} -\delta_\mathrm{spont}^\mathrm{poly}) a \partial_x \rho(x,t) + (\epsilon + \frac{1}{2} \delta_\mathrm{spont}^\mathrm{depoly} + \frac{1}{2} \delta_\mathrm{spont}^\mathrm{poly}) a^2 \partial_x^2 \rho (x,t) \nonumber \\
&&+( \delta_\mathrm{cat}^\mathrm{depoly} - \delta_\mathrm{cat}^\mathrm{poly} + \delta_\mathrm{spont}^\mathrm{poly} - \delta_\mathrm{spont}^\mathrm{depoly}) a \, \partial_x g(x,t)
+ \frac{1}{2} ( \delta_\mathrm{cat}^\mathrm{poly} + \delta_\mathrm{cat}^\mathrm{depoly} -\delta_\mathrm{spont}^\mathrm{poly} - \delta_\mathrm{spont}^\mathrm{depoly} ) a^2 \partial_x^2 g(x,t) \nonumber \\
&& + \omega _\tn{a} c (1 - \rho (x,t) ) - \omega _\tn{d} \rho(x,t)\, , \label{eq:rhobulkSpontGrow}\\
\partial_t g (x,t) &=& 0 = (\delta_\mathrm{spont}^\mathrm{depoly} +\epsilon )(\avg{n_2}(t) - \avg{n_1 n_2}(t)) \rho(x,t) + \delta_\mathrm{spont}^\mathrm{depoly} (\avg{n_2}(t)-\avg{n_1 n_2}(t))( a \partial_x \rho(x,t) + \frac{1}{2} a^2 \partial_x^2 \rho(x,t)) \nonumber \\
&&+ (\delta_\mathrm{cat}^\mathrm{depoly} -\delta_\mathrm{cat}^\mathrm{poly} ) a \partial_x g(x,t) +(\epsilon + \frac{1}{2} \delta_\mathrm{cat}^\mathrm{poly}+ \frac{1}{2}\delta_\mathrm{cat}^\mathrm{depoly}) a^2 \partial_x^2 g(x,t) + \omega_\tn{a} c (\avg{n_1}(t)- \rho(x,t) - 2 g(x,t) ) \nonumber \\
&& - (\omega_\tn{d} + \overline{\omega}_\tn{d}) g(x,t)\, , \\\label{eq:CxSpontGrow}
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_1 }(t) }&=& 0 = \epsilon (\avg{n_2} (t)-\avg{n_1 n_2} (t)) + \delta_\mathrm{spont}^\mathrm{depoly} (\avg{n_2}(t) - \avg{n_1 n_2}(t)) + \omega _\tn{a} c (1-\avg{ n_1}(t)) - \overline{\omega}_\tn{d} \avg{ n_1} (t)\, , \\
\tfrac{\tn{d}}{\tn{d} t}{\avg{ {n}_2}(t)} &=& 0 =\epsilon (\avg{ n_3} (t) - 2 \avg{n_2} (t) + \avg{n_1 n_2} (t)) - \delta_\mathrm{cat}^\mathrm{poly} \avg{ n_1 n_2 } (t) + \delta_\mathrm{cat}^\mathrm{depoly}(\avg{n_1 n_3} (t)- \avg{n_1 n_2} (t)) \nonumber \\
&&- \delta_\mathrm{spont}^\mathrm{poly}(\avg{n_2} (t)- \avg{ n_1 n_2 } (t)) + \delta_\mathrm{spont}^\mathrm{depoly} (\avg{n_1 n_2}(t) - \avg{n_1 n_3}(t) + \avg{n_3}(t)- \avg{n_2}(t))+ \omega _\tn{a} c (1- \avg{n_2 }(t)) - \omega _\tn{d} \avg{n_2} (t) \, , \nonumber \\
\\
\tfrac{\tn{d}}{\tn{d} t}{\avg{{n}_1 {n}_2 } (t)}&=& 0 =\epsilon (\avg{n_1 n_3}(t) - \avg{n_1 n_2}(t)) - \delta_\mathrm{cat}^\mathrm{poly} \avg{n_1 n_2} (t) + \delta_\mathrm{cat}^\mathrm{depoly} (\avg{n_1 n_3} (t)- \avg{n_1 n_2}(t)) \nonumber \\
&&+ \delta_\mathrm{spont}^\mathrm{depoly} (\avg{n_2}(t) \avg{n_3}(t) - \avg{n_1 n_2}(t)\avg{n_3}(t)) + \omega_\tn{a} c (\avg{n_1}(t) + \avg{n_2}(t) - 2 \avg{n_1 n_2} (t)) - (\omega_\tn{d} + \overline{\omega}_\tn{d}) \avg{n_1 n_2} (t).\label{eq:n1n2SpontGrow}
\end{eqnarray}
The equations are solved in analogy to the case without spontaneous lattice dynamics.
As mentioned above, our models neglect intrinsic MT dynamics such as dynamic instability. However, we expect validity of our results for tip-localization also under such circumstances. We studied the extended model with spontaneous growth and shrinkage rates over a variety of parameter values (up to spontaneous growth and shrinkage rates of $24\, \mu \mathrm{m}/ \mathrm{min}$).
For a comparison, we estimated the rate of spontaneous MT growth ($v_\mathrm{spont} = a (\delta_\mathrm{spont}^\mathrm{poly}- \delta_\mathrm{spont}^\mathrm{depoly})$) at tubulin concentrations slightly above 5 $\mu\mathrm{M}$ from the experiments performed by Widlund et al.~\cite{Widlund2011}. At such tubulin concentrations, MTs were observed to start growing also without the presence of XMAP215 at a speed of approximately $v_\mathrm{spont} =0.5\ \mu\mathrm{m}/\mathrm{min}$. Given this resulting spontaneous MT growth rate, we compared a model with and without fast intrinsic MT dynamics ($\delta_\mathrm{spont}^\mathrm{poly} = 1\, s^{-1} $ and $\delta_\mathrm{spont}^\mathrm{depoly} = 0$ for a stable lattice; $\delta_\mathrm{spont}^\mathrm{poly} = 51\, s^{-1} $ and $\delta_\mathrm{spont}^\mathrm{depoly} = 50\, s^{-1}$ for a dynamic lattice). The results are shown in Figs.~\ref{fig:SpontGrowth} and~\ref{fig:DensityDynamicLattice}. They show the robustness of the protein distribution $\rho(x)$ and, in particular, the tip occupation against changes in the lattice growth or shrinkage rates. Moreover, the CMF approximation is also applicable for rapidly fluctuating MT lengths. Note that XMAP215 also catalyzes tubulin removal under certain conditions~\cite{Brouhard2008} which could readily be accounted for in the above approach.
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.7\columnwidth]{FigS5.pdf}
\caption{Extended model that accounts for uncatalyzed growth and shrinkage of MTs. We compare diffusion and capture on a slowly growing lattice ($\delta_\mathrm{spont}^\mathrm{poly} = 1\, s^{-1} $, $\delta_\mathrm{spont}^\mathrm{depoly} = 0$, $\delta_\mathrm{cat}^\mathrm{depoly} = 0$, $\delta_\mathrm{cat}^\mathrm{poly} = 9.5\, s^{-1}$, blue) with diffusion and capture on a lattice with fast intrinsic dynamics but the same average growth speed ($\delta_\mathrm{spont}^\mathrm{poly} = 51\, s^{-1} $ , $\delta_\mathrm{spont}^\mathrm{depoly} = 50\, s^{-1}$, $\delta_\mathrm{cat}^\mathrm{depoly} = 50\, s^{-1}$, $\delta_\mathrm{cat}^\mathrm{poly} = 59.5\, s^{-1}$, orange). The average MT growing velocity, and therefore also the tip density, deviate little which implies the validity of our results also on dynamic lattices. MC simulations (symbols) agree well with solutions of the CMF approximation (lines). Other parameter values are as for the XMAP215 model, see Table~\ref{tab:MCAK_rates}.\label{fig:SpontGrowth}}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.7\columnwidth]{FigS6.pdf}
\caption{Density profiles of an adapted model with an intrinsically dynamic lattice (orange) in comparison to the model presented in the main text (blue) for $c=10 \, \mathrm{nM}$ and $c=100 \, \mathrm{nM}$. Tip-localization occurs also on a lattice with fast spontaneous growth and shrinkage. The tip-density is almost unaffected by rapid fluctuations of the MT length, suggesting the validity of our results also for dynamic MTs. The results of our simulations (symbols) agree well with the CMF results (lines). Model parameters are $\delta_\mathrm{spont}^\mathrm{poly} = 51\, s^{-1} $ , $\delta_\mathrm{spont}^\mathrm{depoly} = 50\, s^{-1}$, $\delta_\mathrm{cat}^\mathrm{depoly} = 50\, s^{-1}$, $\delta_\mathrm{cat}^\mathrm{poly} = 59.5\, s^{-1}$ for the dynamic lattice. Other parameters and parameters for the stable lattice as for the XMAP215 model.\label{fig:DensityDynamicLattice}}
\end{figure}
\section{MCAK model}
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.45\columnwidth]{FigS7.pdf}
\caption{Illustration of the MCAK model. Particle movement is identical to the XMAP215 model. Depolymerization occurs whenever the first lattice site is occupied. Particles depolymerize processively in that they move along with the shrinking tip. When the second site is occupied, a particle on the tip that stimulates shrinkage falls off together with the first lattice site. \label{fig:MCAKmodel}}
\end{figure}
Similar to the model for XMAP215 stated in the main text, we can set up a model for the depolymerase activity of MCAK, see Fig.~\ref{fig:MCAKmodel}. The ensuing set of equations corresponding to the CMF approach in the bulk is a special case of Eqs.~\ref{eq:rhobulkSpontGrow}-\ref{eq:n1n2SpontGrow} with $\delta_\mathrm{spont}^\mathrm{poly}=\delta_\mathrm{spont}^\mathrm{depoly}=\delta_\mathrm{cat}^\mathrm{poly}=0$. We implement a processive depolymerization scheme~\cite{Helenius2006,Cooper2010,Klein2005}. In detail, MCAK particles stay at the terminal site during depolymerization (i.e. move along with the tip) whenever the neighboring site is empty. Otherwise, they dissociate from the tip during depolymerization; that is, MCAK particles fall off the MT tip whenever they hit another particle during the depolymerization process.
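In a stochastic simulation this depolymerization rule is a one-line update of the occupation list; the following hypothetical helper (in the notation of the sketches above, with index 0 again denoting the tip) makes it explicit:
\begin{verbatim}
def depolymerization_step(lattice):
    """Attempted at rate delta; acts only if the tip (index 0) is occupied.
    'lattice' is a Python list of 0/1 occupations; one site is removed."""
    if lattice[0] != 1:
        return lattice
    if lattice[1] == 0:
        # processive: the terminal subunit is removed and the MCAK particle
        # moves along with the shrinking tip onto the new terminal site
        return [1] + lattice[2:]
    # the neighbouring site is occupied: the depolymerizing particle
    # dissociates together with the removed terminal subunit
    return lattice[1:]
\end{verbatim}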
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.7\columnwidth]{FigS8.pdf}
\caption{Comparison of different analytic approaches (lines) with simulations of the MCAK model (circles). Whereas the MF and FSMF approaches (dashed lines) fail to predict the depolymerization velocity accurately, the CMF approximation (solid line) delivers results which are in excellent agreement with simulation data. Model parameters are given in Table~\ref{tab:MCAK_rates}.\label{fig:MCAK_MFSchemes}}
\end{figure}
The results of the CMF approach for the MCAK model agree excellently with simulation data, as shown in Fig.~\ref{fig:MCAK_MFSchemes}. Further, also for the MCAK model the MF and FSMF approximations produce results that deviate from simulation data at intermediate concentrations.
\section{Parameter values}
\begin{table}
\setlength{\extrarowheight}{2.pt}
\begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}cccccc}
\multirow{2}{*}{\bf{Experiment}}\\\\\hline\hline & $D$ & $k_\mathrm{on}$ & $k_\mathrm{off}$ & $v_\mathrm{max}$ & $K_M$ \\
&($\mu$m)$^2\ \mathrm{\ s}^{-1}$ & events /(s $\mu$m nM) & events/s & $\mu$m/min & $\mu$m/(min nM) \\ \hline
MCAK-FL & 7.6 $\times 10^{-2}$ & 4.56 $\times 10^{-1}$ & 1.70 & 5.0 $\times 10^{-1}$ & 4.3 \\\hline
\multirow{2}{*}{} & $D$ & $k_\mathrm{on}$ & $k_\mathrm{off}$ & $v_\mathrm{max}$ & $K_\tn{off}$ \\
&($\mu$m)$^2 \ \mathrm{\ s}^{-1}$ & events /(s $\mu$m nM) & events/s & $\mu$m/min & s$^{-1}$ \\ \hline
XMAP215 & 3.0 $\times 10^{-1}$ & 1$\times 10^{-1}$ & 4.1$\times 10^{-1}$ & 4.6 & 2.6 $\times 10^{-1}$ \\\hline\\%\hline\hline
\multirow{2}{*}{\bf{Theory}} \\ \\ \hline\hline& $\epsilon $ & $\omega _\tn{a}$ & $\omega_\tn{d}$ & $\delta$ & $\overline{\omega}_\tn{d}$ \\
& $\mathrm{\ s}^{-1}$ & $\mathrm{\ (nM\ s)}^{-1}$ & $\mathrm{\ s}^{-1}$ & $\mathrm{\ s}^{-1}$ & $\mathrm{\ s}^{-1}$ \\ \hline
MCAK-FL & 1.2 $\times 10^{3}$ & 2.61 $\times 10^{-4}$ & 1.70 & 5.2 $\times 10^{-1}$ & 3.0 $\times 10^{-2}$ \\\hline
XMAP215 & 4.7 $\times 10^{3}$\ & 6 $\times 10^{-5}$ & 4.1 $\times 10^{-1}$ & 9.5 & 2.6 $\times 10^{-1}$ \\\hline
\end{tabular*}
\caption{Rate constants for MCAK-FL~\cite{Cooper2010} and XMAP215~\cite{Brouhard2008,Widlund2011}. The diffusion constant $D$ and the on- and off-rates of enzymes to the MT lattice, $k_\mathrm{on}$ and $k_\mathrm{off}$, were measured directly. The measured depolymerization and polymerization profiles yield the maximal depolymerization and polymerization velocities $v_\mathrm{max}$ and the effective Michaelis constant $K_M$.
Conversion to the theoretical values was achieved by translating $k_\mathrm{on}$, $k_\mathrm{off}$, and $v_\mathrm{max}$, into appropriate lattice units. The hopping rate is related to the diffusion coefficient by $\epsilon = D/a^2$. The off-rate at the first site for MCAK was, in contrast to the one for XMAP215, not measured directly. It can, however, be estimated from $K_M$ by using the depolymerization behavior at low concentrations and a MF argument which exploits the fact that the system is uncorrelated at asymptotically low occupations~\cite{Reithmann2015}. }
\label{tab:MCAK_rates}
\label{tab:MCAK_exp}
\end{table}
The parameter values used for the XMAP215 and MCAK model were extracted from experimental data~\cite{Cooper2010,Brouhard2008,Widlund2011}. Model parameters were computed based on measured diffusion coefficients (for $\epsilon$), particle dwell times on the MT tip (for $\overline{\omega}_d$) and bulk (for $\omega_d$), attachment rates (for $\omega_a$), and maximal (de)polymerization velocities at saturated (de)polymerase concentrations (for $\delta$). A conversion factor $n_\mathrm{tubulins}$ from $\mu\mathrm{m}$ into tubulin subunits was adapted to the assumed protofilament numbers $ n_\mathrm{protofilaments}$ of the MTs used in the respective experiments: 1625 $\mathrm{tubulin\ dimers}/\mu\mathrm{m}$ for XMAP215~\cite{Widlund2011} and 1750 $\mathrm{tubulin\ dimers}/\mu\mathrm{m}$ for MCAK~\cite{Cooper2010}. Note that the polymerization velocity refers to one MT tip~\cite{Brouhard2008,Widlund2011}, whereas the depolymerization rate refers to the average shrinkage rate of both ends~\cite{Cooper2010}. Opposed to the measurements for MCAK, where the maximal depolymerization velocity was determined~\cite{Cooper2010}, Widlund et al. do not directly state the maximal MT polymerization velocity due to XMAP215 induced growth~\cite{Widlund2011}. To get a good estimate for the maximal growing velocity $v_\mathrm{max}$ of MTs at saturating polymerase (XMAP215) concentrations, we fitted a Michaelis-Menten curve to the experimental data. The rate of tubulin attachment and detachment per regulating protein $\delta$ depends on the maximal number of catalytically active proteins at the MT tip $n_\mathrm{tip}$: $v_\mathrm{max} = \delta \, n_\mathrm{tip}\, n_\mathrm{tubulins}^{-1}$. Since the specific number for $n_\mathrm{tip}$ is elusive (there are estimates of approximately 10 XMAP215 molecules at the MT tip at $50\ \tn{nM}$ XMAP215~\cite{Brouhard2008}), we have to make an assumption. Here, we choose one protein per protofilament, $n_\mathrm{tip} = n_\mathrm{protofilaments}$. In doing so, the MT tip velocity reduces to $v = \avg{n_1} \, \delta \, n_\mathrm{protofilaments} \, n_\mathrm{tubulins}^{-1} = \avg{n_1} \, \delta \, a$, where $a$ is the length of a tubulin dimer.
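For the XMAP215 column of Table~\ref{tab:MCAK_rates}, these conversions amount to a few lines of arithmetic, sketched below (assuming, as stated, one active protein per protofilament; all symbols are ours):
\begin{verbatim}
# measured XMAP215 values (Widlund et al.) and assumed geometry
D, k_on, k_off = 0.30, 0.10, 0.41       # (um)^2/s, 1/(s um nM), 1/s
v_max, k_off_tip = 4.6, 0.26            # um/min, 1/s
n_tub, n_pf = 1625.0, 13                # dimers/um, protofilament number

a     = n_pf / n_tub                    # dimer length, um (= 8 nm)
eps   = D / a**2                        # hopping rate        -> 4.7e3 1/s
w_a   = k_on / n_tub                    # on-rate per site    -> 6e-5 1/(nM s)
w_d   = k_off                           # bulk off-rate       -> 0.41 1/s
w_dt  = k_off_tip                       # tip off-rate        -> 0.26 1/s
delta = (v_max / 60.0) * n_tub / n_pf   # polymerization rate -> 9.5 1/s
\end{verbatim}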
As the dwell time of proteins on the tip (i.e. $1/\overline{\omega}_d$) was not measured for MCAK particles, we used the measured Michaelis constant $K_M$ to estimate this value: Since the Michaelis constant determines the linear increase in the depolymerization velocity for asymptotically low MCAK concentrations, $v_{\mathrm{low\ }c}=1/K_M \times c +\mathcal{O}(c^2)$, we can use it to estimate the tip-dwell time for MCAK particles. In detail, we analytically computed the depolymerization velocity for asymptotically low concentrations using a MF and low-density approximation of our model up to first order in $c$~\cite{Reithmann2015}. As
correlations vanish under these conditions, we expect the result to be exact which allows us to infer the MCAK off-rate at the tip $\overline{\omega}_d$.
The list of ensuing parameters is given in Table~\ref{tab:MCAK_rates}.
\section{Details of lattice simulations}
To test our proposed method numerically, we carry out a pilot study of SU(3) gauge theory with $N_f = 12$ degenerate fermions in the fundamental representation. We use a set of gauge configurations that were originally generated for a finite-size study of this system~\cite{Cheng:2013xha} using a plaquette gauge action and nHYP-smeared staggered fermions~\cite{Hasenfratz:2001hp,Hasenfratz:2007rf}. Further details on the lattice action can be found in Refs.~\cite{Cheng:2011ic,Cheng:2013eu,Cheng:2013bca,Cheng:2013xha}. We consider five values of the bare gauge coupling $\beta=4.0,5.0,5.5,5.75$ and $6.0$, analyzing 46 and 31 configurations on lattice volumes of $24^3\times 48$ and $32^3\times 64$, respectively. The fermion mass is set to $m = 0.0025$, small enough that we expect the breaking of scale invariance to be dominated by the finite spatial extent $L$.
We consider only fermionic operators, and use the axial charge $A^4$ for our conserved operator $\mathcal{A}$. Since staggered fermions have a remnant U(1) symmetry, it is straightforward to construct a conserved axial charge operator with $Z_A=1$ \cite{Aoki:1999av}. We use on-site staggered operators for the pseudoscalar, vector, and nucleon, and a 1-link operator for the axial charge states. Our individual correlators are consistent with simple exponential decay, although we cannot rule out a functional dependence that includes a Yukawa-like power law correction \cite{Ishikawa:2013tua}.
Following \refcite{Luscher:2013cpa}, we adopt non-linear Wilson flow for the gauge fields and linear fermion flow. We consider 10 flow time values in the range $1.0 \le t/a^2 \le 7.0$ (note that the flow range is $\sqrt{8t}$). The strong correlations in GF lead to very small statistical errors in the flow-time dependence.
\paragraph{Analysis -- }
\begin{figure}
\includegraphics[width=0.48\textwidth]{corr-flow.png}
\caption{Dependence of the correlator ratio $\Rop_P$ on source-sink separation $x_0$ and flow scale $\sqrt{8t}$. For each value of $\sqrt{8t}$, a stable plateau in $\Rop_P$ is seen for $x_0 \gtrsim 2\sqrt{8t}$. The results shown here are on $32^3\times 64$ volumes at $\beta=5.75$. \label{fig:corr-flow}}
\end{figure}
In the following, we work in lattice units. The ratio given in \eq{eq:ratio_full} should be independent of $x_0$ at large $x_0$, as long as the operator $\op$ has well defined quantum numbers. At distances comparable to the flow range, $x_0 \lesssim \sqrt{8t}$, the flowed operators overlap and the ratios could have non-trivial and non-universal structure. Since we are using staggered fermions where the action has oscillating phase factors, in the small $x_0$ region we observe significant oscillation, as shown in \fig{fig:corr-flow} for the $\gamma_5$ pseudoscalar operator that does not have a partner in the channel. The width of the oscillation is about $2\sqrt{8 t}$, after which a stable plateau develops. The decrease in the value of the plateau as the flow time increases predicts the anomalous dimension of the pseudoscalar operator.
We work directly with the ratio $\Rop(t)$ of eq.~\ref{eq:ratio_full}, and do not attempt to extrapolate the fermion field exponent $\eta$ (obtained from using $\symop$ in eq.~\ref{eq:ratio}) to the infrared limit, as it shows much stronger finite-volume and bare coupling dependence than the full operator ratios. At fixed $t$ and $\beta$ we typically find $\eta \lesssim 0.1$.
As a consistency check we consider the vector operator, but find large systematic effects due to oscillation; although we cannot quote a precise extrapolated value, we generally find the associated anomalous dimension consistent with zero as expected.
We predict the anomalous dimension as a function of $t$ by comparing the ratios at consecutive $(t_1,t_2)$ flow time values
\begin{equation}
\gamma_\op (\beta, \bar{t}, L) =
\frac{ \log(\Rop_\op(t_1,\beta,L) / \Rop_\op(t_2,\beta,L)) } { \log(\sqrt{t_1}/\sqrt{t_2}) }
\end{equation}
where $\bar{t}=(t_1+t_2)/2$. The mass anomalous dimension is predicted by considering the pseudoscalar operator, recalling that $\gamma_m = -\gamma_{S} = - \gamma_{PS} $. We estimate the finite volume corrections by \eq{eq:finite_vol_corr}, estimating $\gamma_m$ iteratively. We have numerical data on $24^3\times 48$ and $32^3\times 64$ volumes so $s=32/24$, and \eq{eq:finite_vol_corr} increases the effective volume to $42.66$.
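A minimal sketch of this finite-difference estimator is given below (all names are ours; the finite-volume correction of \eq{eq:finite_vol_corr} is assumed to have been applied to the ratios beforehand):
\begin{verbatim}
import numpy as np

def gamma_of_t(t, R):
    """Anomalous dimension from ratios R(t) at an ordered array of flow
    times t (fixed beta and volume):
    gamma(tbar) = log(R(t1)/R(t2)) / log(sqrt(t1)/sqrt(t2))."""
    t, R = np.asarray(t, float), np.asarray(R, float)
    tbar = 0.5*(t[:-1] + t[1:])
    return tbar, np.log(R[:-1]/R[1:]) / (0.5*np.log(t[:-1]/t[1:]))
\end{verbatim}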
In \fig{fig:gamma-M} we show the infinite volume estimated $\gamma_m$ as a function of $\mu \equiv 1/\sqrt{ 8 \bar{t}}$.
There is significant dependence on the bare gauge coupling $\beta$ and also on the flow time $t$, as expected in a slowly running system. We extrapolate to the $t \to \infty$ limit as
\begin{equation}
\gamma_m(\beta,t) = \gamma_0 + c_\beta t^{\alpha_1} + d_\beta t^{\alpha_2}
\label{eq:extrapolation}
\end{equation}
motivated by the expectation that the correction terms should be due to the slowly evolving irrelevant couplings, associated with higher-dimensional operators that can mix with the operator of interest. Based on Refs.~\cite{Cheng:2013xha,Cheng:2013eu,Cheng:2013bca} we expect the FP to be closest to the $\beta=5.5-6.0$ range, so that the dependence on $\beta$ should be weakest in this range.
We perform a combined fit versus $\beta$ and $t$ using common $\gamma_0$, $\alpha_1$ and $\alpha_2$, but allowing $\beta$ dependent coefficients $c_\beta$ and $d_\beta$. The central fit, as shown in \fig{fig:gamma-M}, omits $\beta=4.0$ and discards the smallest and two largest $t$ values, predicting $\gamma_m=0.23$. The other exponents obtained are $\alpha_1 = -0.25(14)$ and $\alpha_2 = -2.37(29)$; these likely include some remaining finite-volume effects and thus should not correspond directly to irrelevant operator dimensions.
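Such a combined fit with shared $(\gamma_0,\alpha_1,\alpha_2)$ and per-$\beta$ coefficients can be set up, for instance, as below (an illustrative sketch with hypothetical starting values, not the analysis code itself):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def fit_gamma(datasets):
    """Combined fit of gamma(beta,t) = g0 + c_b*t**a1 + d_b*t**a2.
    'datasets' maps beta -> (t, gamma, err) arrays; g0, a1, a2 are shared,
    while c_b and d_b are fitted separately for each beta."""
    betas = sorted(datasets)
    nb = len(betas)
    def resid(p):
        g0, a1, a2 = p[:3]
        r = []
        for k, b in enumerate(betas):
            t, g, e = datasets[b]
            model = g0 + p[3+k]*t**a1 + p[3+nb+k]*t**a2
            r.append((model - g) / e)
        return np.concatenate(r)
    p0 = np.r_[0.2, -0.3, -2.0, np.zeros(2*nb)]   # hypothetical start values
    return least_squares(resid, p0).x
\end{verbatim}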
We vary the analysis by dropping small/large $t$ values, and also including or discarding $\beta=4.0$ and $\beta=6.0$ from the fit; from these variations we estimate a systematic error of $0.04$ on $\gamma_m$. As an additional cross-check on our finite volume correction procedure, we perform an alternative analysis in which a global fit to $\Rop_\op(t)$ is carried out assuming power-law dependence on the dimensionless ratio $\sqrt{8t}/L$. This gives a central value of 0.27. We conservatively take the difference in central values as an estimate of our finite-volume extrapolation systematic, giving the final prediction
\begin{equation}
\gamma_m = 0.23(6)
\end{equation}
combining the systematic errors in quadrature.
A significant advantage of this technique is that more complicated composite operators can be dealt with in a straightforward way. To demonstrate this, we consider the nucleon operator with our method. The nucleon shows more significant oscillations in the ratio $\Rop_N$, continuing into the plateau region; we account for the oscillations by averaging over adjacent pairs of $x_0$ values to obtain $\Rop_N$. The oscillations at large $x_0$ may be due to the coupling of the staggered nucleon operator to other wrong-parity states; numerically the coupling is small in the ratio. We define the nucleon anomalous dimension with an additional negative sign, $\gamma_N \equiv \Delta_N - d_N$, to match the convention of \refcite{Pica:2016rmv,Gracey:2018oym}. Repeating the full analysis as described yields \fig{fig:gamma-N} and predicts
\begin{equation}
\gamma_N = 0.05(5)
\end{equation}
where the finite-volume systematic error is estimated to be 0.03 and the remaining combined systematic and statistical error is 0.04.
\begin{figure}
\includegraphics[width=0.48\textwidth]{gamma-extrap-M.png}
\caption{Extrapolation of the mass anomalous dimension $\gamma_m$ to the infrared limit, as described in the text. \label{fig:gamma-M}}
\end{figure}
\begin{figure}
\includegraphics[width=0.48\textwidth]{gamma-extrap-N.png}
\caption{Extrapolation of the nucleon anomalous dimension $\gamma_N$ to the infrared limit, as described in the text. \label{fig:gamma-N}}
\end{figure}
\paragraph{Conclusion --}
We have shown that gradient flow (GF) can be used to study renormalization group (RG) transformations directly, with no need for costly ensemble matching, yielding significantly higher statistical precision and lower cost than conventional MCRG techniques. The use of correlation functions at distances $x_0 \gg \sqrt{t}$ is crucial for our construction, as is the use of a conserved current to explicitly account for wavefunction renormalization under RG, without which the GF transformation does not have a fixed point. We have worked effectively at zero fermion mass, but it would be interesting to consider whether extrapolation from $m \neq 0$ in the vicinity of a fixed point would be practical in future work. There are many possibilities for future studies of other conformal theories using this technique, such as $\mathcal{N}=4$ super-Yang-Mills \cite{Schaich:2015daa} or $\phi^4$ theory in three dimensions.
Finally, it would be very interesting if our derivation could be generalized to QCD-like theories in which there is no IR-stable fixed point to work around. Such an extension could provide a new and general method for operator renormalization in lattice QCD.
\begin{acknowledgments}
\paragraph{Acknowledgments --} A.H. thanks Luigi del Debbio, Chris Monahan, Andreas Schindler, Georg Bergner and the participants of the workshop ``Numerical approaches to holography, quantum gravity and cosmology'', Edinburgh, for valuable comments and useful discussions. We are grateful to Evan Weinberg for his help with the flowed spectrum measurement. Computations for this work were carried out with resources provided by the USQCD Collaboration, which is funded by the Office of Science of the U.~S.~Department of Energy. This work has been supported by the U.~S.~Department of Energy under grant number DE-SC0010005. Brookhaven National Laboratory is supported by the U.~S.~Department of Energy under contract DE-SC0012704.
\end{acknowledgments}
\section{Introduction}\label{sec:0}
The Nirenberg problem, raised by Nirenberg in the years 1969-1970,
asks whether, on the standard sphere $(\Sn,g_0)$ $(n\geq 2)$, one
can find a metric $g$ conformal to $g_0$ such that the scalar curvature (Gauss curvature for $n=2$) of $g$ is equal to the given function $K$. For this reason the Nirenberg problem is also called the prescribed curvature problem on $\Sn$. This problem is equivalent to solving
$$
-\Delta_{g_{0}}w+1=Ke^{2w} \quad\mbox{ on }\, \mathbb{S}^2,
$$
and
$$
-\Delta_{g_{0}}v+c(n)R_0v=c(n)Kv^{\frac{n+2}{n-2}} \quad\mbox{ on }\, \Sn \quad \mbox{ for } \,n\geq 3,
$$
where $\Delta_{g_{0}}$ is the Laplace-Beltrami operator on $(\Sn, g_{0})$, $c(n)=(n-2)/(4(n-1))$, $R_0=n(n-1)$ is the scalar curvature of $(\Sn, g_{0})$ and $v=e^{\frac{n-2}{4}w}$. The Nirenberg problem has been studied extensively; we refer, for example, to \cite{Cwx1,CL1,CL2,BC1,BC2,Li93,Li93b,Li95,Li96,CGY,Han,CLin1,CLin2,Yan1,DLY,LWX,WY,SZ,CY91,ES,BE,M,GMPY,PWW} and references therein. For more recent and further studies, see \cite{Ahmedou,MM}.
In this paper, we are concerned with the \emph{fractional Nirenberg problem}, that is, the Nirenberg equation in the fractional setting, which constitutes a branch of geometric analysis in itself. This problem is naturally formulated in terms of the $\sigma$-curvature: find a new metric $g$ on the standard sphere $\Sn$, $n\geq 2$, conformally equivalent to the standard one $g_{0}$, such that its $\sigma$-curvature is equal to a prescribed function $K$ on $\Sn$. More precisely, we investigate the existence of solutions to the following nonlinear equation:\begin{equation}\label{maineq}
P_{\sigma}(v)=c(n,\sigma) K v^{\frac{n+2 \sigma}{n-2 \sigma}} ,\quad v>0\quad \text{ on }\,\Sn,
\end{equation}
where $\sigma\in (0,1)$, $P_{\sigma}=\Gamma(B+\frac{1}{2}+\sigma)/\Gamma(B+\frac{1}{2}-\sigma)$ is the $2\sigma$-order conformal Laplacian on $\Sn$,
$B=\sqrt{-\Delta_{g_{0}}+(\frac{n-1}{2})^{2}}$, $c(n, \sigma)=\Gamma(\frac{n}{2}+\sigma) / \Gamma(\frac{n}{2}-\sigma)$,
$\Gamma$ is the Gamma function, and $K$ is a given function on $\Sn$.
The operator $P_{\sigma}$ can be seen more concretely on $\mathbb{R}^n$
using stereographic projection. Indeed, let
$$
F: \mathbb{R}^n \rightarrow \Sn \backslash\{\mathcal{N}\}, \quad x \mapsto\Big(\frac{2 x}{|x|^{2}+1}, \frac{|x|^{2}-1}{|x|^{2}+1}\Big)
$$
be the inverse of stereographic projection, where $\mathcal{N}$ is the north pole of $\Sn$. Then it holds that
$$
P_{\sigma}(\phi) \circ F=|J_{F}|^{-\frac{n+2 \sigma}{2 n}}(-\Delta)^{\sigma}(|J_{F}|^{\frac{n-2 \sigma}{2 n}}(\phi \circ F))\quad \text{ for }\, \phi\in C^{\infty}(\Sn),
$$
where $|J_{F}|=(\frac{2}{1+|x|^{2}})^{n}$, and $(-\Delta)^{\sigma}$ is the fractional Laplacian operator defined by
\begin{align*}
(-\Delta)^{\sigma} u(x) &=C_{n, \sigma} \,\mathrm{P.V.}\, \int_{\mathbb{R}^n} \frac{u(x)-u(y)}{|x-y|^{n+2 \sigma}}\, \d y,
\end{align*}
where $\mathrm{P.V.}$ is the principal value and $C_{n, \sigma}=\pi^{-(2 \sigma+\frac{n}{2})} \frac{\Gamma(\frac{n}{2}+\sigma)}{\Gamma(-\sigma)}$.
This operator is well defined on $\mathcal{S}$, the Schwartz space of rapidly decreasing $C^{\infty}$ functions in $\mathbb{R}^n$, and it can
be equivalently defined as
$$
(-\Delta)^{\sigma} u(x)=-\frac{1}{2} C_{n, \sigma} \int_{\mathbb{R}^n} \frac{u(x+y)+u(x-y)-2 u(x)}{|y|^{n+2 \sigma}}\, \d y,
$$
see \cite{Stein,Hitchhiker}. Then, letting $u=|J_{F}|^{\frac{n-2 \sigma}{2 n}}(v \circ F)$, one can transfer Eq. \eqref{maineq} into the following equation with critical exponent
\begin{align}\label{maineq1}
(-\Delta)^{\sigma} u=K(x) u^{\frac{n+2 \sigma}{n-2 \sigma}},\quad u>0 \quad \text{ in }\,\mathbb{R}^n.
\end{align}
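For the reader's convenience, we record the short formal computation behind this transfer. Applying the intertwining identity above with $\phi=v$ and using Eq. \eqref{maineq} yields
\begin{align*}
(-\Delta)^{\sigma}u
&=|J_{F}|^{\frac{n+2\sigma}{2n}}\,\big(P_{\sigma}(v)\circ F\big)
=c(n,\sigma)\,(K\circ F)\,|J_{F}|^{\frac{n+2\sigma}{2n}}\,(v\circ F)^{\frac{n+2\sigma}{n-2\sigma}}
=c(n,\sigma)\,(K\circ F)\,u^{\frac{n+2\sigma}{n-2\sigma}},
\end{align*}
since $u^{\frac{n+2\sigma}{n-2\sigma}}=|J_{F}|^{\frac{n+2\sigma}{2n}}(v\circ F)^{\frac{n+2\sigma}{n-2\sigma}}$; the positive constant $c(n,\sigma)$ can then be absorbed by rescaling $u$, so that \eqref{maineq1} holds with $K(x)$ standing for $K\circ F$.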
In general, the intertwining operator $P_{\sigma}$ can be well-defined for all $\sigma\in (0,\frac{n}{2})$ when $n\geq 2$, see e.g., Branson \cite{Branson}. For $\sigma=1$, $P_{1}=-4 \frac{n-1}{n-2} \Delta_{g_{0}}+n(n-1)$ is the well known conformal Laplacian associated with the classical Nirenberg problem. For $\sigma=2$, $P_{2}=\Delta_{g_{0}}^{2}-\frac{1}{2}(n^{2}-2n-4)\Delta_{g_{0}}+\frac{n-4}{16} n(n^{2}-4)$ is the well known Paneitz operator. Up to positive constants, $P_{1}(1)$ is the scalar curvature associated to $g_{0}$ and $P_{2}(1)$ is the so-called $Q$-curvature. In fact, $P_{1}$ and $P_{2}$ are the first two terms of a sequence of conformally covariant elliptic operators $P_{k}$, which exists for all positive integers $k$ if $n$ is odd and for $k\in\{1, \ldots, n/2\}$ if $n$ is even. These operators were first introduced by Graham, Jenne, Mason and Sparling in \cite{GJMS}. In \cite{GZ}, Graham and Zworski showed that $P_{k}$ can be realized as the residues at $\sigma=k$ of a meromorphic family of scattering operators. Unlike the Laplacian, the fractional Laplacian is a non-local operator. In a seminal paper \cite{CS2}, Caffarelli and Silvestre expressed the non-local operator $(-\Delta)^{\sigma}$, $\sigma \in(0,1)$, on $\mathbb{R}^n$ as a generalized Dirichlet-Neumann map for an elliptic boundary value problem with local differential operators. Later on, Chang and Gonz\'alez \cite{CG} showed that for any $\sigma\in (0,\frac{n}{2})$, the operator $P_{\sigma}$ can also be defined as a Dirichlet-to-Neumann operator of a conformally compact Einstein manifold.
The fractional operators $P_{\sigma}$ and their associated fractional order curvatures $P_{\sigma}(1)$, which will be called $\sigma$-curvatures, have been the subject of many studies. On general manifolds, the prescribing $\sigma$-curvature problem was considered in \cite{GZ,CG,GMS,MQ,QR} and references therein.
Throughout the paper, we assume $\sigma \in (0,1)$ and $n\geq 2$ unless otherwise stated.
\medskip
Problem \eqref{maineq} (or \eqref{maineq1}) has been a focus of research in recent decades, and it continues to inspire new thoughts, see for example \cite{JLX,ACH,JLX2,LR1,LR2,CZ1,AACHM,CY,CA2,LY,Niu18,Liuzy,GNT,GN,CA1}.
Fundamental progress was made by Jin, Li and Xiong in \cite{JLX,JLX2}, who obtained compactness and existence results by applying blow-up analysis and a degree counting argument. It is also worth noting that Jin, Li and Xiong in \cite{JLX3} generalized the previous results to all $\sigma\in (0,\frac{n}{2})$ by using integral representations. Later on, the authors in \cite{AACHM,ACH,CY,CA1} obtained some existence criteria by establishing Euler-Hopf type index formulas. Recently, there have been some works devoted to multiplicity results, mainly using the Lyapunov-Schmidt reduction method (see e.g., \cite{CA2,LY,CZ1,Niu18,Liuzy,LR2,LR1,GN}).
\medskip
The aim of this paper is to investigate the number of positive solutions to Eq. \eqref{maineq} (or \eqref{maineq1}) under various local assumptions on the prescribed function $K$. Roughly speaking, we obtain a $C^0$ density result for the fractional Nirenberg problem \eqref{maineq} by constructing infinitely many multi-bump solutions to the corresponding perturbed equations. As a variation of this idea, the related
problem \eqref{maineq1} with $K(x)$ being periodic in one of the variables is also studied, and infinitely many multi-bump solutions (modulo translations by its periods) are obtained under some flatness conditions. Furthermore, the solutions we construct in this paper concentrate at local maximum points of $K(x)$ whose mutual distances are very large.
\medskip
The first main theorem of this paper deals with the existence of multi-bump solutions to a perturbed \emph{fractional Nirenberg problem}.
\begin{thm}\label{thm:0.1}
Let $K \in L^{\infty}(\Sn)$ and assume that for some $\widetilde{x} \in \Sn$, $\widetilde{\varepsilon}>0$, $K(\widetilde{x})>0$, and $K \in C^{0}(B_{\widetilde{\varepsilon}}(\widetilde{x}))$ ($B_{\widetilde{\varepsilon}}(\widetilde{x})$ denotes the geodesic ball in $\Sn$ of radius $\widetilde{\varepsilon}$ and centered at $\widetilde{x}$). Then for any $\varepsilon\in(0,\widetilde{\varepsilon})$, any integers $k\geq 1$ and $m\geq 2$, there exists
$K_{\varepsilon, k, m} \in L^{\infty}(\Sn)$ with $K_{\varepsilon, k, m}-K \in C^{0}(\Sn)$, $\|K_{\varepsilon, k, m}-K\|_{C^{0}(\Sn)}<\varepsilon$ and $K_{\varepsilon, k, m}\equiv K$ in $\Sn \backslash B_{\varepsilon}(\widetilde{x})$, such that, for each $2 \leq s \leq m$, the equation\begin{align}\label{perturbed}
P_\sigma(v)=c(n,\sigma)K_{\varepsilon, k, m} v^{\frac{n+2\sigma}{n-2\sigma}} \quad \mbox{ on } \,\Sn
\end{align}
has at least $k$ positive solutions with $s$ bumps.
\end{thm}
A couple of remarks regarding Theorem \ref{thm:0.1} are in order.
\begin{rem}\begin{itemize}
\item[(1)] For the precise meaning of ``$s$ bumps", refer to the proof of Theorem \ref{thm:0.1} in Section \ref{sec:6}. Roughly speaking, we say a solution has $s$ bumps if most of its mass is concentrated in $s$ disjoint regions.
Since the number of bumps and the number of solutions can be chosen arbitrarily, we obtain the existence of infinitely many multi-bump solutions to Eq. \eqref{perturbed}.
\item[(2)] One cannot expect to perturb any $K(x)$ near any point $\widetilde{x} \in \Sn$ in the $C^{1}$ sense to obtain the existence of solutions. This is evident if we take $K(x)=x_{n+1}+2$ with $x=(x_1,\ldots,x_{n+1})\in \Sn$ and $\widetilde{x}$ to be different from the north and the south poles, since the perturbed function would violate the Kazdan-Warner type condition (see \cite[Proposition A.1]{JLX}).
\item[(3)] The perturbation $K_{\varepsilon, k, m}$ is constructed explicitly
in Section \ref{sec:5}; the construction amounts to gluing approximate solutions into genuine
solutions. The method is variational, rather than the Lyapunov-Schmidt reduction method used in \cite{CA2,LY,CZ1,Niu18,Liuzy,LR2,LR1,GN}, etc. The solutions we obtain have most of their mass in disjoint small balls centered at the maximum points of $K_{\varepsilon, k, m}$, which are far away from each other; this accounts for why infinitely many solutions of Eq. \eqref{perturbed} exist. Moreover, the solutions we construct are nearly bubble functions; we refer to Section \ref{sec:5} for more details.
\item[(4)] If $K_{\varepsilon}=1+\varepsilon K(x)$ and $K(x)$ has at least two critical points satisfying some local conditions, Chen-Zheng \cite{CZ1} showed that Eq. \eqref{maineq1} with $K=K_{\varepsilon}$ has two multi-bump solutions when $\varepsilon$ is small.
Here we give a more general existence result, since we can perturb any given positive continuous function in any neighborhood of any given point on $\Sn$ in such a way that the perturbed equation admits as many solutions as desired.
\item[(5)]
The main feature of Theorem \ref{thm:0.1} is that, even if a given function $K \in L^{\infty}(\Sn)$ cannot be realized as the $\sigma$-curvature of a metric $g$ conformal to $g_{0}$, we can nevertheless find a function $K'$, arbitrarily close to $K$ in $C^{0}(\Sn)$, which is the $\sigma$-curvature of as many metrics conformal to $g_{0}$ as we want.
\end{itemize}
\end{rem}
As a consequence of Theorem \ref{thm:0.1}, we also have
\begin{cor}\label{cor:1}
The smooth $\sigma$-curvature functions of metrics conformal to $g_0$ are dense in $C^{0}(\Sn)$.
\end{cor}
\medskip
Next we consider the related problem \eqref{maineq1}. Before stating the results, we introduce some notation.
Let $E$ be the completion of the space $C_{c}^{\infty}(\mathbb{R}^n)$ with respect to the norm
\begin{align*}
\| u\|_E:=\Big(\int_{\mathbb{R}^n}|(-\Delta)^{\frac{\sigma}{2}} u|^2\,\d x\Big)^{1/2},
\end{align*}
which, up to a positive constant depending only on $n$ and $\sigma$, coincides with the Gagliardo seminorm $\big(\int_{\mathbb{R}^n\times\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2\sigma}}\,\d x\,\d y\big)^{1/2}$; see \cite{Hitchhiker}. Denote by $2_{\sigma}^*:=\frac{2n}{n-2\sigma}$ the fractional critical Sobolev exponent. It is well known that $E$ can be embedded into $L^{2_{\sigma}^*}(\mathbb{R}^n)$ and that the sharp
$\sigma$-order Sobolev inequality is
\begin{equation}\label{sharp1}
S_{n,\sigma}\Big(\int_{\mathbb{R}^n}|u|^{2_{\sigma}^{*}}\, \d x \Big)^{1/2_{\sigma}^{*}} \leq \Big(\int_{\mathbb{R}^n}|(-\Delta)^{\frac{\sigma}{2}} u|^{2}\,\d x \Big)^{1/2}.
\end{equation}
For any $z\in\mathbb{R}^n$ and $\lam >0$, set
\begin{align}\label{bubble}
\delta(z,\lam )(x):=2^{\frac{n-2\sigma}{2}}\Big(\frac{\Gamma(\frac{n+2\sigma}{2})}{\Gamma(\frac{n-2\sigma}{2})}\Big)^{\frac{n-2\sigma}{4\sigma}}\Big(\frac{\lam }{\lam ^{2}+|x-z|^{2}}\Big)^{\frac{n-2\sigma}{2}}.
\end{align} Then $\delta( z,\lam )$ solves the problem \begin{align}\label{bubbleeq}
(-\Delta)^{\sigma}u=u^{\frac{n+2\sigma}{n-2\sigma}},\quad u>0 \quad \text{ in }\, \mathbb{R}^n
\end{align} for every $(z,\lam )\in \mathbb{R}^n\times (0,+\infty)$ (see, e.g., \cite{MQ,JLX}). Moreover, \eqref{bubble} and its non-zero constant multiples attain equality in the sharp $\sigma$-order Sobolev inequality \eqref{sharp1}; see Lieb \cite{Lie83}.
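In particular, all bubbles are generated from $\delta(0,1)$ by translation and dilation: a direct computation from \eqref{bubble} gives
\begin{align*}
\delta(z,\lam )(x)=\lam ^{-\frac{n-2\sigma}{2}}\,\delta(0,1)\Big(\frac{x-z}{\lam }\Big), \qquad x\in\mathbb{R}^n,
\end{align*}
which explains why the quantities in \eqref{Sn} below do not depend on $(z,\lam )$.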
Let $D$ be the completion of $C_{c}^{\infty}(\overline{\mathbb{R}}^{n+1}_+)$ with respect to the weighted Sobolev norm
\begin{align}
\| U\|_{D}:=\Big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla U (X)|^2 \,\d X\Big)^{1/2}
\end{align}
equipped with the natural inner product
\begin{align}\label{inner}
\langle U,V \rangle_D:=\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}\nabla U \cdot \nabla V \,\d X \quad \text{ for }\, U,V\in D,
\end{align}
where $X=(x,t)\in \mathbb{R}_{+}^{n+1}:=\mathbb{R}^n \times(0, \infty)$.
\medskip
Throughout the paper, we write $\|\cdot\|$ (resp. $\|\cdot\|_{\sigma}$) to denote the norm of $E$ (resp. $D$) and simply use $E^+$ (resp. $D^+$) to denote the set consisting of all positive functions in $E$ (resp. $D$).
We analyze Eq. \eqref{maineq1} via the extension formulation for fractional Laplacians established by Caffarelli and Silvestre \cite{CS2}. This is by now a commonly used tool: instead of Eq. \eqref{maineq1} we may study a degenerate elliptic equation with a Neumann boundary condition in one dimension higher:
\begin{gather}\label{maineq2}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2 \sigma} \nabla U)=0 && \text { in }\, \mathbb{R}_{+}^{n+1},\\ &\partial_{\nu}^{\sigma} U=N_{\sigma} K(x)U(x,0)^{\frac{n+2 \sigma}{n-2 \sigma}}&& \text { on }\, \mathbb{R}^n,
\end{aligned}\right.
\end{gather}
where $u(x)=U(x,0)$, $$\partial_{\nu}^{\sigma}U:=-\lim_{t \rightarrow 0^{+}}t^{1-2\sigma}\pa_t U(X),$$ and $N_{\sigma}=2^{1-2 \sigma} \Gamma(1-\sigma) / \Gamma(\sigma)$. The extension will always refer to the canonical one:
\begin{align}\label{extension}
U(X)=\mathcal{P}_{\sigma}[u]:=\beta(n,\sigma)\int_{\mathbb{R}^n}\frac{t^{2\sigma}}{(|x-\xi|^2+t^2)^{\frac{n+2\sigma}{2}}} u(\xi)\, \d \xi,
\end{align}
where $\beta(n,\sigma)$ is a normalization constant; that is, $U$ is given by a Poisson-type integral. We always drop the harmless constant $N_{\sigma}$ for brevity and refer to $U=\mathcal{P}_{\sigma}[u]$ in \eqref{extension} as the \emph{extension} of $u$. For more details, one may refer to \cite{CS2}.
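We shall freely use two basic properties of this extension, both established in \cite{CS2}: for sufficiently regular $u$,
\begin{align*}
\pa_{\nu}^{\sigma}\,\mathcal{P}_{\sigma}[u]=N_{\sigma}(-\Delta)^{\sigma}u \quad\text{ on }\,\mathbb{R}^n,
\qquad\text{and}\qquad
\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla \mathcal{P}_{\sigma}[u]|^{2}\,\d X=N_{\sigma}\int_{\mathbb{R}^n}|(-\Delta)^{\frac{\sigma}{2}}u|^{2}\,\d x,
\end{align*}
which together explain the equivalence between Eq. \eqref{maineq1} and the boundary value problem \eqref{maineq2}.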
Utilizing the above extension formulation, we know that $\widetilde{\delta}(z,\lam):=\mathcal{P}_{\sigma}[\delta(z,\lam )]$ solves
\begin{align*}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2 \sigma} \nabla U)=0 & &\text { in }\, \mathbb{R}_{+}^{n+1},\\&\pa_{\nu}^{\sigma}U=N_{\sigma}U(x,0)^{\frac{n+2\sigma}{n-2\sigma}}&&\text { on } \,\mathbb{R}^n, \\ &U(x,0)=\delta(z,\lam ) && \text { on } \,\mathbb{R}^n,
\end{aligned}\right.
\end{align*}
and as an immediate consequence we have
\begin{align}\label{equla}
\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla\widetilde{\delta}(z,\lam )|^2\,\d X=N_{\sigma}\int_{\mathbb{R}^n}\delta(z,\lam )^{2^{*}_{\sigma}}\,\d x.
\end{align}Moreover, the extremal functions of the Sobolev trace inequality on $H^{1}(t^{1-2\sigma},\mathbb{R}^{n+1}_+)$
\begin{align}\label{trace}
\mathcal{S}_{n,\sigma} \Big(\int_{\mathbb{R}^n}|U(x,0)|^{2^{*}_{\sigma}}\,\d x\Big)^{1/2^{*}_{\sigma}}\leq \Big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla U|^2\,\d X\Big)^{1/2}
\end{align}
have the form $U(x,t)=\alpha \widetilde{\delta}(z,\lam )$ for any $\alpha\in \mathbb{R}\backslash\{0\}$, $z\in \mathbb{R}^n$ and $\lam>0$. Here $\mathcal{S}_{n,\sigma}=N_{\sigma}^{-1/2}S_{n,\sigma}$ is the optimal constant and $H^{1}(t^{1-2\sigma},\mathbb{R}^{n+1}_+)$ is the closure of $C_{c}^{\infty}(\overline{\mathbb{R}}^{n+1}_{+})$ under the norm
\begin{align*}
\|U\|_{H^{1}(t^{1-2\sigma},\mathbb{R}^{n+1}_+)}:=\Big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}(|U|^2+|\nabla U|^2)\,\d X\Big)^{1/2},
\end{align*}
see, e.g., \cite{JX2,GMS}. Furthermore, it follows from \eqref{bubble}, \eqref{bubbleeq} and \eqref{equla} that\begin{align}\label{Sn}
\| \widetilde{\delta}(0,1)\|_{\sigma}=\| \widetilde{\delta}( z,\lam )\|_{\sigma},\quad \mathcal{S}_{n,\sigma}
=N_{\sigma}^{-1/2^{*}_{\sigma}}\| \widetilde{\delta}(0,1)\|_{\sigma}^{2\sigma/n}
\end{align}for any $(z,\lam )\in \mathbb{R}^n\times (0,+\infty)$. We call $\delta(z,\lam )$ and $\widetilde{\delta}(z,\lam )$ \emph{bubbles}.
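For instance, the first equality in \eqref{Sn} follows from \eqref{equla}, the scaling identity for bubbles recorded above and a change of variables:
\begin{align*}
\|\widetilde{\delta}(z,\lam )\|_{\sigma}^{2}=N_{\sigma}\int_{\mathbb{R}^n}\delta(z,\lam )^{2^{*}_{\sigma}}\,\d x
=N_{\sigma}\int_{\mathbb{R}^n}\lam ^{-n}\,\delta(0,1)\Big(\frac{x-z}{\lam }\Big)^{2^{*}_{\sigma}}\d x
=N_{\sigma}\int_{\mathbb{R}^n}\delta(0,1)^{2^{*}_{\sigma}}\,\d y
=\|\widetilde{\delta}(0,1)\|_{\sigma}^{2}.
\end{align*}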
For $K\in L^{\infty}(\mathbb{R}^n)$, we define the energy functional $I_K: D\rightarrow \mathbb{R}$ by
\begin{align}\label{functional}
I_{K}(U):=\frac{1}{2} \int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla U|^{2}\, \d X-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n}K(x)|U(x,0)|^{2^{*}_{\sigma}}\, \d x.
\end{align} Obviously, a positive critical point of $I_K$ gives rise to a positive solution of Eq. \eqref{maineq2}, and thus to a positive solution of Eq. \eqref{maineq1}.
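Indeed, for $U, \Phi \in D$ the first variation of \eqref{functional} is
\begin{align*}
I_{K}'(U)[\Phi]=\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}\nabla U\cdot\nabla \Phi\,\d X-\int_{\mathbb{R}^n}K(x)|U(x,0)|^{2^{*}_{\sigma}-2}\,U(x,0)\,\Phi(x,0)\,\d x,
\end{align*}
so critical points of $I_{K}$ are precisely the weak solutions of \eqref{maineq2} (with the constant $N_{\sigma}$ dropped, as agreed above).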
For $z_{j}=(z_{j 1}, \ldots, z_{j n}) \in \mathbb{R}^n$, $\lam _{j} \in (0,+\infty)$ and $j=1, \ldots, k$, let
\begin{align*}
D_{z, \lam , k}:=\Big\{v \in D:\big\langle \widetilde{\delta}( z_{j}, \lam _{j} ), v\big\rangle=\big\langle\frac{\partial \widetilde{\delta}( z_{j}, \lam _{j} )}{\partial \lam _{j}}, v\big\rangle=\big\langle\frac{\partial \widetilde{\delta}( z_{j}, \lam _{j} )}{\partial z_{j i}}, v\big\rangle=0\Big\}
\end{align*}
for $j=1, \ldots, k$ and $i=1, \ldots, n$. Here and in the following, $\langle \cdot,\cdot\rangle$ denotes the inner product \eqref{inner}.
Let $K\in L^{\infty}(\mathbb{R}^n)$ and let $O^{(1)}, \ldots, O^{(m)} \subset \mathbb{R}^n$ be open sets with $\operatorname{dist}(O^{(i)}, O^{(j)}) \geq 1$ for all $ i \neq j$. If $K \in C^{0}(\bigcup_{i=1}^{m} O^{(i)})$,
we define $V(m, \varepsilon):=V(m, \varepsilon, O^{(1)}, \ldots, O^{(m)}, K)$ as the following open set in $D$ for $\varepsilon>0$:
\begin{align}\label{bumps}\begin{aligned}
V(m, \varepsilon):=\Big\{
U\in D:&\,\exists \,\alpha=(\alpha_{1}, \ldots, \alpha_{m}) \in \mathbb{R}^m,\, \exists\, z=(z_{1}, \ldots, z_{m}) \in O^{(1)}\times \ldots \times O^{(m)}, \\&\,\exists\,\lam =(\lam _{1}, \ldots, \lam _{m}) ,\,\lam _{i}>\varepsilon^{-1},\,\forall \,i\leq m, \text{ such that}\\ &\, |\alpha_{i}-K(z_{i})^{(2\sigma-n)/4\sigma}|<\varepsilon,\, \forall \,i\leq m,\text{ and}
\\&\, \|U-\varphi(\alpha,z , \lam )\|_{\sigma}<\varepsilon
\Big\},\end{aligned}
\end{align}
where
\begin{align*}\varphi(\alpha,z, \lam ):=\sum_{i=1}^{m} \alpha_{i} \widetilde{\delta}( z_{i},\lam _{i} ).
\end{align*}
The open set $V(m, \varepsilon)$ records the concentration rates and the locations of the points of concentration. Furthermore, the family of solutions we construct is of the form (after stereographic projection for Eq. \eqref{maineq})
\begin{align*}
U=\sum_{i=1}^{m} \alpha_{i} \widetilde{\delta}( z_{i},\lam _{i} )+v,
\end{align*}
where $v\in D_{z,\lam , m}$ and the contribution of the error term $v$ is negligible (see Proposition \ref{prop:3.1}). Moreover, we construct multi-bump solutions near critical points of $K(x)$, and the number of bumps can be chosen arbitrarily large. For this purpose, we assume that $K(x)$ satisfies the following conditions:
\begin{itemize}
\item[$(K_1)$] $K(x)$ is periodic in at least one variable, that is, there is a positive constant $T$, such that $K(x_1+lT, x_2 , \ldots, x_n)=K(x_1 , x_2 ,\ldots, x_n)$ for any integer $l$ and $x\in\mathbb{R}^n$.
\item[$(K_2)$] Let $\Sigma$ denote the set consisting of all the critical points $z$ of $K(x)$, satisfying (after a suitable rotation of the coordinate system depending on $z$),
\begin{align}
K(x)=K(z)+\sum_{i=1}^{n} a_{i}|x_{i}-z_{i}|^{\beta}+R(|x-z|)
\end{align}
for $x$ close to $z$, where $a_{i}$ and $\beta$ are constants depending on $z$, $a_{i} \neq 0$ for $i=1, \ldots, n$, $\sum_{i=1}^{n} a_{i}<0$, $\beta \in(n-2\sigma, n)$, and $R(y)$ is $C^{[\beta]-1,1}$ near $0$ and satisfies$$\sum_{s=0}^{[\beta]}|\nabla^{s} R(y)||y|^{-\beta+s}=o(1)\quad \text{as $y\rightarrow 0$,}$$ where $C^{[\beta]-1,1}$ means that the derivatives up to order $[\beta]-1$ are Lipschitz functions, $[\beta]$ denotes the integer part of $\beta$ and $\nabla^{s}$ denotes all possible partial derivatives of order $s$.
\end{itemize}
We remark that conditions of $(K_2)$ type were originally introduced by Li in \cite{Li95} and are widely used in the fractional setting; see, e.g., \cite{Niu18,JLX}.
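As a model case (a sketch whose verification we leave to the interested reader), for $T>0$ and $\beta\in(n-2\sigma,n)$ with $\beta>1$ one may take
\begin{align*}
K(x)=1-\sum_{i=1}^{n}\Big|\sin\frac{\pi x_{i}}{T}\Big|^{\beta},
\end{align*}
which is $T$-periodic in each variable, so $(K_1)$ holds; near every global maximum point $z\in T\mathbb{Z}^n$ one has $K(x)=K(z)-(\pi/T)^{\beta}\sum_{i=1}^{n}|x_{i}-z_{i}|^{\beta}+R$, with $a_{i}=-(\pi/T)^{\beta}$ and a remainder $R$ obeying the smallness conditions in $(K_2)$, so that these points belong to $\Sigma$.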
\begin{thm} \label{thm:0.2}Assume that $K(x) \in C^{1}(\mathbb{R}^n) \cap L^{\infty}(\mathbb{R}^n)$ satisfies $(K_1)$-$(K_2)$ and
\begin{itemize}
\item[$(K_3)$] $K_{\max } := \max _{x \in \mathbb{R}^n} K(x)>0$ is achieved and the set $K^{-1}(K_{\max }) := \{x \in \mathbb{R}^n: K(x)=K_{\max }\}$ has at least one bounded connected component,
denoted as $\mathscr{C}$.
\end{itemize}
Then for any integer $m\geq 2$, Eq. \eqref{maineq2} has infinitely many multi-bump solutions in $D$ modulo translations by $T$ in the $x_{1}$ variable. More precisely, for any $\varepsilon>0$ and $x^*\in\mathscr{C}$, there exists a constant $l^{*}>0$ such that for any integer $k$ with $2\leq k \leq m$ and any integers $l^{(1)},\ldots, l^{(k)}$ satisfying $\min _{1 \leq i \leq k}|l^{(i)}|\geq l^{*}$ and $\min _{i \neq j}|l^{(i)}-l^{(j)}| \geq l^{*}$, there is at least one solution $U$ of Eq.
\eqref{maineq2} in $V(k, \varepsilon, B_{\varepsilon}(x^{(1)}), \ldots, B_{\varepsilon}(x^{(k)}))$ with $$
k c-\varepsilon \leq I_{K}(U) \leq k c+\varepsilon,
$$ where $$x^{(i)}=x^{*}+(l^{(i)} T, 0,\ldots,0),\quad c=\frac{\sigma}{n}(K(x^{*}))^{(2 \sigma-n) / 2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma},$$
and $V(k, \varepsilon, B_{\varepsilon}(x^{(1)}), \ldots, B_{\varepsilon}(x^{(k)}))$ are some subsets of $D$ defined in \eqref{bumps}.
\end{thm}
Several remarks involving the hypotheses of the above theorem are in order.
\begin{rem}
\begin{itemize}
\item[(1)] We can see from $(K_3)$ that there exists a bounded open neighborhood $O$ of $\mathscr{C}$ such that $K_{\max}\geq \max_{x\in\pa O}K+\delta$
for some small positive number $\delta$. Together with assumption $(K_1)$, this implies that $K(x)$ has a sequence of local maximum points $z_j$ with $|z_j|\rightarrow \infty$ as $j\rightarrow \infty$.
\item[(2)] Condition $(K_3)$ is sharp in the sense that one can easily construct examples (such as the one below) showing that if $(K_3)$ is not satisfied, Eq. \eqref{maineq1} (or equivalently Eq. \eqref{maineq2}) may have no nontrivial solutions; thus $(K_3)$ is not merely a technical hypothesis.
\item[(3)] $U\in V(k, \varepsilon, B_{\varepsilon}(x^{(1)}), \ldots, B_{\varepsilon}(x^{(k)}))$ implies that $U$ has most of
its mass concentrated in $B_{\varepsilon}(x^{(1)})$, $\ldots$, $B_{\varepsilon}(x^{(k)})$. In particular, if $(l^{(1)}, \ldots, l^{(k)}) \neq (\widetilde{l}^{(1)}, \ldots, \widetilde{l}^{(k)})$, the corresponding solutions $U$ and $\widetilde{U}$ are different.
\item[(4)] $c$ is the mountain pass value corresponding to Eq. \eqref{maineq2} and $\mathcal{S}_{n,\sigma}$ is the sharp constant in \eqref{trace}.
\item[(5)] Note that the authors of \cite{Niu18,LR1} apply the Lyapunov-Schmidt reduction method to obtain infinitely many multi-bump solutions clustered on some lattice points in $\mathbb{R}^n$ under similar assumptions, and Liu \cite{Liuzy} uses the same method to construct infinitely many concentrating solutions to Eq. \eqref{maineq1} under the assumption that $K(x)$ has a sequence of strict local maximum points moving to infinity. The solutions constructed in Theorem \ref{thm:0.2}, roughly speaking, concentrate at $k$ different points whose mutual distances are very large.
\end{itemize}
\end{rem}
\begin{exa}[Nonexistence] Let $K(x) \in C^{1}(\mathbb{R}^n) \cap L^{\infty}(\mathbb{R}^n)$ be such that $K(x)$ and $\nabla K(x)$ are bounded on $\mathbb{R}^n$ and $\frac{\pa K}{\pa x_2}$ is nonnegative but not identically zero. Then the only nonnegative solution of Eq. \eqref{maineq1} in $E$ is the trivial solution $u\equiv 0$.
\end{exa}
\begin{proof}
Let $u\geq 0$ be any solution in $E$. Multiplying \eqref{maineq1} by $\frac{\pa u}{\pa x_2}$ and using the Kazdan-Warner type identity \cite[Proposition A.1]{JLX}, we obtain $\int_{\mathbb{R}^n}\frac{\pa K}{\pa x_2}u^{2^{*}_{\sigma}}\,\d x=0$.
The hypotheses on $K(x)$ then imply that $u$ vanishes identically on an open set, hence $u\equiv 0$ by unique continuation (see, e.g., \cite{L.H}).
\end{proof}
In fact, more can be said about the solutions obtained in Theorem \ref{thm:0.2}.
\begin{thm}\label{thm:0.3}Assume that $K(x) \in L^{\infty}(\mathbb{R}^n)$ satisfies $(K_1)$-$(K_2)$, and
\begin{itemize}
\item[$(K_3)^{\prime}$] There exist some positive constant $A_{1}$ and a bounded open set $O \subset \mathbb{R}^n$ such that
\begin{gather*}
K \in C^{1}(\overline{O}),\\ 1 / A_{1} \leq K(x) \leq A_{1}, \quad \forall \, x \in \overline{O},\\\max_{x\in\overline{O}}K(x)=\sup _{x \in \mathbb{R}^n} K(x)>\max _{x \in \partial O} K(x).
\end{gather*}
\end{itemize} Then for any $\varepsilon>0$, Eq. \eqref{maineq2} has infinitely many solutions in $D^+$ satisfying\begin{align}\label{0.13}
c \leq I_{K}(U) \leq c+\varepsilon \text {~ or ~} 2 c-\varepsilon \leq I_{K}(U) \leq 2 c+\varepsilon
\end{align}and\begin{align*}
\sup \big\{\|U\|_{L^{\infty}(\mathbb{R}^{n+1}_+)} : I_{K}^{\prime}(U)=0,\,\text {$U$\,satisfies \eqref{0.13}} \big\}=\infty,
\end{align*} where $$
c=\frac{\sigma}{n}(\max _{\overline{O}} K)^{(2\sigma-n) / 2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}.$$
More precisely, for any $\varepsilon>0$, there exists $l^{*}>0$ such that for any
integers $l^{(1)}$ and $l^{(2)}$ satisfying $|l^{(1)}-l^{(2)}| \geq l^{*}$, there is at least one solution $U$ of
\eqref{maineq2} in $V(1, \varepsilon, O,K) \cup V(2, \varepsilon, O_{l}^{(1)}, O_{l}^{(2)}, K)$ with \eqref{0.13}, where $$
O_{l}^{(1)}=O+(l^{(1)} T, 0, \ldots, 0), \quad O_{l}^{(2)}=O+(l^{(2)} T, 0, \ldots, 0),
$$
and $V(1, \varepsilon, O,K)$, $V(2, \varepsilon, O_{l}^{(1)}, O_{l}^{(2)}, K)$ are some subsets of $D$ defined in \eqref{bumps}.
\end{thm}
\begin{rem}
By the stereographic projection from $\Sn \backslash\{\mathcal{N}\}$ to $\mathbb{R}^n$, the solutions obtained in Theorems \ref{thm:0.2} and \ref{thm:0.3} can be lifted to solutions of Eq. \eqref{maineq} on $\Sn$ that are positive except at the north pole $\mathcal{N}$. In this sense, Eq. \eqref{maineq} is solvable under the assumptions of Theorems \ref{thm:0.2} and \ref{thm:0.3}.
\end{rem}
\medskip
For $n\geq 3$ and $\sigma=1$, Theorems \ref{thm:0.1}--\ref{thm:0.3} were proved by Li \cite{Li95}. In our earlier work \cite{TWZ}, we dealt with the case $\sigma=1/2$ and obtained the corresponding density and multiplicity results. The main objective of this paper is to extend the above results to the nonlocal setting $\sigma \in (0,1)$.
Although certain parts of the proof can be obtained by minor modifications of the classical arguments in \cite{Li93,Li93b,Li95}, there are plenty of technical difficulties which demand new ideas to handle the non-local terms. We draw attention to, for instance, the following features.
\begin{itemize}
\item We rely heavily on the extension result of Caffarelli and Silvestre \cite{CS2} to analyze solutions; this is by now a standard device for nonlocal problems. Because of the degeneracy of the extension problem associated with \eqref{maineq1}, it is not easy to study the asymptotic behavior of solutions near infinity; see Section \ref{sec:1}, where some properties of solutions are obtained. Hence, in proving the decay
property of rescaled solutions, we do not use potential analysis, but instead iterate a rescaling argument based on the maximum principle.
\item When $\sigma \in (0,1)\backslash\{1/2\}$, a more involved function space is needed, whereas for $\sigma=1/2$ the energy minimization problem can be solved via a reflection method. Here we establish an extension result for fractional Sobolev spaces defined on Lipschitz boundaries and apply an embedding theorem of fractional Sobolev spaces into weighted Sobolev spaces; see Section \ref{sec:2}.
\item We construct solutions by adapting the minimax procedure of the classical case, while delicate estimates are needed to run the gradient flow; see Sections \ref{sec:3} and \ref{sec:4}.
\end{itemize}
\medskip
We study Eq. \eqref{maineq} (or \eqref{maineq1}) by a subcritical approximation; see, for example, \cite{JLX,JLX2}. It is worth noting that our methods continue in the direction pioneered in the earlier works \cite{Se,CR1,CR2}. These techniques provide, roughly speaking, ways of gluing ``approximate solutions" together to obtain a genuine solution. There have been works on gluing approximate solutions by means of the Lyapunov-Schmidt reduction method (see, e.g., \cite{CA2,LY,CZ1,Niu18,Liuzy,LR2,LR1,GN}), where rather precise information on the linearized problem is needed. The methods of S\'er\'e \cite{Se}, Coti Zelati-Rabinowitz \cite{CR1,CR2} and Li \cite{Li93,Li95}, however, provide an elegant way to glue approximate solutions for certain periodic differential equations for which it is difficult to obtain information as precise as that required by the Lyapunov-Schmidt reduction method. Inspired by the above works, we apply these techniques, combined with the localization method of \cite{CS2}, to construct solutions of the nonlocal problem \eqref{maineq1}. As far as we know, this paper is the first attempt to adapt the above-mentioned gluing method to equations involving the fractional Laplacian in the Euclidean setting, or the conformal Laplacian under a particular choice of the metric, in order to construct multi-bump solutions. This approach also overcomes the difficulty, encountered in the Lyapunov-Schmidt reduction method, of locating the concentration points of the solutions.
\medskip
Let us outline how the main results are proved. Theorems \ref{thm:0.1}--\ref{thm:0.3} and Corollary \ref{cor:1} are derived in Section \ref{sec:6} from Proposition \ref{prop:5.1}, a more general result on Eq. \eqref{maineq1}. To derive Proposition \ref{prop:5.1}, we first study a compactified problem (Theorem \ref{thm:3.1}) in Sections \ref{sec:1}--\ref{sec:5}; we then deduce Proposition \ref{prop:5.1} from Theorem \ref{thm:3.1} together with some blow-up analysis from \cite{JLX}. Theorem \ref{thm:3.1} is the key technical result of this paper, and it is what makes the variational gluing methods applicable.
\medskip
The present paper is organized as follows.
In Section \ref{sec:1}, we establish some a priori estimates for solutions to degenerate elliptic equations. In Section \ref{sec:2}, we consider a minimization problem on an exterior domain. In Section \ref{sec:3}, existence and multiplicity results for the subcritical case are stated, and their proofs are sketched.
In Section \ref{sec:4}, we follow and refine the analysis of Bahri and Coron \cite{BC1,BC2} to study the subcritical interaction of two well-separated bubbles. The proof of the technical result, Theorem \ref{thm:3.1}, is completed in Section \ref{sec:5} by applying a minimax procedure as in Coti Zelati and Rabinowitz \cite{CR1,CR2}. Finally, the main theorems are proved in Section \ref{sec:6} with the aid of the blow-up analysis established by Jin, Li and Xiong \cite{JLX}.
\medskip
\textbf{Notation:} We collect below a list of the main notation used throughout the paper.
\begin{itemize}\raggedright
\item We use capital letters, such as $X=(x,t)$ (resp. $Y=(y, s)$) to denote an element of the upper half space $\mathbb{R}_{+}^{n+1}$, where $x \in \mathbb{R}^n$ (resp. $y\in \mathbb{R}^n$) and $t>0$ (resp. $s>0$).
\item For a domain $D \subset \mathbb{R}^{n+1}$ with boundary $\partial D$, we denote $\partial^{\prime} D$ as the interior of $\overline{D} \cap \partial \mathbb{R}_{+}^{n+1}$ in $\mathbb{R}^n=\partial \mathbb{R}_{+}^{n+1}$ and $\partial^{\prime \prime} D=\partial D \backslash \partial^{\prime} D$.
\item For $\overline{X} \in \mathbb{R}^{n+1}$, denote $\mathcal{B}_{r}(\overline{X}):=\{X \in \mathbb{R}^{n+1}:|X-\overline{X}|<r\}$ and $\mathcal{B}_{r}^{+}(\overline{X}):=\mathcal{B}_{r}(\overline{X}) \cap \mathbb{R}_{+}^{n+1}$, where
$|\cdot|$ is the Euclidean distance. If $\overline{X}=(\overline{x},0) \in \partial \mathbb{R}_{+}^{n+1}$, $B_{r}(\overline{X}):=\{x\in\mathbb{R}^n:|x-\overline{x}|<r\}$.
Hence $\partial^{\prime} \mathcal{B}_{r}^{+}(\overline{X})=B_{r}(\overline{X})$ if $\overline{X} \in \partial \mathbb{R}_{+}^{n+1}$. Moreover, when $\overline{X}=(\overline{x},0)$, we simply write $\B_r(\overline{x})$ (resp. $\B^{+}_r(\overline{x})$ and $B_r(\overline{x})$) for $\B_r(\overline{X})$ (resp. $\B^{+}_r(\overline{X})$ and $B_r(\overline{X})$), and we omit the center when $\overline{X}=0$.
\item For any weakly differentiable function $U(x,t)$ on $\mathbb{R}_{+}^{n+1}$, we denote $\nabla_{x} U=(\partial_{x_1} U, \ldots, \partial_{x_n}U)$ and $\nabla U=(\nabla_{x} U, \partial_{t} U)$.
\item For any $\sigma\in(0,1)$ and $n\geq 2$, we denote $2^{*}_{\sigma}=\frac{2n}{n-2\sigma}$ and $H(x)=(\frac{2}{1+|x|^2})^{(n-2\sigma)/2}$.
\item $C>0$ is a generic constant which can vary from line to line.
\end{itemize}
\section{Some a priori estimates for degenerate elliptic equations}\label{sec:1}
In this section we present some a priori estimates of positive solutions to the equation
\begin{align*}
(-\Delta)^{\sigma} u=K(x)u^{\frac{n+2\sigma}{n-2\sigma}-\tau},\quad |x|\geq 1
\end{align*}
with $\tau \geq 0$ small. To this end, we use the extension formula \eqref{extension} and consider the related degenerate elliptic equations. Before presenting the main results of this section, we introduce some notation.
For any $z \in \mathbb{R}^n$ and $R>0$ we define the weighted Sobolev space $H^{1}(t^{1-2\sigma},\B^+_{R}(z))$ of functions $U\in L^{2}(t^{1-2\sigma},\B_{R}^{+}(z))$ such that $\nabla U \in L^{2}(t^{1-2\sigma},\B_{R}^{+}(z))$, endowed with the norm
\begin{align}\label{Wballnorm}
\|U\|_{ H^{1}(t^{1-2\sigma},\B^+_{R}(z))}:=\Big(\int_{\B^+_{R}(z)} t^{1-2\sigma}\big(|\nabla U|^{2}+|U|^{2}\big) \,\d X\Big)^{1 / 2} .
\end{align}
We say $U \in H_{loc}^{1}(t^{1-2\sigma}, \mathbb{R}_{+}^{n+1})$ if $U \in H^{1}(t^{1-2\sigma}, \B^+_{R})$ for every $R>0$, and $U \in H_{loc}^{1}(t^{1-2\sigma}, \mathbb{R}_{+}^{n+1} \backslash\{0\})$ if $U \in H^{1}(t^{1-2\sigma}, \mathcal{B}_{R}^{+} \backslash \overline{\mathcal{B}}_{\varepsilon}^{+})$ for all $R>\varepsilon>0$.
\begin{prop}\label{prop:1.1}
Suppose that $K \in L^{\infty}(\mathbb{R}^n \backslash B_{1})$ satisfies $\|K\|_{L^{\infty}(\mathbb{R}^n \backslash B_{1})}\leq A_0$ for some positive constant $A_0$. Then there exist positive constants $\mu_{1}=\mu_{1}(n,\sigma, A_0)$ and $C(n,\sigma, A_0)$ such that for any positive solution $U(x,t)$ of
$$
\left\{\begin{aligned}
& \operatorname{div}(t^{1-2\sigma} \nabla U)=0& & \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U=K(x) U(x, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on }\, \mathbb{R}^{n} \backslash B_{1},
\end{aligned}\right.
$$
with $\nabla U \in L^{2}(t^{1-2\sigma}, \mathbb{R}_{+}^{n+1} \backslash \mathcal{B}_{1}^{+})$,
$U(x, 0) \in L^{2^{*}_{\sigma}}(\mathbb{R}^n \backslash B_{1})$ and \begin{align}\label{prop:1.1-1}
\int_{\mathbb{R}_{+}^{n+1} \backslash \mathcal{B}_{1}^{+}} t^{1-2\sigma}|\nabla U|^{2}\,\d X\leq \mu_{1},\end{align} we have
\begin{align*}
\sup_{X\in \overline{\mathbb{R}}^{n+1}_{+}\backslash \B^+_2} |X|^{n-2\sigma}U(X)\leq C(n,\sigma, A_0).
\end{align*}
\end{prop}
\begin{proof}
Our arguments are in the spirit of those in \cite{Li93,Li93b,CJYX}.
By an appropriate extension of $U(X)$ to $\mathcal{B}_{1}^{+}$, it follows from \eqref{trace} and \eqref{prop:1.1-1} that \begin{align}\label{prop:1.1-2}
\Big(\int_{\mathbb{R}^n \backslash B_{1}}|U(x, 0)|^{2_{\sigma}^{*}}\,\d x\Big)^{2/2_{\sigma}^{*}} \leq C_{0}(n,\sigma)
\int_{\mathbb{R}_{+}^{n+1} \backslash \mathcal{B}_{1}^{+}} t^{1-2\sigma}|\nabla U|^{2}\,\d X\leq C_{0}(n,\sigma) \mu_1.
\end{align} Throughout the paper, $C_{0}(n,\sigma)>1$ denotes a universal constant depending only on $n$ and $\sigma$, which may vary from line to line.
Now we perform a Kelvin transform of $U(X)$. Let
\begin{gather*}
\widetilde{X}=(\widetilde{x}, \widetilde{t})=\frac{X}{|X|^{2}}, \quad|X| \geq 1,\\V(\widetilde{X})=\frac{1}{|\widetilde{X}|^{n-2\sigma}} U\Big(\frac{\widetilde{x}}{|\widetilde{X}|^2},\frac{\widetilde{t}}{|\widetilde{X}|^2}\Big) .
\end{gather*}
By \eqref{prop:1.1-1} and \eqref{prop:1.1-2}, some calculations lead to \begin{align*} \int_{\mathcal{B}_{1}^{+}} \widetilde{t}^{1-2\sigma}|\nabla V|^{2}\,\d \widetilde{X}+\int_{B_{1}} |V(\widetilde{x}, 0)|^{2^{*}_{\sigma}}\,\d \widetilde{x} \leq C_{0}(n,\sigma) \mu_{1}.
\end{align*} Moreover, $V(\widetilde{X})$ satisfies
\begin{align*}
\left\{\begin{aligned}
& \operatorname{div}(\widetilde{t}^{1-2\sigma} \nabla V)=0& & \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma}V=K\Big(\frac{\widetilde{x}}{|\widetilde{x}|^{2}}\Big) V(\widetilde{x}, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on } \, B_{1}\backslash\{0\}.
\end{aligned}
\right.
\end{align*}
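The invariance of the critical boundary nonlinearity under the Kelvin transform reflects the well-known conformal covariance of $(-\Delta)^{\sigma}$ (equivalently, of $\pa_{\nu}^{\sigma}$): writing $v(\widetilde{x})=|\widetilde{x}|^{2\sigma-n}u(\widetilde{x}/|\widetilde{x}|^{2})$, one has
\begin{align*}
(-\Delta)^{\sigma}v(\widetilde{x})=|\widetilde{x}|^{-(n+2\sigma)}\big((-\Delta)^{\sigma}u\big)\Big(\frac{\widetilde{x}}{|\widetilde{x}|^{2}}\Big),
\qquad
v(\widetilde{x})^{\frac{n+2\sigma}{n-2\sigma}}=|\widetilde{x}|^{-(n+2\sigma)}\,u\Big(\frac{\widetilde{x}}{|\widetilde{x}|^{2}}\Big)^{\frac{n+2\sigma}{n-2\sigma}},
\end{align*}
so both sides of the boundary equation transform with the same weight.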
It follows from \cite[Lemma 2.8]{JLX} that $V(\cdot,0)\in L^{q}(B_{9/10})$ for some $q>\frac{n}{2\sigma}$. By the Harnack inequality in \cite[Proposition 2.6(iii)]{JLX} (see also \cite{XaYa}), it suffices to obtain an a priori bound on $\|V(\cdot,0)\|_{L^{\infty}(B_{1/2})}$ in order to complete the proof. We claim that there exists a positive constant $C(n,\sigma, A_0)$ such that
\begin{align}\label{prop:1.1-5}
\|V(\widetilde{x}, 0)\|_{L^{\infty}(B_{1/2})} \leq C(n,\sigma, A_0).
\end{align}
We prove \eqref{prop:1.1-5} by a contradiction argument. Suppose \eqref{prop:1.1-5} fails; then there exist sequences $\{K_{j}\}$ and $\{U_{j}>0\}$ satisfying
\begin{gather*}
\|K_j\|_{L^{\infty}(\mathbb{R}^n \backslash B_{1})}\leq A_0,
\\\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U_{j})=0& & \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U_{j}=K_{j}(x) U_{j}(x, 0)^{\frac{n+2\sigma}{n-2\sigma}} && \text { on }\, \mathbb{R}^n \backslash B_{1},
\end{aligned}\right.\\ \int_{\mathcal{B}_{1}^{+}} \widetilde{t}^{1-2\sigma}|\nabla V_{j}|^{2}\,\d \widetilde{X} +\int_{B_{1}} |V_j(\widetilde{x}, 0)|^{2^{*}_{\sigma}}\,\d \widetilde{x}\leq C_{0}(n,\sigma) \mu_{1},
\end{gather*}
but\begin{align*}
\|V_{j}(\widetilde{x},0)\|_{L^{\infty}(B_{1/2})} \geq j,
\end{align*}
where $V_{j}(\widetilde{X})$ is obtained by a Kelvin transformation on $U_{j}(X)$ as before.
Note that, again by \cite[Proposition 2.6(iii)]{JLX}, $V_j$ is H\"older continuous in $\overline{\B}^+_{0.9}$, thus we can choose $\widetilde{x}_{j} \in B_{0.9}$ such that
\begin{align*}
(0.9-|\widetilde{x}_{j}|)^{(n-2\sigma) / 2}V_{j}(\widetilde{x}_{j}, 0)=\max _{|\widetilde{x}| \leq 0.9}(0.9-|\widetilde{x}|)^{(n-2\sigma)/2} V_{j}(\widetilde{x}, 0).
\end{align*}
Let $s_{j}=\frac{1}{2}(0.9-|\widetilde{x}_{j}|)>0$. Clearly we have
\begin{align*}
s_{j}^{(n-2\sigma) / 2} \sup _{B_{s_{j}}(\widetilde{x}_{j})} V_{j}(\widetilde{x}, 0) & \geq 3^{(2\sigma-n) / 2}(0.9-|\widetilde{x}_{j}|)^{(n-2\sigma) / 2}V_{j}(\widetilde{x}_{j}, 0) \\
&=3^{(2\sigma-n) / 2} \max _{|\widetilde{x}| \leq 0.9}(0.9-| \widetilde{x}|)^{(n-2\sigma) / 2}V_{j}(\widetilde{x}, 0) \\
& \geq 3^{(2\sigma-n) / 2} \max _{|\widetilde{x}|\leq 0.5}(0.9-|\widetilde{x}|)^{(n- 2\sigma) / 2}V_{j}(\widetilde{x}, 0) \\
& \geq 3^{(2\sigma-n) / 2}(0.9-0.5)^{(n-2\sigma)/ 2} j \\
& \rightarrow \infty\quad \text{ as }j\rightarrow \infty,\end{align*}
and
\begin{align*}V_{j}(\widetilde{x}_{j}, 0) &=(0.9-|\widetilde{x}_{j}|)^{(2\sigma-n) / 2} \max _{|\widetilde{x}| \leq 0.9}(0.9-|\widetilde{x}|)^{(n-2\sigma) / 2}V_{j}(\widetilde{x}, 0) \\
& \geq(0.9-|\widetilde{x}_{j}|)^{(2\sigma-n) / 2} \max _{|\widetilde{x}-\widetilde{x}_{j}| \leq s_{j}}(0.9-|\widetilde{x}|)^{(n-2\sigma) / 2}V_{j}(\widetilde{x}, 0) \\
& \geq(2 s_{j})^{(2\sigma-n) / 2}(s_{j})^{(n-2\sigma) / 2} \max _{|\widetilde{x}-\widetilde{x}_{j}| \leq s_{j}}V_{j}(\widetilde{x}, 0)\\& \geq 2^{(2\sigma-n) / 2} \max _{|\widetilde{x}-\widetilde{x}_{j}| \leq s_{j}}V_{j}(\widetilde{x}, 0) .
\end{align*}
In conclusion, we obtain\begin{gather*}
|\widetilde{x}_{j}|<0.9,\\(s_{j})^{(n-2\sigma) / 2} \max _{|\widetilde{x}-\widetilde{x}_{j}| \leq s_{j}}V_{j}(\widetilde{x}, 0) \rightarrow \infty\quad \text{ as } \, j\rightarrow \infty,\\V_{j}(\widetilde{x}_{j}, 0) \geq 2^{(2\sigma-n) / 2} \max _{|\widetilde{x}-\widetilde{x}_{j}| \leq s_{j}}V_{j}(\widetilde{x}, 0).
\end{gather*}
Now, consider\begin{gather*}
W_{j}(\hat{x}, \hat{t})=\frac{1}{V_{j}(\widetilde{x}_{j}, 0)} V_{j}\Big(\widetilde{x}_{j}+\frac{\hat{x}}{V_{j}(\widetilde{x}_{j}, 0)^{\frac{2}{n-2\sigma}}}, \frac{\hat{t}}{V_{j}(\widetilde{x}_{j}, 0)^{\frac{2}{n-2\sigma}}}\Big),\quad \hat{X}=(\hat{x}, \hat{t})\in \Omega_j,
\end{gather*}
where \begin{align*}
\Omega_j:=\Big\{(\hat{x}, \hat{t}) \in \mathbb{R}_{+}^{n+1}: \Big(\widetilde{x}_{j}+\frac{\hat{x}}{V_{j}(\widetilde{x}_{j}, 0)^{\frac{2}{n-2\sigma}}}, \frac{\hat{t}}{V_{j}(\widetilde{x}_{j}, 0)^{\frac{2}{n-2\sigma}}}\Big) \in \mathcal{B}_{1}^{+} \backslash\{0\}\Big\}.
\end{align*}
Clearly, $W_j(\hat{X})$ satisfies \begin{gather*}
\int_{\B^+_{R_j} }\hat{t}^{1-2\sigma}|\nabla W_{j}|^{2}\,\d \hat{X}+\int_{B_{R_j}}|W_{j}(\hat{x}, 0)|^{2^{*}_{\sigma}}\, \d \hat{x} \leq C_{0}(n,\sigma) \mu_{1},\\\left\{\begin{aligned}
&\operatorname{div}(\hat{t}^{1-2\sigma} \nabla W_{j})=0&& \text { in }\, \Omega_j, \\
&\pa_{\nu}^{\sigma} W_{j}=K_{j}\Big(\widetilde{x}_{j}+\frac{\hat{x}}{V_{j}(\widetilde{x}_{j}, 0)^{\frac{2}{n-2\sigma}}} \Big)W_{j}(\hat{x}, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on }\, \pa'\Omega_j,
\end{aligned}\right.\\W_{j}(0,0)=1,\\W_{j}(\hat{x}, 0) \leq 2^{\frac{n-2\sigma}{2}},\quad \forall\,|\hat{x}|<R_j,
\end{gather*}
where \begin{align*}
R_j:=V_{j}(\widetilde{x}_{j}, 0)^{2/(n-2\sigma)} s_{j}\rightarrow \infty \quad \text{ as }\, j\rightarrow \infty.
\end{align*}
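Let us record why $R_{j}\rightarrow\infty$: writing
\begin{align*}
R_{j}=\big[s_{j}^{(n-2\sigma)/2}\,V_{j}(\widetilde{x}_{j},0)\big]^{2/(n-2\sigma)},
\end{align*}
the properties of $\widetilde{x}_{j}$ and $s_{j}$ collected above give $s_{j}^{(n-2\sigma)/2}V_{j}(\widetilde{x}_{j},0)\geq 2^{(2\sigma-n)/2}\,s_{j}^{(n-2\sigma)/2}\max_{|\widetilde{x}-\widetilde{x}_{j}|\leq s_{j}}V_{j}(\widetilde{x},0)\rightarrow\infty$ as $j\rightarrow\infty$.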
By \cite[Proposition 2.6]{JLX}, for any given $\overline{t}>0$ we have
$$
0 \leq W_{j} \leq C(\overline{t}) \quad \text { in }\, B_{R_{j} / 2} \times[0, \overline{t}),
$$
where $C(\overline{t})$ depends only on $n, \sigma$ and $\overline{t}$. Then by Corollary 2.10 and Theorem 2.14 in \cite{JLX}, there exists some $\alpha>0$ such that for every $R>1$,
\begin{align*}
\|W_{j}\|_{H^{1}(t^{1-2\sigma},\mathcal{B}_{R}^{+})}+\|W_{j}\|_{C^{\alpha}(\overline{\mathcal{B}}_{R}^{+})}+\|W_{j}(\cdot,0)\|_{C^{2, \alpha}(\overline{B}_{R})} \leq C(R),
\end{align*}
where $C(R)$ is independent of $j$. Thus, after passing to a subsequence, we have, for some nonnegative function $W \in H_{loc}^{1}(t^{1-2\sigma}, \overline{\mathbb{R}}_{+}^{n+1}) \cap C_{loc }^{\alpha}(\overline{\mathbb{R}}_{+}^{n+1})$,
\begin{align*}
\left\{ \begin{aligned}
&W_{j} \rightharpoonup W &&\text {weakly in }\, H_{loc}^{1}(t^{1-2\sigma}, \mathbb{R}_{+}^{n+1}), \\ &W_{j} \rightarrow W &&\text {in } \, C_{loc}^{\alpha / 2}(\mathbb{R}_{+}^{n+1}), \\ &W_{j}(\cdot,0) \rightarrow W(\cdot,0) &&\text {in }\, C_{loc}^{2}(\mathbb{R}^n).\end{aligned}\right.
\end{align*}
Let
$\overline{K}$ be the weak $*$ limit of $\{K_{j}(\widetilde{x}_{j}+(1 / V_{j}(\widetilde{x}_{j},0))^{2 /(n-2\sigma)}\hat{x})\}$ in $L_{loc}^{\infty}(\mathbb{R}^n)$; then $\|\overline{K}\|_{L^{\infty}(\mathbb{R}^n)} \leq A_0$. Moreover, $W(\hat{X})$ satisfies
\begin{gather}
\label{11}
\left\{\begin{aligned}
&\operatorname{div}(\hat{t}^{1-2\sigma} \nabla W)=0& & \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} W=\overline{K}(\hat{x}) W(\hat{x}, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on }\, \mathbb{R}^n,
\end{aligned}\right.\\W(0,0) = 1,\notag\\\int_{\mathbb{R}_{+}^{n+1}} \hat{t}^{1-2\sigma}|\nabla W|^{2} \,\d \hat{X}+\int_{\mathbb{R}^n}|W(\hat{x}, 0)|^{2^{*}_{\sigma}} \,\d \hat{x} \leq C_{0}(n,\sigma) \mu_{1}.\notag
\end{gather}
Multiplying \eqref{11} by $W$ and integrating by parts, we obtain
\begin{align*}
\int_{\mathbb{R}_{+}^{n+1}} \hat{t}^{1-2\sigma}|\nabla W|^{2}\,\d \hat{X}&= \int_{\mathbb{R}^n} \overline{K}W(\hat{x}, 0)^{2^{*}_{\sigma}}\, \d \hat{x} \\& \leq A_{0}\Big(\int_{\mathbb{R}_{+}^{n+1}} \hat{t}^{1-2\sigma}|\nabla W|^{2} \,\d \hat{X} \Big)^{n /(n- 2\sigma)}\mathcal{S}_{n,\sigma}^{-2_{\sigma}^{*}},
\end{align*}
where $\mathcal{S}_{n, \sigma}$ is defined in \eqref{trace}.
Since $W(0,0)=1$, $W$ is not identically zero, and $\int_{\mathbb{R}_{+}^{n+1}} \hat{t}^{1-2\sigma}|\nabla W|^{2}\,\d \hat{X}>0$ (otherwise $W\equiv 1$, contradicting $W(\cdot,0)\in L^{2^{*}_{\sigma}}(\mathbb{R}^n)$). Dividing the above inequality by this quantity, we obtain
\begin{align*}
1 &\leq A_{0}\Big(\int_{\mathbb{R}_{+}^{n+1}} \hat{t}^{1-2\sigma}|\nabla W|^{2} \,\d \hat{X} \Big)^{2\sigma /(n-2\sigma)}\mathcal{S}_{n,\sigma}^{-2_{\sigma}^{*}} \\& \leq A_{0}(C_{0}(n,\sigma) \mu_{1})^{2\sigma /(n-2\sigma)}\mathcal{S}_{n,\sigma}^{-2_{\sigma}^{*}}.
\end{align*}
This is a contradiction if we choose $\mu_{1}=\mu_{1}(n, \sigma,A_{0})$ so small that
$A_{0}(C_{0}(n,\sigma) \mu_{1})^{2\sigma /(n-2\sigma)}\mathcal{S}_{n,\sigma}^{-2_{\sigma}^{*}}<1 $. This completes the proof of \eqref{prop:1.1-5}.
\end{proof}
\begin{prop}\label{prop:1.2}
Let $\mu_{1}$ and $C(n,\sigma,A_0)$ be the positive constants in
Proposition \ref{prop:1.1}. Then for any $2<l_{1}<l_{2}<\infty$, there exists a positive constant $R_{1}=R_{1}(n, \sigma,A_0, \mu_{1}, l_{1}, l_{2})>l_{2}$ such that for any $K \in L^{\infty}(B_{R_{1}} \backslash B_{1})$ with
$\|K\|_{L^{\infty}(B_{R_1} \backslash B_{1})}\leq A_0$ and any positive solution $U(X)$ of$$
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U)=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U=K(x) U(x, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on } \,B_{R_{1}} \backslash B_{1},
\end{aligned}\right.
$$
with \begin{align*}
\int_{\mathcal{B}_{R_{1}}^{+} \backslash \mathcal{B}_{1}^{+}} t^{1-2\sigma}|\nabla U|^{2}\,\d X+\int_{B_{R_{1}} \backslash B_{1}}|U(x, 0)|^{2^{*}_{\sigma}}\,\d x\leq \mu_{1}, \end{align*} we have\begin{align*}
\sup_{X\in \overline{\mathcal{B} }^+_{l_2}\backslash \mathcal{B}^+_{l_1}} |X|^{n-2\sigma}U(X) \leq 2 C(n,\sigma, A_0). \end{align*}
\end{prop}
\begin{proof}
Suppose the contrary; then for $R_{j}=l_{2}+j$ with $j=3,4,5, \ldots$, there exist sequences $\{K_{j}\}$ and $\{U_{j}>0\}$ satisfying
\begin{gather*}
\|K_{j}\|_{L^{\infty}(B_{R_j}\backslash B_1)} \leq A_{0},\\\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U_{j})=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U_{j}= K_{j}(x) U_{j}(x, 0)^{\frac{n+2\sigma}{n-2\sigma}} && \text { on }\, B_{R_{j}} \backslash B_{1},
\end{aligned}\right.\\
\int_{\mathcal{B}_{R_{j}}^{+} \backslash \mathcal{B}_{1}^{+}} t^{1-2\sigma}|\nabla U_j|^{2}\,\d X+\int_{B_{R_{j}} \backslash B_{1}}|U_j(x, 0)|^{2^{*}_{\sigma}}\,\d x\leq \mu_{1}, \end{gather*}
but$$
\sup _{X\in \overline{\mathcal{B}}_{l_2}^+ \backslash \mathcal{B}_{l_1}^+}|X|^{n-2\sigma} U_{j}(X)>2 C(n, \sigma,A_0).
$$
Arguing as in the proof of Proposition \ref{prop:1.1}, we see that for any
$\mu\in (0,1)$, $\|U_{j}\|_{L^{\infty}(\B^+_{R_{j}/2}\backslash \B^+_{1+\mu})}$ is bounded by a constant independent of $j$.
Letting $U(X)$ be the weak limit of $U_{j}$ in $H^1_{loc}(t^{1-2\sigma}, \overline{\mathbb{R}}_{+}^{n+1})$ (after passing to a subsequence), we have\begin{gather}\notag
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U)=0& & \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U=\overline{K}(x) U(x, 0)^{\frac{n+2\sigma}{n-2\sigma}}& &\text { on }\, \mathbb{R}^n\backslash B_1,
\end{aligned}\right.\\ \sup _{X\in \overline{\B}_{l_2}^+\backslash \mathcal{B}_{l_1}^+}|X|^{n-2\sigma}U(X) \geq 2 C(n, \sigma,A_0), \label{12}
\end{gather}
where $\overline{K}(x)$ is the weak $*$ limit of $K_{j}(x)$ in $L_{loc}^{\infty}(\mathbb{R}^n \backslash B_{1})$ and satisfies $\|\overline{K}\|_{L^{\infty}(\mathbb{R}^n \backslash B_{1})}\leq A_0$. However, it follows from Proposition \ref{prop:1.1} that \begin{align*}
\sup_{X\in \overline{\mathbb{R}}^{n+1}_{+}\backslash \B^+_2} |X|^{n-2\sigma}U(X)\leq C(n,\sigma, A_0).
\end{align*}This contradicts \eqref{12}.
\end{proof}
\begin{prop}\label{prop:1.3}
Suppose that $l_{2}>100 l_{1}>100$, $K \in C^{1}(B_{l_{2}} \backslash B_{l_{1}})$ and $\|K\|_{C^{1}(B_{l_{2}} \backslash B_{l_{1}})}\leq A_1$ for some positive constant $A_1$. Then for any positive solution $U(X)$ of
$$
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U)=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U= K(x) U(x, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on }\, B_{l_{2}} \backslash B_{l_{1}},
\end{aligned}\right.
$$
satisfying \begin{align}\label{prop1.3-1}
\sup_{x\in B_{l_{2}} \backslash B_{l_{1}}}|x|^{n-2\sigma}U(x, 0) \leq A
\end{align}for some constant $A>1$,
we have$$
\sup_{X\in\overline{\mathcal{B}}_{l_2/4}^+\backslash \mathcal{B}_{4l_1}^+}|X|^{n+1-2\sigma}|\nabla_x U| \leq C(n,\sigma,A_1,A),
$$
and $$
\sup_{X\in\overline{\mathcal{B}}_{l_2/4}^+\backslash \mathcal{B}_{4l_1}^+}|X|^{n}|t^{1-2\sigma}\pa_t U| \leq C(n,\sigma,A_1,A),
$$where $C(n,\sigma,A_1,A)$ is a positive constant depending only on $n,\sigma,A_1$ and $A$.
\end{prop}
\begin{proof}
For any $r \in(4 l_{1}, l_{2}/4)$, we have
\begin{align*}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U)=0&& \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U= K(x) U(x, 0)^{\frac{n+2\sigma}{n-2\sigma}}& & \text { on }\,B_{2r}\backslash B_{r/2},
\end{aligned}\right.
\end{align*} and
$\sup_{x\in B_{2r}\backslash B_{r/2}} U(x,0) \leq (r/2)^{2\sigma-n} A$
by \eqref{prop1.3-1}.
Let $V(X)=r^{\frac{n-2\sigma}{2}} U(rX)$; then it solves
$$ \left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla V)=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} V=K(rx)V(x,0)^{\frac{n+2\sigma}{n-2\sigma}} & & \text { on }\,B_2\backslash B_{1/2}.
\end{aligned}\right.$$
Note that, by \eqref{prop1.3-1}, $V(x,0)=r^{\frac{n-2\sigma}{2}}U(rx,0) \leq 2^{n-2\sigma} r^{\frac{2\sigma-n}{2}} A$ and hence $|K(rx)V(x,0)^{\frac{n+2\sigma}{n-2\sigma}}| \leq 2^{n+2\sigma} A_1 A^{\frac{n+2\sigma}{n-2\sigma}} r^{-\frac{n+2\sigma}{2}}$
in the annulus $\{x \in \mathbb{R}^n : 1/2 \leq |x| \leq 2\}$; we then deduce from \cite[Proposition 2.6(iii)]{JLX} and \cite[Lemma 4.5]{XaYa} that\begin{align*}
\sup_{|X|=1} |\nabla_x V| \leq C(n,\sigma,A_1,A)\,r^{\frac{2\sigma-n}{2}},
\end{align*}and \begin{align*}
\sup_{|X|=1} |t^{1-2\sigma}\pa_{t} V| \leq C(n,\sigma,A_1,A)\,r^{\frac{2\sigma-n}{2}}.
\end{align*}
As a consequence, \begin{align*}
\sup_{|X|=r}|X|^{n+1-2\sigma} |\nabla_x U| \leq C(n,\sigma,A_1,A) ,
\end{align*}and \begin{align*}
\sup_{|X|=r} |X|^{n}|t^{1-2\sigma}\pa_t U| \leq C(n,\sigma,A_1,A).
\end{align*}Since $r\in(4l_1,l_2/4)$ was arbitrary, this finishes the proof.
\end{proof}
\begin{prop}\label{prop:1.4}
Let $\mu_1$, $R_{1}$ and $C(n, \sigma,A_{0})$ be the constants in Proposition \ref{prop:1.2}. Then for any $2<l_{1}<l_{2}<\infty$, there exist positive constants $\mu_{2}=\mu_{2}(n,\sigma, A_{0}) \leq \mu_{1}$ and $\overline{\tau}=\overline{\tau}(n, \sigma,l_{1}, l_{2}, A_{0})$ such that for any $0 \leq \tau \leq \overline{\tau}$, any $K\in L^{\infty}(B_{2R_1}\backslash B_1)$ with $\|K\|_{L^{\infty}(B_{2R_1}\backslash B_1)}\leq A_0$, and any positive solution $U(X)$ of
$$
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U)=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U= K(x) U(x, 0)^{\frac{n+2\sigma}{n-2\sigma}-\tau}& &\text { on }\, B_{2R_1}\backslash B_1,
\end{aligned}\right.
$$
with
$$
\int_{\mathcal{B}_{2R_{1}}^{+} \backslash \mathcal{B}_{1}^{+}} t^{1-2\sigma}|\nabla U|^{2}\,\d X +\int_{B_{2R_1}\backslash B_1} |U(x, 0)|^{2^{*}_{\sigma}}\,\d x \leq \mu_{2},
$$
we have\begin{align}\label{prop:2.4-1}
\sup_{X\in \overline{\mathcal{B}}^+_{l_2}\backslash \mathcal{B}^+_{l_1}} |X|^{n-2\sigma} U(X) \leq 3 C(n, \sigma, A_0).
\end{align}
Furthermore, if $K\in C^{1}(B_{2R_1}\backslash B_1)$ with $\|K\|_{C^{1}(B_{2R_1}\backslash B_1)}\leq A_1$ for some positive constant $A_1$, we have\begin{align*}
\sup_{X\in \overline{\mathcal{B}}^+_{l_2}\backslash \mathcal{B}^+_{l_1}} |X|^{n+1-2\sigma}|\nabla_x U|&\leq 2 C(n, \sigma, A_1,A),
\end{align*}
and
\begin{align*}
\sup_{X\in \overline{\mathcal{B}}^+_{l_2}\backslash \mathcal{B}^+_{l_1}} |X|^{n}|t^{1-2\sigma}\pa_{t} U|\leq 2 C(n, \sigma, A_1,A) ,
\end{align*}
where $C(n, \sigma, A_1,A)$ is the constant in Proposition \ref{prop:1.3} with $A$ replaced by $3C(n,\sigma,A_0)$.
\end{prop}
\begin{proof}
Suppose the contrary; then there exist a sequence $0\leq \tau_{j} \rightarrow 0$ and positive solutions $U_{\tau_{j}}$ satisfying
\begin{gather*}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma} \nabla U_{\tau_{j}})=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
&\pa_{\nu}^{\sigma} U_{\tau_{j}}= K(x) U_{\tau_{j}}(x, 0)^{\frac{n+2\sigma}{n-2\sigma}-\tau_j}& & \text { on }\, B_{2R_1}\backslash B_1,
\end{aligned}\right.
\\
\int_{\mathcal{B}_{2R_{1}}^{+} \backslash \mathcal{B}_{1}^{+}} t^{1-2\sigma}|\nabla U_{\tau_{j}}|^{2}\,\d X +\int_{B_{2R_1}\backslash B_1} |U_{\tau_{j}}(x, 0)|^{2^{*}_{\sigma}}\, \d x\leq \mu_{2},
\end{gather*}
but\begin{align*}
\sup _{X\in \overline{\mathcal{B}}^+_{l_2}\backslash \mathcal{B}^+_{l_1}}\{|X|^{n-2\sigma}U_{\tau_{j}}(X)\} > 3 C(n,\sigma, A_0).
\end{align*}
Choosing $\mu_{2} \in(0, \mu_{1})$ sufficiently small, we may argue as in the proof of Proposition \ref{prop:1.1} to obtain (after passing to a subsequence)$$
U_{\tau_{j}}(X) \rightarrow U(X) \quad \text { in }\, C_{loc}^{ \alpha/2}(\mathcal{B}^+_{2R_1}\backslash \mathcal{B}^+_{1}).
$$Moreover, $U(X)$ satisfies
\begin{align}\label{13}
\sup _{X\in \overline{\mathcal{B} }^+_{l_2}\backslash \mathcal{B}^+_{l_1}}\{|X|^{n-2\sigma}U(X)\} \geq 3 C(n,\sigma, A_0).
\end{align}
However, by Proposition \ref{prop:1.2} we have\begin{align*}
\sup _{X\in \overline{\mathcal{B}}^+_{l_2}\backslash \mathcal{B}^+_{l_1}}\{|X|^{n-2\sigma}U(X)\} \leq 2 C(n,\sigma, A_0),
\end{align*}
which contradicts \eqref{13}. This finishes the proof of \eqref{prop:2.4-1}. The other two estimates can be handled in a similar manner, using Proposition \ref{prop:1.3} and a contradiction argument; we omit the details.
\end{proof}
\section{A minimization problem on exterior domain}\label{sec:2}
For any $z_1,z_{2} \in\mathbb{R}^n$ satisfying $|z_{1}-z_{2}| \geq 10$, denote $\Omega:=\mathbb{R}^{n+1}_{+} \backslash\{\B^+_{1}(z_{1}) \cup \B^+_{1}(z_{2})\}$. We define $D_{\Omega}$ as the closure of $C_{c}^{\infty}(\overline{\Omega})$ under the norm
\begin{align*}
\|U\|_{D_{\Omega}}:=\Big(\int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X\Big)^{1/2}+\Big(\int_{\pa'\Omega}|U(x,0)|^{2^{*}_{\sigma}}\,\d x\Big)^{1/2^{*}_{\sigma}}.
\end{align*}
Clearly, $D_{\Omega}$ is a Banach space.
By an appropriate extension to $\overline{\B}^+_{1}(z_{1}) \cup \overline{\B}^+_{1}(z_{2})$ and using \eqref{trace}, we obtain a Sobolev trace type inequality on $D_{\Omega}$:
\begin{lem}\label{traceexterior}
Let $D_{\Omega}$ be defined as above. There exists a positive constant
$C(n,\sigma)$ such that for all $ U \in D_{\Omega}$ there holds\begin{align*}
\Big(\int_{\pa'\Omega}|U(x,0)|^{2_{\sigma}^{*}}\,\d x\Big)^{1/2^{*}_{\sigma}} \leq C(n,\sigma)\Big(\int_{\Omega } t^{1-2\sigma} |\nabla U|^2\, \d X\Big)^{1 / 2},
\end{align*}
where the constant $C(n,\sigma)$ depends only
on $n$ and $\sigma$; in particular, it does not depend on $z_{1}, z_{2}$ provided $|z_{1}-z_{2}| \geq 10$.
\end{lem}
Its proof follows \cite[Lemma A.1]{JX2} with minor modifications, so we omit it here.
\medskip
Let $K\in L^{\infty}(\pa'\Omega)$ satisfy $\|K\|_{L^{\infty}(\pa'\Omega)}\leq A_0$ for some constant $A_{0}>0$. We define a functional on $D_{\Omega}$ by
\begin{align*}
I_{K,\Omega}(U):=&\frac{1}{2}\int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X -\frac{1}{2_{\sigma}^{*}-\tau} \int_{\pa'\Omega } K(x)H^{\tau}(x)|U(x,0)|^{2_{\sigma}^{*}-\tau}\,\d x
\end{align*}
with $\tau\geq 0$ small. For any $U \in D_{\Omega}$, using the H\"older inequality and Lemma \ref{traceexterior}, we have
\begin{align}
&\Big| I_{K,\Omega}(U)-\frac{1}{2} \int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X\Big|\notag \\ \leq & A_0 C_0(n,\sigma)\Big(\int_{\pa'\Omega }|U(x,0)|^{2_{\sigma}^{*}}\,\d x\Big)^{(2_{\sigma}^{*}-\tau)/2_{\sigma}^{*}}\notag\\\leq & A_0 C_0(n,\sigma)\Big(\int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X\Big)^{(2_{\sigma}^{*}-\tau)/2},\label{16}
\end{align}
where $C_0(n,\sigma)$ denotes some universal constant which can vary from line to line.
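Spelled out, the last step in \eqref{16} is Lemma \ref{traceexterior} raised to the power $2_{\sigma}^{*}-\tau$:
\begin{align*}
\Big(\int_{\pa'\Omega }|U(x,0)|^{2_{\sigma}^{*}}\,\d x\Big)^{(2_{\sigma}^{*}-\tau)/2_{\sigma}^{*}} \leq C(n,\sigma)^{2_{\sigma}^{*}-\tau}\Big(\int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X\Big)^{(2_{\sigma}^{*}-\tau)/2},
\end{align*}
with the factor $C(n,\sigma)^{2_{\sigma}^{*}-\tau}$ (uniformly bounded for $\tau$ small) absorbed into $C_0(n,\sigma)$.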
\begin{prop}\label{prop:2.1}
Let $D_{\Omega}$ be defined as above. There exist some constants $r_{0}=r_{0}(n,\sigma, A_{0})\in (0,1)$ and $C_{1}=C_{1}(n,\sigma)>1$ such that for any $z_{1},z_{2} \in \mathbb{R}^n$ with $|z_{1}-z_{2}| \geq 10$, and $W \in W^{\sigma,2}(\pa'' \Omega)$ with $r=\|W\|_{W^{\sigma,2}(\pa'' \Omega)}\leq r_{0}$, the following minimum is achieved:$$
\min \Big\{I_{K,\Omega}(U):U \in D_{\Omega},\, U|_{\pa'' \Omega}=W, \,\int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X\leq C_{1} r_{0}^{2}\Big\}.
$$The minimizer is unique (denoted $U_{W}$) and satisfies $
\int_{\Omega}t^{1-2\sigma}|\nabla U_{W}|^2\,\d X\leq C_{1} r^{2}/2$. Furthermore, the map $W \mapsto U_{W}$ is continuous from $ W^{\sigma,2} (\pa'' \Omega)$ to $D_{\Omega}$. In particular, the constants $r_{0}$ and $C_1$ are independent of $z_{1}, z_{2}$ provided $|z_{1}-z_{2}| \geq 10$.
\end{prop}
\begin{rem}(1) Following the terminology in \cite{Adams}, $W^{\sigma,2}(\pa'' \Omega)$ stands for the fractional-order Sobolev space on the boundary $\pa'' \Omega$, obtained by transporting (via a partition of unity and pull-back) the standard scale $W^{\sigma,2}(\mathbb{R}^{n})=H^{\sigma}(\mathbb{R}^n)$. We refer the reader to \cite[p.215]{Adams} for more details.
(2) When $\sigma=1/2$, the proof of Proposition \ref{prop:2.1} follows directly from \cite[Proposition 2.1]{Li93} or \cite[Proposition 3.3]{Li93b}.
\end{rem}
\begin{proof}[Proof of Proposition \ref{prop:2.1}]
We claim that there exist a constant $C_{1}=C_{1}(n,\sigma)>0$ and $\overline{W} \in D_{\Omega}$ such that
\begin{align}\label{prop3.2-1}
\int_{\Omega}t^{1-2\sigma}|\nabla \overline{W}|^2\,\d X \leq \frac{C_{1}}{8} r_0^{2}\quad \text{ and }\quad \overline{W}|_{\pa'' \Omega}=W. \end{align}
To justify this, we first note that $\pa'' \Omega$ is compact and smooth (mollifying the singularities of $\pa'' \Omega$ if necessary), so we may choose a finite open cover $\{U_j\}$ of $\pa'' \Omega$, $1 \leq j \leq k$, together with smooth maps $\Psi_j$ from $\B_1$ onto the sets $U_j$. If $\{\omega_j\}$ is a partition of unity for $\pa'' \Omega$ subordinate to $\{U_j\}$,
we define $\theta_j W$ on $\mathbb{R}^{n}=\pa \mathbb{R}^{n+1}_+$ by
$$
\theta_j W(x)= \begin{cases}(\omega_j W)(\Psi_j(x, 0)) & \text {if }\,|x|<1, \\ 0 & \,\text {otherwise, }\end{cases}
$$
where $x=(x_1,\ldots,x_n)$. Then \eqref{prop3.2-1} follows directly from the proof of Proposition 2.1 in \cite{JLX}. We fix the value of $C_{1}$ from now on; the value of $r_{0}=r_{0}(n,\sigma, A_{0})$ will be determined in what follows.
First it follows from \eqref{16} and \eqref{prop3.2-1} that if $r_{0}>0$ is chosen small enough, then
\begin{align}
I_{K,\Omega}(\overline{W}) & \leq \frac{1}{2} \int_{\Omega}t^{1-2\sigma}|\nabla \overline{W}|^2\,\d X+ A_0 C_0(n,\sigma)\Big(\int_{\Omega}t^{1-2\sigma}|\nabla \overline{W}|^2\,\d X\Big)^{(2_{\sigma}^{*}-\tau)/2}\notag\\
& \leq \frac{4}{5} \int_{\Omega}t^{1-2\sigma}|\nabla \overline{W}|^2\,\d X \leq \frac{C_{1}}{10} r^{2}.\label{prop3.2-2}
\end{align}
Observe that for any $U$ in \begin{align}\label{16'}
\Big\{U \in D_{\Omega}:\, C_{1} r^{2}/2 \leq \int_{\Omega}t^{1-2\sigma}|\nabla U|^2\,\d X \leq 2C_{1} r_{0}^{2}\Big\},
\end{align} we can derive from \eqref{16} that
\begin{align*}
I_{K,\Omega}(U) \geq &\frac{1}{2} \int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\,\d X-A_{0} C_{0}(n,\sigma)\Big(\int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\,\d X\Big)^{(2^{*}_{\sigma}-\tau)/2} \\
\geq &\frac{1}{2} \int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\, \d X\\&-A_{0} C_{0}(n,\sigma)(2 C_{1} r_{0}^{2})^{2\sigma /(n-2\sigma)-\tau/2} \int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\, \d X.
\end{align*}
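The exponent in the last line arises from the elementary identity
\begin{align*}
\frac{2^{*}_{\sigma}-\tau}{2}=\frac{n}{n-2\sigma}-\frac{\tau}{2}=1+\frac{2\sigma}{n-2\sigma}-\frac{\tau}{2},
\end{align*}
so that on the set \eqref{16'} one may bound $\big(\int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\,\d X\big)^{(2^{*}_{\sigma}-\tau)/2} \leq (2C_{1}r_{0}^{2})^{2\sigma/(n-2\sigma)-\tau/2}\int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\,\d X$.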
Thus, if we choose $r_{0}>0$ to further satisfy
$A_{0} C_0(n,\sigma)(2 r_{0}^{2} C_{1})^{2\sigma /(n-2\sigma)-\tau/2} \leq 1/4$,
then using \eqref{prop3.2-2} we have
\begin{align*}
I_{K,\Omega}(U) \geq \frac{1}{4} \int_{\Omega}t^{1-2\sigma}|\nabla U|^{2}\, \d X \geq \frac{1}{4}\Big(\frac{C_{1} r^{2}}{2}\Big) >I_{K,\Omega}(\overline{W}).
\end{align*}
Therefore, the minimum cannot be attained in the set \eqref{16'}.
Next we prove the existence of the minimizer.
Write $U=V+\overline{W}$ with $V|_{\pa''\Omega}=0$, and set $J_{K,\Omega}(V):=I_{K,\Omega}(U)=I_{K,\Omega}(V+\overline{W})$.
We only need to minimize $J_{K,\Omega}(V)$ for $ \int_{\Omega}t^{1-2\sigma}|\nabla V|^2\,\d X \leq 2 C_{1} r_{0}^{2}$ due to the above argument. It is easy to see that if $r_{0}$ is small enough, then $J_{K,\Omega}$ is strictly convex in the ball \begin{align*}\Big\{V \in D_{\Omega}:\, V|_{\pa'' \Omega}=0, \int_{\Omega}t^{1-2\sigma}|\nabla V|^2\,\d X\leq 2 C_{1} r_{0}^{2}\Big\}. \end{align*} Therefore it is standard to conclude the existence of a unique local minimizer $V_{W}$.
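For the reader's convenience, here is a sketch of the convexity assertion (schematically, with the factor $KH^{\tau}$ handled exactly as in \eqref{16}): for $V$ in the above ball and any test direction $\phi\in D_{\Omega}$ with $\phi|_{\pa''\Omega}=0$, the second variation satisfies
\begin{align*}
J_{K,\Omega}''(V)(\phi,\phi) &\geq \int_{\Omega}t^{1-2\sigma}|\nabla \phi|^{2}\,\d X-C(n,\sigma,A_{0})\Big(\int_{\Omega}t^{1-2\sigma}|\nabla (V+\overline{W})|^{2}\,\d X\Big)^{(2^{*}_{\sigma}-\tau-2)/2}\int_{\Omega}t^{1-2\sigma}|\nabla \phi|^{2}\,\d X\\
&\geq \frac{1}{2}\int_{\Omega}t^{1-2\sigma}|\nabla \phi|^{2}\,\d X
\end{align*}
once $r_{0}$ is small enough, by the H\"older inequality and Lemma \ref{traceexterior}.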
Finally, set
$U_{W}=V_{W}+\overline{W}$; then $U_{W}$ is a local minimizer. As discussed above, $U_{W}$ satisfies
$\int_{\Omega}t^{1-2\sigma}|\nabla U_W|^2\,\d X \leq C_{1}r^{2}/2$. The uniqueness and the continuity of the map
$W \mapsto U_{W}$ follow from the strict local convexity of $J_{K,\Omega}$.
\end{proof}
\section{Existence and multiplicity results for the subcritical case}\label{sec:3}
We point out that due to the presence of the fractional Sobolev critical exponent, the Euler--Lagrange functional corresponding to Eq. \eqref{maineq1} does not satisfy the Palais--Smale condition. One way to overcome this difficulty is to consider the following subcritical approximation problem
\begin{align}\label{subcritical}
\left\{\begin{aligned}
&(-\Delta)^{\sigma} u=K(x)H^{\tau}(x) u^{\frac{n+2\sigma}{n-2\sigma}-\tau} && \text { in } \, \mathbb{R}^n,\\&u\in E^+,&&
\end{aligned}\right.
\end{align}
with $\tau> 0$ small. The aim in this section is to establish the existence and multiplicity results for the above subcritical type equations using the localization method introduced by Caffarelli and Silvestre \cite{CS2} as stated in the introduction.
We first introduce some notation which is used throughout the paper.
Let $\{K_{l}(x)\}$ be a sequence of functions satisfying the following conditions.
\begin{itemize}
\item[(i)]There exists some positive constant $A_{1}>1$ such that for any
$l= 1, 2, 3,\ldots$,\begin{align}\label{18}
| K_{l}(x)| \leq A_{1}, \quad \forall\, x \in \mathbb{R}^n.
\end{align}
\item[(ii)] For some integer $m \geq 2$, there exist $z_{l}^{(i)} \in \mathbb{R}^n$, $1 \leq i \leq m$, $R_{l} \leq \frac{1}{2} \min_{i \neq j}|z_{l}^{(i)}-z_{l}^{(j)}|$, such that $K_{l}$ is continuous near $z_{l}^{(i)}$ and
\begin{align}
\label{19}&\lim _{l \rightarrow \infty} R_{l}=\infty,&\\\label{20}&K_{l}(z_{l}^{(i)})=\max _{x \in B_{R_{l}}(z_{l}^{(i)})} K_{l}(x), & 1 \leq i \leq m,\\&\lim _{l \rightarrow \infty} K_{l}(z_{l}^{(i)})=a^{(i)},& 1 \leq i \leq m,\label{21}\\\label{22}&K_{\infty}^{(i)}(x):=(\text {weak $*$}) \lim _{l \rightarrow \infty} K_{l}(x+z_{l}^{(i)}), & 1 \leq i \leq m.
\end{align}
\item[(iii)]There exist some positive constants $A_{2}, A_{3}>1$, $\delta_{0}, \delta_{1}>0$, and
some bounded open sets $O_{l}^{(1)}, \ldots, O_{l}^{(m)} \subset \mathbb{R}^n$, such that, if we define for $1 \leq i \leq m$,
\[ \widetilde{O}_{l}^{(i)}=\big\{x \in \mathbb{R}^n : \operatorname{dist}(x, O_{l}^{(i)})<\delta_{0}\big\},\]
\[ O_{l}=\bigcup_{i=1}^{m} O_{l}^{(i)}, \quad \widetilde{O}_{l}=\bigcup_{i=1}^{m} \widetilde{O}_{l}^{(i)},\]
we have
\begin{equation}\begin{split}
z_{l}^{(i)} \in O_{l}^{(i)},\quad \operatorname{diam}(O_{l}^{(i)})< R_{l}/10,
\end{split}\end{equation}
\begin{equation}\label{23}\begin{split}
K_{l} \in C^{1}(\widetilde{O}_{l},[1/A_{2}, A_{2}]),
\end{split}\end{equation}
\begin{equation}\label{24}\begin{split}
K_{l}(z_{l}^{(i)}) \geq \max _{x \in \partial O_{l}^{(i)}} K_{l}(x)+\delta_{1},
\end{split}\end{equation}
\begin{equation}\label{25}\begin{split}
\max _{x \in\widetilde{O}_{l}}|\nabla K_{l}(x)| \leq A_{3}.
\end{split}\end{equation}
\end{itemize}
For $\varepsilon>0$ small, we define $V_{l}(m, \varepsilon):=V(m, \varepsilon, O_{l}^{(1)}, \ldots, O_{l}^{(m)}, K_{l})$ as in \eqref{bumps}.
Here and in the following, we are concerned with the case $m=2$, since the more general result is similar in nature.
\medskip
If $U$ is a function in $V_l(2,\varepsilon)$, one can find an optimal representation, following the ideas introduced in \cite{BC1,BC2}. Namely, we have
\begin{prop}\label{prop:3.1}
There exists $\var_{0} \in(0,1)$ depending only on $A_{1}, A_{2}, A_{3}, n,\sigma, \delta_{0}$, and $m$, but independent of $l$, such that, for any $\varepsilon\in (0,\var_{0}]$, $U \in V_{l}(2, \varepsilon)$, the following minimization problem
\begin{align}\label{27}
\min _{(\alpha, z, \lam ) \in B_{4\varepsilon}}\Big\|U-\sum_{i=1}^{2} \alpha_{i} \widetilde{\delta}( z_{i}, \lam _{i} )\Big\|_{\sigma}
\end{align}has a unique solution $(\alpha,z,\lam )$ up to a permutation. Moreover, the minimizer is achieved in $B_{2 \varepsilon}$ for large $l$, where
\begin{align*}
B_{\varepsilon}=\Big\{(\alpha, z,\lam ): &\,\alpha=(\alpha_{1}, \alpha_{2}),\,1/(2 A_{2}^{(n-2\sigma)/ 4\sigma}) \leq \alpha_{1}, \alpha_{2} \leq 2 A_{2}^{(n-2\sigma)/4\sigma},\\&\,z=(z_{1}, z_{2})\in {O}_{l}^{(1)}\times {O}_{l}^{(2)},\, \lam =(\lam _{1}, \lam _{2}),\, \lam _{1}, \lam _{2} \geq \varepsilon^{-1}\Big\}.
\end{align*} In particular, we can write $U$ as follows:
$$
U=\sum_{i=1}^{2} \alpha_{i} \widetilde{\delta}( z_{i}, \lam _{i} )+v,
$$where $v\in D_{z,\lam ,2}$.
In addition, the variables $\{\alpha_{i}\}$ satisfy\begin{align}\label{28''}
|\alpha_{i}-K_{l}(z_{i})^{(2\sigma-n)/4\sigma}|=o_{\varepsilon}(1)\quad \text{ for }\,i=1,2,
\end{align}where $o_{\varepsilon}(1)\rightarrow 0$ as $\varepsilon\rightarrow 0$.
\end{prop}
\begin{proof}
The proof is similar, up to minor modifications, to the corresponding statements in \cite{BC1,BC2};
we omit it here.
\end{proof}
In the sequel, we will often split $U$, a function in $ V_{l}(2, \varepsilon)$, $\varepsilon \in (0,\var_{0}]$, in the form
\begin{align*}
U=\alpha_{1}^{l} \widetilde{\delta}( z_{1}^{l},\lam _{1}^{l} )+\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} )+v^{l}
\end{align*}after making the minimization \eqref{27}. Proposition \ref{prop:3.1} guarantees the existence and uniqueness of $\alpha_{i}=\alpha_{i}(U)=\alpha_{i}^{l}$, $z_{i}=z_{i}(U)=z_{i}^{l}$ and $\lam _{i}=\lam _{i}(U)=\lam _{i}^{l}$ for $i=1,2$ (we often omit the index $l$ for simplicity).
For any $K\in L^{\infty}(\mathbb{R}^n)$ and $U\in D$, we define
\begin{align}I_{K, \tau}(U):= \frac{1}{2} \int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla U|^2\,\d X-\frac{1}{2_{\sigma}^{*}-\tau} \int_{\mathbb{R}^n} K(x)H^{\tau}(x)|U(x,0)|^{2_{\sigma}^{*}-\tau}\, \d x \label{subcriticalfunctional}
\end{align}
with $\tau\geq 0$ small. Clearly, $I_{K}= I_{K, 0}$, $I_{K, \tau} \in C^{2}(D, \mathbb{R})$.
To continue our proof, let $\{\overline{\tau}_l\}$ be a sequence satisfying
\begin{align}\label{26}
\lim _{l \rightarrow \infty} \overline{\tau}_{l}=0, \quad \lim _{l \rightarrow \infty}(|z_{l}^{(1)}|+|z_{l}^{(2)}|)^{\overline{\tau}_l}=1.
\end{align}
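Such a sequence always exists: since $|z_{l}^{(1)}|+|z_{l}^{(2)}| \geq |z_{l}^{(1)}-z_{l}^{(2)}| \geq 2R_{l} \rightarrow \infty$ by assumption (ii), one admissible choice is
\begin{align*}
\overline{\tau}_{l}:=\big(\log (2+|z_{l}^{(1)}|+|z_{l}^{(2)}|)\big)^{-2},
\end{align*}
for which $1\leq (|z_{l}^{(1)}|+|z_{l}^{(2)}|)^{\overline{\tau}_{l}} \leq \exp \big(\big(\log (2+|z_{l}^{(1)}|+|z_{l}^{(2)}|)\big)^{-1}\big) \rightarrow 1$ for $l$ large.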
We now give a lower bound energy estimate for some well-spaced \emph{bubbles}.
\begin{lem}\label{lem:3.1}
Let $\var_0$ be the constant in Proposition \ref{prop:3.1}. Suppose that $\var_1 \in(0,\var_{0})$ is small enough, $l$ is large enough, and $0 \leq \tau \leq \overline{\tau}_{l}$. Then there exists some constant $A_{4}=A_{4}(n,\sigma,\delta_{1}, A_{2})>1$ such that for any
$U \in V_{l}(2, \var_1 )$ with $z_{1}(U) \in \widetilde{O}_{l}^{(1)}$, $z_{2}(U) \in \widetilde{O}_{l}^{(2)}$, and $\operatorname{dist}(z_{1}(U), \partial O_{l}^{(1)})<\delta_{1} /(2 A_{3}) $
or $\operatorname{dist}(z_{2}(U), \partial O_{l}^{(2)})<\delta_{1} / (2 A_{3})$, we have \begin{align*}
I_{K_{l},\tau}(U) \geq c^{(1)}+c^{(2)}+1/A_{4},
\end{align*}where\begin{align}\label{28}
c^{(i)}=\frac{\sigma}{n}(a^{(i)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}.
\end{align}
\end{lem}
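In other words, $c^{(i)}$ is precisely the energy of a single standard bubble with the constant coefficient $a^{(i)}$: assuming the normalization $\|\widetilde{\delta}(0,1)\|_{\sigma}^{2}=\int_{\mathbb{R}^n}\delta(0,1)^{2_{\sigma}^{*}}\,\d x=(\mathcal{S}_{n,\sigma})^{n/\sigma}$ from \eqref{Sn}, a direct computation gives
\begin{align*}
I_{a^{(i)}}\big((a^{(i)})^{(2\sigma-n)/4\sigma}\, \widetilde{\delta}(z,\lam )\big)=\Big(\frac{1}{2}-\frac{1}{2_{\sigma}^{*}}\Big)(a^{(i)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}=c^{(i)}
\end{align*}
for any $z\in\mathbb{R}^n$ and $\lam>0$.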
\begin{proof}
We assume that $\operatorname{dist}(z_{1}(U), \partial O_{l}^{(1)})<\delta_{1} /(2 A_{3})$. It follows from \eqref{Sn}, \eqref{28''}, and some direct computations that, for $\var_1 >0$ small and $l$ large,
\begin{align*}
I_{K_{l}, \tau}(U)=&\sum_{i=1}^{2} I_{K_{l}, \tau}(\alpha_{i} \widetilde{\delta}( z_{i},\lam _{i} ))+o_{\var_1 }(1)
\\=&\sum_{i=1}^{2}\Big\{\frac{1}{2} K_{l}(z_{i})^{(2\sigma-n)/2\sigma} \| \widetilde{\delta}( z_{i},\lam _{i} )\|_{\sigma}^{2}\\&-\frac{1}{2_{\sigma}^{*}} K_{l}(z_{i})^{-n/2\sigma} \int_{\mathbb{R}^n} K_{l}(\cdot) \delta( z_{i},\lam _{i} )^{2_{\sigma}^{*}-\tau} \Big\}+o_{\var_1 }(1)+o(1)
\\\geq&\sum_{i=1}^{2}\Big\{\frac{1}{2} K_{l}(z_{i})^{(2\sigma-n)/2\sigma} \| \widetilde{\delta}(0,1)\|_{\sigma}^{2}\\&-\frac{1}{2_{\sigma}^{*}} K_{l}(z_{i})^{(2\sigma-n)/2\sigma} \|\delta(0,1)\|^{2} \Big\}+o_{\var_1 }(1)+o(1)
\\=&\sum_{i=1}^{2} \frac{\sigma}{n} K_{l}(z_{i})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o_{\var_1 }(1)+o(1).
\end{align*}
Combining this estimate with the assumption $\operatorname{dist}(z_{1}(U), \partial O_{l}^{(1)})<\delta_{1} /(2 A_{3})$, we obtain
\begin{align*}
I_{K_{l}, \tau}(U) \geq& \frac{\sigma}{n}(K_{l}(z_{l}^{(1)})-\delta_{1}/2)^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}\\&+\frac{\sigma}{n} K_{l}(z_{l}^{(2)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o_{\var_1 }(1)+o(1)\\\geq& \sum_{i=1}^{2} c^{(i)}+1/A_{4},
\end{align*}
where the choice of $A_4$ is evident thanks to \eqref{20}, \eqref{21}, \eqref{24}, \eqref{25} and \eqref{28}.
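Indeed, the gain of $\delta_{1}/2$ in the first term can be traced as follows: taking $\bar{z} \in \partial O_{l}^{(1)}$ with $|z_{1}(U)-\bar{z}|=\operatorname{dist}(z_{1}(U), \partial O_{l}^{(1)})<\delta_{1}/(2A_{3})$, we get from \eqref{24} and \eqref{25} that
\begin{align*}
K_{l}(z_{1}(U)) \leq K_{l}(\bar{z})+A_{3}|z_{1}(U)-\bar{z}| \leq K_{l}(z_{l}^{(1)})-\delta_{1}+\frac{\delta_{1}}{2}=K_{l}(z_{l}^{(1)})-\frac{\delta_{1}}{2},
\end{align*}
and the map $s \mapsto s^{(2\sigma-n)/2\sigma}$ is decreasing on $(0,\infty)$.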
The proof is now complete.
\end{proof}
From now on, the values of $A_{4}$ and $\var_1 $ are fixed. The main result of this section can be stated as follows:
\begin{thm}\label{thm:3.1}
Suppose that $\{K_{l}\}$ is a sequence of functions satisfying (i)--(iii). If there exist some bounded open sets $O^{(1)}, \ldots, O^{(m)} \subset \mathbb{R}^n$ and some constants $\delta_{2}, \delta_{3}>0$ such that for all $1 \leq i \leq m$,
\begin{gather}
\label{1111}
\widetilde{O}_{l}^{(i)}-z_{l}^{(i)} \subset O^{(i)}\quad \text { for all }\, l,\\\label{29}
\big\{U \in D^+: I_{K_{\infty}^{(i)}}^{\prime}(U)=0,\, c^{(i)} \leq I_{K_{\infty}^{(i)}}(U) \leq c^{(i)}+\delta_{2}\big\}\cap V(1, \delta_{3}, O^{(i)}, K_{\infty}^{(i)})=\emptyset.
\end{gather}
Then for any $\varepsilon>0$, there exists integer $\overline{l}_{\varepsilon, m}>0$ such that for any $l \geq \overline{l}_{\varepsilon, m}$ and $\tau\in (0,\overline{\tau}_{l})$, there exists $U_{l}= U_{l, \tau} \in V_{l}(m, \varepsilon)$ which solves
\begin{align} \label{30}
\left\{ \begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla U_l)=0&&\text{ in }\,\mathbb{R}^{n+1}_+,\\&\pa_{\nu}^{\sigma}U_l=K_{l}(x)H^{\tau}(x) U_{l}(x,0)^{\frac{n+2\sigma}{n-2\sigma}-\tau}&& \text { on } \, \mathbb{R}^n.
\end{aligned}\right.
\end{align}
Furthermore, $U_{l}$ satisfies\begin{align}\label{31}
\sum_{i=1}^{m} c^{(i)}-\varepsilon \leq I_{K_{l}, \tau}(U_{l}) \leq \sum_{i=1}^{m} c^{(i)}+\varepsilon.
\end{align}
\end{thm}
We prove Theorem \ref{thm:3.1} by a contradiction argument. For simplicity,
we only consider the case $m = 2$, since the changes for $m > 2$ are evident. Suppose, contrary to Theorem \ref{thm:3.1}, that for some $\varepsilon^{*}>0$ there exist a sequence of $l \rightarrow \infty$ and $0<\tau_{l}<\overline{\tau}_{l}$ such that Eq. \eqref{30}, for $\tau=\tau_{l}$, has
no solution in $V_{l}(2, \varepsilon^{*})$ satisfying \eqref{31} with $\varepsilon=\varepsilon^{*}$. A somewhat involved
procedure will then yield a contradiction. We outline it now; the details are given in the next two sections. The proof consists of two parts:
\begin{itemize}
\item \emph{Part 1.} Assuming the negation of Theorem \ref{thm:3.1}, we obtain a uniform lower bound on the gradient vectors in a certain annular region. This is a standard consequence of the Palais--Smale condition in variational arguments.
\item \emph{Part 2.} We use a variational method to construct an approximating ``minimax'' curve. Part 1 can be used to construct a deformation. In our setting, we follow the nonnegative gradient flow to make a deformation, which is an important step in reaching the desired contradiction.
\end{itemize}
Part 1 will be carried out in Section \ref{sec:4} and Part 2 in Section \ref{sec:5}.
\section{First part of the proof of Theorem \ref{thm:3.1}}\label{sec:4}
For $\varepsilon_{2}>0$, we denote by $\widetilde{V}_{l}(2, \varepsilon_{2})$ the set of functions $U$ in $D$ satisfying: there exist $\alpha=(\alpha_{1}, \alpha_{2}) \in \mathbb{R}^2$, $z=(z_{1}, z_{2}) \in O_{l}^{(1)} \times O_{l}^{(2)}$ and $\lam =(\lam _{1}, \lam _{2})\in \mathbb{R}^2$ such that
\begin{gather*}
\lam _{1},\lam _{2}>\varepsilon_{2}^{-1},\\|\lam _{i}^{\tau_{l}}-1|<\varepsilon_{2}, ~ i=1,2,\\|\alpha_{i}-K_{l}(z_{i})^{(2\sigma-n)/4\sigma}|<\varepsilon_{2}, ~ i=1,2,\\\Big\|U-\sum_{i=1}^{2} \alpha_{i}\widetilde{\delta}(z_i,\lam _i)^{1+O(\tau_l)}\Big\|_{\sigma}<\varepsilon_{2}.
\end{gather*}
Throughout the paper, we denote $p_{l}=\frac{n+2\sigma}{n-2\sigma}-\tau_{l}$.
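Two elementary identities for $p_{l}$ will be used repeatedly below:
\begin{align*}
p_{l}+1=2_{\sigma}^{*}-\tau_{l}, \qquad p_{l}-1=\frac{4\sigma}{n-2\sigma}-\tau_{l},
\end{align*}
so that $4\sigma/(p_{l}-1) \rightarrow n-2\sigma$ as $l \rightarrow \infty$.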
\begin{lem}\label{lem:3.2}
For $\varepsilon_{2}=\varepsilon_{2}(\var_1 , \varepsilon^{*}, n,\sigma)>0$ small enough,
we have, for $l$ large enough,
\begin{align}\label{32}
\widetilde{V}_{l}(2, \varepsilon_{2}) \subset V_{l}(2, o_{\varepsilon_{2}}(1)) \subset V_{l}(2 , \var_1 ) \cap V_{l}(2, \varepsilon^{*}),
\end{align}
where $o_{\varepsilon_{2}}(1)\rightarrow 0$ as $\varepsilon_{2}\rightarrow 0$.
\end{lem}
\begin{proof}
This follows easily from the definition of $\widetilde{V}_{l}(2, \varepsilon_{2})$, so we omit the details.
\end{proof}
The following result is the crucial step in the proof of
Theorem \ref{thm:3.1}.
\begin{prop}\label{prop:3.2}
Under the hypotheses of Theorem \ref{thm:3.1} and the
contrary of the conclusion of Theorem \ref{thm:3.1}, there exist some constants $\varepsilon_{2} \in(0, \min \{\var_{0},\var_1 , \varepsilon^{*}, \delta_{3}\})$ and
$\varepsilon_{3} \in(0, \min\{\var_{0}, \var_1 , \varepsilon_{2}, \varepsilon^{*}, \delta_{3}\})$ which are independent of $l$ such that \eqref{32}
holds for such $\varepsilon_{2}$, and there exist $\delta_{4}=\delta_{4}(\varepsilon_{2}, \varepsilon_{3})>0$ and $l_{\varepsilon_{2}, \varepsilon_{3}}^{\prime}>1$ such that for any
$l \geq l_{\varepsilon_{2}, \varepsilon_{3}}^{\prime}$, $U \in \widetilde{V}_{l}(2, \varepsilon_{2}) \backslash \widetilde{V}_{l}(2, \varepsilon_{2} / 2)$ with $|I_{K_{l}, \tau_{l}}(U)-(c^{(1)}+c^{(2)})|<\varepsilon_{3}$, we have$$
\|I_{K_{l}, \tau_{l}}^{\prime}(U)\| \geq \delta_{4},
$$where $I_{K_{l}, \tau_{l}}^{\prime}$ denotes the Fr\'echet derivative.
\end{prop}
\begin{rem}
Proposition \ref{prop:3.2} will be used to construct an approximating ``minimax'' curve in Section \ref{sec:5}.
\end{rem}
Evidently, we have, under the contrary of Theorem \ref{thm:3.1}, that for each $l$,
\begin{align*}
\inf\{\|I_{K_{l}, \tau_{l}}^{\prime}(U)\|:U\in \widetilde{V}_{l}(2, \varepsilon_{2}) \backslash \widetilde{V}_{l}(2, \varepsilon_{2} / 2),\, |I_{K_{l}, \tau_{l}}(U)-(c^{(1)}+c^{(2)})|<\varepsilon_{3}\}>0.
\end{align*}
We prove Proposition \ref{prop:3.2} by a contradiction argument. Suppose the statement of Proposition \ref{prop:3.2} is not true. Then, no matter how small $\varepsilon_{2}, \varepsilon_{3}>0$ are, there exists a subsequence (which is still denoted by $\{U_l\}$) such that
\begin{gather}\label{33}
\{U_{l}\} \subset \widetilde{V}_{l}(2, \varepsilon_{2}) \backslash \widetilde{V}_{l}(2, \varepsilon_{2} / 2),\\\label{34}
|I_{K_{l}, \tau_l}(U_{l})-(c^{(1)}+c^{(2)})|<\varepsilon_{3},\\\label{35}
\lim _{l \rightarrow \infty}\|I_{K_{l}, \tau_{l}}^{\prime}(U_{l})\|=0.
\end{gather}
However, under the above assumptions, we can prove that there exists another subsequence, still denoted by $\{U_{l}\}$, such that $U_{l}\in \widetilde{V}_{l}(2, \varepsilon_{2} / 2)$, which leads to a contradiction. The existence of such a sequence requires some lengthy indirect analysis of the interaction of the two ``\emph{bubbles}''.
We break the proof of Proposition \ref{prop:3.2} into several claims.
\medskip
First we write\begin{equation}\label{36}
U_{l}=\alpha_{1}^{l} \widetilde{\delta}( z_{1}^{l}, \lam _{1}^{l} ) +\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l}, \lam _{2}^{l} )+v_{l}
\end{equation}
after making the minimization \eqref{27}. By Proposition \ref{prop:3.1} and some standard arguments in \cite{BC1,BC2,Li93}, if $\varepsilon_{2}>0$ is small enough, we have
\begin{gather}
\label{37}(\lam _{1}^{l})^{-1},~ (\lam _{2}^{l})^{-1} = o_{\varepsilon_{2}}(1),\\\label{38}
|\alpha_{i}^{l}-K_{l}(z_{i}^{l})^{(2\sigma-n)/4\sigma}| = o_{\varepsilon_{2}}(1), \quad i=1,2,\\\label{39}
\|v_{l}\|_{\sigma} = o_{\varepsilon_{2}}(1),\\\label{40}
\operatorname{dist}(z_{1}^{l}, O_{l}^{(1)}),~ \operatorname{dist}(z_{2}^{l}, O_{l}^{(2)}) = o_{\varepsilon_{2}}(1).
\end{gather}
\medskip
Next we will derive some elementary estimates on the interaction of the two ``\emph{bubbles}'' in \eqref{36}. More precisely, we want to find another representation of $U_l$ in \eqref{36}, from which we can read off its location and concentration rate easily. Let us first introduce a linear isometry operator.
For $z \in \mathbb{R}^n$, we define $T_{z}: D\rightarrow D$ by
$$
T_{z} U(x,t):=U(x+z,t).
$$It is easy to see that $\|T_{z} U\|_{\sigma}=\|U\|_{\sigma}$.
We now describe the profile of the \emph{bubbles} in \eqref{36}.
\begin{claim}\label{claim:1}
For $\varepsilon_{2}>0$ small enough, we have \begin{align*}
\lim _{l \rightarrow \infty} \lam _{1}^{l}=\lim _{l \rightarrow\infty} \lam _{2}^{l}=\infty.
\end{align*}
\end{claim}
\begin{proof}
Assume to the contrary that $ \lam _{1}^{l}=\lam _{1}+o(1)<\infty$ up to a subsequence. Here and in the following, let $o(1)$ denote any sequence tending to 0 as $l\rightarrow \infty$.
Now the proof consists of three steps.
\textbf{ Step 1} (Construct a positive solution). One observes from \eqref{36} that$$
T_{z_{1}^{l}} U_{l}=\alpha_{1}^{l} \widetilde{\delta}( 0,\lam _{1}^{l} )+\alpha_{2}^{l} \widetilde{\delta}(z_{2}^{l}-z_{1}^{l}, \lam _{2}^{l})+T_{z_{1}^{l}} v_{l}.
$$
Then by Proposition \ref{prop:3.1}, after passing to a subsequence, we have
\begin{gather}
\lim _{l \rightarrow \infty} \alpha_{1}^{l}=\alpha_{1} \in\Big[\frac{1}{2}(A_{2})^{(2\sigma-n)/4\sigma}-o_{\varepsilon_{2}}(1), 2(A_{2})^{(2\sigma-n)/4\sigma}+o_{\varepsilon_{2}}(1)\Big],
\label{alpha1} \\ \lim _{l \rightarrow \infty} \alpha_{2}^{l}=\alpha_{2} \in\Big[\frac{1}{2}(A_{2})^{(2\sigma-n)/4\sigma}-o_{\varepsilon_{2}}(1), 2(A_{2})^{(2\sigma-n)/4\sigma}+o_{\varepsilon_{2}}(1)\Big], \label{alpha2}
\end{gather}
and $$ T_{z_{1}^{l}} v_{l} \rightharpoonup w_0 \quad \text { weakly in }\, D$$
for some $w_0 \in D$. From the lower semi-continuity of the norm and \eqref{39}, we infer that \begin{align}\label{41}
\|w_0\|_{\sigma} \leq \varliminf_{l \rightarrow \infty}\|T_{z_{1}^{l}} v_{l}\|_{\sigma}= o_{\varepsilon_{2}}(1).
\end{align}
Using assumption (ii) (stated at the beginning of Section \ref{sec:3}), we get \begin{align}\label{42}
\lim _{l \rightarrow \infty}|z_{1}^{l}-z_{2}^{l}| \geq \lim _{l \rightarrow \infty} R_{l}=\infty.
\end{align}
Therefore,
\begin{align}\label{43}
T_{z_{1}^{l}} U_{l} \rightharpoonup W:=\alpha_{1} \widetilde{\delta}( 0,\lam _{1} )+w_0\quad \text { weakly in }\, D.
\end{align}
Obviously, $W \neq 0$ if $\varepsilon_{2}$ is small enough.
Next we prove that $W$ is a weak solution of the following equation
\begin{align}\label{44}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla W)=0&&\text{ in }\,\mathbb{R}^{n+1}_+,\\&\pa_{\nu}^{\sigma}W=T_{\zeta}K_{\infty}^{(1)}|W(x,0)|^{\frac{4\sigma}{n-2\sigma}}W(x,0) && \text { on }\, \mathbb{R}^n,
\end{aligned}\right.
\end{align}
where $\zeta \in O^{(1)}$ with $\operatorname{dist}(\zeta , \partial O^{(1)})>\delta_{0} / 2$. Note that we slightly abuse notation: only in this proof do we write $T_{\zeta}K_{\infty}^{(1)}=K_{\infty}^{(1)}(\cdot+\zeta)$.
For any $\phi \in C_{c}^{\infty}(\overline{\mathbb{R}}^{n+1}_{+})$, it follows from \eqref{35} that
$$
I_{K_{l},\tau_{l}}^{\prime}(U_{l})(T_{-z_{1}^{l}} \phi) =o(1)\|T_{-z_{1}^{l}} \phi\|_{\sigma} =o(1)\|\phi\|_{\sigma} =o(1).
$$
Summing up \eqref{22}, \eqref{25}, \eqref{26} and \eqref{43}, we find that
\begin{align*}
o(1)=&\big \langle U_{l},T_{-z_{1}^{l}}\phi\big\rangle-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot)H^{\tau_l}(\cdot)|U_{l}|^{p_l-1}U_{l}T_{-z_{1}^{l}} \phi
\\=&\big \langle T_{z_{1}^{l}} U_{l},\phi\big\rangle-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot+z_{1}^{l})H^{\tau_l}(\cdot+z_{1}^{l})|T_{z_{1}^{l}} U_{l}|^{p_l-1}(T_{z_{1}^{l}}U_{l}) \phi
\\=&\big\langle W ,\phi\big\rangle -\int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}|W|^{\frac{4\sigma}{n-2\sigma}}W\phi+o(1),
\end{align*}
where $\zeta=\lim _{l \rightarrow \infty}(z_{1}^{l}-z_{l}^{(1)})$ along a subsequence. This means $W$ is a weak solution of \eqref{44}.
The positivity of $W$ can be verified from the following argument.
Let us write $W=W^{+}-W^{-}$, where $W^{+}=\max (W, 0)$ and $W^{-}=\max (-W, 0)$. It follows from \eqref{43}, \eqref{41} and \eqref{trace} that \begin{align}\label{claim1-1}
\int_{\mathbb{R}^n\times\{0\}}(W^{-})^{2_{\sigma}^{*}}\,\d x = o_{\varepsilon_{2}}(1).
\end{align} Multiplying \eqref{44}
by $W^{-}$ and integrating by parts, we have
\begin{align}
\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla W^-|^2\,\d X=&\int_{\mathbb{R}^n\times\{0\}}T_{\zeta}K_{\infty}^{(1)}(W^{-})^{2^{*}_{\sigma}}\,\d x\notag\\\leq & o_{\varepsilon_{2}}(1)\Big(\int_{\mathbb{R}^n\times\{0\}}(W^{-})^{2^{*}_{\sigma}}\,\d x\Big)^{2/2^{*}_{\sigma}}\notag\\\leq & o_{\varepsilon_{2}}(1)\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla W^-|^2\,\d X,
\end{align}
where we used \eqref{claim1-1} in the first inequality and \eqref{trace} in the second inequality. Hence, if $\varepsilon_{2}>0$ is small enough, we immediately obtain $W^{-} \equiv 0$, namely, $W \geq 0$. It follows from \eqref{44} and the strong maximum principle that $W > 0$.
\textbf{ Step 2} (Energy bound estimates). We now begin to estimate the value of $I_{T_{\zeta}K_{\infty}^{(1)}} (W)$ in order to obtain a contradiction. The estimate we are going to establish is
\begin{align}\label{45}
c^{(1)} \leq I_{T_{\zeta}K_{\infty}^{(1)}}(W) \leq c^{(1)}+o_{\varepsilon_{2}}(1),
\end{align}
where $o_{\varepsilon_{2}}(1)\rightarrow 0$ as $\varepsilon_{2}\rightarrow 0$.
Firstly, multiplying \eqref{44} by $W$ and integrating by parts, we have
\begin{align*}
\int_{\mathbb{R}^{n+1}_{+}}t^{1-2\sigma}|\nabla W|^{2}\,\d X=\int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}W^{2^{*}_{\sigma}}\,\d x.
\end{align*}
This implies that
\begin{align*}
I_{T_{\zeta} K_{\infty}^{(1)}}(W)&=\frac{1}{2} \int_{\mathbb{R}^{n+1}_{+}}t^{1-2\sigma}|\nabla W|^{2}\,\d X-\frac{1}{2^{*}_{\sigma}}\int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}W^{2^{*}_{\sigma}}\,\d x\\&=\frac{\sigma}{n} \int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}W^{2^{*}_{\sigma}}\,\d x.
\end{align*}
We thus conclude from \eqref{trace} and the upper bound $T_{\zeta}K_{\infty}^{(1)} \leq a^{(1)}$ that
\begin{align*}
\mathcal{S}_{n,\sigma} \leq& \frac{\big(\int_{\mathbb{R}^{n+1}_{+}}t^{1-2\sigma}|\nabla W|^{2}\,\d X\big)^{1 / 2}}{\big(\int_{\mathbb{R}^n\times\{0\}}W^{2^{*}_{\sigma}}\,\d x\big)^{1/2^{*}_{\sigma}}}\\ \leq &\frac{\big(\int_{\mathbb{R}^{n+1}_{+}}t^{1-2\sigma}|\nabla W|^{2}\,\d X\big)^{1 / 2}}{\big(\int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}W^{2^{*}_{\sigma}}\,\d x\big)^{1/2^{*}_{\sigma}}}(a^{(1)})^{1/2^{*}_{\sigma}} \\
=&\Big(\int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}W^{2^{*}_{\sigma}}\,\d x\Big)^{\sigma/n}(a^{(1)})^{1/2^{*}_{\sigma}} .
\end{align*}
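Raising the last inequality to the power $n/\sigma$ and using $\frac{n}{2^{*}_{\sigma}\sigma}=\frac{n-2\sigma}{2\sigma}$, we obtain
\begin{align*}
\int_{\mathbb{R}^n\times\{0\}} T_{\zeta}K_{\infty}^{(1)}W^{2^{*}_{\sigma}}\,\d x \geq (a^{(1)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma},
\end{align*}
and hence, by the preceding identity for $I_{T_{\zeta}K_{\infty}^{(1)}}(W)$,
\begin{align*}
I_{T_{\zeta}K_{\infty}^{(1)}}(W) \geq \frac{\sigma}{n}(a^{(1)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}=c^{(1)}.
\end{align*}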
This proves the first inequality in \eqref{45}.
On the other hand, we deduce from \eqref{18} that $$
| K_{\infty}^{(1)}(x)| \leq A_{1},\quad \forall\, x\in \mathbb{R}^n.$$
Owing to \eqref{Sn}, \eqref{36}, \eqref{37}, \eqref{39} and \eqref{42}, we have
\begin{align*}
I_{K_{l}, \tau_{l}}(U_{l})&=I_{K_{l}, \tau_{l}}(\alpha_{1}^{l} \widetilde{\delta}( z_{1}^{l},\lam _{1}^{l} ))+I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)\\&
=I_{T_{z_{1}^l}K_{l}, \tau_{l}}(\alpha_{1}^{l} \widetilde{\delta}( 0,\lam _{1}^{l} ))+I_{K_{l},\tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)\\&
=I_{T_{z_{1}^l}K_{l}, \tau_{l}}(\alpha_{1} \widetilde{\delta}( 0,\lam _{1} ))+I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)+o(1)\\&=I_{ T_{\zeta}K_{\infty}^{(1)}}(\alpha_{1} \widetilde{\delta}( 0,\lam _{1} ))+I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)+o(1)
\\&=I_{T_{\zeta}K_{\infty}^{(1)}}(W)+I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)+o(1).
\end{align*}
Consequently,
\begin{align}\label{47}
I_{T_{\zeta}K_{\infty}^{(1)}}(W)=I_{K_{l}, \tau_{l}}(U_{l})-I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)+o(1).
\end{align}
Combining \eqref{35} and \eqref{36}, we find\begin{align*}
o(1)&=
I_{K_{l}, \tau_{l}}^{\prime}(U_{l})(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))=I_{K_{l},\tau_{l} }^{\prime}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)+o(1),
\end{align*}
namely,\begin{gather}
\label{48}
\|\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ) \|_{\sigma}^{2}=\int_{\mathbb{R}^n} K_{l}(x)H^{\tau_l}(x)(\alpha_{2}^{l} \delta( z_{2}^{l},\lam _{2}^{l} ))^{2_{\sigma}^{*}-\tau_{l}}\, \d x+o_{\varepsilon_{2}}(1)+o(1),\\ \label{49}I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))=\frac{\sigma}{n} \|\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} )\|_{\sigma}^{2}+o_{\varepsilon_{2}}(1)+o(1).
\end{gather}
From \eqref{Sn} and \eqref{alpha2}, we obtain\begin{align}\label{50}
\begin{aligned}
\|\alpha_{2}^{l} \widetilde{\delta}(z_{2}^{l},\lam _{2}^{l} )\|_{\sigma}^{2} &\geq\Big(\frac{1}{2}(A_{2})^{(2\sigma-n)/4\sigma}-o_{\varepsilon_{2}}(1)\Big)(\mathcal{S}_{n,\sigma})^{n/\sigma} \\&>\frac{1}{4}(A_{2})^{(2\sigma-n)/4\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}.
\end{aligned}
\end{align}
Then, by \eqref{trace}, \eqref{19}--\eqref{21}, \eqref{37}, \eqref{42}, and the H\"{o}lder inequality, we have
\begin{align*}
\mathcal{S}_{n,\sigma}&\leq \frac{\big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))|^{2}\,\d X\big)^{1/2}}{\big(\int_{\mathbb{R}^n}(\alpha_{2}^{l} \delta( z_{2}^{l},\lam _{2}^{l} ))^{2_{\sigma}^{*}}\,\d x\big)^{1/2_{\sigma}^{*}}}
\\& \leq \frac{\big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))|^{2}\,\d X\big)^{1/2}}{\big(\int_{B_{R_{l}}(z_{l}^{(2)})}(\alpha_{2}^{l} \delta( z_{2}^{l},\lam _{2}^{l} ))^{2_{\sigma}^{*}}\,\d x\big)^{1/2^{*}_{\sigma}}+o(1)}\\ &\leq \frac{\big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))|^{2}\,\d X\big)^{1/2}\cdot K_{l}(z_{l}^{(2)})^{1/2^{*}_{\sigma}}}{\big(\int_{B_{R_{l}}(z_{l}^{(2)})} K_{l}(x)H^{\tau_l}(x)(\alpha_{2}^{l} \delta( z_{2}^{l}, \lam _{2}^{l} ))^{2_{\sigma}^{*}-\tau_{l}}\,\d x\big)^{1/2^{*}_{\sigma}}+o(1)}
\\&=\frac{\big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla (\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} ))|^{2}\,\d X\big)^{1/2} \cdot(a^{(2)})^{1/2^{*}_{\sigma}}+o(1)}{\big(\int_{\mathbb{R}^n} K_{l}(x)H^{\tau_l}(x)(\alpha_{2}^{l} \delta( z_{2}^{l},\lam _{2}^{l} ))^{2_{\sigma}^{*}-\tau_{l}}\,\d x\big)^{1/2^{*}_{\sigma}}+o(1)}.
\end{align*}
Thus, using \eqref{48}, we establish that
\begin{align*}
\mathcal{S}_{n,\sigma} \leq\|\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} )\|_{\sigma}^{2\sigma / n} \cdot(a^{(2)})^{1/2^{*}_{\sigma}}+o(1).
\end{align*}
This together with \eqref{49} gives
\begin{align}
I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} )) &\geq \frac{\sigma}{n}(a^{(2)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o_{\varepsilon_{2}}(1)+o(1)\notag\\&=c^{(2)}+o_{\varepsilon_{2}}(1)+o(1).\label{a}
\end{align}
Putting \eqref{34}, \eqref{47} and \eqref{a} together, we obtain the
right hand side of \eqref{45}.
\textbf{Step 3} (Completion of the proof). Finally, a contradiction arises from \eqref{29}, \eqref{43}--\eqref{45}, and the positivity of $W$ for $\varepsilon_{2}>0$ small enough. This proves that $\lim_{l\rightarrow \infty } \lam _{1}^{l}=\infty$. Similarly we have $\lim_{l\rightarrow \infty } \lam _{2}^{l}=\infty$. Claim \ref{claim:1} has been established.
\end{proof}
For any $\lam >0$ and $z \in \mathbb{R}^n$, we define $\mathscr{T}_{l,\lam , z}: D\rightarrow D$ by $$\mathscr{T}_{l, \lam , z} U(x,t):=\lam ^{2\sigma / (1-p_{l})} U(\lam ^{-1}x+z,\lam ^{-1}t).$$It is clear that $\mathscr{T}_{l, \lam , z}^{-1} U(x,t)=\lam ^{2 \sigma/ (p_l-1)} U(\lam (x-z),\lam t)$.
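Unlike $T_{z}$, the operator $\mathscr{T}_{l, \lam , z}$ is an isometry only in the limit $\tau_{l} \rightarrow 0$: a direct change of variables gives
\begin{align*}
\|\mathscr{T}_{l, \lam , z} U\|_{\sigma}^{2}=\lam ^{n-2\sigma-4\sigma/(p_{l}-1)}\|U\|_{\sigma}^{2},
\end{align*}
where the exponent $n-2\sigma-4\sigma/(p_{l}-1)=O(\tau_{l})$ vanishes when $\tau_{l}=0$. This accounts for the prefactor $(\lam _{1}^{l})^{4\sigma/(p_{l}-1)+2\sigma-n}$ appearing in the computations below.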
\begin{lem}\label{lem:3.3}
There exists some constant $C=C(n,\sigma,A_{2})>0$ such that for
small $\varepsilon_{2}$ and large $l$, we have \begin{align*}
(\lam _{1}^{l})^{\tau_{l}} , (\lam _{2}^{l})^{\tau_{l}} \leq C.
\end{align*}
\end{lem}
\begin{proof}
Applying \eqref{35}, we deduce that
\begin{align}\label{51}
I_{K_{l}, \tau_{l}}^{\prime}(U_{l})( \widetilde{\delta}( z_{1}^{l},\lam _{1}^{l} ))=o(1).
\end{align}
Now an explicit calculation from \eqref{39}, \eqref{42}, Claim \ref{claim:1}, and Proposition \ref{prop:3.1} yields that
\begin{gather*}\big\langle \widetilde{\delta}( z_{1}^{l},\lam _{1}^{l} ) , v_{l}\big\rangle=0,\\
\big\langle \widetilde{\delta}( z_{1}^{l},\lam _{1}^{l} ), \widetilde{\delta}( z_{2}^{l},\lam _{2}^{l} )\big\rangle =o(1), \\\int_{\mathbb{R}^n} K_{l}(\cdot)H^{\tau_l}(\cdot) \delta( z_{2}^{l},\lam _{2}^{l} )^{p_{l}} \delta( z_{1}^{l},\lam _{1}^{l} )=o(1),\\\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot)H^{\tau_l}(\cdot) v_{l}^{p_{l}} \delta( z_{1}^{l},\lam _{1}^{l} )=o_{\varepsilon_{2}}(1).
\end{gather*}
Putting together the above estimates, we have
\begin{align}\label{52}
(\alpha_{1}^{l})^{p_{l}} \int_{\mathbb{R}^n} K_{l}(\cdot)H^{\tau_l}(\cdot) \delta( z_{1}^{l},\lam _{1}^{l} )^{p_{l}+1} =\alpha_{1}^{l} \| \widetilde{\delta}( z_{1}^{l},\lam _{1}^{l} )\|_{\sigma}^2
+o_{\varepsilon_{2}}(1)+o(1).
\end{align}
Then the bound on the first term follows from \eqref{23}, \eqref{26}, \eqref{38}, \eqref{40}, \eqref{52}, and Claim \ref{claim:1}. Similarly we have $(\lam _{2}^{l})^{\tau_{l}} \leq C$.
\end{proof}
Without loss of generality, we can always assume that
\begin{equation}\label{53}
\lam _{1}^{l} \leq \lam _{2}^{l}.
\end{equation}
A direct computation using \eqref{36} shows that
\begin{equation}\label{54}
\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}=\widetilde{\alpha}_{1}^{l} \widetilde{\delta}(0,1)+\widetilde{\alpha}_{2}^{l}\widetilde{\delta}(\lam _{1}^{l}(z_{2}^{l}-z_{1}^{l}),\lam _{2}^{l} / \lam _{1}^{l}) +\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} v_{l},
\end{equation}where\[\widetilde{\alpha}_{1}^{l}=\alpha_{1}^{l}(\lam _{1}^{l})^{(n-2\sigma) / 2-2\sigma /(p_{l}-1)},\]
\[\widetilde{\alpha}_{2}^{l}=\alpha_{2}^{l}(\lam _{1}^{l})^{(n-2\sigma) / 2-2\sigma /(p_{l}-1)}.\]
Then we can verify the existence of $U_{1} \in D$ and $\xi_{1} \in O^{(1)}$ such that
\begin{gather}
\label{55}
\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l} \rightharpoonup U_{1} \quad \text { weakly in }\, D,\\\label{56}\lim _{l \rightarrow \infty}(z_{1}^{l}-z_{l}^{(1)})=\xi_{1},
\end{gather}up to a subsequence.
Accordingly, by making use of \eqref{22}, \eqref{25}, \eqref{56} and \eqref{40}, we have \begin{equation}\label{57}
\lim _{l \rightarrow \infty} K_{l}(z_{1}^{l})^{(2\sigma-n)/4\sigma}=K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma}.
\end{equation}
Now, one observes from \eqref{35} that for any $\phi \in C_{c}^{\infty}(\overline{\mathbb{R}}^{n+1}_+)$,
\begin{align*}
o(1)=&I_{K_{l}, \tau_{l}}^{\prime}(U_{l})(\mathscr{T}^{-1}_{l, \lam _{1}^{l}, z_{1}^{l}} \phi)\\=&
\big\langle U_l,\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} \phi\big\rangle-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot)H^{\tau_l}(\cdot)|U_{l}|^{p_l-1}U_{l}\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} \phi\\=&(\lam _{1}^{l})^{4\sigma /(p_{l}-1)+2\sigma-n}\Big\{\big\langle \mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}, \phi\big\rangle-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})\\ &\times H^{\tau_l}(\cdot/ \lam _{1}^{l}+z_{1}^{l}) |\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}) \phi\Big\}.
\end{align*}Taking the limit $l \rightarrow \infty$, and then using \eqref{22}, \eqref{26}, \eqref{40}, \eqref{55}, \eqref{56}, and Lemma \ref{lem:3.3}, we obtain
\begin{align*}
\big\langle U_{1} ,\phi\big\rangle-\int_{\mathbb{R}^n} K_{\infty}^{(1)}(\xi_{1})|U_{1}(x,0)|^{\frac{4\sigma}{n-2\sigma}} U_{1}(x,0)\,\phi(x,0)\,\d x=0.
\end{align*}
Namely, $U_1$ satisfies
\begin{align}\label{58}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla U_1)=0&&\text{ in }\,\mathbb{R}^{n+1}_+,\\&\pa_{\nu}^{\sigma}U_1=K_{\infty}^{(1)}(\xi_{1})|U_{1}(x,0)|^{\frac{4\sigma}{n-2\sigma}} U_{1}(x,0) && \text { on }\, \mathbb{R}^n.
\end{aligned}\right.
\end{align}
Moreover, we see from \eqref{54} that $U_{1}$ is not identically zero if $\varepsilon_{2}$ is small enough. We then argue as before to obtain $U_1>0$.
By the classification theorem of positive
solutions of \eqref{58} (see \cite[Proposition 1.3]{MQ} or \cite[Theorem 1.8]{JLX}), there exist $\lam ^{*}>0$ and $z^{*} \in \mathbb{R}^n$ such that\begin{align}\label{59}
U_{1}=K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z^{*},\lam ^{*}).
\end{align}
\begin{claim}\label{claim:2}
For $l$ large enough, we have \begin{align*}
|z^{*}| = o_{\varepsilon_{2}}(1), ~ |\lam ^{*}-1| = o_{\varepsilon_{2}}(1),~ (\lam _{1}^{l})^{\tau_{l}}=1+o_{\varepsilon_{2}}(1).
\end{align*}
\end{claim}
\begin{proof}
First of all, using Lemma \ref{lem:3.3}, we find $$\lim_{l \rightarrow \infty}(\lam _{1}^{l})^{\tau_{l}}=A_{\varepsilon_{2}, \varepsilon_{3}}$$
along a subsequence, where $A_{\varepsilon_{2}, \varepsilon_{3}}$ is a positive constant independent of $l$ for fixed $\varepsilon_{2}$ and $\varepsilon_{3}$. Thanks to \eqref{38} and \eqref{57}, we obtain\begin{align*}
\alpha_{1}^{l}=K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma}+o_{\varepsilon_{2}}(1)+o(1).
\end{align*}
Note that \begin{align}\label{60}
\widetilde{\alpha}_{1}^{l}=\alpha_{1}^{l}(\lam _{1}^{l})^{(n-2\sigma) / 2-2\sigma /(p_{l}-1)}=\alpha_{1}^{l}(\lam _{1}^{l})^{-(n-2\sigma)^{2}\tau_{l} / 8\sigma +O(\tau_{l}^{2})}.
\end{align}
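The expansion in \eqref{60} is elementary: since $p_{l}-1=\frac{4\sigma}{n-2\sigma}-\tau_{l}$,
\begin{align*}
\frac{2\sigma}{p_{l}-1}=\frac{n-2\sigma}{2}\Big(1-\frac{(n-2\sigma)\tau_{l}}{4\sigma}\Big)^{-1}=\frac{n-2\sigma}{2}+\frac{(n-2\sigma)^{2}}{8\sigma}\tau_{l}+O(\tau_{l}^{2}),
\end{align*}
so that $(n-2\sigma)/2-2\sigma/(p_{l}-1)=-(n-2\sigma)^{2}\tau_{l}/8\sigma+O(\tau_{l}^{2})$.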
Then it can be computed that\begin{align}\label{61}
\widetilde{\alpha}_{1}^{l}=K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma}(A_{\varepsilon_{2}, \varepsilon_{3}})^{-(n-2\sigma)^{2} / 8\sigma}+o_{\varepsilon_{2}}(1)+o(1).
\end{align}
Moreover, by \eqref{37}, \eqref{42} and \eqref{53}--\eqref{55}, we have
\begin{align}\label{62}
\begin{aligned}
\widetilde{\alpha}_{1}^{l} \widetilde{\delta}(0,1)+\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} v_{l} \rightharpoonup U_{1} \quad \text { weakly in } \, D.
\end{aligned}
\end{align}
Consequently, it follows from \eqref{39}, \eqref{61}, \eqref{62}, and Lemma \ref{lem:3.3} that\begin{align*}
\| K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma} (A_{\varepsilon_{2}, \varepsilon_{3}})^{-(n-2\sigma)^{2} / 8\sigma} \widetilde{\delta}(0,1)-K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z^{*}, \lam ^{*} ) \|_{\sigma}=o_{\varepsilon_{2}}(1)+o(1).
\end{align*}
Finally, taking the limit $l\rightarrow \infty$, we get\begin{align*}|z^{*}|=o_{\varepsilon_{2}}(1), \quad \lam ^{*}=1+o_{\varepsilon_{2}}(1), \quad A_{{\varepsilon_{2}, \varepsilon_{3}}}=1+o_{\varepsilon_{2}}(1). \end{align*}
\end{proof}
We define $\xi_{l} \in D$ by\begin{align}\label{63}
\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}=U_{1}+\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}.
\end{align}
It follows from \eqref{55} that\begin{align}\label{63'}
\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l} \rightharpoonup 0 \quad \text { weakly in }\, D. \end{align}
\begin{claim}\label{claim:3}
For $\varepsilon_{2}$ small enough, we have $\|I_{K_{l}, \tau_{l}}^{\prime}(\xi_{l})\|=o(1)$.
\end{claim}
\begin{proof}
For any $\phi \in C_{c}^{\infty}(\overline{\mathbb{R}}^{n+1}_+)$, it follows from \eqref{35}, \eqref{63}, \eqref{58}, and Lemma \ref{lem:3.3} that
\begin{align}
o(1)\|\phi\|_{\sigma}=&I_{K_{l}, \tau_{l}}^{\prime}(U_{l})(\mathscr{T}^{-1}_{l, \lam _{1}^{l}, z_{1}^{l}} \phi)\notag\\=&
\big\langle U_{l},\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} \phi\big\rangle
-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot)H^{\tau_l}(\cdot)|U_{l}|^{p_{l}-1}U_{l} \mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} \phi\notag\\
=&(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\big\langle \mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}, \phi\big\rangle
-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})\notag\\&\times H^{\tau_l}(\cdot/ \lam _{1}^{l}+z_{1}^{l}) |\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}) \phi\Big\}\notag\\=&(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2\sigma-n}\Big\{\big\langle U_{1} , \phi \big\rangle +\big\langle \mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l},\phi\big\rangle-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})\notag\\&\times H^{\tau_l}(\cdot/\lam _{1}^{l}+z_{1}^{l}) |\mathscr{T}_{l, \lam _{1}^{l},z_{1}^{l}} U_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}) \phi\Big\}\notag \\=&I_{K_{l},\tau_{l} }^{\prime}(\xi_{l})(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} \phi)+(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\int_{\mathbb{R}^n\times\{0\}} K_{\infty}^{(1)}(\xi_{1})U_{1}^{2^{*}_{\sigma}-1} \phi\notag\\&+\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}) \phi\notag\\&-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l}) H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l}) |\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}) \phi\Big\}.\label{64}
\end{align}
Then a direct calculation exploiting \eqref{26}, \eqref{57}, \eqref{59}, Claim \ref{claim:2}, the H\"{o}lder inequality, and the fractional Sobolev embedding theorem shows
\begin{align}
\Big| \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})U_{1}^{p_{l}} \phi -\int_{\mathbb{R}^n\times\{0\}} K_{\infty}^{(1)}(\xi_{1})U_{1}^{2^{*}_{\sigma}-1} \phi \Big|=o(1)\|\phi\|_{\sigma}.\label{65}
\end{align}
Finally, by \eqref{64}, \eqref{65}, Lemma \ref{lem:3.3}, and some elementary inequalities, we deduce that
\begin{align*}
&|I_{K_{l}, \tau_{l}}^{\prime}(\xi_{l})(\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \phi)|\\=& o(1)\|\phi\|_{\sigma}+O(1) \int_{\mathbb{R}^n\times\{0\}}(|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}|^{p_{l}-1} U_{1}+|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}|U_{1}^{p_{l}-1})|\phi|\\=& o(1)\|\phi\|_{\sigma},
\end{align*}
where the last inequality follows from \eqref{63'}, Claim \ref{claim:2}, the H\"older inequality, and the fractional Sobolev embedding theorem.
Claim \ref{claim:3} has been established.
\end{proof}
\begin{claim}\label{claim:4}
$I_{K_{l}, \tau_{l}}\left(\xi_{l}\right) \leq c^{(2)}+\varepsilon_{3}+o(1)$.
\end{claim}
\begin{proof} By a change of variable and using Claim \ref{claim:2} and \eqref{63}, some calculations lead to
\begin{align}
I_{K_{l}, \tau_{l}}(U_{l})=&\frac{1}{2} \| U_{l}\|_{\sigma}^{2}-\frac{1}{p_{l}+1} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot)H^{\tau_l}(\cdot)|U_{l}|^{p_{l}+1}\notag
\\=&(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\frac{1}{2} \big\| \mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}\big\|_{\sigma}^{2}-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})\notag\\&\times H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} U_{l}|^{p_{l}+1}\Big\}+o(1)\notag\\=&(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\frac{1}{2} \|U_{1}\|_{\sigma}^{2}+\big\langle U_{1},\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}\big\rangle+\frac{1}{2} \big\|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l} \big\|_{\sigma}^{2}\notag\\
&-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})U_{1}^{p_{l}+1}\notag\\
&-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}|^{p_{l}+1}\notag\\
&-O(1) \int_{\mathbb{R}^n\times\{0\}}(|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}|^{p_{l}} U_{1}+|\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}} \xi_{l}| U_{1}^{p_{l}})\Big\}+o(1)\notag\\
=&I_{K_{l}, \tau_{l}}(\xi_{l})+(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\frac{1}{2} \| U_{1}\|_{\sigma}^{2}-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})\notag\\&\times H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})U_{1}^{p_{l}+1} \Big\}+o(1),\label{66}
\end{align}
where we used \eqref{59} and \eqref{63'} in the last equality.
We derive from \eqref{22}, \eqref{56} and \eqref{trace} that
\begin{align}
&\frac{1}{2} \| U_{1}\|_{\sigma}^{2}-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{1}^{l}+z_{1}^{l})H^{\tau_l}(\cdot / \lam _{1}^{l}+z_{1}^{l})U_{1}^{p_{l}+1}\notag\\
=&I_{K_{\infty}^{(1)}(\xi_{1})}(U_{1})+o(1) \notag\\
\geq& \frac{\sigma}{n} K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/2\sigma }(\mathcal{S}_{n,\sigma})^{n/\sigma}+o(1) \notag\\
\geq& c^{(1)}+o(1).\label{67}
\end{align}
Claim \ref{claim:4} follows from \eqref{34}, \eqref{66}, \eqref{67}, and the fact $(\lam_{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n} \geq 1$.
\end{proof}
From \eqref{36}, \eqref{59} and \eqref{63}, we have
\begin{align}\label{69}
\xi_{l}=U_{l}-\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} U_{1}=\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l}, \lam _{2}^{l} )+w_{l},
\end{align}
where
\begin{align*}
w_{l}=\alpha_{1}^{l} \widetilde{\delta}( z_{1}^{l}, \lam _{1}^{l} )-K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma}(\lam _{1}^{l})^{2 \sigma/(p_{l}-1)-(n-2\sigma) / 2} \widetilde{\delta}( z^{*}/\lam _{1}^{l}+z_{1}^{l}, \lam ^{*} \lam _{1}^{l} )+v_{l}.
\end{align*}
Now using \eqref{60} and Claim \ref{claim:2}, we have, for large $l$, that
\begin{align}
\label{70}\|w_{l}\|_{\sigma} = o_{\varepsilon_{2}}(1).
\end{align}
We can simply repeat the previous arguments on $\xi_{l}$ instead of on $U_{l}$. For the reader's convenience, we carry out some crucial steps.
A direct computation using \eqref{69} shows that
\begin{equation}\label{72}
\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l}=\overline{\alpha}_{2}^{l} \widetilde{\delta}(0,1)+\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} w_{l},
\end{equation}
where
\begin{equation}\label{73}
\overline{\alpha}_{2}^{l}=\alpha_{2}^{l}(\lam _{2}^{l})^{(n-2\sigma) / 2-2 \sigma/(p_{l}-1)}.
\end{equation}
Then we can verify the existence of $U_{2} \in D$ and $\xi_{2} \in O^{(2)}$ such that\begin{gather}
\label{74}\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l} \rightharpoonup U_{2}\quad
\text { weakly in }\, D,
\\\label{75}
\lim _{l \rightarrow \infty}(z_{2}^{l}-z_{l}^{(2)})=\xi_{2},
\end{gather}
up to a subsequence.
Accordingly, by making use of \eqref{22}, \eqref{25} and \eqref{75}, we have\begin{equation}\label{76}
\lim _{l \rightarrow \infty} K_{l}(z_{2}^{l})^{(2\sigma-n)/4\sigma}=K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/4\sigma}.
\end{equation}
For any $\phi \in C_{c}^{\infty}(\overline{\mathbb{R}}^{n+1}_+)$, it follows from Claim \ref{claim:3} and Lemma \ref{lem:3.3} that\begin{align*}
o(1)=&I_{K_{l}, \tau_{l}}^{\prime}(\xi_{l})(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}^{-1} \phi)\\=&(\lam _{2}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\big\langle \mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l} , \phi\big\rangle -\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})\\&\times H^{\tau_l}(\cdot / \lam _{2}^{l}+z_{2}^{l})|\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l}|^{p_l-1}(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l}) \phi\Big\}.
\end{align*}
Taking the limit $l \rightarrow \infty$ and arguing as before, we have\begin{align*}
\big\langle U_{2},\phi\big\rangle -\int_{\mathbb{R}^n\times\{0\}} K_{\infty}^{(2)}(\xi_{2})|U_{2}|^{\frac{4\sigma}{n-2\sigma}} U_{2} \phi=0.
\end{align*}
Namely, $U_2$ satisfies
\begin{equation}\label{77}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla U_2)=0&&\text{ in }\,\mathbb{R}^{n+1}_+,\\&\pa_{\nu}^{\sigma}U_2=K_{\infty}^{(2)}(\xi_{2})|U_{2}(x,0)|^{\frac{4\sigma}{n-2\sigma}} U_{2}(x,0) && \text { on }\, \mathbb{R}^n.
\end{aligned}\right.
\end{equation}
Arguing as before, one can prove that, for $\varepsilon_{2}$ small enough, $U_{2}>0$ and for some $z^{**} \in \mathbb{R}^n$ and $\lam ^{* *}>0$, there holds
\begin{align}\label{78}
U_{2}=K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z^{**},\lam ^{**}).
\end{align}
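Here is a minimal sketch of the reduction to the standard bubble, assuming (as in the first step) the classification of positive solutions in $D$ of the fractional Yamabe equation (see, e.g., \cite{JLX}). Setting $V:=K_{\infty}^{(2)}(\xi_{2})^{(n-2 \sigma) / 4 \sigma} U_{2}$, one checks directly from \eqref{77} that
\begin{align*}
\operatorname{div}(t^{1-2 \sigma} \nabla V)=0 \,\text{ in }\, \mathbb{R}^{n+1}_{+}, \qquad \pa_{\nu}^{\sigma} V=V(x,0)^{\frac{n+2 \sigma}{n-2 \sigma}} \,\text{ on }\, \mathbb{R}^{n},
\end{align*}
so that $V=\widetilde{\delta}(z^{**}, \lam^{**})$ for some $z^{**} \in \mathbb{R}^{n}$ and $\lam^{**}>0$, which is exactly \eqref{78}.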
\begin{claim}\label{claim:5}
For $l$ large enough, we have \begin{align*}
|z^{**}| = o_{\varepsilon_{2}}(1), ~|\lam ^{* *}-1| = o_{\varepsilon_{2}}(1),~ (\lam _{2}^{l})^{\tau_{l}}=1+o_{\varepsilon_{2}}(1). \end{align*}
\end{claim}
\begin{proof} Lemma \ref{lem:3.3} shows that, up to a subsequence, $$\lim_{l \rightarrow \infty}(\lam _{2}^{l})^{\tau_{l}}=B_{\varepsilon_{2}, \varepsilon_{3}},$$
where $B_{\varepsilon_{2}, \varepsilon_{3}}$ is a positive constant independent of $l$ for fixed $\varepsilon_{2}$ and $\varepsilon_{3}$.
We derive from \eqref{22}, \eqref{25}, \eqref{40} and \eqref{75} that\begin{align}\label{79}
\lim _{l \rightarrow \infty} K_{l}(z_{2}^{l})^{(2\sigma-n)/4\sigma}=K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/4\sigma}.
\end{align}
Then it follows from \eqref{73} and \eqref{79} that
\begin{align}\label{81}
\overline{\alpha}_{2}^{l}=K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/4\sigma}(B_{\varepsilon_{2}, \varepsilon_{3}})^{-(n-2\sigma)^{2} / 8\sigma}+o_{\varepsilon_{2}}(1)+o(1).
\end{align}
Consequently, by \eqref{70}, \eqref{72}, \eqref{74}, \eqref{78}, \eqref{81}, and Lemma \ref{lem:3.3}, we have\begin{align*}
\| K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/4\sigma}(B_{\varepsilon_{2}, \varepsilon_{3}})^{-(n-2\sigma)^{2}/ 8\sigma} \widetilde{\delta}(0,1)-K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z^{**}, \lam ^{**} ) \|_{\sigma}=o_{\varepsilon_{2}}(1)+o(1).
\end{align*}
Taking the limit $l\rightarrow \infty$, we get \begin{align*}
|z^{**}|=o_{\varepsilon_{2}}(1), \quad \lam ^{* *}=1+o_{\varepsilon_{2}}(1), \quad B_{\varepsilon_{2}, \varepsilon_{3}}=1+o_{\varepsilon_{2}}(1).
\end{align*}
\end{proof}
We define $\eta_{l} \in D$ by
\begin{equation}\label{82}
\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l}=U_{2}+\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \eta_{l}.
\end{equation}
Clearly,\begin{equation}\label{821}
\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \eta_{l} \rightharpoonup 0 \quad \text { weakly in }\, D.
\end{equation}
\begin{claim}\label{claim:6}
For $\varepsilon_{2}$ small enough, we have $\|I_{K_{l},\tau_{l}}^{\prime}(\eta_{l})\|=o(1)$.
\end{claim}
\begin{proof}
We follow the same method as in the proof of Claim \ref{claim:3} and only record some crucial steps.
For any $\phi \in C_c^{\infty}(\overline{\mathbb{R}}^{n+1}_+)$, by \eqref{77}, \eqref{82}, Claim \ref{claim:3}, and Lemma \ref{lem:3.3}, we have
\begin{align}
o(1)\|\phi\|=&I_{K_{l}, \tau_{l}}^{\prime}(\xi_{l})(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}^{-1} \phi)\notag
\\=&I_{K_{l}, \tau_{l}}^{\prime}(\eta_{l})(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}^{-1} \phi)+(\lam _{2}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\int_{\mathbb{R}^n\times\{0\}} K_{\infty}^{(2)}(\xi_{2})U_{2}^{2^{*}_{\sigma}-1} \phi\notag\\&+\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})H^{\tau_l}(\cdot/ \lam _{2}^{l}+z_{2}^{l})|\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \eta_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \eta_{l}) \phi\notag\\&-\int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})H^{\tau_l}(\cdot/ \lam _{2}^{l}+z_{2}^{l})|\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l}|^{p_{l}-1}(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l})\phi\Big\}.\label{83}
\end{align}
Then a direct calculation exploiting \eqref{26}, \eqref{76}, \eqref{78}, Claim \ref{claim:2}, the H\"{o}lder inequality, and the fractional Sobolev embedding theorem shows
\begin{align}
\Big| \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})H^{\tau_l}(\cdot/ \lam _{2}^{l}+z_{2}^{l})U_{2}^{p_{l}} \phi-\int_{\mathbb{R}^n\times\{0\}} K_{\infty}^{(2)}(\xi_{2})U_{2}^{2^{*}_{\sigma}-1} \phi\Big| =o(1)\|\phi\|_{\sigma}.\label{84}
\end{align}
Finally, by \eqref{83}, \eqref{84}, Lemma \ref{lem:3.3},
and some elementary inequalities, we obtain\begin{align*}
&|I_{K_{l}, \tau_{l}}^{\prime}(\eta_{l})(\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}^{-1} \phi)|\\=& o(1)\|\phi\|_{\sigma}+O(1) \int_{\mathbb{R}^n\times\{0\}}\{|\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \eta_{l}|^{p_{l}-1} U_{2} +|\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \eta_{l}|U_{2} ^{p_{l}-1}\}|\phi|\\=& o(1)\|\phi\|_{\sigma},
\end{align*}
where the last step follows from \eqref{821}, Claim \ref{claim:2}, the H\"{o}lder inequality, and the fractional Sobolev embedding theorem.
Claim \ref{claim:6} has been established.
\end{proof}
\begin{claim}\label{claim:7}
For $\varepsilon_{2}$ small enough, we have $I_{K_{l},\tau_{l}}(\eta_{l}) \leq \varepsilon_{3}+o(1)$.
\end{claim}
\begin{proof}We follow the same method as in the proof of Claim \ref{claim:4} and only record some crucial steps.
In view of Claim \ref{claim:5}, \eqref{78}, \eqref{82} and \eqref{821}, we obtain
\begin{align}
I_{K_{l}, \tau_{l}}(\xi_{l})=&\frac{1}{2} \|\xi_{l}\|_{\sigma}^{2}-\frac{1}{p_{l}+1} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot)H^{\tau_l}(\cdot)|\xi_{l}|^{p_{l}+1}\notag\\
=&(\lam _{2}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\frac{1}{2} \| \mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}} \xi_{l}\|_{\sigma}^{2}-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})\notag\\&\times H^{\tau_l}(\cdot/ \lam _{2}^{l}+z_{2}^{l})|\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}\xi_{l}|^{p_{l}+1}\Big\}+o(1)\notag\\
=&I_{K_{l}, \tau_{l}}(\eta_{l})+(\lam _{2}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}\Big\{\frac{1}{2} \| U_{2} \|_{\sigma}^{2}-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})\notag\\&\times H^{\tau_l}(\cdot/ \lam _{2}^{l}+z_{2}^{l})U_{2} ^{p_{l}+1}\Big\}+o(1).\label{85}
\end{align}
Then we derive from \eqref{22}, \eqref{75} and \eqref{trace} that
\begin{align}
&\frac{1}{2} \| U_{2} \|_{\sigma}^{2}-\frac{1}{2^{*}_{\sigma}} \int_{\mathbb{R}^n\times\{0\}} K_{l}(\cdot/\lam _{2}^{l}+z_{2}^{l})H^{\tau_l}(\cdot/ \lam _{2}^{l}+z_{2}^{l})U_{2} ^{p_{l}+1} \notag\\
=&I_{K_{\infty}^{(2)}(\xi_{2})}(U_{2} )+o(1) \notag\\
\geq &\frac{\sigma}{n} K_{\infty}^{(2)}(\xi_{2})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o(1)\notag\\\geq& c^{(2)}+o(1).\label{86}
\end{align}
Claim \ref{claim:7} follows from \eqref{85}, \eqref{86}, Claim \ref{claim:4}, and the fact $(\lam _{2}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n} \geq 1$.
\end{proof}
\begin{claim}\label{claim:8}
For $\varepsilon_{2}>0$ small enough, we have $\eta_{l} \rightarrow 0$ strongly in $D$.
\end{claim}
\begin{proof}%
It follows from Claim \ref{claim:6} and Claim \ref{claim:7} that
\begin{align}\label{89}
\| \eta_{l}\|_{\sigma}^{2} \leq \frac{n}{\sigma} \varepsilon_{3}+o(1).
\end{align}
Suppose that Claim \ref{claim:8} does not hold, then along a subsequence we have
\begin{align}\label{90}
\|\eta_{l}\|_{\sigma}^{\tau_{l}}=1+o(1).
\end{align}
We derive from \eqref{90}, the H\"{o}lder inequality, and Claim \ref{claim:6} that$$
\|\eta_{l}\|_{\sigma}^{2} \leq C(n,\sigma, A_{1})\Big(\int_{\mathbb{R}^n}|\eta_{l}(x,0)|^{2_{\sigma}^{*}}\,\d x\Big)^{(p_{l}+1)/2_{\sigma}^{*}}+o(1),
$$and then\begin{align}\label{91}
\| \eta_{l}\|_{\sigma}^{2} \leq C(n,\sigma, A_{1}) \int_{\mathbb{R}^n}|\eta_{l}(x,0)|^{2_{\sigma}^{*}}\,\d x+o(1).
\end{align}
It follows from \eqref{91} and \eqref{trace} that\begin{align*}
\mathcal{S}_{n,\sigma} \leq \frac{\big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla \eta_{l}|^{2}\,\d X\big)^{1 / 2}}{\big(\int_{\mathbb{R}^n}|\eta_{l}(x,0)|^{2_{\sigma}^{*}}\,\d x\big)^{1/2_{\sigma}^{*}}}\leq \frac{\big(\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla\eta_{l}|^{2}\,\d X\big)^{1 / 2}C(n,\sigma, A_{1})^{1/2_{\sigma}^{*}}}{\big(\int_{\mathbb{R}^{n+1}_+} t^{1-2\sigma}|\nabla \eta_{l}|^{2}\,\d X+o(1)\big)^{1/2^{*}_{\sigma}}}.
\end{align*}
Thus,\begin{align}\label{92}
\mathcal{S}_{n,\sigma} \leq C(n,\sigma, A_{1})^{1/2^{*}_{\sigma}}\| \eta_{l}\|_{\sigma}^{2\sigma/ n}.
\end{align}
However, \eqref{89} and \eqref{92} cannot hold at the same time if $\varepsilon_{2}>0$ is small enough: together they would force $\mathcal{S}_{n,\sigma} \leq C(n,\sigma, A_{1})^{1/2^{*}_{\sigma}}\big(\tfrac{n}{\sigma}\varepsilon_{3}+o(1)\big)^{\sigma/n}$, which fails once $\varepsilon_{3}$ is suitably small. Claim \ref{claim:8} has been established.
\end{proof}
Rewriting \eqref{63} and \eqref{82}, we have\begin{equation}\label{93}
U_{l}=\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1} U_{1}+\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}^{-1} U_{2} +\eta_{l}.
\end{equation}
\begin{claim}\label{claim:9}
For $\varepsilon_{2}>0$ small enough, we have
$$ (\lam _{1}^{l})^{\tau_{l}}=1+o_{\varepsilon_{3}}(1)+o(1),\quad (\lam _{2}^{l})^{\tau_{l}}=1+o_{\varepsilon_{3}}(1)+o(1).
$$
\end{claim}
\begin{proof}
We deduce from \eqref{66}, \eqref{67}, and Lemma \ref{lem:3.3} that
\begin{align}\label{94}
I_{K_{l}, \tau_l}(U_{l}) \geq I_{K_{l}, \tau_{l}}(\xi_{l})+(\lam _{1}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n} c^{(1)}+o(1).
\end{align}Combining \eqref{85}, \eqref{86}, and Lemma \ref{lem:3.3}, we are led to
\begin{align}\label{95}
I_{K_{l}, \tau_{l}}(\xi_{l}) \geq I_{K_{l}, \tau_{l}}(\eta_{l})+(\lam _{2}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n} c^{(2)}+o(1).
\end{align} Then we use Claim \ref{claim:8} to deduce that\begin{align}\label{96}
I_{K_{l}, \tau_l}(\eta_{l})=o(1).
\end{align}
Finally, we put together \eqref{34}, \eqref{94}--\eqref{96} to obtain$$
\sum_{i=1}^{2}\big\{(\lam _{i}^{l})^{4 \sigma /(p_{l}-1)+2 \sigma-n}-1\big\} c^{(i)} \leq \varepsilon_{3}+o(1).
$$
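To conclude (a sketch, using the exponent identity recorded after Claim \ref{claim:4}): both summands above are nonnegative, since $4 \sigma /(p_{l}-1)+2 \sigma-n=(n-2 \sigma) \tau_{l} /(p_{l}-1)\geq 0$ and $\lam_{i}^{l} \geq 1$ for large $l$; hence for $i=1,2$,
\begin{align*}
(\lam _{i}^{l})^{(n-2 \sigma) \tau_{l} /(p_{l}-1)} \leq 1+\frac{\varepsilon_{3}+o(1)}{c^{(i)}},
\end{align*}
and raising both sides to the bounded power $(p_{l}-1) /(n-2 \sigma)$ gives $(\lam _{i}^{l})^{\tau_{l}}=1+o_{\varepsilon_{3}}(1)+o(1)$; the reverse bound is immediate from $\lam _{i}^{l}\geq 1$.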
This completes the proof of Claim \ref{claim:9}.\end{proof}
\begin{claim}\label{claim:10}
Let $\delta_{5}=\delta_{1} /(2 A_{3})$. Then if $\varepsilon_{2}>0$ is chosen to be small enough, we have, for large $l$, that \begin{align*}
\operatorname{dist}(z_{1}^{l}, \partial O_{l}^{(1)})\geq \delta_{5}, \quad
\operatorname{dist}(z_{2}^{l}, \partial O_{l}^{(2)})\geq \delta_{5}.
\end{align*}
\end{claim}
\begin{proof}
Suppose the contrary; then, along a sequence $l\rightarrow \infty$, we may assume
$\operatorname{dist}(z_{1}^{l}, \partial O_{l}^{(1)})<\delta_{5}$. In the following, we shall estimate the value of $I_{K_{l}, \tau_{l}}(U_{l})$, which will allow us to reach a contradiction.
To compute the value of $I_{K_{l}, \tau_{l}}(U_{l})$, we first observe from \eqref{Sn}, \eqref{36}--\eqref{39}, Claim \ref{claim:1}, Claim \ref{claim:2} and Claim \ref{claim:5} that
\begin{align*}
I_{K_{l}, \tau_{l}}(U_{l}) =&I_{K_{l}, \tau_{l}}(\alpha_{1}^{l} \widetilde{\delta}( z_{1}^{l}, \lam _{1}^{l} ))+I_{K_{l}, \tau_{l}}(\alpha_{2}^{l} \widetilde{\delta}( z_{2}^{l}, \lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)\\=&I_{K_{l}, \tau_{l}}(K_{l}(z_{1}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z_{1}^{l}, \lam _{1}^{l} ))\\&+I_{K_{l}, \tau_{l}}(K_{l}(z_{2}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z_{2}^{l}, \lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)\\=&I_{K_{l}}(K_{l}(z_{1}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z_{1}^{l}, \lam _{1}^{l} ))\\&+I_{K_{l}}(K_{l}(z_{2}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z_{2}^{l}, \lam _{2}^{l} ))+o_{\varepsilon_{2}}(1)+o(1)
\\=&\sum_{i=1}^{2} K_{l}(z_{i}^{l})^{(2\sigma-n)/2\sigma} \frac{\sigma}{n}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o_{\varepsilon_{2}}(1)+o(1).%
\end{align*} An application of \eqref{24} and \eqref{25} shows
\begin{align}\label{99}
K_{l}(z_{1}^{l}) \leq K_{l}(z_{l}^{(1)})-\delta_{1}+A_{3} \delta_{5}=K_{l}(z_{l}^{(1)})-\delta_{1}/2.
\end{align}
Therefore,
\begin{align}
I_{K_{l}, \tau_{l}}(U_{l})
\geq&(K_{l}(z_{l}^{(1)})-\delta_{1}/2)^{(2\sigma-n)/2\sigma} \frac{\sigma}{n}(\mathcal{S}_{n,\sigma})^{n/\sigma}\notag\\&+K_{l}(z_{l}^{(2)})^{(2\sigma-n)/2\sigma} \frac{\sigma}{n}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o_{\varepsilon_{2}}(1)+o(1)\notag\\=&(a^{(1)}-\delta_{1}/2)^{(2\sigma-n)/2\sigma} \frac{\sigma}{n}(\mathcal{S}_{n,\sigma})^{n/\sigma}\notag\\&+(a^{(2)})^{(2\sigma-n)/2\sigma}\frac{\sigma}{n}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o_{\varepsilon_{2}}(1)+o(1)\notag\\
=&c^{(1)}\Big(\frac{a^{(1)}}{a^{(1)}-\delta_{1} / 2}\Big)^{(n-2\sigma)/2\sigma}+c^{(2)}+o_{\varepsilon_{2}}(1)+o(1)\notag
\\>&c^{(1)}+c^{(2)}+o_{\varepsilon_{2}}(1)+o(1)\label{100}
\end{align}
by utilizing \eqref{20}, \eqref{21}, \eqref{26}, \eqref{37} and \eqref{99}. However, if $\varepsilon_{2}>0$ is small enough and $l$ large enough, \eqref{100} contradicts \eqref{34}. This completes the proof of Claim \ref{claim:10}.
\end{proof}
We are now in a position to prove Proposition \ref{prop:3.2}.
\begin{proof}[Proof of Proposition \ref{prop:3.2}]
Applying \eqref{22}, \eqref{25}, \eqref{56}, \eqref{59}, and Claim \ref{claim:9}, we deduce that
\begin{align*}
\mathscr{T}_{l, \lam _{1}^{l}, z_{1}^{l}}^{-1}U_{1} &=(\lam _{1}^{l})^{2\sigma /(p_{l}-1)} U_{1}(\lam _{1}^{l}(x-z_{1}^{l}),\lam _{1}^{l}t) \\
&=(\lam _{1}^{l})^{2 \sigma/(p_{l}-1)-(n-2\sigma)/2} K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z_{1}^{l}+z^{*}/\lam _{1}^{l},\lam ^{*} \lam _{1}^{l} )\\&=K_{\infty}^{(1)}(\xi_{1})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z_{1}^{l}+z^{*}/\lam _{1}^{l},\lam ^{*} \lam _{1}^{l} )+o_{\varepsilon_{3}}(1)\\&=K_{l}(z_{1}^{l}+z^{*}/\lam _{1}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z_{1}^{l}+z^{*}/\lam _{1}^{l},\lam ^{*} \lam _{1}^{l} )+o_{\varepsilon_{3}}(1)+o(1).
\end{align*}
Similarly, we have
$$
\mathscr{T}_{l, \lam _{2}^{l}, z_{2}^{l}}^{-1} U_{2}=K_{l}(z_{2}^{l}+z^{**}/\lam _{2}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z_{2}^{l}+z^{**}/\lam _{2}^{l}, \lam ^{**} \lam _{2}^{l} )+o_{\varepsilon_{3}}(1)+o(1).
$$
Therefore, we can rewrite \eqref{93} as (see Claim \ref{claim:8} and the above)
\begin{align}
U_{l}= &K_{l}(z_{1}^{l}+z^{*}/\lam _{1}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}(z_{1}^{l}+z^{*}/\lam _{1}^{l},\lam ^{*}\lam _{1}^{l} )\notag\\& +K_{l}(z_{2}^{l}+z^{**}/\lam _{2}^{l})^{(2\sigma-n)/4\sigma} \widetilde{\delta}( z_{2}^{l}+z^{**}/\lam _{2}^{l},\lam ^{**}\lam _{2}^{l} )+o_{\varepsilon_{3}}(1)+o(1) .\label{101}
\end{align}
We now fix $\varepsilon_{2}$ small enough that all the previous
arguments hold, and then choose $\varepsilon_{3}>0$ small (depending on $\varepsilon_{2}>0$) so that the following holds (using Claim \ref{claim:9}):
\begin{align}\label{102}
\begin{aligned}
|(\lam ^{*} \lam _{1}^{l})^{\tau_{l}}-1| &\leq o_{\varepsilon_{3}}(1)+o(1)<\varepsilon_{2}/2 ,\\ |(\lam ^{* *} \lam _{2}^{l})^{\tau_{l}}-1| &\leq o_{\varepsilon_{3}}(1)+o(1)<\varepsilon_{2}/2.
\end{aligned}
\end{align}
From \eqref{101}, \eqref{102}, Claim \ref{claim:1} and Claim \ref{claim:9}, we see that for $\varepsilon_{3}>0$ small, we
have, for large $l$, \begin{align*}
U_{l} \in \widetilde{V}_{l}(2, \varepsilon_{2} / 2),
\end{align*}
which contradicts \eqref{33}. This concludes the proof of Proposition \ref{prop:3.2}.
\end{proof}
\section{Completion of the proof of Theorem \ref{thm:3.1}}\label{sec:5}
In this section we will complete the proof of Theorem \ref{thm:3.1}. Precisely, assuming the negation of Theorem \ref{thm:3.1} and combining it with Proposition \ref{prop:3.2} established in Section \ref{sec:4}, we will reach a contradiction after a lengthy indirect argument. The method we shall use is similar to that in \cite{Li93}, see also \cite{CR1,CR2,CES,Se}, but we have to set up a framework to fit the fractional situation. The basic idea is as follows: given finitely many solutions (at low energy), one translates their supports far apart and patches the pieces together to create many multi-bump solutions. The authors of \cite{CR1,CR2,CES,Se} introduced the original and powerful ideas which permit the construction of such solutions via variational methods. In particular, they are able to find many homoclinic-type solutions to periodic Hamiltonian systems (see \cite{Se,CR1}) and to certain elliptic equations of nonlinear Schr\"odinger type on $\mathbb{R}^n$ with periodic coefficients (see \cite{CR2}). Li gave a slight modification of the minimax procedure in \cite{CR1,CR2} and applied it to certain problems where periodicity is not present, for example, the problem of prescribing scalar curvature on $\Sn$ (see \cite{Li93b,Li93,Li95}).
Inspired by the above, we modify the techniques of \cite{Li93,CR1,CR2,CES,Se} to fit variational problems with nonlocal terms. To reduce overlaps, we will omit the proofs of several intermediate results which closely
follow standard arguments, giving appropriate references. Let us start by defining a certain family of sets and minimax values and by fixing some notation.
For any $z\in\mathbb{R}^n$, we define the space $\mathcal{H}_{0}^{1}(t^{1-2\sigma},\Sigma_{R}^{+}(z))$ as the closure in $ H^{1}(t^{1-2\sigma}, \B_{R}^{+}(z))$ of $C_{c}^{\infty}(\Sigma_{R}^{+}(z))$ under the norm \eqref{Wballnorm}. It follows from the Hardy--Sobolev inequality in \cite[Lemma 2.4]{FV} that $\mathcal{H}_{0}^{1}(t^{1-2\sigma},\Sigma_{R}^{+}(z))$ can be
endowed with the equivalent norm\begin{align*}
\|U\|_{\mathcal{H}_{0}^{1}(t^{1-2\sigma},\Sigma_{R}^{+}(z))}:=\Big(\int_{\B_{R}^{+}(z)} t^{1-2\sigma}|\nabla U|^{2} \,\d X\Big)^{1/2}.
\end{align*}
In this section, we write $\tau=\tau_{l}$, $p=p_l$, and $\mathcal{H}_{0}^{1}(t^{1-2\sigma},\Sigma_{R}^{+}(z))=\mathcal{H}_{0}^{1}(\Sigma_{R}^{+}(z))$ to simplify the notation.
Now, we define
\begin{align*}
&\gamma_{l, \tau}^{(1)}=\big\{g^{(1)} \in C([0,1],\mathcal{H}_{0}^{1}(\Sigma_{R_l}^{+}(z_{l}^{(1)}))) : g^{(1)}(0)=0,\, I_{K_{l}, \tau}(g^{(1)}(1))<0\big\},\\&\gamma_{l, \tau}^{(2)}=\big\{g^{(2)} \in C([0,1], \mathcal{H}_0^{1}(\Sigma_{R_l}^{+}(z_{l}^{(2)}))) : g^{(2)}(0)=0,\, I_{K_{l}, \tau}(g^{(2)}(1))<0\big\},\\&c_{l, \tau}^{(1)}=\inf _{g^{(1)}\in \gamma_{l, \tau}^{(1)}} \max _{0 \leq \theta_{1} \leq 1} I_{K_{l },\tau}(g^{(1)}(\theta_{1})),\\&c_{l, \tau}^{(2)}=\inf _{g^{(2)}\in \gamma_{l, \tau}^{(2)}} \max _{0 \leq \theta_{2} \leq 1} I_{K_{l},\tau}(g^{(2)}(\theta_{2})).
\end{align*}
We have abused the notation a little by writing $I_{K_{l},\tau}$ as
$I_{K_{l},\tau}:\mathcal{H}_0^{1}(\Sigma_{R_l}^{+}(z_{l}^{(1)})) \rightarrow \mathbb{R}$ and also $I_{K_{l},\tau}:\mathcal{H}_0^{1}(\Sigma_{R_l}^{+}(z_{l}^{(2)})) \rightarrow \mathbb{R}$.
\begin{prop}\label{prop:4.1}
Let $\{K_{l}\}$ be a sequence of functions
satisfying \eqref{18}, \eqref{20} and \eqref{21}. Then there holds
\begin{gather}
\label{103}c_{l, \tau}^{(1)}=c^{(1)}+o(1),\\\label{104}c_{l, \tau}^{(2)}=c^{(2)}+o(1),
\end{gather}
where $o(1)\rightarrow 0$ as $l\rightarrow \infty$.
\end{prop}
\begin{proof}
We will only prove \eqref{103}, because \eqref{104} can be justified in a similar manner.
From the definition of $c_{l, \tau}^{(1)}$, we deduce that
\begin{align}\label{105}
C^{-1}( n,\sigma,A_{1})\leq c_{l, \tau}^{(1)} \leq C( n,\sigma,A_{1})
\end{align}
for some constant $C( n,\sigma,A_{1})>0$. Moreover, for any $U \in \mathcal{H}^{1}_{0}(\Sigma_{R_l}^{+}(z_{l}^{(1)})) \backslash\{0\}$, one has
\begin{align*}
c_{l, \tau}^{(1)} &\leq \max _{0 \leq s<\infty} I_{K_{l}, \tau}(s U)\\&=\Big(\frac{1}{2}-\frac{1}{p+1}\Big) \frac{\big(\int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U|^{2}\,\d X\big)^{(p+1) /(p-1)}}{\big(\int_{B_{R_{l}}(z_{l}^{(1)})} K_{l}(x)H^{\tau}(x)|U(x,0)|^{p+1}\,\d x\big)^{2 /(p-1)}}.
\end{align*}
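Indeed (a one-line computation, assuming $\int_{B_{R_{l}}(z_{l}^{(1)})} K_{l}H^{\tau}|U(x,0)|^{p+1}\,\d x>0$; otherwise the maximum is $+\infty$ and the bound is trivial), one has
\begin{align*}
\frac{\d}{\d s} I_{K_{l}, \tau}(s U)\Big|_{s=s_{*}}=0 \quad\text{ with }\quad s_{*}=\Bigg(\frac{\int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U|^{2}\,\d X}{\int_{B_{R_{l}}(z_{l}^{(1)})} K_{l}(x)H^{\tau}(x)|U(x,0)|^{p+1}\,\d x}\Bigg)^{1/(p-1)},
\end{align*}
and substituting $s=s_{*}$ into $I_{K_{l}, \tau}(sU)$ gives the quotient displayed above.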
Let $U=\eta(x+z_{l}^{(1)},t ) \widetilde{\delta}(z_{l}^{(1)}, \lam _{l})$, where $\eta$ is a nonnegative smooth cut-off function supported in $\B_{1}$ and equal to 1 in $\B_{1/2}$, and $\{\lam _{l}\}$ is a sequence satisfying
\begin{align}
\label{106}
\lim _{l \rightarrow \infty} \lam _{l}=\infty,\quad (\lam _{l})^{\tau}=1+o(1).
\end{align}Then we obtain\begin{align*}
c_{l, \tau}^{(1)} \leq& \frac{\sigma}{n}(a^{(1)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o(1)\\=&c^{(1)}+o(1).
\end{align*}
The reverse inequality can be proved as follows.
For any fixed $l$ and $\tau$, it is well known that there exists $\{U_{k}\} \subset \mathcal{H}_{0}^{1}(\Sigma_{R_l}^{+}(z_{l}^{(1)}))$ such that
\begin{gather*}
\lim _{k \rightarrow \infty} I_{K_{l}, \tau}(U_{k})=c_{l, \tau}^{(1)},\\
I_{K_{l}, \tau}^{\prime}(U_{k}) \rightarrow 0 \quad \text{ in }\, H^{-\sigma}(\Sigma_{R_l}^{+}(z_{l}^{(1)}))\quad \text{ as }\, k \rightarrow \infty,
\end{gather*}
where $H^{-\sigma}(\Sigma_{R_l}^{+}(z_{l}^{(1)}))$ denotes the dual space of $\mathcal{H}_{0}^{1}(\Sigma_{R_l}^{+}(z_{l}^{(1)}))$. Namely, we have
\begin{gather}
\frac{1}{2} \int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X-\frac{1}{p+1} \int_{B_{R_{l}}(z_{l}^{(1)})} K_{l}(x)H^{\tau}(x)|U_{k}(x,0)|^{p+1}\,\d x=c_{l, \tau}^{(1)}+o_{k}(1),\label{107}\\ \int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X=\int_{B_{R_{l}}(z_{l}^{(1)})} K_{l}(x)H^{\tau}(x)|U_{k}(x,0)|^{p+1}\,\d x+o_{k}(1),\label{107''}
\end{gather}
where $o_{k}(1)\rightarrow 0$ as $k\rightarrow \infty$.
It follows that\begin{align}\label{108}
c_{l, \tau}^{(1)}=\frac{\sigma}{n} \int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X+o_{k}(1)+o(1).
\end{align}
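In the last step we also used the elementary computation, with $p=\frac{n+2\sigma}{n-2\sigma}-\tau$ and $\tau=\tau_{l}\rightarrow 0$,
\begin{align*}
\frac{1}{2}-\frac{1}{p+1}=\frac{p-1}{2(p+1)}=\frac{\sigma}{n}+o(1),
\end{align*}
which converts \eqref{107} and \eqref{107''} into \eqref{108}.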
By \eqref{trace}, \eqref{26}, \eqref{105}, \eqref{107} and \eqref{107''}, we have
\begin{align*}
\mathcal{S}_{n,\sigma} & \leq \frac{\big(\int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X\big)^{1 / 2}}{\big(\int_{B_{R_{l}}(z_{l}^{(1)})}|U_{k}(x,0)|^{2_{\sigma}^{*}}\,\d x\big)^{1/2_{\sigma}^{*}}} \\& \leq \frac{\big(\int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X\big)^{1 / 2}}{(1+o(1))\big(\int_{B_{R_{l}}(z_{l}^{(1)})}|U_{k}(x,0)|^{p+1}\,\d x\big)^{1 /(p+1)}} \\
& \leq \frac{\big(\int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X\big)^{1 / 2} K_{l}(z_{l}^{(1)})^{1 /(p+1)}}{(1+o(1))\big(\int_{B_{R_{l}}(z_{l}^{(1)})} K_{l}(x)H^{\tau}(x)|U_{k}(x,0)|^{p+1}\,\d x\big)^{1 /(p+1)}} \\
&=\Big(\int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X\Big)^{\sigma / n} K_l(z_{l}^{(1)})^{1/2_{\sigma}^{*}}+o_{k}(1)+o(1),
\end{align*}
namely,\begin{align}\label{109}
\varliminf_{k \rightarrow \infty} \int_{\B^+_{R_{l}}(z_{l}^{(1)})}t^{1-2\sigma}|\nabla U_{k}|^{2}\,\d X \geq K_{l}(z_{l}^{(1)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o(1).
\end{align}
We can deduce from \eqref{21}, \eqref{108} and \eqref{109} that $$c_{l, \tau}^{(1)} \geq\frac{\sigma}{n}(a^{(1)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o(1)=c^{(1)}+o(1).$$
This completes the proof of \eqref{103}.
\end{proof}
We define
\begin{gather}
\Gamma_{l}=\big\{G=\overline{g}^{(1)}+\overline{g}^{(2)} : \overline{g}^{(1)}, \overline{g}^{(2)} \text { satisfy }\eqref{110}-\eqref{114}\big\},\\\overline{g}^{(1)}, ~\overline{g}^{(2)} \in C([0,1]^{2}, D),\label{110}\\\overline{g}^{(1)}(0, \theta_{2})=\overline{g}^{(2)}(\theta_{1}, 0)=0, \quad 0 \leq \theta_{1}, \theta_{2} \leq 1,\label{111}\\I_{K_{l}, \tau}(\overline{g}^{(1)}(1, \theta_{2}))<0,~ I_{K_{l}, \tau}(\overline{g}^{(2)}(\theta_{1}, 1))<0, \quad 0 \leq \theta_{1}, \theta_{2} \leq 1,\label{112}\\\operatorname{supp} \overline{g}^{(1)} \subset \B^+_{R_{l}}(z_{l}^{(1)}), \quad \theta=(\theta_1,\theta_2) \in[0,1]^{2},\label{113}\\\operatorname{supp} \overline{g}^{(2)} \subset \B^+_{R_{l}}(z_{l}^{(2)}), \quad\theta=(\theta_1,\theta_2) \in[0,1]^{2},\label{114}\\b_{l, \tau}=\inf _{G \in \Gamma_{l}} \max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(G(\theta)).
\end{gather}
\begin{rem}
Observe that if $G=g^{(1)}+g^{(2)}$ with $g^{(1)}\in \gamma_{l, \tau}^{(1)}$, $g^{(2)}\in \gamma_{l, \tau}^{(2)}$ and $\operatorname{supp}g^{(1)}\cap \operatorname{supp}g^{(2)}=\emptyset$,
then $I_{K_{l}, \tau}(G)=I_{K_{l}, \tau}(g^{(1)})+I_{K_{l}, \tau}
(g^{(2)})$.
\end{rem}
\begin{prop}\label{prop:4.2}
Suppose that $\{K_{l}\}$ is a sequence of functions satisfying \eqref{18}, \eqref{20} and \eqref{21}. Then there holds $b_{l, \tau}=c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}+o(1)$.
\end{prop}
\begin{proof}The first inequality $b_{l, \tau} \geq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}$ follows from the definitions of $c_{l, \tau}^{(1)}$ and $c_{l, \tau}^{(2)}$ together with an additional compactness argument on $[0,1]^2$; we omit the details here and refer to \cite[Proposition 3.4]{CR1}.
On the other hand, for $0 \leq \theta_{1}, \theta_{2} \leq 1$, let
\begin{align*}
g_{l}^{(1)}(\theta_{1})&=\theta_{1} C_{1} K_{l}(z_{l}^{(1)})^{(2\sigma-n)/4\sigma} \eta(x+z_{l}^{(1)},t) \widetilde{\delta}(z_{l}^{(1)},\lam _l),\\ g_{l}^{(2)}(\theta_{2})&=\theta_{2} C_{1} K_{l}(z_{l}^{(2)})^{(2\sigma-n)/4\sigma} \eta(x+z_{l}^{(2)},t) \widetilde{\delta}(z_{l}^{(2)},\lam _l),
\end{align*}
where $\{\lam _{l}\}$ is defined in \eqref{106} and $C_{1}=C_{1}(n,\sigma,A_{1}, A_{2})>1$ is a constant such that \begin{align*}
I_{K_l,\tau}(g_{l}^{(1)}(1))<0\quad \text{ and }\quad I_{K_l,\tau}(g_{l}^{(2)}(1))<0
\end{align*}
for large $l$. We fix the value of $C_{1}$ from now on.
For $\theta=(\theta_{1}, \theta_{2}) \in[0,1]^{2}$, let
$G_{l}(\theta)=g_{l}^{(1)}(\theta_{1})+g_{l}^{(2)}(\theta_{2})$.
Observing that $\operatorname{supp}g_{l}^{(1)}(\theta_{1})\cap \operatorname{supp} g_{l}^{(2)}(\theta_{2})=\emptyset$, a direct calculation shows that
\begin{align*}
\max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(G_{l}(\theta))
= &\max _{\theta_{1} \in[0,1]} I_{K_{l}, \tau}(g_{l}^{(1)}(\theta_{1}))+\max _{\theta_{2} \in[0,1]} I_{K_{l}, \tau}(g_{l}^{(2)}(\theta_{2}))+o(1)\\\leq &\max _{0 \leq s<\infty} I_{K_{l}, \tau}(s \eta(x+z_{l}^{(1)},t) \widetilde{\delta}(z_{l}^{(1)}, \lam _{l})) \\& +\max _{0 \leq s<\infty} I_{K_{l}, \tau}(s \eta(x+z_{l}^{(2)},t) \widetilde{\delta}(z_{l}^{(2)}, \lam _l))+o(1)\\=&\frac{\sigma}{n}(a^{(1)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}\\& +\frac{\sigma}{n}(a^{(2)})^{(2\sigma-n)/2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}+o(1)\\=&c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}+o(1),
\end{align*}
where the last equality is due to Proposition \ref{prop:4.1}.
Therefore, $
b_{l, \tau} \leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}+o(1)
$. This ends the proof. \end{proof}
In the following, assuming the negation of
Theorem \ref{thm:3.1}, we will construct $H_{l} \in \Gamma_{l}$ for large $l$ such that $$
\max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(H_{l}(\theta))<b_{l, \tau},
$$ which contradicts the definition of $b_{l, \tau}$. A lengthy construction is required to establish this fact; we first give a brief sketch and then the details.
\emph{Step 1:} Choosing a suitably small number $\varepsilon_{4}>0$, we construct
$G_{l} \in \Gamma_{l}$ which satisfies $$
\max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(G_{l}(\theta)) \leq b_{l, \tau}+\varepsilon_{4}$$ together with some further properties.
\emph{Step 2:} By the negative gradient flow of $I_{K_{l}, \tau}$, $G_{l}$ is deformed into $U_{l}$ such that $$\max _{\theta\in[0,1]^{2}} I_{K_{l}, \tau}(U_{l}(\theta)) \leq b_{l, \tau}-\varepsilon_{4}.$$ If $U_{l}$ were in $\Gamma_{l}$, we would reach a contradiction with the definition of $b_{l, \tau}$. However, $U_{l}$ is not necessarily in $\Gamma_{l}$ any more, since the deformation may not preserve properties \eqref{113}--\eqref{114}.
\emph{Step 3:} Applying Propositions \ref{prop:1.4}, \ref{prop:2.1} and \ref{prop:3.2}, we modify $U_{l}$ to
obtain $H_{l} \in \Gamma_{l}$ such that $$
\max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(H_{l}(\theta)) \leq b_{l, \tau}-\varepsilon_{4}/2.
$$
All three steps are carried out for large $l$ only. We now give the details of each step.
\medskip
\emph{Step 1: Construction of $G_l$.} Let $G_{l}$ be as defined in the proof of Proposition \ref{prop:4.2}. We establish some properties of it which will be needed.
\begin{lem}\label{lem:4.1}
For any $\varepsilon\in (0,1)$, if $I_{K_{l}, \tau}(g_{l}^{(i)}(\theta_{i})) \geq c_{l, \tau}^{(i)}-\varepsilon$ for $i = 1,2$, then there exist constants $\Lambda_{1}=\Lambda_{1}(n,\sigma,\varepsilon ,A_{1}, A_{3})>1$ and $C_{0}(n,\sigma)>0$ such that for any $l \geq \Lambda_{1}$ and $0 \leq \theta_{1}, \theta_{2} \leq 1$, we have $|C_{1} \theta_{i}-1| \leq C_{0}(n,\sigma) \sqrt{\varepsilon}$, $i=1,2$.
\end{lem}
\begin{proof}
We only consider the case $i=1$, since the other case can be handled in the same way.
Let $s_1=C_{1} \theta_{1}$; a direct calculation shows that
\begin{align}
& I_{K_{l}, \tau}(g_{l}^{(1)}(\theta_{1}))\notag\\=&\frac{1}{2} s_1^{2} K_{l}(z_{l}^{(1)})^{(2\sigma-n)/2\sigma} \|\eta(x+z_{l}^{(1)},t) \widetilde{\delta}(z_{l}^{(1)}, \lam _{l})\|_{\sigma}^{2} \notag\\&-\frac{1}{p+1} s_1^{p+1} K_{l}(z_{l}^{(1)})^{(p+1)(2\sigma-n)/4\sigma}\notag\\&\times \int_{\mathbb{R}^n} K_{l}(x)H^{\tau}(x)|\eta(x+z_{l}^{(1)},0) \delta(z_{l}^{(1)}, \lam _{l})|^{p+1}\, \d x\notag\\=&\Big(\frac{1}{2}+o(1)\Big) s_1^{2} K_{l}(z_{l}^{(1)})^{(2\sigma-n)/2\sigma} \| \widetilde{\delta}(0,1)\|_{\sigma}^{2}\notag \\& -\Big(\frac{1}{p+1}+o(1)\Big) s_1^{p+1} K_{l}(z_{l}^{(1)})^{(2\sigma-n)/2\sigma} \| \delta(0,1)\|^{2}\notag
\\=&\Big[\Big(\frac{n}{2\sigma}+o(1)\Big) s_1^{2}-\Big(\frac{n-2\sigma}{2\sigma}+o(1)\Big) s_1^{p+1}\Big]c_{l,\tau}^{(1)},\label{lem:6.1}
\end{align}
where Proposition \ref{prop:4.1} is used in the last step.
Hence, using \eqref{lem:6.1} and the hypothesis $I_{K_{l}, \tau}(g_{l}^{(1)}(\theta_{1})) \geq c_{l, \tau}^{(1)}-\varepsilon$, we complete the proof.
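In more detail (a sketch): set $f_{l}(s)=\big(\tfrac{n}{2\sigma}+o(1)\big)s^{2}-\big(\tfrac{n-2\sigma}{2\sigma}+o(1)\big)s^{p+1}$, so that \eqref{lem:6.1} reads $I_{K_{l}, \tau}(g_{l}^{(1)}(\theta_{1}))=f_{l}(s_{1})\,c_{l,\tau}^{(1)}$. As $\tau\rightarrow 0$ one computes
\begin{align*}
f_{l}(1)=1+o(1),\qquad f_{l}'(1)=o(1),\qquad f_{l}''(1)=-\frac{4n}{n-2\sigma}+o(1)<0,
\end{align*}
whence $1-f_{l}(s)\geq c(n,\sigma)(s-1)^{2}-o(1)$ uniformly for $s\in[0,C_{1}]$. Combining this with the hypothesis and the lower bound \eqref{105} on $c_{l,\tau}^{(1)}$ yields $|s_{1}-1|\leq C_{0}(n,\sigma)\sqrt{\varepsilon}$ for $l$ large.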
\end{proof}
\begin{lem}\label{lem:4.2}
For any $\varepsilon\in (0,1)$, there exists a constant $\Lambda_{2}=\Lambda_{2}(n,\sigma,\varepsilon, A_{1}, A_{3})>\Lambda_{1}$ such
that for any $l \geq \Lambda} \newcommand{\B}{\mathcal{B} _{2}$ and $0 \leq \theta_{1}, \theta_{2} \leq 1$, we have
$I_{K_{l}, \tau}(g_{l}^{(i)}(\theta_{i})) \leq c_{l, \tau}^{(i)}+\varepsilon/10,\,i=1,2.$
\end{lem}
\begin{proof}
The proof is similar to Proposition \ref{prop:4.2}.
\end{proof}
\begin{lem}\label{lem:4.3}
For any $\varepsilon\in (0,1)$, there exists a constant $\Lambda_{3}=\Lambda_{3}(n,\sigma,\varepsilon, A_{1}, A_{3})>\Lambda_{2}$ such
that$$
I_{K_{l}, \tau}(G_{l}(\theta))\big|_{\theta \in \partial[0,1]^{2}} \leq \max
\{c^{(1)}+\varepsilon, c^{(2)}+\varepsilon\}\quad \text{ for all }\, l\geq \Lambda_{3}.
$$
\end{lem}
\begin{proof} Lemma \ref{lem:4.3} follows immediately from Lemma \ref{lem:4.2}.\end{proof}
\begin{lem}\label{lem:4.4}
For any $\varepsilon\in (0,1/2)$, if $
I_{K_{l}, \tau}(G_{l}(\theta)) \geq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon $, then there exists a constant $C_{0}=C_{0}(n,\sigma)>1$ such that for any $l \geq \Lambda_{3}$ and $\theta \in[0,1]^{2}$, we have $ |C_{1} \theta_{i}-1| \leq C_{0} \sqrt{\varepsilon}$, $i=1,2$.
\end{lem}
\begin{proof}
Since $g_{l}^{(1)}(\theta_{1})$ and $g_{l}^{(2)}(\theta_{2})$ have disjoint supports, after a direct calculation, we see that $$I_{K_{l}, \tau}(G_{l}(\theta))=I_{K_{l}, \tau}(g_{l}^{(1)}(\theta_{1}))+I_{K_{l}, \tau}(g_{l}^{(2)}(\theta_{2}))+o(1).$$
Then it follows from Lemma \ref{lem:4.2} and the
hypothesis $I_{K_{l}, \tau}(G_{l}(\theta)) \geq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon$ that \begin{align}\label{lem:4.4.1}
I_{K_{l}, \tau}(g_{l}^{(1)}(\theta_{1})) \geq c_{l, \tau}^{(1)}-2 \varepsilon\quad\text{ and }\quad I_{K_{l}, \tau}(g_{l}^{(2)}(\theta_{2})) \geq c_{l, \tau}^{(2)}-2 \varepsilon.
\end{align}
Lemma \ref{lem:4.4} follows immediately from \eqref{lem:4.4.1} and Lemma \ref{lem:4.1}.
\end{proof}
\medskip
\emph{Step 2: The deformation of $G_l$.} Let
\begin{gather*}
M_{l} =\sup \big\{\|I_{K_{l}, \tau}^{\prime}(U)\| : U \in V_{l}(2, \varepsilon_{1} )\big\},\\\beta_{l} =\operatorname{dist}\big(\partial \widetilde{V}_{l}(2, \varepsilon_{2}), \partial \widetilde{V}_{l}(2,\varepsilon_{2}/2)\big).
\end{gather*}
One can see from the definition of $M_{l}$ that there exists a constant $C_{2}(n,\sigma,\varepsilon_{2},A_{1})>1$ such that\begin{equation}\label{119}
M_{l} \leq C_{2}(n,\sigma,\varepsilon_{2},A_{1}).
\end{equation}
It is also clear from the definition of $\widetilde{V}_{l}(2, \varepsilon_{2})$ that\begin{equation}\label{120}
\beta_{l} \geq \frac{\varepsilon_{2}}{4}.
\end{equation}
By Lemma \ref{lem:4.4}, we choose $\varepsilon_{4}$ to satisfy, for $l$ large, that
\begin{equation}\label{122}
\varepsilon_{4}<\min\Big\{\varepsilon_{3},\frac{1}{2 A_{4}} ,\frac{\varepsilon_{2} \delta_{4}(\varepsilon_{2}, \varepsilon_{3})^{2}}{8 C_{2}(n,\sigma,\varepsilon_{2},A_{1})}\Big\},\end{equation}
\begin{equation}\label{123}\begin{split}
I_{K_{l}, \tau}(G_{l}(\theta)) \geq& c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}\,\text{ implies that}\\& G_{l}(\theta) \in \widetilde{V}_{l}(2, \varepsilon_{2} / 2),\, z_{1}(G_{l}(\theta)) \in O_{l}^{(1)},\, z_{2}(G_{l}(\theta)) \in O_{l}^{(2)}.
\end{split}
\end{equation}
At this point $G_{l}(\theta)$ has been completely defined.
We know from Lemma \ref{lem:4.2} that for $l$ large enough,
$$
\max _{\theta \in[0,1]^{2}} I_{K_{l},\tau}(G_{l}(\theta)) \leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}+\varepsilon_{4}.
$$
For any $U_{0} \in \widetilde{V}_{l}(2, \varepsilon_{2} / 2)$, we consider the negative gradient flow of $I_{K_{l}, \tau}$,
\begin{equation}\label{124}
\left\{\begin{aligned}
&\frac{\d }{\d s} \xi(s, U_{0}) =-I_{K_{l}, \tau}^{\prime}(\xi(s, U_{0})), \quad s \geq 0, \\
&\xi(0, U_{0}) =U_{0}.
\end{aligned}\right.
\end{equation}
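Along the flow one has the standard energy identity
\begin{align*}
\frac{\d}{\d s} I_{K_{l}, \tau}(\xi(s, U_{0}))=-\|I_{K_{l}, \tau}^{\prime}(\xi(s, U_{0}))\|^{2} \leq 0,
\end{align*}
so $s \mapsto I_{K_{l}, \tau}(\xi(s, U_{0}))$ is nonincreasing; this fact is used repeatedly below.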
Assuming the negation of Theorem \ref{thm:3.1}, $I_{K_{l}, \tau}$ satisfies the Palais--Smale condition. Furthermore, the flow defined above never stops before exiting $V_{l}(2, \varepsilon^{*})$.
We define $U_{l} \in C([0,1]^{2}, D)$ by the following.
\begin{itemize}
\item If $I_{K_{l}, \tau}(G_{l}(\theta)) \leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}$, we define $s_{l}^{*}(\theta)=0$.
\item If $I_{K_{l}, \tau}(G_{l}(\theta)) > c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}$, then according
to \eqref{123}, $G_{l}(\theta) \in \widetilde{V}_{l}(2, \varepsilon_{2} / 2)$, $z_{1}(G_{l}(\theta)) \in O_{l}^{(1)}$ and $z_{2}(G_{l}(\theta)) \in O_{l}^{(2)}$. We
define $$s_{l}^{*}(\theta)=\min\{s>0 : I_{K_{l}, \tau}(\xi(s, G_{l}(\theta)))=c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}\}.$$
\end{itemize}
Now we set $$U_{l}(\theta)=\xi(s_{l}^{*}(\theta), G_{l}(\theta)).$$
The existence and continuity
of $s_{l}^{*}(\theta)$ are guaranteed by the following lemma.
\begin{lem}\label{lem:4.5}
For any $U_{0} \in \widetilde{V}_{l}(2, \varepsilon_{2} / 2)$ with $z_{1}(U_{0}) \in O_{l}^{(1)}$, $z_{2}(U_{0}) \in O_{l}^{(2)}$, and
$I_{K_{l}, \tau}(U_{0})\in (c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4},c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}+\varepsilon_{4}]$, the flow line $\xi(s, U_{0})$ $(s \geq 0)$
cannot leave $\widetilde{V}_{l}(2, \varepsilon_{2})$ before reaching $I_{K_{l}, \tau}^{-1}(c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4})$.
\end{lem}
\begin{proof}
The proof can be done exactly in the same way as in \cite[Lemma 5]{BC1}, so we omit it.
\end{proof}
Lemma \ref{lem:4.5} is a local version of \cite[Lemma 5]{BC1}, since we only need the compactness property of the flow line in a certain region, and Proposition \ref{prop:3.2} provides control of $\|I_{K_{l}, \tau}^{\prime}\|$ in that region.
We can see from Lemma \ref{lem:4.5} that $s_{l}^{*}(\theta)$ is well defined. Since $I_{K_{l}, \tau}$ has no critical point in $
\widetilde{V}_{l}(2, \varepsilon_{2}) \cap\{ U \in D: | I_{K_{l}, \tau}(U)-c^{(1)}-c^{(2)}|\leq \varepsilon_{4}\} \subset V_{l}(2, \varepsilon^{*}) \cap\{U \in D:|I_{K_{l}, \tau}(U)-c^{(1)}-c^{(2)}|\leq \varepsilon^{*}\}$
under the contradiction hypothesis, $s_{l}^{*}(\theta)$ is continuous in $\theta$ (see also \cite[Proposition 5.11]{Li93b} and \cite[Lemma 5]{BC1}), hence $U_{l} \in C([0,1]^{2}, D)$.
\medskip
\emph{Step 3: The construction of $H_l$.}
It follows from the construction of $U_l$ that
\begin{align}
\max_{\theta\in [0,1]^2}I_{K_l,\tau}(U_{l}(\theta))\leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}.
\end{align}
Since the gradient flow need not preserve properties \eqref{113}--\eqref{114}, $U_{l}(\theta)$ is not necessarily in $\Gamma_{l}$ any more. It follows from Lemma \ref{lem:4.5} that if $I_{K_{l}, \tau}(G_{l}(\theta))>c_{l, \tau}^{(1)}+c_{l,\tau}^{(2)}-\varepsilon_{4}$, then the gradient flow $\xi(s,G_l(\theta))$ $(s\geq 0)$ cannot leave $\widetilde{V}_{l}(2, \varepsilon_{2})$ before reaching $I_{K_{l}, \tau}^{-1}(c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4})$.
Using \eqref{123} and the above information we know that if $I_{K_{l}, \tau}(G_{l}(\theta))>c_{l, \tau}^{(1)}+c_{l,\tau}^{(2)}-\varepsilon_{4}$, then $U_{l}(\theta) \in \widetilde{V}_{l}(2, \varepsilon_{2}) \subset V_{l}(2, o_{\varepsilon_{2}}(1))$ with $z_{1}(U_{l}(\theta)) \in O_{l}^{(1)}$ and $z_{2}(U_{l}(\theta)) \in O_{l}^{(2)}$. This implies that
\begin{gather}
\label{125}
\int_{\Omega_{l}}t^{1-2\sigma}|\nabla U_{l}(\theta)|^{2}\,\d X+\int_{\pa'\Omega_{l}} |U_{l}(\theta)|^{2_{\sigma}^{*}}\,\d x = o_{\varepsilon_{2}}(1),\\\label{126}
\|U_{l}(\theta)\|_{ W^{\sigma,2}(\pa'' \Omega_{l})} = o_{\varepsilon_{2}}(1),
\end{gather}
where
\begin{align*}
\Omega_{l}&=\mathbb{R}^{n+1}_+\backslash\{\B^+_{r}(z_{l}^{(1)}) \cup \B^+_{r}(z_{l}^{(2)})\},\\r&=4(\operatorname{diam} O^{(1)}+\operatorname{diam} O^{(2)}), \\\operatorname{diam} O^{(1)}&=\sup \{|x-y| : x, y \in O^{(1)}\},\\\operatorname{diam} O^{(2)}&=\sup \{|x-y| : x, y \in O^{(2)}\}.
\end{align*}
By Proposition \ref{prop:2.1} (with $\varepsilon_{2}>0$ sufficiently small),
we can modify $U_{l}(\theta)$ in $\Omega_{l}$ via the following minimization.
Let
\begin{align*}
\varphi_{l}(\theta)=U_{l}(\theta)|_{\pa'' \Omega_{l}}.
\end{align*}
Thanks to \eqref{125}-\eqref{126}, we can apply Proposition \ref{prop:2.1} to obtain the minimizer $U_{\varphi_{l}}(\theta)$ of
\begin{align*}
\min \Big\{I_{K_{l}, \Omega_{l}}(U): U \in D_{\Omega_l}, \,U|_{\pa'' \Omega_{l}}=\varphi_{l}(\theta),\,\int_{\Omega_l}t^{1-2\sigma}|\nabla U|^2\,\d X\leq C_{1} r_{0}^{2}\Big\},
\end{align*}
where $D_{\Omega_{l}}$ is the closure of $ C^{\infty}_{c}(\overline{\Omega}_{l})$ under the norm \begin{align*}\|U\|_{D_{\Omega_{l}}}:=\Big(\int_{\Omega_{l}}t^{1-2\sigma}|\nabla U|^{2}\,\d X\Big)^{1 / 2}+\Big(\int_{\pa' \Omega_{l}}|U(x,0)|^{2_{\sigma}^{*}}\,\d x\Big)^{1/2_{\sigma}^{*}},\end{align*}and
$C_1$ and $r_{0}$ are the constants given by Proposition \ref{prop:2.1}.
For $\theta \in[0,1]^{2}$, we define
\begin{align*}
W_{l}(\theta)(X)=\left\{\begin{aligned}
&U_{l}(\theta)(X), & & X \in \B^+_{r}(z_{l}^{(1)}) \cup \B^+_{r}(z_{l}^{(2)}),\\
&U_{\varphi_{l}}(\theta)(X), & & X \in \mathbb{R}^{n+1}_+\backslash\{\B^+_{r}(z_{l}^{(1)}) \cup \B^+_{r}(z_{l}^{(2)})\}=\Omega_{l}.
\end{aligned}\right.
\end{align*}
It follows from Proposition \ref{prop:2.1} that $W_{l} \in C([0,1]^{2}, D)$ and satisfies
\begin{gather}
\max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau }(W_{l}(\theta)) \leq \max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(U_{l}(\theta)) \leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4},\label{127}\\\int_{\Omega_{l}}t^{1-2\sigma}|\nabla W_{l}(\theta)|^{2}\, \d X+\int_{\pa'\Omega_{l}} |W_{l}(\theta)|^{2_{\sigma}^{*}}\,\d x = o_{\varepsilon_{2}}(1),\\\label{128}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla W_{l}(\theta))=0&&\text{ in }\,\Omega_{l},\\&\pa_{\nu}^{\sigma}W_{l}(\theta)=K_{l}(x)H^{\tau}(x)W_{l}(\theta)^{p} &&\text { on } \, \pa'\Omega_{l}.
\end{aligned}\right.
\end{gather}
In order to construct the required $H_l$, we introduce the following notation.
First we write
\begin{align*}
\Omega_l^1&:=(\mathcal{B}_{l_{1}}^+(z_{l}^{(1)}) \backslash \mathcal{B}^+_{r}(z_{l}^{(1)})) \cup(\mathcal{B}^+_{l_{1}}(z_{l}^{(2)})\backslash \mathcal{B}^{+}_{r}(z_{l}^{(2)})),\\ \Omega_l^2&:=(\mathcal{B}_{l_{2}}^+(z_{l}^{(1)}) \backslash \mathcal{B}^+_{l_{1}}(z_{l}^{(1)})) \cup(\mathcal{B}^+_{l_{2}}(z_{l}^{(2)})\backslash \mathcal{B}^{+}_{l_{1}}(z_{l}^{(2)})),\\\Omega_l^3&:=(\mathbb{R}^{n+1}_{+} \backslash \B^+_{l_{2}}(z_{l}^{(1)})) \cap(\mathbb{R}^{n+1}_{+} \backslash \mathcal{B}^+_{l_{2}}(z_{l}^{(2)})).
\end{align*}
Clearly, $\Omega_l=\Omega_l^1\cup\Omega_l^2 \cup\Omega_l^3$ for large $l$.
For $l_{2}>100 l_{1}>1000 r$ (we determine the values of $l_{1}, l_{2}$ at the end), we introduce, for large $l$, a cut-off function $\eta_{l} \in C_{c}^{\infty}(\mathbb{R}^{n+1})$ satisfying
\begin{gather*}
\eta_{l}(x,t)=\left\{\begin{aligned}
&1, & & (x,t)\in \B_{l_1}(z_l^{(1)}) \cup \B_{l_1}(z_l^{(2)}), \\
& 0, & &(x,t)\in (\mathbb{R}^{n+1}\backslash \B_{l_2}(z_l^{(1)})) \cap (\mathbb{R}^{n+1}\backslash \B_{l_2}(z_l^{(2)})), \\
&\geq 0 ,& &\text {elsewhere,}
\end{aligned}\right.\\
|\nabla \eta_{l}| \leq \frac{10}{l_{2}-l_{1}}, \quad (x,t) \in \mathbb{R}^{n+1},
\end{gather*}
and set
\begin{align*}
H_{l}(\theta)=\eta_{l}(x,t)W_{l}(\theta).
\end{align*}
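In the computation of $I_{K_{l}, \tau}(H_{l}(\theta))$ below we will expand the gradient by the pointwise identity
\begin{align*}
|\nabla(\eta_{l} W)|^{2}=|\nabla \eta_{l}|^{2} W^{2}+2 \eta_{l} W\, \nabla \eta_{l} \cdot \nabla W+\eta_{l}^{2}|\nabla W|^{2},
\end{align*}
valid for smooth functions and extended to $W \in D$ by density.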
Next, we will prove that $H_{l}\in \Gamma_{l}$ while the energy of $H_{l}(\theta)$ stays strictly below $b_{l,\tau}$, contradicting the definition of $b_{l,\tau}$.
Multiplying both sides of \eqref{128} by $(1-\eta_{l}) W_{l}(\theta)$ and integrating by parts, we have\begin{align*}
&\int_{\Omega_l}t^{1-2\sigma} \nabla((1-\eta_{l}) W_{l}(\theta)) \nabla W_{l}(\theta)\,\d X\\=&\int_{\pa'\Omega_l} K_{l}(x)H^{\tau}(x)(1-\eta_{l}(x,0))|W_{l}(\theta)|^{p+1}\,\d x.
\end{align*}
A direct computation shows that\begin{align*}
&\int_{\Omega_l^3}t^{1-2\sigma}|\nabla W_{l}(\theta)|^{2}\,\d X-\int_{\pa^{\prime}\Omega_l^3}K_{l}(x)H^{\tau}(x)|W_{l}(\theta)|^{p+1}\,\d x\\=&\int_{\Omega_l^2}t^{1-2\sigma}W_{l}(\theta) \nabla\eta_{l} \nabla W_{l}(\theta)\,\d X\\&-\int_{\Omega_l^2}t^{1-2\sigma}(1-\eta_{l}) |\nabla W_{l}(\theta)|^2\,\d X\\& +\int_{\pa^{\prime}\Omega_l^2}K_{l}(x)(1-\eta_{l}(x,0))H^{\tau}(x)|W_{l}(\theta)|^{p+1}\,\d x\\\geq&-\int_{\Omega_l^2}\Big[\frac{10}{l_{2}-l_{1}}t^{1-2\sigma}| W_{l}(\theta)||\nabla W_{l}(\theta)| +t^{1-2\sigma}|\nabla W_{l}(\theta)|^{2}\Big]\,\d X\\&-2A_{1}\int_{\pa^{\prime}\Omega_l^2}|W_{l}(\theta)|^{p+1}\,\d x\\\geq&-\int_{\Omega_l^2}\frac{10}{l_{2}-l_{1}}t^{1-2\sigma}| W_{l}(\theta)||\nabla W_{l}(\theta)|\,\d X -4A_{1}\int_{\pa^{\prime}\Omega_l^2}|W_{l}(\theta)|^{p+1}\,\d x.
\end{align*}
Then by Proposition \ref{prop:1.4}, there exists a constant $C_{3}(n,\sigma,A_{1})>1$ such that for $l$ large enough, we have \begin{align}\label{129} | W_{l}(\theta)| &\leq \frac{C_{3}(n,\sigma,A_{1})}{|(x-z_{l}^{(i)},t)|^{n-2\sigma}},
\\
\label{131} |\nabla_x W_{l}(\theta)| &\leq \frac{C_{3}(n,\sigma,A_{1})}{|(x-z_{l}^{(i)},t)|^{n+1-2\sigma}}, \\
\label{131'}t^{1-2\sigma} |\pa_{t} W_{l}(\theta)| &\leq \frac{C_{3}(n,\sigma,A_{1})}{|(x-z_{l}^{(i)},t)|^{n+1-2\sigma}},
\end{align}
for all $(x,t)\in \Sigma^+_{l_2}(z_l^{(i)})\backslash \Sigma^+_{l_1}(z_l^{(i)})$, $i=1,2$.
Consequently,
\begin{align}
&\int_{\Omega_l^3}t^{1-2\sigma}|\nabla W_{l}(\theta)|^{2}\,\d X-\int_{\pa^{\prime}\Omega_l^3}K_{l}(x)H^{\tau}(x)|W_{l}(\theta)|^{p+1}\,\d x\notag\\\displaystyle \geq& \begin{cases}
-C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{1}{(l_2-l_1)l_1^{n-2+2\sigma}}+\frac{1}{l_1^{n-2+2\sigma}}+\frac{1}{l_1}\Big]\quad &\text{ if }\, 0<\sigma\leq \frac{1}{2},\\-C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{l_2^{2-2\sigma}}{(l_2-l_1)l_1^{n+2-4\sigma}}+\frac{l_2^{2-2\sigma}}{l_1^{n+2-4\sigma}}+\frac{1}{l_1}\Big]\quad &\text{ if } \,\frac{1}{2} <\sigma<1,
\end{cases}\notag\\ \displaystyle \geq& \begin{cases}
-C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\frac{1}{l_1}\quad &\text{ if } \,0<\sigma\leq \frac{1}{2},\\-C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{l_2^{2-2\sigma}}{l_1^{n-2}}+\frac{1}{l_1}\Big]\quad &\text{ if } \,\frac{1}{2} <\sigma<1.
\end{cases}\label{133}
\end{align}
Thanks to \eqref{129}--\eqref{131'}, \cite[Lemma A.4]{JLX} and a density argument, we see from \eqref{127} and \eqref{133} that
\begin{align*}
I_{K_{l}, \tau}(H_{l}(\theta))=& \frac{1}{2} \int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\nabla(\eta_{l} W_{l}(\theta))|^{2}\,\d X \\&-\frac{1}{p+1} \int_{\mathbb{R}^n\times\{0\}} K_{l}(x)H^{\tau}(x)|\eta_{l}(x,0) W_{l}(\theta)|^{p+1}\,\d x\\=&\frac{1}{2} \int_{\mathbb{R}^{n+1}_{+}}t^{1-2\sigma} |\nabla \eta_{l}|^{2}| W_{l}(\theta)|^{2}\,\d X\\&+\int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma} \eta_{l} W_{l}(\theta) \nabla \eta_{l}\nabla W_{l}(\theta)\,\d X\\& +\frac{1}{2} \int_{\mathbb{R}^{n+1}_+}t^{1-2\sigma}|\eta_{l}|^{2}|\nabla W_{l}(\theta)|^{2}\,\d X\\&-\frac{1}{p+1} \int_{\mathbb{R}^n\times\{0\}} K_{l}(x)H^{\tau}(x)|\eta_{l}(x,0) W_{l}(\theta)|^{p+1}\,\d x
\\\leq& I_{K_{l}, \tau}(W_{l}(\theta)) -\frac{1}{2} \int_{\mathbb{R}^{n+1}_{+}}t^{1-2\sigma}(1-|\eta_{l}|^{2})|\nabla W_{l}(\theta)|^{2}\,\d X\\& \displaystyle+\frac{1}{p+1} \int_{\mathbb{R}^n\times\{0\}} K_{l}(x)H^{\tau}(x)(1-|\eta_{l}(x,0)|^{p+1})|W_{l}(\theta)|^{p+1}\,\d x
\\& +\begin{cases}
C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{1}{(l_2-l_1)^2l_1^{n-2+2\sigma}}+\frac{1}{(l_2-l_1)l_1}\Big]\quad &\text{ if } \,0<\sigma\leq \frac{1}{2},\\C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{l_2^{2-2\sigma}}{(l_2-l_1)^2l_1^{n-4\sigma}}+\frac{l_2^{1-2\sigma}}{(l_2-l_1)l_1^{2n+1-4\sigma}}\Big]\quad &\text{ if }\, \frac{1}{2} <\sigma<1,
\end{cases}\\\leq &c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}-\frac{1}{2}\int_{ \Omega_l^3}t^{1-2\sigma}|\nabla W_{l}(\theta)|^{2}\,\d X\\&+\frac{1}{p+1}\int_{\pa^{\prime} \Omega_l^3} K_{l}(x)H^{\tau}(x)|W_{l}(\theta)|^{p+1} \,\d x\\&\displaystyle +\begin{cases}
C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{1}{(l_2-l_1)^2l_1^{n-2+2\sigma}}+\frac{1}{(l_2-l_1)l_1}\Big]\quad &\text{ if }\, 0<\sigma\leq \frac{1}{2},\\C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{l_2^{2-2\sigma}}{(l_2-l_1)^2l_1^{n-4\sigma}}+\frac{l_2^{1-2\sigma}}{(l_2-l_1)l_1^{2n+1-4\sigma}}\Big]\quad &\text{ if }\, \frac{1}{2} <\sigma<1,
\end{cases}\\\leq &c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\varepsilon_{4}\\& +\begin{cases}
C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\frac{1}{l_1}\quad &\text{ if } \,0<\sigma\leq \frac{1}{2},\\C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big(\frac{l_2^{2-2\sigma}}{l_1^{n-2}}+\frac{1}{l_1}\Big)\quad &\text{ if }\, \frac{1}{2} <\sigma<1,\end{cases}\\& \displaystyle +\begin{cases}
C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{1}{(l_2-l_1)^2l_1^{n-2+2\sigma}}+\frac{1}{(l_2-l_1)l_1}\Big]\quad &\text{ if }\, 0<\sigma\leq \frac{1}{2},\\C_{0}(n,\sigma) C_{3}(n,\sigma,A_{1})\Big[\frac{l_2^{2-2\sigma}}{(l_2-l_1)^2l_1^{n-4\sigma}}+\frac{l_2^{1-2\sigma}}{(l_2-l_1)l_1^{2n+1-4\sigma}}\Big]\quad &\text{ if }\, \frac{1}{2} <\sigma<1.
\end{cases}
\end{cases}
\end{align*}
Now we choose $l_{1}>10 r$, $l_{2}>200 l_{1}$ to be large enough such that
\begin{equation}\label{135}
I_{K_{l}, \tau}(H_{l}(\theta))\leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\frac{\varepsilon_{4}}{2}.
\end{equation}
Then for $l$ large enough (depending on $l_{1}$, $l_{2}$, the $\varepsilon$'s and the $C$'s), we have\begin{equation}\label{136}
H_{l} \in \Gamma_{l}.
\end{equation}
Therefore, for $l$ sufficiently large, we have\begin{equation}\label{137}
\max _{\theta \in[0,1]^{2}} I_{K_{l}, \tau}(H_{l}(\theta)) \leq c_{l, \tau}^{(1)}+c_{l, \tau}^{(2)}-\frac{\varepsilon_{4}}{2}<b_{l, \tau}.
\end{equation}
However, \eqref{137} cannot hold by \eqref{136} and the definition of $b_{l, \tau}$. This completes the proof of Theorem \ref{thm:3.1}.
\section{Proof of main theorems }\label{sec:6}
In this section we present our main result from which we deduce
Theorems \ref{thm:0.1}--\ref{thm:0.3} and Corollary \ref{cor:1}.
\begin{prop}\label{prop:5.1}
Suppose that $\{K_{l}\}$ is a sequence of functions satisfying conditions (i)--(iii) (see Section \ref{sec:3}) and $(K_2)$. Assume also that there exist some bounded open sets $O^{(1)}, \ldots, O^{(m)} \subset \mathbb{R}^n$
and some positive constants $\delta_{2}$, $\delta_{3}>0$ such that for any $1 \leq i \leq m$,
\begin{gather*}
\widetilde{O}_{l}^{(i)}-z_{l}^{(i)} \subset O^{(i)} \quad \text { for all }\, l,\\
\big\{U \in D^+: I_{K_{\infty}^{(i)}}^{\prime}(U)=0,\, c^{(i)} \leq I_{K_{\infty}^{(i)}}(U) \leq c^{(i)}+\delta_{2}\big\}\cap V(1, \delta_{3}, O^{(i)}, K_{\infty}^{(i)})=\emptyset.
\end{gather*}
Then for any $\varepsilon>0$, there exists an integer $\overline{l}_{\varepsilon, m}>0$ such that for any $l \geq \overline{l}_{\varepsilon, m}$, there exists $U_{l} \in V_{l}(m, \varepsilon)$ which solves
\begin{align}\label{139}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla U_{l})=0&&\text{ in }\,\mathbb{R}^{n+1}_+,\\
&\partial_{\nu}^{\sigma} U_l=K_{l}(x) U_{l}(x,0)^{\frac{n+2\sigma}{n-2\sigma}}&&\text{ on }\,\mathbb{R}^n.
\end{aligned}\right.
\end{align}
Furthermore, $U_{l}$ satisfies
\begin{align*
\sum_{i=1}^{m} c^{(i)}-\varepsilon \leq I_{K_{l}}(U_{l}) \leq \sum_{i=1}^{m} c^{(i)}+\varepsilon.
\end{align*}
\end{prop}
The proof of Proposition \ref{prop:5.1} is by a contradiction argument,
relying on blow-up analysis for a family of equations \eqref{subcritical} approximating Eq. \eqref{139}. More precisely, if the sequence
of subcritical solutions $U_{l, \tau}$ $(0<\tau<\overline{\tau}_{l})$ obtained in Theorem \ref{thm:3.1} is uniformly bounded
as $\tau\rightarrow 0$, the local estimates in \cite{JLX} imply that there exists a subsequence converging to a
positive solution of Eq. \eqref{139}. A priori, however, $\{U_{l,\tau}\}$ might blow up,
and we have to rule out this possibility. Since $U_{l, \tau}\in V_{l}(m, o_{\varepsilon_{2}}(1))$, which consists of functions with $m$ $(m \geq 2)$ ``bumps'', we can apply results of the blow-up analysis developed in \cite{JLX} to conclude that, as $\tau\rightarrow 0$, no blow up occurs under the hypotheses of Proposition \ref{prop:5.1}.
In the following we show the boundedness of $\{U_{l, \tau}\}$ (as $\tau \rightarrow 0$) by a contradiction argument. More precisely, we reach a contradiction by checking the balance encoded in a Pohozaev type identity on a suitable region. We start by recalling the notions of blow up points, isolated blow up points and isolated simple blow up points.
Let $\Omega \subset \mathbb{R}^n$ be a domain, $\tau_{i} \geq 0$ satisfy $\lim _{i \rightarrow \infty} \tau_{i}=0$, $p_{i}=\frac{n+2\sigma} \newcommand{\Sn}{\mathbb{S}^n}{n-2\sigma} \newcommand{\Sn}{\mathbb{S}^n}-\tau_{i}$, and let $K_{i} \in C^{1,1}(\Omega)$ satisfy, for some constants $A_{1}, A_{2}>0$,
\begin{align}\label{bdd}
1 / A_{1} \leq K_{i}(x) \leq A_{1} \quad \text { for all }\, x \in \Omega, \quad\|K_{i}\|_{C^{1,1}(\Omega)} \leq A_{2}.
\end{align}
Let $u_{i} \in L^{\infty}(\Omega) \cap E$ with $u_{i} \geq 0$ in $\mathbb{R}^n$ satisfy
\begin{align}\label{subcriticaleq}
(-\Delta)^{\sigma} u_{i}= K_{i} u_{i}^{p_{i}} \quad \text { in } \,\Omega.
\end{align}
We say that $\{u_i\}$ blows up if $\|u_i\|_{L^\infty(\Omega)}\to \infty$ as $i\to \infty$.
\begin{defn}\label{def:isolatedblowup}
Suppose that $\{K_i\}$ satisfies \eqref{bdd} and $\{u_i\}$ satisfies \eqref{subcriticaleq}.
We say a point $\overline y\in \Omega$ is an isolated blow up point of $\{u_i\}$ if there exist
$0<\overline r<\mbox{dist}(\overline y,\partial\Omega)$, $\overline C>0$, and a sequence $\{y_i\}$ tending to $\overline y$, such that
$y_i$ is a local maximum point of $u_i$, $u_i(y_i)\rightarrow \infty$ and
\begin{align*}
u_i(y)\leq \overline C | y-y_i|^{-2\sigma/(p_i-1)} \quad \mbox{ for all }\, y\in B_{\overline r}(y_i).
\end{align*}
\end{defn}
Let $y_i\rightarrow \overline y$ be an isolated blow up point of $\{u_i\}$, and define
\begin{equation}\label{def:average}
\overline u_i(r)=\frac{1}{|\partial B_r(y_i)|} \int_{\partial B_r(y_i)}u_i,\quad r>0,
\end{equation}
and
\begin{align*}
\overline w_i(r)=r^{2\sigma/(p_i-1)}\overline u_i(r), \quad r>0.
\end{align*}
\begin{defn}\label{isolatedsimpleblowup}
We say $y_i \to \overline y\in \Omega$ is an isolated simple blow up point if $y_i \to \overline y$ is an isolated blow up point and, for some
$\rho>0$ (independent of $i$), $\overline w_i$ has precisely one critical point in $(0,\rho)$ for large $i$.
\end{defn}
Utilizing these notions, we now collect some facts. By standard blow up arguments, blow up points cannot occur in $\mathbb{R}^n \backslash(\bigcup_{i=1}^{m}{\widetilde{O}}_{l}^{(i)})$, since the energy of $\{U_{l, \tau}\}$ in this region is small by the fact that $U_{l, \tau} \in V_{l}(m, o_{\varepsilon_{2}}(1))$ and the definition of $V_{l}(m, o_{\varepsilon_{2}}(1))$.
Hence blow up points can occur only in $\bigcup_{i=1}^{m}{\widetilde{O}}_{l}^{(i)}$. By the structure of functions in $V_{l}(m,o_{\varepsilon_{2}}(1))$ and the blow up arguments of \cite[Proposition 5.1]{JLX}, there are at most $m$ isolated blow up points; namely, blow up can occur only in $\{(\overline{y}_{1},0), \ldots, (\overline{y}_{m},0)\}$ for some $\overline{y}_{i} \in \widetilde{O}_{l}^{(i)}$ $(1\leq i\leq m)$. Furthermore, we conclude from \cite[Proposition 4.16]{JLX} that an isolated blow up point has to be an isolated simple blow up point. From the structure of functions in $V_{l}(m,o_{\varepsilon_{2}}(1))$ we know that if blow up does occur, there have to be exactly $m$ isolated simple blow up points; see \cite[Section 4]{JLX} for more details.
Let us consider this situation only; namely, $\{\overline{Y}_{i}=(\overline{y}_{i},0),\, 1\leq i\leq m\}$ is the blow up set and all of its points are isolated simple blow up points. Moreover, in our situation, $K_{i}(x)=K(x)H^{\tau_i}(x)$ is the sequence of functions in Eq. \eqref{subcriticaleq}. We may assume that blow up occurs along $U_{i}=U_{l, \tau_{i}}$, and we can apply the blow-up analysis results of \cite{JLX} to $U_{i}$. Here and in the following we suppress the
dependence on $l$ in the notation, since $l$ is fixed in the blow-up analysis.
We now complete the proof of Proposition \ref{prop:5.1} by checking the balance
encoded in a Pohozaev type identity.
\begin{proof}[Proof of Proposition \ref{prop:5.1}]
Let $U_i$ be the extension of $u_i$ (see \eqref{extension}) corresponding to the solution of Eq. \eqref{subcriticaleq} with $K_{i}=KH^{\tau_i}$. Let $\overline{y}=\overline{y}_{1}$ and $\{y_{i}\}$ be the sequence as in Definitions \ref{def:isolatedblowup} and \ref{isolatedsimpleblowup}. Applying the Pohozaev identity \cite[Proposition 4.7]{JLX} to $U_{i}$, we obtain
\begin{align}\label{141}
\int_{\partial^{\prime} \mathcal{B}_{R}^{+}(y_{i})} B^{\prime}(Y, U_{i}, \nabla U_{i}, R, \sigma)+\int_{\partial^{\prime \prime} \mathcal{B}_{R}^{+}(y_{i})} s^{1-2 \sigma} B^{\prime \prime}(Y, U_{i}, \nabla U_{i}, R, \sigma)=0,
\end{align}
where $$
B^{\prime}(Y, U_{i}, \nabla U_{i}, R, \sigma)=\frac{n-2 \sigma}{2} K_{i} U_{i}^{p_{i}+1}+\langle Y, \nabla U_{i}\rangle K_{i} U_{i}^{p_{i}},
$$
and $$
B^{\prime \prime}(Y, U_{i}, \nabla U_{i}, R, \sigma)=\frac{n-2 \sigma}{2} U_{i} \frac{\partial U_{i}}{\partial \nu}-\frac{R}{2}|\nabla U_{i}|^{2}+R\Big|\frac{\partial U_{i}}{\partial \nu}\Big|^{2}.
$$
We are going to derive a contradiction to \eqref{141}, by showing that for small $R>0$,\begin{align}\label{Pohozaeve1}
\limsup _{i \rightarrow \infty} U_{i}(Y_{i})^{2} \int_{\partial^{\prime} \mathcal{B}_{R}^{+}(y_{i})} B^{\prime}(Y, U_{i}, \nabla U_{i}, R, \sigma) \leq 0,
\end{align}
and\begin{align}\label{Pohozaeve2}
\limsup _{i \rightarrow \infty} U_{i}(Y_{i})^{2} \int_{\partial'' \mathcal{B}_{R}^{+}(y_{i})} s^{1-2 \sigma} B^{\prime \prime}(Y, U_{i}, \nabla U_{i}, R, \sigma)<0.
\end{align}
Indeed, \eqref{Pohozaeve1} and \eqref{Pohozaeve2} together make the left-hand side of \eqref{141}, once multiplied by $U_{i}(Y_{i})^{2}$, strictly negative for large $i$, which is impossible. Hence Proposition \ref{prop:5.1} will be established.
Let $\mathscr{S}=\{\overline{Y}_{1},\overline{Y}_{2},\ldots,\overline{Y}_{m}\}$. Applying the B\^ocher type lemma \cite[Lemma 4.10]{JLX} and the maximum principle, we deduce that
\begin{align*}
U_{i}(Y_{i}) U_{i}(Y)\rightarrow G(Y)=\sum_{k=1}^{m} b_{k}|Y-\overline{Y}_{k}|^{2\sigma-n}+h(Y)\quad \text { in }\, C_{loc}^{\alpha}(\overline{\mathbb{R}}_{+}^{n+1} \backslash \mathscr{S}) \cap C_{loc}^{2}(\mathbb{R}_{+}^{n+1})
\end{align*}
and
\begin{align*}
u_{i}(y_{i}) u_{i}(y)\rightarrow G(y,0)=\sum_{k=1}^{m} b_{k}|y-\overline{y}_{k}|^{2\sigma-n}+h(y,0)\quad \text { in }\, C_{loc}^{2}(\mathbb{R}^n\backslash \{\overline{y}_1,\overline{y}_2,\ldots,\overline{y}_m\})
\end{align*}
as $i\rightarrow \infty$, where $b_{k}>0$ $(1 \leq k \leq m)$ and $h(Y)$ satisfies
$$
\left\{\begin{aligned}
&\operatorname{div}(s^{1-2 \sigma} \nabla h)=0 && \text { in }\, \mathbb{R}_{+}^{n+1}, \\
& \partial_{\nu}^{\sigma} h=0 && \text { on }\, \mathbb{R}^n.
\end{aligned}\right.
$$
In particular, in a small punctured half disc centered at $\overline{Y}_1$, we have
$$
\lim _{i \rightarrow \infty} U_{i}(Y_{i}) U_{i}(Y)=b_{1}|Y-\overline{Y}_1|^{2\sigma-n}+b+w(Y),
$$
where $b>0$ is a positive constant and $w(Y)$ is a smooth function near $\overline{Y}_1$ with $w(\overline{Y}_1)=0$.
Now if we choose $R>0$ small enough, it is easy to verify \eqref{Pohozaeve2} by
\begin{align*}
& \limsup _{i \rightarrow \infty} U_{i}(Y_{i})^{2} \int_{\partial'' \mathcal{B}_{R}^{+}(y_{i})} s^{1-2 \sigma} B^{\prime \prime}(Y, U_{i}, \nabla U_{i}, R, \sigma) \\
=&\int_{\partial''\mathcal{B}_{R}^{+}(\overline{y}_{1})} s^{1-2 \sigma} B^{\prime \prime}(Y, G, \nabla G, R, \sigma)\\=&-\frac{(n-2 \sigma)^{2}}{2} b_1^{2} \int_{\partial''\mathcal{B}_{1}^{+}} s^{1-2 \sigma}\,\d y\,\d s+o_{R}(1)<0.
\end{align*}
On the other hand, via integration by parts, we have
\begin{align*}
&\int_{\partial^{\prime} \mathcal{B}_{R}^{+}(y_{i})} B^{\prime}(Y, U_{i}, \nabla U_{i}, R, \sigma) \\
=&\frac{n-2 \sigma} \newcommand{\Sn}{\mathbb{S}^n}{2} \int_{B_{R}(y_{i})} K_i u_{i}^{p_{i}+1}+\int_{B_{R}(y_{i})}\langle y-y_i, \nabla u_{i}\rangle K_i u_{i}^{p_{i}} \\
=&\frac{n-2 \sigma} \newcommand{\Sn}{\mathbb{S}^n}{2} \int_{B_{R}(y_{i})} K_i u_{i}^{p_{i}+1}-\frac{n}{p_{i}+1} \int_{B_{R}(y_{i})} K_i u_{i}^{p_{i}+1} \\
& -\frac{1}{p_{i}+1} \int_{B_{R}(y_{i})}\langle y-y_i, \nabla K_i \rangle u_{i}^{p_{i}+1}+\frac{R}{p_{i}+1} \int_{\partial B_{R}(y_{i})} K_i u_{i}^{p_{i}+1} \\
\leq&-\frac{1}{p_{i}+1} \int_{B_{R}(y_{i})}\langle y-y_i, \nabla K_i\rangle u_{i}^{p_{i}+1}+C u_{i}(y_i)^{-(p_{i}+1)},
\end{align*}
where \cite[Proposition 4.9]{JLX} is used in the last inequality. Hence \eqref{Pohozaeve1} follows from \cite[Corollary 4.15]{JLX}.
Finally, from the above argument we know that no blow up can occur under the hypotheses of Proposition \ref{prop:5.1}, and hence Proposition \ref{prop:5.1} is established.
\end{proof}
We are now ready to complete the proofs of the main results in our paper.
\begin{proof}[Proof of Theorem \ref{thm:0.2}]
Suppose Theorem \ref{thm:0.2} is false. Then for some $\overline{\varepsilon}>0$ (which we may assume to be very small) and some $k \in\{2,3,4, \ldots\}$, there exists a sequence of integers $I_{l}^{(1)}, \ldots, I_{l}^{(k)}$ such that
\begin{gather*}
\lim _{l \rightarrow \infty}|I_{l}^{(i)}|=\infty ,\\\lim _{l \rightarrow \infty}|I_{l}^{(i)}-I_{l}^{(j)}|=\infty, ~ i \neq j,
\end{gather*}
but Eq. \eqref{maineq2} has no solution in $V(k, \overline{\varepsilon}, B_{ \overline{\varepsilon}}(x_{l}^{(1)}), \ldots, B_{\overline{\varepsilon}}(x_{l}^{(k)}))$ which satisfies
$k c-\overline{\varepsilon} \leq I_{K} \leq k c+\overline{\varepsilon}$, where $c=(\sigma / n)(K(x^{*}))^{(2\sigma-n) / 2\sigma}(\mathcal{S}_{n,\sigma})^{n/\sigma}$ and $x_{l}^{(i)}=x^{*}+(I_{l}^{(i)} T, 0,\ldots,0)$.
For $\varepsilon>0$ small, define
\begin{gather*}
K_{l}(x_{1}, x_{2},\ldots, x_{n})=K(x_{1}, x_{2},\ldots, x_{n}),\\
O_{l}^{(i)}=B_{\varepsilon}(x_{l}^{(i)}),\quad \widetilde{O}_{l}^{(i)}=B_{2 \varepsilon}(x_{l}^{(i)}),\\R_{l}=\min _{i \neq j}\big\{\sqrt{|I_{l}^{(i)}|}, \sqrt{|I_{l}^{(i)}-I_{l}^{(j)}|}\big\},\\K_{\infty}^{(i)}(x_{1}, x_{2},\ldots, x_{n})=K_{\infty}(x_{1}, x_{2}, \ldots,x_{n})=\lim _{l \rightarrow \infty} K(x_{1}+lT, x_{2},\ldots, x_{n}),\\a^{(i)}=K(x^{*}).
\end{gather*}
It is easy to see that $K_{\infty}$ is $T$-periodic in $x_{1}$, satisfies assumption $(K_2)$, and
\begin{gather*}
K_{\infty}(x^{*})=\sup _{x \in \mathbb{R}^n} K_{\infty}(x)>0.
\end{gather*}
Under our hypothesis, it follows from \cite[Theorem 5.4]{JLX} that it is impossible to have more than one blow-up point.
Applying \cite[Theorem 5.6]{JLX} and Proposition \ref{prop:5.1}, we
immediately get a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:0.3}]
Theorem \ref{thm:0.3} can be proved in the same way as Theorem \ref{thm:0.2} above; we omit the details here.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:0.1}]
Let $\widetilde{x} \in \mathbb{S}^n$ be the north pole and perform a stereographic projection to the equatorial plane of $\mathbb{S}^n$; then \eqref{maineq} is transformed into \eqref{maineq1}, up to a harmless positive constant in front of $K(x)$.
Here $K(x) \in L^{\infty}(\mathbb{R}^n)$ satisfies, for some constants $A_{1}>1$, $R>1$, and $K_{\infty}>0$,
\begin{gather*}
1/A_1 \leq K(x) \leq A_1,\\ K \in C^{0}(\mathbb{R}^n \backslash B_{R}),\\
\lim _{|x| \rightarrow \infty} K(x)=K_{\infty}.
\end{gather*} Let $\psi \in C^{\infty}(\mathbb{R}^n)$ satisfy assumption $(K_2)$ and
\begin{gather}
\label{147}
\|\psi\|_{C^{2}(\mathbb{R}^n)}<\infty,\\\label{148}
\lim _{|x| \rightarrow \infty } \psi(x)=: \psi_{\infty}>0,\\\label{150}
\langle \nabla \psi, x\rangle<0,~ \forall\, x \neq 0.
\end{gather}
It follows from \eqref{147}--\eqref{150} that
$\psi$ violates the Kazdan-Warner type condition (see \cite[Proposition A.1]{JLX}) and therefore\begin{align*}
(-\Delta)^{\sigma} u=\psi|u|^{\frac{4\sigma}{n-2\sigma}} u \quad \text { in }\, \mathbb{R}^n
\end{align*}has no nontrivial solution in $E$.
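To make the obstruction explicit, we sketch the argument schematically (see \cite[Proposition A.1]{JLX} for the precise statement): any solution $u\in E$ of the above equation would have to satisfy the Kazdan-Warner type identity
\begin{align*}
\int_{\mathbb{R}^n}\langle x, \nabla \psi(x)\rangle\, |u(x)|^{\frac{2n}{n-2\sigma}}\,\d x=0,
\end{align*}
while \eqref{150} makes the integrand strictly negative wherever $u\neq 0$, so that only the trivial solution remains.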
For any $\varepsilon\in (0,1)$, $k=1,2,3, \ldots$ and $m=2,3,4, \ldots$, we choose an integer $\overline{k}$ such that $\binom{\overline{k}}{s}\geq k$ for every $2 \leq s \leq m$, where $\binom{\overline{k}}{s}$ denotes the binomial coefficient; for instance, any $\overline{k} \geq 2m$ with $\overline{k}(\overline{k}-1) \geq 2k$ works, since $\binom{\overline{k}}{s}\geq \binom{\overline{k}}{2}$ for $2 \leq s \leq m \leq \overline{k}/2$. Then we choose $\overline{k}$ distinct points $e_{1}, e_{2}, \ldots, e_{\overline{k}} \in \partial B_{1}$.
Let
$$
\delta_{R}=\max _{|x| \geq R}|K(x)-K_{\infty}|+\max _{|x| \geq R}|\psi(x)-\psi_{\infty}|, \quad R>1,
$$
and $\widetilde{\Omega}_{l}^{(i)}$ be the connected component of$$
\big\{x : \varepsilon(\psi(x-l e_{i})-\psi_{\infty})+K_{\infty}-\delta_{\sqrt{l}}>K(x)\big\},
$$which contains $x=l e_{i}$. Define
\begin{gather*}
R_{l}=\min _{1 \leq i \leq m} \sup \{|x-l e_{i}| : x \in \widetilde{\Omega}_{l}^{(i)}\},\\ K_{\varepsilon, k, m, l}=\left\{\begin{aligned}
&\varepsilon(\psi(x-l e_{i})-\psi_{\infty})+K_{\infty}-\delta_{\sqrt{l}} && \text { if } x \in \widetilde{\Omega}_{l}^{(i)}, \\
&K(x) && \text { otherwise. }
\end{aligned}\right.
\end{gather*}
It is easy to prove that\begin{gather*}
\operatorname{diam}(\widetilde{\Omega}_{l}^{(i)}) \leq \sqrt{l}\quad \text { for large }\, l,\\\lim _{l \rightarrow \infty} R_{l}=\infty.
\end{gather*}
Now we consider the equation corresponding to $K_{\varepsilon, k, m, l}$:
\begin{align}\label{153}
\left\{\begin{aligned}
&\operatorname{div}(t^{1-2\sigma}\nabla U)=0&&\text{ in }\,\mathbb{R}^{n+1}_+,\\
&\partial_{\nu}^{\sigma} U=K_{\varepsilon, k, m,l}\, U(x,0)^{\frac{n+2\sigma}{n-2\sigma}}&&\text{ on }\,\mathbb{R}^n.
\end{aligned}\right.
\end{align}
For any $2 \leq s \leq m$, we claim that, for $l$ large enough, Eq. \eqref{153} has at least $k$ solutions with $s$ bumps.
To verify it, let $e_{j_{1}}, \ldots, e_{j_{s}}$ be any $s$ distinct points among $e_{1}, \ldots, e_{\overline{k}}$. For $i=1,2, \ldots, s$, we define
\begin{gather*}
z_{l}^{(i)}=l e_{j_{i}} ,\\ O_{l}^{(i)}=B_{1}(z_{l}^{(i)}),\quad \widetilde{O}_{l}^{(i)}=B_{2}(z_{l}^{(i)}), \\
K_{\infty}^{(i)}=\varepsilon(\psi-\psi_{\infty})+K_{\infty},\\ a^{(i)}=\varepsilon(\psi(0)-\psi_{\infty})+K_{\infty}.
\end{gather*}
Then, using an argument similar to the proof of Theorem \ref{thm:0.2}, we can prove that there exists at least
one solution in $V_{l}(s, \varepsilon)$ for large $l$. It is also easy to see that choosing a different set of $s$ points among $\{e_{1}, \ldots, e_{\overline{k}}\}$ yields a different solution, since
the masses are distributed in different regions by the definition of $V_{l}(s, \varepsilon)$. By the choice of $\overline{k}$, there are at least $k$ different sets of such points. Therefore Eq. \eqref{153} has at least $k$ solutions for large $l$. This proves the claim.
Finally, we fix $l$ large enough to make the above argument work for all $2 \leq s \leq m$ and set $K_{\varepsilon, k, m}=K_{\varepsilon, k, m, l}$. Thus there exist at least $k$ solutions with $s$ ($2 \leq s \leq m$) bumps to the equation \begin{align*}
(-\Delta)^{\sigma} u= K_{\varepsilon, k, m} u^{\frac{n+2\sigma}{n-2\sigma}},\quad u>0 \quad \text { in }\, \mathbb{R}^n.
\end{align*}Theorem \ref{thm:0.1} follows from the above after performing a stereographic projection back to the original equation on $\mathbb{S}^n$; we omit the details here.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:1}]We can see from the proof of Theorem \ref{thm:0.1} that if $K\in C^{\infty}(\mathbb{S}^n)$, then $K_{\varepsilon,k,m}-K\in C^{\infty}(\mathbb{S}^n)$ can also be achieved.
\end{proof}
\section{Introduction}
\label{sec:introduction}
Measurements of the expansion rate history $H(t)$ of the universe, when interpreted within the
standard model of cosmology, convincingly indicate
that the universe has recently entered a phase of accelerated expansion \cite{Perlmutter:1998np,Riess:1998cb, astier, marpairs,mod1, anderson, Bel:2013ksa, san,Ade:2013zuv}.
Most of this unexpected evidence is provided via
geometric probes of cosmology, that is by constraining the redshift scaling of the luminosity distances $d_L(z)$
of cosmic standard candles (such as Supernovae Ia), or of the angular diameter distance $d_A(z)$ of
cosmic standard rulers (such as the sound horizon scale at the last scattering epoch).
Despite much observational evidence, little is known about the physical
mechanism that drives cosmic acceleration. As a matter of fact, virtually all the attempts to make sense of this perplexing
phenomenon without invoking a new constant of nature (the so called cosmological constant) call for exotic physics beyond current theories.
For example, it is speculated that cosmic acceleration might be
induced by a non clustering, non evolving, non interacting and extremely light
vacuum energy $\Lambda$ \cite{Peebles:2002gy},
or by a cosmic field with negative pressure, and thus repulsive
gravitational effect, that changes with time and
varies across space (the so called dark energy fluid) \cite{lucashin, cope, wett:1988, Caldwell:1997ii, ArmendarizPicon:2000dh,Binetruy:2000mh,Uzan:1999ch,Riazuelo:2001mg, Gasperini:2001pc}, if not by a break-down of Einstein's
theory of gravity on cosmological scales (the so called dark gravity scenario) \cite{costas, DeFelice:2010aj, DGP, Deffayet:2001pu, Arkani-Hamed:2002fu,Capozziello:2003tk, Nojiri:2005jg, deRham:2010kj, Piazza:2009bp,GPV,JLPV,BFS}.
This last, extreme eventuality is made somewhat less far-fetched by the fact that
a large variety of nonstandard gravitational models, inspired by fundamental physics arguments,
can be finely tuned to reproduce the expansion rate history of the successful standard model of cosmology, the
$\Lambda$CDM paradigm.
Although different models make indistinguishable predictions about the amplitude and scaling of background observables such as $d_L$, $d_A$ and $H$, the analysis of the clustering properties of matter on large linear cosmic scales is in principle sufficient to distinguish and falsify alternative gravitational scenarios. Indeed, a generic prediction of
modified gravity theories is that the Newton constant $G$ becomes a time (and possibly scale) dependent function $G_{\rm eff}$. Therefore, dynamical observables of cosmology
which are sensitive to the amplitude of $G$, such as,
for example, the clustering properties of cosmic structures, provide a probe for resolving geometrical degeneracies among models and for properly identifying the specific signature of nonstandard gravitational signals. Considerable phenomenological effort is thus devoted to engineering and applying methods for
extracting information from dynamical observables of the inhomogeneous sector of the universe
\cite{Bel:2012ya,TurHudFel12,DavNusMas11,BeuBlaCol12,PerBurHea04,SonPer09,RosAngSha07,CabGaz09,SamPerRac12, ReiSamWhi12,ConBlaPoo13,GuzPieMen08,TorGuzPea13}.
Indeed, thanks to large and deep future galaxy redshift surveys, such as for example Euclid \cite{euclid2},
the clustering properties of matter will be soon characterized with a `background level' precision,
thus providing us with stringent constraints on the viability of alternative gravitational scenarios.
Extending the perimeter of precision cosmology beyond zeroth order observables into the domain of
first order perturbative quantities critically depends on observational improvements but also
on the refinement of theoretical tools. Among the quantities that are instrumental in constraining
modified gravity models, the linear growth index $\gamma$,
\begin{equation} \label{defuno}
\gamma(a) \equiv \big( \ln \Omega_{\rm m} (a) \big)^{-1} \ln \Big( \frac{d \ln \delta_{\rm m}(a)}{d \ln a} \Big)
\end{equation}
\noindent where $a$ is the scale factor of the universe, $\Omega_{\rm m}=(8\pi G \rho_{\rm m})/(3H^2)$ is the reduced density of matter and $\delta_{\rm m} =\rho_{\rm m}/\bar{\rho}_{\rm m} -1$
the dimensionless density contrast of matter, has attracted much attention. Despite being in principle a function, this quantity is often, and effectively, parameterized as being constant
\cite{pee80}. Among the various appealing properties of such an approximation, two in particular deserve to be mentioned.
First, the salient features of the growth rate history of linear structures can be compressed into a single scalar quantity which can be easily constrained using standard parameter estimation
techniques. As is the case with parameters such as $H_0$, $\Omega_{\rm m,0}$, etc., which incorporate all the information contained in the expansion rate function $H(t)$, it is
extremely economic to label and identify different growth histories $\delta_{\rm m}(t)$ with the single book-keeping index $\gamma$.
Moreover, since the growth index parameter takes distinctive values for distinct gravity theories, any deviation of its estimated amplitude from the reference value $\gamma_0=6/11$ (which represents the
exact asymptotic early value of the function $\gamma(a)$ in a $\Lambda$CDM cosmology \cite{WanSte98})
is generically interpreted as a potential signature of new gravitational physics.
However useful in quantifying deviations from standard gravity predictions, this index must also be precise to be of any practical use.
As a rule of thumb, the systematic error introduced by approximating $\gamma(a)$ with $\gamma_0$, which depends on $\Omega_{\rm m}$, must be much smaller
than the precision with which future experiments are expected to constrain the growth index over a wide redshift range ($\sim 0.7\%$ \cite{euclid2}).
Notwithstanding, already within a standard $\Lambda$CDM framework with $\Omega_{\rm m,0}=0.315$, the imprecision of the asymptotic approximation
is of order $2\%$ at $z=0$. More subtly, the expansion kinematics is expected to leave time dependent imprints on the growth index.
The need to model the redshift evolution of the growth index, especially in view of the large redshift baseline that will be surveyed by future data,
led to the development of more elaborate parameterizations \cite{Gong, GanMorPol09, FuWuYu2009,pg,gp}. Whether their justification is purely phenomenological or
theoretical, these formulas aim at locking the expected time variation of $\gamma(a)$ into a small set of scalar quantities, the so called growth index parameters $\gamma_i$.
For example, some authors (e.g. \cite{pg,gp}) suggest to use the Taylor expansion $\gamma(z)\,=\,\gamma_0\,+\,\big[\frac{d \gamma}{dz}\big]_{z=0}\, z$ for data fitting purposes. Indeed, this approach
has the merit of great accuracy at the present epoch, but it becomes too inaccurate at the intermediate redshifts ($z\sim 0.5$) already probed by current data.
On top of precision issues, there are also interpretational concerns. Ideally, we would like the growth index parameter space to be also in one-to-one correspondence with predictions of specific gravitational theories. In other terms we would like to use likelihood contours in this plane to select/reject specific gravitational scenarios. This is indeed a tricky issue.
For example, it is rather involved to link the amplitude of the growth index parameters to predictions of theoretical models if the growth index fitting formula has a phenomenological nature.
More importantly, it is not evident how to extract growth information ($\delta_{\rm m}(a)$) from a function $\gamma$ which, as equation (\ref{defuno}) shows, is degenerate with background information (specifically $\Omega_{\rm m}(a)$). In other terms, the growth index parameters are model dependent quantities that can be estimated only after a specific model for the evolution of the background quantity $\Omega_{\rm m}(a)$
is supplied. Therefore it is not straightforward to use the likelihood function in the $\gamma$-plane to reject dark energy scenarios for which the background quantities do not scale as in the fiducial.
Because of this, up to now, growth index measurements in a given fiducial were used to rule out only the null-hypothesis that the fiducial correctly describes large scale structure formation processes. Growth index estimates were not used to gauge the viability of a larger class of alternative gravitational scenarios.
A reverse argument also holds and highlights the importance of working out a
growth index parameterization which is able to capture the finest details of the exact numerical solution while establishing, at the same time, the functional dependence on background observables.
Indeed, once a given gravitational paradigm is assumed as a prior, the degeneracy of growth index measurements with background information can be exploited to constrain the
background parameters of the resulting cosmological model, directly using growth data. Therefore, by expressing the growth index as a function of specific dark energy or dark gravity parameters one can
test for the overall coherence of cosmological information extracted from the joint analyses of smooth and inhomogeneous observables.
In this paper we address some of these issues by means of a new parameterization of the growth index. The main virtues of the approach are that the proposed formula
is {\it a)} flexible, i.e.~it describes predictions of a wide class of cosmic acceleration models; {\it b)} accurate, i.e.~it performs better than alternative models in reproducing exact numerical results; {\it c)} `observer friendly', i.e.~accuracy is achieved with a minimum number of parameters;
and {\it d)} `theorist friendly', i.e.~the amplitude of the fitting parameters can be directly and mechanically related, in an analytic way, to predictions of theoretical models.
The paper is organized as follows. We define the parameterization of the growth index in \S \ref{sec:parametrizing}, and we discuss its accuracy in describing various dark energy models, such as smooth and clustering quintessence, in \S \ref{sec:standard}. In \S \ref{sec:modified} we apply the formalism to modified gravity scenarios; in particular, we discuss the DGP \cite{DGP} and the parameterized post-Friedmann \cite{FerSko10} scenarios. In \S \ref{sec:constraining} we confront the studied models with current (and simulated future) data. Conclusions are drawn in \S \ref{sec:conclusions}. Throughout the paper, if not specified differently, the flat Friedmann-Lema\^itre-Robertson-Walker cosmology with $Planck$ parameters $\Omega_{\rm m,0} =0.315$, $\sigma_{8,0}=0.835$ \cite{Ade:2013zuv} is referred to as the {\it reference} $\Lambda$CDM model.
\section{Parameterizing the growth index}
\label{sec:parametrizing}
In this section we introduce our notation and we give the theoretical background for analyzing the clustering of matter on large linear scales.
For a large class of dark energy models and gravitational laws, at least on scales where the quasi-static approximation applies,
the dynamics of linear matter perturbations $\delta_{\rm m}$ can be effectively described by the following second order differential equation
\begin{align}
\ddot{\delta}_{\rm m} + (1+\nu)\:\! H \:\! \dot{\delta}_{\rm m} - 4\:\! \pi \:\! G \:\! \mu \:\! \rho_{\rm m} \:\! \delta_{\rm m} = 0,
\label{eq:matter_density_fluctuations}
\end{align}
\noindent where overdot ($\,\dot{}\,$) denotes derivation with respect to cosmic time $t$ and
the dimensionless response $\mu \equiv G_{\rm eff}/G$ and damping $\nu$ coefficients are general functions of cosmic time and possibly of the spatial Fourier frequency. In the simplest case, General Relativity, even augmented by a minimally coupled scalar field, corresponds to $\mu=\nu=1$, while for modified gravity models we expect, in general, $\mu \neq 1$.
The specific forms of $\mu$ predicted in higher dimensional brane models \cite{DGP} and in scalar tensor theories of gravity \cite{FerSko10} are given in \S \ref{sec:DGP} and \S \ref{sec:skofe} respectively.
See, instead, \cite{kun} for a more elaborate model of modified gravity which does not reduce to the standard form of Eq. \ref{eq:matter_density_fluctuations}.
It is convenient, from the observational perspective, to express Eq. \ref{eq:matter_density_fluctuations}
in terms of the linear growth rate $f$, a cosmic observable defined as the logarithmic
derivative of the matter density fluctuations $f=\frac{d\ln \delta_{\rm m}}{d \ln a}$. We obtain
\begin{align}
f'+f^2+\Big(1+\nu +\frac{H'}{H}\Big)\:\! f - \frac{3}{2}\!\: \mu\, \Omega_{\rm m} = 0
\label{eq:f_H_General}
\end{align}
where prime ($'$) denotes derivation with respect to $\ln a$. It is standard practice to look for solutions of Eq. \ref{eq:f_H_General} with the form
\begin{align}
f=\Omega_{\rm m}(a)^{\,\gamma(a)}
\label{eq:f_approximation}
\end{align}
where $\gamma$ is called the growth index already introduced in \S \ref{sec:introduction}. Although, in some special cases, an exact analytic solution can be
explicitly given (for example, in terms of a hypergeometric function if $\nu=\mu=1$ and the DE {\it EoS} is constant \cite{sil}),
the approximate parametric solution $\gamma=constant$ has been widely advocated \cite{sp,kn,ius,pg,sa,nep,gp,cdp1,bbs,cdp2,car,basil}. Indeed,
the growth rate of matter density perturbations is highly sensitive to the kinematics of the smooth expansion of the Universe,
as well as to the gravitational instability properties of the background fluids. The approximate ansatz $\gamma=constant$ interestingly and
neatly splits the background (basis) and the inhomogeneous contribution (exponent)
to the linear growth rate of structures. Gravitational models beyond standard general relativity,
which predict identical background expansion histories, i.e.~identical observables $H(a)$ and $\Omega_{\rm m}(a)$,
may thus be singularized and differentiated by constraining the amplitude of $\gamma$.
However, solutions with a constant growth index of (sub-horizon) matter density perturbations
cannot be realized in quintessence models of dark energy over the whole period from the beginning of matter domination up into the distant future.
Moreover, it is likely that different gravitational models tuned to reproduce the same expansion history $H$
display different evolutions of $\Omega_{\rm m}$. This implies that the amplitudes of $\gamma$ inferred in two different cosmological models, characterized by
distinctive evolution laws of the matter density parameter, cannot be directly compared.
In order to increase the accuracy of predictions, we look for a more elaborate parametric form of the growth index: we express $\gamma$ as a function of $\ln \Omega_{\rm m}$. Plugging (\ref{eq:f_approximation}) into (\ref{eq:f_H_General}) we obtain
\begin{align}
\omega '\:\! \Big(\gamma(\omega ) + \omega \:\! \frac{d\gamma}{d\omega }\Big) + e^{\omega \:\! \gamma(\omega )} + 1 + \nu(\omega ) + \frac{H'}{H}(\omega ) - \frac{3}{2}\!\; \mu(\omega )\;\! e^{\omega (1-\gamma(\omega ))} = 0
\label{eq:f_H_x}
\end{align}
where we set $\omega =\ln \Omega_{\rm m}$. We assume that all non-constant coefficients ($\omega '$, $\tfrac{H'}{H}$, $\mu$, $\nu$) are well-defined functions of $\omega $, and are
completely specified once a theory of gravity is considered. Since the numerical solution of equation (\ref{eq:f_H_x}) for a $\Lambda$CDM model
indicates that $\gamma$ is an almost linear function of $\omega $, and guessing that viable alternative theories of gravity
should predict minimal, but detectable, deviations from standard model results, we suggest to describe the time evolution of the
growth index as
\begin{align}
\gamma(\omega ) = \sum_{n=0}^{N} \gamma_n \frac{\omega ^n}{n!} + \mathcal{O}(\omega ^{N\:\!\! +\:\!\! 1})
\label{eq:gamma_Taylor}
\end{align}
where $\{ \gamma_0,\gamma_1,\ldots,\gamma_N\}$ are constant parameters. Expressing $\gamma$ via a series expansion has been proposed earlier by \cite{WanSte98} and improved later by
\cite{Gong}. In those works $\omega = 1-\Omega_{\rm m}$, whereas here we set $\omega =\ln \Omega_{\rm m}$. Demonstrating the gain in accuracy achieved by this change of variable
is one of the goals of this paper. A different, but equally important, goal is to show that with this choice we can work out a closed analytic formula that predicts the amplitude of the coefficients $\gamma_n$ (up to an arbitrary order $N$) once any given dark energy or gravitational model is specified. Specifically, we define the structural parameters of the formalism
\begin{align}\label{eq:struct}
\mathcal{X}_n := \big[\tfrac{d^n}{d\omega ^n}\big(\omega '\big)\big]_{\omega =0} \;, \quad \mathcal{H}_{n} := \big[\tfrac{d^n}{d\omega ^n}\big(\tfrac{H'}{H}\big)\big]_{\omega =0} \;, \quad \mathcal{M}_{n} := \big[\tfrac{d^n}{d\omega ^n}\big(\mu \big)\big]_{\omega =0} \;, \quad \mathcal{N}_{n} := \big[\tfrac{d^n}{d\omega ^n}\big(\nu \big)\big]_{\omega =0}
\end{align}
where $n$ is a natural number and $\tfrac{d^0}{d\omega ^0} \equiv 1$. We obtain (see \S \ref{sec:Annexe})
\begin{align}
\gamma_0 = \frac{3\:\!(\mathcal{M}_0 +\mathcal{M}_{1}) - 2\:\!( \mathcal{H}_{1} + \mathcal{N}_{1})}{2 + 2\:\! \mathcal{X}_{1} + 3 \:\! \mathcal{M}_0}
\label{eq:gamma_0}
\end{align}
For $n \geq 1$, we have
\begin{align}
\gamma_n =& \:\! 3 \frac{\mathcal{M}_{n\:\!\! +\:\!\! 1}\:\! +\:\! \sum_{k=1}^{n\:\!\! +\:\!\! 1}\:\!\! \binom {n\:\!\! +\:\!\! 1}{k} \:\! \mathcal{M}_{n\:\!\! +\:\!\! 1\:\!\! -k}\:\! B_{k}\:\!\!(1-y_1,-y_2,-y_3,\ldots ,-y_k)}{(n+1) \big(2+2\:\!(n+1)\mathcal{X}_1 + 3\:\! \mathcal{M}_0 \big)} \nonumber \\
&- 2 \frac{ \:\! B_{n\:\!\! +\:\!\! 1}\:\!\!(y_1,y_2,\ldots ,y_{n\:\!\! +\:\!\! 1})\:\! + \:\! \sum_{k=2}^{n\:\!\! +\:\!\! 1}\:\!\! \binom{n\:\!\! +\:\!\! 1}{k}\:\! \mathcal{X}_{k} \:\! (n+2-k)\:\! \gamma_{n\:\!\! +\:\!\! 1\:\!\! -\:\!\! k}\:\! + \:\! \mathcal{H}_{n\:\!\! +\:\!\! 1} + \:\! \mathcal{N}_{n\:\!\! +\:\!\! 1} } {(n+1) \big(2+2\:\!(n+1)\mathcal{X}_1 + 3\:\! \mathcal{M}_0 \big)}
\label{eq:gamma_n}
\end{align}
where
\begin{align}
y_i=
\begin{cases}
i\:\! \gamma_{i-1} & \text{if }i \leq n \\
0 & \text{if } i=n+1
\end{cases}
\end{align}
and the Bell polynomials are defined by
\begin{align}
B_k(x_1,x_2,\ldots ,x_k)=\sum \tfrac{k!}{j_1! j_2!\cdots j_k!} \big(\tfrac{x_1}{1!}\big)^{j_1} \big(\tfrac{x_2}{2!}\big)^{j_2}\cdots \big(\tfrac{x_k}{k!}\big)^{j_k}
\end{align}
where the sum is taken over all $k$-tuples $\{j_1,j_2,\ldots ,j_k\}$ of non-negative integers satisfying
\begin{align}
1 \times j_1+2 \times j_2 + \ldots + k\times j_k = k.
\end{align}
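As a concrete sanity check of the machinery, Eq. (\ref{eq:gamma_0}) is easily evaluated directly. The following minimal Python sketch (an illustration of ours, not part of the derivation, with structural parameter values anticipated from \S \ref{sec:w} and \S \ref{sec:DGP}) recovers $\gamma_0=6/11$ for $\Lambda$CDM and $\gamma_0=11/16$ for the flat DGP model:
\begin{verbatim}
from fractions import Fraction as F

def gamma0(X1, H1, M0, M1, N1):
    # leading growth-index coefficient, Eq. (eq:gamma_0):
    # gamma_0 = [3(M_0+M_1) - 2(H_1+N_1)] / (2 + 2 X_1 + 3 M_0)
    return (3*(M0 + M1) - 2*(H1 + N1)) / (2 + 2*X1 + 3*M0)

# LambdaCDM (w = -1): X_1 = -3w = 3, H_1 = 3w/2 = -3/2, M_0 = 1, M_1 = N_1 = 0
print(gamma0(F(3), F(-3, 2), F(1), F(0), F(0)))        # -> 6/11

# flat DGP: X_1 = 3/2, H_1 = -3/4, M_0 = 1, M_1 = 1/3, N_1 = 0
print(gamma0(F(3, 2), F(-3, 4), F(1), F(1, 3), F(0)))  # -> 11/16
\end{verbatim}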
In the following we will test the precision of the approximation (\ref{eq:gamma_Taylor}) for different accelerating models and for various cosmological parameters. In particular we will show that two coefficients $\gamma_0$ and $\gamma_1$ are sufficient for a large class of models.
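A minimal numerical sketch of such a test, assuming only \texttt{numpy} and \texttt{scipy} are available, integrates Eq. (\ref{eq:f_H_General}) for the {\it reference} $\Lambda$CDM model, recovers the exact growth index through Eq. (\ref{defuno}), and compares it with the truncation of Eq. (\ref{eq:gamma_Taylor}) at $N=1$, using the $\Lambda$CDM coefficients derived in \S \ref{sec:w}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.315                                  # reference LambdaCDM

def Om(a):                                   # Eq. (eq:omegam) with w = -1
    return 1.0 / (1.0 + (1.0 - Om0) / Om0 * a**3)

def rhs(lna, f):                             # Eq. (eq:f_H_General), mu = nu = 1
    Oma = Om(np.exp(lna))                    # here H'/H = -(3/2) Omega_m
    return -f**2 - (2.0 - 1.5 * Oma) * f + 1.5 * Oma

# start deep in matter domination, where f -> 1
sol = solve_ivp(rhs, [np.log(1e-4), 0.0], [1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

for z in (2.0, 1.0, 0.5, 0.0):
    lna = np.log(1.0 / (1.0 + z))
    f = sol.sol(lna)[0]
    om = np.log(Om(np.exp(lna)))             # omega = ln Omega_m
    g_exact = np.log(f) / om                 # growth index, Eq. (defuno)
    g_fit = 6.0/11.0 - 15.0/2057.0 * om      # Eq. (eq:gamma_Taylor), N = 1
    print(f"z = {z:3.1f}:  {g_exact:.5f} (exact)  {g_fit:.5f} (fit)")
\end{verbatim}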
\section{Standard Gravity}
\label{sec:standard}
In this section we frame our analysis in a Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) model of the universe, a metric model characterized by two degrees of freedom: the spatial curvature of the universe $k$ and the scale factor $a(t)$. We therefore assume that gravity is ruled by the standard Einstein field equations, and we further assume that the hyper-surfaces of constant time are flat ($k=0$), as observations consistently suggest \cite{Ade:2013zuv}, and that, at least during the late stage of its evolution, i.e.~the period we are concerned with, the universe is only filled with cosmic matter and
dark energy. These are perfect fluid components whose $EoS$, i.e.~the ratio $w(a)=p(a)/\rho(a)$ between pressure and density, is zero and lower than $-1/3$, respectively. We refer to the particular case in which the DE $EoS$ is $w=-1$ as the $\Lambda$CDM model. The evolution of the physical density of matter and DE is given by
\begin{equation}
\rho(a)=\rho_0 \;\! a^{-3(1+\tilde{w}(a))}
\label{eq:conservation}
\end{equation}
\noindent where
\begin{equation}
\tilde{w}(a)=\frac{1}{\ln(a)}\int_{1}^{a} w(a') \;\! d\ln a',
\end{equation}
\noindent while the expansion rate of the cosmic metric, or simply the Hubble parameter $H(a)$, is given by
\begin{equation}
H^2(a)=H_0^2\left[ \Omega_{\rm m,0} \;\! a^{-3}+(1-\Omega_{\rm m,0}) \;\! a^{-3(1+\tilde{w}(a))}\right]
\label{eq:Friedmann}
\end{equation}
\noindent where $\Omega_{\rm m,0}$ is the present day reduced density of matter. In this notation, the evolution equations for the reduced density of matter and DE are
\begin{equation}
\Omega_{\rm m}(a)=\left ( 1+\frac{ 1-\Omega_{\rm m,0}}{\Omega_{\rm m,0}}\;\! a^{-3\tilde{w}(a)} \right )^{-1}
\label{eq:omegam}
\end{equation}
\noindent and
\begin{equation}
\Omega_{\rm DE}(a)=1-\Omega_{\rm m}(a)
\end{equation}
\noindent respectively.
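As a concrete illustration (a minimal sketch; the CPL form $w(a)=w_{0}+w_{a}(1-a)$ is used here only as a familiar example, for which the integral above gives $\tilde{w}(a)=w_{0}+w_{a}\big(1-\frac{a-1}{\ln a}\big)$), Eq. (\ref{eq:omegam}) can be evaluated as follows:
\begin{verbatim}
import numpy as np

def wtilde(a, w0, wa):
    # effective EoS for w(a) = w0 + wa (1 - a); note wtilde(1) = w0
    a = np.atleast_1d(np.asarray(a, dtype=float))
    ratio = np.ones_like(a)                  # limit of (a-1)/ln(a) at a = 1
    m = ~np.isclose(a, 1.0)
    ratio[m] = (a[m] - 1.0) / np.log(a[m])
    return w0 + wa * (1.0 - ratio)

def Omega_m(a, Om0, w0, wa):
    # reduced matter density, Eq. (eq:omegam)
    a = np.atleast_1d(np.asarray(a, dtype=float))
    return 1.0 / (1.0 + (1.0 - Om0) / Om0 * a**(-3.0 * wtilde(a, w0, wa)))

print(Omega_m([0.25, 0.5, 1.0], 0.315, -1.0, 0.0))   # LambdaCDM limit
\end{verbatim}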
Under these hypotheses, the relevant quantities in Eq. (\ref{eq:f_H_x}) become
\begin{align}
\omega '(\omega ) \equiv \tfrac{d \ln \Omega_{\rm m}}{d \ln a} = 3 \big(1-\Omega_{\rm m}(a) \big) w(a) \;, \quad \tfrac{H'}{H}(\omega ) \equiv \tfrac{d\ln H}{d\ln a} = -\tfrac{3}{2}\big(1+(1-\Omega_{\rm m}(a))w(a)\big)
\label{eq:relations_SG}
\end{align}
where $w(a)$ is the (possibly varying) $EoS$ parameter of the dark energy fluid. We will illustrate the performances of the proposed parameterization (cf. Eq. (\ref{eq:gamma_Taylor}))
first by assuming that dark energy is a smooth component which does not cluster on any scale, i.e.~its energy density is a function of time only. We will then increase the degrees of freedom associated to
this hypothetical component, by assuming that dark energy may become gravitationally unstable, that is its effective density varies also as a function of space.
\subsection{Smooth Dark Energy}
\label{sec:smooth}
The propagation of instabilities in the energy density and pressure of a fluid can be heuristically understood
in terms of two kinematical quantities: the speed of sound $c^2_s=\delta p/\delta \rho$ and the sound horizon $L_s=c_s/H$.
On scales larger than the sound horizon, perturbations can grow and collapse. On scales below $L_s$ the fluid is smooth.
As an example of this last case, one can consider a canonical, minimally coupled, scalar field such as quintessence.
Since $c_s=1$, the sound horizon coincides with the cosmic horizon
and the quintessence fluid can be considered homogeneous on all (observationally) relevant scales.
In our notation, such a smooth dark energy component is effectively described by setting
\begin{align}
\mu(\omega )=1 \quad \text{and} \quad \nu(\omega ) =1.
\label{eq:relations_SDE}
\end{align}
into Eq. (\ref{eq:f_H_x}). Setting
\begin{align}
\mathcal{W}_n := \big[\tfrac{d^n}{d\omega ^n}\big(w(a)\big)\big]_{\omega =0}\;
\label{eq:gamma_coeff_w}
\end{align}
we find
\begin{align}
\mathcal{X}_0 = 0 \;, \quad \mathcal{X}_n = -3\:\! \sum_{i=0}^{n-1}\binom {n}{i} \mathcal{W}_i \;, \quad \mathcal{H}_0 = -\frac{3}{2} \;, \quad \mathcal{H}_n = -\frac{1}{2} \mathcal{X}_n = \frac{3}{2} \sum_{i=0}^{n-1}\binom {n}{i} \mathcal{W}_i \;,
\label{eq:gamma_coeff_x_h}
\end{align}
for $n=1,2,3,$ etc. Smooth models of dark energy make physical sense only for $w \ge -1$. This {\it null energy condition}, indeed, prevents both ghost and gradient instabilities (e.g. \cite{PSM}). Notwithstanding,
sound theories of DE can be constructed also by relaxing the null energy condition, for example, by imposing the Lagrangian density of the gravitational field to contain
space derivatives higher than two, i.e.~terms that become important at some high momentum scale (see \S 2.2 of Ref.~\cite{Creminelli:2008wc}).
In what follows, we will thus consider also super acceleration scenarios $w<-1$, their effective, phenomenological character being understood.
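We also note that the relations above lend themselves to mechanical verification; for a constant {\it EoS}, for instance, the following \texttt{sympy} sketch (our own illustration) reproduces $\mathcal{X}_n=-3w$ and $\mathcal{H}_n=3w/2$ for $n\geq 1$, anticipating Eq. (\ref{eq:coeff_GR_1}):
\begin{verbatim}
import sympy as sp

om, w = sp.symbols('omega w')
xprime = 3 * w * (1 - sp.exp(om))                      # Eq. (eq:relations_SG)
hprime = -sp.Rational(3, 2) * (1 + (1 - sp.exp(om)) * w)

print([sp.simplify(sp.diff(xprime, om, n).subs(om, 0)) for n in range(4)])
# -> [0, -3*w, -3*w, -3*w]
print([sp.simplify(sp.diff(hprime, om, n).subs(om, 0)) for n in range(4)])
# -> [-3/2, 3*w/2, 3*w/2, 3*w/2]
\end{verbatim}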
We now discuss two distinct scenarios. First, we assume that the dark energy {\it EoS} varies weakly with cosmic time, i.e.~that it can be effectively described in terms
of a constant parameter $w$. Then we complete our analysis by computing the explicit amplitude of the growth index parameters in a scenario in which the DE {\it EoS} is free to vary.
\begin{figure}[t] \vspace{-1cm}
\centering
\includegraphics[trim = 0cm 0cm 0cm 0cm, scale=0.39]{./p1fig1a.ps}
\includegraphics[trim = 0cm 0cm 0cm 0cm, scale=0.39]{./p1fig1b.ps}
\includegraphics[trim = 0cm 0cm 0cm 0cm, scale=0.39]{./p1fig1c.ps}
\includegraphics[trim = 0cm 0cm 0cm 0cm, scale=0.39]{./p1fig1d.ps}
\caption{{\it Upper left panel:} relative imprecision of the approximation (\ref{eq:gamma_Taylor}) for the growth index $\gamma$ in a smooth Dark Energy model with constant equation of state $w$ as a function of redshift and of the expansion order. The exact value of the growth index is obtained by solving numerically Eq. (\ref{eq:f_H_General}) and by using Eq. (\ref{defuno}).
{\it Upper right panel:} amplitude of the Taylor series coefficients of (\ref{eq:gamma_Taylor}) (normalized to $\gamma_0$)
as a function of the expansion order (up to 6). The relative accuracy increases roughly by one order of magnitude going from one order to the next higher one. {\it Lower left panel:} relative imprecision at $z=0$ as a function of $\Omega_{\rm m,0}$ in a $\Lambda$CDM model showing the stability of the approximation within a sensible range of matter density values. {\it Lower right panel:} relative imprecision as a function of DE equation of state parameter $w$.}
\label{fig:wCDM}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{./p1fig2.ps}
\caption{The precision of various parameterizations \cite{pee80,pg,Gong,FuWuYu2009} of the growth index is shown for the {\it reference} $\Lambda$CDM model.
The precision is estimated as the relative difference with respect to the numerical reconstruction of the growth index $\gamma(z)$ in the reference model.}
\label{fig:comparison}
\end{figure}
\subsubsection{DE with a constant equation of state $w$}\label{sec:w}
In this case, in Eq.~\ref{eq:gamma_coeff_w} we have $\mathcal{W}_0=w$ and $\mathcal{W}_n=0$ for $n \geq 1$. We therefore obtain from Eq. (\ref{eq:gamma_coeff_x_h})
\begin{align}
\mathcal{X}_0 = 0 \;, \quad \mathcal{X}_{n} = -3\:\! w \;,\quad \mathcal{H}_0 = -\tfrac{3}{2} \;, \quad \mathcal{H}_{n} = \tfrac{3}{2} \:\! w \;, \quad \mathcal{M}_{0} = \mathcal{N}_0 = 1\;,\quad \mathcal{M}_{n} = \mathcal{N}_{n} = 0 \;, \label{eq:coeff_GR_1}
\end{align}
where $n=1,2,3,$ etc. Replacing (\ref{eq:coeff_GR_1}) in (\ref{eq:gamma_0}) and (\ref{eq:gamma_n}) we find:
\begin{align}
\gamma_0 = \tfrac{3\:\! (1-w)}{5-6\:\! w} \;, \quad \gamma_1 = -\tfrac{3\:\!(1-w)(2-3\:\! w)}{2\:\!(5-12\:\! w)(5-6\:\! w)^2} \;,\quad \gamma_2 = \tfrac{(1-w)(485-3\:\! w\:\!(1015-3\:\! w\:\! (559-270\:\! w\:\! )))}{10\:\! (5-18\:\! w)(5-12\:\! w)(5-6\:\! w)^3}
\label{eq:gamma012wCDM}
\end{align}
In particular, for the $\Lambda$CDM model (i.~e.~for $w=-1$), we find
\begin{align}
\gamma_0^{\Lambda \rm CDM} = \frac{6}{11} \simeq 0.54545 \;,\quad
\gamma_1^{\Lambda \rm CDM} = -\frac{15}{2\:\! 057} \simeq -0.00729 \;, \quad
\gamma_2^{\Lambda \rm CDM} = \frac{410}{520\:\! 421} \simeq 0.00079
\label{eq:gamma012LambdaCDM}
\end{align}
For $\Omega_{\rm m,0}=0.315$, the relative error $\delta\gamma/\gamma < 0.1\%$ at order 1, $\delta\gamma/\gamma < 0.02\%$ at order 2 and $\delta\gamma/\gamma < 0.002\%$ at order 3.
More generally, Figure \ref{fig:wCDM} (upper right panel) shows that the accuracy of the approximation increases by roughly a factor of 10 as soon as a new coefficient is switched on.
This same figure (lower right panel) shows that the relative accuracy depends only slightly on the equation of state parameter $w$, although the error is in general slightly larger for larger $w$.
The relative error of the approximation is also quite stable as we vary $\Omega_{\rm m,0}$
within a sensitive range of values $\Omega_{\rm m,0} = [0.2,0.4]$ (see the lower left panel in Figure \ref{fig:wCDM}).
We remark that the coefficients $\gamma_0$ and $\gamma_1$ are the same\footnote{up to the sign of $\gamma_1$} when expanding the growth index in powers of $(1-\Omega_{\rm m})$, whereas $\gamma_2$ and higher orders differ. However, as a consequence of developing in $\ln \Omega_{\rm m}$, already at first order our approximation is, depending on cosmology, 5 to 10 times more accurate at $z=0$.
This is seen in Figure~\ref{fig:comparison}, where we also compare the accuracy of Eq.~\eqref{eq:gamma012wCDM} with various other parameterizations of the growth index.
Not only is Eq.~\eqref{eq:gamma_Taylor} more precise than formulas
based on perturbative expansions around $z=\infty$; in the redshift range relevant for dark energy studies, that is $z>0.5$, it is also more accurate than the phenomenological \cite{FuWuYu2009} or perturbative \cite{pg} expressions normalized at $z=0$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.333]{./p1fig3a.ps}
\includegraphics[scale=0.333]{./p1fig3b.ps}
\includegraphics[scale=0.333]{./p1fig3c.ps}
\caption{{\it Left panel:} Relative error, as a function of the $EoS$ parameters ($w_{\rm o}, w_{a} $), in approximating the growth index of smooth dark energy models
with the first order model $\gamma_0+\gamma_1(1-\Omega_{\rm m})$ of \cite{WanSte98}. {\it Middle panel}: Accuracy of the phenomenological fitting formula of \cite{lin}. {\it Right panel}: Accuracy of the first order growth index model $\gamma_0+\gamma_1 \ln \Omega_{\rm m}$. In all plots we use the same color scale and we assume a flat cosmological model with $\Omega_{\rm m}=0.315$. The relative error represents the maximal imprecision in the redshift range surveyed by a Euclid-like survey ($0.7<z<2$). }
\label{fig:w(a)CDM_a}
\end{figure}
\subsubsection{Varying equation of state $w(a)$}
\label{sec:w(a)}
We first compute the coefficients $\mathcal{W}_n$ (see Eq. \ref{eq:gamma_coeff_w}).
If we assume $w_i \equiv w(a=0) < - \frac{1}{3}$, then the limit $\omega =\ln \Omega_{\rm m} \rightarrow 0$ is equivalent to the limit $ a \rightarrow 0$. We obtain
\begin{align}
\label{eqs:coeff_w}
\mathcal{W}_0 = \big[w(a)\big]_{a=0}\,,\;
\mathcal{W}_1 = \big[(\omega ')^{-1} w'\big]_{a=0} \,,\;
\mathcal{W}_2 = \big[(\omega ')^{-2}\big(3w'w-(w')^2/w+w''\big)-(\omega ')^{-1} w'\big]_{a= 0} \,,
\end{align}
etc., where $\omega '= 3w(1-e^{\omega })$ from equation (\ref{eq:relations_SG}).
The formalism requires knowledge of the $n^{th}$ order derivatives of the DE {\it EoS} $w(a)$ for $a \rightarrow 0$, that is, well beyond the redshift domain where the expansion rate of the universe is expected to be sensitive to contributions from the dark energy density. Noting that $\omega ' \rightarrow 0$ when $a \rightarrow 0$, the finiteness of the coefficients $\mathcal{W}_i$ (at least up to $\mathcal{W}_2$) can be enforced by requiring the {\it EoS} to satisfy the following equation
\begin{align}
3\:\! w'\:\! w - (w')^2/w + w'' = 0 \label{eq:condition_W_finite}
\end{align}
whose most general solution is
\begin{align}
w(a) = w_i \Big(1 - \frac{w_{a}}{3 w_{0}^2}a^{-3w_i}\Big)^{-1} \;, \quad
w_i = w_{0}\Big(1-\frac{w_{a}}{3w_{0}^2}\Big)
\label{eq:wi_finite}
\end{align}
where $w_{0}=w(a=1)$, $w_{a}=-\big[\tfrac{dw}{da}\big]_{a=1}$. Note that by linearizing the previous expression around $a=1$ one recovers the standard parameterization $w(a) = w_{0} + w_{a} (1-a)$ \cite{ChevallierPolarski2000}. For this specific equation of state, we have
\begin{align}
\displaystyle
\mathcal{W}_0= w_{0} \Big(1-\frac{w_{a}}{3w_{0}^2}\Big)^{-1} \;, \quad \mathcal{W}_n = (-1)^n \frac{w_{a} \Omega_{\rm m,0}}{3w_{0} (1-\Omega_{\rm m,0})} \; \text{for} \; n>0
\label{eq:Wi_finite}
\end{align}
and, as a consequence, the coefficients of the series expansion which defines the growth index are
\begin{subequations}\label{eq:gammanw(a)CDM}
\begin{align}
\gamma_0 &= \tfrac{3(1-w_i)}{5-6 w_i} \label{eq:gamma0w(a)CDM}\\
\gamma_1 &= -\tfrac{3(1-w_i)(2 - 3 w_i)-6(5-6w_i)\mathcal{W}_1}{2(5-12w_i)(5-6w_i)^2}\label{eq:gamma1w(a)CDM}\\
\gamma_2 &= \tfrac{(1-w_i)(2-3w_i)(11-30w_i)-3(5-6w_i)(23-6w_i(10-6w_i))\mathcal{W}_1 -72(5-6w_i)^2\mathcal{W}_1^2+3(5-12w_i)(5-6w_i)^2\,\mathcal{W}_2}{(5-12w_i)(5-18w_i)(5-6w_i)^3}\label{eq:gamma2w(a)CDM}.
\end{align}
\end{subequations}
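Both steps above can be checked mechanically. The following sketch (ours, for illustration) first verifies symbolically that Eq. (\ref{eq:wi_finite}) satisfies the finiteness condition (\ref{eq:condition_W_finite}), and then evaluates $\gamma_0$ and $\gamma_1$ from Eqs. (\ref{eq:gamma0w(a)CDM})--(\ref{eq:gamma1w(a)CDM}) and (\ref{eq:Wi_finite}); the pair $(w_{0},w_{a})=(-0.9,0.3)$ is a purely illustrative choice:
\begin{verbatim}
import sympy as sp

# symbolic check: Eq. (eq:wi_finite) solves 3 w'w - (w')^2/w + w'' = 0
lna, wi, c = sp.symbols('lna w_i c')
w = wi / (1 - c * sp.exp(-3 * wi * lna))     # c stands for wa/(3 w0^2)
wp, wpp = sp.diff(w, lna), sp.diff(w, lna, 2)
print(sp.simplify(3 * wp * w - wp**2 / w + wpp))       # -> 0

def gamma01(w0, wa, Om0):
    wi_ = w0 * (1 - wa / (3 * w0**2))
    W1 = -wa * Om0 / (3 * w0 * (1 - Om0))    # Eq. (eq:Wi_finite), n = 1
    g0 = 3 * (1 - wi_) / (5 - 6 * wi_)
    g1 = -(3*(1 - wi_)*(2 - 3*wi_) - 6*(5 - 6*wi_)*W1) \
         / (2*(5 - 12*wi_)*(5 - 6*wi_)**2)
    return g0, g1

print(gamma01(-1.0, 0.0, 0.315))   # LambdaCDM limit: (6/11, -15/2057)
print(gamma01(-0.9, 0.3, 0.315))   # illustrative (w0, wa)
\end{verbatim}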
The relative variation of the growth index $\gamma(z)$ with respect to the $\Lambda$CDM value, when the {\it EoS} parameters span the range ($-2<w_{0}<-0.8$, $-1.6<w_{a}<1$),
can be as large as $3.7\%$ at $z=0$. As a reference, if $w_{0}$ is fixed to the value $-1$, then one can expect a relative difference as large as $1.5\%$,
a figure larger than the nominal precision with which a Euclid-like survey is expected to constrain $\gamma$, that is $0.7\%$ \cite{euclid2}.
Figure \ref{fig:w(a)CDM_a} shows that within the redshift range that will be surveyed by Euclid ($0.7<z<2$), already when truncating the expansion at first order, the maximal
imprecision with which the redshift scaling of $\gamma$ is reconstructed over all the parameter space is less than 0.1\%.
We are not aware of any theoretically justified model that accounts for possible variability in the DE {\it EoS}, so we compare the accuracy of our parameterization (cf. Eqs.~\eqref{eq:gammanw(a)CDM}) with that obtained using the Wang \& Steinhardt (1998) model \cite{WanSte98}, which is supposed to be an accurate approximation also for
weakly varying dark energy equations of state. Additionally, we also consider the phenomenological fitting formula of \cite{lin},
($\gamma\,=\, 0.55\,+\,[0.05\, \Theta(1+w)\,+\,0.02\, \Theta(-1-w)\,]\,(1\,+\,w(z=1)\,)$, where $\Theta(x)$ is the Heaviside step function
and $w(z=1)$ is the equation of state ({\it EoS}) parameter of the dark energy fluid evaluated at $z=1$).
As can be appreciated by inspecting Figure \ref{fig:w(a)CDM_a}, the gain in precision with respect to both these growth index models can be as high as 1 order of magnitude
in the range ($-2<w_{0}<-0.8, -1.6<w_{a}<1$), a figure which pushes the systematic error well below the expected measurement precision of the Euclid satellite.
\begin{figure}[t]
\centering
\includegraphics[scale=0.47]{./p1fig4a.ps}
\includegraphics[scale=0.47]{./p1fig4b.ps}
\caption{Relative error, as a function of the parameters ($w_{0}, \Omega_{\rm m,0}$), in approximating the growth index of a clustering quintessence model \cite{SefVer11}
with Eq. (53) of \cite{SefVer11} ({\it left panel}) and with Eq. (\ref{eq:gamma_Taylor}) truncated at first order ({\it right panel}).
The relative error, calculated at $z=0.7$, represents the maximal imprecision in the window surveyed by a Euclid-like survey ($0.7<z<2$).}
\label{fig:sefuverni}
\end{figure}
\subsection{Clustering Dark Energy}\label{sec:clustering}
Dark energy affects the process of structure formation not only through its equation of state, but also through its
speed of sound. Indeed, if the speed of sound with which dark energy perturbations propagate drops below the speed of light, then dark energy
inhomogeneities grow and couple gravitationally to matter perturbations. As a consequence, they may become detectable on correspondingly more accessible (though typically still large)
scales. The influence of an unstable dark energy component on the clustering of matter can be effectively described
by switching on the additional degrees of freedom $\mu$ and $\nu$ in Eq. (\ref{eq:matter_density_fluctuations}).
As an archetype of this class of phenomena, we consider the clustering dark energy model with vanishing speed of sound introduced in \cite{SefVer11}.
In this model, the dark energy {\it EoS} is assumed to be constant, the speed of sound is effectively null, and we have
\begin{align}
\mu(\omega ) = 1 + (1+w) \frac{1-e^{\omega}}{e^{\omega}} \;, \quad
\nu(\omega ) = 1 + \frac{3\:\!(1+w)\:\! w\:\!(1-e^{\omega})}{1+w(1-e^{\omega})}
\label{eq:Clustering_mu_nu}
\end{align}
As a consequence, the structural parameters of our formalism are
\begin{subequations}\label{eq:CQ_M_N}
\begin{align}
\mathcal{M}_0 = &\; 1\,,\; & \mathcal{M}_n = &\; (-1)^n(1+w) \,, \; n \geq 1
\label{eq:CQ_M} \\
\mathcal{N}_0 = &\; 1\,,\; & \mathcal{N}_1 = &\; -3 w(1+w)\,,\quad \mathcal{N}_2 = (1+2w)\mathcal{N}_1\,,\quad \mathcal{N}_3=(1+6(1+w)w)\mathcal{N}_1\,,\,\cdots
\label{eq:CQ_N}
\end{align}
\end{subequations}
while $\mathcal{X}_n$ and $\mathcal{H}_n$ are the same as in Eq.~\eqref{eq:coeff_GR_1}. Using Eq.~\eqref{eq:gamma_n}, it follows immediately that the growth index coefficients are
\begin{subequations}\label{eq:claqui}
\begin{align}
\gamma_0 &= \tfrac{6\:\! w^2}{5-6\:\! w} \\
\gamma_1 &= \tfrac{3\:\! w^2(75-70\:\! w- 78\:\! w^2 + 72\:\! w^3)}{(5-12\:\! w)(5-6\:\! w)^2}\\
\gamma_2 &= \tfrac{2\:\! w^2(4\:\! 375 - 6\:\! 750\:\! w -13\:\! 800\:\! w^2+33\:\! 480\:\! w^3-5\:\! 544\:\! w^4 + 26\:\! 352 \!\: w^5 + 15\:\! 552\:\! w^6)}{(5-18\:\! w)(5-12\:\! w)(5-6\:\! w)^3}
\end{align}
\end{subequations}
Note that, as for smooth quintessence, the $\Lambda$CDM model is also included in clustering quintessence as the limiting case $w=-1$ (as a matter of fact, by setting $w=-1$ in Eqs.~\eqref{eq:claqui} we find the coefficients \eqref{eq:gamma012LambdaCDM}).
In Figure~\ref{fig:sefuverni} we compare the exact numerical calculation of the growth index $\gamma(z)$ with the approximation (\ref{eq:gamma_Taylor})
using the coefficients $\gamma_0$ and $\gamma_1$ computed above, for characteristic values of $\Omega_{\rm m,0}$ and $w$. By comparing our parameterization of the growth index with that of Sefusatti and Vernizzi (2011) (left panel of Figure \ref{fig:sefuverni}), we can appreciate the gain in precision, which is roughly a factor of 2.
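The size of the effect is already visible at leading order; a few lines of Python (our own illustration) comparing the smooth (Eq. \ref{eq:gamma012wCDM}) and clustering (Eqs. \ref{eq:claqui}) values of $\gamma_0$ show how much more strongly a vanishing sound speed moves the growth index away from the common $\Lambda$CDM value $6/11$ when $w\neq -1$:
\begin{verbatim}
for w in (-1.1, -1.0, -0.9, -0.8):
    g0_smooth = 3 * (1 - w) / (5 - 6 * w)    # smooth DE, Eq. (eq:gamma012wCDM)
    g0_clust = 6 * w**2 / (5 - 6 * w)        # clustering DE, Eqs. (eq:claqui)
    print(f"w = {w:+.1f}:  gamma0 = {g0_smooth:.4f} (smooth)"
          f"  {g0_clust:.4f} (clustering)")
\end{verbatim}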
\section{Modified Gravity}
\label{sec:modified}
In this section we show how to predict the amplitude of the growth index in scenarios where the Einstein field equations are modified.
We will first discuss what has become a reference benchmark for alternative gravitational scenarios, the DGP model \cite{DGP}, and we will then
generalize the discussion by applying the formalism to Parameterized Post-Friedmann models \cite{FerSko10}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{./p1fig5.ps}
\caption{The precision of various parameterizations \cite{pee80,pg,Gong,FuWuYu2009} of the growth index is shown for the flat DGP model with $\Omega_{\rm m,0}=0.213$.
The precision is estimated as the relative difference with respect to the numerical reconstruction of the growth index $\gamma(z)$ in the reference model.}
\label{fig:comparison_DGP}
\end{figure}
\subsection{The growth index as a diagnostic for the DGP model}
\label{sec:DGP}
We consider the Dvali-Gabadadze-Porrati braneworld model \cite{DGP} in which gravity is modified at large distances due to an extra spatial dimension.
The scale factor of the resulting cosmological model evolves according to the following
modified Friedmann equation \cite{Deffayet:2001pu}
\begin{align}
H(a)^2 + \frac{k}{a^2}-\frac{1}{r_{c}}\sqrt{H(a)^2+\frac{k}{a^2}} = \frac{8\pi G}{3} \rho_{\rm m}(a)
\label{eq:DGP_Friedmann}
\end{align}
where $r_{c}$ is the length scale at which gravity starts to leak out into the bulk. Neglecting effects of spatial curvature ($k=0$) and defining $\Omega_{r_c}=1/(2 r_c H_0)^2$, the Hubble rate can be expressed as
\begin{align}
H(a) = H_0 \Big[\sqrt{\Omega_{\rm m,0} a^{-3} + \Omega_{r_c}} + \sqrt{\Omega_{r_c}} \Big]
\label{eq:DGP_Hubble}
\end{align}
which implies the constraint $\Omega_{r_c}=(1-\Omega_{\rm m,0})^2/4$. By differentiating Eq.~(\ref{eq:DGP_Friedmann}) with respect to $\ln a$ and by using the standard energy conservation equation $\dot{\rho}_{\rm m} + 3 H \rho_{\rm m} = 0$, we find, in terms of the fundamental variable of our formalism ($\omega$),
\begin{align}
\omega '(\omega) = -\frac{3\:\! (1-e^{\omega})}{1+e^{\omega}} \qquad \text{and} \qquad \frac{H'}{H}(\omega) = -\frac{3\:\! e^{\omega}}{1+e^{\omega}}.
\label{eq:DGP_x_h}
\end{align}
from which it follows that the flat DGP model is formally equivalent to a DE model with an $EoS$ varying as $w(\omega) =-(1+e^{\omega})^{-1}$. The effective Newton constant in the Poisson equation is \cite{Lue06,cdp1,nep}
\begin{align}
G_{\rm eff}(a) = G \Big[1+\frac{1}{3}\Big(1-2 r_c H(a) \big(1+\frac{1}{3}\frac{H'}{H}\big)\Big)^{-1}\Big]\,,
\end{align}
and, after some algebra, the source term $\mu = G_{\rm eff}/G$ and the damping term $\nu$ in Eq.~(\ref{eq:matter_density_fluctuations}) can be expressed as
\begin{align}
\mu(\omega) = 1- \frac{1-e^{2\omega}}{3(1+ e^{2\omega})} \qquad \qquad \nu(\omega) = 1\,.
\label{eq:DGP_mu_nu}
\end{align}
From Eqs. \eqref{eq:DGP_x_h}, \eqref{eq:DGP_mu_nu} and \eqref{eq:struct} we find the following structural parameters
\begin{subequations}\label{eq:DGP_X_H_M_N}
\begin{align}
\mathcal{X}_n &= \lbrace 0,\tfrac{3}{2},0,-\tfrac{3}{4},\cdots \rbrace \;, \quad \mathcal{H}_n = \lbrace -\tfrac{3}{2},-\tfrac{3}{4},0,\tfrac{3}{8}, \cdots \rbrace \\
\mathcal{M}_n &= \lbrace 1,\tfrac{1}{3},0,-\tfrac{2}{3}, \cdots \rbrace \;, \quad \mathcal{N}_n = \lbrace 1,0,0,0,\cdots \rbrace
\end{align}
\end{subequations}
and, finally, the growth index coefficients
\begin{align}
\gamma_0^{\rm DGP} = \frac{11}{16} = 0.6875 \;, \qquad \gamma_1^{\rm DGP} = -\frac{7}{5\,632} \approx -0.0012 \;, \qquad \gamma_2^{\rm DGP} = -\frac{1051}{22\,528} \approx -0.0467
\label{gamdgp}
\end{align}
In what follows we consider the flat DGP model with $\Omega_{\rm m,0}=0.213$. With this choice of the density parameter, the DGP model best fits the expansion rate of the {\it reference} $\Lambda$CDM model in the
range $0<z<2$. In this non-standard gravity scenario, the maximal relative error, when comparing the parametric reconstruction of the growth rate with numerical results in the redshift range covered by a Euclid-like survey, is $\delta\gamma/\gamma < 1.5\%$ at order 1, and $\delta\gamma/\gamma < 0.5\%$ at order 2, see Figure \ref{fig:comparison_DGP}.
Reaching sub-percent precision thus requires expanding the growth index one order further.
The somewhat degraded precision for the DGP model is due to the fact that the Taylor parameterization proposed in this work (cf. the approximation \eqref{eq:gamma_Taylor}) is specifically tailored
to models in which the growth index deviates minimally from the $\Lambda$CDM prediction. It is nonetheless remarkable that the DGP model, despite its extreme nature,
can still be satisfactorily described by our formalism.
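To make these numbers concrete, the following Python sketch (ours; the redshift grid is arbitrary and spatial flatness is assumed) evaluates the truncated expansion \eqref{eq:gamma_Taylor} with the DGP coefficients \eqref{gamdgp}, using $\Omega_{\rm m}(z)$ derived from the Hubble rate \eqref{eq:DGP_Hubble}.
\begin{verbatim}
import numpy as np

g0, g1, g2 = 11/16, -7/5632, -1051/22528   # Eq. (gamdgp)
Om0 = 0.213
Orc = (1 - Om0)**2 / 4                     # flatness constraint

def Omega_m(z):
    a = 1.0 / (1.0 + z)
    E = np.sqrt(Om0 / a**3 + Orc) + np.sqrt(Orc)   # H/H0, Eq. (eq:DGP_Hubble)
    return Om0 / a**3 / E**2

z = np.linspace(0.7, 2.0, 50)
x = np.log(Omega_m(z))
gamma_1st = g0 + g1 * x                  # first-order truncation of (gamma_Taylor)
gamma_2nd = gamma_1st + 0.5 * g2 * x**2  # second-order truncation
f = Omega_m(z) ** gamma_2nd              # growth rate f = Omega_m^gamma
\end{verbatim}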
\subsection{The growth index in general models of modified gravity}
\label{sec:gravity}
We will now assume that, in viable theories of modified gravity, both background and perturbed observables have values close to those of the standard
cosmological model. That is, deviations from $\Lambda$CDM are hypothesized to be small, comparable with current observational uncertainties.
We further assume that Eq. (\ref{eq:matter_density_fluctuations}) is the master equation for studying matter collapse on sub-horizon scales. In other terms, linear density fluctuations
evolve in the quasi-static Newtonian regime. This being the case,
the structure coefficients of the growth index formalism are given by Eq. (\ref{eq:struct})
and, by a straightforward implementation of formula (\ref{eq:gamma_n}), we obtain
\begin{subequations}\label{eq:MG_gamma_n}
\begin{eqnarray}
\gamma_0 & = & \Bigg[\frac{3(\mu+\frac{d\mu}{d\omega }-w)-2\frac{d\nu}{d\omega } }{2+3\mu - 6w} \Bigg]_{\omega =0} \\
\gamma_{1} & = & \Bigg[ \frac{-(2-3\mu)\gamma_0^2 -6(\mu + \frac{d\mu}{d\omega }-w-2 \frac{dw}{d\omega })\gamma_0 - 3\mu + 6\frac{d\mu}{d\omega } + 3\frac{d^2\mu}{d\omega ^2}-3w-6\frac{dw}{d\omega }- 2\frac{d^2\nu}{d\omega ^2} }{2(2+3\mu - 12w)} \Bigg]_{\omega =0} \\
\gamma_{2} & = & \Bigg[\frac{-(2+3\mu)\gamma_0^3 + 9(\mu+\frac{d\mu}{d\omega })\gamma_0^2 - 6(2-3\mu)\gamma_0\gamma_1 + 3\big(2w+6\frac{dw}{d\omega }+6\frac{d^2w}{d\omega ^2}-3\mu-6\frac{d\mu}{d\omega }-3\frac{d^2\mu}{d\omega ^2}\big)\gamma_0}{3(2+3\mu -18w)} \nonumber \\
& & \;\; + \frac{18\big(2w+4\frac{dw}{d\omega }-\mu -\frac{d\mu}{d\omega } \big)\gamma_1 +3\big(1+w+3\frac{dw}{d\omega }+3\frac{d^2w}{d\omega ^2}+3\frac{d\mu}{d\omega }+3\frac{d^2\mu}{d\omega ^2}+\frac{d^3\mu}{d\omega ^3}\big)-2\frac{d^3\nu}{d\omega ^3}}{3(2+3\mu - 18w)} \Bigg]_{\omega =0}
\end{eqnarray}
\end{subequations}
One can parameterize a large class of modified gravity scenarios by expanding the model dependent quantities $\mu$ and $\nu$ in power series.
Since we are interested in alternative gravitational scenarios that might explain away the dark energy phenomenon, we expect deviations from Einstein's gravity to
become appreciable as $\Omega_{\rm DE}$ starts to deviate from zero. By assuming both $\mu$ and $\nu$ to be analytic at $z=\infty$, we can thus expand (see also \cite{FerSko10})
\begin{subequations}\label{eq:taylor_wmunu}
\begin{eqnarray}
w & = & w_i + w_1 \Omega_{\rm DE} + w_2 \Omega_{\rm DE}^2 + \mathcal{O}(\Omega_{\rm DE}^3)\\
\mu & = & 1+\mu_1 \Omega_{\rm DE}+\mu_2\Omega_{\rm DE}^2+\mu_3 \Omega_{\rm DE}^3+ \mathcal{O}(\Omega_{\rm DE}^4)\\
\nu & = & 1+\nu_1 \Omega_{\rm DE}+\nu_2\Omega_{\rm DE}^2+\nu_3 \Omega_{\rm DE}^3+ \mathcal{O}(\Omega_{\rm DE}^4)
\end{eqnarray}
\end{subequations}
where we denoted by $w_i$ the value of the $EoS$ in the epoch of matter domination as in \S \ref{sec:w(a)}. The corresponding growth index coefficients are
\begin{subequations}\label{eq:PMG_gamma_n}
\begin{eqnarray}
\gamma_0 & = & \tfrac{3(1-w_i-\mu_1)+2\nu_1}{5-6w_i} \\
\gamma_1 & = & \tfrac{\gamma_0^2 - 6(1-w_i+2w_1-\mu_1)\,\gamma_0 + 3(1-w_i+2w_1-3\mu_1 + 2\mu_2)+2(\nu_1 - 2\nu_2)}{10-24w_i} \\
\gamma_2 & = & -\tfrac{5\gamma_0^3 -9(1-\mu_1)\gamma_0^2 -6\gamma_0\gamma_1 + 3(3 - 2w_i + 12w_1 - 12w_2-9\mu_1 + 6 \mu_2)\gamma_0 +18(1-2w_i+4w_1-\mu_1)\gamma_1 }{15-54w_i} \nonumber \\
& & +\tfrac{3(1-w_i+6w_1-6w_2-7\mu_1 +12\mu_2 - 6\mu_3) + 2(\nu_1 - 6\nu_2 + 6\nu_3)}{15-54w_i}.
\end{eqnarray}
\end{subequations}
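For quick numerical evaluation, the first two coefficients of Eqs.~\eqref{eq:PMG_gamma_n} can be coded directly; the sketch below (an illustration of ours) takes the expansion coefficients of Eq.~\eqref{eq:taylor_wmunu} as inputs, and the $\zeta$CDM model discussed next corresponds to $\mu_n=-\zeta_n$ and $\nu_n=0$.
\begin{verbatim}
def gamma01_mg(wi, w1, mu1, mu2, nu1, nu2):
    # First two growth indices of Eq. (eq:PMG_gamma_n).
    g0 = (3*(1 - wi - mu1) + 2*nu1) / (5 - 6*wi)
    g1 = (g0**2 - 6*(1 - wi + 2*w1 - mu1)*g0
          + 3*(1 - wi + 2*w1 - 3*mu1 + 2*mu2)
          + 2*(nu1 - 2*nu2)) / (10 - 24*wi)
    return g0, g1

# zetaCDM example: mu1 = -zeta1, mu2 = -zeta2, nu_n = 0, w_i = -1, w_1 = 0.
print(gamma01_mg(wi=-1.0, w1=0.0, mu1=-0.05, mu2=0.0, nu1=0.0, nu2=0.0))
\end{verbatim}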
\begin{figure}[t]
\centering
\includegraphics[scale=0.47]{./p1fig6a.ps}
\includegraphics[scale=0.47]{./p1fig6b.ps}
\caption{{\it Left panel:} relative difference, at redshift $z=1.5$, between the growth index predicted by the $\zeta$CDM model of \cite{FerSko10} and the $\Lambda$CDM value, for various values of the parameters $\zeta_1$ and $\zeta_2$. {\it Right panel:} we plot the maximal relative error, in the redshift interval surveyed by Euclid, of the first order approximation $\gamma=\gamma_0 + \gamma_1 \ln \Omega_{\rm m}$
as a function of the amplitude of the parameters $\zeta_1$ and $\zeta_2$ specifying the $\zeta$CDM model. The imprecision is maximal at $z=0.7$, and it is already smaller than $1\%$ when the parameterization given in
Eq.~(\ref{eq:gamma_Taylor}) is truncated at first order.}
\label{fig:PPF_variation}
\end{figure}
\subsubsection{The $\zeta$CDM framework}
\label{sec:skofe}
As an example of the application of the formalism, we consider the $\zeta$CDM scenario of \cite{FerSko10}. This is a one-parameter family of
models representing a large class of modified gravity theories for which the fundamental geometric degree of freedom is the metric, and
the field equations are at most second order in the metric variables and gauge-invariant \cite{sko}.
The interesting facet of these non-standard gravity models is that deviations from GR are parameterized by an observable, the {\it gravitational slip} parameter
\begin{align}
\zeta \equiv 1 - \Phi /\Psi
\label{eq:gravitational_slip}
\end{align}
where $\Phi$ and $\Psi$ are the Newtonian and curvature potentials respectively, both assumed to be time- and scale-dependent functions.
In this non-standard gravity formalism, the background evolution is degenerate with that of the $w$CDM model of cosmology, that is, it
is effectively described by the Friedmann equation \eqref{eq:Friedmann} augmented by a Dark Energy component $\Omega_{\rm DE}$ with equation of state parameter $w$, satisfying the usual conservation equation \eqref{eq:conservation}. This fixes the amplitude of the structural parameters of our formalism to those of \eqref{eq:gamma_coeff_x_h}.
Deviations from General Relativity are expected only in the perturbed sector of the theory. At first order,
indeed, this class of models predicts the following modification of the Newton constant
\begin{align}
G_{\rm eff}(a) = G \big(1- \zeta(a)\big),
\end{align}
where it is assumed that on small cosmic scales, well inside the horizon, any scale dependence in the gravitational slip parameter can be neglected.
This in turn implies that the growth of matter perturbations, in these scenarios, can be described by inserting
$\mu = 1-\zeta$ and $\nu = 1$ into Eq.~\eqref{eq:f_H_General}. Since deviations from General Relativity at early epochs are constrained by Big Bang Nucleosynthesis measurements, it is fair
to assume that $\zeta(t)$ is a smooth function which deviates from zero as soon as the dark energy comes to dominate the overall energy budget of the universe. As a result, the gravitational slip parameter can be expanded as \cite{FerSko10}%
\begin{equation}
\zeta = \zeta_1 \Omega_{\rm DE}+\zeta_2 \Omega_{\rm DE}^2 +\zeta_3 \Omega_{\rm DE}^3 + \cdots
\label{eq:zeta}
\end{equation}
and the overall effects of non-standard gravity are governed by the set of discrete parameters $\zeta_i$.
From Eqs.~\eqref{eq:taylor_wmunu} and \eqref{eq:PMG_gamma_n} it is now easy to read off the value of the growth indices in the $\zeta$CDM model:
\begin{subequations}\label{eq:PPF_gamma_n}
\begin{eqnarray}
\gamma_0 & = & \tfrac{3(1-w_i+\zeta_1)}{5-6w_i} \\
\gamma_{1} & = & \tfrac{\gamma_0^2 - 6(1-w_i+2w_1+\zeta_1 )\gamma_0 +3(1-w_i+2w_1+3\zeta_1 -2\zeta_2)}{10-24w_i} \\
\gamma_{2} & = & -\tfrac{5\gamma_0^3 -9(1+\zeta_1)\gamma_0^2 -6\gamma_0\gamma_1 + 3(3 -2w_i +12w_1-12w_2 +9\zeta_1 -6\zeta_2)\gamma_0 + 18(1-2w_i+4w_1+\zeta_1)\gamma_1}{15-54w_i} \nonumber \\
& & +\tfrac{3(1-w_i+6w_1-6w_2+7\zeta_1 - 12\zeta_2 + 6\zeta_3)}{15-54w_i}
\end{eqnarray}
\end{subequations}
The amplitude of the non-standard signals expected in this alternative gravitational scenario is shown in the left panel of Figure \ref{fig:PPF_variation}, where we display
the distortions which a possibly non-null value of the $\zeta$CDM parameters $\zeta_1$ and $\zeta_2$ induces on the growth index.
The accuracy with which the growth index is reconstructed by our formalism is also shown (right panel). This last plot confirms that systematic uncertainties are below the
threshold of the Planck statistical errors over a region of the parameter space ($\zeta_1$, $\zeta_2$) which is sufficiently large to be physically interesting.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|c|c|}
\hline
Label & Reference & $z$ & $f\sigma_8$ \\
\hline
\hline
THF & Turnbull {\it et~al.} (2012) \cite{TurHudFel12} & 0.02 & $0.40\pm 0.07$ \\
\hline
DNM & Davis {\it et~al.} (2011) \cite{DavNusMas11} & 0.02 & $0.31\pm 0.05$ \\
\hline
6dFGS & Beutler {\it et~al.} (2012) \cite{BeuBlaCol12} & 0.07 & $0.42\pm 0.06$ \\
\hline
2dFGRS & Percival {\it et~al.} (2004) \cite{PerBurHea04}, Song \& Percival (2009) \cite{SonPer09} & 0.17 & $0.51\pm 0.06$ \\
\hline
2SLAQ & Ross {\it et~al.} (2007) \cite{RosAngSha07} & 0.55 & $0.45\pm 0.05$ \\
\hline
SDSS & Cabr\'{e} {\it et~al.} (2009) \cite{CabGaz09} & 0.34 & $0.53\pm 0.07$ \\
\hline
SDSS II & Samushia {\it et~al.} (2012) \cite{SamPerRac12} & 0.25 & $0.35\pm 0.06$ \\
& & 0.37 & $0.46\pm 0.04$ \\
\hline
BOSS & Reid {\it et al.} (2012) \cite{ReiSamWhi12} & 0.57 & $0.43\pm 0.07$ \\
\hline
WiggleZ & Contreras {\it et~al.} (2013) \cite{ConBlaPoo13} & 0.20 & $0.40\pm 0.13$ \\
& & 0.40 & $0.39\pm 0.08$ \\
& & 0.60 & $0.40\pm 0.07$ \\
& & 0.76 & $0.48\pm 0.09$ \\
\hline
VVDS & Guzzo {\it et al.} (2008) \cite{GuzPieMen08}, Song \& Percival (2009) \cite{SonPer09} & 0.77 & $0.49\pm0.18$ \\
\hline
VIPERS & De la Torre {\it et al.} (2013) \cite{TorGuzPea13}& 0.80 & $0.47\pm0.08$ \\
\hline
\end{tabular}
\caption{Compilation of currently available growth rate data.}
\label{tab:data}
\end{table}
\section{Data analysis in the growth index parameter space}
\label{sec:constraining}
Besides being instrumental in increasing the accuracy with which the growth rate is reconstructed from data, the $\gamma$-formalism introduced in the previous sections
also serves as a guide in interpreting empirical results directly in terms of dark energy models. This is shown in this section,
where we confront the growth index predictions with current data and data simulations for a Euclid-like survey. After describing our data analysis strategy, we show how we test whether the $\Lambda$CDM model correctly describes available data on the linear growth rate of structures and how we
use the growth index parameter space $\gamma_0-\gamma_1$ to analyze and draw statistical conclusions on various non-standard gravity scenarios, in a manner which is independent of the specific details of the expansion history of the universe.
\subsection{Testing the $\Lambda$CDM predictions in the perturbed sector}
\label{sec:null_hypothesis}
We first focus on the constraints that present day observations set on the growth indices $\gamma_i$. These are derived by computing the likelihood $\mathcal{L}$ of the data shown in Table \ref{tab:data} given the model in Eqs. \eqref{eq:f_approximation} and \eqref{eq:gamma_Taylor}. To this purpose we minimize the $\chi^2=-2\ln(\mathcal{L})$ function
\begin{align}
\chi^2(\boldsymbol{\gamma},\mathbf{p})= \sum_{i=1}^{N} \Bigg( \frac{\big(f\sigma_8\big)_{\! \rm obs}(z_i) - f(\boldsymbol{\gamma},\mathbf{p},z_i)\sigma_8(\boldsymbol{\gamma},\mathbf{p},z_i)}{\sigma_i} \Bigg)^2
\end{align}
where $\sigma_i$ is the uncertainty in the growth data, $\mathbf{p}=(\sigma_{8,0},\Omega_{\rm m,0},w_{0},w_{a})$ is the set of parameters that, except for $\sigma_{8,0}$, fix the background expansion, $\boldsymbol{\gamma} =(\gamma_0,\gamma_1,...)$ are the growth indices introduced in Eq.~\eqref{eq:gamma_Taylor} and
\begin{align}
f(\boldsymbol{\gamma},\mathbf{p},z) &= \Omega_{\rm m}(\mathbf{p},z)^{\sum_i \gamma_i \big(\ln \Omega_{\rm m}(\mathbf{p},z)\big)^i/i!} \label{eq:deff0} \\
\sigma_8(\boldsymbol{\gamma},\mathbf{p},z) &= \sigma_{8,0} D(\boldsymbol{\gamma},\mathbf{p},z) = \sigma_{8,0}\;\! e^{-\int_0^z \tfrac{f(\boldsymbol{\gamma},\mathbf{p},z')}{1+z'} dz'} .
\label{eq:deff}
\end{align}
where $D=\delta(t)/\delta(t_0)$ is the growth factor. In this paper we restrict our analysis to the first two growth indices, i.e.~we set $\boldsymbol{\gamma}=(\gamma_0,\gamma_1)$. We also assume the $f\sigma_8$ measurements in Table \ref{tab:data} to be independent. A more sophisticated error analysis might eventually result in minor changes to our quantitative conclusions, but would have no impact on their
physical interpretation. It is well known that $f\sigma_8$ cannot be estimated from data without picking a
particular model, or at least a parametrization, for gravity, i.e., for the quantity being tested \cite{motta}. Despite the strong prior on the underlying gravitational model, there is however evidence that
the estimated values of this observable depend negligibly on the distance-redshift conversion model, that is, for sensible variations
of the background parameter $\Omega_{\rm m,0}$ in a flat $\Lambda$CDM model, the variation of the estimate is well below the statistical error \cite{TorGuzPea13,ConBlaPoo13}.
This should guarantee that data can be meaningfully compared against models with a background evolution distinct from that assumed to estimate the observable.
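For concreteness, the $\chi^2$ above can be evaluated with a few lines of Python; the sketch below (ours) assumes a flat background with a CPL-like dark energy density, $\rho_{\rm DE}\propto (1+z)^{3(1+w_0+w_a)}e^{-3w_a z/(1+z)}$, and only a handful of the entries of Table \ref{tab:data} are typed in for brevity.
\begin{verbatim}
import numpy as np

# (z, f*sigma8, error) for a few entries of Table (tab:data).
data = [(0.07, 0.42, 0.06), (0.34, 0.53, 0.07),
        (0.57, 0.43, 0.07), (0.80, 0.47, 0.08)]

def Omega_m(z, Om0, w0, wa):
    rho_de = (1 - Om0) * (1 + z)**(3*(1 + w0 + wa)) * np.exp(-3*wa*z/(1 + z))
    return Om0 * (1 + z)**3 / (Om0 * (1 + z)**3 + rho_de)

def chi2(g0, g1, p, zmax=2.0, n=2001):
    s80, Om0, w0, wa = p
    zg = np.linspace(0.0, zmax, n)
    Om = Omega_m(zg, Om0, w0, wa)
    f = Om ** (g0 + g1 * np.log(Om))      # Eq. (eq:deff0) with two indices
    itg = f / (1 + zg)                    # integrand of Eq. (eq:deff)
    I = np.concatenate(([0.0],
        np.cumsum(0.5 * (itg[1:] + itg[:-1]) * np.diff(zg))))
    fs8 = f * s80 * np.exp(-I)            # f(z) * sigma8(z)
    return sum(((o - np.interp(z, zg, fs8)) / e)**2 for z, o, e in data)

print(chi2(0.55, -0.007, (0.835, 0.315, -1.0, 0.0)))  # reference LambdaCDM
\end{verbatim}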
The likelihood analysis is carried out by choosing a prior model (hereafter called {\it fiducial}) for the evolution of $\Omega_{\rm m}$ and $\sigma_8$ in Eqs. (\ref{eq:deff}).
To this purpose we choose the {\it reference} $\Lambda$CDM model i.e. a flat $\Lambda$CDM model with $Planck$ parameters $\mathbf{p}=(0.835,0.315,-1,0)$ \cite{Ade:2013zuv}. The resulting likelihood contours for $\gamma_0$ and $\gamma_1$ are shown in the left panel of Figure \ref{fig:planck_wmap9}.
By marginalizing in the growth index parameter space
we find that, at the $68\%$ confidence level (c.l.), the constraints on the leading and first-order growth indices are $\gamma_0=0.74^{+0.44}_{-0.41}$ and $\gamma_1=0.01^{+0.46}_{-0.46}$.
On the right panel of Figure \ref{fig:planck_wmap9} we show the same analysis assuming WMAP9 background values $\mathbf{p}=(0.821,0.279,-1,0)$. In this case, the 1-dimensional marginalized $68\%$ confidence levels are $\gamma_0=0.58^{+0.40}_{-0.38}$ and $\gamma_1=-0.06^{+0.39}_{-0.38}$. The likelihood contours are appreciably smaller if the likelihood analysis is carried out
in the WMAP9 fiducial, owing to the fact that the statistical analysis depends on the background model. We will see in the next section how to factor out the specific choice of the background model
from the interpretation of growth rate data.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{./p1fig7a.ps}
\includegraphics[scale=0.5]{./p1fig7b.ps}
\caption{{\it Left panel:} $68\%$ and $95\%$ confidence levels in the growth index parameter space $\gamma_0- \gamma_1$ (orange filled regions) obtained from the compilation of data shown in Table \ref{tab:data} and by using, as fiducial for the evolution of the background metric, the expansion rate of the {\it reference} $\Lambda$CDM model ($\Omega_{\rm m,0}=0.315$, $\sigma_{8,0}=0.835$). The marginalized best-fitting values are $\gamma_0=0.74^{+0.44}_{-0.41}$ and $\gamma_1=0.01^{+0.46}_{-0.46}$. The growth indices theoretically expected in the fiducial $\Lambda$CDM model ($\gamma_0=0.55$, $\gamma_1=-0.007$) are shown by the filled black square. The black dotted line, defined by Eq.~\eqref{eq:stronger_weaker}, shows the partition of the $\gamma_0 - \gamma_1$ plane into regions where growth is amplified/suppressed with respect to the fiducial model. {\it Right panel:} The same but using as fiducial the $\Lambda$CDM model with WMAP9 values ($\Omega_{\rm m,0}=0.279$, $\sigma_{8,0}=0.821$). The $\Lambda$CDM model is represented by the empty purple square.}
\label{fig:planck_wmap9}
\end{figure}
The traditional way of exploiting the growth index formalism is to use it as a tool for a consistency test of the scenario proposed to model the background kinematics of the universe. The goal is to check whether a given gravitational model that explains the observed cosmic expansion rate also predicts the correct growth rate of linear structures. For example, one needs to verify that the most likely amplitude of the growth index parameters derived from observations of the linear growth of structures is not in conflict with those predicted on the basis of the DE {\it EoS} parameters $w_{0}$ and $w_{a}$ which best fit expansion rate data. In the absence of major observational systematics, any possible mismatch between measured and expected values of the growth index would be the smoking gun of new gravitational physics. In the opposite case, growth data provide additional constraints on dark energy parameters.
Figure \ref{fig:planck_wmap9} shows that the growth index predicted on the basis of $w_{0}=-1$ and $w_{a}=0$, i.e.~the {\it EoS} values of the reference $Planck$ $\Lambda$CDM model, agrees with results from the likelihood analysis of linear growth rate data coming from a variety of low redshift surveys of the large scale structure of the universe. Should this not be the case,
one could question either the unbiased nature of the data analysis in the two sectors, background and perturbed, or the effectiveness of the standard description of gravity in terms of the $\Lambda$CDM framework.
A first immediate advantage of interpreting growth history data in the growth index plane is that this parameter space facilitates the interpretation of critical information encoded in the likelihood function.
Specifically, it allows one to classify alternative dark energy models (each labeled by the pair of coordinates $(\gamma_0,\gamma_1)$) as generating either more or less growth of structures with respect to
the chosen fiducial. The line separating these two characteristic regions is shown in Figure \ref{fig:planck_wmap9} and is computed by imposing
\begin{align}
D(\boldsymbol{\gamma}, \bar{\mathbf{p}}, z_{\rm init}) = D(\bar{\boldsymbol{\gamma}},\bar{\mathbf{p}},z_{\rm init})
\label{eq:stronger_weaker}
\end{align}
where $\bar{\boldsymbol{\gamma}}$ are the growth indices of the fiducial model (that is $\bar{\boldsymbol{\gamma}}=(6/11,-15/2057)$ in Figure \ref{fig:planck_wmap9}), $\bar{\mathbf{p}}$ are the background parameters of the fiducial model and $z_{\rm init}$ is the initial redshift at which perturbations are conventionally assumed to start growing. The region of more growth in the $\gamma$-plane is defined as the region where a density perturbation
(whose amplitude is normalized to unity today) had, at the initial redshift, a smaller amplitude than that predicted in the fiducial model ($D(z_{\rm init})< \bar{D}(z_{\rm init})$) whereas
regions of weaker gravity are the $\gamma$ loci where the amplitude of the perturbation was larger ($D(z_{\rm init}) > \bar{D}(z_{\rm init})$).
Note that the orientation of the line separating these two regions depends only negligibly on the chosen initial redshift $z_{\rm init}$ (for the sake of illustration we have set $z_{\rm init} =100$ in Figure \ref{fig:planck_wmap9}). Our analysis shows that current data mildly favor models in which the strength of gravitational interactions is weaker than what is predicted in the {\it reference} $\Lambda$CDM model.
This result also confirms a conclusion of \cite{samu}. As compared to the way perturbations grow within purely matter-dominated models, it seems that data tend to prefer a suppression mechanism more efficient than that provided by the cosmological constant.
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{./p1fig8.ps}
\caption{Forecasted $68\%$ and $95\%$ confidence levels in the growth indices parameter space $\gamma_0$ - $\gamma_1$ (orange filled regions) for a Euclid-like survey, assuming as fiducial cosmological model the {\it reference} $\Lambda$CDM model ($\Omega_{\rm m,0}=0.315$, $\sigma_{8,0}=0.835$). The black dotted and blue solid lines show predictions of a smooth quintessence model, that is a Dark Energy component with variable {\it EoS} of parameters $w_{0}$ and $w_{a}$ (see Eq.~\ref{eq:wi_finite}). These parameters span the ranges $[-1.3,-0.85]$ and $[-0.6,1.4]$ respectively. The spacing between adjacent lines is 0.05 and 0.2 respectively. The growth indices theoretically expected in the fiducial $\Lambda$CDM model are shown by a filled black square.
}
\label{fig:euclid_wowa}
\end{figure}
\subsection{Discriminating DE models in the $\gamma_0$ and $\gamma_1$ parameter space}
\label{sec:mapping}
Once a fiducial for the background evolution is chosen to compute the likelihood function, what conclusions can we draw on the viability of gravitational models other than the fiducial itself?
We will now see that, since the dependence of the growth index on the relevant cosmological parameters is explicitly taken into account in our analysis scheme,
the likelihood in the $\gamma_0 - \gamma_1$ plane, besides allowing one to reject the null hypothesis that the fiducial is compatible with data, also allows one to set constraints on alternative
cosmological scenarios.
In other terms, it is possible to exploit model dependent likelihood contours in the $\gamma_0-\gamma_1$ plane to tell apart even those theoretical models characterized
by an evolution of the background sector $\mathbf{p}$ which is distinct from that of the fiducial model $\bar{\mathbf{p}}$ itself.
The growth indices $\gamma_0$ and $\gamma_1$ are model dependent quantities which can be estimated only once a specific background fiducial model for the evolution of $\Omega_{\rm m}$ is chosen (cf.~Eq.~\ref{eq:deff0}). As a consequence, the growth indices inferred in distinct background models cannot be directly compared. For example, consider the likelihood contours in the plane $\gamma_0-\gamma_1$ obtained by assuming the {\it reference} $\Lambda$CDM model as fiducial (Figure \ref{fig:planck_wmap9}). One cannot constrain the DE $EoS$ parameters $w_{0}$ and $w_{a}$ by simply computing the amplitude of the coefficients $\gamma_{0}$ and $\gamma_{1}$ expected in an effective DE model with $EoS$ parameters $w_{0}$ and $w_{a}$ (cf.~\S \ref{sec:w(a)}) and by confronting these theoretical values with the empirical likelihood. Indeed, the specific growth rate history of a DE model is not entirely
captured by the growth indices, part of the information being locked in the scaling of the background density parameter $\Omega_{\rm m}$.
We overcome this critical pitfall by computing $\boldsymbol{\gamma}^*$, the amplitude of the effective
growth indices in the fiducial background, that is, the exponent that, once inserted in (\ref{eq:f_approximation}) together with the matter density parameter of the fiducial model adopted in the likelihood analysis ($\bar{\Omega}_{\rm m}$), allows one to match the scaling of the growth rate expected in the specific gravitational model under consideration. This is equivalent to enforcing the following identity
\begin{align}
f(\boldsymbol{\gamma}^*,\bar{\mathbf{p}},z) = f(\boldsymbol{\gamma}, \mathbf{p},z).
\label{eq:f_approximation2}
\end{align}
By means of this transformation strategy we factor out the effect of expansion from the analysis of growth rate histories.
We will illustrate this feature by simulating the constraints that the growth index measurements expected from a next-generation survey, such as the Euclid space mission, will put on the background dark energy parameters $w_{0}$ and $w_{a}$.
The Euclid mission is designed to survey, in spectroscopic mode, $\sim 5\cdot 10^7$ galaxies
in the redshift range $0.5<z<2.1$ and in a sky area of $\sim 15000$ deg$^2$ \cite{euclid2}.
We simulate the expected growth data $\gamma_{\rm obs}$ assuming as fiducial the reference $\Lambda$CDM model.
To this purpose, we simply split the redshift range $0.7<z<2$ into 14 intervals, and we predict $\gamma_{\rm obs}=\ln f/\ln \Omega_{\rm m}$ by using Eqs.~\eqref{eq:relations_SDE}, \eqref{eq:relations_SG} and \eqref{eq:f_H_x}. We finally assume that the relative error on the observable, in each interval, is that corresponding to the Euclid figure of merit listed in Table 2.2 of \cite{euclid2}, i.e.~$\delta\gamma/\gamma=1\%$.
The resulting likelihood contours in the $\gamma_0 - \gamma_1$ plane, obtained via a standard $\chi^2$ analysis, are shown in Figure~\ref{fig:euclid_wowa}.
Notice that the growth index figure of merit, defined as the inverse of the surface of the $68\%$ likelihood contour in the $\gamma_0-\gamma_1$ plane, is expected to increase by a factor $\sim 550$ when compared to that deduced from current constraints (see Figure \ref{fig:planck_wmap9}).
In Figure~\ref{fig:euclid_wowa} we also show the effective growth indices ($\gamma_0^*,\gamma_1^*)$ predicted by various DE models (labeled by the $EoS$ parameters $w_{0}$ and $w_{a}$) obtained by means of the transformation law \eqref{eq:f_approximation2}. In practice we compute these effective growth indices ($\gamma_0^*,\gamma_1^*$) by minimizing numerically the integral
\begin{align}
\int \Big(f(\boldsymbol{\gamma}^*,\bar{\mathbf{p}},z) - f(\boldsymbol{\gamma},\mathbf{p},z)\Big)^2 dz\,,
\end{align}
over the redshift range covered by observational data (i.e.~$[0,0.8]$ for current data and $[0.7,2]$ for Euclid-data simulations). We have verified that this mapping is sufficiently precise over all the redshift intervals where acceleration effects are observable. For $0.7< z<2$ typical errors of order $0.3\%$ arise for the most extreme values of $w_{0}$ and $w_{a}$ shown in Figure~\ref{fig:euclid_wowa}. Therefore, this error is largely negligible with respect to the precision of the constraints.
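A minimal sketch of this remapping, reusing the \texttt{Omega\_m} helper of the previous sketch and minimizing the mean squared growth-rate mismatch on a uniform grid (equivalent, up to a constant factor, to the integral above), could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def f_growth(gam, bkg, z):
    Om = Omega_m(z, *bkg)                 # bkg = (Om0, w0, wa)
    return Om ** (gam[0] + gam[1] * np.log(Om))

def effective_gamma(gam, bkg, bkg_fid, zmin=0.7, zmax=2.0, n=200):
    # gamma* of Eq. (eq:f_approximation2): best-matching indices in the
    # fiducial background bkg_fid for a model with indices gam and
    # background bkg.
    z = np.linspace(zmin, zmax, n)
    target = f_growth(gam, bkg, z)
    mism = lambda g: float(np.mean((f_growth(g, bkg_fid, z) - target)**2))
    return minimize(mism, x0=np.asarray(gam, float), method="Nelder-Mead").x
\end{verbatim}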
Note that while varying $w_{0}$ and $w_{a}$, we have kept fixed $\Omega_{\rm m,0}$, that is we overlook any possible degeneracy between the DE $EoS$ and matter density parameters entering the expansion rate $H(z)$.
This is essentially because the relative variation induced in the distance modulus $\mu(z)=5\,\text{log}_{10}(d_{L}(z)/\text{Mpc})+25$ by this simplifying assumption is less than $0.2\%$ in the domain of interest.
Overall, Figure~\ref{fig:euclid_wowa} shows how measurements in the perturbed sector help to tighten the constraints on background cosmological parameters and, ultimately, to discriminate among DE models.
Specifically, we predict that growth rate data will provide independent constraints on $w_0$ ($w_a$) with a relative (absolute) precision of $1\%$ ($0.5$).
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{./p1fig9a.ps}
\includegraphics[scale=0.5]{./p1fig9b.ps}
\caption{{\it Left panel:} Forecasted $68\%$ and $95\%$ confidence levels in the growth indices parameter space $\gamma_0$ - $\gamma_1$ (orange filled regions) for a Euclid-like survey, assuming as fiducial a flat clustering quintessence model \cite{SefVer11} with parameters $\Omega_{\rm m,0}=0.315$ and $w=-1.01$. The theoretically predicted values $(\gamma_0,\gamma_1)=(0.5534,-0.0118)$ for this fiducial model are shown by a light blue cross. We also show the effective growth indices (in the chosen fiducial) for smooth and clustering quintessence models with constant $EoS$ parameter $-1.03<w<-0.98$ (black and blue lines respectively).
The spacing between points is $0.01$ with $w$ values increasing along the direction specified by the arrow.
{\it Right panel:} The same as the left panel, but here we have overplotted the amplitude of the growth indices $\gamma_0$ and $\gamma_1$ predicted by the $\zeta$CDM model of \cite{FerSko10} as a function of $\zeta_1\in[-0.1,0.1]$ (red dashed isocontours) and $\zeta_2\in[-0.5,0.5]$ (blue full isocontours). The spacing between adjacent lines is $0.02$ and $0.1$ respectively. The fiducial model is the {\it reference} $\Lambda$CDM model.
}
\label{fig:same_background}
\end{figure}
The remapping method illustrated by Eq.~\eqref{eq:f_approximation2} provides a technique with which to tell apart gravitational models with identical background evolutions.
In other terms, we can use the growth index parameter space to resolve the background degeneracy of two
dark energy models, i.e.~two models predicting the same background expansion but different linear growth histories.
To illustrate this feature, we consider two possible scenarios whose background evolutions are degenerate with that of the $\Lambda$CDM model: the
clustering quintessence and the $\zeta$CDM models.
We first consider the case in which future Euclid data are simulated in a clustering quintessence model, a cosmological model in which matter is supplemented by a quintessence component
with null sound speed, i.e.~the model of \cite{SefVer11} discussed in \S \ref{sec:clustering}. Specifically, we forecast $\gamma_{\rm obs}$ by using Eqs.~\eqref{eq:Clustering_mu_nu}, \eqref{eq:relations_SG} and \eqref{eq:f_H_x}, and by further imposing that the background expansion is effectively described in terms of the constant DE {\it EoS} parameter $w=-1.01$.
The likelihood contours in the $\gamma_0 - \gamma_1$ plane, obtained by adopting as fiducial model the same clustering quintessence model used to simulate data,
are displayed on the left panel of Figure~\ref{fig:same_background}.
We now show how to use these measurements in the $\gamma$ plane to tell apart the clustering quintessence from canonical smooth quintessence. To this purpose
we calculate the effective growth indices $\gamma_0^*$ and $\gamma_1^*$ in two different theoretical models of dark energy, the clustering quintessence and the smooth quintessence models
(both identified, for simplicity, via their constant $EoS$ parameter $w$).
They are calculated by using Eq.~\eqref{eq:f_approximation2} to map $(\gamma_0,\gamma_1)$ of Eqs.~\eqref{eq:gammanw(a)CDM} and \eqref{eq:claqui} respectively into the effective values $(\gamma_0^*,\gamma_1^*)$ for the fiducial background $\bar{\mathbf{p}}=(0.835,0.315,-1.01,0)$ used to analyze growth data.
By comparing likelihood results with the predicted amplitude of the growth index, we can appreciate how the smooth DE model with the same background expansion as the {\it true} cosmological model
(in this context, the clustering quintessence model) is clearly ruled out at the 95\% c.l.~by growth rate data. Specifically, if the effective {\it EoS} deviates from the reference value $w=-1$ by at least $1\%$ ($w<-1.01$ or $w>-0.99$), a Euclid-like survey has enough resolving power to discriminate between smooth and clustering quintessence.
As an additional example, on the right panel of Figure~\ref{fig:same_background} we show the constraints that a Euclid-like survey can set on the slip parameter entering the
$\zeta$CDM model reviewed in \S \ref{sec:skofe}. To this purpose we simulate Euclid observations assuming the {\it reference} $\Lambda$CDM model and we then reconstruct the likelihood assuming this very same model as fiducial. We then compare this statistic to the predictions of the $\zeta$CDM model, that is to the values
$\gamma_0$ and $\gamma_1$ of Eqs.~\eqref{eq:PPF_gamma_n} computed for different values of the slip parameters $\zeta_1$ and $\zeta_2$ defined in Eq.~\eqref{eq:zeta} (for simplicity we here
explore only models for which $w_i=-1$ and $w_1=w_2=\cdots=0$). Note that, in this particular case, owing to the fact that the background evolution does not change from model to model,
the remapping procedure is superfluous. The right panel of Figure~\ref{fig:same_background} displays the performance of a Euclid-like survey in detecting possible deviations of the
slip parameter $\zeta$ from its GR value, {\it i.e.}~from zero. We can conclude that data will have enough resolving power to exclude, at the 95\% c.l., models predicting
a parameter $\zeta_1$ larger than 0.025; in other terms, data can detect a relative deviation between the Newtonian and the curvature potentials if it is larger than about 2.5\%.
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{./p1fig10a.ps}
\includegraphics[scale=0.5]{./p1fig10b.ps}
\caption{{\it Left panel:} $68\%$ and $95\%$ confidence levels in the parameter plane $\gamma_0- \gamma_1$ obtained from the compilation of data shown in Table \ref{tab:data} and by using, as fiducial for the evolution of the background metric, the expansion rate of the {\it reference} $\Lambda$CDM model ($\Omega_{\rm m,0}=0.315$ and $\sigma_{8,0}=0.835$). The growth indices theoretically expected in the fiducial $\Lambda$CDM model are shown by the filled black square. The growth index expected in the flat DGP model (black diamond) which best fits the expansion rate of the {\it reference} $\Lambda$CDM model is obtained via \eqref{eq:f_approximation2}, that is, after mapping \eqref{gamdgp} into the appropriate background model. {\it Right panel:} The same as the left panel but using as fiducial the expansion rate of the DGP model \cite{DGP} instead.}
\label{fig:lcdm_dgp}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{./p1fig11.ps}
\caption{Same as in Figure \ref{fig:lcdm_dgp}, but now, in addition, we show predictions for different values of $\Omega_{\rm m,0}$ (violet line), of the constant $EoS$ of the smooth DE (sDE) model (black line) and of the clustering DE (cDE) model (light blue line). The parameter ranges are $0.1 < \Omega_{\rm m,0} <0.5$, $-2.0 < w < -0.4$ (sDE) and $-1.2 < w < -0.8$ (cDE). The spacing is 0.05 for $\Omega_{\rm m,0}$, 0.1 for sDE and 0.05 for cDE. The prediction for the WMAP9 $\Lambda$CDM model is shown by a violet empty square and the black triangle corresponds to a $\zeta$CDM model with ($w=-1$, $\zeta_1 = 0.6$, $\zeta_2=0$).}
\label{fig:all}
\end{figure}
Up to this point, we have shown that models with distinct background evolutions and models with distinct growth rate predictions can be analyzed in the same parameter space, thanks to
the remapping scheme of Eq.~\eqref{eq:f_approximation2}.
A neat way to demonstrate the precision and consistency of this strategy is by showing that the conclusions on the physical viability of a model are invariant with respect to the choice
of the fiducial in which data are analyzed.
We will demonstrate this key feature by exploiting the DGP \cite{DGP} model, which predicts background and growth rate evolutions that both differ from those of the $\Lambda$CDM model.
To this purpose, as in \S \ref{sec:DGP} we consider the flat DGP model which best fits the expansion rate of the {\it reference} $\Lambda$CDM model. This DGP model has parameters $\mathbf{p}=(0.835,0.213,w_{\rm DGP})$, where the effective DGP $EoS$ is given in \S \ref{sec:DGP}.
In Figure \ref{fig:lcdm_dgp} we show the constraints set on the DGP model by current data (cf.~Table \ref{tab:data}).
In the left panel, we show the likelihood contours in the parameter plane $\gamma_0 - \gamma_1$ obtained by using as fiducial for the evolution of the background metric the expansion rate of the {\it reference} $\Lambda$CDM model. The prediction for the DGP model obtained with the mapping \eqref{eq:f_approximation2} is also shown.
In the right panel, instead, we show the likelihood contours in the parameter plane $\gamma_0 - \gamma_1$ obtained by using the expansion rate of the DGP model as fiducial for the evolution of the background metric. In this case, it is the predictions of the $\Lambda$CDM model that are remapped via Eq.~\eqref{eq:f_approximation2}.
By confronting these two plots we see that while the location, the amplitude and the orientation of the likelihood surfaces do vary,
the interpretation of the results is clearly unchanged. Both figures show, consistently, that measurements of the growth rate in the range $0<z<0.8$ already have sufficient precision to rule out, at the $95\%$ c.l., even an extreme modified gravity scenario such as the DGP model. In other terms, the structure of the likelihood contours depends on the specific fiducial model of the background, that is,
each fiducial background defines its own growth index parameter space. But the physical interpretation is unchanged if we move from one parameter space to the other using
the transformation equation~\eqref{eq:f_approximation2}.
The advantage of the method is also illustrated in Figure \ref{fig:all} using current data. In this figure we simultaneously contrast
predictions from a large class of cosmological models with the likelihood contours derived assuming the {\it reference} $\Lambda$CDM as fiducial for data analysis. On top of the DGP model (black diamond), we also plot the effective growth indices associated with the $\Lambda$CDM model which best fits WMAP9 data \cite{wmap9} (purple empty square). In this scenario, characterized by the reduced density parameter $\Omega_{\rm m,0}=0.279$, the growth of structures is slightly suppressed with respect to the fiducial, i.e.~the {\it reference} $\Lambda$CDM model of Planck.
One might also remark on the slight (but statistically insignificant) tendency of growth data to favor the WMAP9 realization of the $\Lambda$CDM model over the Planck one.
This is a particular example of a general feature of $\Lambda$CDM models: the higher the matter density, the more severe the tension with growth index constraints.
For example, $\Lambda$CDM models with $\Omega_{\rm m,0}>0.38$ are excluded at the $95\%$ confidence level.
In Figure \ref{fig:all} we also show the constraints on DE models with constant $EoS$ parameter $w$. Smooth DE models with $w<-1.5$ and clustering DE models with $w>-0.9$ are excluded at the $95\%$ confidence level. Finally, as a curiosity, we show that a $\zeta$CDM model (black empty triangle) with the same background as the {\it reference} $\Lambda$CDM model but with a slip parameter $\zeta_1 = 0.6$ best fits observations, that is, it maximizes the likelihood of current data.
\section{Conclusions}
\label{sec:conclusions}
The observational information about the growth rate history $f(t)$ of linear cosmic structures can be satisfactorily encoded into a small set of parameters,
the growth indices $\gamma_i$, whose amplitude can be analytically predicted by theory.
Their measurement allows us to explore whether Einstein's field equations encompass gravity also in the infrared, i.e.~on very large cosmic scales.
In order for this to be accomplished, one must devise {\it a}) an optimal scheme for compressing the growth rate function into the smallest possible set of discrete scalars $\gamma_i$,
without sacrificing accuracy, and {\it b}) a prescription for routinely calculating their amplitude in relevant theories of gravity, in order to explore the largest possible
region in the space of models.
In this paper we have explored a promising approach towards this goal. We have demonstrated both the precision and the flexibility of a specific parameterization of the growth
index, that is the logarithmic expansion \eqref{eq:gamma_Taylor}.
If the fiducial gravitational model is not too different from standard GR, i.e.~possible deviations in both the background and perturbed sectors
can be interpreted as first-order corrections to the Friedmann model, then the proposed parameterization scheme
allows one to match numerical results on the redshift dependence of the growth
index with a relative error lower than the nominal precision with which the next generation of redshift surveys is expected to fix the scaling of this function.
This performance is demonstrated by comparing, for various fiducial gravitational models, the accuracy of our proposal
against that of different parameterizations available in the literature.
Besides accuracy, the formalism features two other critical merits, one practical and one conceptual.
First, we supply a simple way of routinely calculating the amplitude of the growth indices in any gravitational model in which the master equation for the
growth of density perturbations reduces to the form of Eq.~\eqref{eq:matter_density_fluctuations}. To this purpose, it is enough
to specify the three characteristic functions of this equation, namely the expansion rate $H(t)$, the damping coefficient $\nu(t)$ and the response coefficient $\mu(t)$, to calculate the parameters $\gamma_i$
up to any desired order $i$. Moreover, since the parameterization of the growth rate is not phenomenological in nature, but is constructed as a series expansion
of the exact solutions of the differential equation which rules the growth of structures (cf. Eq.~\eqref{eq:f_H_General}), one can easily interpret empirical results about the amplitude
of the growth indices in terms of fundamental gravitational models.
Since the growth index is a model dependent quantity, it has traditionally been used only to reject, statistically, the specific
model adopted to analyze growth data. We have shown, instead, that the growth index parameter space $\gamma_0-\gamma_1$ provides a diagnostic tool to discriminate among a large class of
models, even those presenting background evolution histories different from that of the fiducial model adopted in data analysis.
In other terms, a detection of a present-day growth index amplitude $\neq 0.55$
would not only indicate a deviation from $\Lambda$CDM predictions but could also be used to disentangle different alternative explanations of the cosmic acceleration in a straightforward way.
The key to this feature is the mapping of Eq.~\eqref{eq:f_approximation2}, which allows one to factor out the effect of expansion from the analysis of growth rate histories. Just as the standard $\Omega_{\rm m,0}-\Omega_{\Lambda,0}$ plane identifies different expansion histories $H(t)$, the $\gamma_0-\gamma_1$ plane can be used to locate different growth rate histories $f(t)$.
We have illustrated the performance of the growth index plane for modified gravity model selection/exclusion by
using current data as well as forecasts for future experiments. We have shown that the likelihood contours in the growth index plane $\gamma_0 - \gamma_1$ can be
used to tell apart a clustering quintessence component \cite{SefVer11} from a smooth dark energy fluid, to fix the parameters of viable Parameterized Post-Friedmann gravitational models
\cite{FerSko10}, or to exclude specific gravitational models such as, for example, DGP \cite{DGP}.
The performance of the analysis tool presented in this paper is expected to be enhanced should the formalism be coupled to models
parameterizing the large class of possible gravitational alternatives to standard GR available in the literature.
In particular, various approaches have been proposed to synthetically describe all the possible gravitational laws generated by adding a single scalar degree of freedom to Einstein's equations
~\cite{GPV,JLPV, BFS, BFPW}. Besides quintessence, scalar-tensor theory
and $f(R)$ gravity, this formalism also allows one to describe covariant Galileons~\cite{NRT}, kinetic gravity braiding \cite{deffa1} and Horndeski/generalized Galileon theories~\cite{hor,Deffayet:2009wt}.
Interestingly, the cosmological perturbation theory of this general class of models can be
parameterized so that a direct correspondence between the parameterization and the underlying space of theories is maintained.
In a different paper \cite{PSM} we have already explored how the effective field theory formalism of \cite{GPV} allows one to interpret the empirical constraints on $\gamma_i$
directly in terms of fundamental gravity theories.
\section*{Acknowledgments}
We acknowledge useful discussions with L. Guzzo, F. Piazza, T. Sch\"ucker, P. Taxil and J. M. Virey. CM is grateful for support from specific project funding of the {\it Institut Universitaire
de France} and of the Labex OCEVU.
\section{Block Architectures}
\label{sec:app_block}
As illustrated in Section~\ref{sec:tri_res}, the local block $f^l_{\theta_l(\ell)}$ produces forecasts based on local observed PTS signals excluding redundant periodic effects.
This goal aligns with that of N-BEATS to extract informative representations from generic TS signals.
Therefore, we reuse the generic block design of N-BEATS to instantiate $f^l_{\theta_l(\ell)}$.
Here we include a brief description of the local block for completeness.
Please refer to Section 3.1 in~\citep{Oreshkin2020N-BEATS} for more details.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]{figs/block_arch_revised.pdf}
\end{center}
\caption{Detailed architectures of the local block and the periodic block in DEPTS.}
\label{fig:block_arch}
\end{figure}
\paragraph{Local Block.}
The left part of Figure~\ref{fig:block_arch} shows the detailed architecture within a local block, where we use $\tilde{\bm{x}}^{(\ell)}_{t-L:t} = \bm{x}^{(\ell-1)}_{t-L:t} - \bm{v}^{(\ell)}_{t-L:t}$ to denote the portion of the local observations $\bm{x}^{(\ell-1)}_{t-L:t}$ excluding the periodic effects $\bm{v}^{(\ell)}_{t-L:t}$ for the $\ell$-th layer.
After taking in $\tilde{\bm{x}}^{(\ell)}_{t-L:t}$, we pass it through four fully-connected layers and then obtain the backcast coefficients $\bm{c}_b^{(\ell)}$ and the forecast coefficients $\bm{c}_f^{(\ell)}$ via two linear projections:
\begin{align*}
&\bm{u}^{(\ell),1}_{t-L:t} = {\rm FC}_{\ell,1}(\tilde{\bm{x}}^{(\ell)}_{t-L:t}),\;\;\;
\bm{u}^{(\ell),2}_{t-L:t} = {\rm FC}_{\ell,2}(\bm{u}^{(\ell),1}_{t-L:t}),\;\;\;
\bm{u}^{(\ell),3}_{t-L:t} = {\rm FC}_{\ell,3}(\bm{u}^{(\ell),2}_{t-L:t}),\;\;\;\\
&\bm{u}^{(\ell),4}_{t-L:t} = {\rm FC}_{\ell,4}(\bm{u}^{(\ell),3}_{t-L:t}),\;\;\;
\bm{c}^{(\ell)}_b = {\rm LINEAR}_{\ell}^b(\bm{u}^{(\ell),4}_{t-L:t}),\;\;\;
\bm{c}^{(\ell)}_f = {\rm LINEAR}_{\ell}^f(\bm{u}^{(\ell),4}_{t-L:t}),\;\;\;
\end{align*}
where ${\rm FC}$ is a standard fully-connected layer with ReLU activation~\citep{nair2010rectified}, and $\rm LINEAR$ denotes a linear projection function.
Then, we pass these coefficients to the basis layers, $h^b(\cdot)$ and $h^f(\cdot)$, to obtain the backcast term $\bm{u}^{(\ell)}_{t-L:t}$ and the forecast term $\bm{u}^{(\ell)}_{t:t+H}$, respectively.
The generic choice for $h^b(\cdot)$ can simply be another linear projection function, which we also adopt since it produces more competitive and stable performance on PTS-related benchmarks than other interpretable basis layers, as shown in Appendix C.4 of~\citep{Oreshkin2020N-BEATS}.
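A minimal PyTorch sketch of this block is given below; the hidden width and the choice of plain linear maps for $h^b$ and $h^f$ are our assumptions for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    def __init__(self, lookback, horizon, hidden=512):
        super().__init__()
        # Four FC layers with ReLU, as in the generic N-BEATS block.
        self.stack = nn.Sequential(
            nn.Linear(lookback, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.coef_b = nn.Linear(hidden, hidden)     # backcast coefficients c_b
        self.coef_f = nn.Linear(hidden, hidden)     # forecast coefficients c_f
        self.basis_b = nn.Linear(hidden, lookback)  # generic basis h^b
        self.basis_f = nn.Linear(hidden, horizon)   # generic basis h^f

    def forward(self, x_tilde):
        # x_tilde: (batch, lookback), local signal minus periodic effects.
        u = self.stack(x_tilde)
        return self.basis_b(self.coef_b(u)), self.basis_f(self.coef_f(u))
\end{verbatim}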
\paragraph{Periodic Block.}
The periodic block $f^p_{\theta_p(\ell)}$ aims to extract predictive information from the associated periodic states, which are relatively simple and stable compared with rapidly shifting PTS signals.
Therefore, we can adopt a simple architecture while still maintaining the desired effects.
In this work, we use a single standard fully-connected layer to encode $\bm{z}^{(\ell-1)}_{t-L:t+H}$ and leverage another two linear projection functions to obtain the backcast term $\bm{v}^{(\ell)}_{t-L:t}$ and the forecast term $\bm{v}^{(\ell)}_{t:t+H}$ as the periodic effects of the $\ell$-th layer.
\begin{align*}
\bm{v}^{(\ell),1}_{t-L:t+H} = {\rm FC}_{\ell}(\bm{z}^{(\ell-1)}_{t-L:t+H}),\;\;\;
\bm{v}^{(\ell)}_{t-L:t} = {\rm LINEAR}^{\ell}( \bm{v}^{(\ell),1}_{t-L:t} ),\;\;\;
\bm{v}^{(\ell)}_{t:t+H} = {\rm LINEAR}^{\ell}( \bm{v}^{(\ell),1}_{t:t+H} ),\;\;\;
\end{align*}
where ${\rm FC}$ and ${\rm LINEAR}$ share the same meanings mentioned above.
Moreover, when training for multiple series simultaneously, we use a series-specific scalar parameter $\alpha_i$ ($i$ is the series index) to take account of differences in the strengths of periodicity by updating $\bm{v}^{(\ell)}_{t-L:t+H}$ as $\alpha_i \cdot \bm{v}^{(\ell)}_{t-L:t+H}$.
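A corresponding sketch of the periodic block is shown below; the reading of the time indexing (a ReLU layer acting on the whole $(L+H)$-dimensional window, followed by window-wise linear heads) and the use of an embedding table for the per-series scales $\alpha_i$ are our assumptions about one consistent implementation.
\begin{verbatim}
class PeriodicBlock(nn.Module):
    def __init__(self, lookback, horizon, n_series=1):
        super().__init__()
        self.L, self.H = lookback, horizon
        # One FC layer with ReLU encoding the periodic states z_{t-L:t+H}.
        self.fc = nn.Sequential(
            nn.Linear(lookback + horizon, lookback + horizon), nn.ReLU())
        self.head_b = nn.Linear(lookback, lookback)  # backcast v_{t-L:t}
        self.head_f = nn.Linear(horizon, horizon)    # forecast v_{t:t+H}
        self.alpha = nn.Embedding(n_series, 1)       # per-series scale alpha_i
        nn.init.ones_(self.alpha.weight)

    def forward(self, z, series_idx):
        # z: (batch, L+H) periodic states; series_idx: (batch,) long tensor.
        v1 = self.fc(z)
        v_b = self.head_b(v1[:, :self.L])
        v_f = self.head_f(v1[:, self.L:])
        a = self.alpha(series_idx)                   # (batch, 1)
        return a * v_b, a * v_f
\end{verbatim}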
\section{Parameter Initialization for the Periodicity Module}
\label{sec:app_init_period}
As illustrated in Section~\ref{sec:est_period}, we leverage a fast approximation approach to obtain an acceptable solution of the two-stage optimization problem~\brref{eq:p_opt_init} with affordable costs in practice.
Algorithm~\ref{alg:approx_algo} summarizes the overall procedure for this fast approximation.
\begin{algorithm}[h]
\SetAlgoLined
\KwIn{$D_{train} = \bm{x}_{0:T_v}$, $D_{val} = \bm{x}_{T_v:T}$, $K$, and $J$}
Conduct DCT over $\bm{x}_{0:T_v}$. \\
Sort the top-$K$ cosine bases by amplitudes in descending order to obtain
$\tilde{\phi}^* = \{\tilde{A}^*_0\} \cup \{\tilde{A}^*_k, \tilde{F}^*_k, \tilde{P}^*_k\}_{k=1}^K$. \\
Initialize $\tilde{M}^* = \bm{0}$. \\
\For{$j$ \textbf{in} $[1, \cdots, K]$}{
\eIf{$\|\tilde{M}^*\|_1 < J$} {
Update $\tilde{M}^*_j$ by $\argmin_{M_j \in \{0, 1\}} \mathcal{L}_{D_{val}}(g_{\tilde{\phi}^*}^{M_j}(t))$
}{
\Return{$\tilde{\phi}^*$ and $\tilde{M}^*$}
}
}
\KwOut{$\tilde{\phi}^*$ and $\tilde{M}^*$}
\caption{Parameter initialization for the periodicity module.}
\label{alg:approx_algo}
\end{algorithm}
First, we divide the whole PTS signals $\bm{x}_{0:T}$ into the training part $D_{train} = \bm{x}_{0:T_v}$ and the validation part $D_{val} = \bm{x}_{T_v:T}$, where $T_v$ is the split time-step.
Then, the inner optimization stage is to identify the optimal parameter set $\phi^*$ that can best fit the training data:
\begin{align}
\phi^* = \argmin_{\phi} \mathcal{L}_{D_{train}}(g_\phi(t)), \quad
g_\phi(t) = A_0 + \sum_{k=1}^K A_k \cos(2\pi F_k t + P_k),
\label{eq:app_period_opt1}
\end{align}
where the hyper-parameter $K$ controls the capacity of $g_\phi(t)$ and the discrepancy training loss $\mathcal{L}_{D_{train}}$ can be instantiated as the mean squared error $\sum_{t=0}^{T_v-1} \|g_\phi(t) - x_t\|_2^2$.
Directly optimizing~\brref{eq:app_period_opt1} via gradient descent from random initialization is inefficient and time-consuming, since it involves numerous gradient updates and is easily trapped in bad local optima.
Fortunately, our instantiation of $g_\phi(t)$ as a group of cosine functions shares a similar format with the Discrete Cosine Transform (DCT)~\citep{ahmed1974dct}.
Accordingly, we conduct DCT over $\bm{x}_{0:T_v}$ and select the top-$K$ cosine bases with the largest amplitudes, which characterize the major periodic oscillations of this series, as the approximate solution $\tilde{\phi}^*$ of~\brref{eq:app_period_opt1}.
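In code, this inner stage reduces to a single DCT and a top-$K$ selection; the sketch below (ours) follows the unnormalized type-II convention of \texttt{scipy.fft.dct}, for which bin $k$ corresponds to amplitude $y_k/N$, frequency $k/(2N)$ and phase $\pi k/(2N)$.
\begin{verbatim}
import numpy as np
from scipy.fft import dct

def init_periodicity(x_train, K):
    N = len(x_train)
    y = dct(x_train, type=2)          # y_k = 2 sum_n x_n cos(pi k (2n+1)/(2N))
    A0 = y[0] / (2 * N)               # constant (mean) level
    idx = np.argsort(-np.abs(y[1:]))[:K] + 1   # top-K bases, largest |A| first
    A = y[idx] / N                    # amplitudes A_k
    F = idx / (2.0 * N)               # frequencies F_k (cycles per time step)
    P = np.pi * idx / (2.0 * N)       # phases P_k
    return A0, A, F, P

def g(t, A0, A, F, P, mask=None):
    # Evaluate g_phi^M(t); mask plays the role of the binary vector M.
    m = np.ones_like(A) if mask is None else mask
    waves = A[:, None] * np.cos(2*np.pi*F[:, None]*t[None, :] + P[:, None])
    return A0 + (m[:, None] * waves).sum(axis=0)
\end{verbatim}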
Next, we enter the outer optimization stage to select certain periods with good generalization:
\begin{align}
M^* = \argmin_{\|M\|_1 <= J} \mathcal{L}_{D_{val}} (g^M_{\phi^*}(t)), \quad
g^M_{\phi^*}(t) = A_0^* + \sum_{k=1}^K M_k \cdot A_k^* \cos(2\pi F_k^* t + P_k^*),
\label{eq:app_period_opt2}
\end{align}
where the hyper-parameter $J$ further constrains the expressiveness of $g^M_\phi(t)$ for good generalization.
Conducting exact optimization of this binary integer program is also costly, since it involves an exponentially large parameter space.
Similarly, to capture the major periodic oscillations as much as possible, we develop a greedy strategy that iterates over the selected $K$ cosine bases from the largest amplitude to the smallest and greedily assigns $1$ or $0$ to $M_k$ depending on whether the $k$-th period further reduces the discrepancy loss on the validation data.
Specifically, assuming the $K$ periods are already sorted by amplitude in descending order and indexed by $k$, we construct another surrogate function $ g_{\phi^*}^{M_j}(t) $ for the $j$-th greedy step:
\begin{align}
g_{\phi^*}^{M_j}(t) = M_j \cdot A_j^* \cos(2\pi F_j^* t + P_j^*) +
\left[A_0^* + \sum_{k=1}^{j-1} \tilde{M}_k^{*} \cdot A_k^* \cos(2\pi F_k^* t + P_k^*) \right],
\label{eq:app_greed_f}
\end{align}
where $\{\tilde{M}_k^*\}_{k=1}^{j-1}$ is determined in previous steps, $M_j$ is an integer parameter to be set in the current step.
Thus, for the $j$-th step, we are actually updating $\tilde{M}_j^*$ by
\begin{align}
\tilde{M}_j^* = \argmin_{M_j \in \{0, 1\}} \mathcal{L}_{D_{val}}(g_{\phi^*}^{M_j}(t)).
\end{align}
Besides, to tolerate the approximation errors introduced by $\tilde{\phi}^*$, which may result in shifted periodic oscillations, we use Dynamic Time Warping (DTW) to measure the discrepancy between $g_{\phi^*}^{M_j}(t)$ and $x_t$ on $D_{val}$.
We continue these greedy updates until $J$ periods are selected in total or all $K$ selected periods have been traversed.
Finally, we obtain an approximate solution $\tilde{M}^*$ of~\brref{eq:app_period_opt2}.
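A compact rendering of this greedy outer stage, reusing \texttt{g} from the sketch above, could read as follows; for brevity, the mean squared error stands in for the DTW distance used in our actual selection.
\begin{verbatim}
def select_periods(x_val, t_val, A0, A, F, P, J,
                   dist=lambda a, b: float(np.mean((a - b)**2))):
    mask = np.zeros(len(A))
    best = dist(g(t_val, A0, A, F, P, mask), x_val)
    for j in range(len(A)):            # bases already sorted by |A| descending
        if mask.sum() >= J:
            break
        trial = mask.copy()
        trial[j] = 1.0                 # tentatively switch on the j-th period
        d = dist(g(t_val, A0, A, F, P, trial), x_val)
        if d < best:                   # keep it only if validation loss drops
            mask, best = trial, d
    return mask
\end{verbatim}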
\paragraph{Complexity Analyses.}
We also provide the complexity analyses of Algorithm~\ref{alg:approx_algo}, which runs very fast in practice and takes up negligible time compared with training neural networks.
Let us denote the length of the training series as $L_t$ and the length of the validation series as $L_v$.
First, the complexity of conducting the DCT over the training series is $O(L_t \log L_t)$.
Then, the complexity of selecting the top-$K$ frequencies with the largest amplitudes is $O(L_t \log K)$, which can be ignored since $K \ll L_t$.
Next, we need to select at most $J$ frequencies greedily based on the generalization errors on the validation set.
Since we measure the generalization errors via dynamic time warping, the total worst-case complexity for this selection procedure is $O(K L_v^2)$.
In total, the worst-case complexity of our approximation algorithm for a series is $O(L_t \log L_t + K L_v^2)$.
In practice, $L_v$, the length of the validation series, is relatively small, and $K$, the maximum number of frequencies, can be regarded as a constant.
So, the squared complexity term $O(K L_v^2)$ is not a practical concern.
\section{More Details on Synthetic Experiments}
\label{sec:app_sim_exp}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.99\linewidth]{figs/app_sim_data.pdf}
\end{center}
\caption{Synthetic Data.}
\label{fig:app_sim_data}
\end{figure}
As Section~\ref{sec:sim_exp} states, we produce a TS $l_t$ via an auto-regressive process, $l_t = \sum_{i=1}^L \alpha_i l_{t-i} + \epsilon_t^l$, in which $\alpha_i$ is a coefficient for the $i$-lag dependence, and the error term $\epsilon^l_t \sim \mathcal{N}(0, \sigma^l)$ follows a zero-mean Gaussian distribution with standard deviation $\sigma^l$.
Specifically, we set $L$ as 3 and $\sigma^l$ as 1.
We leverage uniform samples from $[-1, 1]$ to initialize $\{\alpha_i\}_{i=1}^3$ and also uniformly sample three values from $[0, 5]$ for the initial points, $l_{-3}$, $l_{-2}$, and $l_{-1}$.
Then, we produce $p_t$ by sampling from another Gaussian distribution $\mathcal{N}(z_t, \sigma^p)$, in which $z_t$ is characterized by a periodic function (instantiated as $g_\phi(t)$ in Section~\ref{sec:est_period}), and $\sigma^p$ is a standard deviation to adjust the degree of dispersion for periodic samples.
Specifically, we also set $\sigma^p$ as 1 and produce $z_t$ via a composition of three cosine bases, $8\cos(2\pi(t+2)/50)$, $4\cos(2\pi(t+3)/10)$, $2\cos(2\pi t/4)$, and a base level, $30$.
These three cosine bases represent long-term, mid-term, and short-term periodic effects, respectively, which is similar to the circumstances in practice.
Next, we take three types of $f^c(l_t, p_t)$, $(l_t+p_t)$, $(l_t+p_t)^2$, and $(l_t+p_t)^3$, to characterize the linear, quadratic, and cubic dependencies of $x_t$ on $l_t$ and $p_t$, respectively.
We repeat the above procedure for 5000 time steps and divide them into 4000, 100, and 900 for training, validation, and evaluation, respectively.
Figure~\ref{fig:app_sim_data} shows the first 1000 time steps of these synthetic series.
Note that after data generation, all models only have access to the final mixed signal $x_t$ for training and evaluation.
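For reproducibility, the whole generation procedure can be summarized by the following sketch (the random seed is an arbitrary choice of ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)           # seed is our arbitrary choice
T = 5000
alpha = rng.uniform(-1, 1, size=3)       # AR(3) coefficients
l = list(rng.uniform(0, 5, size=3))      # initial points l_{-3}, l_{-2}, l_{-1}
for _ in range(T):                       # l_t = sum_i alpha_i l_{t-i} + eps_t
    l.append(alpha @ np.array(l[-3:])[::-1] + rng.normal(0, 1.0))
l = np.array(l[3:])

t = np.arange(T)
z = (30 + 8*np.cos(2*np.pi*(t+2)/50)     # long-term period
        + 4*np.cos(2*np.pi*(t+3)/10)     # mid-term period
        + 2*np.cos(2*np.pi*t/4))         # short-term period
p = rng.normal(z, 1.0)                   # periodic samples with sigma_p = 1

x_linear, x_quad, x_cubic = l + p, (l + p)**2, (l + p)**3
\end{verbatim}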
Moreover, as illustrated in Section~\ref{sec:sim_exp}, we search for the best lookback length ($L$) for N-BEATS and the best number of periods ($J$) for DEPTS.
The lookback length for DEPTS is fixed as 3, which is also determined by hyper-parameter tuning on the validation set.
Figure~\ref{fig:app_sim_exp} shows detailed comparisons of N-BEATS and DEPTS for different configurations of $L$ and $J$.
We can see that N-BEATS always needs a relatively long lookback window, such as 48 or 96 time steps, to capture those periodic patterns effectively.
Besides, further increasing the lookback length introduces more irrelevant noise, which overwhelms the effective predictive signals and thus results in worse performance.
In contrast, with effective periodicity modeling, DEPTS can achieve better performance by using a short lookback window, which is also consistent with the auto-regressive process that governs the local momenta.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.99\linewidth]{figs/app_sim_exp.pdf}
\end{center}
\caption{Performance comparisons of N-BEATS and DEPTS with different lookback lengths ($L$) and number of periods ($J$).}
\label{fig:app_sim_exp}
\end{figure}
\section{More Details on Real-world Experiments}
\label{sec:app_real_exp}
\subsection{Datasets}
\label{sec:app_real_data}
Table~\ref{tab:app_dataset} includes main statistics of the five datasets used by our experiments.
We can see that the existing datasets (\textsc{Electricity}, \textsc{Traffic}, and \textsc{M4 (Hourly)}) utilized by recent studies usually have a large number of series but relatively short lengths.
Therefore, it is hard to identify or evaluate yearly or quarterly periods on these benchmarks.
In contrast, \textsc{Caiso} and \textsc{NP} contain tens of series with lengths of several years, which better illustrate the inherent periodicity of these series and serve as complementary benchmarks for PTS modeling.
\begin{table}[h]
\centering
\caption{Dataset statistics.}
\begin{tabular}{l|rrrrr}
\toprule
Dataset& \textsc{Electricity}& \textsc{Traffic} & \textsc{M4 (Hourly)} & \textsc{Caiso} & \textsc{NP} \\
\midrule
\# Series& 370& 963& 414 &10& 18\\
Frequency& hourly & hourly & hourly & hourly & hourly \\
Start Date & 2012-01-01& 2008-01-02 & n/a & 2013-01-01 & 2013-01-01\\
End Date & 2015-01-01& 2009-03-31 & n/a &2020-12-31 &2020-12-31 \\
Min. Length& 4008& 10560& 700 & 37272&69984\\
Max. Length& 26304& 10560& 960 &70128& 70128\\
Avg. Length& 24556& 10560& 854 &54259&70120\\
Max. Value& 764500&1.0000& 352000 &49909 & 27513\\
Avg. Value& 2378.9&0.0528& 1351.6 &5582.4& 4671.4\\
\bottomrule
\end{tabular}
\label{tab:app_dataset}
\end{table}
\begin{table}[h]
\centering
\caption{Hyper-parameters of N-BEATS on \textsc{Caiso} and \textsc{NP}.}
\begin{tabular}{lcccccccc}
\toprule
\multirow{1}{*}{Dataset} &\multicolumn{4}{c}{\textsc{Caiso / NP}} \\
\multirow{1}{*}{Split} &\multicolumn{1}{c}{2020-01-01} &\multicolumn{1}{c}{2020-04-01} & \multicolumn{1}{c}{2020-07-01} & \multicolumn{1}{c}{2020-10-01} \\
\midrule
Iterations& \multicolumn{4}{c}{4000 / 12000}\\
Loss& \multicolumn{4}{c}{sMAPE} \\
Forecast horizon ($H$) & \multicolumn{4}{c}{24} \\
Lookback horizon & \multicolumn{4}{c}{$2H, 3H, 4H, 5H, 6H, 7H$} \\
Training horizon & \multicolumn{4}{c}{$720H$ (most recent points before the split)} \\
Layer number & \multicolumn{4}{c}{30} \\
Layer size & \multicolumn{4}{c}{512} \\
Batch size& \multicolumn{4}{c}{1024} \\
Learning rate & \multicolumn{4}{c}{1e-3 / 1e-6} \\
Optimizer & \multicolumn{4}{c}{Adam~\citep{kingma2014adam}} \\
\bottomrule
\end{tabular}
\label{tab:hyper_nb_cai_np}
\end{table}
\begin{table}[h]
\centering
\caption{Hyper-parameters of DEPTS on \textsc{Electricity}, \textsc{Traffic}, and \textsc{M4 (Hourly)}.}
\begin{tabular}{lcc|cc|c}
\toprule
\multirow{1}{*}{Dataset} &\multicolumn{2}{c}{\textsc{Electricity}} & \multicolumn{2}{c}{\textsc{Traffic}}& \multicolumn{1}{c}{\multirow{2}{*}{\textsc{M4 (Hourly)}}}\\
Split& 2014-09-01 & 2014-12-25 & 2008-06-15 & 2009-03-24& \\
\midrule
Iterations & \multicolumn{2}{c|}{72000} & \multicolumn{3}{c}{12000} \\
Loss & \multicolumn{4}{c|}{sMAPE} & MASE \\
Forecast horizon ($H$) & \multicolumn{4}{c|}{24} & 48 \\
Lookback horizon & \multicolumn{4}{c|}{$2H, 3H, 4H, 5H, 6H, 7H$} & $4H, 5H, 6H, 7H$\\
Training horizon & \multicolumn{5}{c}{$10H$} \\
$J$ & 4 & 32 & \multicolumn{2}{c|}{8} & 1 \\
$K$ & \multicolumn{5}{c}{128} \\
Layer number & \multicolumn{5}{c}{30} \\
Layer size & \multicolumn{5}{c}{512} \\
Batch size & \multicolumn{5}{c}{1024} \\
Learning rate ($f_\theta$) & \multicolumn{5}{c}{1e-3} \\
Learning rate ($g_\phi$) & \multicolumn{5}{c}{5e-7} \\
Optimizer & \multicolumn{5}{c}{Adam~\citep{kingma2014adam}} \\
\bottomrule
\end{tabular}
\label{tab:hyper_de_ele_tra_m4}
\end{table}
\begin{table}[h]
\centering
\caption{Hyper-parameters of DEPTS on \textsc{Caiso} and \textsc{NP}.}
\begin{tabular}{lc|c|c|ccccc}
\toprule
\multirow{1}{*}{Dataset} &\multicolumn{4}{c}{\textsc{Caiso / NP}} \\
\multirow{1}{*}{Split} &\multicolumn{1}{c}{2020-01-01} &\multicolumn{1}{c}{2020-04-01} & \multicolumn{1}{c}{2020-07-01} & \multicolumn{1}{c}{2020-10-01} \\
\midrule
Iterations& \multicolumn{4}{c}{4000 / 12000}\\
Loss& \multicolumn{4}{c}{sMAPE} \\
Forecast horizon ($H$) & \multicolumn{4}{c}{24} \\
Lookback horizon & \multicolumn{4}{c}{$2H, 3H, 4H, 5H, 6H, 7H$} \\
Training horizon & \multicolumn{4}{c}{$720H$} \\
$J$ & 8 / 8 & 32 / 8 & 32 / 32 & 8 / 32 \\
K & \multicolumn{4}{c}{128} \\
Layer number & \multicolumn{4}{c}{30} \\
Layer size & \multicolumn{4}{c}{512} \\
Batch size & \multicolumn{4}{c}{1024} \\
Learning rate ($f_\theta$) & \multicolumn{4}{c}{1e-3 / 1e-6} \\
Learning rate ($g_\phi$) & \multicolumn{4}{c}{5e-7} \\
Optimizer & \multicolumn{4}{c}{Adam~\citep{kingma2014adam}} \\
\bottomrule
\end{tabular}
\label{tab:hyper_de_cai_np}
\end{table}
\subsection{Hyper-parameters}
\label{sec:app_hyper_param}
For N-BEATS, we use its default hyper-parameters\footnote{\url{https://github.com/ElementAI/N-BEATS}} for \textsc{Electricity}, \textsc{Traffic}, and \textsc{M4 (Hourly)}, and we report its hyper-parameters searched on \textsc{Caiso} and \textsc{NP} in Table~\ref{tab:hyper_nb_cai_np}.
Besides, N-BEATS uses multiple loss functions, such as sMAPE or MASE, for model training, and we follow these setups.
Tables~\ref{tab:hyper_de_ele_tra_m4} and~\ref{tab:hyper_de_cai_np} include the hyper-parameters of DEPTS for all five datasets.
Note that all these hyper-parameters are searched on a validation set, which is defined as the last week before the test split.
Moreover, for a typical dataset with multiple series, we build an independent periodicity module $g_\phi$ for each series and perform respective parameter initialization procedures as described in Appendix~\ref{sec:app_init_period}.
Then, for all datasets (splits), we train 30 models (6 lookback lengths $\times$ 5 random seeds) for both N-BEATS and DEPTS and then produce ensemble forecasts for fair and robust evaluation.
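In code, this ensembling amounts to a simple elementwise aggregation over the 30 forecast tensors; the median aggregator in the sketch below is our assumption (a common choice for N-BEATS-style ensembles), since the exact aggregation function is not pinned down here.
\begin{verbatim}
import numpy as np

def ensemble_forecasts(per_model_preds):
    # per_model_preds: list of 30 arrays (6 lookbacks x 5 seeds),
    # each of shape (num_series, H); aggregate elementwise.
    return np.median(np.stack(per_model_preds, axis=0), axis=0)
\end{verbatim}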
\section{Ablation Tests}
\label{sec:app_abla_test}
As Figure~\ref{fig:abla_archs} shows, we adopt three ablated variants of DEPTS to demonstrate our critical designs in the expansion module (Section~\ref{sec:tri_res}):
\begin{itemize}
\item \textbf{DEPTS-1}: removing the residual connection of $(\bm{x}^{(\ell-1)}_{t-L:t} - \bm{v}^{(\ell)}_{t-L:t})$ so that the outputs of the local block $\bm{u}^{(\ell)}_{t-L:t}$ are only conditioned on the raw PTS signals $\bm{x}_{t-L:t}$, which correspond to the mixed observations of local momenta and global periodicity.
\item \textbf{DEPTS-2}: removing the residual connection of $(\hat{\bm{x}}^{(\ell-1)}_{t:t+H} + \bm{v}^{(\ell)}_{t:t+H})$ so that the contributions to the forecasts only come from the local block, which takes in the signals excluding periodic effects progressively.
\item \textbf{DEPTS-3}: removing the residual connection of $(\bm{z}^{(\ell-1)}_{t-L:t+H} - \bm{v}^{(\ell)}_{t-L:t+H})$ so that the inputs to the periodic block of each layer are the same hidden variables $\bm{z}_{t-L:t+H}$.
\end{itemize}
We also construct another four baselines to demonstrate the importance of our customized periodicity learning:
\begin{itemize}
\item \textbf{NoPeriod}: removing the periodic blocks by directly feeding $(x_t-z_t)$ to N-BEATS.
\item \textbf{RandInit}: randomly initializing periodic coefficients ($\phi$) and directly applying the end-to-end learning.
\item \textbf{FixPeriod}: fixing the periodic coefficients ($\phi$) after the initialization stage and only tuning $\theta$ during end-to-end optimization.
\item \textbf{MultiVar}: treating $z_t$ as a covariate of $x_t$ and feeding $(x_t, z_t)$ into an N-BEATS-style model via two channels.
\end{itemize}
Moreover, as illustrated in Section~\ref{sec:est_period} and Appendix~\ref{sec:app_init_period}, the maximal number of selected periods $J$ is a critical hyper-parameter to balance expressiveness and generalization of $g_\phi(t)$.
Thus, we conduct experiments with different $J$ to verify its sensitivity on different datasets.
Tables~\ref{tab:abla_res} and~\ref{tab:abla_res_2} include experimental results of these model variants on \textsc{Electricity}, \textsc{Traffic}, \textsc{Caiso}, and \textsc{NP} with different $J$.
Since we only identify one reliable period via Algorithm~\ref{alg:approx_algo} on \textsc{M4 (Hourly)}, we report its results separately in Table~\ref{tab:abla_res_m4}.
\begin{figure}[h]
\centering
\subfigure[DEPTS]{\includegraphics[width=0.41\linewidth]{figs/ablation0.pdf}}
\hspace{6mm}
\subfigure[DEPTS-1]{\includegraphics[width=0.41\linewidth]{figs/ablation1version2.pdf}} \\
\centering
\subfigure[DEPTS-2]{\includegraphics[width=0.41\linewidth]{figs/ablation2version2.pdf}}
\hspace{6mm}
\subfigure[DEPTS-3]{\includegraphics[width=0.41\linewidth]{figs/ablation3version2.pdf}}
\caption{The residual structures of DEPTS and its three ablated variants, where the dashed line denotes the removed connection.}
\label{fig:abla_archs}
\end{figure}
\begin{table}[h]
\centering
\caption{Performance comparisons of DEPTS-1, DEPTS-2, DEPTS-3, and DEPTS.}
\begin{tabular}{llcccccccc}
\toprule
$J$ & Model & \multicolumn{2}{c}{\textsc{Electricity}} & \multicolumn{2}{c}{\textsc{Traffic}} & \multicolumn{2}{c}{\textsc{Caiso}} & \multicolumn{2}{c}{\textsc{NP}} \\
&\multirow{1}{*}{} & \multicolumn{2}{c}{2014-12-25} & \multicolumn{2}{c}{2009-03-24} & \multicolumn{2}{c}{2020-10-01} & \multicolumn{2}{c}{2020-10-01} \\
& & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} \\
\midrule
4 &DEPTS-1& 0.15870& 0.97571& 0.11167& 0.40790& 0.02429& 0.05184& 0.19818& 0.31224 \\
&DEPTS-2& 0.15391& 0.99258& 0.10786& \textbf{0.39716}& 0.02190& 0.04402& 0.19213& \textbf{0.30317} \\
&DEPTS-3& 0.14955& 0.96602& 0.10784& 0.39811& \textbf{0.01951}& \textbf{0.04087}& 0.19201& 0.30469 \\
&DEPTS& \textbf{0.14931}& \textbf{0.96488}& \textbf{0.10745}& 0.39730& 0.02061& 0.04373& \textbf{0.19128}& {0.30381} \\
\midrule
8 &DEPTS-1& 0.15632& 0.98683& 0.11108& 0.40529& 0.02256& 0.04634& 0.19194& 0.30167 \\
&DEPTS-2& 0.15070& 0.97949& 0.10688& \textbf{0.39421}& 0.02103& 0.04329& \textbf{0.18412}& \textbf{0.28983} \\
&DEPTS-3& \textbf{0.14908}& 0.96270& 0.10714& 0.39687& 0.02017& 0.04532& 0.18618& 0.29525 \\
&DEPTS& 0.14929& \textbf{0.95627}& \textbf{0.10653}& 0.39567& \textbf{0.02008}& \textbf{0.04176}& 0.18475& 0.29214 \\
\midrule
16 &DEPTS-1& 0.14954& 0.92162& 0.11216& 0.40335& 0.02419& 0.05041& 0.18740& 0.29442 \\
&DEPTS-2& 0.14719& 0.95648& 0.10806& 0.39448& 0.02236& 0.04672& 0.18124& \textbf{0.28434} \\
&DEPTS-3& 0.14742& \textbf{0.94554}& \textbf{0.10678}& \textbf{0.39444}& \textbf{0.01991}& 0.04384& 0.18180& 0.28786 \\
&DEPTS& \textbf{0.14653}& 0.94929& 0.10770& 0.39554& 0.02116& \textbf{0.04276}& \textbf{0.18095}& 0.28620 \\
\midrule
32 &DEPTS-1& 0.14730& 0.90305& 0.11425& 0.39968& 0.02476& 0.05193& 0.18445& 0.28860 \\
&DEPTS-2& 0.14765& 0.95478& 0.11061& 0.39699& 0.02171& 0.04526& 0.18024& 0.28110 \\
&DEPTS-3& 0.14179& 0.90319& \textbf{0.10801}& \textbf{0.39403}& \textbf{0.01975}& \textbf{0.04270}& 0.18057& 0.28355 \\
&DEPTS& \textbf{0.13915}& \textbf{0.87498}& 0.11076& 0.39453& 0.02156& 0.04446& \textbf{0.17885}& \textbf{0.28031} \\
\bottomrule
\end{tabular}
\label{tab:abla_res}
\end{table}
\begin{table}[h]
\centering
\caption{Performance comparisons of NoPeriod, RandInit, FixPeriod, MultiVar, and DEPTS.}
\begin{tabular}{llcccccccc}
\toprule
$J$ & Model & \multicolumn{2}{c}{\textsc{Electricity}} & \multicolumn{2}{c}{\textsc{Traffic}} & \multicolumn{2}{c}{\textsc{Caiso}} & \multicolumn{2}{c}{\textsc{NP}} \\
&\multirow{1}{*}{} & \multicolumn{2}{c}{2014-12-25} & \multicolumn{2}{c}{2009-03-24} & \multicolumn{2}{c}{2020-10-01} & \multicolumn{2}{c}{2020-10-01} \\
& & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} \\
\midrule
4
&NoPeriod &0.20615&1.27117&0.11829&0.40465&0.08195&0.14208&0.27128&0.40491\\
&RandInit &0.17677&1.07514&0.11051&0.40383&0.02504&0.05293&0.20869&0.32853\\
&FixPeriod&0.16756&0.99876&0.10816&0.39833&0.02282&0.04576&0.20145&0.31604\\
&MultiVar&0.15743&1.02039&\textbf{0.10733}&\textbf{0.39635}&\textbf{0.02018}&\textbf{0.04038}&0.19998&0.31393\\
&DEPTS& \textbf{0.14931}& \textbf{0.96488}& 0.10745& 0.39730& 0.02061& 0.04373& \textbf{0.19128}& \textbf{0.30381} \\
\midrule
8
&NoPeriod &0.23969&1.47537&0.11940&0.40536&0.08182&0.15585&0.24796&0.37781\\
&RandInit &0.17463&1.05695&0.11065&0.40500&0.02639&0.05680&0.20972&0.33044\\
&FixPeriod&0.16431&0.98414&0.10796&0.39764&0.02228&0.04661&0.20163&0.31666\\
&MultiVar&0.15482&0.99266&0.10667&0.39599&0.02160&0.04855&0.19494&0.30513\\
&DEPTS&\textbf{0.14929}&\textbf{0.95627}& \textbf{0.10653}&\textbf{0.39567}& \textbf{0.02008}& \textbf{0.04176}& \textbf{0.18475}& \textbf{0.29214} \\
\midrule
16
&NoPeriod &0.26851&1.65571&0.12203&0.40410&0.07275&0.15280&0.23281&0.34943\\
&RandInit &0.18167&1.08529&0.11048&0.40287&0.02496&0.05347&0.20917&0.32936\\
&FixPeriod&0.15792&0.95356&0.10803&0.39699&0.02138&0.04417&0.20293&0.31963\\
&MultiVar&0.15479&0.98805&\textbf{0.10724}&\textbf{0.39366}&0.02289&0.05023&0.19045&0.29784\\
&DEPTS& \textbf{0.14653}& \textbf{0.94929}& 0.10770& 0.39554& \textbf{0.02116}& \textbf{0.04276}& \textbf{0.18095}& \textbf{0.28620} \\
\midrule
32
&NoPeriod &0.31358&1.73706&0.12835&0.40741&0.07696&0.15433&0.22503&0.34109\\
&RandInit &0.19399&1.14278&0.11075&0.40082&0.02577&0.05723&0.20916&0.32944\\
&FixPeriod&0.15539&0.92896&\textbf{0.10862}&0.39637&\textbf{0.02055}&\textbf{0.04115}&0.20272&0.31940\\
&MultiVar &0.16405&1.01227&0.10907&0.39596&0.02159&0.04582&0.18844&0.29294\\
&DEPTS& \textbf{0.13915}& \textbf{0.87498}& 0.11076& \textbf{0.39453}& 0.02156& 0.04446& \textbf{0.17885}& \textbf{0.28031} \\
\bottomrule
\end{tabular}
\label{tab:abla_res_2}
\end{table}
\begin{table}[h]
\centering
\caption{Overall ablation studies on \textsc{M4 (Hourly)}.}
\begin{tabular}{lcccc}
\toprule
&DEPTS-1&DEPTS-2&DEPTS-3&DEPTS\\
\midrule
\textit{nd}&0.03712&0.02161&0.02252&\textbf{0.02050}\\
\textit{nrmse}&0.22785&0.07710&0.08851&\textbf{0.06872}\\
\toprule
&NoPeriod&RandInit&FixPeriod&MultiVar\\
\midrule
\textit{nd}&0.03848&0.02401&0.02526&0.02937\\
\textit{nrmse}&0.16568&0.09192&0.11221&0.14130\\
\bottomrule
\end{tabular}
\label{tab:abla_res_m4}
\end{table}
First, as Tables~\ref{tab:abla_res} and~\ref{tab:abla_res_2} show, $J$ is a crucial hyper-parameter that has huge impacts on forecasting performance.
The reason is that if $J$ is too small, the periodicity module $g_\phi$ cannot produce effective representations of the inherent periodicity to boost the predictive ability.
If it is too large, $g_\phi$ has a high risk of over-fitting to the irrelevant noise contained in the training data, which also results in poor predictive performance.
Moreover, the interactions between local momenta and global periodicity may vary over time.
Therefore, it is critical to search for a proper $J$ for each PTS and each split point to pursue better performance.
Fortunately, we demonstrate that the hyper-parameter tuning of $J$ on the validation set can ensure its good generalization abilities on the subsequent test horizons.
Then, let us focus on Table~\ref{tab:abla_res} to compare DEPTS with DEPTS-1, DEPTS-2, and DEPTS-3.
First, we can see that DEPTS-1 produces the worst performance in most cases, which demonstrates that excluding periodic effects from raw PTS signals can stably and significantly boost the performance of PTS forecasting.
Second, for most cases, DEPTS outperforms DEPTS-2, and the performance gaps can be remarkable, such as $0.139$ vs. $0.148$ on \textsc{Electricity} and $0.020$ vs. $0.021$ on \textsc{Caiso}.
These results verify the importance of including the portion of forecasts solely from the periodicity module.
Third, DEPTS-3 can produce competitive results compared with DEPTS in many cases.
Nevertheless, after selecting the best $J$ for each dataset, DEPTS still slightly outperforms DEPTS-3 in most cases.
Besides, from Table~\ref{tab:abla_res_m4}, we also observe that DEPTS performs much better than DEPTS-3 on \textsc{M4 (Hourly)}.
Thus we retain the residual connection to reduce the periodic effects leveraged by previous layers.
Next, let us focus on Table~\ref{tab:abla_res_2} to compare DEPTS with the other four baselines, NoPeriod, RandInit, FixPeriod, and MultiVar.
First, we observe that NoPeriod usually produces the worst performance.
The reason is that $(x_t - z_t)$ denotes the raw time-series signal after subtracting the periodic effect, so it is challenging for the model to forecast the future signals, $x_{t:t+H}$, solely based on the periodicity-agnostic inputs, $(x_{t-L:t} - z_{t-L:t})$.
Second, RandInit also produces much worse results than DEPTS, which demonstrates the importance of initializing periodic coefficients (Section~\ref{sec:est_period}).
Third, DEPTS performs much better than FixPeriod in most cases, which demonstrates the effectiveness of fine-tuning periodic coefficients after the initialization stage.
Last, we observe that sometimes MultiVar can produce comparable and even slightly better results than DEPTS.
However, after selecting the best $J$ on each dataset for these two models, we find that DEPTS still outperforms MultiVar consistently and significantly, which also demonstrates the superiority of our expansion learning.
Moreover, as Table~\ref{tab:abla_res_m4} shows, DEPTS outperforms all these baselines by a large margin on \textsc{M4 (Hourly)}, which contains very short PTS with 854 observations on average.
Given limited data, all our critical designs, such as properly initializing periodic coefficients, fine-tuning periodic coefficients, and conducting expansion learning to decouple the dependencies of $x_t$ on $z_t$, play crucial roles in producing accurate forecasts.
\section{More Case Studies and Interpretability Analyses}
\label{sec:app_case_interp}
In the following, we further study the interpretable effects of DEPTS with more cases.
Figures~\ref{fig:app_intrep1},~\ref{fig:app_intrep2},~\ref{fig:app_intrep3}, and~\ref{fig:app_intrep4} show two additional cases on \textsc{Electricity}, \textsc{Traffic}, \textsc{Caiso}, and \textsc{NP}, respectively.
Following Figure~\ref{fig:interp}, we compare the forecasts of N-BEATS and DEPTS on the left side, differentiate the forecasts of DEPTS into the local part (DEPTS-L) and the periodic part (DEPTS-P) in the middle, and plot the hidden state $z_t$ together with the PTS signals on the right side.
The general observations are that with the help of explicit periodicity modeling, DEPTS achieves better performance than N-BEATS in PTS forecasting, and DEPTS has learned diverse behaviors for different cases.
Besides, we also include their critical periodic coefficients (amplitude $A_k$, frequency $F_k$, and phase $P_k$) in Tables~\ref{table:app_intrep1},~\ref{table:app_intrep2},~\ref{table:app_intrep3}, and~\ref{table:app_intrep4}.
We find that DEPTS can learn many meaningful periods that are consistent with practical domains.
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{figs/inter_electricity1.pdf}
\includegraphics[width=\linewidth]{figs/inter_electricity2.pdf}
\end{center}
\caption{We show two cases on the \textsc{Electricity} dataset. It is clear that, other than following some inherent periodicity, the real PTS signals usually have various irregular oscillations at different time steps, while DEPTS can produce more stable forecasts by analyzing local momenta and global periodicity simultaneously. For these two cases with evident and stable periodicity, DEPTS relies more on the periodic forecasts (DEPTS-P) and thus achieves more competitive and stable results.}
\label{fig:app_intrep1}
\end{figure}
\begin{table}[h]
\centering
\caption{Periodic coefficients of the two \textsc{Electricity} examples shown in Figure~\ref{fig:app_intrep1}. We find that DEPTS has learned both short-term and long-term periods, such as three hours ($|1/F_k| \approx 3$), six hours ($|1/F_k| \approx 6$), 12 hours ($|1/F_k| \approx 12$), one day ($|1/F_k| \approx 24$), and half a year ($|1/F_k| \approx 4380$), which are very similar to the patterns of electricity utilization in practice.}
\begin{tabular}{rrr|rrr}
\toprule
\multicolumn{6}{c}{\textsc{Electricity}} \\
\multicolumn{3}{c|}{id 224} & \multicolumn{3}{c}{id 235} \\
$|A_k|$&$|1/F_k|$&$|P_k|$&$|A_k|$&$|1/F_k|$&$|P_k|$\\
\midrule
362.601& 23.995& 0.088&160.144& 23.997& 0.092\\
196.829& 8320.428& 0.422&77.804& 8256.523& 0.451\\
87.138& 4470.598& 0.487&36.918& 23.969& 0.092\\
66.418& 24.035& 0.122&19.517& 23.921& 0.102\\
52.736& 11.999& 0.052&17.714& 4.800& 0.024\\
44.248& 23.920& 0.096&12.810& 11.993& 0.054\\
27.172& 6.000& 0.027&11.964& 6068.298& 0.684\\
23.220& 6.001& 0.030&11.186& 3.000& 0.015\\
\bottomrule
\end{tabular}
\label{table:app_intrep1}
\end{table}
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{figs/inter_traffic1.pdf}
\includegraphics[width=\linewidth]{figs/inter_traffic2.pdf}
\end{center}
\caption{We show two cases on the \textsc{Traffic} dataset. We can see that DEPTS is able to characterize quite different periodic effects. For the upper case, there are unexpected peaks at different time steps. For the bottom case, there are different types of periodic oscillations. Similar to the cases in Figure~\ref{fig:app_intrep1}, DEPTS has estimated roughly consistent periodic states $z_t$ and then combined DEPTS-P and DEPTS-L to produce stable and accurate forecasts.}
\label{fig:app_intrep2}
\end{figure}
\begin{table}[h]
\centering
\caption{Periodic coefficients of the two \textsc{Traffic} examples shown in Figure~\ref{fig:app_intrep2}. We find that DEPTS has also learned multiple types of periods.}
\begin{tabular}{ccc|ccc}
\toprule
\multicolumn{6}{c}{\textsc{Traffic}} \\
\multicolumn{3}{c|}{id 398} & \multicolumn{3}{c}{id 532} \\
$|A_k|$&$|1/F_k|$&$|P_k|$&$|A_k|$&$|1/F_k|$&$|P_k|$\\
\midrule
0.0231& 23.993& 0.233&0.0267& 23.987& 0.209\\
0.0066& 164.055& 2.293&0.0122& 3192.218& 0.527\\
0.0062& 1845.505& 2.226&0.0104& 4906.324& 0.382\\
0.0054& 11.999& 0.200&0.0066& 24.270& 0.434\\
0.0046& 8.003& 0.205&0.0048& 1265.645& 0.330\\
0.0045& 23.920& 0.234&0.0039& 4.801& 0.114\\
0.0044& 28.097& 0.282&0.0032& 23.637& 0.332\\
0.0038& 12.011& 0.513&0.0030& 28.053& 0.248\\
\bottomrule
\end{tabular}
\label{table:app_intrep2}
\end{table}
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{figs/inter_caiso1.pdf}
\includegraphics[width=1\linewidth]{figs/inter_caiso2.pdf}
\end{center}
\caption{We show two cases on the \textsc{Caiso} dataset. These two cases present relatively regular oscillations, and thus N-BEATS with long enough lookback windows can also produce fairly good forecasts. Even so, DEPTS better captures the curves of future PTS signals by modeling their dependencies on the estimated periodicity. We can see that DEPTS first relies on the periodic part (DEPTS-P) to form the basic shape of the forecasts and then leverages the forecasts from the local part (DEPTS-L) to stretch or condense the forecasting curve.}
\label{fig:app_intrep3}
\end{figure}
\begin{table}[h]
\centering
\caption{Periodic coefficients of the two \textsc{Caiso} examples shown in Figure~\ref{fig:app_intrep3}. Other than daily and yearly periods, which are observed similarly in \textsc{Electricity} and \textsc{Traffic} cases, we find that DEPTS has identified some weekly periods ($|1/F_k| \approx 168$) for two cases on \textsc{Caiso}.}
\begin{tabular}{ccc|ccc}
\toprule
\multicolumn{6}{c}{\textsc{Caiso}} \\
\multicolumn{3}{c|}{id 4}&\multicolumn{3}{c}{id 1} \\
$|A_k|$&$|1/F_k|$&$|P_k|$&$|A_k|$&$|1/F_k|$&$|P_k|$\\
\midrule
3411.731& 24.004& 0.000&1851.303& 24.002& 0.307\\
3207.279& 8344.825& 0.081&1754.576& 8629.984& 0.606\\
1712.536& 4509.138& 0.042&720.007& 23.934& 0.704\\
1493.276& 23.934& 0.000&625.465& 4299.993& 0.334\\
1434.023& 23.992& 0.000&536.312& 167.907& 0.369\\
1225.926& 9408.412& 0.086&452.345& 11.999& 0.086\\
963.309& 24.062& 0.000&409.348& 24.018& 0.323\\
854.321& 168.236& 0.001&326.875& 24.069& 0.215\\
\bottomrule
\end{tabular}
\label{table:app_intrep3}
\end{table}
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{figs/inter_np1.pdf}
\includegraphics[width=1\linewidth]{figs/inter_np2.pdf}
\end{center}
\caption{We show two cases on the \textsc{NP} dataset. We can see that these cases are rather difficult, and both N-BEATS and DEPTS struggle to make sufficiently accurate forecasts. Nevertheless, as shown on the right side, DEPTS has a relatively stable estimation of the future trend and thus obtains relatively good performance in forecasting future curves.}
\label{fig:app_intrep4}
\end{figure}
\begin{table}[h]
\centering
\caption{Periodic coefficients of the two \textsc{NP} examples shown in Figure~\ref{fig:app_intrep4}. We can see that the dominant periods belong to the long-term type, which characterizes the overall variation but omits those local volatile oscillations. Since this dataset contains massive noise in local oscillations, in some splits, N-BEATS even produces forecasts that are inferior to the projections of simple statistical approaches, such as PARMA, as shown in Table~\ref{tab:cai_np_exp}.}
\begin{tabular}{ccc|ccc}
\toprule
\multicolumn{6}{c}{\textsc{NP}} \\
\multicolumn{3}{c}{id 1}&\multicolumn{3}{c}{id 10} \\
$|A_k|$&$|1/F_k|$&$|P_k|$&$|A_k|$&$|1/F_k|$&$|P_k|$\\
\midrule
252.601& 8529.557& 0.960&2131.096& 8506.317& 0.809\\
140.967& 670.924& 0.715&1643.707& 24.003& 0.212\\
134.343& 366.735& 0.668&1158.437& 366.648& 0.797\\
117.755& 24.004& 0.378&1132.171& 670.602& 0.734\\
107.298& 244.195& 0.768&942.847& 244.106& 0.988\\
97.824& 794.904& 0.376&909.762& 795.685& 0.417\\
69.710& 6217.354& 0.746&867.654& 12.001& 0.309\\
65.251& 182.112& 0.452&729.163& 737.857& 0.696\\
\bottomrule
\end{tabular}
\label{table:app_intrep4}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we develop a novel DL framework, DEPTS, for PTS forecasting.
Our core contributions are to model complicated periodic dependencies and to capture sophisticated compositions of diversified periods simultaneously.
Extensive experiments on both synthetic data and real-world data demonstrate the effectiveness of DEPTS on handling PTS.
Moreover, periodicity modeling is actually an old and crucial topic for traditional TS modeling but is rarely studied in the context of DL.
Thus we hope that the new DL framework, together with the two new benchmarks featuring evident periodicity and sufficiently long observations, can facilitate more future research on PTS.
\section{Experiments}
\label{sec:exp}
Our empirical studies aim to answer three questions.
1) Why is it important to model the complicated dependencies of PTS signals on its inherent periodicity?
2) How much benefit can DEPTS gain for PTS forecasting compared with existing state-of-the-art models?
3) What kind of interpretability can DEPTS offer based on our two customized modules, $f_\theta$ and $g_\phi$?
To answer the first two questions, we conduct extensive experiments on both synthetic data and real-world data, which are illustrated in Section~\ref{sec:sim_exp} and~\ref{sec:real_exp}, respectively.
Then, Section~\ref{sec:interp} answers the third question by comparing and interpreting model behaviors for specific cases.
\paragraph{Baselines.}
We adopt the state-of-the-art DL architecture, N-BEATS~\citep{Oreshkin2020N-BEATS}, as our primary baseline since it has been shown to outperform a wide range of DL models, including MatFact~\citep{yu2016temporal}, Deep State~\citep{rangapuram2018deep}, Deep Factors~\citep{wang2019deep}, and DeepAR~\citep{salinas2020deepar}, and many competitive hybrid methods~\citep{montero2020fforma,smyl2020hybrid}.
Besides, we also include PARMA as a reference to gauge where conventional statistical models stand.
For this baseline, we leverage the AutoARIMA implementation provided by~\cite{loning2019sktime} to search for the best configurations automatically.
\paragraph{Evaluation Metrics.}
To compare different models, we utilize the following two metrics,
normalized deviation, abbreviated as \textit{nd},
and normalized root-mean-square error, denoted as \textit{nrmse},
which are conventionally adopted by \cite{yu2016temporal,rangapuram2018deep,salinas2020deepar,Oreshkin2020N-BEATS} on PTS-related benchmarks.
\begin{align}
nd = \frac{\frac{1}{|\Omega|} \sum_{(i,t) \in \Omega} |x^i_t - \hat{x}^i_t|}
{\frac{1}{|\Omega|} \sum_{(i,t) \in \Omega} |x^i_t|}, \quad
nrmse = \frac{\sqrt{\frac{1}{|\Omega|} \sum_{(i,t) \in \Omega} (x^i_t - \hat{x}^i_t)^2 }}
{\frac{1}{|\Omega|} \sum_{(i,t) \in \Omega} |x^i_t|},
\label{eq:metric}
\end{align}
where $i$ is the index of TS in a dataset, $t$ is the time index, and $\Omega$ denotes the whole evaluation space.
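For reference, a direct NumPy transcription of these two metrics, with the evaluation space $\Omega$ flattened into arrays, is:
\begin{verbatim}
import numpy as np

def nd(x, x_hat):
    # normalized deviation over all (i, t) in Omega
    return np.sum(np.abs(x - x_hat)) / np.sum(np.abs(x))

def nrmse(x, x_hat):
    # normalized root-mean-square error over all (i, t) in Omega
    return np.sqrt(np.mean((x - x_hat) ** 2)) / np.mean(np.abs(x))
\end{verbatim}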
\subsection{Evaluation on Synthetic Data}
\label{sec:sim_exp}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figs/main_sim_exp_nd.pdf}
\caption{Performance comparisons of N-BEATS and DEPTS (ours) on synthetic data, in which we simulate different periodic dependencies, such as linear, quadratic, and cubic.}
\label{fig:sim_exp}
\end{figure}
To intuitively illustrate the importance of periodicity modeling, we generate synthetic data with various periodic dependencies and multiple types of periods.
Specifically, we generate a simulated TS signal $x_t$ by composing an auto-regressive signal $l_t$, corresponding to the local momentum, and a compounded periodic signal $p_t$, denoting the global periodicity, via a function $f^c$ as $x_t = f^c(l_t, p_t)$, which characterizes the dependency of $x_t$ on $l_t$ and $p_t$.
First, we produce $l_t$ via an auto-regressive process, $l_t = \sum_{i=1}^{L} \alpha_i l_{t-i} + \epsilon^l_t$, in which $\alpha_i$ is a coefficient for the $i$-lag dependency, and the error term $\epsilon^l_t \sim \mathcal{N}(0, \sigma^l)$ follows a zero-mean Gaussian distribution with standard deviation $\sigma^l$.
Then, we produce $p_t$ by sampling from another Gaussian distribution $\mathcal{N}(z_t, \sigma^p)$, in which $z_t$ is characterized by a periodic function (instantiated as $g_\phi(t)$ in Section~\ref{sec:est_period}), and $\sigma^p$ is a standard deviation to adjust the degree of dispersion for periodic samples.
Next, we take three types of $f^c(l_t, p_t)$, $(l_t+p_t)$, $(l_t+p_t)^2$, and $(l_t+p_t)^3$, to characterize the linear, quadratic, and cubic dependencies of $x_t$ on $l_t$ and $p_t$, respectively.
Last, after data generation, all models only have access to the final mixed signal $x_t$ for training and evaluation.
Due to the space limit, we include the main results in Figure~\ref{fig:sim_exp} and leave finer grained parameter specifications and more experimental details to Appendix~\ref{sec:app_sim_exp}.
For each setup (linear, quadratic, cubic) in Figure~\ref{fig:sim_exp}, we have searched for the best lookback length ($L$) for N-BEATS and the best number of periods ($J$) for DEPTS on the validation set and re-run the model training with five different random seeds to produce robust results on the test set.
We can observe that for all cases, even with an exhaustive search of proper lookback lengths for N-BEATS, there exists a considerable performance gap between it and DEPTS, which verifies the utility of explicit periodicity modeling.
Moreover, as the periodic dependency becomes more complex (from linear to cubic), the average error reduction of DEPTS over N-BEATS keeps increasing (from 7\% to 11\%), which further demonstrates the importance of modeling high-order periodic effects.
\subsection{Evaluation on Real-world Data}
\label{sec:real_exp}
\begin{table}[t]
\centering
\caption{Performance comparisons (\textit{nd}) on \textsc{Electricity}, \textsc{Traffic}, and \textsc{M4 (Hourly)}. For the first two, we follow two different test splits defined in previous studies.}
\begin{tabular}{lcc|cc|cc}
\toprule
& \multicolumn{2}{c} { \textsc{Electricity} } & \multicolumn{2}{c} { \textsc{Traffic} } & \multicolumn{2}{c}{\multirow{2}{*}{\textsc{M4 (Hourly)}}} \\
Model & 2014-09-01 & 2014-12-25 & 2008-06-15 & 2009-03-24 & \\
\midrule
MatFact & $0.16$ & $0.255$ & $0.20$ & $0.187$ &\multicolumn{2}{c}{n/a}\\
DeepAR & $0.07$ & n/a & $0.17$ & n/a &\multicolumn{2}{c}{0.09}\\
Deep State & $0.083$ & n/a & $0.167$ & n/a & \multicolumn{2}{c}{0.044}\\
N-BEATS & ${0.064}$ & ${0.171}$ & ${0.114}$ &$0.112$& \multicolumn{2}{c}{0.023}\\
\midrule
DEPTS & \textbf{0.060} & \textbf{0.139} & \textbf{0.111} & \textbf{0.107} & \multicolumn{2}{c}{\textbf{0.021}}\\
\bottomrule
\centering
\label{tab:ele_tra_exp}
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Performance comparisons ($nd$ and $nrmse$) on \textsc{Caiso} and \textsc{NP}, where we define four test splits to cover all four seasons of the last year for each benchmark.}
\begin{tabular}{llcccccccc}
\toprule
\multirow{1}{*}{} &\multirow{1}{*}{\textit{}} &\multicolumn{2}{c}{2020-01-01} & \multicolumn{2}{c}{2020-04-01} & \multicolumn{2}{c}{2020-07-01} & \multicolumn{2}{c}{2020-10-01} \\
Dataset& Model& \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} & \textit{nd} & \textit{nrmse} \\
\midrule
\multirow{3}{*}{\textsc{Caiso}}&PARMA &0.089&0.169&0.107&0.214&0.116&0.215&0.079&0.148\\
&N-BEATS&0.029&0.058&0.031&0.073&{0.030}&0.064&0.026&0.057\\
&DEPTS&\textbf{0.024}&\textbf{0.049}&\textbf{0.028}&\textbf{0.063}&\textbf{0.029}&\textbf{0.058}&\textbf{0.020}&\textbf{0.042}\\
\midrule
\multirow{3}{*}{\textsc{NP}}&PARMA &0.220& \textbf{0.350}& 0.201&0.321& 0.216& 0.352&0.199&0.305\\
&N-BEATS&0.207&0.434&0.154&0.237&0.195&0.315&0.211&0.332\\
&DEPTS&\textbf{0.196}&0.377&\textbf{0.145}&\textbf{0.224}&\textbf{0.169}&\textbf{0.269}&\textbf{0.179}&\textbf{0.281} \\
\bottomrule
\end{tabular}
\label{tab:cai_np_exp}
\end{table}
Other than simulation experiments, we further demonstrate the effectiveness of DEPTS on real-world data.
We adopt three existing PTS-related datasets,
\textsc{Electricity}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014}},
\textsc{Traffic}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/PEMS-SF}},
and \textsc{M4 (Hourly)}\footnote{\url{https://github.com/Mcompetitions/M4-methods/tree/master/Dataset/Train}},
which contain various long-term (quarterly, yearly), mid-term (monthly, weekly), and short-term (daily, hourly) periodic effects corresponding to regular economic and social activities.
These datasets serve as common benchmarks for many recent studies~\citep{yu2016temporal,rangapuram2018deep,salinas2020deepar,Oreshkin2020N-BEATS}.
For \textsc{Electricity} and \textsc{Traffic}, we follow two different test splits defined by~\cite{salinas2020deepar} and~\cite{yu2016temporal}, and the evaluation horizon covers the first week starting from the split date.
As for \textsc{M4 (Hourly)}, we adopt the official test set.
Besides, we note that the time horizons covered by these three benchmarks are still too short, which leaves very limited data for periodicity learning if we move the test split earlier.
This lack of sufficiently long PTS limits the power of periodicity modeling and thus may hinder the research development in this field.
To further verify the importance of periodicity modeling in real-world scenarios, we construct two new benchmarks with sufficiently long PTS from public data sources.
The first one, denoted as \textsc{Caiso}, contains eight years of hourly actual electricity load series in different zones of California\footnote{\url{http://www.energyonline.com/Data}}.
The second one, referred to as \textsc{NP}, includes eight years of hourly energy production volume series in multiple European countries\footnote{\url{https://www.nordpoolgroup.com/Market-data1/Power-system-data}}.
Accordingly, we define four test splits that correspond to all four seasons of the last year for robust evaluation.
For all benchmarks, we search for the best hyper-parameters of DEPTS on the validation set.
Similar to N-BEATS~\citep{Oreshkin2020N-BEATS}, we also produce ensemble forecasts of multiple models trained with different lookback lengths and random initialization seeds.
Tables~\ref{tab:ele_tra_exp} and~\ref{tab:cai_np_exp} show the overall performance comparisons.
On average, the error reductions ($nd$) of DEPTS over N-BEATS on \textsc{Electricity}, \textsc{Traffic}, \textsc{M4 (Hourly)}, \textsc{Caiso}, and \textsc{NP} are 12.5\%, 3.5\%, 8.7\%, 13.3\%, and 9.9\%, respectively.
Interestingly, we observe some prominent improvements in a few specific cases, such as 18.7\% in \textsc{Electricity (2014-09-01)}, 23.1\% in \textsc{Caiso (2020-10-01)}, and 15.2\% in \textsc{NP (2020-10-01)}.
At the same time, we also observe some tiny improvements, such as 2.6\% in \textsc{Traffic (2008-06-15)} and 3.3\% in \textsc{Caiso (2020-07-01)}.
These observations imply that the predictive abilities and the complexities of periodic effects may vary over time, which corresponds to the changes in performance gaps between DEPTS and N-BEATS.
Nevertheless, most of the time, DEPTS still brings stable and significant performance gains for PTS forecasting, which clearly demonstrates the importance of periodicity modeling in practice.
Due to the space limit, we leave more details about datasets and hyper-parameters used in real-world experiments to Appendix~\ref{sec:app_real_exp}.
Moreover, to achieve effective periodicity modeling, we have made several critical designs, such as the triply residual expansions in Section~\ref{sec:tri_res} and the composition of diversified periods in Section~\ref{sec:est_period}.
We also conduct extensive ablation tests to verify these critical designs, which are included in Appendix~\ref{sec:app_abla_test}.
\subsection{Interpretability}
\label{sec:exp_interp}
In Figure~\ref{fig:interp}, we illustrate the interpretable effects of DEPTS via two cases, the upper one from \textsc{Electricity} and the bottom one from \textsc{Traffic}.
First, from subplots in the left part, we observe that DEPTS obtains much more accurate forecasts than N-BEATS and PARMA.
Then, in the middle and right parts, we can visualize the inner states of DEPTS to interpret how it makes such forecasts.
As Section~\ref{sec:interp} states, DEPTS can differentiate the contributions to the final forecasts $\hat{\bm{x}}_{t:t+H}$ into the local momenta $\sum_{\ell=1}^N \bm{u}^{(\ell)}_{t:t+H}$ and the global periodicity $\sum_{\ell=1}^N \bm{v}^{(\ell)}_{t:t+H}$.
Interestingly, we can see that DEPTS has learned two different decomposition strategies:
1) for the upper case, most of the contributions to the final forecasts come from the global periodicity part, which implies that this case follows strong periodic patterns;
2) for the bottom case, the periodicity part just characterizes a major oscillation frequency, while the model relies more on the local momenta to refine the final forecasts.
Besides, the right part of Figure~\ref{fig:interp} depicts the hidden periodic state $z_t$ estimated by our periodicity module $g_\phi(t)$.
We can see that $g_\phi(t)$ indeed captures some inherent periodicity.
Moreover, the actual PTS signals also present diverse variations at different times, which further demonstrates the importance of leveraging $f_\theta$ to model the dependencies of $\bm{x}_{t:t+H}$ on both $\bm{x}_{t-L:t}$ and $\bm{z}_{t-L:t+H}$.
We include more case studies and interpretability analysis in Appendix~\ref{sec:app_case_interp}.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{figs/main_interp.pdf}
\caption{We compare the forecasts of different models (on the left side) and visualize the intermediate states within DEPTS (in the middle and right parts), where DEPTS-P denotes the forecasts from the global periodicity, DEPTS-L denotes the forecasts from the local momenta, and DEPTS is the summation of these two parts, as illustrated in Section~\ref{sec:tri_res}.}
\label{fig:interp}
\end{figure}
\section{Introduction}
\label{sec:intro}
Time series (TS) with apparent periodic (seasonal) oscillations, referred to as \textit{periodic time series} (PTS) in this paper, is pervasive in a wide range of critical industries, such as seasonal electricity spot prices in power industry~\citep{koopman2007periodic}, periodic traffic flows in transportation~\citep{lippi2013short}, periodic carbon dioxide exchanges and water flows in sustainability domain~\citep{seymour2001overview,tesfaye2006identification,han2021joint}.
Apparently, PTS forecasting plays a crucial role in these industries since it can foster their business development by facilitating a variety of capabilities, including early warning, pre-planning, and resource scheduling~\citep{kahn2003measure,jain2017answers}.
Given the pervasiveness and importance of PTS, two obstacles, however, largely hinder the performance of existing forecasting models.
First, future TS signals yield complicated dependencies on both adjacent historical observations and inherent periodicity.
Nevertheless, many existing studies did not consider this distinctive periodic property~\citep{salinas2020deepar,toubeau2018deep,wang2019deep,Oreshkin2020N-BEATS}.
The performance of these methods has been greatly restrained due to their ignorance of periodicity.
Some other efforts, though explicitly introducing periodicity modeling, only followed some arbitrary yet simple assumptions, such as additive or multiplicative seasonality, to capture certain plain periodic effects~\citep{holt1957forecasting,holt2004forecasting,vecchia1985periodic,taylor2018prophet}.
These methods failed to model complicated periodic dependencies beyond such oversimplified assumptions.
The second challenge lies in that the inherent periodicity of a typical real-world TS is usually composed of various periods with different amplitudes and frequencies.
For example, Figure~\ref{fig:intro} exemplifies the sophisticated composition of diversified periods via a real-world eight-years hourly TS of electricity load in a region of California.
However, existing methods~\citep{taylor2018prophet,smyl2020hybrid} required the pre-specification of periodic frequencies before estimating other parameters from data; they attempted to evade this obstacle by transferring the burden of periodicity-coefficient initialization to practitioners.
To better tackle the aforementioned two challenges, we develop a \textit{deep expansion} learning framework, DEPTS, for \textit{PTS} forecasting.
The core idea of DEPTS is to build a deep neural network that conducts the progressive expansions of the complicated dependencies of PTS signals on periodicity to facilitate forecasting.
We start from a novel decoupled formulation for PTS forecasting by introducing the periodic state as a hidden variable.
This new formulation stimulates us to make more customized and dedicated designs to handle the two specific challenges mentioned above.
For the first challenge, we develop an expansion module on top of residual learning~\citep{he2016resnet,Oreshkin2020N-BEATS} to conduct layer-by-layer expansions between observed TS signals and hidden periodic states.
With such a design, we can build a deep architecture with both high capacities and efficient parameter optimization to model those complicated dependencies of TS signals on periodicity.
For the second challenge, we build a periodicity module to estimate the periodic states from observational data.
We represent the hidden periodic state with respect to time as a parameterized periodic function with sufficient expressiveness.
In this work, for simplicity, we instantiate this function as a series of cosine functions.
To release the burden of manually setting periodic coefficients for different data, we develop a data-driven parameter initialization strategy on top of Discrete Cosine Transform~\citep{ahmed1974dct}.
After that, we combine the periodicity module with the expansion module to perform end-to-end learning.
To the best of our knowledge, DEPTS is a very early attempt to build a customized deep learning~(DL) architecture for PTS that explicitly takes account of the periodic property.
Moreover, with two delicately designed modules, DEPTS also owns certain interpretable capabilities.
First, the expansions of forecasts can distinguish the contributions from either adjacent TS signals or inherent periodicity, which intuitively illustrate how the future TS signals may vary based on local momenta and global periodicity.
Second, coefficients of the periodicity module have their own practical meanings, such as amplitudes and frequencies, which provide certain interpretable effects inherently.
We conduct experiments on both synthetic data and real-world data, which all demonstrate the superiority of DEPTS on handling PTS.
On average, DEPTS reduces the error of the best baseline by about 10\%.
In a few cases, the error reduction can even reach up to 20\%.
Besides, we also include extensive ablation tests to verify our critical designs and visualize specific model components to interpret model behaviors.
\begin{figure} \centering
\includegraphics[width=0.99\linewidth]{figs/main_intro_example.pdf}
\caption{We visualize the electricity load TS in a region of California to show diversified periods. In the upper part, we depict the whole TS with a length of eight years, and in the bottom part, we plot three segments with lengths of half a year, one month, and one week, respectively.}
\label{fig:intro}
\end{figure}
\section{DEPTS}
\label{sec:ngbe}
In this section, we elaborate on our new framework, DEPTS.
First, we start with a decoupled formulation of \brref{eq:period_ar} in Section~\ref{sec:our_form}.
Then, we illustrate the proposed neural architecture for this formulation in Sections~\ref{sec:tri_res} and~\ref{sec:est_period}.
Last, we discuss the interpretable capabilities in Section~\ref{sec:interp}.
\subsection{The Decoupled Formulation}
\label{sec:our_form}
To explicitly tackle the two-sided challenges of PTS forecasting, i.e., complicated periodic dependencies and diversified periodic compositions, we introduce a decoupled formulation~\brref{eq:dec_period_ar} that refines~\brref{eq:period_ar} by introducing a hidden variable $z_t$ to represent the periodic state at time-step $t$:
\begin{align}
\bm{x}_{t:t+H} = f_\theta (\bm{x}_{t-L:t}, \bm{z}_{t-L:t+H}) + \bm{\epsilon}_{t:t+H}, \quad
z_t = g_\phi (t),
\label{eq:dec_period_ar}
\end{align}
where we treat $z_t \in \mathbb{R}^1$ as a scalar value to be consistent with the uni-variate TS $x_t \in \mathbb{R}^1$,
we use $f_\theta: \mathbb{R}^L \times \mathbb{R}^{L+H} \rightarrow \mathbb{R}^H$ to model complicated dependencies of the future signals $\bm{x}_{t:t+H}$ on the local observations $\bm{x}_{t-L:t}$ and the corresponding periodic states $\bm{z}_{t-L:t+H}$ within the lookback and forecast horizons,
and $g_\phi: \mathbb{R}^1 \rightarrow \mathbb{R}^1$ is to produce a periodic state $z_t$ for a specific time-step $t$.
The right part of Figure~\ref{fig:arch} depicts the overall data flows of this formulation,
in which the expansion module $f_\theta$ and the periodicity module $g_\phi$ are responsible for handling the two aforementioned PTS-specific challenges, respectively.
\subsection{The Expansion Module}
\label{sec:tri_res}
To effectively model complicated periodic dependencies, the main challenge lies in the trade-off between model capacity and generalization. To avoid the over-fitting issue, many existing PTS approaches relied on the assumptions of additive or multiplicative seasonality~\citep{holt1957forecasting,vecchia1985periodic,anderson2007fourier-parma,taylor2018prophet,smyl2020hybrid}, which however can hardly express periodicity beyond such simplified assumptions.
Lately, residual learning has shown its great potentials in building expressive and generalizable DL architectures for a variety of crucial applications, such as computer vision~\citep{he2016resnet} and language understanding~\citep{vaswani2017attention}.
Specifically, N-BEATS~\citep{Oreshkin2020N-BEATS} conducted a pioneer demonstration of introducing residual learning to TS forecasting.
Inspired by these successful examples and with full consideration of PTS-specific challenges, we develop a novel expansion module $f_\theta$ on top of residual learning to characterize the complicated dependencies of $\bm{x}_{t:t+H}$ on $\bm{x}_{t-L:t}$ and $\bm{z}_{t-L:t+H}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/main_arch.pdf}
\end{center}
\caption{In the right part, we visualize the overall data flows for our framework, DEPTS. In the middle part, we plot the integral structure of three layer-by-layer expansion branches in the expansion module $f_\theta$. In the left part, we depict the detailed residual connections within a single layer.}
\label{fig:arch}
\end{figure}
The proposed architecture for $f_\theta$, as shown in the middle part of Figure~\ref{fig:arch}, consists of $N$ layers in total.
As further elaborated in the left part of Figure~\ref{fig:arch}, all layers $\ell$ share an identical residual structure consisting of three residual branches, which correspond to the recurrence relations of $\bm{z}^{(\ell)}_{t-L:t+H}$, $\bm{x}^{(\ell)}_{t-L:t}$, and $\hat{\bm{x}}^{(\ell)}_{t:t+H}$, respectively.
Here $\bm{x}^{(\ell)}_{t-L:t}$ and $\bm{z}^{(\ell)}_{t-L:t+H}$ denote the residual terms of $\bm{x}_{t-L:t}$ and $\bm{z}_{t-L:t+H}$ after $\ell$-layers expansions, and $\hat{\bm{x}}^{(\ell)}_{t:t+H}$ denotes the cumulative forecasts after $\ell$ layers.
In layer $\ell$, three residual branches are specified by two parameterized blocks, a local block $f^{l}_{\theta_l(\ell)}$ and a periodic block $f^{p}_{\theta_p(\ell)}$, where $\theta_l(\ell)$ and $\theta_p(\ell)$ are their respective parameters.
First, we present the updating equation for $\bm{z}^{(\ell)}_{t-L:t+H}$, which aims to produce the forecasts from periodic states and to exclude the periodic effects that have already been used.
To be more concrete, $f^{p}_{\theta_p(\ell)}$ takes in $\bm{z}^{(\ell-1)}_{t-L:t+H}$ and emits the $\ell$-th expansion term of periodic states, denoted as $\bm{v}^{(\ell)}_{t-L:t+H} \in \mathbb{R}^{L+H}$.
$\bm{v}^{(\ell)}_{t-L:t+H}$ has two components, a backcast component $\bm{v}^{(\ell)}_{t-L:t}$ and a forecast one $\bm{v}^{(\ell)}_{t:t+H}$.
We leverage $\bm{v}^{(\ell)}_{t-L:t}$ to exclude the periodic effects from $\bm{x}^{(\ell-1)}_{t-L:t}$ and adopt $\bm{v}^{(\ell)}_{t:t+H}$ as the portion of forecasts purely from the $\ell$-th periodic block.
Besides, when moving to the next layer, we exclude $\bm{v}^{(\ell)}_{t-L:t+H}$ from $\bm{z}^{(\ell-1)}_{t-L:t+H}$ as $\bm{z}^{(\ell)}_{t-L:t+H} = \bm{z}^{(\ell-1)}_{t-L:t+H} - \bm{v}^{(\ell)}_{t-L:t+H}$ to encourage the subsequent periodic blocks to focus on the unresolved residue $\bm{z}^{(\ell)}_{t-L:t+H}$.
Then, since $\bm{v}^{(\ell)}_{t-L:t}$ is related to the periodic components that have been used to produce a part of forecasts, we construct the input to $f^{l}_{\theta_l(\ell)}$ as $(\bm{x}^{(\ell-1)}_{t-L:t} - \bm{v}^{(\ell)}_{t-L:t})$.
Here the purpose is to encourage $f^{l}_{\theta_l(\ell)}$ to focus on the unresolved patterns within $\bm{x}^{(\ell-1)}_{t-L:t}$.
$f^{l}_{\theta_l(\ell)}$ emits $\bm{u}^{(\ell)}_{t-L:t}$ and $\bm{u}^{(\ell)}_{t:t+H}$, which correspond to the local backcast and forecast expansion terms of the $\ell$-th layer, respectively.
After that, we update $\bm{x}^{(\ell)}_{t-L:t}$ by further subtracting $\bm{u}^{(\ell)}_{t-L:t}$ from $(\bm{x}^{(\ell-1)}_{t-L:t} - \bm{v}^{(\ell)}_{t-L:t})$ as $\bm{x}^{(\ell)}_{t-L:t} = \bm{x}^{(\ell-1)}_{t-L:t} - \bm{v}^{(\ell)}_{t-L:t} - \bm{u}^{(\ell)}_{t-L:t}$.
Here the insight is also to exclude all analyzed patterns of this layer to let the following layers focus on unresolved information.
Besides, we update $\hat{\bm{x}}^{(\ell)}_{t:t+H}$ by adding both $\bm{u}^{(\ell)}_{t:t+H}$ and $\bm{v}^{(\ell)}_{t:t+H}$ as $\hat{\bm{x}}^{(\ell)}_{t:t+H} = \hat{\bm{x}}^{(\ell-1)}_{t:t+H} + \bm{u}^{(\ell)}_{t:t+H} + \bm{v}^{(\ell)}_{t:t+H}$.
The motivation of such expansion is to decompose the forecasts from the $\ell$-th layer into two parts, $\bm{u}^{(\ell)}_{t:t+H}$ and $\bm{v}^{(\ell)}_{t:t+H}$, which correspond to the part from local observations excluding redundant periodic information and the other part purely from periodic states, respectively.
Note that before the first layer, we have $\bm{x}^{(0)}_{t-L:t} = \bm{x}_{t-L:t}$, $\bm{z}^{(0)}_{t-L:t+H} = \bm{z}_{t-L:t+H}$, and $\hat{\bm{x}}^{(0)}_{t:t+H} = \bm{0}$.
Finally, we take the cumulative forecasts $\hat{\bm{x}}^{(N)}_{t:t+H}$ of the $N$-th layer as the overall forecasts $\hat{\bm{x}}_{t:t+H}$.
Therefore, after stacking $N$ layers of $\bm{z}^{(\ell)}_{t-L:t+H}$, $\bm{x}^{(\ell)}_{t-L:t}$, and $\hat{\bm{x}}^{(\ell)}_{t:t+H}$, we have the following triply residual expansions that encapsulate the left and middle parts of Figure~\ref{fig:arch}:
\begin{align}
\begin{split}
\bm{z}_{t-L:t+H} = \bm{z}^{(0)}_{t-L:t+H} &= \sum_{\ell=1}^{N} \bm{v}^{(\ell)}_{t-L:t+H} + \bm{z}^{(N)}_{t-L:t+H}, \\
\bm{x}_{t-L:t} = \bm{x}^{(0)}_{t-L:t} &= \sum_{\ell=1}^{N} (\bm{u}^{(\ell)}_{t-L:t} + \bm{v}^{(\ell)}_{t-L:t}) + \bm{x}^{(N)}_{t-L:t}, \\
\hat{\bm{x}}_{t:t+H} = \hat{\bm{x}}^{(N)}_{t:t+H} &= \sum_{\ell=1}^{N} (\bm{u}^{(\ell)}_{t:t+H} + \bm{v}^{(\ell)}_{t:t+H}),
\end{split}
\label{eq:drel_res}
\end{align}
where $\bm{z}^{(N)}_{t-L:t+H}$ and $\bm{x}^{(N)}_{t-L:t}$ are deemed to be the residues irrelevant to forecasting.
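To make these triply residual expansions concrete, the following is a minimal PyTorch sketch of the forward pass of $f_\theta$. The inner blocks here are placeholder MLPs, and all class names, hidden sizes, and shapes are our own illustrative assumptions rather than the exact inner block architectures used by DEPTS:
\begin{verbatim}
# Minimal sketch of the expansion module f_theta (PyTorch).
# Inner blocks are placeholder MLPs, not the exact architectures.
import torch
import torch.nn as nn

class Block(nn.Module):
    """Maps an input window to a (backcast, forecast) pair."""
    def __init__(self, in_dim, L, H, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, L + H))
        self.L = L

    def forward(self, w):
        out = self.net(w)
        return out[..., :self.L], out[..., self.L:]

class ExpansionModule(nn.Module):
    def __init__(self, L, H, n_layers=3):
        super().__init__()
        self.H = H
        self.local = nn.ModuleList([Block(L, L, H) for _ in range(n_layers)])
        self.periodic = nn.ModuleList([Block(L + H, L, H)
                                       for _ in range(n_layers)])

    def forward(self, x, z):
        # x: (batch, L) lookback window; z: (batch, L+H) periodic states.
        forecast = torch.zeros(x.shape[0], self.H, device=x.device)
        for f_l, f_p in zip(self.local, self.periodic):
            v_back, v_fore = f_p(z)                  # l-th periodic expansion
            z = z - torch.cat([v_back, v_fore], -1)  # residual z^(l)
            u_back, u_fore = f_l(x - v_back)         # l-th local expansion
            x = x - v_back - u_back                  # residual x^(l)
            forecast = forecast + u_fore + v_fore    # cumulative forecasts
        return forecast

model = ExpansionModule(L=96, H=24)
x_hat = model(torch.randn(8, 96), torch.randn(8, 120))  # (8, 24)
\end{verbatim}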
\paragraph{Connections and differences to N-BEATS.}
Our design of $f_\theta$ shares a similar insight with N-BEATS~\citep{Oreshkin2020N-BEATS},
which is to stimulate a deep neural network to learn expansions of raw TS signals progressively,
whereas N-BEATS only considers generic TS by modeling the dependencies of $\bm{x}_{t:t+H}$ on $\bm{x}_{t-L:t}$.
In contrast, our design captures the more complicated dependencies of $\bm{x}_{t:t+H}$ on both $\bm{x}_{t-L:t}$ and $\bm{z}_{t-L:t+H}$ for PTS.
Moreover, to achieve periodicity modeling, N-BEATS produces coefficients, solely based on the input signals within a lookback window, for a group of predefined seasonal basis vectors with fixed frequencies and phases.
Our work, in comparison, can capture diversified periods in practice and model the inherent global periodicity.
\paragraph{Inner architectures of local and periodic blocks.}
The local block $f^{l}_{\theta_l(\ell)}$ aims to produce a part of forecasts based on the local observations excluding redundant periodic information as $(\bm{x}_{t-L:t} - \sum_{i=1}^{\ell-1} \bm{v}^{(i)}_{t-L:t})$.
Thus, we reuse the generic block developed by~\cite{Oreshkin2020N-BEATS}, which consists of a series of fully connected layers.
As for the periodic block $f^{p}_{\theta_p(\ell)}$, which handles the relatively stable periodic states, we can adopt a simple two-layer perceptron.
Due to the space limit, we include more details of inner block architectures in Appendix~\ref{sec:app_block}.
\subsection{The Periodicity Module}
\label{sec:est_period}
To represent the sophisticated periodicity composed of various periodic patterns, we estimate $z_t$ via a parameterized periodic function $g_\phi(t)$ that has sufficient capacity to incorporate diversified periods.
In this work, for simplicity, we instantiate this function as a series of cosine functions as $g_\phi(t) = A_0 + \sum_{k=1}^K A_k \cos(2\pi F_k t + P_k)$,
where $K$ is a hyper-parameter denoting the total number of periods,
$A_0$ is a scalar parameter for the base scale,
$A_k$, $F_k$, and $P_k$ are the scalar parameters for the amplitude, the frequency, and the phase of the $k$-th cosine function, respectively,
and $\phi$ represents the set of all parameters.
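As an illustration, $g_\phi$ amounts to a handful of learnable scalars; the following is a minimal PyTorch sketch, in which the class and attribute names are our own:
\begin{verbatim}
# Minimal sketch of g_phi(t) as a sum of K learnable cosines (PyTorch).
import torch
import torch.nn as nn

class PeriodicityModule(nn.Module):
    def __init__(self, K):
        super().__init__()
        self.base = nn.Parameter(torch.zeros(1))        # A_0
        self.amp = nn.Parameter(0.1 * torch.randn(K))   # A_k
        self.freq = nn.Parameter(torch.rand(K))         # F_k
        self.phase = nn.Parameter(torch.rand(K))        # P_k

    def forward(self, t):
        # t: tensor of time steps; returns z_t with the same shape as t.
        angles = 2 * torch.pi * self.freq * t.unsqueeze(-1) + self.phase
        return self.base + (self.amp * torch.cos(angles)).sum(-1)

g_phi = PeriodicityModule(K=8)
z = g_phi(torch.arange(120, dtype=torch.float32))  # periodic states
\end{verbatim}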
Coupling $g_\phi$ with $f_\theta$ illustrated in Section~\ref{sec:ngbe}, we can effectively model the periodicity-aware auto-regressive forecasting process in equation (\ref{eq:period_ar}).
However, it is extremely challenging to directly conduct the joint optimization of $\phi$ and $\theta$ from random initialization.
The reason is that in such a highly non-convex condition, the coefficients in $\phi$ are easily trapped into numerous local optima, which do not necessarily characterize our desired periodicity.
\paragraph{Parameter Initialization.}
To overcome the optimization obstacle mentioned above, we formalize a two-stage optimization problem based on raw PTS signals to find good initialization for $\phi$.
First, we construct a surrogate function, $g^M_{\phi}(t) = A_0 + \sum_{k=1}^K M_k \cdot A_k \cos(2\pi F_k t + P_k)$, to enable the selection of a subset of periods via $M = \{M_k, k \in \{1, \cdots, K\}\}$, where each $M_k \in \{0, 1\}$ is a mask variable to enable or disable certain periods.
Note that $g_\phi(t)$ is equivalent to $g^{M}_\phi(t)$ when every $M_k$ is equal to one.
Then, we construct the following two-stage optimization problem:
\begin{align}
M^* = \argmin_{\|M\|_1 \leq J} \mathcal{L}_{D_{val}} (g^M_{\phi^*}(t)), \quad
\phi^* = \argmin_{\phi} \mathcal{L}_{D_{train}}(g_\phi(t)),
\label{eq:p_opt_init}
\end{align}
where $\mathcal{L}_{D_{train}}$ and $\mathcal{L}_{D_{val}}$ denote the discrepancy losses on training and validation, respectively;
the inner stage is to obtain $\phi^*$ that minimizes the discrepancy between $z_t$ and $x_t$ on the training data $D_{train}$;
the outer stage is a binary integer programming on the validation data $D_{val}$ to find $M^*$ that can select certain periods with good generalization,
and the hyper-parameter $J$ controls the maximal number of periods being selected.
With the help of such two-stage optimization, we are able to estimate generalizable periodic coefficients from observational data as a good starting point for $\phi$ to be jointly optimized with $\theta$.
Nevertheless, it is still costly to exactly solve the optimization problem in equation (\ref{eq:p_opt_init}) in practice.
Thus, we develop a fast approximation algorithm to obtain an acceptable solution with affordable costs.
Our approximation algorithm contains the following two steps:
1) conducting the Discrete Cosine Transform~\citep{ahmed1974dct} of PTS signals on $D_{train}$ and selecting the top-$K$ cosine bases with the largest amplitudes as an approximated solution of $\phi^*$;
2) iterating over the selected $K$ cosine bases from the largest amplitude to the smallest one and greedily selecting $J$ periods that generalize well on the validation set (see the sketch below).
Due to the space limit, we include more details of this approximation algorithm in Appendix~\ref{sec:app_init_period}.
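For concreteness, the following is a minimal NumPy/SciPy sketch of the two steps above, assuming SciPy's orthonormal DCT-II (whose basis of index $k$ has frequency $k/(2n)$ for a length-$n$ series) and a plain mean-squared validation loss; the function names, split sizes, and default values of $K$ and $J$ are our own illustrative choices:
\begin{verbatim}
# Minimal sketch of the two-step initialization: DCT-based selection of
# top-K bases, then greedy validation-based selection of at most J.
import numpy as np
from scipy.fft import dct

def init_periods(x_train, x_val, K=32, J=8):
    n, mean = len(x_train), x_train.mean()
    coef = dct(x_train - mean, norm="ortho")   # orthonormal DCT-II
    top = np.argsort(-np.abs(coef))[:K]        # top-K amplitudes

    def term(k, t):
        # Basis k extrapolated to time steps t; its frequency is k/(2n).
        return np.sqrt(2.0 / n) * coef[k] * \
            np.cos(np.pi * k * (2 * t + 1) / (2 * n))

    t_val = np.arange(n, n + len(x_val))
    pred, kept = np.full(len(x_val), mean), []
    for k in top:                              # largest amplitude first
        if k == 0 or len(kept) == J:
            continue
        cand = pred + term(k, t_val)
        if np.mean((x_val - cand) ** 2) < np.mean((x_val - pred) ** 2):
            pred, kept = cand, kept + [k]
    return mean, kept                          # initial A_0, kept bases

# Usage on a toy periodic signal split into train/validation parts.
ts = np.sin(2 * np.pi * np.arange(400) / 24) + 0.1 * np.random.randn(400)
a0, periods = init_periods(ts[:300], ts[300:])
\end{verbatim}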
After obtaining approximated solutions $\tilde{\phi}^*$ and $\tilde{M}^*$, we fix $M = \tilde{M}^*$ to exclude those unstable periodic coefficients and initialize $\phi$ with $\tilde{\phi}^*$ to avoid being trapped into bad local optima.
Then, we follow the formulation \brref{eq:dec_period_ar} to perform the joint learning of $\phi$ and $\theta$ in an end-to-end manner.
\subsection{Interpretability}
\label{sec:interp}
Owing to the specific designs of $f_\theta$ and $g_\phi$, our architecture is naturally endowed with a degree of interpretability.
First, for $f_\theta$, as shown in equations (\ref{eq:drel_res}), we decompose $\hat{\bm{x}}_{t:t+H}$ into two types of components, $\bm{u}_{t:t+H}^{(\ell)}$ and $\bm{v}_{t:t+H}^{(\ell)}$.
Note that $\bm{v}^{(\ell)}_{t:t+H}$ is conditioned on $\bm{z}_{t-L:t+H}$ and independent of $\bm{x}_{t-L:t}$.
Thus, $\sum_{\ell=1}^{N} \bm{v}^{(\ell)}_{t:t+H}$ represents the portion of forecasts purely from periodic states.
Meanwhile, $\bm{u}^{(\ell)}_{t:t+H}$ depends on both $\bm{x}_{t-L:t}$ and $\bm{z}_{t-L:t+H}$, since it is produced by feeding the residue $(\bm{x}^{(\ell-1)}_{t-L:t} - \bm{v}^{(\ell)}_{t-L:t})$ into the $\ell$-th local block.
Thus, we can regard $\sum_{\ell=1}^N \bm{u}^{(\ell)}_{t:t+H}$ as the forecasts from the local historical observations excluding the periodic effects, referred to as the local momenta in this paper.
In this way, we can attribute the final forecasts to both the global periodicity and the local momenta.
Second, $g_\phi$, the periodicity estimation module in our architecture, also has interpretable effects.
Specifically, the coefficients in $g_\phi(t)$ have practical meanings, such as amplitudes, frequencies, and phases.
We can interpret these coefficients as the inherent properties of the series and connect them to practical scenarios.
Furthermore, by grouping various periods together, $g_\phi$ provides us with the essential periodicity of the TS, filtering out various local momenta.
\section{Problem Formulations}
\label{sec:prob}
We consider the point forecasting problem for regularly sampled univariate TS.
Let $x_{t}$ denote the time series value at time-step $t$. The classical auto-regressive formulation projects the historical observations $\bm{x}_{t-L:t} = [x_{t-L}, \dots, x_{t-1}]$ into the subsequent future values $\bm{x}_{t:t+H} = [x_{t}, \dots, x_{t+H-1}]$:
\begin{align}
\bm{x}_{t:t+H} = \mathcal{F}_\Theta (\bm{x}_{t-L:t}) + \bm{\epsilon}_{t:t+H},
\label{eq:base_ar}
\end{align}
where $H$ is the length of the forecast horizon, $L$ is the length of the lookback window, $\mathcal{F}_\Theta: \mathbb{R}^L \rightarrow \mathbb{R}^H$ is a mapping function parameterized by $\Theta$, and $\bm{\epsilon}_{t:t+H} = [\epsilon_t, \dots, \epsilon_{t+H-1}]$ denotes a vector of independent and identically distributed Gaussian noises.
Essentially, the fundamental assumption behind this formulation is the Markov property $\bm{x}_{t:t+H} \perp \bm{x}_{0:t-L} | \bm{x}_{t-L:t}$, which assumes that the future values $\bm{x}_{t:t+H}$ are independent of all farther historical values $\bm{x}_{0:t-L}$ given the adjacent short-term observations $\bm{x}_{t-L:t}$.
Note that most existing DL models~\citep{salinas2020deepar,toubeau2018deep,wang2019deep,Oreshkin2020N-BEATS} directly follow this formulation for TS forecasting. Even traditional statistical TS models~\citep{holt1957forecasting,holt2004forecasting,winters1960forecasting} are consistent with it if one omits the long-tail, exponentially decaying dependencies introduced by moving averages.
To precisely formulate PTS, on the other hand, this assumption needs to be slightly modified such that the dependency of $\bm{x}_{t:t+H}$ on $\bm{x}_{t-L:t}$ is further conditioned on the inherent periodicity, which can be anchored by associated time-steps.
Accordingly, we alter equation~(\ref{eq:base_ar}) into
\begin{align}
\bm{x}_{t:t+H} = \mathcal{F}^{'}_\Theta (\bm{x}_{t-L:t}, t) + \bm{\epsilon}_{t:t+H},
\label{eq:period_ar}
\end{align}
where, in addition to $\bm{x}_{t-L:t}$, $\mathcal{F}^{'}_\Theta: \mathbb{R}^L \times \mathbb{R} \rightarrow \mathbb{R}^H$ takes an extra argument $t$, which denotes the forecasting time-step.
Existing methods for PTS adopt a few different instantiations of $\mathcal{F}^{'}_{\Theta}$.
For example, \cite{holt1957forecasting,holt2004forecasting} developed several exponentially weighted moving average processes with additive or multiplicative seasonality.
\cite{vecchia1985maximum,vecchia1985periodic} adopted the multiplicative seasonality by treating the coefficients of the auto-regressive moving average process as time dependent.
\cite{smyl2020hybrid} also adopted the multiplicative seasonality and built a hybrid method by coupling that with recurrent neural networks~\citep{hochreiter1997long},
while~\cite{taylor2018prophet} chose the additive seasonality by adding the periodic forecast with other parts as the final forecast.
\section{Related Work}
\label{sec:rel_work}
TS forecasting is a longstanding research topic that has been extensively studied for decades.
Reviewing the literature, we identify three paradigms for developing TS models.
At an early stage, researchers developed simple yet effective statistical modeling approaches, including exponentially weighted moving averages~\citep{holt1957forecasting,holt2004forecasting,winters1960forecasting}, auto-regressive moving averages (ARMA)~\citep{whittle1951hypothesis,whittle1963prediction}, the unified state-space modeling approach as well as other various extensions~\citep{hyndman2008automatic}.
However, these statistical approaches only considered the linear dependencies of future TS signals on past observations.
To handle high-order dependencies, researchers attempted to adopt a hybrid design that combines statistical modeling with more advanced high-capacity models~\citep{montero2020fforma,smyl2020hybrid}.
At the same time, with the great successes of DL in computer vision~\citep{he2016resnet} and natural language processing~\citep{vaswani2017attention}, various DL models have also been developed for TS forecasting~\citep{rangapuram2018deep,toubeau2018deep,salinas2020deepar,zia2020residual,cao2020spectral}.
Among them, the most representative one is N-BEATS~\citep{Oreshkin2020N-BEATS}, which is a pure DL architecture that has achieved state-of-the-art performance across a wide range of benchmarks.
The connections between DEPTS and N-BEATS have been discussed in Section~\ref{sec:tri_res}.
As for PTS forecasting, many traditional statistical approaches explicitly considered the periodic property, such as periodic ARMA (PARMA)~\citep{vecchia1985maximum,vecchia1985periodic} and its variants~\citep{tesfaye2006identification,anderson2007fourier-parma,dudek2016periodic}.
However, as discussed in Sections~\ref{sec:intro} and~\ref{sec:prob}, these methods only followed some arbitrary yet simple assumptions, such as additive or multiplicative seasonality, and thus cannot well handle complicated periodic dependencies in many real-world scenarios.
Besides, other recent studies either followed the similar assumptions for periodicity or required the pre-specification of periodic coefficients~\citep{taylor2018prophet,smyl2020hybrid}.
To the best of our knowledge, ours is the first work to develop a customized DL architecture that simultaneously models complicated periodic dependencies and captures diversified periodic compositions.
\section{Introduction}
Property testing~\cite{RS,GGR} is the study of algorithms that distinguish between objects that have a given property and those that are far from having the property, by performing a small number of queries to the object.
Goldreich and Ron~\cite{GR-dyn} initiated the study of testing \emph{dynamic environments}, which introduces a temporal aspect to property testing. In this context, the entity being tested changes with time, and is referred to as an \emph{environment}.
Starting from some initial \emph{configuration} (say, a vector or a matrix), the environment is supposed to evolve according to a prespecified \emph{local} rule. The rule is local in the sense that the value associated with each location in the environment at time $ t $ is determined by the values of nearby locations at time $ t-1 $.
The goal of the testing algorithm is then to distinguish between the case that the environment indeed evolves according to the rule, and the case in which the evolution significantly strays from obeying the rule. To this end, the algorithm can query the value of any location of the environment at any of the available time steps, as long as it does not ``go back in time''. Namely, the algorithm cannot choose to query a location at time $ t $ after it has queried some location at time $ t'>t $. We refer to this as the \emph{time-conforming} requirement.
The aim is to design time-conforming algorithms with low query complexity.
Goldreich and Ron~\cite{GR-dyn} investigate the complexity landscape of testing dynamic environments from multiple angles.
From a hardness perspective, they show that there are dynamic environments whose testing requires high query complexity and running time, and that adaptivity and time-conformity are relevant constraints which can significantly impact the query complexity.
However, as we discuss in \Cref{subsec:GR-dyn}, relatively little is known regarding positive results for testing specific rules.
In our quest for understanding which natural families of dynamic environments can be tested efficiently, we propose to first ``go back to the basics'' and study testing in the simplest of dynamic environments.
Namely, in this work we consider environments defined by one-dimensional configurations, which evolve according to local rules that are functions of the current location and its two immediate neighbors.
These dynamic environments, originally introduced by von Neumann~\cite{vonNeumann_automata}, have been extensively studied under the name of \emph{Elementary Cellular Automata}~\cite{wolfram_book} (see definition in \Cref{subsec:intro-defs}).
While these environments can be described in simple terms, they are nevertheless able to capture complex behavior.\footnote{Some rules are even Turing complete~\cite{rule110_universality}.}
Cellular automata have played a role in various research fields and applications. Examples include modeling physical~\cite{ChopardDroz} and chemical~\cite{model_chemical_systems} systems, VLSI design~\cite{VLSI-ref}, music generation~\cite{music}, analyzing
plant population dynamics~\cite{model_plant_population}, forest fire spread~\cite{model_forest_fire_spread}, city traffic~\cite{model_city_traffic}, urban sprawl~\cite{model_urban_sprawl}, and more.
As we discuss in \Cref{subsec:GR-dyn}, there are several hardness results (both regarding the query complexity and the running time) for testing dynamic environments that correspond to one-dimensional cellular automata (over non-binary alphabets)~\cite{GR-dyn}. Hence, in order to obtain efficient algorithms, it is necessary to restrict the rules considered.
In the current work, our main focus is on perhaps the most basic and natural rules, defined by threshold functions. Such functions have received much attention within the study of propagation of information/influence in networks (see, e.g., the review paper of Peleg~\cite{Peleg-review}, and the recent Ph.D. thesis of Zehmakan~\cite{ZehmakanPhD} and references within).
Our testers are based on a general meta-algorithm which works for rules that satisfy a set of conditions that we define.
In essence, the conditions capture a certain type of behavior leading to ultimate convergence. This behavior induces a global structure on the environment which we exploit in our meta-algorithm.
We hope this work can serve as a basis for further extensions and generalizations, some of which we discuss shortly in \Cref{subsec:future}.
\subsection{Testing basic evolution rules}\label{subsec:intro-defs}
We now formally define the problems we study. We use $ \nums{m} $ to denote the set $ \set{0,...,m-1} $.
For two integers $n$ and $m$, let $\E : \nums{m}\times \cycnums{n} \to \bitset$ denote the evolving environment, and for any $t \in \nums{m}$ let $\E_t :\cycnums{n}\to\bitset$ (the environment at time $t$) be defined by $\E_t(i) = \E(t,i)$.
In general, we refer to a function $\sigma : \cycnums{n} \to \bitset$ as a \emph{configuration}.
When convenient, we may view $\sigma$ as a (cyclic) binary string of length $n$.
For a function (evolution rule) $\rho:\bitset^3 \to \bitset$, we say that $\E$ \emph{evolves according to $\rho$}, if for every $i \in \cycnums{n}$ and $t>0$, we have that $\E_t(i) = \rho(\E_{t-1}(i-1),\E_{t-1}(i),\E_{t-1}(i+1))$, where all operations are modulo $n$.
We use $\calE^\rho_{m,n}$ to denote the set of environments ${\E : \nums{m}\times \cycnums{n} \to \bitset}$ that evolve according to $\rho$.
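As an illustration, evolution according to a rule can be simulated directly from this definition; the following minimal Python sketch (the function names are our own) makes the cyclic, local nature of the evolution explicit:
\begin{verbatim}
# Minimal sketch: evolve a cyclic configuration according to rho for m
# steps, following E_t(i) = rho(E_{t-1}(i-1), E_{t-1}(i), E_{t-1}(i+1))
# with indices modulo n.
def evolve(rho, config, m):
    n, env = len(config), [list(config)]
    for _ in range(1, m):
        prev = env[-1]
        env.append([rho(prev[(i - 1) % n], prev[i], prev[(i + 1) % n])
                    for i in range(n)])
    return env  # env[t][i] gives E_t(i)

maj = lambda a, b, c: int(a + b + c >= 2)       # the majority rule
E = evolve(maj, [0, 1, 1, 0, 1, 0, 0, 1], m=5)
\end{verbatim}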
As in \cite{GR-dyn}, we employ the standard
notion of distance used in property testing and say that $\E : \nums{m}\times \cycnums{n} \to \bitset$ is \emph{$\eps$-far from evolving according to $\rho$} ($\eps$-far from $\calE^\rho_{m,n}$)
if $|\{(t,i): \E(t,i) \neq \E'(t,i)\}| > \eps m n$ for every $\E'\in \calE^\rho_{m,n}$.\footnote{In the context of dynamic environments, this notion of distance can be interpreted as capturing ``measurement errors'' due to some noise process.
Namely, it can be viewed as allowing the testing algorithm to accept not only ``perfect'' environments, but also environments that correspond to a correct evolution with a bounded fraction of corruptions.
Also note that being $\eps$-far from evolving according to $\rho$ does not simply translate to there being an $\eps$-fraction of pairs $(t,i)$ for which $\E_t(i) \neq \rho(\E_{t-1}(i-1),\E_{t-1}(i),\E_{t-1}(i+1))$ (which would be trivial to test).
}
Given $n$, $m$, and a distance parameter $\eps \in (0,1)$, a testing algorithm for evolution according to a rule $\rho$ should distinguish with constant success probability between the case that an environment $\E$ belongs to $\calE^\rho_{m,n}$ and the case that it is $\eps$-far from $\calE^\rho_{m,n}$. To this end, the algorithm is given query access to $\E$, where a query on a pair $(t,i)$ cannot follow any query on $(t',i')$ for $t' > t$. We are interested in bounding both the total number of queries performed by the algorithm (as a function of $\eps$, and possibly $m$ and $n$) and the maximum number of queries it performs at any time step (which we refer to as its \emph{temporal query complexity}).
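The time-conforming requirement can be made concrete by a small wrapper around the environment; the following sketch (our own illustration) rejects queries that go back in time and tracks the total number of queries:
\begin{verbatim}
# Minimal sketch of time-conforming query access to an environment.
class TimeConformingOracle:
    def __init__(self, env):
        self.env = env       # env[t][i] gives E_t(i)
        self.latest = 0      # largest time step queried so far
        self.queries = 0     # total number of queries

    def query(self, t, i):
        if t < self.latest:  # the algorithm may not "go back in time"
            raise ValueError("time-conforming violation")
        self.latest = t
        self.queries += 1
        return self.env[t][i % len(self.env[t])]

oracle = TimeConformingOracle(E)         # E as produced by evolve() above
oracle.query(0, 3); oracle.query(2, 5)   # fine: time is non-decreasing
\end{verbatim}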
\subsection{Our results}\label{subsec:results}
We identify several conditions on local rules (which are formally stated in~\Cref{subsec:conditions}),
such that if a rule $\rho$ satisfies these conditions, then evolution according to $\rho$ can be tested with query complexity $\poly(1/\eps)$ with one-sided error.
Our testers have the advantage that they are non-adaptive, and therefore, in particular, time-conforming.
\begin{theorem}\label{thm:cond-meta}
Let $\Psi$ be the set of conditions specified in \Cref{subsec:conditions}. For every rule $\rho$ that satisfies the conditions in $\Psi$, it is possible to test evolution according to $\rho$ by performing $O(1/\eps^4)$ queries.
Furthermore, the testing algorithm is non-adaptive and has one-sided error.
\end{theorem}
To establish \Cref{thm:cond-meta}, we present a \emph{meta-algorithm} for testing evolution and prove its correctness for rules that satisfy the aforementioned conditions (the set $\Psi$). It is a meta-algorithm in the sense that it is based on certain subroutines that are rule-specific (but have a common functionality of detecting violations of evolution according to the tested rule). We provide a high-level discussion of the conditions and the algorithm in \Cref{subsec:high-level}.
\medskip\noindent
Our main application of the meta-algorithm is to the natural family of threshold rules.
\setcounter{definition}{0}
\begin{definition}\label{definition:threshold}
We say that a rule $ \rho :\bitset^3 \to \bitset $ is a \emph{threshold} rule if there exist a threshold integer
$ 0 \leq b \leq 3 $ and a bit $\alpha \in \set{0, 1} $ such that $ \rho(\beta_1,\beta_2,\beta_3) = \alpha $ if and only if $ \beta_1+\beta_2+\beta_3 \geq b $.
\end{definition}
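In code, a threshold rule is a one-liner; the following sketch (with our own naming) instantiates \Cref{definition:threshold} and recovers the majority rule as the case $b=2$, $\alpha=1$:
\begin{verbatim}
# Minimal sketch of Definition 1: a threshold rule with threshold b and
# output bit alpha. b = 2, alpha = 1 gives majority; b = 1, alpha = 1
# gives OR.
def threshold_rule(b, alpha):
    return lambda x, y, z: alpha if x + y + z >= b else 1 - alpha

maj = threshold_rule(2, 1)
assert maj(1, 0, 1) == 1 and maj(0, 1, 0) == 0
\end{verbatim}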
We prove:
\setcounter{theorem}{1}
\begin{theorem}\label{thm:threshold-test}
For each threshold rule $\rho$, evolution according to $\rho$ can be tested with query complexity $O(1/\eps^4)$. Furthermore, the testing algorithm is non-adaptive and has one-sided error.
\end{theorem}
We also show that the conditions hold for two additional (non-threshold) rules, so the applicability of our meta-algorithm is more general (\ifnum1=1 \Cref{subsec:other-rules}\else for details, see full version of this paper~\cite{fullversion}\fi).
We believe that appropriate (perhaps more complex) variants of our algorithm can be used to test an even larger variety of basic local rules (see \Cref{subsec:future}), where we conjecture that this is true for all rules that ultimately converge.
Interestingly, while the two additional rules are not threshold rules as per \Cref{definition:threshold}, they can be represented as weighted threshold rules (which are a subclass of ultimately converging rules).
\subsection{The high-level ideas behind our results}\label{subsec:high-level}
In this high-level discussion, we assume for simplicity that $m\geq n$ (the case $m<n$ can be essentially reduced to this case).
\subsubsection{Convergence, final/non-final locations and prediction functions}
To give an intuition on the convergence behavior that our conditions capture, it is useful to first discuss the notion of \emph{ultimate convergence}.
A rule $\rho$ ultimately converges if, for any initial configuration $\E_0$, an environment evolving according to $\rho$ converges after a bounded number of steps to either a single final configuration or to a constant number of configurations between which it alternates. For example, consider the majority rule (threshold $2$). Unless the initial configuration is $(01)^{n/2}$, the environment ultimately converges to some configuration that consists of blocks of $0$s and $1$s of size at least 2 each (and if it is $(01)^{n/2}$, then it alternates between $(01)^{n/2}$ and $(10)^{n/2}$).
Once an environment converges, testing is straightforward since we can easily predict the values of locations in future time steps and then verify that indeed they hold the predicted values (or else we reject).
The issue, however, is that convergence is not ensured to be reached after a small number of time steps.\footnote{In fact, there are initial configurations that require $\Omega(n)$ steps before they ultimately converge.}
In other words, knowing that a rule ultimately converges cannot be exploited directly.
Hence, the challenge is to identify and formalize conditions that allow for ``pre-convergence prediction''. Namely, conditions that imply the ability to predict future values of locations based on the current values of these and other locations (before convergence is reached).
In this context, our conditions try to formalize the idea that rules exhibit a certain \emph{local} convergence, which ``expands'' with time.
The first ingredient of our approach is the observation that, in the case of the majority rule, if at any time step $t$, $\E_t(i) \in \{\E_t(i-1), \E_t(i+1)\}$, then $\E_{t'}(i) = \E_t(i)$ for any $t' > t$ (operations are modulo $n$). We say in such a case that location $i$ is \emph{final} at time $t$ (in $\E$). Otherwise it is \emph{non-final}. Crucially for us, whether a location $i$ is final or not at a certain time step depends solely on its local neighborhood at that time (and can hence be verified with a constant number of queries).
An important property of a location being final at time $t$ (in addition to it converging to its final value, up to alternations) is the aforementioned expansion (or ``transmission of finality''). Namely, a location $i$ that is non-final at time $t$ becomes final at time $t+1$ if either $i-1$ or $i+1$ is final at time $t$ (possibly both). Furthermore, it cannot become final if both its neighbors are non-final. Another related property of final locations is that (under certain circumstances), they can be used to predict the values of locations that become final in the future, based on a (rule-specific) \emph{prediction function}. A similar statement holds for non-final locations (though the circumstances are different).
\subsubsection{The meta-algorithm: the grid and violating pairs}
Based on these properties (which are formalized in the conditions we introduce), our (meta) algorithm works in two stages. In the first stage, it queries the environment at time $t_1 = \Theta(\eps m)$ on $O(1/\eps^2)$ equally spaced locations, which we refer to as \emph{the grid locations}, and their local neighborhoods. This allows the algorithm to determine which of the grid locations are final at time $t_1$ and which are non-final. If the answers it gets are not consistent with any environment that evolves according to $\rho$ (in which case we say that the grid is \emph{not feasible}), then it rejects.
In its second stage, the algorithm uniformly samples $O(1/\eps)$ random time-location pairs $(t,i)$ and queries $\E_t$ on $i$ and its local neighborhood. It then checks whether the answers are consistent with the answers to queries in the first stage (on the grid locations and their neighborhoods) or constitute a \emph{violation}. The definition of consistency/violation is based on the aforementioned prediction functions of the tested rule.
One may have hoped that such a consistency check is sufficient, in the sense that all (or almost all) pairs $(t,i)$ can be predicted based on the answers to the queried grid locations. Unfortunately, this is not the case. There are (possibly many) pairs $(t,i)$ whose $0/1$ values are not determined given the first-stage answers. However, we show that such pairs are constrained in a different way (in environments that evolve according to $\rho$): their location must have become final by time $t_2 = t_1+\Delta$, where $\Delta$ is the distance between consecutive grid locations. Hence, for each selected pair $(t,i)$, the algorithm also queries $\E_{t_2}$ on location $i$ (and its neighborhood) and checks consistency with the queried locations at time $t_2$.
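The two stages can be summarized by the following high-level sketch. The rule-specific subroutines (\texttt{is\_feasible} and \texttt{is\_violating}), the sampling constant, and the query bookkeeping are placeholders for the precise definitions given later; note that all queries are chosen up front and sorted by time step, so the sketch is non-adaptive and time-conforming:
\begin{verbatim}
# High-level sketch of the two-stage meta-algorithm. The subroutines
# is_feasible and is_violating, and the constant 10, are placeholders.
import random

def meta_test(oracle, n, m, eps, k, t1, t2, Delta,
              is_feasible, is_violating):
    samples = [(random.randrange(t2 + 1, m), random.randrange(n))
               for _ in range(int(10 / eps))]        # O(1/eps) pairs
    queries = [(t1, (g + d) % n) for g in range(0, n, Delta)
               for d in range(-k, k + 1)]            # grid neighborhoods
    queries += [(tt, (i + d) % n) for t, i in samples
                for tt in (t2, t) for d in range(-k, k + 1)]
    # Non-adaptive: all queries are fixed in advance and sorted by time,
    # so the oracle access is time-conforming.
    answers = {q: oracle.query(*q) for q in sorted(set(queries))}
    if not is_feasible(answers):                     # grid not feasible
        return "reject"
    if any(is_violating(t, i, answers) for t, i in samples):
        return "reject"
    return "accept"
\end{verbatim}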
\subsubsection{On the analysis of the algorithm and ``backward prediction''}
To show that the algorithm always accepts environments that evolve according to the tested rule $\rho$, we prove that our definition of violation is such that there are no violations in such environments (assuming $\rho$ satisfies the aforementioned conditions).
The more involved part of the analysis is proving that if the environment $\E$ is $\eps$-far from evolving according to $\rho$, then the algorithm will detect this with probability at least $2/3$. To this end, we prove the contrapositive statement. Namely, we show that if the algorithm accepts with probability at least $2/3$, then there exists an environment that evolves according to $\rho$ and is $\eps$-close to $\E$. This is done by showing that we can construct an initial configuration $\E'_0$, such that if we let it evolve according to $\rho$, resulting in an environment $\E' \in \calE^\rho_{m,n}$, then we can upper bound the number of pairs $(t,i)$ such that $\E_t(i) \neq \E'_t(i)$ by $\eps m n$.
Here we build on a useful property of the prediction functions, by which they allow us a certain ``prediction back in time''. Namely (for $t_1$ and $t_2$ as mentioned above), we use the queried grid locations at time $t_1$ as well as some locations at time $t_2$ (which have not been queried) to determine values of locations at the earlier time $0$ in $\E'$. We prove that this can be done in a way that ensures that $\E'$ agrees with $\E$ on all pairs $(t,i)$ that are not violating.
\subsection{A short overview of the results in~{\cite{GR-dyn}}}\label{subsec:GR-dyn}
As stated earlier, the study of testing dynamic environments was initiated by Goldreich and Ron~\cite{GR-dyn},
who present several general results as well as analyze two natural specific rules.
We first provide a short overview of their main general results.
They prove that testing (one-dimensional) rules may require high query complexity.
Specifically, they show that there exists a constant $ c > 0 $ and an evolution rule $ \rho : \Sigma^3 \to \Sigma $ such that any tester of evolution according to $ \rho $ requires $ \Omega(n^c) $ queries.\footnote{Observe that it is possible to test the evolution according to any rule $\rho$ over configurations of size $n$
by performing $O(n+1/\eps)$ queries ($n$ queries to the initial configuration and $O(1/\eps)$ uniformly selected queries elsewhere). To get sublinear temporal query complexity, a total of $O(n/\eps)$ uniformly selected queries suffice (by applying a simple union bound over all possible initial configurations).}
They also prove that testing dynamic environments may be NP-Hard, provided that the temporal query complexity is ``significantly sublinear'' (where $ f(x) $ is significantly sublinear if $ f(x) < x^{1-\Omega(1)} $).
More precisely, they show that for every constant $ c > 0 $ there exists an evolution rule $ \rho : \Sigma^3 \to \Sigma $ such that no (time-conforming) polynomial-time testing algorithm with temporal query complexity $ n^{1-c} $ can test whether $ n $-sized environments evolve according to $ \rho $
(assuming $\mathcal{NP}\not\subseteq \mathcal{BPP}$).
Their general results also include a theorem concerning the usefulness of adaptivity in testing dynamic environments, a study of the relation between testing and learning dynamic environments, and a result on the power of being non time-conforming.
Goldreich and Ron~\cite{GR-dyn} also provide testers for evolution according to two specific (classes of) rules.
The first is the class of linear rules, which in the binary 1-dimensional case corresponds to the XOR rule in elementary cellular automata. They show that for any $ d \ge 1 $ and any field $ \Sigma $ of prime order, there exists a constant $ \gamma < d $ such that the following holds.
For any linear rule $ \rho : \Sigma^{3^d} \rightarrow \Sigma $ there exists a time-conforming oracle machine of (total) time complexity $ \poly(1/\eps) \cdot n^\gamma $ that tests the consistency of an evolving environment with respect to $ \rho $.
Furthermore, the tester is non-adaptive and has one-sided error.
Their second specific positive result, loosely stated, captures fixed-speed movement of objects in one-dimension such that colliding objects stop forever.
They present a (time-conforming) algorithm of (total) time complexity poly($ 1/\eps $) that tests the consistency of evolving environments with respect to that rule.
\subsection{Future directions }\label{subsec:future}
\subparagraph*{Basic dynamic environments.} A natural question that arises is whether a more nuanced version of the set of conditions formalized in this paper and the meta-algorithm can be defined and proved to work for other rules in the realm of basic dynamic environments.
Indeed, preliminary results suggest that several other rules that ultimately converge exhibit behaviors that ``resemble'' the ones captured by our conditions.
This leads us to the following conjecture.
\bigskip\noindent\textsf{Conjecture.}~\textit{
If a rule $ \rho $ ultimately converges, then it is $\poly(1/\eps)$-testable.}
\medskip
While our meta-algorithm does not apply to rules that do not ultimately converge, there are natural rules that fall under this category (the XOR rule for instance).
This raises the question of whether $ \poly(1/\eps) $ testers exist for such rules.
The answer is that there are $ \poly(1/\eps) $-testable rules that do not ultimately converge but, as we will see, the question should be slightly rephrased.\footnote{We thank one of the anonymous reviewers of this paper for pointing this out.}
To give one example, for the rule $ \rho $ defined as $ \rho(x,y,z)=x $, each configuration is simply a copy of the previous configuration, shifted one location to the right.
That is, while an environment evolving according to this rule does not, technically, ultimately converge, this rule is trivially $ \poly(1/\eps) $-testable.
However, this particular rule (and other rules that are capable of producing such ``shifting behaviors'') also has the property of not being \textit{symmetric} (i.e., it does not hold that $ \rho(x,y,z) = \rho(z, y, x) $ for every $ x,y,z $).
Hence, one way to rephrase the question is restricting it to symmetric rules.
\begin{openproblem}
Are there any symmetric rules that do not ultimately converge and are $ \poly(1/\eps) $-testable?
\end{openproblem}
Another way to rephrase this question is to define a more general notion of ultimate convergence.
Specifically, we say that a rule $\rho$ \textit{ultimately converges up to a shift} if, for any initial configuration $\E_0$, an environment evolving according to $\rho$ converges after a bounded number of steps to a constant number of configuration \textit{equivalence classes} between which it alternates, where two configurations are \textit{equivalent} if they are equal \textit{up to a shift}.
\begin{openproblem}
Are there any non-symmetric rules that do not ultimately converge up to a shift and are $ \poly(1/\eps) $-testable?
\end{openproblem}
As mentioned in \Cref{subsec:GR-dyn}, it has been shown in~\cite{GR-dyn} that the XOR rule is sublinearly testable.
However, the query complexity of the tester depends on the size of the environment and is only mildly sublinear (the complexity is $ O(n^{0.8}) $ for an environment of size $ n $).
This raises the question of whether there exists a tester for the XOR rule with significantly lower query complexity (maybe even polylogarithmic).
Another question that can be raised is whether there are other symmetric rules, ones that do not ultimately converge, that can be tested with a sublinear query complexity that depends on the size of the environment.
\begin{openproblem}
Which symmetric rules that do not ultimately converge can be tested with query complexity that is sublinear in (but strictly grows with) the size of the environment?
\end{openproblem}
\subparagraph*{More general dynamic environments.}
Building on the ideas for testing basic dynamic environments, it may be possible to venture into more general environments.
One such generalization is to consider rules that depend on more than just the three locations constituting the immediate neighborhood.
Other generalizations are to environments and rules over non-binary values, higher dimensions, and environments that evolve on more general graphs.
\subparagraph*{Non-deterministic rules.}
We also suggest considering local rules that are non-deterministic in the sense that given some configuration, the rule allows several configurations to follow.
An example of one such rule, which can be thought of as a relaxation of the $ \orr $ rule, is the rule in which each value is restricted to be monotonically non-decreasing with respect to the previous values at the location's neighborhood.
\ifnum1=0
\subsection*{Missing details}
Due to space constraints, not all details appear in this extended abstract, and can be found in the full version of this paper~\cite{fullversion}.
\fi
\section{Preliminaries}\label{sec:prel}
\setcounter{definition}{1}
In addition to the basic definitions provided in \Cref{subsec:intro-defs} regarding testing dynamic environments, here we introduce several more definitions and notations.
In all that follows, when performing operations on locations $i \in \cycnums{n}$, these operations are modulo $n$.
For a pair of locations $i,j \in \cycnums{n}$ we use
$[i,j]$ to denote the sequence $i,i+1,\dots,j$ (so that it is possible that $j < i$).
\begin{definition}\label{def:neighborhood}
For a location $i\in \cycnums{n}$ and an integer $r$, the \textsf{$r$-neighborhood} of $i$, denoted $\Gamma_r(i)$, is the sequence $[i-r,i+r]$. For a set of locations $I \subseteq \cycnums{n}$, we let $\Gamma_r(I)$ denote the set of locations in the union of sequences $[i-r,i+r]$ taken over all $i\in I$.
\end{definition}
\begin{definition}\label{def:state-machine}
For an integer $n$ and a local rule $\rho$, let $\M_{\rho}(n)$ denote the (deterministic) \textsf{state machine} that is defined as follows. Each state of $\M_{\rho}(n)$ corresponds to a different configuration $\sigma : \cycnums{n} \to \bitset$.
If a state corresponds to a configuration $\sigma$, then it has a single transition going to the state corresponding to
the configuration that results from applying $\rho$ to $\sigma$.
The \textsf{period} of $\M_\rho(n)$, denoted $p_\rho(n)$, is the maximum length of a (directed) cycle in $\M_\rho(n)$. If
there exists a constant $p$ such that $p_\rho(n) = p$ ($p_\rho(n) \leq p$) for every sufficiently large $n$, then we say that $\rho$ \textsf{has period} (at most) $p$, and that $\rho$ \textsf{ultimately converges}.
\end{definition}
\noindent
Observe that for every $\M_\rho(n)$, each strongly connected component in $\M_\rho(n)$ is either a single state with no edges in the component or a cycle (where in particular, the cycle may be a self-loop).
For example, if $\rho$ is the OR function, then it has period $1$ (as it contains only two cycles: one is a self-loop for the state corresponding to the configuration $1^n$ and the other is a self-loop for the state corresponding to the configuration $0^n$). On the other hand, there are rules, such as XOR, for which $p_\rho(n) = \Omega(n)$.
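For small $n$, the period $p_\rho(n)$ can be computed by exhaustive search over all $2^n$ configurations; the following sketch (with our own integer encoding of configurations) does so, and for the majority rule reports the period $2$ witnessed by the alternation between $(01)^{n/2}$ and $(10)^{n/2}$:
\begin{verbatim}
# Minimal sketch: compute p_rho(n) for small n by exhaustive search,
# encoding each configuration as an n-bit integer.
def period(rho, n):
    def step(c):
        bits = [(c >> i) & 1 for i in range(n)]
        return sum(rho(bits[(i - 1) % n], bits[i], bits[(i + 1) % n]) << i
                   for i in range(n))

    longest = 0
    for start in range(2 ** n):
        seen, c, t = {}, start, 0
        while c not in seen:
            seen[c] = t
            c, t = step(c), t + 1
        longest = max(longest, t - seen[c])
    return longest

maj = lambda a, b, c: int(a + b + c >= 2)
print(period(maj, 8))  # 2: the alternation between (01)^4 and (10)^4
\end{verbatim}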
\begin{definition}\label{def:dist}
For two locations $i,i' \in \cycnums{n}$, we let
$\ddist(i,i') = i'-i$ denote the directed distance from $i$ to $i'$, and let
$\dist(i,i') = \min\{\ddist(i,i'),\ddist(i',i)\}$ denote the (undirected) distance.
\end{definition}
\noindent
Note that since operations on locations are modulo $n$, we have that
$\ddist(i,i') \leq n-1$, while
$\dist(i,i') \leq n/2$ for all $i,i'\in \cycnums{n}$.
\begin{definition}\label{def:pair}
For $t\in \nums{m}$ and $i \in \cycnums{n}$, we refer to $(t,i)$ as a \textsf{time-location pair} (or simply \textsf{pair}).
\label{def:descends}
Given two locations, $ i,i' \in \cycnums{n} $ and two time steps $ t,t' \in \nums{m} $ where $ t > t' $, we say that the pair $ (t,i) $ \textsf{descends} from the pair $ (t',i') $ if $ \dist(i,i') \le t-t' $.
We say that $(t,i)$ is a \textsf{descendant} of $(t',i')$ and that $(t',i')$ is an \textsf{ancestor} of $(t,i)$.
\end{definition}
\begin{definition}\label{def:pattern}
For an integer $r$, an \textsf{$r$-pattern} is a string in $\bitset^r$.
\end{definition}
\section{The Conditions}\label{subsec:conditions}
Let $ \rho :\bitset^3 \to \bitset$ be a local rule. We present several conditions, such that if they all hold, then the rule $ \rho $ can be tested with $ \poly(1/\eps) $ queries.
These conditions capture properties of local rules that can be exploited by our (meta) algorithm.
The conditions are defined with respect to a constant (integer) $k$ (which depends on $\rho$, but for the sake of simplicity we suppress the dependence on $\rho$ and use $k$ rather than $k_\rho$), and a partition of all $(2k+1)$-patterns.\footnote{For the local rules we apply our conditions to, $k$ is either $0$ or $1$, but using a variable parameter $k$ will hopefully allow to extend our results more easily.}
The partition is denoted by $(\F_\rho,\bF_\rho)$, where $\F$ stands for \emph{final} and $\bF$ for \emph{non-final}.
We shall say that a pair $(t,i)$ is final (respectively, non-final) \emph{with respect to $\E$ and $\rho$} if $\E_t(\Gamma_k(i)) \in \F_\rho$ (respectively, $\bF_\rho$).
Roughly speaking, if $(t,i)$ is final (with respect to $\E$ and $\rho$), then location $i$ does not change from time $t$ and onward (or, more generally, $\E_{t'}(i)$ for $t'>t$ can be predicted based on $\E_t(i)$).
Furthermore, if $(t,i)$ is non-final, then $(t+1,i)$ is final if and only if $(t,i-1)$ or $(t,i+1)$ is final (so that finality is ``infectious'').
In our statements of the conditions, we make use of the parity function, which we denote by $ \parity:\NN\to \set{0,1} $ (so that $\parity(x) = 1$ if $x$ is odd and $\parity(x)=0$ if $x$ is even).
Before each of the conditions is stated formally, we give a short, informal description.
It will also be useful to have a running example of a specific rule $\rho$, which is the majority rule.
Namely, for any three bits $\beta_1,\beta_2,\beta_3$, we have $\maj(\beta_1,\beta_2,\beta_3) = 1$ if and only if
$\beta_1+\beta_2+\beta_3 \geq 2$. For the majority rule, $k=1$, and $\F_{\maj} =\{111,110,011,000,001,100\}$ (so that $\bF_{\maj} = \{101,010\}$).
The first condition says that if a location is final, then it remains final.
\begin{condition}\label{condition:final}
Let $\E \in \calE^\rho_{m,n}$ be an environment that evolves according to $\rho$.
For any time step $t \in \nums{m-1}$ and location $i\in \cycnums{n}$, if $\E_t(\Gamma_k(i))\in \F_\rho$, then $\E_{t+1}(\Gamma_k(i)) \in \F_\rho$.
\end{condition}
\noindent
Indeed, for the majority rule, if $\E_t(\Gamma_1(i)) = 111$, then $\E_{t+1}(\Gamma_1(i)) = 111 \in \F_{\maj}$,
if $\E_t(\Gamma_1(i)) = 110$, then $\E_{t+1}(\Gamma_1(i)) \in \{110,111\} \subset \F_{\maj}$, and if $\E_t(\Gamma_1(i)) = 011$, then $\E_{t+1}(\Gamma_1(i)) \in \{011,111\} \subset \F_{\maj}$ (analogous statements hold for $\E_t(\Gamma_1(i)) \in \{000,001,100\}$).
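Since, for $k=1$, the condition only involves the $2$-neighborhood of a location (the evolved pattern $\E_{t+1}(\Gamma_1(i))$ is determined by $\E_t(\Gamma_2(i))$), it can be verified for the majority rule by brute force over all $5$-bit windows, as in the following sketch:
\begin{verbatim}
# Minimal sketch: verify Condition 1 for the majority rule by checking
# every 5-bit window w = E_t(Gamma_2(i)); its middle 3 bits form
# E_t(Gamma_1(i)) and determine E_{t+1}(Gamma_1(i)).
from itertools import product

maj = lambda a, b, c: int(a + b + c >= 2)
FINAL = {p for p in product((0, 1), repeat=3)
         if p not in {(1, 0, 1), (0, 1, 0)}}

for w in product((0, 1), repeat=5):
    if w[1:4] in FINAL:                    # (t, i) is final
        nxt = tuple(maj(w[j - 1], w[j], w[j + 1]) for j in (1, 2, 3))
        assert nxt in FINAL                # then (t+1, i) is final
print("Condition 1 holds for the majority rule")
\end{verbatim}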
\smallskip
The second condition says that if a location is non-final, then it can become final in one time step if and only if it has a final neighbor.
\begin{condition}\label{condition:infecting_neighbors}
Let $\E \in \calE^\rho_{m,n}$ be an environment that evolves according to $\rho$.
For any time step $ t \in \nums{m-1} $ and location $ i\in \cycnums{n} $, if $\E_t(\Gamma_k(i)) \in \bF_\rho$, then
$\E_{t+1}(\Gamma_k(i)) \in \F_\rho$ if and only if $\E_t(\Gamma_k(i-1)) \in \F_\rho$ or
$\E_t(\Gamma_k(i+1)) \in \F_\rho$ (or both).
\end{condition}
\noindent
For the majority rule, consider the case that $\E_t(\Gamma_1(i)) = 101$ (so that it belongs to $\bF_{\maj})$.
In this case, $\E_t(\Gamma_1(i-1)) \in \{110,010\}$ and $\E_t(\Gamma_1(i+1))\in \{011,010\}$.
If $\E_t(\Gamma_1(i-1))=110$ (which belongs to $\F_{\maj}$), then $\E_{t+1}(\Gamma_1(i)) \in \{110,111\} \subset \F_{\maj}$,
and the case that $\E_t(\Gamma_1(i+1))=011$ is analogous.
On the other hand, if both $\E_t(\Gamma_1(i-1))=010$ and $\E_t(\Gamma_1(i+1))=010$ (so that they both belong to $\bF_{\maj}$), then
$\E_{t+1}(\Gamma_1(i)) = 010$ (and it belongs to $\bF_{\maj}$ as well).
Note that, if for every location $ i \in \cycnums{n} $, it holds that $\E_0(\Gamma_1(i)) \in \set{010, 101}$ (that is, every location in the initial configuration is non-final), then no location would ever become final throughout the evolution of the rule.
In particular, in this case the environment alternates between $(01)^{n/2}$ and $(10)^{n/2}$, where all the locations are non-final.
\smallskip
The first two conditions intuitively imply that one can determine whether certain locations are final or non-final using particular ``past'' locations that are known to be final or non-final.
The next two conditions capture the idea that the actual \emph{values} at certain locations (and not only whether or not they are final) can also be determined based on past locations.
In particular, the third condition captures how values at locations that are final at a certain time step can be predicted using a function that depends on ``past'' final locations from which they descend (and to which they are closest).
\begin{condition}\label{condition:final_prediction}
Let $\E \in \calE^\rho_{m,n}$ be an environment that evolves according to $\rho$.
There exists a function $ \frhor: \bitset^3 \to \bitset $ for which
the following holds.
First, $\frhor$ is the XOR of its first argument and a subset of the other two arguments.
Second, let $ (t, i) $ and $ (t', i') $ be any two pairs such that $ (t,i) $ descends from $ (t',i') $, $\E_{t}(\Gamma_k(i)) ,\E_{t'}(\Gamma_k(i')) \in \F_\rho$, and for every $ i'' \ne i' $ satisfying $ \dist(i,i'') \le \dist(i,i') $ it holds that $ \E_{t'}(\Gamma_k(i'')) \in \bF_\rho$. Then
\[ \E_t(i) = \frhor(\E_{t'}(i'), \parity(t-t'), \parity(\dist(i,i'))) \;.\]
\end{condition}
\noindent
For the majority rule, $\fruler{\maj}$ is simply the identity function on its first argument, namely, $\fruler{\maj}(\beta,\cdot,\cdot)=\beta$.
\smallskip
The fourth condition captures how locations that are non-final at a certain time step can be predicted using a function that depends on ``past'' non-final locations from which they descend (conditioned on there not being any final location among its ancestors in that past time step).
\begin{condition}\label{condition:noninf_prediction}
Let $\E \in \calE^\rho_{m,n}$ be an environment that evolves according to $\rho$.
There exists a function $ \hrhor :\bF_\rho\times \bitset \times \cycnums{n} \to \bF_\rho$
for which the following holds. First, $ \hrhor$ is reversible in the sense that
for each fixed $\tau \in \bF_\rho$, $\beta \in \bitset$ and $\ell \in \cycnums{n}$, there exists a unique $\tau'$ such that $\hrhor(\tau',\beta,\ell) = \tau$. Second, let $ (t, i) $ and $ (t', i') $ be any two pairs such that $ (t,i) $ descends from $ (t',i') $, $\E_{t}(\Gamma_k(i)),\E_{t'}(\Gamma_k(i')) \in \bF_\rho $, and $ \E_{t'}(\Gamma_k(i'')) \in \bF_\rho $ for every $ i'' $ such that $ (t,i) $ descends from $ (t',i'') $. Then
\[ \E_t(\Gamma_k(i)) = \hrhor(\E_{t'}(\Gamma_k(i')), \parity(t-t'), \ddist(i',i)) \;. \]
\end{condition}
\noindent
For the majority rule,
$\hruler{\maj}(010,\beta,x)=010$ if $\beta\oplus \parity(x) = 0$ and $\hruler{\maj}(010,\beta,x)=101$ if
$\beta\oplus \parity(x) = 1$. Similarly, $\hruler{\maj}(101,\beta,x)=101$ if $\beta\oplus \parity(x) = 0$
and $\hruler{\maj}(101,\beta,x)=010$ if $\beta\oplus \parity(x) = 1$.
The additional two conditions presented below are a bit more involved than Conditions~\ref{condition:final}--\ref{condition:noninf_prediction}, and perhaps initially less intuitive.
They do not play a role in the definition of the meta algorithm, but are applied in the proof of \Cref{lemma:soundness} (and we recommend that the reader return to them in that context).
In a nutshell, they allow us to show that if our testing algorithm accepts the environment $\E$ with high constant probability, then there exists an environment $\E'$ that evolves according to $\rho$ and is relatively close to $\E$.
In particular, they aid us in defining the initial configuration $\E'_0$ based on $\E_t$ for some appropriate time step $t$.
\begin{condition}\label{condition:non-localA}
Let $ \sigma : \cycnums{n} \to \bitset $ be a configuration and let $ [x,y] $ be an interval
of locations such that $ \sigma(\Gamma_k(x)) \in \F_\rho $ and $ \sigma(\Gamma_k(y)) \in \F_\rho $.
There exists a configuration $ \tsigma : \cycnums{n} \to \bitset $, which differs from $\sigma$ only on locations inside $[x,y]$, for which the following holds:
For every $i \in [x,y]$ we have that $ \tsigma(\Gamma_k(i)) \in \F_\rho $, and if $\sigma(\Gamma_k(i)) \in \F_\rho$, then $ \tsigma(i)=\sigma(i) $.
This condition also covers the special case in which $y=x$ and we interpret $[x,y]$ as $x,x+1,\dots,x+n$ (with a slight abuse of notation).
\end{condition}
\begin{condition}\label{condition:non-localB}
Let $ \sigma : \cycnums{n} \to \bitset $ be a configuration and $z \in \cycnums{n}$ such that $ \sigma(\Gamma_k(z)) \in \bF_\rho $.
Let $\nu \in \{\tau_{k+1}:\tau \in \F_\rho \}$ and $\gamma,\gamma'\in \bitset$.
There exists a configuration $ \tsigma : \cycnums{n} \to \bitset $ for which the following hold.
There is a location $z' \in [z+1,z+2k+1]$ where $ \tsigma(\Gamma_k(z')) \in \F_\rho $, and $\frhor(\tsigma(z'),\gamma,\parity(z'-z)\oplus \gamma') = \nu$.
Furthermore, for every $ i \in [z+1,z'-1]$ it holds that $ \tsigma(\Gamma_k(i)) \in \bF_\rho $, and for every
$ i \notin [z + k, z'+k] $, $ \tsigma(i)=\sigma(i) $.
A (symmetric) variant of the above should also hold if we replace $z' \in [z+1,z+2k+1]$ by $z' \in [z-2k-1,z-1]$, $ i \in [z+1,z'-1]$ by $i\in [z'+1,z-1]$, and $ i \notin [z + k, z'+k] $ by $i \notin [z' - k, z-k] $.
\end{condition}
\section{The Meta-Algorithm}\label{sec:cond-alg}
In this section, we present a meta-algorithm for testing evolution of local rules that satisfy the sufficient conditions (specified in \Cref{subsec:conditions}).
Here we give an algorithm whose complexity is $\lceil n/m\rceil\cdot \poly(1/\eps)$ and, in \ifnum1=1\Cref{subsec:m-n}\else the full version of this paper~\cite{fullversion}\fi, we explain how to remove the dependence on $n/m$.
In order to precisely describe our meta-algorithm, we need to first define a particular set of locations that we designate as \textit{the 1-dimensional grid} and the notion of \textit{violating time-location pairs} with respect to the 1-dimensional grid.
The 1-dimensional grid is defined in \Cref{subsec:grid} and the notion of violating pairs is defined in \Cref{subsec:violating_pairs}.
Then, in \Cref{subsec:alg}, we describe our meta-algorithm.
\ifnum1=1 We \else In the full version of this paper~\cite{fullversion}, we \fi show that these conditions hold for all (non-trivial) threshold rules\ifnum1=1 ~(\Cref{subsec:thresh-rules})\fi, as well as a couple of additional rules\ifnum1=1 ~(\Cref{subsec:other-rules})\fi.
\subsection{The grid }\label{subsec:grid}
In this subsection we introduce the notion of a
one-dimensional ``grid'',
which will be a central building block of the meta algorithm (and its analysis). Recall that a configuration is a function $\sigma:\cycnums{n}\to\bitset$. A \emph{partial} configuration is a function $\sigma':\cycnums{n}\to\bitset \cup \bot$, which will serve us to denote restrictions of configurations to a subset of the locations.
Let $\Delta = \frac{\eps^2}{b_0}\cdot \min\{n,m\}$
where $b_0$ is a sufficiently large constant.
We assume for simplicity that $\Delta$ and $n/\Delta$ are both integers.
Let $ G \subseteq \cycnums{n} $ (the \textit{grid}) be the set of locations $\{j\cdot \Delta\}_{j=0}^{n/\Delta-1}$ (so that consecutive grid locations are at distance $\Delta$).
As we shall see in \Cref{subsec:alg}, our algorithm queries the tested environment on all grid locations and their $k$-neighborhoods at a specific time step $t_1$ (which will be set subsequently).
Let $\E_t[G]$ be the partial configuration that agrees with $\E_t$ on all locations in $\{\Gamma_k(g): g\in G\}$ and is $\bot$ elsewhere.
\begin{definition}\label{def:feasible}
Given a time step $ t>0 $, we say that
the partial configuration $\E_t[G]$ induced by the $k$-neighborhoods of the grid locations at time $t$
is \textsf{feasible} with respect to
a rule $\rho$,
if there exists an environment $\E'$ that evolves according to $\rho$ and such that $\E'_t(i) = \E_t(i)$ for every $i \in \Gamma_k(G)$. We say in such a case that $\E'$ is a \textsf{feasible completion} of $\E_t[G]$ with respect to $\rho$.
\end{definition}
\begin{definition}\label{def:grid_interval}
Given a pair of grid locations $ g_1,g_2 \in G $,
a time step $ t $ and a subset $\mathcal{S} \subset \{0,1\}^{2k+1}$,
if for every grid location $ g \in G \cap [g_1,g_2]$ it holds that $ \E_t(\Gamma_k(g))\in \mathcal{S}$, then we say that the interval $ [g_1,g_2] $ is an \textsf{$\mathcal{S}$ grid interval with respect to $\E_t$}.
We say that $ [g_1,g_2] $ is a
\textsf{maximal $\mathcal{S}$ grid interval with respect to $\E_t$},
if in addition both $ \E_t(\Gamma_k(g_1-\Delta))$ and $ \E_t(\Gamma_k(g_2+\Delta))$ do not belong to $\mathcal{S}$.
\end{definition}
In particular, we shall be interested in the case that $\mathcal{S}$ is $\F_\rho$ or $\bF_\rho$.
Note that a grid interval $ [g_1,g_2] $ contains all the locations between $ g_1 $ and $ g_2 $, and not just the grid locations.
Also note that if $\E_t(\Gamma_k(g)) \in \mathcal{S}$ for every $g\in G$, then by \Cref{def:grid_interval}, there is no maximal $\mathcal{S}$ grid interval with respect to $\E_t$ (we shall deal with such cases separately).
\subsection{Violating Pairs}\label{subsec:violating_pairs}
Let $\rho$ be a fixed local rule that satisfies all the conditions stated in \Cref{subsec:conditions}.
Let $ t_1= \frac{b_1 \Delta}{\epsilon} $, where $b_1$ is a sufficiently large constant
and $\Delta$ is as defined in \Cref{subsec:grid}. Let $ t_2=t_1 + \Delta$.
We now define the concept of a violating pair $(t,i) \in \nums{m}\times \cycnums{n}$
with respect to $ \E_{t_1}$.
Generally speaking, these are pairs in the environment $ \E $ whose values are inconsistent with evolving according to the rule $ \rho $ given the values at the grid locations at time $ t_1 $.
The definition of a violating pair serves us later by allowing our algorithm to reject when it encounters one, which, as we prove, happens with high constant probability if $ \E $ is $ \epsilon $-far from evolving according to the rule $ \rho $.
\begin{figure}[htb!]
\centerline{\mbox{\includegraphics[width=1\textwidth]{ABCU.jpg}
}}
\caption{\small An illustration for the sets $A$, $B$, $C$, and $U$. Here $[g_1,g_2]$ is a maximal $\F_\rho$ grid interval, $g_3 = g_2+\Delta$, where $[g_3,g_4]$ is a maximal $\bF_\rho$ grid interval, and $g_5=g_4+\Delta$ is an endpoint of a maximal $\F_\rho$ grid-interval. The area marked by $A$ corresponds to pairs $(t,i)$ such that $t>t_2$
and $i \in [g_1+ t_1,g_2-t_1]$. These pairs are supposed to be final.
The area marked by $B$ corresponds to pairs $(t,i)$ such that $t>t_2$,
$i \in [g_2-t_1+\Delta,g_2+(t-t_1)]$, and $\dist(g_2,i)< \dist(g_5,i) - \Delta$.
These pairs are supposed to be final too.
The area marked by $C$ corresponds to pairs $(t,i)$ such that $t>t_2$,
$i \in [g_3,g_4]$, and $ (t,i) $ neither descends from
$ (t_1,g_3+1) $ nor from $ (t_1,g_4-1) $.
These pairs are supposed to be non-final.
Finally, the areas marked by $U$ correspond to pairs $(t,i)$ such that $t>t_2$ and one of the following holds: {\bf (1)} $i \in [g_2-t_1+1, g_2-t_1+\Delta]$; {\bf (2)}
$i \in [g_3,g_4]$ and either {\bf (a)} $ (t,i) $ descends from $ (t_1,g_3-\Delta) $ or $(t_1,g_4+\Delta)$
and $ |\dist(g_3, i) - \dist(g_4, i)| \le \Delta$, or {\bf (b)}
$ (t,i) $ does not descend from either $ (t_1,g_3-\Delta)$ or $(t_1,g_4+\Delta)$ but it descends from either $ (t_1,g_3) $ or $ (t_1,g_4) $.}
\label{fig:ABCU}
\end{figure}
We next define three disjoint sets of time-location pairs, denoted $ A $, $ B $ and $ C $, and for each of these three sets we state conditions under which the pair is considered to be a violating pair with respect to $ \E_{t_1}[G] $.
The proof that the three sets are pairwise disjoint appears in \Cref{subsec:observations},
and for an illustration, see \Cref{fig:ABCU}.
\smallskip
If $\E_{t_1}(\Gamma_k(g)) \in \F_\rho$ for every $g\in G$, then $A = \{(t,i): t_2 < t < m,\; i\in \cycnums{n}\}$. Otherwise,
$ A $ is the set of pairs $ (t,i) $ where $ t_2 < t < m $ and $ i \in \cycnums{n} $ such that there exists a maximal $\F_\rho$ grid interval $ [g_1(i), g_2(i)] $ with respect to $\E_{t_1}$
for which
$i \in [g_1(i)+ t_1,g_2(i)-t_1]$.
\begin{definition}\label{def:A-violating}
A pair $ (t,i) \in A $ is said to be a violating pair with respect to $ \E_{t_1}[G] $,
if at least one of the following requirements does not hold.
~(1) $ \E_{t_2}(\Gamma_k(i)) \in \F_\rho $. ~(2) $ \E_t(\Gamma_k(i)) \in \F_\rho $.
~(3) $ \E_t(i) = \frhor(\E_{t_2}(i), \parity(t-t_2), 0) $ where $ \frhor $ is the function referred to in \Cref{condition:final_prediction}.
\end{definition}
\smallskip
Let $ B $ be the set of pairs $(t,i)$,
where $ t_2 < t < m $ and $ i \in \cycnums{n} $ for which the following holds.
First, there exists a maximal $\F_\rho$ grid interval $ [g_1(i), g_2(i)] $ with respect to $\E_{t_1}$
such that either
$i \in [g_1(i) -(t-t_1),g_1(i)+t_1-\Delta-1]$ or $i \in [g_2(i)-t_1+\Delta+1,g_2(i)+(t-t_1)]$.
Second, for every other maximal $\F_\rho$ grid interval $ [g'_1, g'_2] $ (with respect to $\E_{t_1}$),
if $i \in [g_1(i) -(t-t_1),g_1(i)+t_1-\Delta-1]$, then
$\dist(g_1(i), i) <\dist(g_2', i) - \Delta$, and if $i \in [g_2(i)-t_1+\Delta+1,g_2(i)+(t-t_1)]$,
then $\dist(g_2(i), i) <\dist(g_1', i) - \Delta$.
\begin{definition}\label{def:B-violating}
A pair $ (t,i) \in B $ is said to be a violating pair with respect to $ \E_{t_1}[G] $, if at least one of the following requirements does not hold.
~(1) $\E_{t}(\Gamma_k(i)) \in \F_\rho $.
~(2) Let $[g_1(i),g_2(i)]$ be the maximal $\F_\rho$ grid interval ensured by the definition of $B$ given $(t,i)$. Let $g(i)$
be the grid location in $G \cap \left( [g_1(i),g_1(i)+t_1-\Delta] \cup [g_2(i)-t_1+\Delta,g_2(i)] \right)$ that is closest to $i$ (if there are two such grid locations, then select the one closer to $g_1(i)$).
Then $\E_t(i) = \frhor(\E_{t_1}(g(i)), \parity(t-t_1), \parity(\dist(i,g(i)))) $,
where $ \frhor$ is the function referred to in \Cref{condition:final_prediction}.
\end{definition}
\smallskip
If $\E_{t_1}(\Gamma_k(g)) \in \bF_\rho$ for every $g\in G$, then $C = \{(t,i): t_2 < t < m,\; i\in \cycnums{n}\}$. Otherwise,
$ C $ is the set of pairs $ (t,i) $ where $ t_2 < t < m $ and $ i \in \cycnums{n} $ for which the following holds. First, there exists a maximal $\bF_\rho$ grid interval $ [g_1(i), g_2(i)] $ with respect to $\E_{t_1}$ such that
$i \in [g_1(i),g_2(i)]$.
Second, the pair $ (t,i) $ neither descends from the pair
$ (t_1,g_1(i)+1) $ nor from the pair $ (t_1,g_2(i)-1) $.
\begin{definition}\label{def:C-violating}
A pair $ (t,i) \in C $ is said to be a violating pair with respect to $ \E_{t_1}$,
if at least one of the following requirements does not hold.
~(1) $ \E_{t}(\Gamma_k(i)) \in \bF_\rho $.
~(2) Let $g(i) \in G $ be a grid location satisfying $\dist(g(i),i)< \Delta$ (if there are two such grid locations, then select the one closer to $g_1(i)$).
Then $ \E_t(\Gamma_k(i)) = \hrhor(\E_{t_1}(\Gamma_k(g(i))), \parity(t-t_1), \ddist(g(i),i)) $.
\end{definition}
Finally, we define the set $U$ of \emph{uncertain} pairs $(t,i)$, for which we cannot determine, given $\E_{t_1}[G]$ and the corresponding pairs $(t_2,i)$, whether they are violating or not.
\begin{definition}\label{def:U}
The set $U$ consists of all pairs
$ (t,i) \in \cycnums{n} \times \nums{m}$ such that $t>t_2$ and $(t,i) \notin A \cup B \cup C $.
\end{definition}
In \Cref{subsec:observations} we show that the number of pairs $ (t,i) $ belonging to the set $ U $ is relatively small, provided that $ \E_{t_1}[G] $ is feasible.
\subsection{The testing algorithm}\label{subsec:alg}
Recall that $\Delta = \frac{\eps^2}{b_0}\cdot \min\{n,m\}$,
$ t_1= \frac{b_1 \Delta}{\epsilon} $, and $t_2 = t_1 + \Delta$
(where $b_0$ and $b_1$ are constants that will be set in the analysis).
\begin{figure}[htb!]
\fbox{\begin{minipage}[c] {\textwidth}%
\smallskip{}
\textbf{Tester for evolution according to a rule $\rho$}
\begin{enumerate}
\item\label{step:reject_infeasible_grid} Query $\E_{t_1}$ on all locations in $\Gamma_k(G)$. If $ \E_{t_1}[G] $ is infeasible with respect to $\rho$, reject.
\item\label{step:select_random_pairs} Select uniformly at random $ \Theta(\frac{1}{\epsilon}) $ pairs $ (t,i) $ where $ i \in \cycnums{n} $ and $ t_2 < t < m $. \\ For each selected pair $ (t,i) $, query $ \E_{t}(\Gamma_k(i)) $ and $ \E_{t_2}(\Gamma_k(i)) $.
\item\label{step:reject_violations} If some pair selected in Step~\ref{step:select_random_pairs} is a violating pair with respect to $\E_{t_1}[G]$, then reject. \\ Otherwise, accept.
\end{enumerate}
\vspace{3pt}
%
\end{minipage}}
\caption{The testing algorithm}\label{fig:alg}
\end{figure}
\setcounter{theorem}{2}
\begin{theorem}\label{thm:main}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}.
The algorithm described in \Cref{fig:alg} is a one-sided error non-adaptive testing algorithm for evolution according to $\rho$ whose query complexity is $O(\lceil n/m\rceil/\eps^2)$.
\end{theorem}
The bound on the query complexity of the algorithm follows from the fact that the number of queries performed in Step~\ref{step:reject_infeasible_grid} is $O(n/\Delta) = O(\lceil n/m \rceil/\eps^2)$ (recall that $k$ is a constant), and the number of queries performed in Step~\ref{step:select_random_pairs} is $O(1/\eps)$.
The correctness of the algorithm follows from the next two lemmas.
We prove \Cref{lemma:completeness} in \Cref{subsec:lem-E-obys} and \Cref{lemma:soundness} in \Cref{subsec:lem-E-far}.
\setcounter{lemma}{0}
\begin{lemma}[Completeness of the meta-algorithm]\label{lemma:completeness}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}.
If the environment $ \E $ evolves according to $\rho $,
then the algorithm accepts with probability 1.
\end{lemma}
\begin{lemma}[Soundness of the meta-algorithm]\label{lemma:soundness}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}.
If the environment $ \E $ is $ \epsilon $-far from evolving according to $ \rho $, then the algorithm rejects with
probability at least $2/3$.
\end{lemma}
\section{Observations and simple claims}\label{subsec:observations}
\setcounter{observation}{0}
In this subsection we present several observations and simple claims that will be used in our proofs of \Cref{lemma:completeness} and \Cref{lemma:soundness}.
The first two observations are directly implied by Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors}.
\begin{observation}\label{obs:F-F}
Let $\rho$ be a local rule that satisfies Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors},
$\E \in \calE^\rho_{m,n}$ an environment that evolves according to $\rho$
and $(t,i) \in \nums{m}\times \cycnums{n}$. If $(t,i)$ has an ancestor $(t',i')$
such that $\E_{t'}(\Gamma_k(i')) \in \F_\rho $, then $\E_{t}(\Gamma_k(i)) \in \F_\rho$.
\end{observation}
Note that \Cref{obs:F-F} implies that if $\E_{t}(\Gamma_k(i)) \in \bF_\rho$, then $\E_{t'}(\Gamma_k(i')) \in \bF_\rho$ for every ancestor $(t',i')$ of $(t,i)$.
\begin{observation}\label{obs:long_final_intervals}
Let $\rho$ be a local rule that satisfies Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors},
$\E \in \calE^\rho_{m,n}$ an environment that evolves according to $\rho$
and $(t,i) \in \nums{m}\times \cycnums{n}$, $t \leq n/2$.
If $\E_t(\Gamma_k(i))\in \F_\rho$, then the location $ i $ belongs to an interval whose size is at least $2t$ such that $\E_t(\Gamma_k(j))\in \F_\rho$ for every location $j$ in this interval.
\end{observation}
\smallskip
The observation below directly follows from \Cref{obs:long_final_intervals}
(as well as the definition of the grid $G$ and Definitions~\ref{def:feasible} and~\ref{def:grid_interval}).
\begin{observation}\label{obs:max-F-grid-interval}
Let $\rho$ be a local rule that satisfies Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors}.
Suppose that $ \E_t[G] $ for $t \leq n/2$ is feasible with respect to $\rho$.
Then
for every $[g_1,g_2]$ that is a maximal $\F_\rho$ grid interval with respect to $\E_{t}$,
the number of locations in $[g_1,g_2]$ is at least $2t-\Delta$.
\end{observation}
The next observation follows directly from \Cref{obs:F-F}
(as well as the definition of $G$ and Definitions~\ref{def:feasible} and~\ref{def:grid_interval}).
\begin{observation}\label{obs:max-F-grid-interval-t2}
Let $\rho$ be a local rule that satisfies Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors}.
Suppose that $ \E_t[G] $ for $t \leq n/2$ is feasible with respect to $\rho$
and let $g\in G$ be such that $\E_t(\Gamma_k(g)) \in \F_\rho$.
If $ \E' $ is a feasible completion of $ \E_t[G]$ (with respect to $\rho$),
then $\E'_{t'}(\Gamma_k(i)) \in \F_\rho$ for every
$t' \geq t+\Delta$ and
$i \in [g-\Delta,g+\Delta]$.
\end{observation}
\Cref{claim:max-bF-grid-interval}, stated next, also deals with feasible completions.
\begin{claim}\label{claim:max-bF-grid-interval}
Let $\rho$ be a local rule that satisfies Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors}.
Suppose that $ \E_t[G] $ is feasible with respect to $\rho$ for $t\geq\Delta$ and let $g\in G$ be such that both $\E_t(\Gamma_k(g)) \in \bF_\rho $ and $\E_t(\Gamma_k(g+\Delta)) \in \bF_\rho $.
If $ \E' $ is a feasible completion of $ \E_t[G]$ (with respect to $\rho$),
then $\E'_{t}(\Gamma_k(i)) \in \bF_\rho $ for every
$i\in [g,g+\Delta]$.
\end{claim}
\ifnum1=1
\begin{proof}
Assume, contrary to the claim, that there exists some
$i\in [g,g+\Delta]$
such that
$\E'_t(\Gamma_k(i)) \in \F_\rho$ (where $\E'$ is a feasible completion of $ \E_t[G]$ with respect to $\rho$).
By Conditions~\ref{condition:final} and~\ref{condition:infecting_neighbors},
there must be some location $j$ such that $(t,i)$ descends from $(0,j)$ and
$\E'_0(\Gamma_k(j)) \in \F_\rho$. But this would imply
(once again by \Cref{condition:infecting_neighbors}),
that $\E'_t(\Gamma_k(g')) \in \F_\rho$ for $g'=g$ or $g' =g+\Delta$ (indeed, since $i\in[g,g+\Delta]$ and $t \geq \Delta$, at least one of $(t,g)$ and $(t,g+\Delta)$ also descends from $(0,j)$), and we have reached a contradiction.
\end{proof}
\fi
\smallskip
Recall the definitions of the sets $A$, $B$ and $C$ from \Cref{subsec:violating_pairs}.
\begin{claim}
The sets $ A $, $ B $, and $ C $ are pairwise disjoint.
\end{claim}
\ifnum1=1
\begin{proof}
We first show that $ A \cap B = \emptyset $.
Suppose by way of contradiction that there exists a pair $ (t,i) \in A \cap B $.
In order for a pair $ (t,i) $ to belong to the set $ A $, it must hold that $ i \in [g_1^A+t_1,g_2^A-t_1] $ where $ [g_1^A,g_2^A] $ is a maximal $\F_\rho$ grid interval with respect to $\E_{t_1}$.
In order for a pair $ (t,i) \in A $ to also belong to the set $ B $, it must hold that $i \in [g_1^B -(t-t_1),g_1^B+t_1-\Delta-1]$ or $i \in [g_2^B-t_1+\Delta+1,g_2^B+(t-t_1)]$ where $ [g_1^B,g_2^B] $ is a maximal $\F_\rho$ grid interval with respect to $\E_{t_1}$.
Since $ (t,i) \in A $, then, in particular, the location $ i $ must belong to a maximal $\F_\rho$ grid interval with respect to $\E_{t_1}$.
In the cases where $ i \in [g_1^B -(t-t_1),g_1^B-1] $ or $ i \in [g_2^B+1,g_2^B+(t-t_1)] $, the location $ i $ cannot belong to a maximal $ \F_\rho $ grid interval.
The reason is that, by the definition of the set $ B $, for every other maximal $\F_\rho$ grid interval $ [g'_1, g'_2] \ne [g_1^B,g_2^B] $ (with respect to $\E_{t_1}$), if $i \in [g_1^B -(t-t_1),g_1^B+t_1-\Delta-1]$, then $\dist(g_1^B, i) < \dist(g_2', i) - \Delta$, and if $i \in [g_2^B-t_1+\Delta+1,g_2^B+(t-t_1)]$, then $\dist(g_2^B, i) <\dist(g_1', i) - \Delta$.
Hence, in the cases in which $ i \in [g_1^B -(t-t_1),g^B_1-1] $ or $ [g_2^B+1,g_2^B+(t-t_1)] $, if the location $ i $ were inside a maximal $ \F_\rho $ grid interval, then that interval would be adjacent to the interval $ [g_1^B,g_2^B] $.
However, this is impossible, since between each pair of maximal $ \F_\rho $ grid intervals there is a maximal $\bF_\rho$ grid interval with respect to $ \E_{t_1} $.
Therefore, it must hold that either $i \in [g_1^B,g_1^B+t_1-\Delta-1]$ or $i \in [g_2^B-t_1+\Delta+1,g_2^B]$.
If $ [g_1^A,g_2^A] \ne [g_1^B,g_2^B] $, then it clearly cannot hold that the location $ i $ also belongs to $ [g_1^A+t_1,g_2^A-t_1] $ (because the maximal $\F_\rho$ grid intervals are disjoint with respect to each other).
Also, if $ [g_1^A,g_2^A] = [g_1^B,g_2^B] $, it also cannot hold that $ i \in [g_1^A+t_1,g_2^A-t_1] $ because in this case all the locations belonging to the interval $ [g_1^B,g_1^B+t_1-\Delta-1] $ are to the left of all the locations belonging to the interval $ [g_1^A+t_1,g_2^A-t_1] $.
Similarly, all the locations belonging to the interval $ [g_2^B-t_1+\Delta+1,g_2^B] $ are to the right of all the locations belonging to the interval $ [g_1^A+t_1,g_2^A-t_1] $.
Hence, if $ (t,i) \in A $ then $ (t,i) \notin B $, and therefore $ A \cap B = \emptyset $.
We now show that $ B \cap C = \emptyset $.
Suppose by way of contradiction that there exists a pair $ (t,i) \in B \cap C $.
Since $ (t,i) \in C $, it must hold that $ i \in [g_1^C,g_2^C] $ where $ [g_1^C,g_2^C] $ is a maximal $ \bF_\rho $ grid interval with respect to $ \E_{t_1} $ and that $ (t,i) $ neither descends from the pair $ (t_1,g_1^C+1) $ nor from the pair $ (t_1,g_2^C-1) $.
For $ (t,i) $ to also belong to the set $ B $, it must hold that $i \in [g_1^B -(t-t_1),g_1^B+t_1-\Delta-1]$ or $i \in [g_2^B-t_1+\Delta+1,g_2^B+(t-t_1)]$ where $ [g_1^B,g_2^B] $ is a maximal $\F_\rho$ grid interval with respect to $\E_{t_1}$.
However, since $ (t,i) \in C $, and therefore, in particular, the location $ i $ must belong to a maximal $\bF_\rho$ grid interval with respect to $\E_{t_1}$, the cases where $i \in [g_1^B,g_1^B+t_1-\Delta-1]$ and $i \in [g_2^B-t_1+\Delta+1,g_2^B]$ cannot hold.
Therefore, it must hold that either $ i \in [g_1^B -(t-t_1),g_1^B-1] $ or $ i \in [g_2^B+1,g_2^B+(t-t_1)] $.
If $ [g_1^C,g_2^C] $ and $ [g_1^B,g_2^B] $ are not adjacent maximal grid intervals (adjacent in the sense that they are at a distance of $ \Delta $ from each other), then the location $ i $ cannot also belong to $ [g_1^C,g_2^C] $.
Hence, we can assume without loss of generality that $ i \in [g_2^B+1,g_2^B+(t-t_1)] $ and $ g_1^C=g_2^B+\Delta $.
Thus, $ \dist(i,g_1^C+1) = \dist(i,g_2^B+\Delta+1) \le t - t_1 $.
That is, the pair $ (t,i) $ descends from the pair $ (t_1,g_1^C+1) $, in contradiction to $ (t,i) \in C $.
Therefore, no pair $ (t,i) $ can belong to both $ B $ and $ C $, so that $ B \cap C = \emptyset $.
Finally, to see that $ A \cap C = \emptyset $, note that in order for a pair $ (t,i) $ to belong to the set $ A $ it must belong to a maximal $\F_\rho$ grid interval with respect to $\E_{t_1}$, and in order for a pair $ (t,i) $ to belong to the set $ C $ it must belong to a maximal $ \bF_\rho $ grid interval with respect to $ \E_{t_1} $.
Therefore, no $ (t,i) $ pair can belong to both $ A $ and $ C $, and hence $ A \cap C = \emptyset $.
\end{proof}
\fi
\smallskip
In the last claim of this subsection, we bound the size of the set $U$ of uncertain pairs (as defined in \Cref{def:U}).
\begin{claim}\label{claim:U_is_small}
If $ \E_{t_1}[G] $ is feasible (with respect to $\rho$), then $ |U| \leq \frac{5\eps}{b_1} m n $
(where $b_1$ is the constant in the setting of $t_1 = \frac{b_1\Delta}{\eps}$).
\end{claim}
We note that \Cref{claim:U_is_small} does not depend on the setting of $\Delta$, but only on the definition of $t_1$ as a function of $\Delta$ (as well as the definition of the grid $G$, which, too, is defined based on $\Delta$, and in turn is used to determine $U$).
\ifnum1=1
\begin{proof}
First note that if $\E_{t_1}(\Gamma_k(g)) \in \F_\rho$ for every $g \in G$ or
$\E_{t_1}(\Gamma_k(g)) \in \bF_\rho$ for every $g \in G$, then $U$ is empty.
Hence, we assume from this point on that neither is the case, so that there is at least one (non-empty) maximal $\F_\rho$ grid interval and at least one (non-empty) maximal $\bF_\rho$ grid interval.
By the definition of $U$ and the sets $A$, $B$, and $C$, a pair $ (t,i) $ belongs to the set $ U $ if $t> t_2$ and
one of the following holds.
\begin{enumerate}
\item There exists a maximal $\F_\rho$ grid interval $ [g_1(i), g_2(i)] $, such that either
$i\in [g_1(i)+t_1-\Delta, g_1(i)+t_1-1]$
or
$i \in [g_2(i)-t_1+1, g_2(i)-t_1+\Delta]$.
\item There exists a maximal $\bF_\rho$ grid interval $ [g_1(i), g_2(i)] $
with respect to $\E_{t_1}$
where
$i \in [g_1(i),g_2(i)]$. Furthermore, if the pair $ (t,i) $ descends from at least one of the pairs $ (t_1,g_1(i)-\Delta) $ or $ (t_1,g_2(i)+\Delta) $, then $ |\dist(g_1(i), i) - \dist(g_2(i), i)| \le \Delta$. Otherwise (the pair $ (t,i) $ does not descend from either $ (t_1,g_1(i)-\Delta) $ or $(t_1,g_2(i)+\Delta)$), the pair $ (t,i) $ descends from either the pair $ (t_1,g_1(i)) $ or from the pair $ (t_1,g_2(i)) $.
\end{enumerate}
%
By \Cref{obs:max-F-grid-interval}, the length of each maximal $\F_\rho$ grid interval at time $t_1$ is at least $t_1$. Therefore, the number of maximal $\F_\rho$ grid intervals is at most $\frac{n}{t_1} = \frac{\eps n}{b_1\Delta}$. This implies that the total number of pairs $(t,i)\in U$ of the first type described above is at most $2\Delta \cdot \frac{\eps n}{b_1\Delta} \cdot m = \frac{2\eps}{b_1} m n $.
Since between every two maximal $\bF_\rho$ grid intervals there is a maximal $\F_\rho$ grid interval,
the number of maximal $\bF_\rho$ grid intervals is also upper bounded by $\frac{\eps n}{b_1\Delta}$.
We shall show that for each maximal $\bF_\rho$ grid interval $[g_1,g_2]$, the number of pairs in $U$ that descend from either $ (t_1,g_1) $ or $ (t_1,g_2) $ is at most $3\Delta m$, from which the current claim follows.
Consider any fixed maximal $\bF_\rho$ grid interval $[g_1,g_2]$. The number of pairs
$ (t,i) $,
where $i \in [g_1,g_2]$,
that descend from at least one of the pairs $ (t_1,g_1) $ or $ (t_1,g_2) $, and for which $ |\dist(g_1, i) - \dist(g_2, i)| \le \Delta$ is at most $\Delta m$. This upper bound actually follows by using only the condition $ |\dist(g_1, i) - \dist(g_2, i)| \le \Delta$.
The number of pairs $ (t,i) $, where $i \in [g_1,g_2]$,
that do not descend from either $ (t_1,g_1-\Delta) $ or $(t_1,g_2+\Delta)$, but do descend from either $ (t_1,g_1) $ or $ (t_1,g_2) $ is at most $2\Delta m$.
This follows directly from the definition of descending pairs.
\end{proof}
\fi
\section{Proof of {\Cref{lemma:completeness}}: \nameref{lemma:completeness}}\label{subsec:lem-E-obys}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}
(where in this proof we do not make use of Conditions~\ref{condition:non-localA} and~\ref{condition:non-localB}, which are provided in the next subsection),
and let $ \E \in \calE^\rho_{m,n} $ be a dynamic environment that evolves according to $ \rho $.
The only steps in which our algorithm may reject are Step~\ref{step:reject_infeasible_grid} and Step~\ref{step:reject_violations}. Since $\E$ itself is a completion of $\E_{t_1}[G]$ that evolves according to $\rho$, the grid values $\E_{t_1}[G]$ are feasible by definition, and hence the algorithm does not reject in Step~\ref{step:reject_infeasible_grid}. To show that it also does not reject in Step~\ref{step:reject_violations},
we show that there are no violating pairs with respect to $ \E_{t_1}[G] $.
Recall that each violating pair belongs to one of the three sets $A$, $B$, or $C$ (as defined in \Cref{subsec:violating_pairs}).
Specifically, we next show that in each of the three cases ($ (t,i) \in A $, $ (t,i) \in B $, and $ (t,i) \in C $), the requirements (specified in \Cref{subsec:violating_pairs}) for $ (t,i) $ being a non-violating pair hold.
In what follows, if we say that a pair $(t,i)$ is final (similarly, non-final), then we mean with respect to $\E$, and when we refer to maximal grid intervals, it is always with respect to $\E_{t_1}$,
and violations are always with respect to $ \E_{t_1}[G] $.
\subparagraph{Pairs {\boldmath{$ (t,i) \in A $}}.}
By the definition of $A$,
$t > t_2$ and
there exists a grid location $g(i)\in G$ such that $\dist(i,g(i)) \leq \Delta$ and $\E_{t_1}(\Gamma_k(g(i)))\in \F_\rho$
(this holds both in the case that $\E_{t_1}(\Gamma_k(g))\in \F_\rho$ for every $g\in G$ and in the case that
there exists a maximal $\F_\rho$ grid interval $[g_1(i),g_2(i)]$ such that
$i \in [g_1(i)+ t_1,g_2(i)-t_1]$.)
By \Cref{obs:max-F-grid-interval-t2}, both $\E_{t_2}(\Gamma_k(i)) \in \F_\rho$ and
$\E_t(\Gamma_k(i)) \in \F_\rho$.
Turning to the third requirement, by \Cref{condition:final_prediction}, applied with $t'=t_2$ and $i'=i$, we get that
$\E_t(i) = \frhor(\E_{t_2}(i), \parity(t-t_2), 0) $. Therefore, all three requirements on pairs in $A$ hold, and hence $(t,i)$ is not a violating pair.
\subparagraph{Pairs {\boldmath{$ (t,i) \in B $}.}}
By the definition of $B$, $t > t_2$ and there exists a maximal $\F_\rho$ grid interval $ [g_1(i), g_2(i)] $ with respect to $\E_{t_1}$ such that either
$i \in [g_1(i) -(t-t_1),g_1(i)+t_1-\Delta-1]$ or $i \in [g_2(i)-t_1+\Delta+1,g_2(i)+(t-t_1)]$.
Furthermore, for every other maximal $\F_\rho$ grid interval $ [g'_1, g'_2] $ (with respect to $\E_{t_1}$),
if $i \in [g_1(i) -(t-t_1),g_1(i)+t_1-\Delta-1]$, then
$\dist(g_1(i), i) <\dist(g_2', i) - \Delta$, and if $i \in [g_2(i)-t_1+\Delta+1,g_2(i)+(t-t_1)]$,
then $\dist(g_2(i), i) <\dist(g_1', i) - \Delta$.
Let $g(i)$
be the grid location closest to $i$ in
$G \cap \left( [g_1(i),g_1(i)+t_1-\Delta] \cup [g_2(i)-t_1+\Delta,g_2(i)] \right)$
(as defined in the second requirement concerning (non-)violating pairs $(t,i)\in B$).
We claim that $(t,i)$ descends from $(0,g(i))$.
To see why, first consider the case in which $ i \in [g_1(i)-(t-t_1),g_1(i)] \cup [g_2(i),g_2(i)+(t-t_1)] $.
In this case, either $ g(i)=g_1(i) $ or $ g(i)=g_2(i) $, which means that $ \dist(i,g(i)) \le t-t_1 \le t $.
Second, consider the case in which $ i \in [g_1(i),g_1(i)+t_1-\Delta-1] \cup [g_2(i)-t_1+\Delta+1,g_2(i)] $.
In this case, the grid location closest to $ i $ in $ G \cap \left( [g_1(i),g_1(i)+t_1-\Delta] \cup [g_2(i)-t_1+\Delta,g_2(i)] \right) $ is within a distance of at most $ \Delta $ from the location $ i $.
Hence, $ \dist(i,g(i)) \le \Delta \le t $.
Therefore, in any case, $ \dist(i,g(i)) \le t $, and thus the pair $ (t,i) $ descends from the pair $ (0,g(i)) $.
Assume (without loss of generality) that $g(i) \in [g_2(i)-t_1+\Delta,g_2(i)]$.
Since $\E_{t_1}(\Gamma_k(g_2(i) + \Delta))\in \bF_\rho$ (as $[g_1(i),g_2(i)]$ is a maximal $\F_\rho$ grid interval),
we know (by the note following \Cref{obs:F-F}) that
$\E_0(\Gamma_k(j)) \in \bF_\rho$ for every $j \in [g_2(i)+\Delta - t_1,g_2(i)+\Delta + t_1 ]$.
However, since $\E_{t_1}(\Gamma_k(g_2(i)))\in \F_\rho$, there must be some location $\ell \in [g_2(i)-t_1, g_2(i)+\Delta - t_1-1]$
such that $\E_0(\Gamma_k(\ell)) \in \F_\rho$. Among the locations $\ell$ that satisfy these conditions, let $\ell^*$ be the one that minimizes $\dist(\ell,g_2(i)+\Delta - t_1)$, so that for every $\ell' \in [\ell^*+1, g_2(i)+\Delta - t_1]$ we have that $\E_0(\Gamma_k(\ell')) \in \bF_\rho$.
Hence, for every $ i'' \ne \ell^* $ satisfying $ \dist(g(i),i'') \le \dist(g(i),\ell^*) $ it holds that $ \E_0(\Gamma_k(i'')) \in \bF_\rho $.
Additionally, since $ g(i) \in [g_2(i)-t_1+\Delta,g_2(i)] $ and $ \ell^* \in [g_2(i)-t_1, g_2(i)+\Delta - t_1-1] $, it must hold that $ \dist(g(i),\ell^*) \le t_1 $, which means that the pair $ (t_1,g(i)) $ descends from the pair $ (0,\ell^*) $.
Also, both $ \E_{t_1}(\Gamma_k(g(i))) \in \F_\rho $ and $ \E_0(\Gamma_k(\ell^*)) \in \F_\rho $.
Thus, we can apply \Cref{condition:final_prediction} for the two pairs $ (0,\ell^*) $ and $ (t_1,g(i)) $ to get that
$ \E_{t_1}(g(i)) = \frhor(\E_{0}(\ell^*), \parity(t_1), \parity(\dist(\ell^*,g(i)))) $.
Since the pair $ (t,i) $ descends from the pair $ (t_1, g(i)) $, and the pair $ (t_1, g(i)) $ descends from the pair $ (0, \ell^*) $, it holds that the pair $ (t,i) $ must also descend from the pair $ (0,\ell^*) $.
Additionally, both $ \E_{t}(\Gamma_k(i)) \in \F_\rho $ (by \Cref{obs:F-F}, since $(t,i)$ descends from $(0,\ell^*)$) and $ \E_0(\Gamma_k(\ell^*)) \in \F_\rho $.
Also, by the second requirement on $ (t,i) $, involving other maximal $ \F_\rho $ grid intervals $ [g'_1, g'_2] $, for every $ i'' \ne \ell^* $ satisfying $ \dist(i,i'') \le \dist(i,\ell^*) $ it holds that $ \E_0(\Gamma_k(i'')) \in \bF_\rho $.
Thus, we can apply \Cref{condition:final_prediction} for the two pairs $ (0,\ell^*) $ and $ (t,i) $ to get that
$ \E_{t}(i) = \frhor(\E_{0}(\ell^*), \parity(t), \parity(\dist(\ell^*,i))) $.
But then, since $\frhor$ is the XOR of its first argument and a subset of the other two,
and $\parity(t-t_1) = \parity(t_1) \oplus \parity(t)$ as well as
$ \parity(\dist(g(i),i)) = \parity(\dist(\ell^*,g(i))) \oplus \parity(\dist(\ell^*,i))$,
we get that
$ \E_{t}(i) = \frhor(\E_{t_1}(g(i)), \parity(t-t_1), \parity(\dist(g(i),i))) $.
\subparagraph{Pairs {\boldmath{$ (t,i) \in C $}}.}
There are two cases (where in both $t> t_2$).
The first is that $\E_{t_1}(\Gamma_k(g)) \in \bF_\rho$ for every $g\in G$ (so that $i$ may be any location in $\cycnums{n}$).
In the second case there exists a maximal
$\bF_\rho$ grid interval $ [g_1(i), g_2(i)] $
such that
$i \in [g_1(i),g_2(i)]$,
and, by the definition of $C$, $(t,i)$ descends from neither the pair
$(t_1,g_1(i)+1) $ nor the pair $(t_1,g_2(i)-1) $; since $i\in[g_1(i),g_2(i)]$, this implies that $(t,i)$ also does not descend from either $(t_1,g_1(i)-1)$ or $(t_1,g_2(i)+1)$, so that for every $ j \in \cycnums{n} $, if the pair $ (t,i) $ descends from $ (t_1,j) $, then $ j \in [g_1(i),g_2(i)] $.
In both cases, by \Cref{claim:max-bF-grid-interval},
all ancestors $(t_1,j)$ of $(t,i)$ satisfy $\E_{t_1}(\Gamma_k(j)) \in \bF_\rho $.
By \Cref{obs:F-F} this implies that
$ \E_{t}(\Gamma_k(i)) \in \bF_\rho $, so that the first requirement is met.
As for the second requirement, since
the grid location $g(i)$ defined in the second requirement is such that $(t_1,g(i))$ is an ancestor of $(t,i)$
(and $\E_{t_1}(\Gamma_k(g(i))) \in \bF_\rho $),
we can apply \Cref{condition:noninf_prediction} (with $t'=t_1$ and $i'=g(i)$) to get that
$ \E_t(\Gamma_k(i)) = \hrhor(\E_{t_1}(\Gamma_k(g(i))), \parity(t-t_1), \ddist(g(i),i)) $, as required.
We have shown that under the premise of the lemma, there is no pair $(t,i) \in A\cup B \cup C$ that is a violating pair.
Thus, our algorithm cannot reject at Step~\ref{step:reject_violations}.
\section{Proof of {\Cref{lemma:soundness}}: \nameref{lemma:soundness}}\label{subsec:lem-E-far}
Let $\E$ be any environment that is $ \epsilon $-far from evolving according to $\rho$,
where $\rho$ is a local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}.
If $ \E_{t_1}[G] $ is infeasible with respect to $\rho$, then the algorithm rejects (in Step~\ref{step:reject_infeasible_grid}).
Hence, we assume from now on that $ \E_{t_1}[G] $ is feasible.
We claim that the number of violating pairs with respect to $ \E_{t_1}[G] $ is at least
$\frac{\eps}{b_2}mn$, where $b_2 > 1$ is a constant. \Cref{lemma:soundness} follows, since the algorithm selects
$s=\Theta(1/\eps)$ pairs (in Step~\ref{step:select_random_pairs}), and rejects if any of them is found to be a violating pair (in Step~\ref{step:reject_violations}). Hence, the probability that the algorithm rejects is at least $1-(1-\eps/b_2)^s$, which is at least $2/3$ for $s \geq 2b_2/\eps$.
Suppose by way of contradiction that there are less than $\frac{\eps}{b_2}mn$ violating pairs.
We show how, based on $\E$ (to be precise, $\E_{t_1}[G]$ and $\E_{t_2}$) we can define an environment $\E'$ for which the following holds. First, $\E'$ evolves according to $\rho$. Second, $\E'$ differs from $\E$ on at most $\eps m n$ pairs $(t,i) \in \cycnums{n}\times \nums{m}$. But this contradicts the premise that $ \E $ is $ \epsilon $-far from evolving according to $\rho$. Details follow in the next subsections.
We first provide all details (in \Cref{subsubsec:E-prime} and \Cref{subsubsec:E-E-prime}) under the assumption that there exist grid locations $g\in G$ for which $\E_{t_1}(\Gamma_k(g))\in \F_\rho$ as well as grid locations $g'\in G$ for which $\E_{t_1}(\Gamma_k(g'))\in \bF_\rho$.
We discuss (in \ifnum1=1 \Cref{subsubsec:homogeneous}\else the full version of this paper~\cite{fullversion}\fi ) the two special cases for which either $\E_{t_1}(\Gamma_k(g))\in \F_\rho$ for every $g\in G$ or $\E_{t_1}(\Gamma_k(g))\in \bF_\rho$ for every $g\in G$, which we refer to as the \emph{homogeneous} cases.
\subsection{The definition of $\E'$}\label{subsubsec:E-prime}
To construct the dynamic environment $ \E' $, we define its initial configuration $ \E'_0 $, and then apply the local rule $ \rho $ for $ m -1$ steps. Hence, $\E'$ evolves according to $\rho$ by construction.
The initial configuration $ \E'_0 $ is defined with respect to a configuration $ \sigma $ on which we perform several
transformations to obtain $ \E'_0 $.
We define the configuration $ \sigma $ by specifying the value of $ \sigma(i) $ for each location $ i \in \cycnums{n} $
as explained next. In what follows, whenever we refer to maximal $\bF_\rho$ grid intervals (similarly, maximal $\F_\rho$ grid intervals), it is with respect to $\E_{t_1}$.
We shall make use of a function
$\hrhol: \bF_\rho \times \bitset \times \cycnums{n} \to \bF_\rho$
(based on $\hrhor$ -- see \Cref{condition:noninf_prediction}).
Recall that by \Cref{condition:noninf_prediction},
for each fixed $\tau \in \bF_\rho$, $\beta \in \bitset$ and $\ell \in \cycnums{n}$, there exists a unique $\tau'$ such that $\hrhor(\tau',\beta,\ell) = \tau$.
\begin{definition}\label{def:h-lar}
For any $\tau \in \bF_\rho$, $\beta \in \bitset$ and $\ell \in \cycnums{n}$,
$\hrhol(\tau,\beta,\ell)$ equals the (unique) pattern $\tau'$ for which
$\hrhor(\tau',\beta,\ell) = \tau$.
\end{definition}
We also make the following observation, based on \Cref{condition:final_prediction}, by which $\frhor$ is the XOR of its first argument and a subset of the other two.
\begin{observation}\label{obs:f-lar}
\sloppy
For any $\beta_1,\beta_2,\beta_3 \in \bitset$,
if $\frhor(\beta_1,\beta_2,\beta_3) = \beta_1'$, then
$\frhor(\beta_1',\beta_2,\beta_3) = \beta_1$.
Furthermore, for any $\beta_2',\beta_3' \in \bitset$,
$\frhor(\frhor(\beta_1,\beta_2,\beta_3),\beta'_2,\beta'_3) = \frhor(\beta_1,\beta_2\oplus \beta_2',\beta_3\oplus\beta_3')$, and in particular,
$\frhor(\frhor(\beta_1,\beta_2,\beta_3),\beta_2,\beta_3) = \beta_1$.
\end{observation}
For each maximal $ \bF_\rho $ grid interval $ [g_1,g_2] $,
let
$J(g_1,g_2) = [g_1-t_1-k,g_2+t_1+k]$
and let $J$ be the union over all such sets.
We also define
$J_1(g_1,g_2) = [g_1-t_1,g_2+t_1]$
(for each $ \bF_\rho $ grid interval $ [g_1,g_2] $), and let $J_1\subset J$ be the union over all such sets.
We first establish two simple claims.
\begin{claim}\label{claim:disj-J}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:noninf_prediction}.
For every two distinct maximal $ \bF_\rho $ grid intervals $ [g_1,g_2] $ and $[g_1',g_2']$, we have that
$J(g_1,g_2)\cap J(g'_1,g'_2) = \emptyset$.
\end{claim}
\ifnum1=1
\begin{proof}
Since $ [g_1,g_2] $ and $[g_1',g_2']$ are maximal $ \bF_\rho $ grid intervals, we know that
$g_1 - \Delta$, $g_2 + \Delta$, $g_1' - \Delta$ and $g_2'+\Delta$ are endpoints of maximal $\F_\rho$ grid intervals
(with respect to $\E_{t_1}[G]$). By \Cref{obs:max-F-grid-interval}, these maximal $\F_\rho$ grid intervals are of size at least $2t_1-\Delta$ each. As $k$ is a constant while $\Delta = \Theta(\eps^2 \min\{n,m\})$ (so that, in particular, $\Delta > 2k$), we get that
$ [g_1 - t_1 - k, g_2 + t_1 + k] $ must be disjoint from $ [g'_1 - t_1 - k, g'_2 + t_1 + k] $.
\end{proof}
\fi
\begin{claim}\label{claim:max-bF-zero}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:noninf_prediction}.
Let $\E''$ be any environment that is a feasible extension of $\E_{t_1}[G]$ with respect to $\rho$, and let $[g_1,g_2]$ be a maximal $ \bF_\rho $ grid interval (with respect to $\E_{t_1}[G]$).
Then $\E''_0(\Gamma_k(i)) \in \bF_\rho$ for every
$i \in J_1(g_1,g_2)$.
Furthermore, $\E''_0(\Gamma_k(i)) = \hrhol(\E_{t_1}(\Gamma_k(g)), \parity(t_1), \ddist(i,g))$
for any $g \in G\cap [g_1,g_2]$ and every ancestor $(0,i)$ of $(t_1,g)$.
\end{claim}
\ifnum1=1
\begin{proof}
The first part of the claim follows
from \Cref{condition:final} and \Cref{condition:infecting_neighbors}. Namely, assume, contrary to the claim, that
$\E''_0(\Gamma_k(i)) \in \F_\rho$ for some $g_1-t_1 \leq i \leq g_2+t_1$. But then,
by \Cref{condition:final} and \Cref{condition:infecting_neighbors}, there would be at least one grid location $g \in G\cap [g_1,g_2]$ such that $\E''_{t_1}(\Gamma_k(g)) \in \F_\rho$. This stands in contradiction to $[g_1,g_2]$ being a maximal $ \bF_\rho $ grid interval
with respect to $\E_{t_1}[G]$, and $\E''$ being a feasible extension of $\E_{t_1}[G]$.
\sloppy
As for the second part of the claim, by \Cref{condition:noninf_prediction} (which can be applied given the first part of the claim), for any $i$ and $g$ as defined in the claim,
$\E''_{t_1}(\Gamma_k(g)) = \hrhor(\E''_0(\Gamma_k(i)) , \parity(t_1), \ddist(i,g))$. But then, by the definition of $\hrhol$ and the premise that $\E''$ is a feasible extension of $\E_{t_1}[G]$,
$\E''_0(\Gamma_k(i)) = \hrhol(\E_{t_1}(\Gamma_k(g)), \parity(t_1), \ddist(i,g))$, as claimed.
\end{proof}
\fi
Observe that \Cref{claim:max-bF-zero} implies that
$\E''_0$ is \emph{uniquely} determined by $\E_{t_1}[G]$ at all locations in $J$ for every $\E''$ that is a feasible extension of $\E_{t_1}[G]$ (with respect to $\rho$).
Based on this observation,
we start by setting the locations of $\sigma$ that belong to $J$ as in such $\E''_0$.
In particular we have
that $\sigma(\Gamma_k(i)) \in \bF_\rho$ for every $i \in J_1$, and furthermore,
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{equation}\label{eq:sigma-J}
\begin{split}
\forall i\in J_1 \text{ and } g\in G & \text{ s.t. } (t_1,g) \text{ descends from } (0,i)\colon \\
&\sigma(\Gamma_k(i)) = \hrhol(\E_{t_1}(\Gamma_k(g)), \parity(t_1), \ddist(i,g)) \;.
\end{split}
\end{equation}
Turning to the locations not yet set in $\sigma$, for each location $ i \in \cycnums{n} \setminus J $,
\ifnum1=1
\[ \sigma(i) = \frhor(\E_{t_2}(i), \parity(t_2), 0) \;.\]
\else
$\sigma(i) = \frhor(\E_{t_2}(i), \parity(t_2), 0)$.
\fi
Note that by \Cref{obs:f-lar},
$\frhor(\sigma(i), \parity(t_2), 0) = \E_{t_2}(i)$.
\smallskip
We next explain how we modify $ \sigma $ so as to obtain $\E'_0$
using \Cref{condition:non-localA} and \Cref{condition:non-localB}.
The modifications are performed (strictly) within the following set of intervals $\calS$.
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{equation}\label{eq:calS}
\mathcal{S} = \set{\big[a=g_1 - \Delta + t_1,\; b=g_2 + \Delta - t_1\big] \,:\,
\text{$ [g_1,g_2] $ is a maximal $ \F_\rho $ grid interval}} \;.
\end{equation}
The intervals in $\mathcal{S}$ are clearly disjoint (as each is a sub-interval of a different maximal $ \F_\rho $ grid interval), and by \Cref{obs:max-F-grid-interval}, each is non-empty.
Note that for each maximal $ \F_\rho $ grid interval $[g_1,g_2]$, we have that $g_1-\Delta$ and $g_2 + \Delta$ are endpoints of maximal $\bF_\rho$ grid intervals. Therefore, $a,b \in J_1$
for each interval $ [a,b] \in \calS $, and by the setting of $\sigma$ and \Cref{claim:max-bF-zero},
$ \sigma(\Gamma_k(a)), \sigma(\Gamma_k(b)) \in \bF_\rho $.
For each $[a,b] \in \calS $ and the corresponding $[g_1,g_2]$, let
$\alpha(a,b) = \E_{t_1}(g_1)$, $\beta(a,b) = \E_{t_1}(g_2)$, $\gamma(a,b) = \parity(t_1)$, $\gamma'(a,b)= \parity(t_1-\Delta)$.
We shall apply \Cref{condition:non-localA} and \Cref{condition:non-localB} to modify $\sigma$ on all
$[a,b] \in \calS $ ``in parallel'' as described next,
and set $\E'_0$ to be the resulting configuration.
For each $[a,b] \in \calS $
we first apply \Cref{condition:non-localB} with $z=a$, $\nu = \alpha(a,b)$, $\gamma = \gamma(a,b)$
and $\gamma' = \gamma'(a,b)$. We let $a'=z'$ (recall that $z' \in [z+1,z+2k+1] $
and $\tsigma(\Gamma_k(z')) \in \F_\rho$). Next we apply \Cref{condition:non-localB} in its second (symmetric) variant with $z=b$, $\nu = \beta(a,b)$, $\gamma = \gamma(a,b)$
and $\gamma' = \gamma'(a,b)$. We let $b' = z'$ (recall that in this variant, $z' \in [z-2k-1,z-1]$, and here too $\tsigma(\Gamma_k(z')) \in \F_\rho$).
Finally we apply \Cref{condition:non-localA} on the modified configuration with $x=a'$ and $y=b'$.
\begin{figure}
\centerline{\mbox{\includegraphics[width=1.05\textwidth]{E0.jpg}
}}
\caption{\small An illustration for the setting of $\E'_0$.
As in \Cref{fig:ABCU}, $[g_1,g_2]$ is a maximal $\F_\rho$ grid interval, $g_3 = g_2+\Delta$, where $[g_3,g_4]$ is a maximal $\bF_\rho$ grid interval, and $g_5=g_4+\Delta$ is an endpoint of a maximal $\F_\rho$ grid-interval.
The maximal $\bF_\rho$ grid interval $[g_3,g_4]$ is used to set the locations between $g_3-t_1 = g_2-t_1+\Delta$ and $g_4+t_1$ (more precisely, between $g_3-t_1-k$ and $g_4+t_1+k$, based on \Cref{claim:max-bF-zero}). The location $b=g_2-t_1+\Delta$ is an endpoint of an interval in $\calS$, and the location $b'$ is determined by the application of
\Cref{condition:non-localB}. The values in the $k$-neighborhood of $b'$ are set so that the evolution of $\rho$ will result in the value $\E_{t_1}(g_2)$ at location $g_2$ at time $t_1$. The pair $(t,i)$ belongs to the set $B$.
}
\label{fig:E0}
\end{figure}
\subsection{The distance between $\E$ and $\E'$}\label{subsubsec:E-E-prime}
In this subsection we show that based on the counter-assumption regarding the number of violating pairs,
the number of pairs $(t,i) \in \cycnums{n}\times\nums{m}$ on which $\E$ and $\E'$ differ is at most $\eps m n$. To this end we show that each $(t,i)$ such that $\E_t(i) \neq \E'_t(i)$ belongs to one of the following sets:
\begin{enumerate}
\item The set of pairs $(t,i)$ for which $ 0 \le t \le t_2 $.
\item The uncertainty set $ U $.
\item The set of $ (t,i) $ pairs where $ (t,i) $ is a violation with respect to $ \E_{t_1}[G] $.
\end{enumerate}
By the settings of $t_1$, $t_2$ and $\Delta$, the number of pairs in the first set is at most
$\frac{(b_1+1)\eps}{b_0} m n$.
By \Cref{claim:U_is_small}, $ |U| \leq \frac{5\eps}{b_1} m n $.
By our counter-assumption, the number of violating pairs is at most $\frac{\eps}{b_2}mn$ .
Setting $b_1 = 15$, $b_0 = 48$ and $b_2=3$, we get a total of at most $\eps m n$ pairs, as claimed.
To establish the claim that each $(t,i)$ for which $\E_t(i) \neq \E'_t(i)$ belongs to one of the above three sets, we
prove the contrapositive.
Suppose the pair $ (t,i) $ is not in the uncertainty set $ U $ and that $ t > t_2 $.
It follows that $ (t,i) \in A \cup B \cup C $.
We show that for each of the three types of pairs
($ (t,i) \in A $, $ (t,i) \in B $, and $ (t,i) \in C $), if the pair $ (t,i) $ is not a violating pair with respect to $ \E_{t_1}[G] $, it must hold that $ \E_t(i)=\E'_t(i) $.
\subparagraph{Pairs {\boldmath{$ (t,i) \in A $}}.}
By the definition of $A$, there exists a maximal $ \F_\rho $ grid interval $ [g_1(i), g_2(i)] $ (with respect to $ \E_{t_1} $) for which
$i \in [g_1(i)+ t_1,g_2(i)-t_1]$.
Since $ (t,i) $ is not a violating pair with respect to $ \E_{t_1}[G] $, it must hold that $ \E_{t_2}(\Gamma_k(i)), \E_t(\Gamma_k(i)) \in \F_\rho $ and that $ \E_t(i)= \frhor(\E_{t_2}(i), \parity(t-t_2), 0) $.
Since
$i\in [g_1(i)+t_1 , g_2(i)-t_1 ]$,
we know that $ i \notin J $.
Hence, by the definition of the configuration $ \sigma $, we have that $ \sigma(i) = \frhor(\E_{t_2}(i), \parity(t_2), 0) $ and that $ \sigma(\Gamma_k(i)) \in \F_\rho $.
Let $[a(i),b(i)] = [g_1(i) - \Delta + t_1, g_2(i) + \Delta - t_1]$, so that $i \in [a(i),b(i)]$. By
the definition of $\E'_0$, based on \Cref{condition:non-localA}
we have that $\E'_0(i) = \sigma(i)$ and $\E'_0(\Gamma_k(i)) \in \F_\rho$.
Since $\E'$ evolves according to $\rho$, by \Cref{condition:final_prediction},
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{align*}
\E'_{t_2}(i) = \frhor(\E'_{0}(i), \parity(t_2), 0)
= \frhor(\frhor(\E_{t_2}(i), \parity(t_2), 0), \parity(t_2), 0)
= \E_{t_2}(i)
\end{align*}
where the last equality follows
from (the second part of) \Cref{obs:f-lar}.
Additionally, by \Cref{condition:final}, $ \E'_{t_2}(\Gamma_k(i)) \in \F_\rho $.
Therefore, by \Cref{condition:final_prediction},
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{align*}
\E'_{t}(i) = \frhor(\E'_{t_2}(i), \parity(t-t_2), 0) = \frhor(\E_{t_2}(i), \parity(t-t_2), 0) = \E_{t}(i)\;.
\end{align*}
\subparagraph{Pairs {\boldmath{$ (t,i) \in B $}.}}
By the definition of $B$,
there exists a maximal $\F_\rho$ grid interval $ [g_1(i), g_2(i)] $ with respect to $\E_{t_1}$ such that either
$i \in [g_1(i) -(t-t_1),g_1(i)+t_1-\Delta-1]$ or $i \in [g_2(i)-t_1+\Delta+1,g_2(i)+(t-t_1)]$.
Assume (without loss of generality) that
the latter holds.
By the definition of $B$ we also know that for every other maximal $\F_\rho$ grid interval $ [g'_1, g'_2] $
it holds that $\dist(i,g_2(i)) < \dist(i,g_1') - \Delta$.
Let
$g(i)$
be the grid location closest to $i$ in $G \cap [g_2(i)-t_1+\Delta,g_2(i)]$.
Since
$i \in [g_2(i)-t_1+\Delta+1,g_2(i)+(t-t_1)]$,
necessarily, $g(i) \in [g_2(i)-t_1,g_2(i)]$.
For the sake of conciseness,
in what follows we shall use $g_1$, $g_2$, and $g$ as a shorthand for $g_1(i)$, $g_2(i)$ and $g(i)$, respectively.
Since $ [g_1, g_2] $ is a maximal $ \F_\rho $ grid interval,
$ [a =g_1 - \Delta + t_1, b=g_2 + \Delta - t_1] \in \mathcal{S} $. Hence, to obtain $ \E'_0 $ from the configuration $ \sigma $, we
invoked \Cref{condition:non-localB} (the symmetric version) with $z=b$, $\nu = \E_{t_1}(g_2)$,
$\gamma=\parity(t_1)$, and $\gamma' = \parity(t_1-\Delta) = \parity(\dist(g_2,b))$.
By \Cref{condition:non-localB}, letting $b' = z'$,
$ \E_{t_1}(g_2) = \frhor(\E'_0(b'), \parity(t_1), \parity(\dist(g_2,b'))) $.
By \Cref{obs:f-lar},
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{equation}\label{eq:E0bprime}
\E'_0(b') = \frhol(\E_{t_1}(g_2), \parity(t_1), \parity(\dist(g_2,b')))\;.
\end{equation}
As $b' \in [b-2k-1,b-1]$, which by the setting of $b$ implies that $b' \in [g_2+\Delta-t_1-2k-1, g_2+\Delta-t_1-1]$,
we have that $(0,b')$ is an ancestor of $(t_1,j)$ for every $j \in [g_2-t_1,g_2]$.
In particular this holds for the grid location $g$ (that is closest to $i$ in $G\cap [g_2-t_1+\Delta,g_2]$).
Since $(t,i)$ descends from $(t_1,g)$, we get that $(t,i)$ also descends from $(0,b')$.
We claim that for every $ b'' \ne b' $ with $ \dist(i,b'') < \dist(i,b') $ it holds that $ \E'_0(\Gamma_k(b'')) \in \bF_\rho $.
To verify this, let $[g_3,g_4]$ be the maximal $\bF_\rho$ grid interval where $g_3 = g_2+\Delta$, and let $g_5 = g_4+\Delta$, so that $g_5$ is the endpoint of a maximal $\F_\rho$ grid interval. By the definition of $\E'_0$ (based on $\sigma$ and \Cref{condition:non-localB}) we have that $\E'_0(\Gamma_k(j)) \in \bF_\rho$ for every $j \in [b'+1,b-1] \cup J_1(g_3,g_4) = [b'+1,g_4+t_1]$.
Since (by the second requirement on pairs in $B$) $\dist(i,g_2) < \dist(i,g_5) - \Delta$
and $b' \in [g_2-t_1 +\Delta-2k-1, g_2-t_1+\Delta-1]$,
we have that $ \E'_0(\Gamma_k(b'')) \in \bF_\rho $ for every $ b'' \ne b' $ with $ \dist(i,b'') < \dist(i,b') $.
Therefore, we can apply \Cref{condition:final_prediction} to obtain that
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{align}
&\E'_t(i) =\frhor(\E'_0(b'), \parity(t), \parity(\dist(i,b'))) \nonumber \\
&= \frhor(\frhor(\E_{t_1}(g_2), \parity(t_1), \parity(\dist(g_2,b'))), \parity(t), \parity(\dist(i,b'))) \label{eq:ig2a}\\
&= \frhor(\E_{t_1}(g_2), \parity(t-t_1), \parity(\dist(i,g_2))) \label{eq:ig2b}
\end{align}
where the last equality follows from \Cref{obs:f-lar}.
Consider first the case that $g = g_2$.
Since the pair $ (t,i) $ is not a violating pair,
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{equation}\label{eq:ig2c}
\E_t(i) \;=\; \frhor(\E_{t_1}(g_2), \parity(t-t_1), \parity(\dist(i,g_2))) \;,
\end{equation}
and hence in this case, $\E_t(i) = \E'_t(i)$, as desired.
We next turn to the case that $g \neq g_2$.
We claim that since $\E_{t_1}[G]$ is feasible,
\ifnum1=0
\vspace{-3.5ex}
\fi
\begin{equation}\label{eq:gbprime}
\E_{t_1}(g) = \frhor(\E'_0(b'), \parity(t_1), \parity(\dist(g,b'))) \;.
\end{equation}
Conditioned on Equation~\eqref{eq:gbprime} holding, the argument is the same as for the case that $g=g_2$ (replacing $g_2$ with $g$ in Equations~\eqref{eq:ig2a}--\eqref{eq:ig2c}).
To verify Equation~\eqref{eq:gbprime},
we introduce the notion of a \emph{source} for a final pair.
Let $\E''$ be an environment that evolves according to $\rho$, and $(t',i')$ a final pair with respect to
$\E''$ and $\rho$. We say that $(0,b'')$ is the \emph{source} of $(t',i')$ (at time $0$) in $\E''$ if $(0,b'')$ is an ancestor of $(t',i')$,
is final, and $\dist(b'',i') < \dist(j,i')$ for every other final $(0,j)$.
Consider any feasible extension $\E''$ of $\E_{t_1}[G]$. By \Cref{claim:max-bF-zero} and the discussion above, the source $(0,b'')$ of $(t_1,g_2)$ (at time $0$ in $\E''$) must satisfy $b'' \in [g_2-t_1,g_2-t_1+\Delta-1]$. Furthermore, $(0,b'')$ must also be the source of $(t_1,g')$ for every grid location $g' \in [g_2-t_1,g_2]$.
Therefore, for each such grid location,
$\E_{t_1}(g') = \E''_{t_1}(g') = \frhor(\E''_0(b''), \parity(t_1), \parity(\dist(g',b'')))$, where
$\E''_0(b'') = \frhol(\E_{t_1}(g_2), \parity(t_1), \parity(\dist(g_2,b'')))$.
If $\frhor$ and $\frhol$ do not depend on their third argument, then, by Equation~\eqref{eq:E0bprime},
$\E'_0(b') = \E''_0(b'')$, and if they do, then
$\E'_0(b') = \E''_0(b'') \oplus \parity(\dist(b',b''))$.
In either case, Equation~\eqref{eq:gbprime} follows.
\subparagraph{Pairs {\boldmath{$ (t,i) \in C $}}.}
By the definition of $C$, there exists a maximal
$\bF_\rho$ grid interval $ [g_1(i), g_2(i)] $
such that $ g_1(i) \leq i \leq g_2(i) $.
Additionally, by the definition of $C$, the pair $ (t,i) $ descends from neither the pair
$ (t_1,g_1(i)+1) $ nor the pair $ (t_1,g_2(i)-1) $, and hence from neither $(t_1,g_1(i)-1)$ nor $(t_1,g_2(i)+1)$.
Let $ g(i)$ be the grid location defined in \Cref{def:C-violating} (of violating pairs in $C$), so that
$g(i) \in G \cap [g_1(i),g_2(i)]$ and
$\dist(i,g(i)) < \Delta$.
By the definition of $\E'_0$ (based on $\sigma$ -- recall Equation~\eqref{eq:sigma-J}), we have that
$ \E'_0(\Gamma_k(i)) = \hrhol(\E_{t_1}(\Gamma_k(g(i))),\parity(t_1), \ddist(i,g(i))) $.
By the definition of $\hrhol$ (\Cref{def:h-lar}), this implies that
$ \E_{t_1}(\Gamma_k(g(i))) = \hrhor(\E'_{0}(\Gamma_k(i)),\parity(t_1), \ddist(i,g(i))) $.
Since $\E'_0(\Gamma_k(j)) \in \bF_\rho$ for every location $j\in J(g_1(i),g_2(i))$,
and the environment $\E'$ evolves according to $\rho$ where $\rho$ satisfies \Cref{condition:noninf_prediction},
we know that
$ \E'_{t_1}(\Gamma_k(g(i))) = \hrhor(\E'_{0}(\Gamma_k(i)),\parity(t_1), \ddist(i,g(i))) $.
Hence, $\E'_{t_1}(\Gamma_k(g(i))) = \E_{t_1}(\Gamma_k(g(i)))$.
Furthermore, using in addition the fact that $ (t,i) $ does not descend from either
$(t_1,g_1(i)-1) $ or $ (t_1,g_2(i)+1) $, we get that all ancestors $(t_1,j)$ of $(t,i)$
satisfy $\E'_{t_1}(\Gamma_k(j)) \in \bF_\rho$, so that
$ \E'_{t}(\Gamma_k(i)) = \hrhor(\E'_{t_1}(\Gamma_k(g(i))),\parity(t-t_1), \ddist(g(i),i)) $.
Since $(t,i)$ is not a violating pair,
$ \E_t(\Gamma_k(i)) = \hrhor(\E_{t_1}(\Gamma_k(g(i))), \parity(t-t_1), \ddist(g(i),i)) $,
and using $\E'_{t_1}(\Gamma_k(g(i))) = \E_{t_1}(\Gamma_k(g(i)))$ we get that $\E'_{t}(i) = \E_t(i)$.
\ifnum1=1
\subsection{The homogeneous cases}\label{subsubsec:homogeneous}
Consider first the case that $\E_{t_1}(\Gamma_k(g))\in \bF_\rho$ for every grid location $g\in G$, which we refer to as the \emph{fully non-final} case.
In this case we have the following variant of \Cref{claim:max-bF-zero} (whose proof is essentially the same as the proof of \Cref{claim:max-bF-zero}).
\begin{claim}\label{claim:max-bF-zero-homogeneous}
Let $\rho$ be any local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}.
Let $\E''$ be any environment that is a feasible extension of $\E_{t_1}[G]$ with respect to $\rho$, where
$\E_{t_1}(\Gamma_k(g))\in \bF_\rho$ for every $g\in G$.
Then $\E''_0(\Gamma_k(i)) \in \bF_\rho$ for every
$i \in \cycnums{n}$.
Furthermore, $\E''_0(\Gamma_k(i)) = \hrhol(\E_{t_1}(\Gamma_k(g)), \parity(t_1), \ddist(i,g))$
for any $g \in G$ and every ancestor $(0,i)$ of $(t_1,g)$.
\end{claim}
Therefore, in the fully non-final case we set $\E'_0$ as indicated by \Cref{claim:max-bF-zero-homogeneous} (and let $\E'$ be the environment that evolves according to $\rho$ from $\E'_0$).
Recall that in this case, $C = \{(t,i): t_2 < t < m,\; i \in \cycnums{n}\}$ and the sets $A$ and $B$ (as well as $U$) are empty. Therefore, the only violating pairs $(t,i)$ are those that belong to $C$, and it suffices to show that $\E'$ and $\E$ only differ on these pairs, on top of the pairs $(t,i) \in \nums{t_2} \times \cycnums{n}$.
This is established as done in the more complex heterogeneous case (for $(t,i)\in C$), and recalled next.
Consider any pair $(t,i) \in C$, and
let $ g(i)$ be the grid location defined in \Cref{def:C-violating} (of violating pairs in $C$), so that
$\dist(i,g(i)) < \Delta$.
By the definition of $\E'_0$ (based on $\sigma$ -- recall Equation~\eqref{eq:sigma-J}), we have that
\[ \E'_0(\Gamma_k(i)) = \hrhol(\E_{t_1}(\Gamma_k(g(i))),\parity(t_1), \ddist(i,g(i))) \;.\]
By the definition of $\hrhol$ (\Cref{def:h-lar}), this implies that
\[ \E_{t_1}(\Gamma_k(g(i))) = \hrhor(\E'_{0}(\Gamma_k(i)),\parity(t_1), \ddist(i,g(i))) \;. \]
Since $\E'_0(\Gamma_k(j)) \in \bF_\rho$ for every location $j \in \cycnums{n}$,
and the environment $\E'$ evolves according to $\rho$ where $\rho$ satisfies Condition~\ref{condition:noninf_prediction},
we know that
\[ \E'_{t_1}(\Gamma_k(g(i))) = \hrhor(\E'_{0}(\Gamma_k(i)),\parity(t_1), \ddist(i,g(i))) \;. \]
Hence, $\E'_{t_1}(\Gamma_k(g(i))) = \E_{t_1}(\Gamma_k(g(i)))$.
Furthermore,
we have that all ancestors $(t_1,j)$ of $(t,i)$
satisfy $\E'_{t_1}(\Gamma_k(j)) \in \bF_\rho$, so that
\[ \E'_{t}(\Gamma_k(i)) = \hrhor(\E'_{t_1}(\Gamma_k(g(i))),\parity(t-t_1), \ddist(g(i),i)) \;. \]
Since $(t,i)$ is not a violating pair,
\[ \E_t(\Gamma_k(i)) = \hrhor(\E_{t_1}(\Gamma_k(g(i))), \parity(t-t_1), \ddist(g(i),i)) \;, \]
and using $\E'_{t_1}(\Gamma_k(g(i))) = \E_{t_1}(\Gamma_k(g(i)))$ we get that
$\E'_{t}(i) = \E_t(i)$, as desired.
\medskip
We now turn to the case that $\E_{t_1}(\Gamma_k(g))\in \F_\rho$ for every $g\in G$, which we refer to as the \emph{fully final} case. In this case we initialize $\sigma(i) = \frhor(\E_{t_2}(i), \parity(t_2), 0)$ for each $i\in \cycnums{n}$, and apply \Cref{condition:non-localA} (on the complete cycle, ranging from $a=0$ all the way around back to $b=0$).
Recall that in this fully final case, $A = \{(t,i): t_2 < t < m,\; i \in \cycnums{n}\}$ and the sets $B$ and $C$ (as well as $U$) are empty. Therefore, the only violating pairs $(t,i)$ are those that belong to $A$, and it suffices to show that $\E'$ and $\E$ only differ on these pairs (on top of the pairs $(t,i) \in \nums{t_2} \times \cycnums{n}$).
This is established as done in the more complex heterogeneous case (for $(t,i)\in A$), and recalled below.
Consider any pair $(t,i)\in A$ that is non-violating.
Since $ (t,i) $ is not a violating pair with respect to $ \E_{t_1}[G] $, it must hold that $ \E_{t_2}(\Gamma_k(i)), \E_t(\Gamma_k(i)) \in \F_\rho $ and that $ \E_t(i)= \frhor(\E_{t_2}(i), \parity(t-t_2), 0) $.
By the definition of the configuration $ \sigma $, we have that $ \sigma(i) = \frhor(\E_{t_2}(i), \parity(t_2), 0) $ and that $ \sigma(\Gamma_k(i)) \in \F_\rho $.
By \Cref{condition:non-localA} and the definition of $\E'_0$ we have that $\E'_0(i) = \sigma(i)$ and $\E'_0(\Gamma_k(i)) \in \F_\rho$.
Since $\E'$ evolves according to $\rho$, by \Cref{condition:final_prediction},
\[\E'_{t_2}(i) = \frhor(\E'_{0}(i), \parity(t_2), 0) =
\frhor(\frhor(\E_{t_2}(i), \parity(t_2), 0), \parity(t_2), 0) = \E_{t_2}(i)
\]
where the last equality follows
from (the second part of) \Cref{obs:f-lar}.
Additionally, by \Cref{condition:final}, $ \E'_{t_2}(i) \in \F_\rho $.
Therefore, by \Cref{condition:final_prediction},
\[
\E'_{t}(i) = \frhor(\E'_{t_2}(i), \parity(t-t_2), 0) = \frhor(\E_{t_2}(i), \parity(t-t_2), 0) = \E_{t}(i)\;,
\]
as required.
\fi
\ifnum1=1
\input{smaller-m}
\input{rules}
\fi
\ifnum1=1
\section{Rules that satisfy the conditions}
\subsection{Threshold rules}\label{subsec:thresh-rules}
\paragraph*{Complementary Rules.}
Some rules are equivalent to each other in the sense that if we interchange the roles of 0 and 1 in one rule, we get the other.
Formally, two rules $ \rho, \rho':\bitset^3 \to \bitset $ are said to be \emph{complementary} if for every triplet $ (\beta_1,\beta_2,\beta_3) \in \bitset^3 $, it holds that $ \rho(\beta_1,\beta_2,\beta_3) = 1-\rho'(1-\beta_1,1-\beta_2,1-\beta_3) $. Clearly, complementary rules are equivalent for testing purposes.
Observe that some rules are the complements of themselves, so complementarity is an equivalence relation that partitions the rules into pairs and singletons.
\paragraph*{Threshold Rules.}
Recall that a rule $ \rho :\bitset^3 \to \bitset $ is a \emph{threshold} rule if there exist a threshold integer
$ 0 \leq b \leq 3 $ and a bit $ \alpha \in \set{0, 1} $ such that $ \rho(\beta_1,\beta_2,\beta_3) = \alpha $ if and only if $ \beta_1+\beta_2+\beta_3 \geq b $.
\medskip\noindent
Since the all-$1$ and all-$0$ rules are complementary, and similarly OR and AND as well as NOR and NAND, it suffices to consider the rules: all-$1$, OR, NOR, Majority and Minority.
The all-$1$ rule can clearly be tested with $O(1/\eps)$ queries (selected uniformly in $[m-1]\times \cycnums{n}$), as it converges in a single time step to the all-$1$ configuration.
Turning to NOR, we show that it converges in a single time step, from which an $O(1/\eps)$-query algorithm easily follows.
For the remaining, non-trivial, rules we show that they satisfy Conditions~\ref{condition:final}--\ref{condition:non-localB}.
\subsection{The NOR and NAND rules}\label{subsec:NOR}
We show that the NOR rule converges after at most a single time step.
Since the NAND rule is equivalent, an analogous proof holds for the NAND rule as well.
We first claim that if at some time step $ t $, every block of consecutive $ 0 $'s in the configuration $ \E_t $ is of size at least 3, then $ \E_{t+2} = \E_{t} $.
Let $ i \in \cycnums{n} $ be a location.
We show that $ \E_{t+2}(i) = \E_{t}(i) $.
If $ \E_t(i)=1 $, then necessarily $ \E_{t+1}(i) = \E_{t+1}(i-1) = \E_{t+1}(i+1) = 0 $, and therefore $ \E_{t+2}(i) = \norr(0,0,0) = 1 = \E_t(i) $.
Next consider the case that $ \E_t(i)=0 $.
If $ \E_t(i-1)=\E_t(i+1)=0 $, then $ \E_{t+1}(i)=1 $, and then $ \E_{t+2}(i)=0 $ as required.
Suppose this is not the case.
That is, the value of either $ \E_t(i-1) $ or $ \E_t(i+1) $ is $ 1 $.
Without loss of generality, assume $ \E_t(i-1)=1 $.
In this case, it must hold that $ \E_t(i+1) = \E_t(i+2)=0 $, or else the location $ i $ would have been part of a $ 0 $-block of size less than 3.
Now, $ \E_{t+1}(i+1) = \norr(0,0,0) = 1 $, and therefore $ \E_{t+2}(i)=0 $.
Now we claim that if $ t>0 $, there are no $ 0 $-blocks of length less than 3 in $ \E_t $.
Suppose by way of contradiction that for some $ t>0 $, the configuration $ \E_t $ contains the pattern $ 101 $ or $ 1001 $.
Let $ i \in \cycnums{n} $ be the left-most location and $ j \in \cycnums{n} $ be the right-most location in the ``bad'' pattern.
Since $ \E_t(i)=\E_t(j)=1 $, it must hold that $ \E_{t-1}(i-1) = \E_{t-1}(i) = \E_{t-1}(i+1) = 0 $ and that $ \E_{t-1}(j-1) = \E_{t-1}(j) = \E_{t-1}(j+1) = 0 $.
In the case of the pattern being $ 101 $, $ i+1=j-1 $.
Hence, in this case, $ \E_t(i+1) = \norr(\E_{t-1}(i), \E_{t-1}(i+1), \E_{t-1}(j)) = \norr(0,0,0) = 1 $.
In the case of the pattern being $ 1001 $, $ i+2=j-1 $.
Hence, in this case, $ \E_t(i+1) = \norr(\E_{t-1}(i), \E_{t-1}(i+1), \E_{t-1}(j-1)) = \norr(0,0,0) = 1 $.
That is, in both cases, $ \E_t(i+1) = 1 $, in contradiction to $ \E_t(i+1) = 0 $, which both bad patterns dictate.
Now, since a $ 0 $-block of length less than 3 can only appear in the initial configuration and cannot arise in any other way, the configuration $\E_1$ contains no $0$-blocks of length less than 3. As shown above, from that point on the environment alternates between a pair of configurations (that is, it has converged). Hence, any environment that evolves according to the NOR rule converges after at most a single time step, and is therefore trivially $ \poly(1/\eps) $-testable.
\subsection{The OR and AND rules}\label{subsec:OR}
We prove that all conditions hold for the rule $\orr(\beta_1,\beta_2,\beta_3)= \beta_1\vee \beta_2 \vee \beta_3$.
Since the AND rule is equivalent to the OR rule, all the conditions must hold for the AND rule as well.
For $\orr$, $k=0$, $\F_{\orr} = \{1\}$ (and $\bF_{\orr} = \{0\}$). This rule ultimately converges to the all-$1$ configuration, unless the starting configuration is the all-$0$ configuration (in which case it remains in this configuration indefinitely).
Since $k=0$, rather than writing $\E_t(\Gamma_0(i))$, we simply write $\E_t(i)$.
We now turn to the conditions (where the numbering of the items below corresponds to the numbering of the conditions).
\begin{enumerate}
\item If $\E_t(i) \in \F_{\orr}$, then $\E_t(i) =1$, so that $\E_{t+1}(i)=1$ as well (for any setting of $\E_t(i-1)$ and $\E_t(i+1)$).
\item If $\E_t(i) \in \bF_{\orr}$, then $\E_t(i) = 0$. If either $\E_t(i-1) = 1$ or $\E_t(i+1) = 1$ (or both), then $\E_{t+1}(i) = 1 \in \F_{\orr}$, and otherwise $\E_{t+1}(i) =0 \in \bF_{\orr}$.
\item The function $\fruler{\orr}$ simply equals its first argument (which is $1$ whenever the function is applied).
\item The function $\hruler{\orr}$ simply equals its first argument (which is necessarily $0$).
\item Let $ \sigma : \cycnums{n} \to \bitset $ be a configuration and let $ [x,y] $ be an interval
of locations such that $ \sigma(x) \in \F_\orr $ and $ \sigma(y) \in \F_\orr $.
That is, $ \sigma(x) = \sigma(y) = 1 $.
We simply set $ \tsigma(i) = 1 $ for every $ i \in [x,y] $ and $ \tsigma(i) = \sigma(i) $ otherwise.
It clearly holds that for every $i \in [x,y]$ we have that $ \tsigma(i) = 1 \in \F_\orr $, and if $i \in [x,y]$ and $\sigma(i) \in \F_\orr$, then $ \tsigma(i) = 1 = \sigma(i) $, and so the requirements of \Cref{condition:non-localA} hold.
\item By the premise of \Cref{condition:non-localB} (regarding $z$), we have that $\sigma(z) = 0$.
We set $z' = z+1$ (in the symmetric version, $z' = z-1$) and $\tsigma(z') = 1$.
\end{enumerate}
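The convergence behaviour described above is easy to confirm numerically. The following minimal Python sketch (parameters ours) checks that every configuration containing a $1$ reaches the all-$1$ configuration within at most $n$ steps, while the all-$0$ configuration is a fixed point.
\begin{verbatim}
import random

def step(E):
    # one synchronous OR update on the cycle Z_n
    n = len(E)
    return [E[i-1] | E[i] | E[(i+1) % n] for i in range(n)]

random.seed(1)
for _ in range(100):
    E = [random.randint(0, 1) for _ in range(40)]
    if any(E):
        for _ in range(len(E)):      # all-1 within at most n steps
            E = step(E)
        assert all(E)
    else:
        assert step(E) == E          # all-0 is a fixed point
\end{verbatim}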
\subsection{The Majority rule}\label{subsec:maj}
Here we repeat what was stated in \Cref{subsec:conditions} regarding the majority rule, denoted $\maj$, and add missing explanations when needed.
For this rule, $k=1$, and $\F_{\maj} = \{111,110,011,000,001,100\}$ (so that $\bF_{\maj} = \{101,010\}$). This rule ultimately converges to configurations that consist of intervals of at least two consecutive $1$s and intervals of at least two consecutive $0$s, unless the starting configuration is $(01)^{n/2}$ (in which case it alternates between this configuration and the complementary one $(10)^{n/2}$).
We now verify that the conditions hold.
\begin{enumerate}
\item If $\E_t(\Gamma_1(i)) = 111$, then $\E_{t+1}(\Gamma_1(i)) = 111 \in \F_{\maj}$,
if $\E_t(\Gamma_1(i)) = 110$, then $\E_{t+1}(\Gamma_1(i)) \in \{110,111\} \subset \F_{\maj}$, and if $\E_t(\Gamma_1(i)) = 011$, then $\E_{t+1}(\Gamma_1(i)) \in \{011,111\} \subset \F_{\maj}$ (analogous statements hold for $\E_t(\Gamma_1(i)) \in \{000,001,100\}$; a brute-force verification of this item and the next appears after this list).
\item Consider first the case that $\E_t(\Gamma_1(i)) = 101$ (so that it belongs to $\bF_{\maj})$.
In this case, $\E_t(\Gamma_1(i-1)) \in \{110,010\}$ and $\E_t(\Gamma_1(i+1))\in \{011,010\}$.
If $\E_t(\Gamma_1(i-1))=110$ (which belongs to $\F_{\maj}$), then $\E_{t+1}(i-1)=1$ and $\E_{t+1}(i)=1$, so that
$\E_{t+1}(\Gamma_1(i)) \in \{110,111\} \subset \F_{\maj}$ (and the case that $\E_t(\Gamma_1(i+1))=011$ is analogous).
On the other hand, if both $\E_t(\Gamma_1(i-1))=010$ and $\E_t(\Gamma_1(i+1))=010$ (so that they both belong to $\bF_{\maj}$), then
$\E_{t+1}(\Gamma_1(i)) = 010$ (and it belongs to $\bF_{\maj}$ as well).
\item For any $\beta \in \bitset$, we have that $\fruler{\maj}(\beta,\cdot,\cdot)=\beta$.
To verify this, observe that under the premise of the condition, one of the following holds.
(1) $\E_{t'}(\Gamma_1(i')) = \beta\beta \overline{\beta} $, and for every ancestor $(t',i'')\neq (t',i')$ of $(t,i)$ for which $ \dist(i,i'') \le \dist(i,i') $ it holds that $ \E_{t'}(i'') = \overline{\parity(i''-i)}$.
(2) $\E_{t'}(\Gamma_1(i')) = \overline{\beta}\beta\beta $ and for every ancestor $(t',i'')$ of $(t,i)$ such that $ \dist(i,i'') \le \dist(i,i') $ it holds that $ \E_{t'}(i'') = \overline{\parity(i''-i)}$.
To illustrate the first case, suppose that $t-t' = 5$; then $\E_{t'}$ between locations $i-5$ and $i+5$ may be of the form $01110101010$ (so that at time $t'+1$ locations $i-4$ to $i+4$ are $111101010$, and at time $t'+2$ locations $i-3$ to $i+3$ are $1111010$), so that $(t'+2,i)$ has become final with respect to $\E'$ and $\rho$, and $\E_{t''}(i)=1$ for every $t'' \geq t'+2$.
\item The function $\hruler{\maj}$ is defined as follows:
$\hruler{\maj}(010,\beta,x)=010$ if $\beta\oplus \parity(x) = 0$ and $\hruler{\maj}(010,\beta,x)=101$ if
$\beta\oplus \parity(x) = 1$. Similarly, $\hruler{\maj}(101,\beta,x)=101$ if $\beta\oplus \parity(x) = 0$
and $\hruler{\maj}(101,\beta,x)=010$ if $\beta\oplus \parity(x) = 1$.
\item
Let $ \sigma : \cycnums{n} \to \bitset $ be a configuration and let $ [x,y] $ be an interval
of locations such that $ \sigma(\Gamma_1(x)) \in \F_\maj $ and $ \sigma(\Gamma_1(y)) \in \F_\maj $.
We define the configuration $ \tsigma $ as follows.
Let $ i \in \cycnums{n} $.
If $ i \notin [x,y] $, we set $ \tsigma(i) = \sigma(i) $.
Otherwise, let $ j_i \in [x,y] $ be the location $j$ that minimizes $ \dist(i,j) $ among the locations for which $ \sigma(\Gamma_1(j)) \in \F_\maj $, and set $ \tsigma(i) = \sigma(j_i) $.
In particular, note that if $ \sigma(\Gamma_1(i)) \in \F_\maj $, then $ \tsigma(i) = \sigma(i) $.
We claim that for every $i \in [x,y]$ we have that $ \tsigma(\Gamma_1(i)) \in \F_\maj $, and that if $\sigma(\Gamma_1(i)) \in \F_\maj$, then $ \tsigma(i)=\sigma(i) $.
Let $ i \in [x,y] $.
If $ \sigma(\Gamma_1(i)) \in \F_\maj $, then $ j_i=i $, and hence either $ \sigma(\Gamma_1(i-1)) \in \F_\maj $ or $ \sigma(\Gamma_1(i+1)) \in \F_\maj $, which implies that either $ \tsigma(i)=\tsigma(i-1) $ or $ \tsigma(i)=\tsigma(i+1) $, and in both cases, it holds that $ \tsigma(\Gamma_1(i)) \in \F_\maj $ and $ \tsigma(i)=\sigma(i) $.
If $ \sigma(\Gamma_1(i)) \in \bF_\maj $, then $ j_i \neq i $; assume without loss of generality that $ j_i \in [i+1,y] $ (the case in which $ j_i \in [x,i-1] $ is similar).
In this case, $ j_i = j_{i+1} $, and hence $ \tsigma(i) = \tsigma(i+1) $, which implies that $ \tsigma(\Gamma_1(i)) \in \F_\maj $.
\item
Assume first that $\sigma(\Gamma_1(z)) = 101$. If $\nu = 1$, then we set $z'=z+1$ and $\tsigma(z+2) = 1$, and if $\nu = 0$, then we set $z' = z+2$, $\tsigma(z+2)=\tsigma(z+3) = 0$.
In the first case, $\tsigma(\Gamma_1(z'))=011 \in \F_{\maj}$ and in the second $\tsigma(\Gamma_1(z'))=100 \in \F_{\maj}$ and $\tsigma(\Gamma_1(z+1)) = 010 \in \bF_{\maj}$. In both cases,
$ \frhor(\tsigma(z'),\gamma,\parity(z'-z)\oplus \gamma') = \tsigma(z') = \nu $ for any $\gamma$ and $\gamma'$.
If $\sigma(\Gamma_1(z)) = 010$, then for $\nu = 0$ we set $z'=z+1$ and $\tsigma(z+2) = 0$, and for $\nu = 1$, we set $z' = z+2$, $\tsigma(z+2)=\tsigma(z+3) = 1$.
The symmetric variant of this condition is established similarly.
\end{enumerate}
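The first two items can also be verified mechanically. The following minimal Python sketch (ours, for illustration only) enumerates all $5$-bit neighbourhoods $\E_t(\Gamma_2(i))$ and checks that final patterns stay final, and that a non-final pattern with at least one final neighbouring pattern becomes final.
\begin{verbatim}
from itertools import product

maj = lambda a, b, c: int(a + b + c >= 2)
F = {p for p in product((0, 1), repeat=3)
     if p not in {(1, 0, 1), (0, 1, 0)}}

def step5(w):
    # one update of the middle three cells of a 5-bit window
    return tuple(maj(*w[i:i+3]) for i in range(3))

for w in product((0, 1), repeat=5):
    mid, left, right = w[1:4], w[0:3], w[2:5]
    if mid in F:                       # Condition 1
        assert step5(w) in F
    elif left in F or right in F:      # Condition 2, final neighbour
        assert step5(w) in F
    else:                              # both neighbours non-final
        assert step5(w) not in F
\end{verbatim}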
\subsection{The Minority rule}\label{subsec:min}
\sloppy
For the minority rule, denoted $\mino$, as for the majority rule, $k=1$, and $\F_{\mino} = \{111,110,011,000,001,100\}$ (so that $\bF_{\mino} = \{101,010\}$). The ultimate convergence is also similar to that of the majority rule, except that in each time step, blocks of $1$s ``flip'' to become blocks of $0$s and vice versa; if the initial configuration is $(01)^{n/2}$, then it does not change.
We now turn to the conditions.
\begin{enumerate}
\item If $\E_t(\Gamma_1(i)) = 111$, then $\E_{t+1}(\Gamma_1(i)) = 000 \in \F_{\mino}$,
if $\E_t(\Gamma_1(i)) = 110$, then $\E_{t+1}(\Gamma_1(i)) \in \{001,000\} \subset \F_{\mino}$, and if $\E_t(\Gamma_1(i)) = 011$, then $\E_{t+1}(\Gamma_1(i)) \in \{100,000\} \subset \F_{\mino}$ (analogous statements hold for $\E_t(\Gamma_1(i)) \in \{000,001,100\}$).
\item Consider first the case that $\E_t(\Gamma_1(i)) = 101$ (so that it belongs to $\bF_{\mino})$.
In this case, $\E_t(\Gamma_1(i-1)) \in \{110,010\}$ and $\E_t(\Gamma_1(i+1))\in \{011,010\}$.
If $\E_t(\Gamma_1(i-1))=110$ (which belongs to $\F_{\mino}$), then $\E_{t+1}(i-1)=0$ and $\E_{t+1}(i)=0$, so that
$\E_{t+1}(\Gamma_1(i)) \in \{001,000\} \subset \F_{\mino}$ (and the case that $\E_t(\Gamma_1(i+1))=011$ is analogous).
On the other hand, if both $\E_t(\Gamma_1(i-1))=010$ and $\E_t(\Gamma_1(i+1))=010$ (so that they both belong to $\bF_{\mino}$), then
$\E_{t+1}(\Gamma_1(i)) = 101$ (which is the same as $\E_t(\Gamma_1(i))$, and it belongs to $\bF_{\mino}$ as well).
\item We have that $\fruler{\mino}(\beta_1,\beta_2,\cdot) = \beta_1 \oplus \beta_2$.
This can be verified similarly to what was shown for $\maj$.
\item The function $\hruler{\mino}$ is defined as follows:
$\hruler{\mino}(010,\beta,x)=101$ if $\beta\oplus \parity(x) = 0$ and $\hruler{\mino}(010,\beta,x)=010$ if
$\beta\oplus \parity(x) = 1$. Similarly, $\hruler{\mino}(101,\beta,x)=010$ if $\beta\oplus \parity(x) = 0$
and $\hruler{\mino}(101,\beta,x)=101$ if $\beta\oplus \parity(x) = 1$.
\item This item is the same as the corresponding one for $\maj$.
\item
This item is very similar to the corresponding one for $\maj$.
The only difference is that there is a dependence on the parameter $\gamma$.
Specifically, if $\gamma=0$, then the setting is exactly as for $\maj$.
Suppose $\gamma=1$.
We assume first that $\sigma(\Gamma_1(z)) = 101$.
If $\nu = 0$, then we set $z'=z+1$ and $\tsigma(z+2) = 1$, and if $\nu = 1$, then we set $z' = z+2$, $\tsigma(z+2)=\tsigma(z+3) = 0$.
In the first case, $\tsigma(\Gamma_1(z'))=011 \in \F_{\mino}$ and in the second $\tsigma(\Gamma_1(z'))=100 \in \F_{\mino}$ and $\tsigma(\Gamma_1(z+1)) = 010 \in \bF_{\mino}$.
In both cases, $ \frhor(\tsigma(z'),\gamma,\parity(z'-z)\oplus \gamma') = \tsigma(z') \oplus \gamma = 1 - \tsigma(z') = \nu $ for any $\gamma'$.
If $\sigma(\Gamma_1(z)) = 010$, then for $\nu = 1$ we set $z'=z+1$ and $\tsigma(z+2) = 0$, and for $\nu = 0$, we set $z' = z+2$, $\tsigma(z+2)=\tsigma(z+3) = 1$.
\end{enumerate}
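The block-flipping behaviour described at the start of this subsection can be confirmed numerically. The following minimal Python sketch (ours) generates cyclic configurations whose blocks all have length at least $2$ and checks that a single minority step complements the configuration.
\begin{verbatim}
import random

mino = lambda a, b, c: int(a + b + c <= 1)   # minority = NOT majority

def step(E):
    n = len(E)
    return [mino(E[i-1], E[i], E[(i+1) % n]) for i in range(n)]

random.seed(2)
for _ in range(200):
    E, bit = [], random.randint(0, 1)
    while len(E) < 60:                    # random blocks of length 2..4
        E += [bit] * random.randint(2, 4)
        bit = 1 - bit
    assert step(E) == [1 - b for b in E]  # one step complements E
\end{verbatim}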
\section{Other rules}\label{subsec:other-rules}
\subsection{Flip if homogeneous}
This rule, denoted $\fih$, is defined as follows: $\fih(\beta_1,\beta_2,\beta_3) = \beta_2$ unless
$\beta_1=\beta_2=\beta_3$, in which case $\fih(\beta_1,\beta_2,\beta_3) = \overline{\beta}_2$.
This rule ultimately converges to configurations with blocks of size one or two (unless the initial configuration is the all-$0$ configuration or the all-$1$ configuration, in which case it alternates between the two).
Here we have $\bF_{\fih} = \{000,111\}$ (so that $\F_{\fih} = \{01?,?10,10?,?01\}$).
\begin{enumerate}
\item If $\E_t(\Gamma_1(i)) \in \F_{\fih}$ then either $\E_t(i) \neq \E_t(i-1)$ or
$\E_t(i) \neq \E_t(i+1)$ (possibly both). In the first case, both $i-1$ and $i$ have non-homogeneous neighbourhoods, so that $\E_{t+1}(i-1) = \E_t(i-1) \neq \E_t(i) = \E_{t+1}(i)$; in the second case, similarly, $\E_{t+1}(i) = \E_t(i) \neq \E_t(i+1) = \E_{t+1}(i+1)$. In both cases $\E_{t+1}(\Gamma_1(i)) \in \F_{\fih}$ as required.
\item If $\E_t(\Gamma_1(i)) \in \bF_{\fih}$, then $\E_t(\Gamma_1(i)) \in \{000,111\}$. If both $\E_t(\Gamma_1(i-1))\in \bF_{\fih}$ and $\E_t(\Gamma_1(i+1))\in \bF_{\fih}$, then $\E_t(\Gamma_2(i)) \in \{00000,11111\}$, so that $\E_{t+1}(\Gamma_1(i))\in \{111,000\} = \bF_{\fih}$. On the other hand, if $\E_t(\Gamma_1(i-1))\in \F_{\fih}$ or $\E_t(\Gamma_1(i+1)) \in \F_{\fih}$, then $\E_t(\Gamma_2(i)) \in \{1000?,?0001,0111?,?1110\}$ so that $\E_{t+1}(\Gamma_1(i))\in \{01?,?10,10?,?01\} = \F_{\fih}$ (a brute-force verification of this item and the previous one appears after this list).
\item The function $\frhor$ is defined as follows: $\frhor(\beta_1,\beta_2,\beta_3) = \beta_1 \oplus \beta_3$.
To verify this consider any $(t,i)$ and $(t',i')$ that satisfy the requirements in \Cref{condition:final_prediction}. Assume that $\dist(i',i) = \ddist(i',i) $ (the case that $\dist(i',i) = \ddist(i,i')$ is verified similarly).
Since $\E_{t'}(\Gamma_1(i')) \in \F_{\fih}$ while
$\E_{t'}(\Gamma_1(i'')) \in \bF_{\fih}$ for every $i'' \ne i'$ satisfying $ \dist(i,i'') \le \dist(i,i') $,
we have that $\E_{t'}(i'-1)\neq \E_{t'}(i') $ while $ \E_{t'}(i') = \E_{t'}(i'+1) = \E_{t'}(i'')$ for every
$i''$ as above. This implies that $\E_t(i) = \E_{t'}(i')$ if $\parity(\dist(i',i))=0$ and
$\E_t(i) \neq \E_{t'}(i')$ otherwise.
\item The function $\hrhor$ is defined as follows: $\hrhor(\tau,\beta,\ell) = \tau$ if $\beta=0$ and it equals $\overline{\tau} = \overline{\tau}_1 \overline{\tau}_2 \overline{\tau}_3$, otherwise.
\item
Let $ \sigma : \cycnums{n} \to \bitset $ be a configuration and let $ [x,y] $ be an interval
of locations such that $ \sigma(\Gamma_1(x)) \in \F_\fih $ and $ \sigma(\Gamma_1(y)) \in \F_\fih $.
We define the configuration $ \tsigma $ as follows.
Let $ i \in \cycnums{n} $.
If $ i \notin [x,y] $, we set $ \tsigma(i) = \sigma(i) $.
Otherwise, let $ j_i \in [x,y] $ be the location $j$ that minimizes $ \dist(i,j) $ among the locations for which $ \sigma(\Gamma_1(j)) \in \F_\fih $, and set $ \tsigma(i) = \sigma(j_i) \oplus \parity(\dist(i,j_i)) $.
In particular, note that if $ \sigma(\Gamma_1(i)) \in \F_\fih $, then $ \tsigma(i) = \sigma(i) $.
We claim that for every $i \in [x,y]$ we have that $ \tsigma(\Gamma_1(i)) \in \F_\fih $, and that if $\sigma(\Gamma_1(i)) \in \F_\fih$, then $ \tsigma(i)=\sigma(i) $.
Let $ i \in [x,y] $.
If $ \sigma(\Gamma_1(i)) \in \F_\fih $, then $ j_i=i $, and hence either $ \sigma(\Gamma_1(i-1)) \in \F_\fih $ or $ \sigma(\Gamma_1(i+1)) \in \F_\fih $, which implies that either $ \tsigma(i) \ne \tsigma(i-1) $ or $ \tsigma(i) \ne \tsigma(i+1) $, and in both cases, it holds that $ \tsigma(\Gamma_1(i)) \in \F_\fih $ and $ \tsigma(i)=\sigma(i) $.
If $ \sigma(\Gamma_1(i)) \in \bF_\fih $, then $ j_i \neq i $; assume without loss of generality that $ j_i \in [i+1,y] $ (the case in which $ j_i \in [x,i-1] $ is similar).
In this case, $ j_i = j_{i+1} $, and hence $ \tsigma(i) \ne \tsigma(i+1) $, which implies that $ \tsigma(\Gamma_1(i)) \in \F_\fih $.
\item Assume that $\sigma(\Gamma_1(z)) = 000$ (the case that $\sigma(\Gamma_1(z)) = 111$ is handled analogously).
If $\nu \oplus \gamma' = 0$, then we set $z' = z+1$ and $\tsigma(z'+1) = 1$. Otherwise, we set $z'=z+2$ and $\tsigma(z')=\tsigma(z'+1) = 1$.
The symmetric variant is established analogously.
\end{enumerate}
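The following minimal Python sketch (ours) mechanically verifies the first two items by enumerating all $5$-bit neighbourhoods.
\begin{verbatim}
from itertools import product

def fih(a, b, c):
    # flip if homogeneous
    return 1 - b if a == b == c else b

F = {p for p in product((0, 1), repeat=3)
     if p not in {(0, 0, 0), (1, 1, 1)}}

def step5(w):
    return tuple(fih(*w[i:i+3]) for i in range(3))

for w in product((0, 1), repeat=5):
    mid, left, right = w[1:4], w[0:3], w[2:5]
    if mid in F:                      # Condition 1
        assert step5(w) in F
    elif left in F or right in F:     # Condition 2, final neighbour
        assert step5(w) in F
    else:                             # w in {00000, 11111}
        assert step5(w) not in F
\end{verbatim}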
\subsection{Flip unless homogeneous}
This rule, denoted $\fuh$, is defined as follows: $\fuh(\beta_1,\beta_2,\beta_3) = \overline{\beta}_2$ unless
$\beta_1=\beta_2=\beta_3$ in which case $\fuh(\beta_1,\beta_2,\beta_3) = \beta_2$.
Similarly to $\fih$, this rule ultimately converges to configurations with blocks of size one or two, where each block flips all values in consecutive time steps (unless the initial configuration is the all-$0$ configuration or the all-$1$ configuration, in which case it remains the initial configuration).
Here too $\bF_{\fuh} = \{000,111\}$.
\begin{enumerate}
\item If $\E_t(\Gamma_1(i)) \in \F_{\fuh}$ then either $\E_t(i) \neq \E_t(i-1)$ or $\E_t(i) \neq \E_t(i+1)$ (possibly both), so that in the first case $\E_{t+1}(i-1) \ne \E_t(i-1)$ and in the second case, $\E_{t+1}(i+1) \ne \E_t(i+1)$.
In both cases $\E_{t+1}(i) \ne \E_t(i)$, so that either $\E_{t+1}(i) \neq \E_{t+1}(i-1)$ or $\E_{t+1}(i) \neq \E_{t+1}(i+1)$.
That is, $\E_{t+1}(\Gamma_1(i)) \in \F_{\fuh}$ as required.
\item If $\E_t(\Gamma_1(i)) \in \bF_{\fuh}$, then $\E_t(\Gamma_1(i)) \in \{000,111\}$. If both $\E_t(\Gamma_1(i-1))\in \bF_{\fuh}$ and $\E_t(\Gamma_1(i+1))\in \bF_{\fuh}$, then $\E_t(\Gamma_2(i)) \in \{00000,11111\}$, so that $\E_{t+1}(\Gamma_1(i))\in \{000,111\} = \bF_{\fuh}$. On the other hand, if $\E_t(\Gamma_1(i-1))\in \F_{\fuh}$ or $\E_t(\Gamma_1(i+1)) \in \F_{\fuh}$, then $\E_t(\Gamma_2(i)) \in \{1000?,?0001,0111?,?1110\}$ so that $\E_{t+1}(\Gamma_1(i))\in \{01?,?10,10?,?01\} = \F_{\fuh}$ (this too can be verified by the enumeration given after this list).
\item The function $\frhor$ is defined as follows: $\frhor(\beta_1,\beta_2,\beta_3) = \beta_1 \oplus \beta_2 \oplus \beta_3$.
This can be verified similarly to what was shown for $ \fih $.
\item The function $\hrhor$ is defined as follows: $\hrhor(\tau,\cdot,\cdot) = \tau$.
\item
This item is the same as the corresponding one for $ \fih $.
\item This item is very similar to the corresponding one for $ \fih $. The only difference is that there is a dependence on the parameter $ \gamma $, where if $ \gamma = 0 $, the setting is exactly the same as for $ \fih $, and if $ \gamma = 1 $, the setting is the opposite (that is, if $ \nu=0 $, we set $ \tsigma $ the way we did for $ \nu=1 $ in $ \fih $, and vice versa).
\end{enumerate}
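As for $\fih$, the first two items admit a mechanical check; the sketch below (ours) differs from the one for $\fih$ only in the local rule.
\begin{verbatim}
from itertools import product

def fuh(a, b, c):
    # flip unless homogeneous
    return b if a == b == c else 1 - b

F = {p for p in product((0, 1), repeat=3)
     if p not in {(0, 0, 0), (1, 1, 1)}}

def step5(w):
    return tuple(fuh(*w[i:i+3]) for i in range(3))

for w in product((0, 1), repeat=5):
    mid, left, right = w[1:4], w[0:3], w[2:5]
    if mid in F:
        assert step5(w) in F          # Condition 1
    elif left in F or right in F:
        assert step5(w) in F          # Condition 2, final neighbour
    else:
        assert step5(w) not in F      # w in {00000, 11111}
\end{verbatim}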
\section{The case of $m\ll n$}\label{subsec:m-n}
Let $\rho$ be a fixed local rule that satisfies Conditions~\ref{condition:final}--\ref{condition:non-localB}.
In order to address the case that $m$ is much smaller than $n$ (so that the multiplicative factor of $n/m$
in the complexity stated in \Cref{thm:main} is too large), we essentially reduce to the case that $m = \Theta(\eps n)$, as explained next.
Let $n' = b_3 m/\eps$ (for a sufficiently large constant $b_3$), and assume that $n > b_3 m/\eps^2$, or else we run the algorithm given in \Cref{subsec:alg}, so that we get query complexity $O(1/\eps^4)$.
Also, assume for simplicity that $n$ is divisible by $n'$. We partition $\cycnums{n}$ into consecutive disjoint intervals of size $n'$ each. The algorithm selects $\Theta(1/\eps)$ intervals, uniformly at random and for each selected interval it runs a test to check whether the interval evolves according to $\rho$. We next give more precise details about how such a test is performed. At this point, we just note that the test for each selected interval performs $O(1/\eps^3)$ queries, so that here too we get a total of $O(1/\eps^4)$ queries.
Consider any such interval of size $n'$, denoted $I= [i_0,j_0=i_0+n'-1]$. Here too, $\Delta = \eps^2 m/b_0$, where we assume that $n'$ is divisible by $\Delta$, $t_1 = \frac{b_1 \Delta}{\epsilon} $, and $t_2 = t_1 + \Delta$. Observe that $n'/\Delta = O(1/\eps^3)$, and that $t_2\cdot n' = O(\eps mn')$.
For $i'_0 = i_0+t_1+k+1$ and $j'_0 = j_0 - t_1-k-1$,
we let $G_I = \{i'_0,i'_0+\Delta,\dots,j'_0\}$
be the grid associated with $I$ (we assume for simplicity that $j_0-i_0$ is divisible by $\Delta$).
The algorithm starts by querying $\E_{t_1}$ on all locations in $\Gamma_k(G_I)$. If $ \E_{t_1}[G_I] $ is infeasible with respect to $\rho$, then the algorithm rejects. Otherwise, the algorithm selects, uniformly at random, $ \Theta(\frac{1}{\epsilon}) $ pairs $ (t,i) $ where $ t_2 < t < m $ and $ i \in \{i'_0+(t-t_1),\dots,j'_0-(t-t_1)\}$ (which equals $\{i_0+k+1+t,\dots,j_0-k-1-t\}$). For each selected pair $ (t,i) $, it queries $ \E_{t}(\Gamma_k(i)) $ and $ \E_{t_2}(\Gamma_k(i)) $. If some selected pair is a violating pair with respect to $\rho$ (as defined in \Cref{subsec:violating_pairs}), then it rejects, and otherwise, it accepts.
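The per-interval test just described is summarized by the following Python-style sketch; the oracle \texttt{query} and the subroutines \texttt{is\_feasible} and \texttt{is\_violating} are hypothetical stand-ins for querying $\E$, the feasibility check, and the violating-pair check defined in the paper, and the constant hidden in $\Theta(1/\eps)$ is left as a parameter.
\begin{verbatim}
import random

def test_interval(query, i0, j0, t1, t2, Delta, k, m,
                  num_pairs, is_feasible, is_violating):
    ip, jp = i0 + t1 + k + 1, j0 - t1 - k - 1      # i'_0 and j'_0
    grid = list(range(ip, jp + 1, Delta))          # the grid G_I
    base = {g: [query(t1, g + o) for o in range(-k, k + 1)]
            for g in grid}                         # E_{t1} on Gamma_k(G_I)
    if not is_feasible(base):
        return False                               # reject
    for _ in range(num_pairs):                     # Theta(1/eps) pairs
        t = random.randint(t2 + 1, m - 1)
        i = random.randint(ip + (t - t1), jp - (t - t1))
        view_t = [query(t, i + o) for o in range(-k, k + 1)]
        view_t2 = [query(t2, i + o) for o in range(-k, k + 1)]
        if is_violating(t, i, view_t, view_t2, base):
            return False                           # reject
    return True                                    # accept
\end{verbatim}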
Note that for each interval $I=[i_0,j_0]$ and for every $ (t,i) $ where $ t_2 < t < m $ and $ i \in \{i'_0+(t-t_1),\dots,j'_0-(t-t_1)\}$, the pair $(t,i)$ cannot descend from a pair $(0,\ell)$ such that $\ell \notin I$.
If $\E$ evolves according to $\rho$, then the algorithm never rejects, as $\E_{t_1}[G_I]$ is feasible for every $I$, and there are no violating pairs with respect to $\E_{t_1}[G_I]$ and $\rho$, for every $I$ (following the proof of \Cref{lemma:completeness}).
It remains to verify that if the algorithm accepts $\E$ with probability at least $2/3$, then $\E$ is $\eps$-close to some $\E'$ that evolves according to $\rho$.
We say that an interval $I= [i_0,j_0]$ is \emph{bad} if either $ \E_{t_1}[G_I] $ is infeasible with respect to $\rho$ or there are more than $ (\eps/b_4) m n' $ violating pairs $(t,i)$ (for $ t_2 < t < m $ and $ i \in \{i'_0+(t-t_1),\dots,j'_0-(t-t_1)\}$, where $i'_0$ and $j'_0$ are as defined above and $b_4$ is a sufficiently large constant). Otherwise it is \emph{good}. Since the algorithm accepts $\E$ with probability at least $2/3$, the fraction of bad intervals is at most $\eps/b_5$ (for an appropriate constant $b_5$). For each good interval $[i_0,j_0]$ we set $\E'_0$ restricted to (a sub-interval of) $[i'_0-(t_1+k),j'_0+(t_1+k)]$ as in the proof of \Cref{lemma:soundness} (in \Cref{subsubsec:E-prime}). All remaining locations in $\E'_0$ (i.e., $[i_0,i'_0-(t_1+k)-1]$ and
$[j'_0+(t_1+k)+1,j_0]$ in each good interval $I=[i_0,j_0]$, and all of $I'$ for each bad interval $I'$), are set arbitrarily.
We let $\E'$ be the environment that evolves from $\E'_0$ (according to $\rho$).
Following the second part of the proof of \Cref{lemma:soundness} (\Cref{subsubsec:E-E-prime}), for each good interval $I = [i_0,j_0]$, the environments $\E$ and $\E'$ differ on at most all locations in $\nums{t_2}\times I$
(whose number is $t_2\cdot n' \leq \frac{(b_1+1) \Delta}{\epsilon} \cdot n' = \frac{b_1+1}{b_0}\cdot \eps mn'$), on at most all locations in $U$ restricted to $I$ (whose number is at most $\frac{5}{b_1}\eps mn' $), and on at most all violating pairs restricted to $I$ (whose number is at most $\frac{1}{b_4}\eps mn'$). In addition, they differ on at most all pairs $(t,i)$ where $t_2+1 \leq t < m$ and $i \in [i_0,i_0+2m] \cup [j_0-2m,j_0]$, whose number is upper bounded by $4m^2 = \frac{4}{b_3}\cdot \eps mn'$.
They agree on all other locations in $\nums{m}\times I$. For each bad interval $I'$ they may disagree on all locations in $\nums{m}\times I'$.
Summing up all disagreements (setting e.g., $b_1=20$, $b_0 = 84$, $b_4=4$, $b_3=32$, and $b_5=8$), we get at most $\eps m n$, as claimed.
\section*{Appendix}
\subsection*{Minimization at constant pressure}
In this section we present further details of how packings are prepared
at a constant pressure. As discussed in the main text, the energy of
the system depends on the inter-particle distance:
\begin{equation}
U_{ij}=\frac{1}{2}k\left(1-\frac{r_{ij}}{R_{i}+R_{j}}\right)^{2}\Theta\left(R_{i}+R_{j}-r_{ij}\right).
\end{equation}
Working at a finite pressure provides tighter control over the distance
from the jamming transition than working at a constant packing fraction. In
the latter case, near the jamming transition some packings may
be underconstrained and some packings may be overconstrained.
To maintain a constant pressure we minimize the enthalpy
\begin{equation}
H=U+P_{0}V.
\end{equation}
Here the target pressure is $P_{0}$. We employ the FIRE minimization
algorithm, which evolves based on the gradients of the energy (or enthalpy)
\cite{FIRE}. The volume of the box is also a coordinate that varies
during the minimization, and its dynamics depend on the gradient with
respect to the volume, $P=-\frac{\partial U}{\partial V}$. When $H$ is at a minimum, $\frac{\partial H}{\partial V}=0$, implying that $P=-\frac{\partial U}{\partial V}=P_0$. The
pressure of the system, $P$, is given by the diagonal of the virial
stress tensor:
\begin{equation}
\tau_{ij}=\frac{1}{V}\sum_{b}r_{b,i}f_{b,j}.
\end{equation}
The sum is over all bonds, $V$ is the volume, $r_{b}$ is the vector
that connects the centers of two interacting particles, and $f_{b}$
is the inter-particle force, which is directed along $r_{b}$.
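For concreteness, the following minimal Python sketch (our illustrative construction, using plain gradient descent rather than FIRE, with arbitrary step sizes and system parameters) minimizes the enthalpy $H=U+P_{0}V$ of harmonic soft disks in a two-dimensional periodic box, treating the volume as an additional coordinate with $\frac{\partial H}{\partial V} = P_0 - P$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, k, P0 = 32, 1.0, 1e-3
R = rng.uniform(0.5, 0.7, N)                 # polydisperse radii
L = 10.0                                     # box side, V = L**2
pos = rng.uniform(0, L, (N, 2))

def forces_energy_pressure(pos, L):
    F, U, virial = np.zeros_like(pos), 0.0, 0.0
    for i in range(N):
        for j in range(i + 1, N):
            rij = pos[i] - pos[j]
            rij -= L * np.round(rij / L)     # minimum image
            r, s = np.linalg.norm(rij), R[i] + R[j]
            if r < s:
                f = (k / s) * (1 - r / s)    # f = -dU/dr > 0 (repulsive)
                F[i] += f * rij / r
                F[j] -= f * rij / r
                U += 0.5 * k * (1 - r / s) ** 2
                virial += f * r
    return F, U, virial / (2 * L ** 2)       # 2D virial pressure

dt, dV = 0.05, 0.5
for _ in range(2000):
    F, U, P = forces_energy_pressure(pos, L)
    scale = np.sqrt(1 + dV * (P - P0) / L**2)  # descent step on V
    pos = (pos + dt * F) * scale % (L * scale)
    L *= scale
\end{verbatim}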
\subsection*{Definition of $\sigma_Z^2(\ell)$.}
The definition of $\sigma_Z^2(\ell)$ given in the main text is
\begin{equation}
\sigma_{Z}^{2}\left(\ell\right)=\frac{1}{\ell^{d}}\overline{\big\langle \big(\sum_{i\in\ell^{d}}\delta Z_{i}\big)^{2}\big\rangle} \label{sss}
\end{equation}
where
\begin{equation}
\delta Z_i = Z_i - Z
\label{ZZZ_i}
\end{equation}
and
\begin{equation}
Z= \frac 1N \sum_{i=1}^N Z_i
\end{equation}
A small variant of Eq.~(\ref{sss}) is to replace $Z$ with $\overline{Z}$ which is the sample-to-sample average of $Z$.
We denote the corresponding $\sigma_Z^2$ as $\overline \sigma_Z^2(\ell)$.
For $\ell\to \infty$ we have that
\begin{equation}
\overline \sigma_Z^2(\ell\to \infty)/\rho \equiv \delta^2Z(N\to \infty)
\end{equation}
However, since for $N\to \infty$ we have $Z\to \overline{Z}>0$, the difference between $\sigma_Z^2$ and $\overline \sigma_Z^2$ is a subleading term that vanishes for $N\to \infty$ and $\ell\to \infty$, so that
\begin{equation}
\sigma_Z^2(\ell\to \infty)/\rho \equiv \delta^2Z(N\to \infty)
\end{equation}
Finally the same argument holds if we replace
$Z$ by its local average $Z_{(i)}$, meaning its average inside the box in which $Z_i$ is computed in Eq.~(\ref{ZZZ_i}).
Since for $\ell\to \infty$, $Z_{(i)}\to Z>0$ up to subleading corrections, one can interchange the definitions without affecting the large $\ell$ behavior.
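As an illustration of the definition in Eq.~(\ref{sss}), the following minimal Python sketch (ours, with synthetic data standing in for measured contact numbers) estimates $\sigma_Z^2(\ell)$ by sampling boxes of side $\ell$ in a periodic square box.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Lbox, N, d = 100.0, 20000, 2
xy = rng.uniform(0, Lbox, (N, d))           # particle positions
Z = 4.0 + rng.normal(0.0, 0.5, N)           # stand-in for measured Z_i
dZ = Z - Z.mean()                           # delta Z_i = Z_i - Z

def sigma2(ell, samples=2000):
    # (1/ell^d) * average of (sum of dZ_i over a box of side ell)^2
    tot = 0.0
    for _ in range(samples):
        corner = rng.uniform(0, Lbox, d)
        inside = np.all((xy - corner) % Lbox < ell, axis=1)
        tot += dZ[inside].sum() ** 2
    return tot / samples / ell ** d

for ell in (5, 10, 20, 40):
    print(ell, sigma2(ell))
\end{verbatim}
For uncorrelated $Z_i$ this estimate is flat in $\ell$ and equals $\rho\,\mathrm{Var}(Z_i)$; correlated fluctuations show up as an $\ell$ dependence.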
\subsection*{Comparison of $\nu_{f}$ in 2d and 3d}
In the main text we showed that in the large system limit $\delta^{2}Z\propto\Delta Z^{\nu_{f}}$,
where our data suggested that $\nu_{f}^{2d}\approx1.0$, while in
higher dimensions $\nu_{f}\approx1.25$. To visualize these two possible
scalings we plot these two power-laws. We note that the exponents
are deduced based on the collapse of $\delta^{2}Z$, as well as the
collapse of $\sigma_{f}^{2}$ in Ref. \cite{hexner2018two}.
\begin{figure}[H]
\includegraphics[scale=0.6]{dzsup}
\caption{A comparison of the two slopes, $\Delta Z^{1.0}$ and $\Delta Z^{1.25}$.
In two dimensions the collapse suggests $\nu_{f}^{2d}\approx1.0$,
while in three and four dimensions $\nu_{f}\approx1.25$. }
\end{figure}
\bibliographystyle{apsrev4-1}
\section{Introduction}
Wetting and dewetting phenomena are ubiquitous in soft matter systems
and have a profound impact on many disciplines, including
biology~\cite{Prakash2012}, microfluidics~\cite{Geoghegan2003}, and
microfabrication~\cite{Chakraborty2010}. One problem of great interest
concerns the suspension of fluid films on or near structured surfaces
where, depending on the interplay of competing short-range molecular
or capillary forces (e.g. surface tension), gravity, and long-range
dispersive interactions (i.e. van der Waals or more generally, Casimir
forces), the film may undergo wetting or dewetting transitions, or
exist in some intermediate state, forming a continuous surface profile
of finite thickness~\cite{Bonn2009, Geoghegan2003}. Thus far,
theoretical analyses of these competing effects have relied on
approximate descriptions of the dispersive van der Waals (vdW)
forces~\cite{arodreview, Israelachvili, Parsegian}, i.e. so-called
Derjaguin~\cite{Derjaguin1934} and Hamaker~\cite{Hamaker1937}
approximations, which have recently been shown to fail when applied in
regimes that fall outside of their narrow range of
validity~\cite{Buscher2004, lambrechtPWS, Emig2001,arodreview}.
In this paper, building on recently developed theoretical techniques
for computing Casimir forces in arbitrary geometries~\cite{Reid2013,
Reid2009}, we demonstrate an approach for studying the equilibrium
shapes (the wetting and dewetting properties) of liquid surfaces that
captures the full non-additivity and non-locality of vdW
interactions~\cite{aroddesigner}. As a proof of concept, we consider
the problem of a fluid surface on or near a periodic grating,
idealized as a deformable perfect electrical conductor (PEC) surface
(playing the role of a fluid surface) interacting through vacuum below
a fixed periodic PEC grating [\figref{schematic}], and show that the
competition between surface tension and non-additive vdW pressure
leads to quantitatively and qualitatively different equilibrium fluid
shapes and wetting properties compared with predictions based on
commonly employed additive approximations. Our simplifying choice of
PEC surfaces allows for a scale-invariant analysis of the role of
geometry on both non-additivity and fluid deformations, ignoring
effects associated with material dispersion that would otherwise
further complicate our analysis and which are likely to result in even
larger deviations~\cite{Noguez2004,arodreview}. Our results provide a
basis for experimental studies of fluid suspensions in situations where
vdW non-additivity can have a significant impact.
Equilibrium fluid problems are typically studied by way of the
augmented Young-Laplace equation~\cite{interfacecolloidYLE},
\begin{equation}
\gamma \nabla \cdot \left(\frac{\nabla \Psi}{\sqrt{1+|\nabla
\Psi|^2}}\right) + \frac{\delta}{\delta\Psi}
\left(\mathcal{E}_{\mathrm{other}}[\Psi]+\mathcal{E}_{\mathrm{vdW}}[\Psi]
\right) = 0
\label{eq:YLE}
\end{equation}
describing the local balance of forces (variational derivatives of
energies) acting on a fluid of surface profile $\Psi(\vec{x})$. The
first two terms describe surface and other external forces (e.g.
gravity), with $\gamma$ denoting the fluid--vacuum surface tension,
while the third term $\frac{\delta}{\delta \Psi}
\mathcal{E}_{\mathrm{vdW}}$ denotes the local disjoining pressure
arising from the changing vdW fluid--substrate interaction energy
$\mathcal{E}_{\mathrm{vdW}}$. Semi-analytical~\cite{Quinn2013,
Ledesma2012nanoscale} and brute-force~\cite{Ledesma2012multiscale,
Sweeney1993} solutions of the YLE have been pursued in order to
examine various classes of wetting problems, including those arising
in atomic force microscopy, wherein a solid object (e.g. spherical
tip) is brought into close proximity to a fluid
surface~\cite{Quinn2013, Ledesma2012nanoscale, Ledesma2012multiscale},
or those involving liquids on chemically~\cite{Bauer1999, Checco2006}
or physically~\cite{Geoghegan2003, Bonn2009, Sweeney1993} textured
surfaces.
A commonality among prior theoretical studies of \eqref{YLE} is the
use of simple, albeit heuristic approximations that treat vdW
interactions as additive forces, often depending on the shape of the
fluid in a power-law fashion~\cite{Derjaguin1934, Derjaguin1956,
Hamaker1937}. Derjaguin or proximity-force approximations (PFA) are
applicable in situations involving nearly planar structures,
i.e. small curvatures compared to their separation, approximating the
interaction between the objects as an additive, pointwise summation of
plate--plate interactions between differential elements comprising
their surfaces~\cite{Derjaguin1934, Derjaguin1956}. Hamaker or
pairwise-summation (PWS) approximations are applicable in situations
involving dilute media~\cite{lambrechtPWS}, approximating the
interaction between two objects as arising from the pairwise summation
of (dipolar) London--vdW~\cite{caspol1} or
Casimir--Polder~\cite{caspol2} forces between volumetric elements of
the same constitutive materials~\cite{Hamaker1937}; such a treatment
necessarily neglects multiple-scattering and other non-additive
effects. When applied to geometries consisting of planar interfaces,
PFA can replicate exact results based on the so-called Lifshitz theory
(upon which it is based)~\cite{Dzyaloshinskii1961}, whereas PWS
captures the distance dependence obtained by exact calculations but
differs in magnitude (except in dilute situations)~\cite{lambrechtPWS}.
Typically, the quantitative discrepancy of PWS is rectified via a
renormalization of the force coefficient to that of the Lifshitz formula,
widely known as the Hamaker constant~\cite{Bergstrom1997}.
The inadequacy of these additive approximations in situations that
fall outside of their range of validity has been a topic of
significant interest, spurred by the recent development of techniques
that take full account of complicated non-additive and boundary
effects arising in non-planar structures, revealing non-monotonic,
logarithmic, and even repulsive interactions stemming from geometry
alone~\cite{arodreview, aroddesigner, arodpistons, bordagcyl}. These
brute-force techniques bear little resemblance to additive
approximations, which offer computational simplicity and intuition at
the expense of neglecting important electromagnetic effects. In
particular, the exact vdW energy in these modern formulations is often
cast as a log-determinant expression involving the full (no
approximations) electromagnetic scattering properties of the
individual objects, obtained semi-analytically or numerically by
exploiting spectral or localized basis expansions of the scattering
unknowns~\cite{arodreview, Lambrecht2006}. The generality of these
methods does, however, come at a price, with even the most
sophisticated of formulations requiring thousands or hundreds of
thousands of scattering calculations to be performed~\cite{arodreview}.
Despite the fact that fluid suspensions motivated much of the original
theoretical work on vdW interactions between macroscopic
bodies~\cite{Lamoreaux2006, Dzyaloshinskii1961, Israelachvili,
Parsegian}, to our knowledge these recent techniques have yet to be
applied to wetting problems in which non-additivity and boundary
effects are bound to play a significant role on fluid deformations.
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\columnwidth]{unitcellschematic.eps}
\caption{Schematic of fluid--grating geometry comprising a fluid
(blue) of surface profile $\Psi(\vec{x})$ in close proximity
(average distance $d$) to a solid grating (red) of height profile
$h(\vec{x})$, involving thin nanorods of height $H$, thickness $2P$,
and period $\Lambda$. (a) Representative mesh employed by a recently
developed FSC boundary-element method~\cite{SCUFF1} for computing
exact vdW energies in complex geometries. (b) and (c) illustrate
commonly employed pairwise--summation (PWS) and proximity--force
approximations (PFA), involving volumetric and surface interactions
throughout the bodies, respectively.}
\label{fig:schematic}
\end{figure}
\emph{Methods.--} In order to solve \eqref{YLE} in general settings,
we require knowledge of $\frac{\delta}{\delta\Psi}
\mathcal{E}_{\mathrm{vdW}} [\Psi]$ for arbitrary $\Psi$. We employ a
mature and freely available method for computing vdW interactions in
arbitrary geometries and materials~\cite{SCUFF1,SCUFF2}, based on the
fluctuating--surface current (FSC) framework~\cite{Reid2009, Reid2013}
of electromagnetic scattering, in which the vdW energy,
\begin{equation}
\mathcal{E}_\mathrm{FSC} = \frac{\hbar}{2\pi}\int_0^\infty
\mathrm{d}\xi \, \ln(\det(\mathbb{M} \mathbb{M}_{\infty}^{-1}))
\label{eq:FSC}
\end{equation}
is expressed in terms of ``scattering'' matrices $\mathbb{M}$,
$\mathbb{M}_\infty$ involving interactions of surface currents
(unknowns) flowing on the boundaries of the bodies~\cite{Reid2009,
Reid2013} and integrated along imaginary frequencies $\xi =
\mathrm{i} \omega$; these are computed numerically via expansions in
terms of localized basis functions, or triangular meshes interpolated
by linear polynomials [\figref{schematic}(a)], in which case it is
known as a boundary element method. Because exact methods most commonly
yield the total vdW energy or force, rather than the local pressure on
$\Psi$, it is convenient to consider the YLE in terms of an equivalent
variational problem for the total energy~\cite{bormashenko,silin}:
\begin{equation}
\min_{\Psi} \; \left(\gamma \int \sqrt{1 + |\nabla \Psi|^2} +
\mathcal{E}_{\mathrm{other}}[\Psi] +
\mathcal{E}_{\mathrm{vdW}}[\Psi]\right),
\label{eq:Emin}
\end{equation}
where just as in \eqref{YLE}, the first term captures the surface
energy, the second captures contributions from gravity or bulk
thermodynamic/fluid interactions, and the third captures the dispersive
vdW interaction energy. For simplicity, we ignore other competing
interactions, including thermodynamic and viscous
forces~\cite{Ledesma2012multiscale, Ledesma2012nanoscale} and neglect
gravity when considering nanoscale fluid deformations, focusing
instead only on the impact of surface and dispersive vdW interactions.
\Eqref{Emin} can be solved numerically via any number of available
nonlinear optimization/minimization
techniques~\cite{bormashenko,silin}, requiring only a convenient
parametrization of $\Psi$ using a finite number of degrees of
freedom. In what follows, we consider numerical solution of
\eqref{Emin} for the particular case of a deformable incompressible
PEC surface $\Psi$ interacting through vacuum with a 1d-periodic PEC
grating of period $\Lambda$ and shape $h(\vec{x}) = d -
H\left(\frac{1}{e^{\alpha (x - P)} + 1} + \frac{1}{e^{-\alpha (x + P)}
+ 1} - 2\right)$, for $|x| < \frac{\Lambda}{2}$, with half-thickness
$P = 0.03\Lambda$ and height $H =1.2\Lambda$. \Figref{schematic} shows
the grating surface and fluid profile obtained by solving \eqref{Emin}
for a representative set of parameters and mesh discretization. Here,
$d = 0.4\Lambda$ is the initial minimum grating-fluid separation, and
$\alpha\Lambda = 150$ is a parameter that smoothens otherwise sharp
corners in the grating, alleviating spatial discretization errors in
the calculation of $\mathcal{E}_{\mathrm{vdW}}$ while having a
negligible impact on the qualitative behavior of the energy compared
to what one might expect from more typical, piecewise-constant
gratings~\cite{Buscher2004}.
To minimize the energy, we employ a combination of algorithms found in
the NLOPT optimization suite~\cite{NLOPT, COBYLA, BOBYQA}. Although
the localized basis functions or mesh of the FSC method provide one
possible parametrization of the surface, for the class of periodic
problems explored here, a simple Fourier expansion of the surface
provides a far more efficient and convenient basis, requiring far
fewer degrees of freedom to describe a wide range of periodic
shapes. Because the grating is translationally invariant along the $z$
direction and mirror-symmetric about $x = 0$, we parametrize $\Psi$ in
terms of a cosine basis, $\Psi(\vec{x}) = \sum_{n} c_n
\cos\left(\frac{2\pi{} nx}{\Lambda}\right)$, with the finite number of
coefficients $\{c_n\}$ functioning as minimization parameters. As we
show below, this choice not only offers a high degree of convergence,
requiring typically less than a dozen coefficients, but also
automatically satisfies the incompressibility or volume-conservation
condition $\int \Psi = 0$, which would otherwise require an
additional, nonlinear constraint. Note that the optimality and
efficiency of the minimization can be significantly improved when
local derivative information (with respect to the minimization
parameters) is available, but given that even a single evaluation of
$\mathcal{E}_{\mathrm{vdW}} [\Psi]$ is expensive---a tour-de-force
calculation involving hundreds of scattering
calculations~\cite{arodreview}---this is currently prohibitive in the
absence of an adjoint formulation (a topic of future
work)~\cite{Giles2000}. Given our interest in equilibrium fluid shapes
close to the initial condition of a flat fluid surface ($\Psi = 0$) and
because of the small number of degrees of freedom $\{c_n\}$ needed to
resolve the shapes, we find that local, derivative-free optimization is
sufficiently effective, yielding fast-converging solutions.
In what follows, we compare the solutions of~\eqref{Emin} based
on~\eqref{FSC} against those obtained through PFA and PWS, which
approximate $\mathcal{E}_{\mathrm{vdW}}$ in this periodic geometry as:
\begin{align}
\mathcal{E}_{\mathrm{PFA}} &= -\frac{\pi^2 \hbar c}{720}
\int_{-\Lambda/2}^{\Lambda/2} \mathrm{d}x
\left(\frac{1}{h(x) - \Psi(x)}\right)^{3} \label{eq:PFA} \\
\mathcal{E}_\mathrm{PWS} &=
A \int_{-\Lambda/2}^{\Lambda/2} \mathrm{d}x'
\int_{-\infty}^{\infty} \mathrm{d}x
\int_{h(x')}^{\infty} \mathrm{d}y'
\int_{-\infty}^{\Psi(x)} \mathrm{d}y \frac{1}{s^6},
\label{eq:PWS}
\end{align}
where $A = -\frac{2\pi\hbar c}{45}$ is a Hamaker-like coefficient
obtained by requiring that \eqref{PWS} yield the correct vdW energy
for two parallel PEC plates, as is typically
done~\cite{Bergstrom1997}. \Eqref{PWS} is obtained from pairwise
integration of the $r^{-7}$ Casimir--Polder interactions following
integration over $z$ and $z'$, with $r = \sqrt{s^2 + (z - z')^2}$ and
$s = \sqrt{(x - x')^2 + (y - y')^2}$~\footnote{Note that in situations
involving a deformed PEC surface and flat PEC plate, one can show
that $\mathcal{E}_\mathrm{PWS} =
\mathcal{E}_\mathrm{PFA}$~\cite{Emig2003}, as this is a direct
consequence of the additivity of the interaction.}. Note that
because we only consider perfect conductors, there is no dispersion to
set a characteristic length scale and hence all results can be quoted
in terms of an arbitrary length scale, which we choose to be
$\Lambda$. Additionally, we express the surface tension $\gamma$ in
units of $\gamma_{\mathrm{vdW}} = \frac{\pi^{2}\hbar c}{720d^{3}}$,
the vdW energy per unit area between two flat PEC plates separated by
distance $d$. In what follows, we consider the impact of
non-additivity on the fluid shape under both repulsive [\figref{fig2}]
or attractive [\figref{fig3}] vdW pressures (obtained by appropriate
choice of its sign), under the simplifying assumption of PEC surfaces
interacting through vacuum. In either case, we consider local
optimizations with small initial trust radii around $\Psi = 0$, and
characterize the equilibrium fluid profile $\Psi(x)$ as $\gamma$ is
varied. Our minimization approach is also validated against numerical
solution of \eqref{YLE} under PFA (green circles).
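To make the procedure concrete, the following minimal Python sketch (ours; it substitutes SciPy's Nelder--Mead for the NLOPT routines and uses the PFA energy of \eqref{PFA} with its sign flipped for the repulsive case, in units $\Lambda = \hbar c = 1$) minimizes the total energy of \eqref{Emin} over a small set of cosine coefficients.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

Lam, d, H, P, alpha = 1.0, 0.4, 1.2, 0.03, 150.0
gamma = 0.1 * np.pi**2 / (720 * d**3)        # 0.1 * gamma_vdW
x = np.linspace(-Lam/2, Lam/2, 512, endpoint=False)
dx = x[1] - x[0]
h = d - H * (1/(np.exp(alpha*(x - P)) + 1)
             + 1/(np.exp(-alpha*(x + P)) + 1) - 2)

def psi(c):
    # volume-conserving cosine parametrization (no n = 0 term)
    n = np.arange(1, len(c) + 1)[:, None]
    return (c[:, None] * np.cos(2*np.pi*n*x/Lam)).sum(axis=0)

def energy(c):
    p = psi(c)
    dp = np.gradient(p, dx)
    surf = gamma * np.sum(np.sqrt(1 + dp**2) - 1) * dx
    vdw = +(np.pi**2/720) * np.sum((h - p)**-3) * dx  # repulsive PFA
    return surf + vdw

res = minimize(energy, np.zeros(8), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-14})
print(res.x)   # equilibrium cosine coefficients c_n
\end{verbatim}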
\begin{figure}[t!]
\centering
\includegraphics[width=0.97\columnwidth]{repulsivehdiffsvsg_smallstep_withinsets.eps}
\caption{Maximum displacement $\Delta\Psi/d$ of a fluid--vacuum
interface that is repelled from a grating (insets) by a repulsive
vdW force, as a function of surface tension
$\gamma/\gamma_{\mathrm{vdW}}$, obtained via solution of
\eqref{Emin} using FSC (blue), PWS (red), and PFA (green)
methods. Circles indicate results obtained through \eqref{YLE}.
Insets show the equilibrium fluid--surface profiles at selected
$\gamma \in{} \{0.006, 0.055, 0.277\}\gamma_{\mathrm{vdW}}$, with
the unperturbed $\Psi = 0$ surface denoted by black dashed lines.}
\label{fig:fig2}
\end{figure}
\emph{Repulsion.--} We first consider the effects of vdW repulsion on
the equilibrium profile of the fluid--vacuum interface, enforced in
our PEC model by flipping the sign of the otherwise attractive vdW
energy. Such a situation can arise when a fluid film either sits on or
is brought in close proximity to a solid grating
[\figref{fig2}(insets)], causing the fluid to either wet or dewet the
grating~\cite{Israelachvili}, respectively. \Figref{fig2} compares the
dependence of the maximum displacement $\Delta\Psi =
\Psi_{\mathrm{max}} - \Psi_{\mathrm{min}}$ of the fluid surface on
$\gamma$, as computed by FSC (blue), PWS (red), and PFA (green). Also
shown are selected surface profiles at small, intermediate, and large
$\gamma/\gamma_\mathrm{vdW}$. Note that the combination of a repulsive
vdW force, surface tension, and incompressibility leads to a
\emph{local} equilibrium shape that is corroborated via linear
stability analysis~\cite{ivanov1988thin}.
Under large $\gamma$, the surface energy dominates and thus all three
methods result in nearly-flat profiles, with $|\Psi| \ll d$. While
both additive approximations reproduce the exact energy of the
plane--plane geometry (with the unnormalized PWS energy
underestimating the exact energy by 20\%~\cite{lambrechtPWS}), we find
that (at least for this particular grating geometry)
$\mathcal{E}_\mathrm{PWS,PFA} / \mathcal{E}_\mathrm{FSC} \approx 0.25$
in the limit $\gamma \to \infty$, revealing that even for a flat fluid
surface, the grating structure contributes significant non-additivity.
Noticeably, at large but finite $\gamma \gg \gamma_\mathrm{vdW}$,
$\Delta\Psi$ is significantly larger under FSC and PFA than under PWS,
with $\Psi_\mathrm{FSC,PWS}$ exhibiting increasingly better qualitative
and quantitative agreement compared to the sharply peaked
$\Psi_\mathrm{PFA}$ as $\gamma$ decreases [\figref{fig2}(insets)]. The
stark deviation of PFA from FSC and PWS in the vdW--dominated regime
$\gamma \ll \gamma_{\mathrm{vdW}}$ is surprising in that PWS involves
volumetric interactions within the objects, whereas PFA and FSC depend
only on surface topologies. Essentially, the pointwise nature of PFA
means $\mathcal{E}_{\mathrm{PFA}}$ depends only on the local
surface--surface separation, decreasing monotonically with decreasing
separations and competing with surface tension and incompressibility
to yield a surface profile that nearly replicates the shape of the
grating in the limit $\gamma\to 0$. Quantitatively, PFA leads to
larger $\Delta\Psi$ as $\gamma \to 0$, asymptoting to a constant
$\lim_{\gamma \to 0} \Delta \Psi_{\mathrm{PFA}} \to H = 3d$ at
significantly lower $\frac{\gamma}{\gamma_{\mathrm{vdW}}} <
10^{-5}$. On the other hand, both $\mathcal{E}_{\mathrm{FSC}}$ and
$\mathcal{E}_{\mathrm{PWS}}$ exhibit much weaker dependences on the
fluid shape at low $\gamma$, with the former depending slightly more
strongly on the surface amplitude and hence leading to asymptotically
larger $\Delta\Psi$ as $\gamma \to 0$; in this geometry, we find that
$\Delta\Psi_{\mathrm{FSC,PWS}} \to \{0.32, 0.28\}d$ for
$\frac{\gamma}{\gamma_{\mathrm{vdW}}} \lesssim 10^{-2}$. Furthermore,
while PFA and PWS are found to agree with FSC at large and small
$\gamma$, respectively, neither approximation accurately predicts the
surface profile in the intermediate regime $\gamma \sim
\gamma_\mathrm{vdW}$, where neither vdW nor surface energies
dominate. Ultimately, neither of these approximations is capable of
predicting the fluid shape over the entire range of $\gamma$.
\emph{Attraction.--} We now consider the effects of vdW attraction,
which can cause a fluid film either sitting on or brought into
close proximity to a solid grating [\figref{fig3}(insets)] to dewet
or wet the grating, respectively~\cite{Israelachvili}. Here, matters
are complicated by the fact that $\mathcal{E}_\mathrm{vdW} \to -\infty$
as the fluid surface approaches the grating, leading to a fluid
instability or wetting transition below some critical
$\gamma^{(\mathrm{c})}$, depending on the competition between the
restoring surface tension and attractive vdW pressure. Such
instabilities have been studied in microfluidic systems through both
additive approximations~\cite{Bonn2009, kerle, Geoghegan2003,
Quinn2013}, but as we show in \figref{fig3}, non-additivity can lead
to dramatic quantitative discrepancies in the predictions obtained from
each method of computing $\mathcal{E}_{\mathrm{vdW}}$. To obtain
$\gamma^{(\mathrm{c})}$ along with the shape of the fluid surface for
$\gamma > \gamma^{(\mathrm{c})}$, we seek the nearest local solution
of~\eqref{Emin} starting from $\Psi = 0$. \Figref{fig3} quantifies the
onset of the wetting transition by showing the variation of the
minimum grating-fluid separation $h_{\mathrm{min}} -
\Psi_{\mathrm{max}}$ with respect to $\gamma$, as computed by FSC
(blue), PWS (red), and PFA (green), along with the corresponding
$\mathcal{E}_{\mathrm{vdW}}$ [\figref{fig3}(inset)] normalized to
their respective values for the plane--grating geometry (attained in
the limit $\gamma\to\infty$). Also shown in the top-right inset are
the optimal surface profiles at $\gamma\approx\gamma^{(\mathrm{c})}$
obtained from the three methods.
In contrast to the case of repulsion, here the fluid surface
approaches rather than moves away from the grating, which ends up
changing the scaling of $\mathcal{E}_{\mathrm{vdW}}$ with $\Psi$ and
leads to very different qualitative results. In particular, we find
that $\mathcal{E}_{\mathrm{FSC}}$ exhibits a much stronger dependence
on $\Psi_{\mathrm{max}}$ compared to PWS and PFA, leading to a much
larger $\gamma^{(\mathrm{c})}$ and a correspondingly broad surface
profile. As before, the strong dependence of
$\mathcal{E}_{\mathrm{PFA}}$ on the fluid surface, a consequence of
the pointwise nature of the approximation, produces a sharply peaked
surface profile, while the very weak dependence of
$\mathcal{E}_{\mathrm{PWS}}$ on the fluid shape ensures both a gross
underestimation of $\gamma^{(\mathrm{c})}$ along with a broader
surface profile. Interestingly, we find that
$\gamma^{(\mathrm{c})}_{\mathrm{FSC,PFA, PWS}} \approx
\{0.65,0.38,0.07\}\gamma_{\mathrm{vdW}}$, emphasizing the failure of
PWS to capture the critical surface tension by nearly an order of
magnitude.
\begin{figure}[t!]
\centering
\includegraphics[width=0.87\columnwidth]{attractive_new.eps}
\caption{Minimum surface--surface separation $\frac{h_{\mathrm{min}} -
\Psi_{\mathrm{max}}}{d}$ of a fluid--vacuum interface that is
attracted to a grating (insets) by an attractive vdW force, as a
function of surface tension $\frac{\gamma}{\gamma_{\mathrm{vdW}}}$,
obtained via solution of \eqref{Emin} using FSC (blue), PWS (red),
and PFA (green) methods. Circles indicate results obtained through
\eqref{YLE}. Wetting transitions occurring at critical values of
surface tension $\gamma^{(\mathrm{c})}$, marked as 'x'. The
top-right inset shows the equilibrium fluid--surface profiles near
$\gamma^{(\mathrm{c})}$ while the bottom-left inset shows the
equilibrium vdW energies normalized by the energies of the
unperturbed ($\Psi=0$) plane--grating geometry (the limit of $\gamma
\to \infty$).}
\label{fig:fig3}
\end{figure}
\emph{Concluding Remarks.--} The predictions and approach described
above offer evidence of the need for exact vdW calculations for the
accurate determination of the wetting and dewetting behavior of fluids
on or near structured surfaces. While we chose to employ a simple
materials-agnostic and scale-invariant model for the vdW energy,
realistic (dispersive) materials can be readily analyzed within the
same formalism, requiring no modifications. We expect that in these
cases, non-additivity will play an even larger role. In fact, recent
works~\cite{Noguez2004, lambrechtPWS} have shown that additive
approximations applied to even simpler structures can contribute
larger discrepancies in dielectric as opposed to PEC bodies. For the
geometry considered above, assuming $\Lambda = 50 \, \mathrm{nm}$ and
a nonretarded Hamaker constant $A =
10^{-19}~\mathrm{J}$~\cite{Gu2001, Bergstrom1997, Israelachvili},
corresponding to a gold--water--oil material combination (with the thin
$d = 20~\mathrm{nm}$ water film taking the role of vacuum in our
model), we estimate that significant fluid displacements $\Delta \Psi
\sim 10~\mathrm{nm}$ and non-additivity can arise at $\gamma \approx
10^{-6}~\mathrm{J/m^{2}}$. By exploiting surfactants, it
should be possible to explore a wide range of $\gamma \in [10^{-7},
10^{-2}] \, \mathrm{J/m^{2}}$~\cite{Quinn2013} and hence fluid
behaviors, from vdW- to surface-energy dominated regimes. Yet another
tantalizing possibility is that of observing these kinds of
non-additive interactions in extensions of the original liquid He$^4$
wetting experiments that motivated development of more general
theories of vdW forces (Lifshitz theory) in the first
place~\cite{Dzyaloshinskii1961}. In the future, it might also be
interesting to consider the impact of other forces, including but not
limited to gravity as well as finite-temperature thermodynamic effects
arising in the presence of gases in contact with fluid surfaces.
\emph{Acknowledgments.--} We are grateful to Howard A. Stone, M. T.
Homer Reid, and Steven G. Johnson for useful discussions. This
material is based upon work supported by the National Science
Foundation under Grant No. DMR-1454836 and by the National Science
Foundation Graduate Research Fellowship Program under Grant No. DGE
1148900.
\section{Abstract}
The evidence is mounting that star formation necessarily involves planet
formation. We clearly have a vested interest in finding other Earths, but a
true understanding of planet formation requires completing the census and
mapping planetary architecture in all its grandeur and diversity. Here, we
show that a 2000-star survey undertaken with SIM Lite will uniquely probe
planets around B-A-F stars, bright and binary stars and white dwarfs. In
addition, we show that the high precision of SIM Lite allows us to gain unique
insights into planet formation via accurate measurements of mutual
inclinations.
\keywords{planetary systems: formation \&\ architecture}
\section{Introduction}
Our understanding of extrasolar planetary systems has grown
exponentially over the past decade and a half.
In addition to familiar designations of rocky planets,
giant planets and icy giants we now have
new names such as ``Hot Jupiters'', ``Eccentric Giants'',
``Hot Neptunes'' and ``Super Earths''.
The first wave of these discoveries was driven by precision
Radial Velocity (RV) studies. The transit method is now contributing
handsomely to the detailed studies (radius, composition) of the
hot Jupiters.
COROT and Kepler (launch in 3 weeks!) will determine
the statistics of rocky planets.
Recently, the ExoPlanet Task Force (ExoPTF)\footnote{\texttt{http://www.nsf.gov/mps/ast/exoptf.jsp}}
reviewed the state of the field. Their strategy consisted of addressing the following fundamental questions (in priority order) over the next decade and a half:
\begin{enumerate}
\item What are the physical characteristics of planets in the habitable zones
around bright, nearby stars?
\item What is the architecture of planetary systems?
\item When, how and in what environments are planets formed?
\end{enumerate}
Other white papers (e.g. Marcy-Shao, Traub-Kastings, Beichman)
address the first and last question.
Here, we address the second question.
\section{Planetary Diversity \&\ Architecture}
For the Solar system, the
observations and
measurements strongly support the bottom-up scenario
(dust to rocks to planetary cores), also known as the
Safronov model, for planet formation. In contrast, the prevailing
hypothesis for the formation of brown dwarfs (and stars) is a
top-down (gravitational condensation) scenario.
The discovery of 51 Pegasi b, a Jupiter with an orbital separation
of only 0.05\,AU (as opposed to 5.2\,AU for Jupiter), was
a dramatic illustration of the limitations of the standard model for planet
formation.
Observations have now established a
strong correlation
between the metallicity of stars and the occurrence of a planet
(identified by the RV approach). The direction of the connection
(metals to planets), as well as whether this correlation is gradual
(lower metallicity implying fewer or lower-mass planets) or a sharp transition,
is still heavily debated.
It is well known that most stars are in binary or multiple
systems. A full understanding of planet formation should naturally
address the issue of planets around and in binary (and
multiple) star systems.
Finally, the current extra-solar planet sample is dominated by those
found using the RV technique, namely stars with spectral type FGK.
OBA stars have no strong absorption features and M dwarfs
have prominent lines but primarily in the near-IR. Binaries with
small angular separation pose additional difficulties for observations.
These gaps in our knowledge show the importance of a comprehensive search
for planets in every conceivable ecological niche: stars with varying
metallicity, binary stars and stars across the entire mass spectrum.
Apart from these astrophysical ``biases'', the search techniques
have their own biases:
RV and transits favor close-in planets whereas astrometry gains ascendancy
with longer period planets. Both RV and astrometry are limited by
the duration of the survey. Micro-lensing, while sensitive, is limited
to statistical studies. Imaging techniques will be valuable but
the meaningfully powerful instruments are a decade away.
Mapping planetary architecture would be immensely aided by
having sensitive astrometric measurements.
Fortunately, recent advances
in technology will soon see astrometry fulfilling its expected
promise.
\begin{figure}[htbp]
\centering
\includegraphics[width=3.3in]{GAIASIM.ps}
\caption{\small Phase Space of SIM Habitable Zone planet search
and the GAIA planet search.}
\label{fig:SIMGAIA}
\end{figure}
\section{The Decade of Astrometry}
Ground-based interferometers have already demonstrated
GAIA-like single-epoch (or better) performance for close
binaries ($20\,\mu$arcsec; \citet{mlf+08}); seeing-limited
imaging and HST/FGS observations have achieved precision of sub-milliarcsecond
for relative astrometry \citep{l06,psw+06,bmb08};
and adaptive optics observations show great promise of
beating $100\,\mu$arcsec \citep{cbk08}.
The main limitation for ground-based interferometric and AO astrometry
is the availability of suitably bright reference stars.
As a result
ground-based interferometry is ideally suited to exploring planets
in binary systems. AO observations with large telescopes are well suited
to probing planets around faint targets especially at low
Galactic latitudes (M dwarfs, brown dwarfs).
But for most stars, the requirement of reference stars
makes space-based astrometry a must. This basic conclusion has
been discussed and reaffirmed by two decadal reviews (1990 and 2000) and
again reaffirmed recently by ExoPTF.
GAIA (expected launch in late 2011) is designed to achieve single-epoch
astrometric precision of $55\,\mu$arcsec (for the range 6--13\,mag). With an
average of 84 visits to an object, GAIA has very good sensitivity to detect
Jupiter-mass objects around a very large number of stars.
SIM Lite is designed for both wide and narrow angle astrometry.
Three planet searches have been envisaged with SIM Lite: an ultra-deep
sub-microarcsecond search for Earth-like planets around nearby
Sun-like stars (PI: Shao, PI: Marcy; hereafter, HZ search),
a search for planets around young stars (PI: Beichman) and a
broad search. This latter search is the topic of this paper.
The phase space covered by GAIA and the SIM habitable zone search is shown
in Figure~\ref{fig:SIMGAIA}.
Over the range 0--13\,mag SIM Lite can easily achieve 5\,$\mu$arcsec
single-epoch precision. With 10\% of SIM Lite time one can survey nearly 2,000
stars at this single-epoch sensitivity (visiting each star 150 times). We
call this the ``Broad Survey with High Precision'' (BSHP for short) and
discuss the potential astrophysical returns of this survey. The relative
phase space between GAIA and BSHP is shown in Figure~\ref{fig:BSHP}.
\section{Planets around B- and A-type Stars}
RV studies, by necessity, have targeted FGK stars. For example,
the bulk of the California and Carnegie Planet Search probe the
mass range 0.8--1.2\,$M_\odot$ \citep{vf05,tfs+07}. The intermediate-
and high-mass main sequence stars ($M_*>1.4\,M_\odot$) suffer from
fewer spectral lines, rapid rotation and surface inhomogeneities
\citep{sbm+98,nmm+03,glu+05,w05}. By cleverly observing
evolved versions of these stars, \citet{jbm+07} find that the planet
occurrence rate increases with increasing stellar mass.
SIM Lite is well positioned to undertake a comprehensive survey
of hundreds of type A and B stars.
For example, SIM Lite will be able to detect a
$19 M_{\oplus}$ planet on a 4 year orbit around a $2 M_{\odot}$ A-type star
located at 30\,pc with 150 50-second visits. Similarly, a $130 M_{\oplus}$
planet can be detected around a $6 M_{\odot}$ B-type star located at 100\,pc.
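These thresholds can be checked at the order-of-magnitude level with the standard astrometric signature $\alpha = (M_p/M_*)\,a/d$ (in arcsec for $a$ in AU and $d$ in pc) together with Kepler's third law. A minimal Python sketch (ours; we assume circular orbits and, for the B star, a 4-year period, which is not specified above):
\begin{verbatim}
# Order-of-magnitude check of the quoted SIM Lite detection thresholds.
# Assumptions (ours): circular orbits, Kepler's third law in solar units,
# astrometric signature alpha = (M_p/M_*) * a/d  [a in AU, d in pc -> arcsec].
M_EARTH = 3.0e-6  # Earth mass in solar masses

def signature_uas(mp_earth, mstar_sun, period_yr, dist_pc):
    a_au = (mstar_sun * period_yr**2) ** (1.0 / 3.0)  # semi-major axis [AU]
    alpha = (mp_earth * M_EARTH / mstar_sun) * a_au / dist_pc
    return 1e6 * alpha                                # [micro-arcsec]

# 19 M_Earth planet, 4 yr orbit, 2 M_Sun A-type star at 30 pc
print(signature_uas(19, 2, 4, 30))    # ~3 micro-arcsec
# 130 M_Earth planet around a 6 M_Sun B-type star at 100 pc (4 yr assumed)
print(signature_uas(130, 6, 4, 100))  # ~3 micro-arcsec
\end{verbatim}
Both cases give a signature of $\sim3\,\mu$arcsec, consistent with detection in 150 visits at the $5\,\mu$arcsec single-epoch precision quoted above.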
\begin{figure}[htbp]
\centering
\includegraphics[width=3.5in]{tier2_discovery_space2.ps}
\caption{\small Phase space of SIM Lite Broad Survey with High Precision relative to that
of GAIA. SIM Lite enjoys a clear advantage over GAIA for BAF stars. Nearby
GK and some F and M stars will be observed intensively by the SIM Lite
HZ search (see previous Figure).}
\label{fig:BSHP}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=3in]{Acc.ps}
\caption{\small The discovery space for a five-year astrometric planet search
around white dwarfs (distance 10 pc).
See text (\S\ref{sec:DAZd}) for explanation of the dotted line.
The solid points indicate
the positions of the solar system planets assuming the Sun loses
half its mass.
Similarly, the open circles indicate the positions of the detected
radial velocity planets\citep{bwm+} if they spiraled out by a
factor of two during the evolution to a white dwarf.
Planets in the shaded region will be swallowed up by the red giant
precursor phase.
\label{fig:Acc}}
\end{figure}
\section{Planets and Host Star Metallicity}
Jovian-like planets are preferentially found around metal rich stars
\citep{sim+04,fv05}. Recent observational findings suggest that this
well-established result does not hold for Neptunian-like
planets. \citet{ssm+08} find a wide spread in metallicities for stars hosting
Neptunian-like planets and find that the Jupiter-to-Neptune ratio is higher
for higher metallicity stars. These results suggest that the mass of the
largest planet in any given system is determined by the metallicity of the host
star. This trend is expected from planet formation theories based on the
core-accretion process provided that the host star metallicity is
representative of the metallicity in the planetesimal disk \citep{bbe+06}. This
suggests that lower-mass planets might even be preferentially found orbiting
metal-poor stars. A SIM Lite survey (discussed in the previous section) will
be able to test whether the planet--metallicity relation holds for A- and B-type
stars.
\section{Binary \&\ Bright Stars}
GAIA is not able to observe stars brighter than 6 mag. For stars with
$6<V<13$, saturation is avoided by dumping the accumulated charge.
As a result GAIA has a flat astrometric performance to 13 mag (after
which photon noise becomes important). For bright stars a surrounding region
(proportional to the brightness) is not observable. This limitation
means that a range of binary stars (with at least one bright companion)
are not accessible to GAIA.
The absolute V magnitudes of dwarfs are as follows: G5 (5.1), G0 (4.4),
F5 (3.4), F0 (2.6), A5 (2.0), A0 (0.7), B5 ($-1.1$) and B0 ($-4.7$).
The following stars are not accessible to GAIA: $\alpha$ Centauri
(G2V), Sirius~A (sp type A0),
Altair (A7), Procyon (F5), Regulus (B8),
Alkaid (B3)
and so on.
Next, the neighborhood restriction discussed above excludes
planet searches around the fainter members of bright star systems. Ground-based interferometers equipped with
dual-beam correlators (phase referencing) have already demonstrated
GAIA-like precision for planet searches (cf. the PHASES project,
\cite{mlf+08}). A SIM Lite + VLTI program targeting suitable binaries
would offer the best of both worlds:
high precision over a 5-yr period and a 25-yr search for distant
companions.
\section{Planets around White Dwarfs}
Eventually, the majority of stars evolve to become white dwarfs.
It is a natural question to ask what happens to a pre-existing
planetary system as the star evolves. For planets sufficiently far
from their parent stars, the mass loss in the later evolutionary
phases results in an adiabatic expansion of the orbit, so that the
planets spiral outwards, but otherwise remain bound. For the closest
planets, however, the out-spiral is not sufficiently fast, and the
planet is, at some point, engulfed by the expanding host. Tidal
interactions between the star and the planet also influence where
this boundary lies. In addition to the general astrophysical interest,
this question has an anthropocentric (if morbid) interest, in that
studies show that the long-term survival of Earth in the face of
the Sun's evolution is uncertain, as it lies near the boundary,
where different treatments of tidal and wind effects can yield
different answers\citep{rtl+}; see also
\citep{dl,vl}.
The lack of strong and/or narrow absorption lines limits RV precision to 10
km\,s$^{-1}$ (except for the very special cases of pulsating white dwarfs
\citep{mwd+}). Furthermore, the red-giant phase of the host star leads to
the in-spiral of inner planets. Astrometry is ideally suited to probing planets
around white dwarfs. The astrometric method is further favored by the
proximity of white dwarfs (122 within the local 20\,pc; \cite{hso+08}).
A five-year astrometric program\footnote{Two hundred visits
with SIM Lite over a five-year period. Integration time
of 15\,s (V=13; solid line) and 30\,s (V=15; dashed line);
see Figure 3.}
probes precisely the original $\sim
1 AU$ region of anthropocentric interest. Assuming a traditional
initial-final mass relation $M_{f} = 0.49 M_{\odot} \exp (0.095
M_{i})$ (e.g.\citet{w}), conservation of angular momentum during
the main sequence to white dwarf transition implies that a final
(circular) orbit with a period of five years around the white dwarf
corresponds to an original semi-major axis
\begin{equation}
a_i = 1.05\,\mathrm{AU}\,
\frac{e^{0.127 M_i}}{M_i} \left( \frac{P_f}{5\,\mathrm{yr}} \right)^{2/3}.
\end{equation}
A SIM Lite white dwarf planet search will probe planets down to
roughly a Neptune mass at original distances
0.5--2 AU (Figure~\ref{fig:Acc}).
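As a quick numerical illustration of this mapping, a minimal Python sketch (ours; masses in solar units, distances in AU) evaluating the expression above gives:
\begin{verbatim}
import math

def a_initial_au(m_i_sun, p_f_yr=5.0):
    # Progenitor semi-major axis that maps, via adiabatic mass loss, to a
    # final circular orbit of period p_f around the white dwarf (see text).
    return (1.05 * math.exp(0.127 * m_i_sun) / m_i_sun
            * (p_f_yr / 5.0) ** (2.0 / 3.0))

for m_i in (0.6, 1.0, 2.0, 3.0):
    print(m_i, round(a_initial_au(m_i), 2))
# -> 1.89, 1.19, 0.68, 0.51 AU: a 5 yr final period indeed probes the
#    original ~0.5-2 AU region of anthropocentric interest
\end{verbatim}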
\subsection{DAZd White Dwarfs}
\label{sec:DAZd}
Approximately 2\% of all white dwarfs with cooling ages $< 0.5$~Gyr show
evidence for an infrared excess \citep{fjz} and some show evidence for metal
pollution \citep{kvl+,jfz}. These are attributed to the tidal disruption of a
planetary minor body, either a comet or asteroid \citep{afs,j} to form a disk
that reprocesses stellar light and slowly accretes onto the star.
Because a white dwarf progenitor swells to radii $\sim 1 AU$ during
prior evolutionary stages, asteroids that approach close enough to
be tidally disrupted must be scattered inwards at late times by
planetary bodies \citep{ds}. Planets large enough to scatter
significantly without accreting must have a mass
$M > 20 M_{\oplus} \left( \frac{a}{1\,\mathrm{AU}} \right)^{-1}$, where $a$ is the
semi-major axis (shown as the dotted line in Figure~\ref{fig:Acc}).
The SIM Lite white dwarf program
will probe a significant
fraction of the parameter space occupied by planets that
generate these dusty disks through asteroid scattering.
The sample of $V<15$ white dwarfs is large enough to test
the hypothesis that most of this particular class of white dwarfs
have surviving planetary systems.
\section{Insight through Precision}
It has long been appreciated that mutual inclination (the inclination
of planets with respect to each other) and eccentricity give fundamental
insight into details of planet formation. Astrometry (and imaging)
is uniquely suited to measuring inclinations.
Among the great variety of planetary systems uncovered by the radial velocity
studies are a number of multiple planet systems (32 as of Feb 14, 2009).
Sometimes interactions result in resonant states. For example,
a 3:1 mean motion resonance is claimed in HD~60532 \citep{dlg+,lc}.
SIM Lite is particularly well suited to probing these subtle but key diagnostic
dynamical clues for planets with $a>0.5\,$AU. True mass determination is
clearly essential for a correct understanding of the dynamics of the system
(stability, identification of mean motion resonances and secular resonances).
Next, the mutual inclinations of eccentric planets shed light on the prior
evolution of the system (e.g. diffusive scattering processes should lead to
approximate energy equipartition in radial and vertical motions, whereas
resonant processes need not do so). In addition, determining the mass ratio
and resonance configuration of multiple planet systems will place constraints
on the strength of eccentricity damping during migration and the rate of
the planetary migration itself \citep{lt04}.
Separately, should the orbit of a planet be inclined significantly with
respect to the binary orbit, Kozai oscillations can significantly affect the
orbital parameters of the system \citep{htt,wm}. Furthermore, a statistically
significant correlation between the sense of rotation for stellar orbits and
planetary orbits may provide information on the degree to which the binarity
affects the formation of a planetary system.
\section{Introduction}\label{SecIntro}
In physics, the concept of a discrete (space)time is gaining in interest \cite{Antippa07}. Theories, such as loop quantum gravity \cite{RV18}, use a discretization of the spacetime to solve some fundamental problems. In numerical analysis, the choice of discretization is critical to get good and efficient numerical approximations. The ease with which one obtains approximations of solutions of very complex systems allows one to guess and explore behaviours of the systems even without being able to solve differential equations. However, from a physical or mathematical point of view, the passage from the continuous limit to the discrete case raises conceptual challenges. The discretization scheme must be chosen carefully to obtain a consistent system, stable solutions and, especially, a faithful representation of reality. If one uses an arbitrary scheme, the numerical approximation could be completely wrong. Over the years, many approaches were taken to study the time discretization of dynamical equations. Geometric approaches using symmetries seem to provide interesting results, see e.g. \cite{BV17,BJV19,HLW06} and references therein.
Independently, the study of Lie groups and symmetries has been growing, especially in mathematical physics, see e.g. \cite{Newell85,Olver,RS02} and references therein. Among others, Lie symmetries have been successfully used to derive dynamical equations (such as the Dirac equation \cite{Dirac48,Marchildon,Schwitchtenberg}), to reduce differential equations, facilitating the search for their solutions (see e.g. \cite{Bertrand17,CW99,LRT20}), and even in the context of numerical analysis, e.g. \cite{BJV19,BV17,BCW06,BRW08,HLW06,KO04}. A particularly interesting theorem was provided by Noether \cite{Noether}, linking conservation laws to Lie symmetries. The conservation laws (or integrals of motion in the context of Hamiltonian systems) represent quantities that are preserved along any solution. These quantities led to the study of integrable systems in the sense of solitons (where a system of nonlinear partial differential equations possesses an infinite number of conservation laws \cite{Newell85}) and in the Liouville sense for Hamiltonian systems. In the case where the Hamiltonian is time-independent, integrability is defined by the existence of $N$ integrals of motion that are functionally independent and in involution with respect to the Poisson bracket, where $N$ is half the number of dimensions of the phase space. Integrable Hamiltonian systems possess a remarkable property, the existence of action-angle coordinates \cite{Arnold,Liouville}. Furthermore, some integrable Hamiltonian systems possess additional integrals of motion; they are then called superintegrable. Maximally superintegrable systems possess the maximal number of integrals of motion, i.e. $2N-1$, and have the characteristic that bounded trajectories are closed and periodic \cite{BS19,MSW15,MPW13}.
The goal of this paper is to investigate an algebraic discretization of time for time-independent Hamiltonian systems. In order to do so, we propose a Lie-group/algebra approach such that time can be treated as a group parameter. We ensure that the Lie-group transformation and its Lie algebra satisfy many intrinsic conditions, such as the canonicality and the invariance of the integrals of motion under the group transformation. In the case where the time-independent system is integrable, we can rectify the evolution generator using the action-angle coordinates. This leads to a method to obtain exact schemes. In addition, we study non-exact schemes. We propose a method to obtain schemes based on the above formalism and we also investigate the errors of consistent schemes and provide methods to reduce the errors.
The paper is structured as follows. In section 2, we construct the formalism, first for the continuous case and then for the discrete case of time-independent Hamiltonian systems. In section 3, we investigate the special case of time-independent integrable Hamiltonian systems. For these considerations, we provide a step-by-step method to construct exact schemes and we illustrate the method on the well-known case of the one-dimensional harmonic oscillator. In section 4, we apply the formalism of section 2 to learn more about non-exact (approximate) schemes. We illustrate these results using three different schemes for the one-dimensional harmonic oscillator: the exact scheme, the Euler method and the discrete gradient method. In section 5, we give some conclusions and provide some future perspectives.
\section{Construction of the time-evolution generator}\label{SecForm}
Let us consider a time-independent Hamiltonian $H=H(\vec{q},\vec{p})$ defined on a $2N$-dimensional phase space $\mathcal{M}$, where the positions and the momenta are described respectively by the canonical variables $q_j$ and $p_j$, $j=1,...,N$. The associated equations of motion are given by
\begin{eqnarray}
\dot{q}_j=\frac{\partial H}{\partial p_j}=\lbrace q_j,H\rbrace,\qquad \dot{p}_j=-\frac{\partial H}{\partial q_j}=\lbrace p_j,H\rbrace,\label{eqsMot}
\end{eqnarray}
where the dot represents the total derivative with respect to time, i.e. $\dot{f}(t)=\frac{df(t)}{dt}$, and the bracket $\lbrace\cdot,\cdot\rbrace$ is the usual Poisson bracket, i.e.
\begin{eqnarray}
\lbrace a,b\rbrace=\sum_{j=1}^N\left(\frac{\partial a}{\partial q_j}\frac{\partial b}{\partial p_j}-\frac{\partial b}{\partial q_j}\frac{\partial a}{\partial p_j}\right).\label{PB}
\end{eqnarray}
The canonical variables $\vec{q}$, $\vec{p}$ satisfy the relations
\begin{eqnarray}
\lbrace q_i,p_j\rbrace=\delta_{ij},\qquad \lbrace q_i,q_j\rbrace=\lbrace p_i,p_j\rbrace=0,\qquad i,j=1,...,N, \label{PBrel}
\end{eqnarray}
where $\delta_{ij}$ is the Kronecker delta. One should note that since the Hamiltonian $H$ is time-independent, then the system possesses at least one integral of motion, namely the Hamiltonian $H$ itself.
In addition, we will assume that the solution of the Hamiltonian system (\ref{eqsMot}) with the initial condition
\begin{eqnarray}
q_j(t_0)=c_j,\qquad p_j(t_0)=c_{j+N},\qquad j=1,...,N,
\end{eqnarray}
exists, is unique and smooth, and is equivalent to a curve in a $(2N+1)$-dimensional space (the phase space augmented with the time dimension).
\subsection{The continuous case}
Let us assume the existence of a Lie-group transformation $g_\Delta$ allowing an advance in time $\Delta$. We require that the Lie-group transformation be linear in time, that is, successive iterations are equivalent to one iteration of the sum of their steps, $g_{\Delta_1}\circ g_{\Delta_2}=g_{\Delta_1+\Delta_2}$. However, instead of using an explicit advance in time, i.e. $g_\Delta\circ f(t)=f(t+\Delta)$, we will consider an implicit version, that is, the transformation will depend solely on the parameter $\Delta$ and the current-time variables $\vec{q}(t)$ and $\vec{p}(t)$, i.e.
\begin{eqnarray}
g_\Delta\circ q_j\equiv Q_j=F_j(\vec{q},\vec{p};\Delta),\qquad g_\Delta\circ p_j\equiv P_j=G_j(\vec{q},\vec{p};\Delta),\label{Gtrans}
\end{eqnarray}
where the variables $Q_j$ and $P_j$ are the advanced-time variables and the transformation applies on the current-time variables $\vec{q}(t)$ and $\vec{p}(t)$. The identity transformation is given by taking the limit of $\Delta$ to $0$,
\begin{eqnarray}
\lim_{\Delta\rightarrow0}\left(F_j(\vec{q},\vec{p};\Delta)\right)=q_j,\qquad \lim_{\Delta\rightarrow0}\left(G_j(\vec{q},\vec{p};\Delta)\right)=p_j,\label{Idlim}
\end{eqnarray}
and the inverse transformation by setting $\Delta$ to $-\Delta$, i.e. $g_\Delta^{-1}=g_{-\Delta}$.
The associated infinitesimal deformation, the Lie algebra, is the vector field
\begin{eqnarray}
\mathfrak{g}=\sum_{j=1}^N\left(\xi_j(\vec{q},\vec{p})\frac{\partial}{\partial q_j}+\eta_j(\vec{q},\vec{p})\frac{\partial}{\partial p_j}\right),\label{Gvec}
\end{eqnarray}
where the functions $\xi_j$ and $\eta_j$ are usually defined by the relations
\begin{eqnarray}
\xi_j(\vec{q},\vec{p})=\lim_{\Delta\rightarrow0}\left(\frac{\partial}{\partial\Delta}F_j(\vec{q},\vec{p};\Delta)\right),\qquad \eta_j(\vec{q},\vec{p})=\lim_{\Delta\rightarrow0}\left(\frac{\partial}{\partial\Delta}G_j(\vec{q},\vec{p};\Delta)\right).\label{Vlim}
\end{eqnarray}
The group transformation (\ref{Gtrans}) can be written using the step $\Delta$ and the evolution generator as
\begin{eqnarray}
g_\Delta = \exp(\Delta \mathfrak{g}).
\end{eqnarray}
The group transformation (\ref{Gtrans}) and its Lie algebra (\ref{Gvec}) must satisfy many conditions. One of them is that they must leave the integrals of motion $I_k=I_k(\vec{q},\vec{p})$ invariant. This consequence can be expressed as
\begin{eqnarray}
\mathfrak{g}\circ I_k=0\qquad \mbox{mod}\quad I_k=0,\label{SymCon}
\end{eqnarray}
which is the condition for the Lie algebra $\mathfrak{g}$ to be a symmetry of the algebraic set of integrals of motion. One can look e.g. in \cite{Olver} for details on symmetries of sets of algebraic equations.
In addition, transformation (\ref{Gtrans}) must be canonical. We require that the same relations as in (\ref{PBrel}) be satisfied regardless of the advance in time, that is
\begin{eqnarray}
\lbrace Q_i,P_j\rbrace=\delta_{ij},\qquad\lbrace Q_i,Q_j\rbrace=\lbrace P_i,P_j\rbrace=0,\qquad i,j=1,...,N.
\end{eqnarray}
For the mixed bracket $\lbrace Q_i,P_j\rbrace$, we have
\begin{eqnarray}
\delta_{ij}&=&\lbrace Q_i,P_j\rbrace=\lbrace F_i(\vec{q},\vec{p};\Delta),G_j(\vec{q},\vec{p};\Delta)\rbrace\nonumber\\
&=&\lbrace\exp(\Delta \mathfrak{g})\circ q_i,\exp(\Delta \mathfrak{g})\circ p_j\rbrace.\label{PBcons}
\end{eqnarray}
However, working in the Lie-group formalism, with the exponentials, is much harder than working in the Lie-algebra formalism, so we want to decompose the exponential and determine the implications of (\ref{PBcons}) for the vector field $\mathfrak{g}$. On the one hand, the composition of two canonical transformations remains canonical. On the other hand, the group transformation can be seen as applying $n$ times the infinitesimal deformation $1+\frac{\Delta}{n}\mathfrak{g}$ and taking the limit of $n$ to infinity. Hence, if the infinitesimal deformation is canonical, so is the group transformation. This implies that the conditions from equation (\ref{PBcons}) are equivalent to
\begin{eqnarray}
\delta_{ij}&=&\left\lbrace\left(1+\frac{\Delta}{n}\mathfrak{g}\right)\circ q_i,\left(1+\frac{\Delta}{n}\mathfrak{g}\right)\circ p_j\right\rbrace\nonumber\\
&=&\left\lbrace q_i+\frac{\Delta}{n}\xi_i,p_j+\frac{\Delta}{n}\eta_j\right\rbrace\nonumber\\
&=&\lbrace q_i,p_j\rbrace+\frac{\Delta}{n}\left(\lbrace q_i,\eta_j\rbrace+\lbrace\xi_i,p_j\rbrace\right)+O\left(\frac{1}{n^2}\right)\nonumber\\
&=&\delta_{ij}+\frac{\Delta}{n}\left(\frac{\partial\eta_j}{\partial p_i}+\frac{\partial\xi_i}{\partial q_j}\right)+O\left(\frac{1}{n^2}\right).\nonumber
\end{eqnarray}
Hence we have the conditions
\begin{eqnarray}
\frac{\partial\eta_j}{\partial p_i}+\frac{\partial\xi_i}{\partial q_j}=0\label{Vcan1}
\end{eqnarray}
and similarly for the relations $\lbrace Q_i,Q_j\rbrace$ and $\lbrace P_i,P_j\rbrace$:
\begin{eqnarray}
\frac{\partial \xi_j}{\partial p_i}-\frac{\partial \xi_i}{\partial p_j}=0,\qquad \frac{\partial \eta_j}{\partial q_i}-\frac{\partial \eta_i}{\partial q_j}=0.\label{Vcan2}
\end{eqnarray}
The general solution to the canonical conditions (\ref{Vcan1}) and (\ref{Vcan2}) is
\begin{eqnarray}
\xi_j(\vec{q},\vec{p})=\frac{\partial h(\vec{q},\vec{p})}{\partial p_j},\qquad \eta_j(\vec{q},\vec{p})=-\frac{\partial h(\vec{q},\vec{p})}{\partial q_j},\label{CanCon}
\end{eqnarray}
where $h(\vec{q},\vec{p})$ is an arbitrary function of the phase-space coordinates.
Combining the symmetry condition (\ref{SymCon}) with the canonical condition (\ref{CanCon}), we see that the function $h$ must Poisson-bracket commute with all the integrals of motion, i.e.
\begin{eqnarray}
\mathfrak{g}\circ I_k=\sum_{j=1}^N\left(\frac{\partial h}{\partial p_j}\frac{\partial I_k}{\partial q_j}-\frac{\partial h}{\partial q_j}\frac{\partial I_k}{\partial p_j}\right)=\lbrace I_k,h\rbrace=0\qquad\mbox{mod}\quad I_k=0.
\end{eqnarray}
In 1 dimension, it is trivial to see that the only arbitrary function that can commute with the Hamiltonian $H$ is a function of the Hamiltonian. Hence, the vector field takes the form
\begin{eqnarray}
\mathfrak{g}&=&h'(H)\left(\frac{\partial H}{\partial p_1}\frac{\partial}{\partial q_1}-\frac{\partial H}{\partial q_1}\frac{\partial}{\partial p_1}\right)=h'(H)\left(\dot{q}_1\frac{\partial}{\partial q_1}+\dot{p}_1\frac{\partial}{\partial p_1}\right)\nonumber\\
&=&h'(H)\frac{d}{dt},\nonumber
\end{eqnarray}
which is equivalent to the symmetry of translation in time multiplied by a constant $h'(H)$. In fact, by setting $h=H$, we can recover the time evolution of the system, the equations of motion, by going back to the group transformation, i.e.
\begin{eqnarray}
\frac{dq_1}{\xi_1(q_1,p_1)}=d\Delta=\frac{dp_1}{\eta_1(q_1,p_1)}\nonumber
\end{eqnarray}
or more explicitly
\begin{eqnarray}
\frac{dq_1}{d\Delta}=\frac{\partial H}{\partial p_1}=\xi_1(q_1,p_1),\qquad \frac{dp_1}{d\Delta}=-\frac{\partial H}{\partial q_1}=\eta_1(q_1,p_1),\qquad \frac{dq_1}{dp_1}=-\frac{\frac{\partial H}{\partial p_1}}{\frac{\partial H}{\partial q_1}},
\end{eqnarray}
where the group parameter $\Delta$ acts as the time. Thus, the infinitesimal generator of evolution in time takes the form
\begin{eqnarray}
\mathfrak{g}=\frac{\partial H}{\partial p_1}\frac{\partial}{\partial q_1}-\frac{\partial H}{\partial q_1}\frac{\partial}{\partial p_1},\qquad\mbox{such that}\quad \mathfrak{g}\circ\phi=\lbrace\phi,H\rbrace.
\end{eqnarray}
In higher dimensions, the argument is less straightforward. For maximally superintegrable systems, only the Hamiltonian can Poisson-bracket commute with all the integrals of motion, hence the derivation is similar, but with more variables to consider. For non-maximally superintegrable systems, some integrals of motion may Poisson-bracket commute with all other integrals; however, only the Hamiltonian will ensure commutation, recover the time evolution in the form of the equations of motion and represent a time-translation symmetry. Therefore, the infinitesimal generator of the group transformation (\ref{Gtrans}) is given uniquely by
\begin{eqnarray}
\mathfrak{g}=\sum_{j=1}^N\left(\frac{\partial H}{\partial p_j}\frac{\partial}{\partial q_j}-\frac{\partial H}{\partial q_j}\frac{\partial}{\partial p_j}\right),\qquad \mathfrak{g}\circ\phi=\lbrace\phi,H\rbrace
\end{eqnarray}
and will be called the (time-)evolution generator. It represents the flow of the Hamiltonian vector field and is already known in the literature, see e.g. \cite{Olver}. Here, we approached it without using total derivatives in time, which will be useful in the next section to avoid defining a discrete time-derivative.
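To make the exponential map concrete, a minimal SymPy sketch (ours) builds the truncated Lie series $\exp(\Delta\mathfrak{g})\circ q=\sum_k\frac{\Delta^k}{k!}\,\mathfrak{g}^k\circ q$; for the one-dimensional harmonic oscillator $H=(p^2+x^2)/2$ used as an example in section \ref{SSecExAA}, the truncated series visibly converges to the rotation $x\cos\Delta+p\sin\Delta$:
\begin{verbatim}
import sympy as sp

x, p, Delta = sp.symbols('x p Delta')
H = (p**2 + x**2) / 2

def g(f):
    # evolution generator: g o f = {f, H}
    return sp.diff(H, p)*sp.diff(f, x) - sp.diff(H, x)*sp.diff(f, p)

def lie_series(f, order=8):
    # truncated group transformation exp(Delta*g) o f
    total, term = sp.S(0), f
    for k in range(order + 1):
        total += Delta**k / sp.factorial(k) * term
        term = g(term)
    return sp.expand(total)

print(lie_series(x))  # = Taylor polynomial of x*cos(Delta) + p*sin(Delta)
\end{verbatim}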
\subsection{The discrete case}
A very similar derivation can be done in the discrete case, but problems arise when one wants to take the limit of $\Delta$ to $0$; hence, a different approach is needed wherever this limit is used. In the continuous version, we used the limit in two places, that is, in equations (\ref{Idlim}) and (\ref{Vlim}). For the equations (\ref{Idlim}), the identity of the Lie-group transformation must exist. By taking the composition of an advance in time and a return in time, i.e.
\begin{eqnarray}
g_\Delta^{-1}\circ g_\Delta=\exp(-\Delta \mathfrak{g})\circ\exp(\Delta \mathfrak{g})=\exp((\Delta-\Delta)\mathfrak{g})=I,
\end{eqnarray}
we can see that the identity exists and is well-defined without using the limit. For equations (\ref{Vlim}), we use the alternative definitions:
\begin{eqnarray}
\xi_j(\vec{q},\vec{p})=\exp(-\Delta \mathfrak{g})\circ\frac{\partial}{\partial\Delta}F_j(\vec{q},\vec{p};\Delta),\label{DisXi}\\
\eta_j(\vec{q},\vec{p})=\exp(-\Delta \mathfrak{g})\circ\frac{\partial}{\partial\Delta}G_j(\vec{q},\vec{p};\Delta),\label{DisEta}
\end{eqnarray}
which are equivalent in the continuous case, but not in the discrete case. It can be seen as applying the group transformation, then differentiating with respect to $\Delta$ and applying the inverse transformation. If one uses the limit $\Delta\rightarrow0$ instead of the above definition, then any numerical/approximation scheme that is consistent would satisfy the constraints. We shall discuss it further when we consider non-exact schemes.
Thus, if one follows the same steps as previously, but with the relations (\ref{DisXi}) and (\ref{DisEta}) for the vector fields, one will get that the evolution generator takes the form
\begin{eqnarray}
\mathfrak{g}=\sum_{j=1}^N\left(\frac{\partial H}{\partial p_j}\frac{\partial}{\partial q_j}-\frac{\partial H}{\partial q_j}\frac{\partial}{\partial p_j}\right),\qquad \mathfrak{g}\circ\phi=\lbrace\phi,H\rbrace,
\end{eqnarray}
which is the same as in the continuous case. Hence, the exact scheme of evolution of the system can be obtained by returning to the group transformation. However, we saw that getting back to the group transformation is equivalent to solving the equations of motion, which is not necessarily an easy task. This approach does not help to solve the equations of motion, but it provides a guide for the evolution of the system.
In the discrete case, interpolations between the points are not physically interesting. The continuous trajectory generated by the Lie group is to be considered a tool, not
a real behaviour. This tool allows us to have an arbitrary mesh, which may be useful to jump over singularities.
Overall, the group transformation $g_\Delta$ can be seen as a transformation of the initial conditions at $t=t_0$ to new ``initial conditions'' at $t=t_0+\Delta$. By applying the group transformation repeatedly, we can map the solution or a subset of it.
\section{Exact schemes for integrable systems}\label{SecExact}
Let us consider that the system (\ref{eqsMot}) is integrable, i.e. there exist at least $N$ integrals of motion $I_k$ that are functionally independent and in involution with respect to the Poisson bracket (\ref{PB}). According to the Liouville--Arnold theorem \cite{Arnold,Liouville}, integrable systems possess a distinguished set of coordinates called the action-angle coordinates. The action-angle coordinates can be obtained via a canonical transformation where the associated equations of motion take the form
\begin{eqnarray}
\frac{dz_k}{dt}=\nu_k(\vec{I}),\qquad \frac{dI_k}{dt}=0,\qquad k=1,...,N.\label{AAeqm}
\end{eqnarray}
Even if the $\nu_k$ are functions of $I_k$, they can be treated as constants since the $I_k$ are constant over time. Moreover, the Liouville--Arnold theorem states that the trajectories in the phase space are diffeomorphic to an $N$-dimensional torus under some additional conditions. However, we will ignore this geometric interpretation and weaken the constraints on the system and its solutions. Hence, by the action-angle coordinates, we solely refer to the system of coordinates leading to the equations of motion (\ref{AAeqm}).
In the action-angle coordinates, the evolution generator is straightened to
\begin{eqnarray}
\mathfrak{g}=\sum_{k=1}^N\nu_k(\vec{I})\frac{\partial}{\partial z_k},
\end{eqnarray}
for which the associated group transformation is
\begin{eqnarray}
z_k\rightarrow z_k+\nu_k\Delta,\qquad I_k\rightarrow I_k.
\end{eqnarray}
The action-angle coordinates are a particularly good choice for discretization. This comes from the fact that $N$ integrals of motion will be preserved in the discretization by definition. Furthermore, since the frequencies $\nu_k$ are treated as constants, there is no arbitrariness in the choice of the discretization scheme, in contrast with the choices made in traditional methods, such as Runge--Kutta methods. Hence, if one knows the algebraic transformations between the original phase-space coordinates and the action-angle coordinates, then one can go to the action-angle coordinates to discretize and come back to the original system to obtain an exact scheme.
\subsection{Method using a generating function of type 2}\label{SSecMethAA}
We assume that $N$ integrals of motion in involution are known explicitly. If the integrals of motion are not known, there exist many ways to find them. There is a tremendous number of articles in the literature trying to classify all integrable systems, see e.g. \cite{BS19, MSW15, MPW13} and references therein. One could also try to find them by brute force or using physical intuition. Symbolic software, such as Maple \cite{Maple}, is able to find Lie point symmetries of a system of differential equations in some cases. By combining this with Noether's theorem \cite{Noether} (making a link between symmetries and conservation laws), one can find integrals of motion.
By performing the following steps, one should be able (at least in theory) to obtain the time-discretized version of the equations of motion associated with the Hamiltonian system. Here, we use a generating function to get the relations between the action and the angle variables, but other methods exist. In addition, one should note that the action-angle coordinates are not unique. One can always use a different function of the same integral and obtain different coordinates, which will lead to equivalent results.
\bigskip
\noindent\textbf{Steps:}
\begin{enumerate}
\item Assign the commuting integrals of motion $I_k$ as the action coordinates, i.e.
\begin{eqnarray}
I_k=I_k(\vec{q},\vec{p}).\label{eqsStep1}
\end{eqnarray}
\item From equations (\ref{eqsStep1}), solve for the momenta $p_k$ in terms of the generalized coordinates $q_k$ and the action coordinates $I_k$, i.e.
\begin{eqnarray}
p_k=s_k(\vec{q},\vec{I}).
\end{eqnarray}
\item Find a generating function $K$ of second type satisfying the system of partial differential equations
\begin{eqnarray}
\frac{\partial K(\vec{q},\vec{I})}{\partial q_k}=s_k(\vec{q},\vec{I}).
\end{eqnarray}
\item By substituting the solution of the generating function $K$ into the equations
\begin{eqnarray}
z_k=\theta_k(\vec{q},\vec{I})=\frac{\partial K(\vec{q},\vec{I})}{\partial I_k},
\end{eqnarray}
get the relations for the angle coordinates $z_k$.
\item Solve the inverse transformation of $\theta_k$ to get the $q_k$ in terms of the action-angle coordinates, i.e.
\begin{eqnarray}
q_k=r_k(\vec{z},\vec{I}).
\end{eqnarray}
\item Calculate the frequencies $\nu_k$ using the relations of the angle coordinates $z_k$ and the original equations of motion, i.e.
\begin{eqnarray}
\nu_k=\left.\sum_{j=1}^N\frac{\partial \theta_k(\vec{q},\vec{I})}{\partial q_j}\frac{\partial H(\vec{q},\vec{p})}{\partial p_j}\right\vert_{t=t_0}
\end{eqnarray}
\item Calculate the constant values of the integrals of motion, i.e.
\begin{eqnarray}
\gamma_k=\left.I_k(\vec{q},\vec{p})^{\phantom{^2}}\right\vert_{t=t_0}
\end{eqnarray}
\end{enumerate}
\textbf{Results:}
The advanced-time original coordinates $Q_k,P_k$ can be expressed using the current-time original coordinates $\vec{q},\vec{p}$ as
\begin{eqnarray}
Q_k=r_k\left(\theta_1(\vec{q},\vec{\gamma})+\nu_1\Delta,...,\theta_N(\vec{q},\vec{\gamma})+\nu_N\Delta,\gamma_1,...,\gamma_N\right),\label{ExactX}\\
P_k=s_k\left(Q_1,...,Q_N,\gamma_1,...,\gamma_N\right),\label{ExactP}
\end{eqnarray}
where $\Delta$ is the time step and the advanced positions $Q_k$ of (\ref{ExactX}) are substituted into the functions $s_k$ of Step 2. The results remain true regardless of the method used to find the action-angle coordinates.
\subsection{Example --- The 1-dimensional harmonic oscillator}\label{SSecExAA}
Let us consider the 1-dimensional harmonic oscillator given by the Hamiltonian
\begin{eqnarray}
H=\frac{\rho^2}{2m}+\frac{k}{2}\zeta^2,
\end{eqnarray}
where $\rho(\tau)$ is the momentum and $\zeta(\tau)$ is the position in time $\tau$. The equations of motion are given by
\begin{eqnarray}
\dot{\zeta}=\frac{\rho}{m},\qquad \dot{\rho}=-k\zeta.
\end{eqnarray}
In order to simplify the notation, we will absorb the mass $m$ and the spring constant $k$ using the (extended) canonical transformation \cite{Antippa07}
\begin{eqnarray}
\zeta(\tau)=\frac{1}{\sqrt{k}}\;x(t),\quad \rho(\tau)=\sqrt{m}\;p(t),\quad \tau=\sqrt{\frac{m}{k}}\;t,\nonumber
\end{eqnarray}
such that the new Hamiltonian becomes
\begin{eqnarray}
H=\frac{p^2+x^2}{2}\label{Hex}
\end{eqnarray}
together with the new equations of motion
\begin{eqnarray}
\frac{dx}{dt}=\frac{\partial H}{\partial p}=p,\qquad \frac{dp}{dt}=-\frac{\partial H}{\partial x}=-x.
\end{eqnarray}
The analytic solution of this system is well-known and given by a rotation transformation in the phase space
\begin{eqnarray}
x(t)&=&x_0\cos\left(t\right)+p_0\sin\left(t\right),\label{solx}\\
p(t)&=&p_0\cos\left(t\right)-x_0\sin\left(t\right),\label{solp}
\end{eqnarray}
where $x_0$ and $p_0$ represent the initial position and momentum.
\bigskip
\noindent\textbf{Step 1:}\\
We set $H(x,p)=E$ to be the integral of motion used, i.e. the action coordinate.
\bigskip
\noindent\textbf{Step 2:}\\
We can solve $p$ in terms of $x$ and $E$, i.e.
\begin{eqnarray}
p=\epsilon\sqrt{2E-x^2},\qquad \epsilon^2=1.
\end{eqnarray}
\bigskip
\noindent\textbf{Step 3:}\\
We can obtain the generating function by integration,
\begin{eqnarray}
K(x,E)&=&\int p dx=\int \epsilon\sqrt{2E-x^2} dx\nonumber\\
&=&\frac{\epsilon}{2}x\sqrt{2E-x^2}+\epsilon E\arctan\left(\frac{x}{\sqrt{2E-x^2}}\right).\nonumber
\end{eqnarray}
\bigskip
\noindent\textbf{Step 4:}\\
The angle coordinate $\theta$ is given by
\begin{eqnarray}
\theta&=&\frac{\partial K}{\partial E}=\epsilon \arctan\left(\frac{x}{\sqrt{2E-x^2}}\right)\nonumber\\
&=&\arctan\left(\frac{x}{p}\right),\label{thetaof}
\end{eqnarray}
together with the action variable
\begin{eqnarray}
E=\frac{p^2+x^2}{2}.\label{rof}
\end{eqnarray}
\bigskip
\noindent\textbf{Step 5:}\\
From equations (\ref{thetaof}) and (\ref{rof}), we can see that this transformation is linked with the polar coordinates, i.e. by inverting the transformation, we get
\begin{eqnarray}
x=\sqrt{2E}\sin(\theta),\qquad p=\sqrt{2E}\cos(\theta).\label{xietaof}
\end{eqnarray}
\bigskip
\noindent\textbf{Step 6:}\\
To obtain the frequency of the system, one can check the equations of motion in the action-angle coordinates, which take the form
\begin{eqnarray}
\frac{dE}{dt}=0,\qquad \frac{d\theta}{dt}=1.\label{AAeqsmot}
\end{eqnarray}
\bigskip
\noindent\textbf{Step 7:}\\
The integral of motion is a constant given by
\begin{eqnarray}
E=\frac{p_0^2+x_0^2}{2}
\end{eqnarray}
at $t=0$.
\bigskip
\noindent\textbf{Result:}\\
Therefore, after some algebraic manipulations, we get that the exact scheme using (\ref{ExactX}-\ref{ExactP}) is given by
\begin{eqnarray}
X=x_0\cos(\Delta)+p_0\sin(\Delta),\qquad P=p_0\cos(\Delta)-x_0\sin(\Delta).\label{Exact}
\end{eqnarray}
In addition, the discrete equations of motion take the form
\begin{eqnarray}
\frac{X-x}{\Delta}=\frac{x_0(\cos(\Delta)-1)+p_0\sin(\Delta)}{\Delta},\\
\frac{P-p}{\Delta}=\frac{p_0(\cos(\Delta)-1)-x_0\sin(\Delta)}{\Delta}.
\end{eqnarray}
By taking the limit $\Delta\rightarrow0$, we get back the equations of motion of the continuous case, which makes the scheme consistent. The discretization scheme of the one-dimensional harmonic oscillator corroborates the results found in \cite{Cieslinski09} using a different approach.
It is also possible to apply the transformations (\ref{Exact}) repeatedly with a constant step $\Delta$, which provides an equivalent of integrating the equations of motion, i.e. after applying the transformation $n$ times (where $n$ is an arbitrary integer), we get, after some algebraic manipulations,
\begin{eqnarray}
x(n\Delta)=x_0\cos(n\Delta)+p_0\sin(n\Delta),\qquad p(n\Delta)=p_0\cos(n\Delta)-x_0\sin(n\Delta),
\end{eqnarray}
where $n\Delta$ can be seen as the time $t$, i.e.
\begin{eqnarray}
t=\lim_{\stackrel{n\rightarrow\infty}{\Delta\rightarrow0}}n\Delta.
\end{eqnarray}
Similar results can be obtained using inhomogeneous step sizes.
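As a simple numerical sanity check, a minimal Python sketch (ours) iterating the exact scheme (\ref{Exact}) confirms that the energy is preserved to machine precision and that the iterates match the analytic solution (\ref{solx})--(\ref{solp}):
\begin{verbatim}
import math

def exact_step(x, p, dt):
    # exact scheme: a rotation of the phase plane by the angle dt
    return (x*math.cos(dt) + p*math.sin(dt),
            p*math.cos(dt) - x*math.sin(dt))

x, p = 1.0, 0.0     # initial condition x(0) = 1, p(0) = 0
dt, n = 0.1, 200    # constant step and number of iterations
for _ in range(n):
    x, p = exact_step(x, p, dt)

t = n * dt
print(abs(x - math.cos(t)), abs(p + math.sin(t)))  # ~1e-15: on the solution
print(abs((p**2 + x**2)/2 - 0.5))                  # ~1e-16: energy preserved
\end{verbatim}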
\section{Applications for non-exact schemes}\label{NonExact}
Let us consider an approximate scheme
\begin{eqnarray}
Q_j=F_j(\vec{q},\vec{p};\Delta),\qquad P_j=G_j(\vec{q},\vec{p};\Delta),
\end{eqnarray}
which is not necessarily a group transformation with the parameter $\Delta$. We can write this transformation as
\begin{eqnarray}
\psi_\Delta=g_\Delta+w_\Delta,
\end{eqnarray}
such that the approximated advanced values $Q_j$ and $P_j$ are
\begin{eqnarray}
Q_j=\psi_\Delta\circ q_j,\qquad P_j=\psi_\Delta\circ p_j,
\end{eqnarray}
where $g_\Delta$ is the exact evolution transformation, i.e.
\begin{eqnarray}
g_\Delta=\exp(\Delta \mathfrak{g}),\qquad \mathfrak{g}=\sum_{j=1}^N\left(\frac{\partial H}{\partial p_j}\frac{\partial}{\partial q_j}-\frac{\partial H}{\partial q_j}\frac{\partial}{\partial p_j}\right),
\end{eqnarray}
and $w_\Delta$ is the local error of the step taking the form of a vector field using the current-time variables,
\begin{eqnarray}
w_\Delta=\sum_{j=1}^N\left(\alpha_j(\vec{q},\vec{p};\Delta)\frac{\partial}{\partial q_j}+\beta_j(\vec{q},\vec{p};\Delta)\frac{\partial}{\partial p_j}\right).
\end{eqnarray}
If the transformation $\psi_\Delta$ is consistent, then the error $w_\Delta$ can be expressed using $\Delta^2$-terms and higher-order terms in $\Delta$, that is the Taylor expansion takes the form
\begin{eqnarray}
w_\Delta=\sum_{k=2}^\infty\frac{\Delta^k}{k!}v_k,\qquad v_k=\left.\frac{\partial^k w_\Delta}{\partial\Delta^k}\right\vert_{\Delta=0},\qquad v_0=v_1=0.
\end{eqnarray}
In this formalism, we know $\psi_\Delta$ and $\mathfrak{g}$, but a priori not $g_\Delta=\exp(\Delta \mathfrak{g})$, $w_\Delta$ and $v_k$. (One should note that $g_\Delta$ is known using $\mathfrak{g}$, but it may be hard to compute explicitly.) The lower-order $v_k$ can be computed from the known quantity
\begin{eqnarray}
\psi_{-\Delta}\circ\frac{\partial}{\partial\Delta}\psi_\Delta &=& \left(\exp(-\Delta \mathfrak{g})+w_{-\Delta}\right)\circ\left( \mathfrak{g}\circ\exp(\Delta \mathfrak{g})+\frac{\partial w_\Delta}{\partial \Delta}\right)\nonumber\\
&=&\mathfrak{g}+\exp(-\Delta \mathfrak{g})\circ\frac{\partial w_\Delta}{\partial\Delta}+w_{-\Delta}\circ \mathfrak{g}\circ\exp(\Delta \mathfrak{g})+w_{-\Delta}\circ \frac{\partial w_\Delta}{\partial\Delta}.\label{Err}
\end{eqnarray}
By expanding in $\Delta$ on both sides, we get for each coefficient of $\Delta$ the following relations:
\begin{eqnarray}
&\Delta^1:&\,\left.\frac{\partial}{\partial\Delta}\left(\psi_{-\Delta}\circ\frac{\partial}{\partial\Delta}\psi_\Delta\right)\right\vert_{\Delta=0}=v_2,\nonumber\\
&\Delta^2:&\,\left.\frac{\partial^2}{\partial\Delta^2}\left(\psi_{-\Delta}\circ\frac{\partial}{\partial\Delta}\psi_\Delta\right)\right\vert_{\Delta=0}=v_3-2\mathfrak{g}\circ v_2+v_2\circ \mathfrak{g},\nonumber\\
&\Delta^3:&\,\left.\frac{\partial^3}{\partial\Delta^3}\left(\psi_{-\Delta}\circ\frac{\partial}{\partial\Delta}\psi_\Delta\right)\right\vert_{\Delta=0}=v_4-3\mathfrak{g}\circ v_3-v_3\circ \mathfrak{g}+3v_2^2+3\mathfrak{g}^2\circ v_2+3v_2\circ \mathfrak{g}^2\nonumber\\
&\vdots&\nonumber
\end{eqnarray}
From the $\Delta^1$--equation, we can obtain the second-order error $v_2$. Then, we can compute the third-order error $v_3$ from $v_2$ and the $\Delta^2$--equation. Then, $v_4$ from $\Delta^3$ and so on. Therefore, the error of any scheme can be predicted analytically.
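As an illustration of this procedure, a minimal SymPy sketch (ours) computes $\psi_{-\Delta}\circ\frac{\partial}{\partial\Delta}\psi_\Delta$ for the Euler scheme of the harmonic oscillator treated below and reads off $v_2$ from the $\Delta^1$--equation:
\begin{verbatim}
import sympy as sp

x, p, D = sp.symbols('x p Delta')

# Euler scheme for H = (p^2 + x^2)/2
F = x + D*p
G = p - D*x

# psi_{-Delta} o d/dDelta psi_Delta: differentiate in Delta, then
# substitute the images of (x, p) under the inverse step psi_{-Delta}
sub = {x: F.subs(D, -D), p: G.subs(D, -D)}
xi  = sp.expand(sp.diff(F, D).subs(sub, simultaneous=True))  # p + Delta*x
eta = sp.expand(sp.diff(G, D).subs(sub, simultaneous=True))  # Delta*p - x

# the Delta^1 coefficients give v_2 = x d/dx + p d/dp
print(sp.diff(xi, D).subs(D, 0), sp.diff(eta, D).subs(D, 0))  # -> x, p
\end{verbatim}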
Knowing the errors of a scheme analytically allows one to control the numerical trajectories with better precision without even knowing the solution. Here, we propose two options to improve the numerical results:
\begin{itemize}
\item \textbf{Find the associated invariants of the error}\\
An invariant $\phi$ of the leading-order error $v_k$ will be computed more precisely, because
\begin{eqnarray}
v_k\circ\phi=0,
\end{eqnarray}
hence the error on $\phi$ is diminished. However, some information is lost since the number of functionally independent invariants is always lower than the number of variables.
\item \textbf{Increase the order of the error}\\
By subtracting the lowest-order error(s) $v_k$ from the transformation $\psi_\Delta$, one can raise the order of the error and get a more precise scheme. However, for geometric schemes, increasing the order of the error may break some qualitative properties.
\end{itemize}
A problem arises from the fact that the deformations of the variables are only tangent to the solution (unless the variables are the action-angle coordinates). Since the deformations are tangent to the solution, the lower-order errors tend to spread into the higher-order terms and it becomes harder to interpret the errors correctly. However, for the leading-order error, this method seems accurate, at least for non-stiff systems. As proposed in the conclusions, a better way to interpret the errors geometrically would help in this matter.
One should note that, in general, the Euler method is equivalent to approximating the Lie-group transformation by a linear operator, that is
\begin{eqnarray}
Q_j=\exp(\Delta \mathfrak{g})\circ q_j\approx (1+\Delta \mathfrak{g})\circ q_j=q_j+\Delta\frac{\partial H}{\partial p_j}\\
\Rightarrow \frac{Q_j-q_j}{\Delta}\approx\frac{\partial H(\vec{q},\vec{p})}{\partial p_j}
\end{eqnarray}
and similarly for $P_j$. One can truncate the exponential at a higher order to generate schemes, however, the results will depend on higher partial derivatives of the Hamiltonian. The Runge--Kutta method can be obtained by further expanding the partial derivatives using Taylor series \cite{BFB16,Kutta01,Runge95}.
\subsection{Examples of schemes of the 1-dimensional harmonic oscillator}
Throughout the rest of this section, we will solely focus on examples of schemes for the 1-dimensional harmonic oscillator with the variables rescaled to absorb the parameters of the system, i.e.
\begin{eqnarray}
H=\frac{p^2+x^2}{2}.\label{Hex2}
\end{eqnarray}
The evolution generator of this Hamiltonian is given by
\begin{eqnarray}
\mathfrak{g}=p\frac{\partial}{\partial x}-x\frac{\partial}{\partial p}.
\end{eqnarray}
To illustrate the numerical results, we consider the trajectory starting at $x(0)=1$ and $p(0)=0$ until it reaches $t=20$. The continuous version is represented by a continuous red line and the discrete version is represented by blue dots, using a step $\Delta=0.1$, in the software Maple \cite{Maple}.
\subsubsection{The exact scheme}
As a first example, we will consider the exact scheme from section \ref{SecExact}. The iteration scheme is given by
\begin{eqnarray}
X=x\cos(\Delta)+p\sin(\Delta),\qquad P=p\cos(\Delta)-x\sin(\Delta),
\end{eqnarray}
which suggests that the discrete equations of motion take the form
\begin{eqnarray}
\frac{X-x}{\Delta}=\frac{x\cos(\Delta)+p\sin(\Delta)-x}{\Delta},\qquad \frac{P-p}{\Delta}=\frac{p\cos(\Delta)-x\sin(\Delta)-p}{\Delta}.
\end{eqnarray}
By direct substitution into equation (\ref{Err}), we obtain that the error $w_\Delta$ is null, as expected.
Figure \ref{Exact-phase} represents the trajectory in the phase space, Figure \ref{Exact-x} represents the evolution of $x$ in time and Figure \ref{Exact-p} represents the evolution of $p$ in time. Figure \ref{Exact-error} shows the error over time in the phase space,
\begin{eqnarray}
\sigma(x,p)=(x(t)-x(n\Delta))^2+(p(t)-p(n\Delta))^2,
\end{eqnarray}
such that $t=n\Delta$. In this scheme, the errors come solely from the finite-precision arithmetic of the software.
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.4]{Exact-phase.eps}
\label{Exact-phase}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Exact-x.eps}
\label{Exact-x}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Exact-p.eps}
\label{Exact-p}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Exact-error.eps}
\label{Exact-error}
\end{figure}
\pagebreak
~
\pagebreak
\subsubsection{The Euler method}
The discrete equations of motion for the Euler method are
\begin{eqnarray}
\frac{X-x}{\Delta}=p,\qquad \frac{P-p}{\Delta}=-x
\end{eqnarray}
and the iteration scheme is given by
\begin{eqnarray}
X=x+\Delta p,\qquad P=p-\Delta x,
\end{eqnarray}
which is equivalent to the truncation of the exponential $\exp(\Delta \mathfrak{g})$ after the linear term in $\Delta$.
By direct substitution into equation (\ref{Err}), we obtain
\begin{eqnarray}
\psi_{-\Delta}\circ\frac{\partial\psi_\Delta}{\partial\Delta}=(p+\Delta x)\frac{\partial}{\partial x}+(\Delta p-x)\frac{\partial}{\partial p}.
\end{eqnarray}
We can calculate the errors, that is
\begin{eqnarray}
v_0&=&v_1=0,\nonumber\\
v_2&=&x\frac{\partial}{\partial x}+p\frac{\partial}{\partial p},\nonumber\\
v_3&=&p\frac{\partial}{\partial x}-x\frac{\partial}{\partial p},\nonumber\\
v_4&=&-x\frac{\partial}{\partial x}-p\frac{\partial}{\partial p},\nonumber\\
&\vdots&\nonumber
\end{eqnarray}
The second-order error $v_2$ represents a scaling transformation, i.e.
\begin{eqnarray}
x\rightarrow e^\epsilon x\qquad p\rightarrow e^\epsilon p.\label{EulerTrans}
\end{eqnarray}
Hence, any function of the ratio $\frac{x}{p}$ will be an invariant of the error deformation $v_2$.
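This invariance is immediate to verify symbolically; a two-line SymPy check (ours):
\begin{verbatim}
import sympy as sp

x, p = sp.symbols('x p')
v2 = lambda f: x*sp.diff(f, x) + p*sp.diff(f, p)   # scaling generator v_2
print(sp.simplify(v2(x/p)))  # -> 0: x/p (and any function of it) is invariant
\end{verbatim}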
To illustrate the results, Figure \ref{Euler-phase} is the trajectory in the phase space, Figure \ref{Euler-x} is the evolution of $x$ in time, Figure \ref{Euler-p} is the evolution of $p$ in time and Figure \ref{Euler-inv} is the evolution of the invariant $\frac{x}{p}$ in time. In Figure \ref{Euler-error} the error over time in the phase space $\sigma(x,p)$ is given by the black curve while the error over time of the invariant $\sigma(\frac{x}{p})$ is given by the green curve,
\begin{eqnarray}
\sigma(x,p)&=&(x(t)-x(n\Delta))^2+(p(t)-p(n\Delta))^2,\\
\sigma\left(\frac{x}{p}\right)&=&\left(\frac{x(t)}{p(t)}-\frac{x(n\Delta)}{p(n\Delta)}\right)^2
\end{eqnarray}
such that $t=n\Delta$.
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.4]{Euler-phase.eps}
\label{Euler-phase}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Euler-x.eps}
\label{Euler-x}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Euler-p.eps}
\label{Euler-p}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Euler-inv.eps}
\label{Euler-inv}
\end{figure}
\begin{figure}[h!]
\centering
\caption{}
\includegraphics[scale=0.7]{Euler-error.eps}
\label{Euler-error}
\end{figure}
We can see in Figure \ref{Euler-error} that the ratio $\frac{x}{p}$ provides a better approximation than the set of variables $x$ and $p$. (Of course, the approximation of the invariant is not good when $p$ approaches zero since it hits a singularity, but around those points, one can use the inverse function $\frac{p}{x}$ to get a better approximation.) In addition, we can see in Figure \ref{Euler-inv} that the blue dots are drifting to the right of the curve. This can be explained by higher-order errors generating an extra time translation.
\pagebreak
In this case, if one corrects the scheme by subtracting the errors $v_2$, $v_3$ and $v_4$ of the Euler scheme, we obtain
\begin{eqnarray}
X=x+\Delta p-\frac{\Delta^2}{2}x-\frac{\Delta^3}{6}p+\frac{\Delta^4}{24}x,\qquad P=p-\Delta x-\frac{\Delta^2}{2}p+\frac{\Delta^3}{6}x+\frac{\Delta^4}{24}p,
\end{eqnarray}
which is equivalent to the Runge--Kutta scheme (RK4), i.e.
\begin{eqnarray}
\frac{X-x}{\Delta}=p-\frac{\Delta}{2}x-\frac{\Delta^2}{6}p+\frac{\Delta^3}{24}x,\qquad \frac{P-p}{\Delta}=-x-\frac{\Delta}{2}p+\frac{\Delta^2}{6}x+\frac{\Delta^3}{24}p.
\end{eqnarray}
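As a quick consistency check, the corrected update reproduces the fourth-order Taylor polynomial of the exact rotation, so its one-step defect is of order $\Delta^5$. A Python sketch with arbitrary, purely illustrative test values:
\begin{verbatim}
import math

def corrected_step(x, p, dt):
    # Euler step with the errors v2, v3 and v4 subtracted (RK4-equivalent)
    X = x + dt*p - dt**2/2*x - dt**3/6*p + dt**4/24*x
    P = p - dt*x - dt**2/2*p + dt**3/6*x + dt**4/24*p
    return X, P

x, p, dt = 0.7, -0.3, 0.1                # illustrative values
X, P = corrected_step(x, p, dt)
Xe = x*math.cos(dt) + p*math.sin(dt)     # exact one-step values
Pe = p*math.cos(dt) - x*math.sin(dt)
print(abs(X - Xe), abs(P - Pe))          # both of order dt**5
\end{verbatim}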
\subsubsection{The discrete gradient method}
Let us consider the discrete-gradient method which preserves the Hamiltonian (\ref{Hex2}), i.e. the discrete equations of motion take the form
\begin{eqnarray}
\frac{X-x}{\Delta}=\frac{1}{2}\frac{P^2-p^2}{P-p},\qquad \frac{P-p}{\Delta}=-\frac{1}{2}\frac{X^2-x^2}{X-x}
\end{eqnarray}
and the iteration scheme is given by
\begin{eqnarray}
X=\frac{4x+4\Delta p-\Delta^2 x}{4+\Delta^2},\qquad P=\frac{4p-4\Delta x-\Delta^2p}{4+\Delta^2}.
\end{eqnarray}
It is interesting to note that for this scheme, the inverse transformation is the same as inverting the sign of $\Delta$. However, it is not a time-linear Lie-group transformation since the combination of two iterations is not the iteration of the sum of the steps, e.g.
\begin{eqnarray}
\psi_{-\Delta}\circ\psi_\Delta=1,\qquad \psi_\Delta\circ\psi_\Delta\circ x\neq F_j(x,p;2\Delta),\qquad\psi_\Delta\circ\psi_\Delta\circ p\neq G_j(x,p;2\Delta).
\end{eqnarray}
Applying two iterations with the same step $\Delta$ is equivalent to applying one iteration with the step $\frac{8\Delta}{4-\Delta^2}$.
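This composition rule is easy to verify numerically; in the sketch below (Python, illustrative values), two iterations with step $\Delta$ agree with a single iteration with step $\frac{8\Delta}{4-\Delta^2}$ up to round-off.
\begin{verbatim}
def dg_step(x, p, dt):
    den = 4.0 + dt*dt
    return ((4*x + 4*dt*p - dt*dt*x)/den,
            (4*p - 4*dt*x - dt*dt*p)/den)

x, p, dt = 0.3, 0.9, 0.2                  # illustrative values
x2, p2 = dg_step(*dg_step(x, p, dt), dt)  # two steps of size dt
dt_eff = 8*dt/(4 - dt*dt)                 # effective single step
x1, p1 = dg_step(x, p, dt_eff)
print(abs(x2 - x1), abs(p2 - p1))         # ~ machine precision
\end{verbatim}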
By direct substitution into equation (\ref{Err}), we obtain
\begin{eqnarray}
\psi_{-\Delta}\circ\frac{\partial}{\partial\Delta}\psi_\Delta=\left(1-\frac{\Delta^2}{4+\Delta^2}\right)\left(p\frac{\partial}{\partial x}-x\frac{\partial}{\partial p}\right),\label{ErrDG}
\end{eqnarray}
and the lower-order terms take the form
\begin{eqnarray}
v_0&=&v_1\;=\;v_2\;=\;0,\nonumber\\
v_3&=&\frac{-1}{2}\left(p\frac{\partial}{\partial x}-x\frac{\partial}{\partial p}\right),\nonumber\\
v_4&=&2\left(x\frac{\partial}{\partial x}+p\frac{\partial}{\partial p}\right),\nonumber\\
v_5&=&\frac{13}{2}\left(p\frac{\partial}{\partial x}-x\frac{\partial}{\partial p}\right),\nonumber\\
&\vdots&\nonumber
\end{eqnarray}
We can see that the error is of order 3 and higher, and that the first term is proportional to the deformation $\mathfrak{g}$. The leading-order error is thus a translation in time: the numerical solution remains on the exact trajectory, but it represents a point at a slightly shifted time. By computing the solution for larger times, we can see that the graphs of the position and momentum in time slowly drift to the right of the exact curves.
To illustrate the results, Figure \ref{DisGra-phase} shows the trajectory in the phase space, Figure \ref{DisGra-x} the evolution of $x$ in time, Figure \ref{DisGra-p} the evolution of $p$ in time, and Figure \ref{DisGra-inv} the evolution of $H$ in time. In Figure \ref{DisGra-error}, the error over time in the phase space, $\sigma(x,p)$, is given by the black curve, while the error over time of the invariant, $\sigma(2H)$, is given by the green curve,
\begin{eqnarray}
\sigma(x,p)&=&(x(t)-x(n\Delta))^2+(p(t)-p(n\Delta))^2,\\
\sigma(2H)&=&\left(p(t)^2+x(t)^2-p(n\Delta)^2-x(n\Delta)^2\right)^2
\end{eqnarray}
such that $t=n\Delta$.
\begin{figure}[h!]
\centering
\caption{Discrete gradient method: trajectory in the phase space.}
\includegraphics[scale=0.4]{DisGra-phase.eps}
\label{DisGra-phase}
\end{figure}
\begin{figure}[h!]
\centering
\caption{Discrete gradient method: evolution of $x$ in time.}
\includegraphics[scale=0.7]{DisGra-x.eps}
\label{DisGra-x}
\end{figure}
\begin{figure}[h!]
\centering
\caption{Discrete gradient method: evolution of $p$ in time.}
\includegraphics[scale=0.7]{DisGra-p.eps}
\label{DisGra-p}
\end{figure}
\begin{figure}[h!]
\centering
\caption{Discrete gradient method: evolution of $H$ in time.}
\includegraphics[scale=0.7]{DisGra-inv.eps}
\label{DisGra-inv}
\end{figure}
\begin{figure}[h!]
\centering
\caption{Discrete gradient method: error in the phase space $\sigma(x,p)$ (black) and error of the invariant $\sigma(2H)$ (green).}
\includegraphics[scale=0.7]{DisGra-error.eps}
\label{DisGra-error}
\end{figure}
\pagebreak
~
\pagebreak
Knowing that the scheme is a group transformation that is non-linear in time, and that the errors solely take the form of an extra translation in time, we can express the approximative scheme differently, i.e.
\begin{eqnarray}
\psi_\Delta = \exp\left((\Delta+W(\Delta))\mathfrak{g}\right),
\end{eqnarray}
where $W(\Delta)$ is a function of $\Delta$ representing the error on time. Using (\ref{ErrDG}) and the new definition, we can calculate $W(\Delta)$ to obtain
\begin{eqnarray}
W(\Delta)=2 \arctan\left(\frac{\Delta}{2}\right) - \Delta.
\end{eqnarray}
Hence, by breaking the assumption that $\Delta$ acts as the time $t$, we can correct the scheme by parametrizing the time by a function of $\Delta$. The resulting scheme will be exact, i.e.
\begin{eqnarray}
X=\frac{4x+4\Delta p-\Delta^2 x}{4+\Delta^2},\qquad P=\frac{4p-4\Delta x-\Delta^2p}{4+\Delta^2}, \qquad T=t+ 2 \arctan\left(\frac{\Delta}{2}\right),\nonumber
\end{eqnarray}
where $X$, $P$ and $T$ are the advanced-in-time coordinates in the augmented phase space.
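Numerically, the exactness of the reparametrized scheme is immediate to check: after $n$ iterations the pair $(X,P)$ coincides, up to round-off, with the exact solution evaluated at time $2n\arctan(\Delta/2)$, even for a coarse step. A Python sketch with illustrative values:
\begin{verbatim}
import math

def dg_step(x, p, dt):
    den = 4.0 + dt*dt
    return ((4*x + 4*dt*p - dt*dt*x)/den,
            (4*p - 4*dt*x - dt*dt*p)/den)

x0, p0, dt = 1.0, 0.5, 0.4       # dt need not be small
x, p, T = x0, p0, 0.0
for _ in range(25):
    x, p = dg_step(x, p, dt)
    T += 2*math.atan(dt/2)       # reparametrized time advance per step
xe = x0*math.cos(T) + p0*math.sin(T)
pe = p0*math.cos(T) - x0*math.sin(T)
print(abs(x - xe), abs(p - pe))  # ~ machine precision
\end{verbatim}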
\section{Conclusions}\label{SecConc}
As a summary, we constructed a formalism allowing us to express solutions of time-independent Hamiltonian systems as deformations of the initial conditions via a Lie-group transformation. We found explicitly the associated Lie algebra, which allows us to compare approximative/numerical schemes with the exact scheme or real solution. This comparison provides information on the amplitude and type of errors.
More precisely, in section \ref{SecForm} we constructed the formalism for time-independent Hamiltonian systems using Lie-group transformations and an evolution generator taking values in the associated Lie algebra. This formalism considers an implicit advance in time, i.e. acting on the dependent variables, instead of an explicit advance in time, that is, transforming solely the time variable. Under the canonical transformation conditions, the algebraic preservation of all integrals of motion, and an equivalent of a time translation, we were able to determine the evolution generator. This vector field generates a Lie-group transformation which is consistent with both continuous and discrete versions of time. The mesh of the discretization is left unconstrained. One of the advantages of this method is that we do not need to discretize differential equations; we look for an algebraic substitute instead. Nothing needs to be known about the system (except the Hamiltonian and the initial conditions).
In section \ref{SecExact}, we propose a method to construct exact schemes for time-independent Hamiltonian systems that are integrable. These schemes keep the full precision of the continuous case with an arbitrary mesh. However, this method also has some limitations. Some integrals of motion must be known, which can be a tremendous task by itself. The method can also run into computational difficulties when one has to find the action-angle coordinates and/or the (inverse) canonical transformations, i.e. to give the relation between the original coordinates and the action-angle coordinates.
In section \ref{NonExact}, we investigated non-exact schemes using the formalism of section \ref{SecForm}. This allows us to determine the errors coming from the numerical schemes. It is possible to correct them or to look for functions of the coordinates (invariants of the errors) that are more precise. In addition, we showed how the Euler method can be derived from this formalism. We illustrated these considerations via the 1-dimensional harmonic oscillator. For the Euler method, we were able to correct the scheme to get RK4. We also looked into the discrete gradient method, which preserves the Hamiltonian.
This Lie-algebra formalism can be extended in many directions: for one, providing a similar proof of this formalism for autonomous systems of ordinary differential equations. It would be of great use to generalize the formalism to include time-dependent systems. Also, it would be interesting to approach perturbation theory using this method, that is, the case where the first-order error is non-zero. It could also be interesting to break the relation between the group parameter and the time, allowing for an implicit parametrization of the spacetime. In a more general way, ordinary differential equations are often used as a basis for partial differential equations. It would be interesting to further extend this formalism to partial differential equations. Also, it could have some applications to delay equations. Lastly, a geometric study of the errors of different schemes could be undertaken to investigate the global errors or methods to get exact schemes from non-exact schemes using additional numerical analysis tools.
\section*{Acknowledgements}
SB was partially supported by postdoctoral fellowships provided by the Fonds de Recherche du Qu\'ebec : Nature et Technologie (FRQNT) and the Natural Sciences and Engineering Research Council of Canada (NSERC). SB would like to thank Libor~\v{S}nobl (\v{C}VUT, Czech Republic) and Adel~F.~Antippa (UQTR, Canada) for interesting discussions on the subject of this paper.
\section{Introduction}
\label{sec1:Introduction}
Isogeometric Analysis (IGA),
which was introduced by \cite{HughesCottrellBazilevs:2005a}
and has since been developed intensively (see also the monograph
\cite{CottrellHughesBazilevs:2009a}), is a very suitable framework
for representing and discretizing
Partial Differential Equations (PDEs) on surfaces.
We refer the reader to the survey paper by
\cite{DziukElliot:2013a},
where different finite element approaches to the numerical solution
of PDEs on surfaces are discussed.
Very recently,
\cite{DednerMadhavanStinner:2013a}
have used and analyzed the Discontinuous Galerkin (DG) finite element method
for solving elliptic problems on surfaces.
The IGA of second-order PDEs on surfaces,
which avoids errors arising from the approximation of the surface,
has been introduced and numerically studied by
\cite{DedeQuarteroni:2012a}.
\cite{Brunero:2012a} presented some
discretization error analysis
of the DG-IGA applied to plane (2d) diffusion problems that carries over to
plane linear elasticity problems which have recently been studied
numerically in
\cite{ApostolatosSchmidtWuencherBletzinger:2013a}.
The efficient generation of the IGA equations, their fast solution,
and the implementation of adaptive IGA schemes are currently
hot research topics. The use of DG technologies will certainly
facilitate the handling of the multi-patch case.
\par In this paper, we use the DG method to handle
the IGA of diffusion problems on closed or open, multi-patch NURBS surfaces.
The DG technology easily allows us to handle non-homogeneous Dirichlet
boundary conditions, as in the Nitsche method, as well as multi-patch
NURBS spaces which can be discontinuous across the patch boundaries.
We also derive discretization error estimates
in the DG- and $L_{2}$-norms. Finally, we present some numerical results
confirming our theoretical estimates.
\section{Surface Diffusion Model Problem}
\label{sec2:SurfaceDiffusionModelProblem}
Let us assume that the physical (computational) domain $\Omega$,
where we are going to solve our diffusion problem,
is a sufficiently smooth, two-dimensional generic (Riemannian) manifold (surface)
defined in the physical space $\mathbb{R}^{3}$
by means of a smooth multi-patch NURBS mapping
that is defined as follows.
Let $\mathcal{T}_{H}= \{\Omega^{(i)}\}_{i=1}^{N}$ be a partition of
our physical computational domain $\Omega$ into
non-overlapping patches (subdomains) $\Omega^{(i)}$ such that
$\overline{\Omega}= \bigcup_{i=1}^{N} \overline{\Omega}^{(i)} $ and
$\Omega^{(i)} \cap \Omega^{(j)}= \emptyset $ for $i \neq j$,
and let each patch $\Omega^{(i)}$ be the image of the
parameter domain $\widehat{\Omega} = (0,1)^2 \subset \mathbb{R}^{2}$
by some NURBS mapping
$G^{(i)} : \widehat{\Omega} \rightarrow \Omega^{(i)} \subset \mathbb{R}^{3},
\mathbf{\xi} = (\mathbf{\xi}_1,\mathbf{\xi}_2)
\mapsto \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)=G^{(i)}(\mathbf{\xi})$,
which can be represented in the form
\begin{equation}
\label{sec2:GeometricalMappingRepresentation}
G^{(i)}(\xi_{1},\xi_{2}) = \sum_{k_{1}=1}^{n_{1}} \sum_{k_{2}=1}^{n_{2}}
\mathbf{P}^{(i)}_{(k_{1},k_{2})} \widehat{R}^{(i)}_{(k_{1},k_{2})}(\xi_{1},\xi_{2})
\end{equation}
where $\{ \widehat{R}^{(i)}_{(k_{1},k_{2})} \}$ are the bivariate NURBS basis functions,
and $\{\mathbf{P}^{(i)}_{(k_{1},k_{2})} \}$ are the control points,
see \cite{CottrellHughesBazilevs:2009a} for a detailed
description.
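For illustration, the evaluation of the mapping (\ref{sec2:GeometricalMappingRepresentation}) at a parameter point can be sketched as follows (Python; it is assumed, purely for the sketch, that the rational basis functions are available as callables, which is not how an actual IGA implementation organizes the computation):
\begin{verbatim}
def patch_point(xi1, xi2, P, R):
    """Evaluate G(xi1, xi2) = sum_k1 sum_k2 P[k1][k2]*R[k1][k2](xi1, xi2).

    P[k1][k2]: control point in R^3 (sequence of three floats),
    R[k1][k2]: bivariate NURBS basis function (callable)."""
    x = [0.0, 0.0, 0.0]
    for k1, row in enumerate(P):
        for k2, Pk in enumerate(row):
            w = R[k1][k2](xi1, xi2)
            for c in range(3):
                x[c] += w*Pk[c]
    return tuple(x)
\end{verbatim}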
\par Let us now consider a
diffusion problem
on the surface $\Omega$ the weak formulation of which
can be written as follows: find $u \in V_g$ such that
\begin{equation}
\label{sec2:VariationalFormulation}
a(u,v) = \langle F,v \rangle \quad \forall v \in V_0,
\end{equation}
where the bilinear and linear forms are given by the relations
\begin{equation*}
a(u,v) = \int_\Omega \alpha \, \nabla_\Omega u \cdot \nabla_\Omega v \, d \Omega
\quad \mbox{and} \quad
\langle F,v \rangle = \int_\Omega f v \, d \Omega + \int_{\Gamma_N} g_N v \,d \Gamma,
\end{equation*}
respectively, where $\nabla_\Omega$ denotes the so-called tangential or surface
gradient, see e.g. Definition~2.3 in \cite{DziukElliot:2013a}
for its precise description.
The hyperplane $V_g$ and the test space $V_0$ are given by
$V_g=\{v \in V = H^1(\Omega): v=g_D \;\mbox{on}\; \Gamma_D\}$
and
$V_0=\{v \in V: v=0 \;\mbox{on}\; \Gamma_D\}$
for the case of an open surface $\Omega$ with the boundary
$\Gamma = \overline{\Gamma}_D \cup \overline{\Gamma}_N$
such that $\mbox{meas}_1(\Gamma_D) > 0$,
whereas
$V_g=V_0=\{v \in V: \int_\Omega v \, d \Omega =0\}$
in the case of a pure Neumann problem ($\Gamma_N = \Gamma$)
as well as in the case of closed surfaces unless there is
a reaction term. In case of closed
surfaces there is of course no integral over $\Gamma_N$
in the linear functional on the right-hand side of
(\ref{sec2:VariationalFormulation}).
In the remainder of the paper, we will mainly discuss the case of mixed boundary
value problems on an open surface under appropriate assumptions
(e.g., $\mbox{meas}_1(\Gamma_D) > 0$, $\alpha$ - uniformly positive and bounded,
$f\in L_2(\Omega)$, $g_D \in H^{\frac{1}{2}}(\Gamma_{D})$ and $g_{N} \in L_{2}(\Gamma_{N})$~)
ensuring existence and uniqueness
of the solution of (\ref{sec2:VariationalFormulation}).
For simplicity, we assume that the diffusion coefficient $\alpha$ is patch-wise constant,
i.e. $\alpha = \alpha_i$ on $\Omega^{(i)}$ for $i=1,2,\ldots,N$.
The other cases, including the reaction-diffusion case, can be treated in the same way
and yield the same results as presented below.
\section{DG-IGA Schemes and their Properties}
\label{sec3:DGIGASchemesAndProperties}
The DG-IGA variational identity
\begin{equation}
\label{sec3:DG-VariationalIdentity}
a_{DG}(u,v) = \langle F_{DG},v \rangle \quad \forall v \in \mathcal{V} = H^{1+s}(\mathcal{T}_{H}),
\end{equation}
which corresponds to (\ref{sec2:VariationalFormulation}),
can be derived in the same way as its FE counterpart, where
$H^{1+s}(\mathcal{T}_{H}) =\{v \in L_{2}(\Omega): v|_{\Omega^{(i)}}
\in H^{1+s}(\Omega^{(i)}), \; \forall \, i = 1,\ldots,N\}$
with some $s > 1/2$.
The DG bilinear and linear forms in the
\textit{Symmetric Interior Penalty Galerkin} (SIPG) version,
that is considered throughout this paper for definiteness,
are defined by the relationships
\begin{eqnarray}
\label{sec3:DG-BilinearForm}
\nonumber
a_{DG}(u,v) &=&\sum_{i=1}^{N} \int_{\Omega^{(i)}}
\alpha_{i} \nabla_{\Omega} u \cdot \nabla_{\Omega} v \, d\Omega\\ \nonumber
&& -\sum_{\gamma \in \mathcal{E}_{I} \cup \mathcal{E}_{D}}
\int_{\gamma}
\left(
\{ \alpha \nabla_{\Omega} u \cdot \mathbf{n}\}
[v] +
\{\alpha \nabla_{\Omega} v \cdot \mathbf{n}\} [u]
\right)\,d\Gamma\\
&& + \sum_{ \gamma \in \mathcal{E}_{I} \cup \mathcal{E}_{D}}
\frac{\delta}{ h_{\gamma} }
\int_{\gamma} \alpha_{\gamma} [u][v]\,d\Gamma
\end{eqnarray}
and
\begin{eqnarray}
\label{sec3:DG-LinearForm}
\nonumber
\langle F_{DG},v \rangle
&=& \int_{\Omega} f v d\,\Omega
+ \sum_{\gamma \in \mathcal{E}_{N}} \int_{\gamma} g_{N}v\, d\Gamma\\
&& + \sum_{\gamma \in \mathcal{E}_{D}} \int_{\gamma}
\alpha_{\gamma} \left( - \nabla_{\Omega} v \cdot \mathbf{n}
+ \frac{\delta}{ h_{\gamma} } v \right) g_{D}\,d\Gamma,
\end{eqnarray}
respectively, where the usual DG notations
for the averages $\{ \cdot \}$ and jumps $[\cdot ]$
are used,
see, e.g., \cite{Riviere:2008a}.
The sets
$\mathcal{E}_{I}$, $\mathcal{E}_{D}$ and $\mathcal{E}_{N}$
denote the sets of edges $\gamma$ of the patches
belonging to $\Gamma_I = \cup \,\partial \Omega^{(i)} \setminus \{\Gamma_D \cup \Gamma_N\}$,
$\Gamma_D$ and $\Gamma_N$, respectively, whereas
$h_{\gamma}$ is the mesh-size on $\gamma$.
The penalty parameter $\delta$ must be chosen such that
the ellipticity of the DG bilinear form on $\mathcal{V}_{h}$
can be ensured.
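To make the structure of the interface terms concrete, the following Python sketch assembles the SIPG coupling block for a single interface in the simplest conceivable setting, namely two linear finite elements in one dimension; the sketch is purely illustrative (it is not taken from the implementation used later in the paper), and the arithmetic means used for $\alpha_\gamma$ and $h_\gamma$ are an assumption.
\begin{verbatim}
import numpy as np

def sipg_interface_block(aL, aR, hL, hR, delta):
    """4x4 SIPG coupling block for one interface between two linear 1d
    elements; dofs = (uL0, uL1, uR0, uR1), interface shared by uL1, uR0.
    In 1d the interface 'integral' reduces to a point evaluation."""
    a_g = 0.5*(aL + aR)                    # assumed interface coefficient
    h_g = 0.5*(hL + hR)                    # assumed interface mesh size
    jump = np.array([0.0, 1.0, -1.0, 0.0])             # [u] = uL1 - uR0
    avg = 0.5*np.array([-aL/hL, aL/hL, -aR/hR, aR/hR])  # {a u' n}, n = +1
    return (-np.outer(jump, avg) - np.outer(avg, jump)
            + delta/h_g*a_g*np.outer(jump, jump))

K = sipg_interface_block(aL=1.0, aR=10.0, hL=0.1, hR=0.1, delta=12.0)
\end{verbatim}
Both consistency terms and the penalty term vanish when the data are continuous across the interface, so conforming solutions are not perturbed.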
The relationship between our model problem
(\ref{sec2:VariationalFormulation})
and the DG variational identity
(\ref{sec3:DG-VariationalIdentity})
is given by the consistency theorem that can easily be verified.
\begin{theorem}\label{Thm:Sec3:Consistency}
If the solution $u$ of the variational problem (\ref{sec2:VariationalFormulation}) belongs
to $V_g \cap H^{1+s}(\mathcal{T}_{H })$ with some $s > 1/2$, then
$u$ satisfies the DG variational identity (\ref{sec3:DG-VariationalIdentity}).
Conversely, if $u \in H^{1+s}(\mathcal{T}_{H})$
satisfies (\ref{sec3:DG-VariationalIdentity}),
then $u$ is the solution of our original variational problem
(\ref{sec2:VariationalFormulation}).
\end{theorem}
\par Now we consider the finite-dimensional Multi-Patch NURBS subspace
\begin{equation*}
\mathcal{V}_{h}= \{v \in L_{2}(\Omega): \; v|_{\Omega^{(i)}}\in V^{i}_{h}(\Omega^{(i)}), \; i= 1,\ldots,N \}
\end{equation*}
of our DG space $\mathcal{V}$,
where
$
V^{i}_{h}(\Omega^{(i)}) = \text{span}\{R_{\textbf{k}}^{(i)} \}
$
denotes the space of NURBS functions on each single-patch $ \Omega^{(i)}, \; i= 1,\ldots,N$,
and the NURBS basis functions
$
R_{\textbf{k}}^{(i)} = \widehat{R}^{(i)}_{\textbf{k}} \circ G^{(i)^{-1}}
$
are given by the push-forward of the NURBS functions $\widehat{R}^{(i)}_{\textbf{k}}$
to their corresponding physical sub-domains $ \Omega^{(i)}$ on the surface $\Omega$.
Finally, the DG scheme for our model problem \eqref{sec2:VariationalFormulation}
reads as follows:
find $u_{h} \in \mathcal{V}_{h}$ such that
\begin{equation}
\label{sec3:DiscreteDGVariationalFormulation}
a_{DG}(u_{h},v_{h}) = \langle F_{DG},v_{h} \rangle , \quad \forall v_{h} \in \mathcal{V}_{h}.
\end{equation}
For simplicity of our analysis, we assume matching meshes in the IGA sense,
where the discretization parameter $h_i$ characterizes the mesh-size in the patch $\Omega^{(i)}$
whereas $p$ always denotes the underlying polynomial degree of the NURBS.
Using special trace and inverse inequalities in the NURBS spaces $\mathcal{V}_{h}$
and Young's inequality, for sufficiently large DG penalty parameter $\delta$,
we can easily establish $\mathcal{V}_{h}$ coercivity and boundedness
of the DG bilinear form with respect to the DG energy norm
\begin{equation}
\label{sec3:DG-Norm}
\|v \|^{2}_{DG} =
\sum_{i=1}^{N} \alpha_{i} \|\nabla_{\Omega} v_{i} \|_{L^{2}(\Omega^{i})}^{2}
+ \sum_{ \gamma \in \mathcal{E}_{I} \cup \mathcal{E}_{D}}
\alpha_{\gamma} \frac{\delta}{h_{\gamma}} \| [v] \|_{L^{2}(\gamma)}^{2},
\end{equation}
yielding existence and uniqueness of the DG solution $u_{h} \in \mathcal{V}_{h}$
of (\ref{sec3:DiscreteDGVariationalFormulation})
that can be determined by the solution of a linear system of algebraic
equations.
\section{Discretization Error Estimates}
\label{sec4:DiscretizationErrorEstimates}
\begin{theorem}\label{sec4:DG-Norm-ErrorEstimate}
Let $u \in V_g \cap H^{1+s}(\mathcal{T}_{H})$ with some $s > 1/2$ be the solution of
(\ref{sec2:VariationalFormulation}),
$u_{h} \in \mathcal{V}_{h}$ be the solution of (\ref{sec3:DiscreteDGVariationalFormulation}),
and the penalty parameter $\delta$ be chosen large enough.
Then there exists a positive constant $c$
that is independent of $u$, the discretization parameters
and the jumps in the diffusion coefficients
such that the DG-norm error estimate
\begin{equation}
\label{sec4:DG-normError Estimate}
\|u-u_{h} \|_{DG} \leq c
\left(\sum_{i=1}^{N} \alpha_{i} h_{i}^{2t}\|u\|^{2}_{H^{1+t}(\Omega^{(i)})}\right)^{1/2},
\end{equation}
holds with $t:= \min\{s,p\} $.
\end{theorem}
\begin{proof}
Let us give a sketch of the proof.
By the triangle inequality, we have
\begin{equation}\label{sec4:DGTriangleInequality}
\|u-u_{h} \|_{DG} \leq \|u- \Pi_{h} u\|_{DG} + \|\Pi_{h} u - u_{h}\|_{DG}
\end{equation}
with some quasi-interpolation operator $\Pi_{h}: \mathcal{V} \mapsto \mathcal{V}_{h}$
such that the first term can be estimated with optimal order, i.e. by the term
on the right-hand side of (\ref{sec4:DG-normError Estimate})
with some other constant $c$. This is possible due to the
approximation results known for NURBS,
see, e.g., \cite{BazilevsBeiraoCottrellHughesSangalli:2006a}
and \cite{CottrellHughesBazilevs:2009a}.
Now it remains to estimate the second term in the same way.
Using the Galerkin orthogonality $a_{DG}(u-u_h,v_h)=0$ for all $v_h \in \mathcal{V}_{h}$,
the $\mathcal{V}_{h}$ coercivity of the bilinear form $a_{DG}(\cdot,\cdot)$,
the scaled trace inequality
\begin{equation}
\label{sec4:EpsilonTraceInequality}
\|v \|_{L^{2}(e)} \leq
C h^{-1/2}_{E} \left( \|v \|_{L^{2}(E)}
+ h^{1/2+\epsilon}_{E} |v|_{H^{1/2+\epsilon}(E)} \right),
\end{equation}
that holds for all $v \in H^{1/2+\epsilon}(E)$, for all IGA mesh elements $E$,
for all edges $e \subset \partial E$, and for $\epsilon > 0$,
where $ h_{E}$ denotes the mesh-size of $E$ or the length of $e$,
Young's inequality, and again the approximation properties
of the quasi-interpolation operator $\Pi_{h}$, we can estimate the second
term by the same term
$ c \left(\sum_{i=1}^{N} \alpha_{i} h_{i}^{2t}\|u\|^{2}_{H^{1+t}(\Omega^{(i)})}\right)^{1/2}$
with some (other) constant $c$.
This completes the proof of the theorem.
\end{proof}
Using duality arguments, we can also derive $L_2$-norm
error estimates that depend on the elliptic regularity.
Under the assumption of full elliptic regularity, we get
$\|u-u_{h} \|_{L_2(\Omega)} \le c\, h^{p+1} \|u\|_{H^{p+1}(\Omega)}$
that is nicely confirmed by our numerical experiments
presented in the next section for $p=1,2,3,4$.
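The convergence orders quoted in the next section are extracted from the error sequences in the usual way; a helper of this kind could look as follows (Python, with synthetic data in place of the measured errors).
\begin{verbatim}
import math

def observed_orders(hs, errors):
    """p_k = log(e_k/e_{k+1}) / log(h_k/h_{k+1}) for successive meshes."""
    return [math.log(errors[k]/errors[k+1])/math.log(hs[k]/hs[k+1])
            for k in range(len(hs) - 1)]

hs = [0.1/2**k for k in range(4)]
errs = [7.0*h**3 for h in hs]     # synthetic data mimicking O(h^{p+1}), p = 2
print(observed_orders(hs, errs))  # approximately [3.0, 3.0, 3.0]
\end{verbatim}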
\section{Numerical Results}
\label{sec5:NumericalResults}
The DG IGA method presented in this paper
as well as its continuous
Galerkin
counterpart have been implemented
in the object-oriented C++ IGA library ``Geometry + Simulation Modules''
(G+SMO)~\footnote{G+SMO : https://ricamsvn.ricam.oeaw.ac.at/trac/gismo}.
We present some first numerical results for testing
the numerical behavior of
the discretization error with respect to the mesh parameter $h$
and the polynomial degree $p$. Concerning the choice of the penalty parameter,
we used $\delta = 2(p+2)(p+1)$.
\par As a first example, we consider a non-homogeneous Dirichlet problem for
the Poisson equation
in the 2d computational domain
$\Omega \subset \mathbb{R}^2$
called Yeti's footprint, see also \cite{KleissPechsteinJuttlerTomar:2012a},
where the right-hand side $f$ and the Dirichlet data $g_D$ are chosen such
that $u(x_1,x_2) = \sin(\pi x_1)\sin(\pi x_2)$ is the solution of the boundary value problem.
The computational domain (left) and the solution (right) can be seen in
Fig.~\ref{sec5:fig1:YetiFoot}.
The Yeti footprint consists of 21 patches with varying open knot vectors $\Xi$
describing the
NURBS
discretization in a short and precise way,
see, e.g., \cite{CottrellHughesBazilevs:2009a}
for a detailed definition.
The knot vector for patches 1 to 16 and 21 is given as
$\Xi = (0,\ldots,0,0.5,1,\ldots,1)$ in both directions whereas
the knot vectors for the patches 17 to 20 are given as
$\Xi_1 = ( 0,\ldots,0,0.5,1,\ldots,1)$ and
$\Xi_2 = ( 0,\ldots,0,0.25,0.5,0.75,1,\ldots,1).$
\begin{figure}
\centering
\includegraphics[width = 0.35\textwidth]{Yeti} \hfil
\includegraphics[width = 0.35\textwidth]{YetiSolution}
\caption{Yeti foot: geometry (left) and DG-IGA solution (right)}
\label{sec5:fig1:YetiFoot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.7\textwidth]{L2NormErrorEstimate}
\caption{Yeti foot: $L_2$-norm errors for polynomial degrees $p$}
\label{sec5:fig2:YetiFootL2Error}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.7\textwidth]{DGNormErrorEstimate}
\caption{Yeti foot: DG-norm errors for polynomial degrees $p$}
\label{sec5:fig3:YetiFootDGError}
\end{figure}
In Fig.~\ref{sec5:fig2:YetiFootL2Error} and \ref{sec5:fig3:YetiFootDGError},
the errors in the $L_2$-norm and in the DG energy norm (\ref{sec3:DG-Norm})
are plotted against the number of degrees of freedom (DOFs) for polynomial degrees from 1 to 4.
It can be observed that we have convergence rates of $\mathcal{O}(h^{p+1})$
and $\mathcal{O}(h^{p})$, respectively.
This corresponds to our theory in Section~\ref{sec4:DiscretizationErrorEstimates}.
In the second example, we apply the DG-IGA to the same
Laplace-Beltrami problem on an open surface
as described in \cite{DedeQuarteroni:2012a}, section~5.1,
where $\Omega$ is a quarter cylinder represented by four patches in our computations,
see Fig.~\ref{sec5:fig4:MultiPatchCylinder} (left).
The $L_{2}$-norm errors plotted on the right side of
Fig.~\ref{sec5:fig4:MultiPatchCylinder}
exhibit the same numerical behavior as in the plane
case of the Yeti foot.
The same is true for the DG-norm.
\begin{figure}[bth!]
\centering
\includegraphics[width = 0.38\textwidth]{QuarterCylinderSolution}
\includegraphics[width = 0.60\textwidth]{L2NormErrorQuarterCylinder}
\caption{Quarter cylinder: geometry with the solution (left)
and $L_{2}$-norm errors (right).}
\label{sec5:fig4:MultiPatchCylinder}
\end{figure}
\section{Conclusions}
\label{sec6:Conclusion}
We have developed and analyzed a new method for the numerical approximation
of diffusion problems on open and closed surfaces
by combining the discontinuous Galerkin technique with isogeometric analysis.
We refer to our approach as the Discontinuous Galerkin Isogeometric Analysis (DG-IGA).
In our DG approach we allow discontinuities only across the
boundaries of the patches,
into which the computational domain is decomposed,
and enforce the interface conditions in the DG framework.
For simplicity of presentation, we assume that
the meshes are matching across the patches, and
the solution $u$
is at least patch-wise in $H^{1+s}$, i.e. $u \in H^{1+s}(\mathcal{T}_{H})$,
with some $s > 1/2$.
The cases of non-matching meshes and low-regularity solution,
that are technically more involved and that were investigated,
e.g., by \cite{Dryja:2003a} and \cite{DiPietroErn:2012a},
will be considered in a forthcoming paper.
The parallel solution of the DG-IGA equations can efficiently be performed by
Domain Decomposition (DD) solvers like the IETI technique
proposed by \cite{KleissPechsteinJuttlerTomar:2012a},
see also \cite{ApostolatosSchmidtWuencherBletzinger:2013a}
for other DD solvers. The construction and analysis of
efficient solution strategies is currently a hot research topic
since, beside efficient generation techniques,
the solvers are the efficiency bottleneck
in large-scale IGA computations.
\section*{Acknowledgement}
The authors gratefully acknowledge the financial support of the
research project NFN S117-03 by the Austrian Science Fund (FWF).
Furthermore, the authors want to thank their colleagues
Angelos Mantzaflaris, Satyendra Tomar, Ioannis Toulopoulos and Walter Zulehner
for fruitful and enlightening discussions as well as for their
help with the implementation in G+SMO.
\bibliographystyle{plainnat}
\section{Introduction}
A rod (or rod-like body) can be regarded as a spatial curve, to which two deformable vectors, called \textit{directors}, are assigned. This curve is also called a \textit{directed} or \textit{Cosserat} curve. The balance laws can be stated directly in terms of the curve velocity and director velocity vectors, and their work conjugate force and director force vectors, which eventually yields the equations of motion in the one-dimensional (curve) domain \citep{green1966general}. Since we actually deal with a three-dimensional continuum, one can consistently derive the equations of motion of the rod from those of the full three-dimensional continuum. This \textit{dimensional reduction}, or \textit{degeneration}, procedure is based on a suitable kinematic assumption, and this dimensionally reduced theoretical model is referred to as the \textit{beam} model. An \textit{exact} expansion of the position vector of any point of the beam at time $t$ is given as \citep{antman1966dynamical}
\begin{equation}
\label{intro_beam_kin_general}
{{\boldsymbol{x}}_t} = \sum\limits_{p = 0}^\infty {\sum\limits_{q = 0}^p {{{({\xi ^1})}^{p - q}}{{({\xi ^2})}^q}{{\boldsymbol{d}}^{(p - q,q)}}(\xi^3,t)} },
\end{equation}
where $\xi^\gamma$ ($\gamma=1,2$) denote the two coordinates in transverse (principal) directions of the cross-section plane, $\xi^3$ denotes the coordinate along the central axis, and
\begin{equation}
\label{def_taylor_deriv_xt}
{{\boldsymbol{d}}^{(p - q,q)}}(\xi^3,t) \coloneqq \frac{1}{{(p - q)!q!}}\left( {\frac{{{\partial ^p}{{\boldsymbol{x}}_t}}}{{\partial {{({\xi^1})}^{p - q}}\partial {{({\xi^2})}^q}}}} \right).
\end{equation}
Using the full conservation laws of a three-dimensional continuum as a starting point, applying the kinematics in Eq.\,(\ref{intro_beam_kin_general}) offers an \textit{exact} reparameterization of the three-dimensional theory into the one-dimensional one \citep{antman1966dynamical,green1968rods}. However, this theory has an infinite number of equations and unknowns, which makes it intractable for a finite element formulation and computation. The \textit{first order theory} assumes the position vector to be a linear function of the coordinates $\xi^{\gamma}$, i.e. \citep{volterra1956equations,antman1966dynamical}
\begin{equation}\label{intro_beam_th_str_cur_config}
{{\boldsymbol{x}}_t} = {{\boldsymbol{\varphi }}}({\xi^3},t) + \sum\limits_{\gamma = 1}^2 {{\xi ^\gamma }{{\boldsymbol{d}}_\gamma }({\xi ^3},t)},
\end{equation}
where ${\boldsymbol{\varphi }}(\xi^3,t)\equiv{{\boldsymbol{d}}^{(0,0)}}(\xi^3,t)$ denotes the position of the beam central axis, and two directors are denoted by ${\boldsymbol{d}}_1(\xi ^3,t)\equiv{\boldsymbol{d}}_{}^{(1,0)}(\xi ^3,t)$ and ${\boldsymbol{d}}_2(\xi ^3,t)\equiv{\boldsymbol{d}}_{}^{(0,1)}(\xi ^3,t)$. This approximation simplifies the strain field; it physically implies that planar cross-sections still remain planar after deformation, but allows for constant in-plane stretching and shear deformations of the cross-section. This implies that the linear in-plane strain field in the cross-section due to the Poisson effect in bending mode cannot be accommodated in the first order theory\footnote{One can find an analytical example and discussion on this in section 6 of \citet{green1967linear}.}, which consequently increases the bending stiffness. This problem is often referred to as \textit{Poisson locking}, and the resulting error does not reduce with mesh refinement along the central axis since the displacement field in the cross-section is still linear \citep{bischoff1997shear}. One may extend the formulation in Eq.\,(\ref{intro_beam_th_str_cur_config}) to quadratic displacement field in the cross-section by adding the second order terms about the coordinates $\xi^{\gamma}$ in order to allow for a linear in-plane strain field. There are several theoretical works on this \textit{second order theory} including the work by \citet{pastrone1978dynamics} and on even higher $N$-th order theory by \citet{antman1966dynamical}. Since shell formulations have only one thickness direction, higher-order formulations are simpler than for beams. Several works including \citet{parisch1995continuum}, \citet{brank2002nonlinear}, and \citet{hokkanen2019isogeometric} employed second order theory in shell formulations. In beam formulations, several previous works considering the extensible director kinematics, which allows in-plane cross-section deformations, can be found. A theoretical study to derive balance equations and objective strain measures based on the polar decomposition of the in-plane cross-sectional deformation can be found in \citet{kumar2011geometrically}. Further extension to initially curved beams was proposed in \citet{genovese2014two}, where unconstrained quaternion parameters were utilized to represent both in-plane stretching and rotation of cross-sections. In those works, constitutive models are typically simplified to the form of quadratic strain energy density function. \citet{durville2012contact} also employed a first order theory in frictional beam-to-beam contact problems, where the constitutive law was simplified to avoid Poisson locking. \citet{coda2009solid} employed second order theory combined with an additional warping degree-of-freedom. However, it turns out that the linear in-plane strain field for the cross-section is not complete, so that the missing bilinear terms may lead to severe Poisson locking. In order to have a linear strain field in the cross-section with the increase of the number of unknowns minimized, one may extend the kinematics of Eq.\,(\ref{intro_beam_th_str_cur_config}) to
\begin{equation}\label{intro_beam_th_str_cur_config_2nd}
{\boldsymbol{x}} = {\boldsymbol{\varphi }}({\xi ^3}) + {\xi ^1}\left( {1 + {a_1}{\xi ^1} + {a_2}{\xi ^2}} \right){{\boldsymbol{d}}_1}({\xi ^3}) + {\xi ^2}\left({1 + {b_1}{\xi ^1} + {b_2}{\xi ^2}} \right){{\boldsymbol{d}}_2}({\xi ^3}),
\end{equation}
where four additional unknown coefficient functions $a_{\gamma}=a_{\gamma}({\xi^3})$ and $b_{\gamma}=b_{\gamma}({\xi^3})$ $({\gamma}=1,2)$ are introduced. Here and hereafter, the dependence of variables on time $t$ is usually omitted for brevity of expressions. This enrichment enables additional modes of the cross-sectional deformation (see Fig.\,\ref{intro_cs_deform_linear} for an illustration), which are also induced in bending deformation due to the Poisson effect.
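The role of the four coefficient functions can be made concrete with a small numerical sketch. The following Python snippet (with purely illustrative values, and with $\boldsymbol{\varphi}$, $\boldsymbol{d}_1$, $\boldsymbol{d}_2$ frozen at one cross-section) evaluates Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}) at the corners of a unit square cross-section; with $b_1 \neq 0$ the corners trace the trapezoidal mode of Fig.\,\ref{intro_cs_deform_2nd_trapezoid}.
\begin{verbatim}
import numpy as np

def material_point(phi, d1, d2, xi1, xi2, a1=0.0, a2=0.0, b1=0.0, b2=0.0):
    # x = phi + xi1*(1 + a1*xi1 + a2*xi2)*d1 + xi2*(1 + b1*xi1 + b2*xi2)*d2
    phi, d1, d2 = map(np.asarray, (phi, d1, d2))
    return (phi + xi1*(1.0 + a1*xi1 + a2*xi2)*d1
                + xi2*(1.0 + b1*xi1 + b2*xi2)*d2)

corners = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
for xi1, xi2 in corners:
    print(material_point([0, 0, 0], [1, 0, 0], [0, 1, 0],
                         xi1, xi2, b1=0.4))
\end{verbatim}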
\begin{figure*}[htb]
\centering
%
\begin{subfigure}[b] {0.45\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/intro_in-plane_deform_2nd.png}
\caption{Linear strain due to quadratic terms}
\label{intro_cs_deform_2nd}
\end{subfigure}
%
\begin{subfigure}[b] {0.45\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/intro_in-plane_deform_2nd_trapezoid.png}
\caption{Linear strain due to bilinear terms}
\label{intro_cs_deform_2nd_trapezoid}
\end{subfigure}
\caption{{Illustration of in-plane deformations of the cross-section with linear strain field. The dashed and solid lines represent the undeformed and deformed cross-sections, respectively. (a) The through-the-thickness stretching strain is linear along the $\xi^1$ and $\xi^2$ directions in case of $a_1\ne0$ and $b_2\ne0$, respectively. (b) The through-the-thickness stretching strain is linear along the $\xi^2$ and $\xi^1$ directions in case of $a_2\ne0$ and $b_1\ne0$, respectively. Note that the deformed cross-sections have trapezoidal shapes.}}
\label{intro_cs_deform_linear}
\end{figure*}
{Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}) recovers the kinematic assumption\footnote{In this paper, we focus on in-plane deformation of cross-section, although additional warping degrees-of-freedom were considered in the work of \cite{coda2009solid}. This restricts the range of application to compact convex cross-sections, where the warping effect is not pronounced.} in \cite{coda2009solid} if $a_2=b_1=0$, which means the absence of bilinear terms, so that the trapezoidal cross-section deformation, shown in Fig.\,\ref{intro_cs_deform_2nd_trapezoid}, cannot be accommodated. Therefore, Poisson locking cannot be effectively alleviated. In this paper, we employ the enhanced assumed strain (EAS) method to circumvent Poisson locking in the first order theory. In order to verify the significance of those bilinear terms in Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}), in a numerical example of section \ref{ex_cant_b_end_f}, we compare two different EAS formulations based on five and nine enhanced strain parameters, respectively. The formulation of five enhanced strain parameters is obtained by ignoring the incompatible modes of trapezoidal cross-section deformation, i.e., it considers only the incompatible modes of Fig.\,\ref{intro_cs_deform_2nd}. The other one with nine enhanced strain parameters considers the whole set of incompatible linear cross-section modes, i.e., it considers both of the incompatible modes of Fig.\,\ref{intro_cs_deform_2nd} and \ref{intro_cs_deform_2nd_trapezoid}.}
{The enhanced assumed strain (EAS) method developed in \citet{simo1990class} is based on the three-field Hu-Washizu variational principle. As the independent stress field is eliminated from the variational formulation by an orthogonality condition, it becomes a two-field variational formulation in terms of displacement and enhanced strain fields. Further, the enhanced strain parameters can be condensed out on the element level; thus the basic features of a displacement-based formulation are preserved. This method was generalized in the context of nonlinear problems in \citet{simo1992geometrically} in which a multiplicative decomposition of the deformation gradient into compatible and incompatible parts is used. One can refer to several works including \citet{buchter1994three}, \citet{betsch19964}, \citet{bischoff1997shear}, and \cite{brank2002nonlinear} for EAS-based shell formulations. In this paper, we apply the EAS method to the beam formulation. Beyond previous beam formulations based on the kinematics of extensible directors, our work has the following highlights:}
\begin{itemize}
\item {Consistency in balance equations and boundary conditions: The director field as well as the central axis displacement field satisfy the momentum balance equations and boundary conditions consistently derived from those of the three-dimensional continuum body. In the formulation of \citet{coda2009solid} and \citet{durville2012contact}, there are no detailed expressions of balance equations, beam strains, and stress resultants. To the best of our knowledge, in those works, the finite element formulation can be obtained by substituting the beam kinematic expression of the current material point position into the deformation gradient of three-dimensional elasticity. This \textit{solid-like} formulation yields an equivalent finite element formulation through a much more simplified derivation process. However, in addition to the possibility of applying mixed variational formulations in future works, the derivation of balance equations, beam strains, and stress resultants turns out to be significant in the interpretation of coupling between different strain components (for examples, see sections \ref{ex_end_mnt_subsub_axial} and \ref{ex_end_mnt_subsub_th}.)}
\item {We employ the EAS-method, where the additional strain parameters are statically condensed out, so that the same number of nodal degrees-of-freedom is used as in the pure displacement-based formulation. Each of the enhanced in-plane transverse normal strain components is linear in both of $\xi^1$ and $\xi^2$, which is in contrast to the strains obtained from the kinematic assumption in \cite{coda2009solid}. In the numerical example of section \ref{ex_cant_b_end_f}, it is verified that this further enrichment alleviates Poisson locking more effectively.}
\item Significance of correct surface loads: The consistently derived traction boundary condition shows that considering the correct surface load leads to an external director stress couple term that turns out to play a significant role in the accuracy of analysis.
\item Incorporation of general hyperelastic constitutive laws: As we consider the complete six stress components without any zero stress condition, our beam formulation naturally includes a straightforward interface for general three-dimensional constitutive laws.
\item Verification by comparison with brick element solution: We verify the accuracy and efficiency of our beam formulation by comparison with the results from brick elements.
\end{itemize}
It turns out that if linear shape functions are used to interpolate the director field, an artificial thickness stretch arises in bending deformations due to parasitic strain terms, and it eventually increases the bending stiffness. This effect is called \textit{curvature thickness locking}. Since the parasitic terms vanish at the nodal points, the assumed natural strain (ANS) method interpolates the transverse normal (through-the-thickness) stretch at nodes instead of evaluating it at Gauss integration points \citep*{betsch1995assumed, bischoff1997shear}. {For membrane and transverse shear locking, there are several other existing methods, for example, the selective reduced integration method in \citet{adam2014improved}, the Greville quadrature method in \citet{zou2021galerkin}, and the mixed variational formulation in \citet{wackerfuss2009mixed,wackerfuss2011nonlinear}. However, since curvature-thickness, membrane, and transverse shear locking issues become less significant with mesh refinement or higher-order basis functions, especially for the low to moderate slenderness ratios of interest here, no special treatment is implemented in this paper (see the investigation on those locking issues in section \ref{ex_beam_end_mnt_allev_lock}). Further investigation on the application of these existing methods remains future work.}
If we restrict the two directors in Eq.\,(\ref{intro_beam_th_str_cur_config}) to be orthonormal, which physically means that the cross-section is rigid, large rotations of the cross-section can be described by an orthogonal transformation. In planar static problems, \citet{reissner1972one} derived the force and moment balance equations, from which the strain-displacement relation is obtained via the principle of virtual work and work conjugate relations. Since this approach poses no assumption on the magnitude of deformations, it is often called \textit{geometrically exact beam theory}. This work was extended to three-dimensional dynamic problems by \citet{simo1985finite}, which was followed by the finite element formulation of static problems in \citet{simo1986three}. An additional degree-of-freedom related to torsion-warping deformation was added in \citet{simo1991geometrically}, and this work was extended by \citet{gruttmann1998geometrical} to consider eccentricity with arbitrary cross-section shapes. There have been a number of works on the parameterization of finite rotations, and the multiplicative or additive configuration update process. One may refer to the overviews on this given by \citet{meier2014objective} and \citet{crisfield1999objectivity}. In \citet{crisfield1999objectivity}, it was pointed out that the usual spatial discretization of the strain measures in \citet{simo1986three} leads to non-invariance of the interpolated strain measures in rigid body rotation, even though the strain measures in continuum form are objective. This non-objectivity stems from the non-commutativity, i.e., the non-vectorial nature, of finite rotations. To retain the objectivity of strain measures in the underlying continuum formulation, the isoparametric interpolation of director vectors is used instead of interpolating the rotational parameters (see for example \citealp{betsch2002frame, romero2002objective, eugster2014director}), and the subsequent weak form of the finite element formulation is reformulated. As those beam formulations still assume rigid cross-sections, the orthonormality condition of the director vectors should be satisfied. Several methods to impose the constraint can be found in the literature; examples are the Lagrange multiplier method \citep{betsch2002frame, eugster2014director} and the introduction of nodal rotational degrees-of-freedom \citep*{betsch2002frame, romero2002objective}. {In order to preserve the objectivity and path-independence in the rotation interpolation, several methods have been developed, for example, orthogonal interpolation of relative rotation vectors \citep{crisfield1999objectivity,ghosh2009frame}, geodesic interpolation \citep{sander2010geodesic}, and interpolation of quaternion parameters \citep{zupan2013virtual}. \citet{romero2004interpolation} compared several rotation interpolation schemes from the perspective of computational accuracy and efficiency. A more comprehensive review on geometrically exact finite element beam formulations can be found in \citet{meier2019geometrically}}. In the isoparametric approximation of directors, employed in our beam formulation, the director vectors belong to $\Bbb{R}^3$, that is, no orthonormality condition is imposed. This means that the cross-section can undergo in-plane deformations like transverse normal stretch and in-plane shear deformations.
{Further, it enables us to avoid the rotation group, a nonlinear manifold which complicates the configuration and strain update process \citep{durville2012contact}, in the configuration space of the beam. \citet{coda2009solid} and \citet{coda2011fem}, who employed an isoparametric interpolation of directors without orthonormality condition, presented several numerical examples showing the objectivity and path-independence of the finite element formulation.}
Classical beam theories introduce the zero transverse stress condition based on the assumption that the transverse stresses are much smaller than the axial and transverse shear stresses. Thus, six stress components in the three-dimensional theory reduce to three components including the transverse shear components in the Timoshenko beam theory. However, this often complicates the application of three-dimensional nonlinear material laws, and requires a computationally expensive iteration process. Global and local iteration algorithms to enforce the zero stress condition at Gauss integration points were developed in \citet{de1991zero} and \citet{klinkel2002using}, respectively. One can also refer to several recent works on Kirchhoff-Love shell formulations with general three-dimensional constitutive laws, where the transverse normal strain component can be condensed out by applying the plane stress condition in an analytical or iterative manner, for example, for hyperelasticity by \citet{kiendl2015isogeometric} and \citet{duong2017new}, and elasto-plasticity by \citet{ambati2018isogeometric}. There are several other finite element formulations to dimensionally reduce slender three-dimensional bodies and incorporate general three-dimensional constitutive laws. The so-called \textit{solid beam formulation} uses a single brick element\footnote{This is sometimes called a \textit{solid element}.} in thickness direction. To avoid severe stiffening effects typically observed in low-order elements, a brick element was developed based on the EAS method in geometrically nonlinear problems \citep*{klinkel1997geometrical}. A brick element combined with EAS, ANS, and reduced integration methods in order to alleviate locking was presented in \citet{frischkorn2013solid}. The absolute nodal coordinate (ANC) formulation uses slope vectors as nodal variables to describe the orientation of the cross-section. The \textit{fully parameterized} ANC element enables straightforward implementation of general nonlinear constitutive laws. A comprehensive review on the ANC element can be found in \citet{gerstmayr2013review}, and one can also refer to a comparison with the geometrically exact beam formulation in \citet{romero2008comparison}. \citet{wackerfuss2009mixed, wackerfuss2011nonlinear} presented a mixed variational formulation, which allows a straightforward interface to arbitrary three-dimensional constitutive laws, where each node has the common three translational and three rotational degrees-of-freedom, as the additional degrees-of-freedom are eliminated on element level via static condensation.
Isogeometric analysis (IGA) was introduced in \citet{hughes2005isogeometric} to bridge the gap between computer-aided design (CAD) and computer-aided engineering (CAE) like finite element analysis (FEA) by employing non-uniform rational B-splines (NURBS) basis functions to approximate the solution field as well as the geometry. IGA enables the exact geometrical representation of the initial configuration in CAD to be directly utilized in the analysis without any approximation, even at a coarse level of spatial discretization. Further, the high-order continuity of NURBS basis functions is advantageous in describing the beam and shell kinematics under the Kirchhoff-Love constraint, which requires at least $C^1$-continuity in the displacement field. IGA was utilized for example in \citet{kiendl2015isogeometric}, \citet{duong2017new}, and \citet{ambati2018isogeometric} for Kirchhoff-Love shells, and in \citet{bauer2020weak} for Euler-Bernoulli beams. For geometrically exact Timoshenko beams, an isogeometric collocation method was presented by \citet{marino2016isogeometric}, and it was extended to a mixed formulation in \citet{marino2017locking}. An isogeometric finite element formulation and configuration design sensitivity analysis were presented in \citet{choi2019isogeometric}. Recently, \citet{vo2020total} used the Green-Lagrange strain measure with the St.\,Venant-Kirchhoff material model under the zero stress condition. There have been several works to develop optimal quadrature rules for higher order NURBS basis functions to alleviate shear and membrane locking, for example, a selective reduced integration in \citet{adam2014improved} and Greville quadrature in \cite{zou2021galerkin}. Since our beam formulation allows for additional cross-sectional deformations, from which another type of locking due to the coupling between bending and cross-section deformations appears, further investigation is required to apply those quadrature rules to our beam formulation; this remains future work.
{There are many applications where one may find deformable cross-sections of rods or rod-like bodies with low or moderate slenderness ratios. Although one can find many beam structures with open and thin-walled cross-sections in industrial applications, which require torsion-warping deformations to be considered, we focus on convex cross-sections in this paper, and the incorporation of out-of-plane deformations in the cross-section remains future work. Our beam formulation is useful for the analysis of beams with low to moderate slenderness ratios, where the deformation of the cross-section shape is significant, for example, due to local contact or the Poisson effect. For example, our beam formulation can be applied to the analysis of lattice or textile structures where individual ligaments or fibers have moderate slenderness ratio, and to coarse-grained modeling of carbon nanotubes and DNA. Those applications are often characterized by circular or elliptical cross-section shapes. For highly slender beams, it has been shown that the assumption of undeformable cross-sections and shear-free deformations, i.e., Kirchhoff-Love theory, can be effectively and efficiently utilized \citep{meier2019geometrically}, since it makes it possible to further reduce the number of degrees-of-freedom and to avoid numerical instability due to the coupling of shear and cross-sectional deformations with bending deformation. This formulation was successfully applied to contact problems, for example, contact interactions in complex systems of fibers \citep{meier2017unified}. As the slenderness ratio decreases, the analysis of local contact with cross-sectional deformations becomes significant. One example is the coupling between normal extension of the cross-section and bending deformation that can be found in the works of \citet{naghdi1989significance} and \citet{nordenholz1997steady}.}
The remainder of this paper is organized as follows. In section \ref{beam_kin}, we present the beam kinematics based on extensible directors. In section \ref{eq_motion}, we derive the momentum balance equations from the balance laws of a three-dimensional continuum, and define stress resultants and director stress couples. In section \ref{var_for_weak_form}, we derive the beam strain measures that are work conjugate to the stress resultants and director stress couples. Further, the expression of external stress resultants and director stress couples are obtained from the surface loads. In section \ref{var_form_constitutive_law} we detail the process of reducing three-dimensional hyperelastic constitutive laws to one-dimensional ones. {In section \ref{eas_formulation}, we present the enhanced assumed strain method to alleviate Poisson locking.} In section \ref{num_ex}, we verify the developed beam formulation in various numerical examples by comparing the results with those of IGA brick elements. For completeness, appendices to the beam formulation and further numerical examples are given in Appendices \ref{app_theory} and \ref{app_hypelas_conv_test}, respectively.
\section{Beam kinematics}
\label{beam_kin}
The configuration of a beam is described by a family of \textit{cross-sections} whose centroids\footnote{In this paper, the \textit{centroid} refers to the mass centroid. If we assume a constant mass density, it coincides with the \textit{geometrical} centroid.} are connected by a spatial curve referred to as the \textit{central axis}. An initial (undeformed) configuration of the central axis $\mathcal{C}_0$ is given by a spatial curve parameterized by a parametric coordinate $\xi\in{\Bbb{R}^1}$, i.e., ${\mathcal{C}_0}:\,{\xi} \to {{\boldsymbol{\varphi }}_0}({\xi}) \in {{\Bbb{R}}^3}$. The initial configuration of the central axis is reparameterized by the arc-length parameter $s \in \left[ {0,L} \right] \subset {{\Bbb{R}}^1}$, that is, ${\mathcal{C}_0}:\,s \to {{\boldsymbol{\varphi }}_0}(s) \in {{\Bbb{R}}^3}$. $L$ represents the length of the initial central axis. This reparameterization is advantageous to simplify the subsequent expressions due to $\left\| {{{\boldsymbol{\varphi }}_{0,s}}} \right\| = 1$. The cross-section $\mathcal{A}_0\subset {\Bbb{R}^2}$ is spanned by two orthonormal base vectors ${{\boldsymbol{D}}_{\gamma}}(s) \in {\Bbb R}^3$ ($\gamma = 1,2$), which are called \textit{initial directors}, aligned along the principal directions of the second moment of inertia of the cross-section. Further, ${{{\boldsymbol{D}}_3}(s)}$ is defined as a unit normal vector to the initial cross-section. In this paper, it is assumed that the cross-section is orthogonal to the central axis in the initial configuration, so that we simply obtain ${{{\boldsymbol{D}}_3}(s)} \coloneqq {{\boldsymbol{\varphi }}_{0,{s}}}(s)$, which is tangent to the initial central axis. Here and hereafter, $(\bullet)_{,s}$ denotes the partial differentiation with respect to the arc-length parameter $s$. The current (deformed) configuration of the central axis is defined by the spatial curve ${{\mathcal{C}}_t}:\,s \to {{\boldsymbol{\varphi }}}(s,t) \in {{\Bbb{R}}^3}$, where $t\in{\Bbb R}^{+}$ denotes time. In the current configuration, the cross-section $\mathcal{A}_t\subset {\Bbb{R}^2}$ is defined by a plane normal to the \textit{unit vector} ${{\boldsymbol{d}_3}}(s,t) \in {{\Bbb{R}}^3}$, and the plane is spanned by two base vectors ${\boldsymbol{d}_{\gamma}}(s,t) \in {\Bbb R}^3$ ($\gamma = 1,2$), which are referred to as \textit{current directors}. In contrast to the initial configuration, those current directors are not necessarily orthogonal to each other or of unit length. Their length only needs to satisfy
\begin{equation}\label{beam_th_str_lambda_def}
{\lambda _\gamma }(s,t) \coloneqq \left\| {{{\boldsymbol{d}}_\gamma }(s,t)} \right\| > 0\,\,{\text{for}}\,\,s \in [0,L].
\end{equation}
Furthermore, in the current configuration, the cross-section remains planar but is not necessarily normal to the tangent vector ${\boldsymbol{\varphi}_{\!,s}}(s,t)$, due to transverse shear deformation. The vector ${{\boldsymbol{d}_3}}(s,t)$, which is normal to the current cross-section, can be obtained from the current directors as
\begin{equation}\label{beam_th_str_calc_d3_vec}
{{\boldsymbol{d}}_3} = \frac{{{{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}}}{{\left\| {{{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}} \right\|}}\,\,\text{where}\,\,{\left\| {{{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}} \right\|}\ne 0.
\end{equation}
Note that the condition ${\left\| {{{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}} \right\|}\ne 0$ precludes the physically unreasonable situation of infinite in-plane shear deformation of the cross-section. We also postulate the condition
\begin{equation}\label{beam_th_str_tangent_cond}
{{\boldsymbol{\varphi}}_{\!,s}} \cdot ({{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}) > 0,
\end{equation}
which precludes the unphysical situation of infinite transverse shear deformation. We define $\left\{ {{{\boldsymbol{e}}_1},{{\boldsymbol{e}}_2},{{\boldsymbol{e}}_3}} \right\}$ as a standard Cartesian basis in ${{\Bbb R}^3}$. Fig.\,\ref{beam_kin_3_domains} schematically illustrates the above kinematic description of the initial and current beam configurations.
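As a purely illustrative numerical sketch of Eqs.\,(\ref{beam_th_str_lambda_def}) and (\ref{beam_th_str_calc_d3_vec}) (the function name and the error handling are our own choices, not part of the formulation), the normal vector $\boldsymbol{d}_3$ can be computed from given directors as follows:
\begin{verbatim}
import numpy as np

def normal_director(d1, d2):
    # d3 = (d1 x d2) / ||d1 x d2||; a vanishing norm corresponds to
    # infinite in-plane shear of the cross-section and is rejected.
    c = np.cross(d1, d2)
    n = np.linalg.norm(c)
    if n == 0.0:
        raise ValueError("d1 x d2 vanishes: degenerate cross-section")
    return c / n
\end{verbatim}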
\begin{figure}[htp]
\centering
\includegraphics[width=0.565\linewidth]{Figure/beam_kin_rev_1_low_res.png}
\caption{A schematic illustration of the beam kinematics in the initial and current configurations.}
\label{beam_kin_3_domains}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.4\linewidth]{Figure/beam_ref_domain.png}
\caption{An example of the reference domain $\mathcal{B}$ in the case of rectangular cross-section with dimension $h_1\times{h_2}$.}
\label{beam_ref_domain}
\end{figure}
\noindent We define a \textit{reference domain} ${\mathcal{B}}\coloneqq (0,L) \times {\mathcal{A}}$, where ${\mathcal{A}}$ denotes the open domain of coordinates $\xi^1$ and $\xi^2$. For example, for a rectangular cross-section with dimension $h_1\times{h_2}$ we have ${({\xi^1},{\xi^2})}\in{\mathcal{A}}\coloneqq (-{h_1}/2,{h_1}/2)\times(-{h_2}/2,{h_2}/2)$, see Fig.\,\ref{beam_ref_domain} for an illustration. The location of each point in the reference domain is expressed in terms of the coordinates ${\xi^1}$, ${\xi^2}$, and ${\xi^3}$ in the standard Cartesian basis in ${{\Bbb R}^3}$ denoted by ${{\boldsymbol{E}}_1}$, ${{\boldsymbol{E}}_2}$, and ${{\boldsymbol{E}}_3}$. We then define two mappings from the reference domain to the initial configuration $\mathcal{B}_0$ and to the current configuration $\mathcal{B}_t$ respectively by ${{\boldsymbol{x}}_0}:{\mathcal{B}} \to {{\mathcal{B}}_0}$ and ${{\boldsymbol{x}}_t}:{\mathcal{B}} \to {{\mathcal{B}}_t}$. The deformation from the initial to the current configuration is then expressed by the mapping
\begin{equation}
{{\boldsymbol{\Phi }}_t} \coloneqq {{\boldsymbol{x}}_t} \circ {{\boldsymbol{x}}_0}^{ - 1}:{{\mathcal{B}}_0} \to {{\mathcal{B}}_t}.
\end{equation}
The initial (undeformed) configuration is expressed by
\begin{equation}\label{beam_th_str_init_config}
{{\boldsymbol{x}}_0} = {{\boldsymbol{x}}_0}({\xi ^1},{\xi ^2},{\xi ^3}) \coloneqq {{\boldsymbol{\varphi }}_0}(s) + {{\xi ^\gamma }{{\boldsymbol{D}}_\gamma }(s)},
\end{equation}
where ${\xi^3} \equiv s$. We note that the coordinates ${\xi^1},{\xi^2},{\xi^3}$ are chosen to have the dimension of length, so that the director vectors ${\boldsymbol{d}_1}$ and ${\boldsymbol{d}_2}$ are dimensionless. Here and hereafter, unless stated otherwise, repeated Latin indices like $i$ and $j$ imply summation over $1$ to $3$, and repeated Greek indices like $\alpha$, $\beta$, and $\gamma$ imply summation over $1$ to $2$. Also, the parameter $s$ is often replaced by $\xi^3$ for notational convenience. We define a covariant basis ${{\boldsymbol{G}}_{i}} \coloneqq \partial {{\boldsymbol{x}}_0}/\partial {\xi ^i}$ ($i=1,2,3$), which follows as
\begin{equation}\label{beam_th_str_init_cov_base}
\left\{ \begin{array}{l}
\begin{aligned}
{{\boldsymbol{G}}_1}({{\xi}^1},{{\xi}^2},{{\xi}^3}) &= {{\boldsymbol{D}}_1}(s),\\
{{\boldsymbol{G}}_2}({{\xi}^1},{{\xi}^2},{{\xi}^3}) &= {{\boldsymbol{D}}_2}(s),\\
{{\boldsymbol{G}}_3}({{\xi}^1},{{\xi}^2},{{\xi}^3}) &= {{\boldsymbol{D}}_3}(s) + {{\xi ^\gamma }{{\boldsymbol{D}}_{\gamma ,s}}(s)}.\\
\end{aligned}
\end{array} \right.
\end{equation}
\noindent The Fr{\'{e}}chet derivative of the initial configuration is then written as
\begin{align}\label{beam_th_str_frechet_deriv_init}
D{{\boldsymbol{x}}_0} \coloneqq {{\boldsymbol{G}}_{i}} \otimes {{{\boldsymbol{E}}^i}},
\end{align}
where ${{\boldsymbol{E}}^i} \equiv {{\boldsymbol{E}}_i}$. From the orthogonality condition ${\boldsymbol{G}_{i}} \cdot {{\boldsymbol{G}}^j} = \delta _i^j$, where the Kronecker-delta symbol is defined as
\begin{equation}
\delta _i^j = \left\{ {\begin{array}{*{20}{c}}
{0\,\,\,\,{\rm{if}}\,\,i \ne j},\\
{1\,\,\,\,{\rm{if}}\,\,i = j},
\end{array}} \right.\,\,\,\,(i,j=1,2,3),
\end{equation}
we obtain a contravariant (reciprocal) basis as
\begin{equation}\label{beam_th_str_reciprocal_basis_init}
{{\boldsymbol{G}}^i} \coloneqq D{{\boldsymbol{x}}_0}^{ - \mathrm{T}}{{\boldsymbol{E}}^i}\,\,\,\,(i=1,2,3).
\end{equation}
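Numerically, Eq.\,(\ref{beam_th_str_reciprocal_basis_init}) amounts to a matrix inversion. The following sketch is illustrative only; storing the base vectors as matrix columns is our own convention:
\begin{verbatim}
import numpy as np

def contravariant_basis(G):
    # Columns of G are the covariant vectors G_1, G_2, G_3, so G is
    # the matrix of D x0 in the Cartesian basis. The duality relation
    # G_i . G^j = delta_i^j gives the G^j as the columns of inv(G).T.
    return np.linalg.inv(G).T
\end{verbatim}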
For convenience, we recall here the expression of the current position vector of an arbitrary point of the beam at time $t$ from Eq.\,(\ref{intro_beam_th_str_cur_config})
\begin{equation}\label{beam_th_str_cur_config}
{{\boldsymbol{x}}_t} = {{\boldsymbol{x}}_t({\xi^1},{\xi^2},{\xi^3},t)} = {{\boldsymbol{\varphi }}}(s,t) + {{\xi ^\gamma }{{\boldsymbol{d}}_\gamma }(s,t)}.
\end{equation}
A covariant basis, defined as ${{\boldsymbol{g}}_i} \coloneqq \partial {{\boldsymbol{x}}_t}/\partial {\xi ^i}$, is expressed by
\begin{equation}\label{beam_th_str_cur_cov_base}
\left\{ \begin{array}{l}
\begin{aligned}
{{\boldsymbol{g}}_1}({{\xi}^1},{{\xi}^2},{{\xi}^3},t) &= {{\boldsymbol{d}}_1}(s,t),\\
{{\boldsymbol{g}}_2}({{\xi}^1},{{\xi}^2},{{\xi}^3},t) &= {{\boldsymbol{d}}_2}(s,t),\\
{{\boldsymbol{g}}_3}({{\xi}^1},{{\xi}^2},{{\xi}^3},t) &= {{{\boldsymbol{\varphi}}_{\!,s}}}(s,t) + {{\xi ^\gamma }{{\boldsymbol{d}}_{\gamma ,s}}(s,t)}.
\end{aligned}
\end{array} \right.
\end{equation}
The Fr{\'{e}}chet derivative of the mapping $\boldsymbol{x}_t({\xi^1},{\xi^2},{\xi^3},t)$ is written as
\begin{align}\label{beam_th_str_frechet_deriv}
D{{\boldsymbol{x}}_t} \coloneqq {{\boldsymbol{g}}_i} \otimes {{\boldsymbol{E}}^i}.
\end{align}
From the orthogonality condition ${{\boldsymbol{g}}_i} \cdot {{\boldsymbol{g}}^j} = \delta _i^j$ we obtain the contravariant basis as
\begin{equation}\label{beam_th_str_reciprocal_basis}
{{\boldsymbol{g}}^i} \coloneqq D{{\boldsymbol{x}}_t}^{ - {\mathrm{T}}}{{\boldsymbol{E}}^i}\,\,\,\,(i=1,2,3).
\end{equation}
The deformation gradient tensor of the mapping is obtained by
\begin{align} \label{beam_th_str_deform_grad}
{{\boldsymbol{F}}} \coloneqq D{{\boldsymbol{\Phi }}_t}= D{{\boldsymbol{x}}_t}D{{\boldsymbol{x}}_0}^{ - 1}= {\boldsymbol{g}_i}\otimes{\boldsymbol{G}^i}.
\end{align}
The Jacobian of the mapping ${{\boldsymbol{\Phi}}_t}$ is then given by
\begin{equation}\label{beam_th_str_jcb_init_to_cur}
{J_t} \coloneqq \det {{\boldsymbol{F}}} = \frac{{{j_t}}}{{{j_0}}},
\end{equation}
where $\det [\bullet]$ denotes the determinant. Here, $j_0$ and $j_t$ respectively define the Jacobians of the mappings ${\boldsymbol{x}}_0({\xi^1},{\xi^2},{\xi^3})$ and ${\boldsymbol{x}}_t({\xi^1},{\xi^2},{\xi^3},t)$, and can be expressed in terms of the covariant base vectors, as (see Appendix \ref{deriv_jacob} for a derivation)
\begin{equation}\label{beam_th_str_jcb_init}
{j_0} \coloneqq \det D{{\boldsymbol{x}}_0} = \left({{\boldsymbol{G}}_1} \times {{\boldsymbol{G}}_2}\right)\cdot{{\boldsymbol{G}}_3},
\end{equation}
and
\begin{equation}\label{beam_th_str_jcb_cur}
{j_t} \coloneqq \det D{{\boldsymbol{x}}_t} = \left({{\boldsymbol{g}}_1} \times {{\boldsymbol{g}}_2}\right)\cdot{{\boldsymbol{g}}_3}.
\end{equation}
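The relations of Eqs.\,(\ref{beam_th_str_deform_grad})--(\ref{beam_th_str_jcb_cur}) can be collected in a short numerical sketch (illustrative only; base vectors are again stored as matrix columns):
\begin{verbatim}
import numpy as np

def jacobian(b):
    # j = (b_1 x b_2) . b_3 for a basis stored as the columns of b.
    return np.dot(np.cross(b[:, 0], b[:, 1]), b[:, 2])

def deformation_gradient(g, G):
    # F = D x_t (D x_0)^{-1} = g_i (x) G^i, covariant bases as columns.
    return g @ np.linalg.inv(G)

# Consistency check of J_t = det(F) = j_t / j_0:
#   np.isclose(np.linalg.det(deformation_gradient(g, G)),
#              jacobian(g) / jacobian(G))
\end{verbatim}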
The infinitesimal volume in the reference domain can be expressed by
\begin{equation}
{\mathrm{d}}{\mathcal{B}} = {\mathrm{d}}{\xi^1}{\mathrm{d}}{\xi^2}{\mathrm{d}}{\xi^3}.
\end{equation}
The corresponding infinitesimal volumes under the mappings of Eqs.\,(\ref{beam_th_str_init_config}) and (\ref{beam_th_str_cur_config}) are, respectively, obtained as
\begin{subequations}
\begin{align}\label{beam_inf_vol_jcb}
{\mathrm{d}}{\mathcal{B}_0} &= {j_0}\,{\mathrm{d}}{\mathcal{B}},\\
{\mathrm{d}}{\mathcal{B}_t}&={j_t}\,{\mathrm{d}}{\mathcal{B}}={J_t}\,{\mathrm{d}}{\mathcal{B}_0}.
\end{align}
\end{subequations}
\begin{definition}
\label{remark_lat_bd_surf_new}
\textit{Area change of the lateral boundary surface.}
Let ${\boldsymbol{\nu }} = {{\nu}_i}{{\boldsymbol{E}}^i}$ denote the outward unit normal vector on the boundary surface $\mathcal{S}\coloneqq{\partial{\mathcal{B}}}$, and ${\mathrm{d}}\mathcal{S}$ represent an infinitesimal area. The surface area vector in the current configuration can be expressed by\footnote{This formula of area change is often called \textit{Nanson's formula}.}
\begin{align}
{\mathrm{d}}{{\boldsymbol{\mathcal{S}}}_t} &\coloneqq {{\boldsymbol{\nu }}_t}\,{\mathrm{d}}{{\mathcal{S}}_t} = {j_t}\,D{{\boldsymbol{x}}_t}^{ - {\mathrm{T}}}{\boldsymbol{\nu }}\,{\mathrm{d}}{\mathcal{S}}\label{cur_surf_transform_1},
\end{align}
\noindent where $\boldsymbol{\nu}_t$ denotes the outward unit normal vector on the surface $\mathcal{S}_t$, and ${\mathrm{d}}\mathcal{S}_t$ denotes the infinitesimal area. In the same way, the surface area vector in the initial configuration can be expressed by
\begin{align}
{\mathrm{d}}{{\boldsymbol{\mathcal{S}}}_0} \coloneqq {{\boldsymbol{\nu }}_0}\,{\mathrm{d}}{{\mathcal{S}}_0}
= {j_0}\,D{{\boldsymbol{x}}_0}^{ - {\mathrm{T}}}{\boldsymbol{\nu }}\,{\mathrm{d}}{\mathcal{S}},\label{init_surf_transform}
\end{align}
where $\boldsymbol{\nu}_0$ denotes the outward unit normal vector on the surface $\mathcal{S}_0$, and ${\mathrm{d}}\mathcal{S}_0$ denotes the infinitesimal area. Combining Eqs.\,(\ref{cur_surf_transform_1}) and (\ref{init_surf_transform}), we have
\begin{align}\label{cur_surf_transform_2}
{\mathrm{d}}{{\boldsymbol{\mathcal{S}}}_t} = {J_t}\,{{\boldsymbol{F}}^{ - {\mathrm{T}}}}{{\boldsymbol{\nu }}_0}\,{\mathrm{d}}{{\mathcal{S}}_0}.
\end{align}
\end{definition}
\noindent {If the lateral boundary surface $\mathcal{S}^\mathrm{L}_0$ is parameterized by two convective coordinates $\zeta^1$ and $\zeta^2$, i.e., $\mathcal{S}^\mathrm{L}_0: (\zeta^1,\zeta^2)\in\Bbb{R}^2\to\boldsymbol{X}^\mathrm{L}(\zeta^1,\zeta^2)\in\Bbb{R}^3$, the infinitesimal area of lateral boundary surface $\mathcal{S}^{\mathrm{L}}_0$ can be expressed by
\begin{align}
\label{init_lat_aurf_inf_area}
\mathrm{d}{{\mathcal{S}}^\mathrm{L}_0} = \left\| {{{\boldsymbol{A}}_{1}}} \times {{\boldsymbol{A}}_{2}} \right\|\mathrm{d}{\zeta^1}\mathrm{d}{\zeta^2},
\end{align}
where ${{\boldsymbol{A}}_{\alpha}}\coloneqq\partial\boldsymbol{X}^\mathrm{L}/\partial{\zeta^\alpha}\,(\alpha=1,2)$ denote the covariant base vectors of the surface. For example, if the lateral boundary surface is parameterized by a NURBS surface, and the convective coordinate $\zeta^1$ represents the coordinate along the central axis, Eq.\,(\ref{init_lat_aurf_inf_area}) can be rewritten, using $\mathrm{d}s={\tilde j}\mathrm{d}\zeta^1$ with ${\tilde j}\coloneqq\left\|\partial\boldsymbol{\varphi}_0/\partial{\zeta^1}\right\|$, as
\begin{align}
\label{inf_area_lat_bd_surf}
\mathrm{d}{{\mathcal{S}}^{\mathrm{L}}_0} = \mathrm{d}{\Gamma _0}\mathrm{d}s\,\,\mathrm{with}\,\,\mathrm{d}{\Gamma _0}\coloneqq \frac{1}{{\tilde j}}\left\| {{{\boldsymbol{A}}_{1}} \times {{\boldsymbol{A}}_{2}}} \right\|\mathrm{d}\zeta^2.
\end{align}
It is a clear advantage of IGA that the beam central axis curve and the lateral boundary surface can be parameterized by the same coordinate in the axial direction, which enables the calculation of exact surface geometric quantities such as the covariant base vectors $\boldsymbol{A}_1$ and ${\boldsymbol{A}_2}$ in Eq.\,(\ref{inf_area_lat_bd_surf}). The geometrical exactness in the calculation of the surface integral may be even more significant for laterally loaded beams with varying cross-sections. However, in this paper, we deal only with uniform cross-sections along the central axis, and the investigation of different parameterizations of the lateral boundary surface and of the significance of geometrical exactness remains future work.
}
\section{Equations of motion}
\label{eq_motion}
\subsection{Three-dimensional elasticity}
We recall the governing equations and boundary conditions of a three-dimensional deformable body, which occupies an open domain $\mathcal{B}_t$ bounded by the boundary surface $\mathcal{S}_t\coloneqq \partial{\mathcal{B}_t}$ in the current configuration. The boundary is composed of a prescribed displacement boundary $\mathcal{S}^{\mathrm{D}}_t$ and a prescribed traction boundary $\mathcal{S}^{\mathrm{N}}_t$, which are mutually disjoint, i.e.\footnote{Strictly speaking, those boundary conditions are defined for each independent component in the global Cartesian frame.}
\begin{align}\label{solid_elas_boundary_disjoint}
\mathcal{S}_t = \mathcal{S}^{\mathrm{D}}_t \cup \mathcal{S}^\mathrm{N}_t,\,\,{\text{and}}\,\,\mathcal{S}_t^{\mathrm{D}} \cap \mathcal{S}_t^\mathrm{N} = \emptyset.
\end{align}
The equations of motion are obtained from the local forms of the balance laws, whose derivations can be found in many references on continuum mechanics, for example, \citet{bonet2010nonlinear}. First, the local conservation of mass is expressed by ${\rho _0} = {\rho _t}{J_t}\,\,\text{in}\,\,{\mathcal{B}}_t$, where $\rho_0$ and $\rho_t$ denote the mass densities in the initial and current configurations, respectively. Second, the local balance of linear momentum in a three-dimensional body is expressed as
\begin{align}\label{conserv_lin_mnt_intrinsic}
\text{div}\boldsymbol{\sigma} + {\boldsymbol{b}} = {{\rho}_t}\,{\boldsymbol{x}_{t,tt}}\,\,\text{in}\,\,{\mathcal{B}}_t,
\end{align}
where $\boldsymbol{\sigma}$ denotes the Cauchy stress tensor, $\text{div}(\bullet)$ the divergence operator with respect to the current configuration, $\boldsymbol{b}$ the body force per unit current volume, and $(\bullet)_{,tt}$ the second-order partial derivative with respect to time. Third, the local balance of angular momentum in the absence of body moments is expressed by the symmetry of the Cauchy stress tensor, i.e., $\boldsymbol{\sigma} = \boldsymbol{\sigma}^{\mathrm{T}}\,\,\text{in}\,\,{\mathcal{B}}_t$. The non-homogeneous Dirichlet (displacement) boundary condition is given as
\begin{equation}\label{solid_elas_disp_bdc}
\boldsymbol{u}_t = \boldsymbol{\bar u}_0,\,\text{or equivalently}\,\,\boldsymbol{x}_t = {\bar {\boldsymbol{x}}}_0\,\,\text{on}\,\,\mathcal{S}^{\mathrm{D}}_t,
\end{equation}
where $\boldsymbol{u}_t\coloneqq {\boldsymbol{x}_t}-{\boldsymbol{x}_0}$ denotes the displacement vector, and $\boldsymbol{\bar u}_0$ and $\boldsymbol{\bar x}_0$ are the prescribed values. Taking the first variation of Eq.\,(\ref{solid_elas_disp_bdc}) yields the homogeneous Dirichlet boundary condition
\begin{equation}\label{solid_elas_disp_bdc_homo}
\delta\boldsymbol{u}_t = \boldsymbol{0},\,\,\text{or equivalently}\,\,\delta{\boldsymbol{x}}_t=\boldsymbol{0}\,\,\text{on}\,\,\mathcal{S}^{\mathrm{D}}_t.
\end{equation}
Further, the natural (traction) boundary condition is given as
\begin{equation}\label{solid_elas_traction_bdc}
\boldsymbol{\sigma}\boldsymbol{\nu}_t = {\bar {\boldsymbol{t}}}_0\,\,\text{on}\,\,\mathcal{S}^{\mathrm{N}}_t,
\end{equation}
where $\boldsymbol{\nu}_t$ denotes the unit outward normal vector on $\mathcal{S}^{\mathrm{N}}_t$, and ${\bar {\boldsymbol{t}}}_0$ denotes the prescribed surface traction vector in the current configuration. The surface traction can also be defined with respect to the initial configuration, as
\begin{equation}\label{solid_elas_traction_bdc_init}
\boldsymbol{P}\boldsymbol{\nu}_0 = {\bar{\boldsymbol{T}}}_0\,\,\text{on}\,\,\mathcal{S}^{\mathrm{N}}_0,
\end{equation}
where ${\boldsymbol{P}} \coloneqq {J_t}{\boldsymbol{\sigma }}{{\boldsymbol{F}}^{ - \mathrm{T}}}$
denotes the first Piola-Kirchhoff stress tensor, and $\boldsymbol{\nu}_0$ and $\bar{\boldsymbol{T}}_0$ define the unit outward normal vector and the prescribed surface traction vector, respectively, on $\mathcal{S}^{\mathrm{N}}_0$.
\subsection{Resultant linear and director momentum}
The \textit{resultant linear momentum} over the cross-section $\mathcal{A}_t$, with units of linear momentum per unit of initial arc-length, is defined as
\begin{align}\label{lin_momentum_def}
{{\boldsymbol{p}}_t} \coloneqq \int_{\mathcal {A}} {{\rho _t}}\,{\boldsymbol{x}}_{t,t}\,{j_t}\,{\mathrm{d}}{\mathcal{A}}= \int_{\mathcal {A}} {{\rho _0}}\,{\boldsymbol{x}}_{t,t}\,{j_0}\,{\mathrm{d}}{\mathcal{A}},
\end{align}
where $\mathrm{d}{\mathcal{A}} \coloneqq {\mathrm{d}}{\xi ^1}{\mathrm{d}}{\xi ^2}$ denotes the infinitesimal area of the cross-section in the reference domain, and $(\bullet)_{,t}$ denotes partial differentiation with respect to time. Since $\boldsymbol{\varphi}(s,t)$ represents the current position of the centroid, the parametric position $({\xi^1},{\xi^2})\in\mathcal{A}$ satisfies
\begin{align}\label{vanish_1st_integ_rho}
\int_{\mathcal{A}} {{\xi^{\gamma}}\,{\rho _0}\,{j_0}\,{\mathrm{d}}{\mathcal{A}}} = 0\,\,\quad(\gamma=1,2).
\end{align}
By substituting Eq.\,(\ref{beam_th_str_cur_config}) into Eq.\,(\ref{lin_momentum_def}) and using Eq.\,(\ref{vanish_1st_integ_rho}), we have
\begin{align}\label{lin_momentum_fin}
{{\boldsymbol{p}}_t} = {{\rho}_A}{\boldsymbol{\varphi}_{\!,t}},
\end{align}
where ${{\rho}_A}$ represents the initial line density (mass per unit of initial arc-length), defined as
\begin{align}\label{area_rho_def}
{{\rho}_A} \coloneqq \int_{\mathcal{A}} {{\rho _0}\,{j_0}\,{\mathrm{d}}{\mathcal{A}}}.
\end{align}
Similarly, we define the \textit{resultant angular momentum} over the cross-section $\mathcal{A}_t$, with units of angular momentum per unit of initial arc-length, as
\begin{align}\label{ang_momentum_def}
{{\boldsymbol{H}}_t} &\coloneqq \int_{\mathcal{A}} {\left\{({{\boldsymbol{x}}_t} - {\boldsymbol{\varphi }}) \times {\rho _t}\,{{\boldsymbol{x}}_{t,t}}\,{j_t}\right\}{\mathrm{d}}{\mathcal{A}}} = {{\boldsymbol{d}}_\gamma } \times {{\boldsymbol{\tilde H}}^{\gamma}_t},
\end{align}
where ${{\boldsymbol{\tilde H}}^{\gamma}_t}$ defines the \textit{resultant director momentum}, given by
\begin{align}\label{dir_momentum_def}
{{\boldsymbol{\tilde H}}^{\gamma}_t} &\coloneqq \int_{\mathcal{A}} {{\xi ^\gamma }{\rho _t}\,{{\boldsymbol{x}}_{t,t}}\,{j_t}\,{\mathrm{d}}{\mathcal{A}}}\,\,\quad(\gamma=1,2).
\end{align}
Substituting Eq. (\ref{beam_th_str_cur_config}) into Eq. (\ref{dir_momentum_def}), we obtain
\begin{align}\label{dir_momentum_1}
{{\boldsymbol{\tilde H}}^{\gamma}_t} = I_\rho ^{\gamma \delta }{{\boldsymbol{d}}_{\delta ,t}}\,\,\quad(\gamma=1,2),
\end{align}
where the components of the second moment of inertia tensor are expressed by
\begin{align}\label{dir_momentum_inertia}
I_\rho ^{\gamma \delta } \coloneqq \int_{\mathcal{A}} {{\rho _t}\,{\xi ^\gamma }\,{\xi ^\delta }{j_t}\,{\mathrm{d}}{\mathcal{A}}} = \int_{\mathcal{A}} {{\rho _0}\,{\xi ^\gamma }\,{\xi ^\delta }{j_0}\,{\mathrm{d}}{\mathcal{A}}}.
\end{align}
Note that these components of the second moment of inertia tensor do not depend on time.
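As an illustration of Eq.\,(\ref{dir_momentum_inertia}), the inertia components can be evaluated by Gauss quadrature over the cross-section. The sketch below assumes a rectangular cross-section, a constant density $\rho_0$, and a prismatic initial geometry with $j_0=1$; these are assumptions of the example, not of the theory:
\begin{verbatim}
import numpy as np

def inertia_components(rho0, h1, h2, n=4):
    # I^{gd} = int_A rho0 xi^g xi^d j0 dA, with j0 = 1 assumed here.
    x, w = np.polynomial.legendre.leggauss(n)
    xi1, w1 = 0.5 * h1 * x, 0.5 * h1 * w   # map [-1,1] -> (-h1/2, h1/2)
    xi2, w2 = 0.5 * h2 * x, 0.5 * h2 * w
    I = np.zeros((2, 2))
    for a, wa in zip(xi1, w1):
        for b, wb in zip(xi2, w2):
            xi = np.array([a, b])
            I += rho0 * wa * wb * np.outer(xi, xi)
    return I
# Exact values: I^{11} = rho0 h1^3 h2 / 12, I^{22} = rho0 h1 h2^3 / 12.
\end{verbatim}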
\subsection{Stress resultants and stress couples}
We formulate the balance equations in terms of stress resultants and director stress couples.
We define the \textit{stress resultant} as the force acting on the cross-section $\mathcal{A}_t$, i.e.
\begin{equation}\label{beam_th_str_def_res_force}
{\boldsymbol{n}} \coloneqq \int_{\mathcal{A}} {{\boldsymbol{\sigma }}{{\boldsymbol{g}}^3}{j_t}\,{\mathrm{d}}{\mathcal{A}}}.
\end{equation}
Similarly, we define the \textit{stress couple} as the moment acting on the cross-section $\mathcal{A}_t$, i.e.
\begin{align}\label{def_strs_couple}
{{\boldsymbol{m}}} &\coloneqq \int_{\mathcal{A}} {({{\boldsymbol{x}}_t} - {\boldsymbol{\varphi }}) \times {\boldsymbol{\sigma }}{{\boldsymbol{g}}^3}{j_t}\,{\mathrm{d}}{\mathcal{A}}} = {{\boldsymbol{d}}_\alpha } \times {{{\boldsymbol{\tilde m}}}^\alpha },
\end{align}
where ${{{\boldsymbol{\tilde m}}}^\alpha}$ defines the \textit{director stress couple}, given by
\begin{align}\label{def_dir_strs_couple}
{{\boldsymbol{\tilde m}}^\alpha } \coloneqq \int_{\mathcal{A}} {{\xi ^\alpha}{\boldsymbol{\sigma }}{{\boldsymbol{g}}^3}{j_t}\,{\mathrm{d}}{\mathcal{A}}}\quad{({\alpha}=1,2).}
\end{align}
We further define the \textit{through-the-thickness stress resultant} as
\begin{equation}\label{def_th_strs_res}
{{\boldsymbol{l}}^\alpha } \coloneqq \int_{\mathcal{A}} {{\boldsymbol{\sigma }}{{\boldsymbol{g}}^\alpha }{j_t}\,{\mathrm{d}}{\mathcal{A}}}\quad{({\alpha}=1,2).}
\end{equation}
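The resultants of Eqs.\,(\ref{beam_th_str_def_res_force})--(\ref{def_th_strs_res}) are cross-section integrals of the vectors $\boldsymbol{\sigma}{\boldsymbol{g}}^{i}{j_t}$. A generic quadrature sketch (the callback interface is an assumption of this illustration) reads:
\begin{verbatim}
import numpy as np

def resultants(pts, wts, integrand):
    # integrand(xi) returns sigma g^3 j_t (a 3-vector) at xi=(xi1,xi2);
    # n = int_A sigma g^3 j_t dA,  m~^a = int_A xi^a sigma g^3 j_t dA.
    n = np.zeros(3)
    m = np.zeros((2, 3))
    for xi, w in zip(pts, wts):
        f = integrand(xi)
        n += w * f
        m += w * np.outer(np.asarray(xi), f)
    return n, m
\end{verbatim}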
\subsection{Momentum balance equations}
Starting from Eq.\,(\ref{conserv_lin_mnt_intrinsic}) the resultant forms of the local linear and director momentum balance equations are respectively derived as (see Appendix \ref{deriv_bal_lin_momentum} for a detailed derivation)
\begin{align}\label{beam_lin_mnt_balance_app_eq}
{{\boldsymbol{n}}_{,s}} + {\boldsymbol{\bar n}} = {{\rho}_A}{\boldsymbol{\varphi}_{\!,tt}},
\end{align}
and
\begin{align}\label{beam_dir_mnt_bal_eq}
{\boldsymbol{\tilde m}}_{,s}^\gamma - {{\boldsymbol{l}}^\gamma } + {{\boldsymbol{\bar {\tilde m}}}^\gamma } = I_\rho ^{\gamma \delta }{{\boldsymbol{d}}_{\delta ,tt}}\quad(\gamma=1,2).
\end{align}
Here, ${\boldsymbol {\bar n}} = {\boldsymbol {\bar n}}(s,t)$ denotes the \textit{external stress resultant}, with units of external force per unit of initial arc-length, given by
\begin{align}\label{beam_lin_mnt_balance_ext_f}
{\boldsymbol{\bar n}} \coloneqq \int_{\partial {{\mathcal{A}}_0}} {{{{\boldsymbol{\bar T}}}_0}\,{\mathrm{d}}{{\Gamma}_0}} + \int_{{{\mathcal{A}}}} {{\boldsymbol{b}_0}\,{j_0}\,{\mathrm{d}}{{\mathcal{A}}}},
\end{align}
where $\boldsymbol{b}_0$ denotes the body force per unit initial volume such that ${j_t}\boldsymbol{b}_t={j_0}\boldsymbol{b}_0$. ${{\boldsymbol{\bar {\tilde m}}}^\gamma } = {{\boldsymbol{\bar {\tilde m}}}^\gamma }(s)$ denotes the \textit{external director stress couple}, which is an external moment per unit of initial arc-length due to the surface and body force fields, given by
\begin{align}\label{beam_dir_mnt_balance_ext_m}
{{\boldsymbol{\bar {\tilde m}}}^\gamma } \coloneqq \int_{\partial {{\mathcal{A}}_0}} {{\xi ^\gamma }{{{\boldsymbol{\bar T}}}_0}\,{\mathrm{d}}{{\Gamma}_0}} + \int_{{{\mathcal{A}}}} {{\xi ^\gamma }{\boldsymbol{b}_0}\,{j_0}\,{\mathrm{d}}{{\mathcal{A}}}}\quad(\gamma=1,2).
\end{align}
We also obtain the resultant form of the balance of angular momentum from the symmetry of the Cauchy stress tensor, as (see Appendix \ref{deriv_bal_ang_momentum} for a detailed derivation)
\begin{align}\label{beam_ang_mnt_balance}
{{\boldsymbol{\varphi }}_{\!,s}} \times {\boldsymbol{n}} + {{\boldsymbol{d}}_{\gamma ,s}} \times {{\boldsymbol{\tilde m}}^\gamma } + {{\boldsymbol{d}}_\gamma } \times {{\boldsymbol{l}}^\gamma } = {\boldsymbol{0}}.
\end{align}
We finally state the static beam problem: Find
${\boldsymbol{y}} \coloneqq {\left[ {{{\boldsymbol{\varphi }}^{\mathrm{T}}},{{\boldsymbol{d}}_1}^{\mathrm{T}},{{\boldsymbol{d}}_2}^{\mathrm{T}}} \right]^{\mathrm{T}}} \in {\left[ {{{\Bbb{R}}^3}} \right]^3}$ that satisfies \citep{naghdi1981finite}
\begin{subequations}
\label{recall_momentum_balance_eq}
\begin{alignat}{2}
{{\boldsymbol{n}}_{,s}} + {\boldsymbol{\bar n}} = {\boldsymbol{0}}&{}\quad\text{(linear momentum balance),}\label{mnt_bal_eqn_lin_mnt}\\
{\boldsymbol{\tilde m}}_{,s}^\gamma - {{\boldsymbol{l}}^\gamma } + {{{\boldsymbol{\bar {\tilde m}}}}^\gamma } = {\boldsymbol{0}}&{}\quad(\text{director momentum balance),}\label{mnt_bal_eqn_dir_mnt}\\
{{\boldsymbol{\varphi }}_{\!,s}} \times {\boldsymbol{n}} + {{\boldsymbol{d}}_{\gamma ,s}} \times {{{\boldsymbol{\tilde m}}}^\gamma } + {{\boldsymbol{d}}_\gamma } \times {{\boldsymbol{l}}^\gamma } = {\boldsymbol{0}}&{}\quad\text{(angular momentum balance).}\label{final_ang_mnt_bal}
\end{alignat}
\end{subequations}
We define the Dirichlet boundary condition, as
\begin{align}\label{dirichlet_bdc}
{\boldsymbol{\varphi }} = {{\boldsymbol{\bar \varphi }}_0},\,\,{{\boldsymbol{d}}_1} = {{\boldsymbol{\bar d}}_{01}},\,\,{{\boldsymbol{d}}_2} = {{\boldsymbol{\bar d}}_{02}}\,\,\,\text{on}\,\,{\Gamma_\mathrm{D}},
\end{align}
where the central axis position and director vectors are prescribed at the boundary ${\Gamma_\mathrm{D}}\ni{s}$. The Neumann boundary condition is defined as
\begin{align}\label{natural_bdc}
{\boldsymbol{n}} = {{\boldsymbol{\bar n}}_0},\,\,{{\boldsymbol{{\tilde m}}}^\gamma } = {\boldsymbol{\bar {\tilde m}}}_0^\gamma\,\,\,\text{on}\,\,{\Gamma_\mathrm{N}}\,\,\quad(\gamma=1,2).
\end{align}
It is noted that ${\Gamma_\mathrm{D}} \cap {\Gamma_\mathrm{N}} = \emptyset$, and ${\Gamma_\mathrm{D}} \cup {\Gamma_\mathrm{N}} = \left\{ {0,L} \right\}$.
\subsection{Effective stress resultant}
The balance of angular momentum given by Eq.\,(\ref{final_ang_mnt_bal}) can be automatically satisfied by representing the balance laws in terms of an effective stress resultant tensor\,\citep{simo1990stress}. We define this effective stress resultant tensor as
\begin{align}\label{beam_th_def_eff_strs_res}
{\boldsymbol{\tilde n}} &\coloneqq {\boldsymbol{n}} \otimes {{\boldsymbol{\varphi}}_{\!,s}} - {{\boldsymbol{d}}_{\gamma ,s}} \otimes {{\tilde {\boldsymbol{m}}}^\gamma } + {{\boldsymbol{l}}^\gamma } \otimes {{\boldsymbol{d}}_\gamma }.
\end{align}
We also recall the identities $\widehat {{\boldsymbol{a}} \times {\boldsymbol{b}}} = {2\,\rm{skew}}[{\boldsymbol{b}} \otimes {\boldsymbol{a}}]$ and ${\rm{skew}}[{\boldsymbol{a}} \otimes {\boldsymbol{b}}] = - {\rm{skew}}[{\boldsymbol{b}} \otimes {\boldsymbol{a}}]$ for vectors ${\boldsymbol{a}},{\boldsymbol{b}} \in {{\Bbb R}^3}$ where $\widehat {(\bullet)}$ represents the skew-symmetric matrix associated with the vector $(\bullet)\in{{\Bbb R}^3}$, that is, $\widehat {(\bullet)}{\boldsymbol{a}} = (\bullet) \times {\boldsymbol{a}},\,\forall {\boldsymbol{a}} \in {{\Bbb{R}}^3}$, and ${\rm{skew}}[(\bullet)] \coloneqq {\frac{1}{2}}\left\{{(\bullet) - {(\bullet)^{\mathrm{T}}}}\right\}$. Then Eq.\,(\ref{final_ang_mnt_bal}) can be rewritten as the symmetry condition of the effective stress resultant tensor, i.e., ${\tilde{\boldsymbol{n}}}={{\tilde{\boldsymbol{n}}}^{\mathrm{T}}}$.
Decomposing the stress resultant forces and moment relative to the basis of $\{{{{\boldsymbol{d}}_1},{{\boldsymbol{d}}_2},{{\boldsymbol{\varphi}}_{\!,s}}}\}$ yields
\begin{subequations}
\label{beam_dec_strs_res}
\begin{alignat}{2}
{\boldsymbol{n}} &= {n}{{\boldsymbol{\varphi}}_{\!,s}} + {q^\alpha }{{\boldsymbol{d}}_\alpha },\label{strs_res_f}\\
{{\boldsymbol{\tilde m}}^\alpha } &= {{\tilde m}^{\alpha}}{{\boldsymbol{\varphi}}_{\!,s}} + {{\tilde m}^{\beta \alpha}}{{\boldsymbol{d}}_\beta},\label{strs_res_dir_mnt}\\
{{\boldsymbol{l}}^\alpha} &= {l^{\alpha}}{{\boldsymbol{\varphi}}_{\!,s}} + {l^{\beta\alpha}}{{\boldsymbol{d}}_\beta }.\label{strs_res_dir_f}
\end{alignat}
\end{subequations}
We also decompose ${{\boldsymbol{d}}_{\alpha ,s}}$ in the same basis as
\begin{align}\label{beam_dec_d_s}
{{\boldsymbol{d}}_{\alpha ,s}} = {k} _\alpha{{\boldsymbol{\varphi}}_{\!,s}} + {k} _\alpha ^\beta {{\boldsymbol{d}}_\beta }.
\end{align}
\begin{definition}
\label{remark_curv_k}
\textit{Physical interpretation of current curvatures}. Without loss of generality, we examine the case $\alpha=1$ in Eq.\,(\ref{beam_dec_d_s}). The change of director vector along the central axis has three different components, i.e.
\begin{equation}\label{remark_beam_dec_d_s}
{{\boldsymbol{d}}_{1,s}} = {k}_{1}{{\boldsymbol{\varphi}}_{\!,s}} + {k}_{1}^{1} {{\boldsymbol{d}}_{1}} + {k} _{1}^{2}{{\boldsymbol{d}}_{2}}.
\end{equation}
The components $k_1$ and $k_1^2$ represent the \textit{bending} and \textit{torsional} curvatures, respectively, in the current configuration. However, they are not exactly geometrical curvatures, since the basis $\left\{ {{{\boldsymbol{d}}_1},{{\boldsymbol{d}}_2},{{\boldsymbol{\varphi }}_{\!,s}}} \right\}$ is not orthonormal. The component $k_1^1$ is associated with a cross-section stretch ($\lambda_1$) that varies along the central axis in the current configuration. If the transverse and in-plane cross-section shear deformations are zero (i.e., ${{\boldsymbol{\varphi }}_{\!,s}} \cdot {{\boldsymbol{d}}_1} = {{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} = 0$), we have $k_1^1 = {\lambda _{1,s}}/{\lambda _1}$. In other words, if the cross-section stretch does not vary along the central axis in the current configuration, we have $k_1^1 = 0$.
\end{definition}
Using the component forms in Eqs.\,(\ref{beam_dec_strs_res}) and (\ref{beam_dec_d_s}), the effective stress resultant tensor of Eq.\,(\ref{beam_th_def_eff_strs_res}) can be rewritten as
\begin{equation}
{\boldsymbol{\tilde n}} = {{\tilde n}}{{\boldsymbol{\varphi}}_{\!,s}} \otimes {{\boldsymbol{\varphi}}_{\!,s}} + {{\tilde q}^\alpha }{{\boldsymbol{d}}_\alpha } \otimes {{\boldsymbol{\varphi}}_{\!,s}}
+ {{\tilde l}^{\alpha }} {{\boldsymbol{\varphi}}_{\!,s}} \otimes {{\boldsymbol{d}}_\alpha } + {{\tilde l}^{\alpha \beta }}{{\boldsymbol{d}}_\alpha } \otimes {{\boldsymbol{d}}_\beta },
\end{equation}
where the following component expressions are defined relative to the basis $\{{{{\boldsymbol{d}}_1},{{\boldsymbol{d}}_2},{{\boldsymbol{\varphi}}_{\!,s}}}\}$
\begin{subequations}
\label{beam_th_strs_res_comp_basis_d123}
\begin{alignat}{2}
{{\tilde n}} &\coloneqq {n} - {{\tilde m}^{\gamma }}k _\gamma\,\,&&({\text{effective axial stress resultant}}),\label{eff_axial_res}\\
{{\tilde q}^\alpha } &\coloneqq {q^\alpha } - {{\tilde m}^{\gamma }}k _\gamma ^\alpha\,\,&&({\text{effective transverse shear stress resultant}}), \label{eff_trans_shear_res}\\
{{\tilde l}^{\alpha }} &\coloneqq {l^{\alpha }} - {{\tilde m}^{\alpha \gamma }}k _\gamma\,\,\,&&({\text{effective longitudinal shear stress resultant}}),\label{eff_sym_shear_res}\\
{{\tilde l}^{\alpha \beta }} &\coloneqq {l^{\beta \alpha }} - {{\tilde m}^{\alpha \gamma }}k _\gamma ^\beta \,\,&&({\text{effective transverse normal and cross-section shear stress resultants).}}\label{eff_trans_nm_cs_shear}
\end{alignat}
\end{subequations}
The symmetry condition ${\tilde{\boldsymbol{n}}}={{\tilde{\boldsymbol{n}}}^{\mathrm{T}}}$ yields the following symmetry conditions on the components
\begin{align}
{\tilde q^\alpha } = {\tilde l^{\alpha }}\,\,{\text{and}}\,\,\,{\tilde l^{\alpha \beta }} = {\tilde l^{\beta \alpha }}.
\end{align}
\section{Variational formulation}
\subsection{Weak form of the governing equation}
\label{var_for_weak_form}
We define a variational space by
\begin{align}\label{var_space}
{\mathcal{V}} \coloneqq \left\{ {\left. {\delta\boldsymbol{y}\coloneqq\left[\delta {\boldsymbol{\varphi }}^{\mathrm{T}},\delta {{\boldsymbol{d}}_1}^{\mathrm{T}},\delta {{\boldsymbol{d}}_2}^{\mathrm{T}}\right]^{\mathrm{T}} \in {\left[{H^1}(0,L)\right]^{d}}} \right|\delta {\boldsymbol{\varphi }} = \delta {{\boldsymbol{d}}_1} = \delta {{\boldsymbol{d}}_2} = \boldsymbol{0}\,\,\text{on}\,\,{\Gamma _\mathrm{D}}} \right\},
\end{align}
where ${H^1}(0,L)$ denotes the Sobolev space of order one, i.e., the space of square-integrable functions whose first-order weak derivatives are also square integrable in the open domain $(0,L)\ni{s}$. Here, the components of $\delta \boldsymbol{y}$ in the global Cartesian frame are considered as independent solution functions, so that the dimension becomes $d=9$. In the following, we restrict our attention to the static case. Multiplying the linear and director momentum balance equations by $\delta\boldsymbol{\varphi}$ and $\delta\boldsymbol{d}_{\gamma}$ ($\gamma=1,2$), respectively, we obtain
\begin{align}\label{beam_weak_form_static_balance_eq}
\int_0^L {\left\{ {\left( {{{\boldsymbol{n}}_{,s}} + {\boldsymbol{\bar n}}} \right) \cdot \delta {\boldsymbol{\varphi }} + \left( {{\boldsymbol{\tilde m}}_{,s}^\gamma - {{\boldsymbol{l}}^\gamma } + {{{\boldsymbol{\bar {\tilde m}}}}^\gamma }} \right) \cdot \delta {{\boldsymbol{d}}_\gamma }} \right\}\,{\mathrm{d}}s = 0},
\end{align}
where $\delta (\bullet)$ denotes the first variation. Integration by parts of Eq.\,(\ref{beam_weak_form_static_balance_eq}) leads to the following variational equation\footnote{See Appendix \ref{pdisp_linearize_sec} for the linearization of Eq.\,(\ref{beam_var_eq_balance_eq}) and the configuration update process.}
\begin{align}\label{beam_var_eq_balance_eq}
{G_{{\mathop{\rm int}} }}({\boldsymbol{y}},\delta {\boldsymbol{y}}) = {G_{{\rm{ext}}}}({\boldsymbol{y}},\delta {\boldsymbol{y}}),\,\,{\forall} \delta {\boldsymbol{y}} \in {\mathcal{V}},
\end{align}
where
\begin{align}\label{beam_var_int_vir_work}
{G_{{\mathop{\rm int}} }}({\boldsymbol{y}},\delta {\boldsymbol{y}}) \coloneqq \int_0^L {\left( {{\boldsymbol{n}} \cdot \delta {{\boldsymbol{\varphi }}_{\!,s}} + {{{\boldsymbol{\tilde m}}}^\gamma } \cdot \delta {{\boldsymbol{d}}_{\gamma ,s}} + {{\boldsymbol{l}}^\gamma } \cdot \delta {{\boldsymbol{d}}_\gamma }} \right){\mathrm{d}}s},
\end{align}
and
\begin{align}\label{beam_var_ext_vir_work}
{G_{{\rm{ext}}}}({\boldsymbol{y}},\delta {\boldsymbol{y}}) \coloneqq \left[ {{{{{{\boldsymbol{\bar n}}}_0}}} \cdot \delta {\boldsymbol{\varphi }}} \right]_{\Gamma_\mathrm{N}} + \left[ {{\boldsymbol{\bar {\tilde m}}}^\gamma_0 \cdot \delta {{\boldsymbol{d}}_\gamma }} \right]_{\Gamma_\mathrm{N}} + \int_0^L {\left( {{\boldsymbol{\bar n}} \cdot \delta {\boldsymbol{\varphi }} + {{{\boldsymbol{\bar {\tilde m}}}}^\gamma}\cdot\delta {{\boldsymbol{d}}_\gamma }} \right){\mathrm{d}}s}.
\end{align}
The external virtual work of Eq.\,(\ref{beam_var_ext_vir_work}) depends on the current configuration if a non-conservative load is applied (see for example the distributed follower load in section \ref{ex_beam_end_mnt}, and the external virtual work, expressed by Eq.\,(\ref{pure_bend_vir_work})), and it can be rewritten in compact form by
\begin{equation} \label{ext_vir_work_compact_form}
{G_{{\rm{ext}}}}({\boldsymbol{y}},\delta {\boldsymbol{y}}) = {\left[ {\delta {{{\boldsymbol{y}}}^{\mathrm{T}}}{{{\boldsymbol{\bar R}}}_0}} \right]_{{\Gamma_\mathrm{N}}}} + \int_0^L {\delta {{{\boldsymbol{y}}}^{\mathrm{T}}}{\boldsymbol{\bar R}}\,{\mathrm{d}}s},
\end{equation}
where we define
\begin{equation}
{{\boldsymbol{\bar R}}_0} \coloneqq \left\{ {\begin{array}{l}
\begin{aligned}
{{{{\boldsymbol{\bar n}}}_0}}\\
{{\boldsymbol{\bar {\tilde m}}}_0^1}\\
{{\boldsymbol{\bar {\tilde m}}}_0^2}
\end{aligned}
\end{array}} \right\},\,\,\text{and}\,\,{\boldsymbol{\bar R}} \coloneqq \left\{ {\begin{array}{l}
\begin{aligned}
{{\boldsymbol{\bar n}}}{\,\,\,}\\
{{{{\boldsymbol{\bar {\tilde m}}}}^1}}\\
{{{{\boldsymbol{\bar {\tilde m}}}}^2}}
\end{aligned}
\end{array}} \right\}.
\end{equation}
Using Eqs.\,(\ref{beam_dec_strs_res}) and (\ref{beam_dec_d_s}), the internal virtual work of Eq.\,(\ref{beam_var_int_vir_work}) can be rewritten by the effective stress resultants and director stress couples, as
\begin{align}\label{beam_int_vir_work_effective_strs}
{G_{{\mathop{\rm int}} }}({\boldsymbol{y}},\delta {\boldsymbol{y}}) = \int_0^L {\left( {\tilde n\,\delta \varepsilon + {{\tilde m}^\alpha }\delta {\rho _\alpha } + {{\tilde q}^\alpha }\delta {\delta _\alpha } + {{\tilde m}^{\alpha \beta }}\delta {\gamma _{\alpha \beta }} + {{\tilde l}^{\alpha \beta }}\delta {\chi _{\alpha \beta }}} \right)\,{\mathrm{d}}s},
\end{align}
where the variations of the strain measures (virtual strains) are derived as
\begin{subequations}
\label{beam_var_strains}
\begin{align}
\delta {\varepsilon} &= \delta {{\boldsymbol{\varphi}}_{\!,s}} \cdot {{\boldsymbol{\varphi}}_{\!,s}},
\label{beam_var_strns_eps}\\
\delta {{\rho }_\alpha } &= \delta {{\boldsymbol{\varphi}}_{\!,s}} \cdot {{\boldsymbol{d}}_{\alpha ,s}} + {{\boldsymbol{\varphi}}_{\!,s}} \cdot \delta {{\boldsymbol{d}}_{\alpha ,s}},\label{beam_var_strns_rho}\\
\delta {{\delta }_\alpha } &= \delta {{\boldsymbol{\varphi}}_{\!,s}} \cdot {{\boldsymbol{d}}_\alpha } + {{\boldsymbol{\varphi}}_{\!,s}} \cdot \delta {{\boldsymbol{d}}_\alpha },\label{beam_var_strns_del}\\
\delta {{\gamma }_{\alpha \beta }} &= \delta {{\boldsymbol{d}}_\alpha } \cdot {{\boldsymbol{d}}_{\beta ,s}} + {{\boldsymbol{d}}_\alpha } \cdot \delta {{\boldsymbol{d}}_{\beta ,s}},\label{beam_var_strns_gm}\\
\delta {{\chi }_{\alpha \beta }} &= \frac{1}{2}\left( {\delta {{\boldsymbol{d}}_\alpha } \cdot {{\boldsymbol{d}}_\beta } + {{\boldsymbol{d}}_\alpha } \cdot \delta {{\boldsymbol{d}}_\beta }} \right).\label{beam_var_strns_chi}
\end{align}
\end{subequations}
Using the fact that these strains vanish in the initial beam configuration, we obtain the following strain expressions,
\begin{subequations}
\label{beam_th_strn_comp_basis_d123}
\begin{alignat}{2}
\varepsilon &\coloneqq \frac{1}{2}({\left\| {{{\boldsymbol{\varphi }}_{\!,s}}} \right\|^2} - 1)\,\,&&({\text{axial stretching strain}}),\\
{{\rho }_\alpha } &\coloneqq {{\boldsymbol{\varphi}}_{\!,s}} \cdot {{\boldsymbol{d}}_{\alpha ,s}} - {{\boldsymbol{\varphi}}_{0,s}} \cdot {{\boldsymbol{D}}_{\alpha ,s}}\,\,&&({\text{bending strain}}),\\
{{\delta }_\alpha } &\coloneqq {{\boldsymbol{\varphi}}_{\!,s}} \cdot {{\boldsymbol{d}}_\alpha }- {{\boldsymbol{\varphi }}_{0,s}} \cdot {{\boldsymbol{D}}_\alpha }\,\,&&({\text{transverse shear strain}}),\\
{{\gamma }_{\alpha \beta }} &\coloneqq {{\boldsymbol{d}}_\alpha } \cdot {{\boldsymbol{d}}_{\beta ,s}} - {{\boldsymbol{D}}_\alpha } \cdot {{\boldsymbol{D}}_{\beta ,s}}\,\,&&({\text{couple shear strain}}),\label{def_b_strn_coup_sh}\\
{{\chi }_{\alpha \beta }} &\coloneqq \frac{1}{2}({{\boldsymbol{d}}_\alpha } \cdot {{\boldsymbol{d}}_\beta } - \boldsymbol{D}_{\alpha}\cdot\boldsymbol{D}_{\beta})\,\,&&({\text{cross-section stretching and shear strains}}).\label{strn_comp_chi}
\end{alignat}
\end{subequations}
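A direct transcription of the strain table of Eq.\,(\ref{beam_th_strn_comp_basis_d123}) may clarify the index structure (an illustrative sketch; the argument layout is our own choice):
\begin{verbatim}
import numpy as np

def strain_measures(phi_s, d, d_s, Phi0_s, D, D_s):
    # phi_s, Phi0_s: 3-vectors (NumPy arrays);
    # d, d_s, D, D_s: sequences of two 3-vectors each.
    eps = 0.5 * (phi_s @ phi_s - 1.0)                              # axial
    rho = np.array([phi_s @ d_s[a] - Phi0_s @ D_s[a]
                    for a in (0, 1)])                              # bending
    delta = np.array([phi_s @ d[a] - Phi0_s @ D[a]
                      for a in (0, 1)])                            # shear
    gamma = np.array([[d[a] @ d_s[b] - D[a] @ D_s[b]
                       for b in (0, 1)] for a in (0, 1)])          # couple shear
    chi = 0.5 * np.array([[d[a] @ d[b] - D[a] @ D[b]
                           for b in (0, 1)] for a in (0, 1)])      # cross-section
    return eps, rho, delta, gamma, chi
\end{verbatim}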
\begin{definition} \textit{Physical interpretation of director stress couple components}.
\label{remark_bending_mnt}
Substituting Eq.\,(\ref{strs_res_dir_mnt}) into Eq.\,(\ref{def_strs_couple}) yields
\begin{equation}
{\boldsymbol{m}} = {\tilde m^1}{{\boldsymbol{d}}_1} \times {{\boldsymbol{\varphi }}_{\!,s}} + {\tilde m^2}{{\boldsymbol{d}}_2} \times {{\boldsymbol{\varphi }}_{\!,s}} + \left( {{{\tilde m}^{21}} - {{\tilde m}^{12}}} \right){{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}.
\end{equation}
Here, ${\tilde m}^\alpha\,\left(\alpha=1,2\right)$ represents the bending moment around the axis orthogonal to the current tangent vector to the central axis (i.e., ${\boldsymbol{\varphi}_{\!,s}}$) and director $\boldsymbol{d}_\alpha$, and ${\tilde m}^{12}$ and ${\tilde m}^{21}$ represent torsional moments in the opposite directions around the normal vector of the cross-section. The other components ${\tilde m}^{11}$ and ${\tilde m}^{22}$ are associated with the non-uniform transverse normal stretching in the directions of directors $\boldsymbol{d}_1$ and $\boldsymbol{d}_2$, respectively. Without loss of generality, we examine the component ${\tilde m}^{11}$ and its work conjugate strain $\gamma_{11}$ only. From Eq.\,(\ref{def_b_strn_coup_sh}), we have
\begin{equation}
\label{remark_gamma_11}
\gamma_{11}={\boldsymbol{d}_1}\cdot{\boldsymbol{d}_{1,s}}={\lambda_1}{\lambda_{1,s}},
\end{equation}
where ${{\boldsymbol{D}}_1} \cdot {{\boldsymbol{D}}_{1,s}} = 0$ is used, since we assume $\boldsymbol{D}_1$ is a unit vector. A material fiber aligned in the axial direction rotates, i.e.,~$\gamma_{11}\ne0$ if the transverse normal stretch of the cross-section ($\lambda_1$) is not constant along the central axis, and ${\tilde m}^{11}$ represents the work conjugate moment. If the cross-section deforms uniformly along the central axis, then ${\gamma_{11}}={\tilde m}^{11}=0$.
\end{definition}
\subsection{Hyperelastic constitutive equation}
\label{var_form_constitutive_law}
We can obtain constitutive equations by a reduction of a three-dimensional hyperelastic constitutive model. In what follows, we consider two hyperelastic materials: the St.\,Venant-Kirchhoff material, and the compressible Neo-Hookean material.
\subsubsection{Work conjugate stresses and elasticity tensor}
The Green-Lagrange strain tensor is defined as
\begin{equation} \label{def_GL_strain}
{\boldsymbol{E}} \coloneqq \frac{1}{2}\left( {{{\boldsymbol{F}}^{\mathrm{T}}}{\boldsymbol{F}} - {\boldsymbol{1}}} \right),
\end{equation}
where $\boldsymbol{1}$ represents the identity tensor in $\Bbb{R}^3$. The identity tensor can be expressed in the basis ${\left\{ {{{\boldsymbol{G}}^1},{{\boldsymbol{G}}^2},{{\boldsymbol{G}}^3}} \right\}}$ as
\begin{equation}\label{def_identity_curv}
{\boldsymbol{1}} = {G_{ij}}{{\boldsymbol{G}}^i} \otimes {{\boldsymbol{G}}^j}\,\,\text{where}\,\,G_{ij}\coloneqq{\boldsymbol{G}_i}\cdot{\boldsymbol{G}_j}.
\end{equation}
Using Eq.\,(\ref{beam_th_str_init_cov_base}), the identity tensor can be rewritten as
\begin{align}\label{def_identity_curv_1}
{\boldsymbol{1}} &= {{\boldsymbol{G}}^\alpha } \otimes {{\boldsymbol{G}}^\alpha } + {\xi ^\beta }{{\boldsymbol{D}}_\alpha } \cdot {{\boldsymbol{D}}_{\beta ,s}}\left( {{{\boldsymbol{G}}^\alpha } \otimes {{\boldsymbol{G}}^3} + {{\boldsymbol{G}}^3} \otimes {{\boldsymbol{G}}^\alpha }} \right)\nonumber\\
&+ \left( {1 + 2{\xi ^\alpha }{{\boldsymbol{D}}_{\alpha ,s}} \cdot {{\boldsymbol{D}}_3} + {\xi ^\alpha }{\xi ^\beta }{{\boldsymbol{D}}_{\alpha ,s}} \cdot {{\boldsymbol{D}}_{\beta ,s}}} \right){{\boldsymbol{G}}^3} \otimes {{\boldsymbol{G}}^3}.
\end{align}
Then substituting Eqs.\,(\ref{beam_th_str_deform_grad}) and (\ref{def_identity_curv_1}) into Eq.\,(\ref{def_GL_strain}), the Green-Lagrange strain tensor can be rewritten in terms of the strains in Eq.\,(\ref{beam_th_strn_comp_basis_d123}) as
\begin{align}\label{def_GL_strain_1}
{\boldsymbol{E}} = {E_{\alpha \beta }}{{\boldsymbol{G}}^\alpha } \otimes {{\boldsymbol{G}}^\beta } + {E_{3\gamma }}\left( {{{\boldsymbol{G}}^3} \otimes {{\boldsymbol{G}}^\gamma } + {{\boldsymbol{G}}^\gamma } \otimes {{\boldsymbol{G}}^3}} \right) + {E_{33}}{{\boldsymbol{G}}^3} \otimes {{\boldsymbol{G}}^3},
\end{align}
where the components are
\begin{equation} \label{GL_strn_components}
\left\{ \begin{array}{lcl}
\begin{aligned}
{E_{\alpha \beta }} &= {{\chi }_{\alpha \beta }},\\
{E_{3\alpha }} &= {E_{\alpha 3}} = \frac{1}{2}\left( {{{\delta }_\alpha } + {\xi ^\gamma }{{\gamma }_{\alpha \gamma }}} \right),\\
{E_{33}} &= \varepsilon + {\xi ^\gamma }{{\rho }_\gamma } + {{\xi ^\gamma }{\xi ^\delta {{\kappa}}_{\gamma \delta} }},
\end{aligned}
\end{array} \right.
\end{equation}
and we define a \textit{high-order bending strain component} as
\begin{align}\label{def_strain_phi}
{{{\kappa}} _{\alpha \beta }} \coloneqq \frac{1}{2}\left( {{{\boldsymbol{d}}_{\alpha ,s}} \cdot {{\boldsymbol{d}}_{\beta ,s}} - {{\boldsymbol{D}}_{\alpha ,s}} \cdot {{\boldsymbol{D}}_{\beta ,s}}} \right).
\end{align}
Taking the first variation of Eq.\,(\ref{def_strain_phi}), we obtain
\begin{equation}\label{def_strain_var_kappa}
\delta {{\kappa }_{\alpha \beta }} = \frac{1}{2}\left( {\delta {{\boldsymbol{d}}_{\alpha ,s}} \cdot {{\boldsymbol{d}}_{\beta ,s}} + {{\boldsymbol{d}}_{\alpha ,s}} \cdot \delta {{\boldsymbol{d}}_{\beta ,s}}} \right).
\end{equation}
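The assembly of the Green-Lagrange components of Eq.\,(\ref{GL_strn_components}) at a material point $(\xi^1,\xi^2)$ can then be sketched as follows (illustrative only):
\begin{verbatim}
import numpy as np

def gl_components(xi, eps, rho, kappa, delta, gamma, chi):
    # xi: (xi1, xi2); rho, delta: 2-vectors; kappa, gamma, chi: 2x2.
    xi = np.asarray(xi, dtype=float)
    E = np.zeros((3, 3))
    E[:2, :2] = chi                                    # E_{ab} = chi_{ab}
    for a in (0, 1):
        E[2, a] = E[a, 2] = 0.5 * (delta[a] + xi @ gamma[a])  # E_{3a}
    E[2, 2] = eps + xi @ rho + xi @ kappa @ xi         # E_{33}
    return E
\end{verbatim}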
For brevity we define the following arrays by exploiting the symmetry of the strains (i.e., $\kappa_{12}=\kappa_{21}$ and $\chi_{12}=\chi_{21}$)
\begin{equation}
{\boldsymbol{\rho }} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{{{\rho }_1}}\\
{{{\rho }_2}}
\end{array}} \right\},\,{\boldsymbol{\kappa }} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{{{\kappa }_{11}}}\\
{{{\kappa }_{22}}}\\
{2{{\kappa }_{12}}}
\end{array}} \right\},\,{\boldsymbol{\delta }} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{{{\delta }_1}}\\
{{{\delta }_2}}
\end{array}} \right\},\,{\boldsymbol{\gamma }} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{{{\gamma }_{11}}}\\
{{{\gamma }_{12}}}\\
{{{\gamma }_{21}}}\\
{{{\gamma }_{22}}}
\end{array}} \right\},\,{\boldsymbol{\chi }} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{{{\chi }_{11}}}\\
{{{\chi }_{22}}}\\
{2{{\chi }_{12}}}
\end{array}} \right\},
\end{equation}
and
\begin{equation}\label{def_eps_hat}
{\boldsymbol{\munderbar \varepsilon}} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{\varepsilon }\\
{{\boldsymbol{\rho }}}\\
{{\boldsymbol{\kappa }}}\\
{{\boldsymbol{\delta }}}\\
{{\boldsymbol{\gamma }}}\\
{{\boldsymbol{\chi }}}
\end{array}} \right\}.
\end{equation}
\begin{definition}{\textit{Incompleteness of the Green-Lagrange strain components in the beam kinematic description of Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}) with ${a_2}={b_1}=0$}. Here it is shown that the kinematic expression of Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}) leads to in-plane Green-Lagrange strain components that are complete linear polynomials in the coordinates $\xi^1$ and $\xi^2$, whereas this completeness is lost if ${a_2}={b_1}=0$. Based on the kinematic expression of Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}), the covariant base vectors are obtained as
\begin{equation}\label{beam_th_str_cur_cov_base_quad}
\left\{ \begin{array}{l}
\begin{aligned}
{{\boldsymbol{g}}_1} &= {{\boldsymbol{d}}_1} + 2{a_1}{\xi ^1}{{\boldsymbol{d}}_1} + {\xi ^2}\left( {{b_1}{{\boldsymbol{d}}_2} + {a_2}{{\boldsymbol{d}}_1}} \right),\\
{{\boldsymbol{g}}_2} &= {{\boldsymbol{d}}_2} + {\xi ^1}\left( {{a_2}{{\boldsymbol{d}}_1} + {b_1}{{\boldsymbol{d}}_2}} \right) + 2{b_2}{\xi ^2}{{\boldsymbol{d}}_2}.\\
\end{aligned}
\end{array} \right.
\end{equation}
The in-plane components of the Green-Lagrange strain tensor are obtained by substituting Eq.\,(\ref{beam_th_str_cur_cov_base_quad}) into Eq.\,(\ref{def_GL_strain}), as
\begin{equation}
E_{\alpha\beta}={E}^{\mathrm{c}}_{\alpha\beta}+{\tilde E}_{\alpha\beta},
\end{equation}
where the additional parts, after neglecting the quadratic terms in $\xi^1$ and $\xi^2$\footnote{{The quadratic terms are neglected since the enhanced strain field should satisfy orthogonality with respect to constant stress fields \citep{betsch19964}.}}, are
\begin{subequations}
\label{gl_strn_add_part_compat_quad}
\begin{align}
{\tilde E}_{11} &= 2{\xi ^1}{a_1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_1} + {\xi ^2}\left( {{a_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_1} + {b_1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2}} \right)+ 2{\xi ^1}{\xi ^2}\left( {{a_1}{a_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_1} + {a_1}{b_1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2}} \right),\\
{\tilde E}_{22} &= {\xi ^1}\left( {{a_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + {b_1}{{\boldsymbol{d}}_2} \cdot {{\boldsymbol{d}}_2}} \right) + 2{b_2}{\xi ^2}{{\boldsymbol{d}}_2} \cdot {{\boldsymbol{d}}_2} + 2{\xi ^1}{\xi ^2}\left( {{a_2}{b_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + {b_1}{b_2}{{\boldsymbol{d}}_2} \cdot {{\boldsymbol{d}}_2}} \right),\\
2{\tilde E}_{12} &= {\xi ^1}\left( {{a_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_1} + {b_1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + 2{a_1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2}} \right) + {\xi ^2}\left( {2{b_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + {a_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + {b_1}{{\boldsymbol{d}}_2} \cdot {{\boldsymbol{d}}_2}} \right)\nonumber\\
&+ {\xi ^1}{\xi ^2}\left( {{a_2}^2{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_1} + 4{a_1}{b_2}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + 2{a_2}{b_1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2} + {b_1}^2{{\boldsymbol{d}}_2} \cdot {{\boldsymbol{d}}_2}} \right).
\end{align}
\end{subequations}
However, if the bilinear terms in $\xi^1$ and $\xi^2$ are missing in Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}), i.e., $a_2=b_1=0$, those in-plane Green-Lagrange strain components, {after neglecting the quadratic terms}, become
\begin{subequations}
\label{inplane_without_bilinear_GL}
\begin{align}
\tilde E_{11}^{*} &= 2{a_1}{\xi ^1}{{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_1},\label{GL_strn_5p_e11}\\
\tilde E_{22}^{*} &= 2{b_2}{\xi ^2}{{\boldsymbol{d}}_2} \cdot {{\boldsymbol{d}}_2},\label{GL_strn_5p_e22}\\
2\tilde E_{12}^{*} &= 2\left( {{\xi ^1}{a_1} + {\xi ^2}{b_2} + 2{\xi ^1}{\xi ^2}{a_1}{b_2}} \right){{\boldsymbol{d}}_1} \cdot {{\boldsymbol{d}}_2}.\label{GL_strn_5p_e12}
\end{align}
\end{subequations}
\noindent Notably, Eqs.\,(\ref{GL_strn_5p_e11}) and (\ref{GL_strn_5p_e22}) contain no linear terms in the coordinates $\xi^2$ and $\xi^1$, respectively. This means that the kinematic expression of Eq.\,(\ref{intro_beam_th_str_cur_config_2nd}) without the bilinear terms (i.e., $a_2=b_1=0$) is not able to represent trapezoidal deformations of the cross-section, as illustrated in Fig.\,\ref{intro_cs_deform_2nd_trapezoid}.}
\end{definition}
We assume that the \textit{strain energy density} (defined as the strain energy per unit undeformed volume) is expressed in terms of the Green-Lagrange strain tensor, as
\begin{equation}
\Psi = \Psi(\boldsymbol{E}).
\end{equation}
The second Piola-Kirchhoff stress tensor, which is \textit{work conjugate} to the Green-Lagrange strain tensor, is obtained by
\begin{equation}\label{2nd_pk_strs_comp}
{\boldsymbol{S}} = {S^{ij}}{{\boldsymbol{G}}_i} \otimes {{\boldsymbol{G}}_j}\,\,\,\text{with}\,\,\,{S^{ij}} = \frac{{\partial \Psi }}{{\partial {E_{ij}}}}.
\end{equation}
The components $S^{11}$, $S^{22}$, and $S^{12}$ are typically assumed to be zero in many beam formulations; this zero-stress condition makes the application of general nonlinear constitutive laws less straightforward. Exploiting the symmetries, the second-order tensors ${\boldsymbol{E}}$ and ${\boldsymbol{S}}$ can be expressed in array form (Voigt notation) as ${\boldsymbol{\munderbar S}} \coloneqq {\left[ {{S^{11}},{S^{22}},{S^{33}},{S^{12}},{S^{13}},{S^{23}}} \right]^{\mathrm{T}}}$ and ${\boldsymbol{\munderbar E}} \coloneqq {\left[ {{E_{11}},{E_{22}},{E_{33}},2{E_{12}},2{E_{13}},2{E_{23}}} \right]^{\mathrm{T}}}$. The total strain energy of the beam can be expressed as
\begin{align}\label{tot_strn_energy_beam}
U = {\int_0^L {\int_{\mathcal{A}} {{{\Psi}}\,{j_0}\,{\mathrm{d}}\mathcal{A}}\,{\mathrm{d}}s}}\,.
\end{align}
The first variation of the strain energy density function can be obtained, by using the chain rule of differentiation, as (see Appendix \ref{1st_var_strn_e_M_mat} for the details)
\begin{align}\label{deriv_energy_general}
\delta \Psi = \boldsymbol{\munderbar S}^{\mathrm{T}}\delta{\boldsymbol{\munderbar E}} = \boldsymbol{\munderbar S}^{\mathrm{T}}{\boldsymbol{\munderbar D}}\delta {\boldsymbol{\munderbar \varepsilon }}\,\,\,\mathrm{with}\,\,\,\boldsymbol{\munderbar D}\coloneqq{\frac{\partial\boldsymbol{\munderbar E}}{\partial\boldsymbol{\munderbar\varepsilon}}}.
\end{align}
Taking the first variation of the total strain energy of Eq.\,(\ref{tot_strn_energy_beam}) and using Eq.\,(\ref{deriv_energy_general}) we obtain the internal virtual work
\begin{align}\label{tot_strn_energy_beam_time_deriv}
{G_\text{int}}(\boldsymbol{y},\delta\boldsymbol{y})\equiv{\delta U} = \int_0^L {\delta {{{\boldsymbol{\munderbar \varepsilon }}^{\mathrm{T}}}}{{\boldsymbol{R}}}\,{\mathrm{d}}s},
\end{align}
where $\boldsymbol{R}$ defines the array of stress resultants and director stress couples,
\begin{align}
{\boldsymbol{R}} \coloneqq\int_{\mathcal{A}} {{\boldsymbol{\munderbar D}^{\mathrm{T}}}{\boldsymbol{\munderbar S}}\,{j_0}\,{\mathrm{d}}{\mathcal{A}}}= {\left[ {{{\tilde n}},{{\tilde m}^{1}},{{\tilde m}^{2}},{{\tilde h}^{11}},{{\tilde h}^{22}},{{\tilde h}^{12}},{{\tilde q}^1},{{\tilde q}^2},{{\tilde m}^{11}},{{\tilde m}^{12}},{{\tilde m}^{21}},{{\tilde m}^{22}},{{\tilde l}^{11}},{{\tilde l}^{22}},{{\tilde l}^{12}}} \right]^{\mathrm{T}}}.
\end{align}
Here, ${{\tilde h}^{\alpha \beta }}$ denotes the components of the \textit{high-order director stress couple}.
For general hyperelastic materials, the constitutive relation between $\boldsymbol{S}$ and $\boldsymbol{E}$ is nonlinear. Thus, we need to linearize the constitutive relation by taking the directional derivative of $\boldsymbol{S}$,
\begin{equation}\label{linear_rel_dir_deriv_SE}
D{\boldsymbol{S}} \cdot \Delta {\boldsymbol{x}_t} = {\boldsymbol{\mathcal{C}}}:D{\boldsymbol{E}} \cdot \Delta {\boldsymbol{x}_t},
\end{equation}
where $D(\bullet)\cdot(*)$ represents the directional derivative of $(\bullet)$ in direction $(*)$, and $\Delta\boldsymbol{x}_t$ denotes the increment of the material point position at the current configuration. The fourth-order tensor $\boldsymbol{\mathcal{C}}$, called the \textit{Lagrangian} or \textit{material elasticity tensor}, is expressed by
\begin{equation}\label{mat_lag_elasticity_tensor}
{\boldsymbol{\mathcal{C}}} \coloneqq \frac{{\partial {\boldsymbol{S}}}}{{\partial {\boldsymbol{E}}}} = {{\mathcal{C}}^{ijk\ell }}{{\boldsymbol{G}}_i} \otimes {{\boldsymbol{G}}_j} \otimes {{\boldsymbol{G}}_k} \otimes {{\boldsymbol{G}}_\ell }\,\,\,\text{with}\,\,\,{{\mathcal{C}}^{ijk\ell }} = \frac{{{\partial ^2}\Psi }}{{\partial {E_{ij}}\,\partial {E_{k\ell }}}}.
\end{equation}
Note that the elasticity tensor has both major and minor symmetries. For computational purposes we can therefore represent the fourth order tensor $\boldsymbol{\mathcal{C}}$ in matrix form as
\begin{equation}
{\boldsymbol{\munderbar{\munderbar{\mathcal{C}}}}} \coloneqq \left[ {\begin{array}{*{20}{c}}{{{\mathcal{C}}^{1111}}}&{{{\mathcal{C}}^{1122}}}&{{{\mathcal{C}}^{1133}}}&{{{\mathcal{C}}^{1112}}}&{{{\mathcal{C}}^{1113}}}&{{{\mathcal{C}}^{1123}}}\\
{}&{{{\mathcal{C}}^{2222}}}&{{{\mathcal{C}}^{2233}}}&{{{\mathcal{C}}^{2212}}}&{{{\mathcal{C}}^{2213}}}&{{{\mathcal{C}}^{2223}}}\\
{}&{}&{{{\mathcal{C}}^{3333}}}&{{{\mathcal{C}}^{3312}}}&{{{\mathcal{C}}^{3313}}}&{{{\mathcal{C}}^{3323}}}\\
{}&{}&{}&{{{\mathcal{C}}^{1212}}}&{{{\mathcal{C}}^{1213}}}&{{{\mathcal{C}}^{1223}}}\\
{}&{{\rm{sym}}{\rm{.}}}&{}&{}&{{{\mathcal{C}}^{1313}}}&{{{\mathcal{C}}^{1323}}}\\
{}&{}&{}&{}&{}&{{{\mathcal{C}}^{2323}}}
\end{array}} \right].
\end{equation}
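The mapping from the tensor components ${\mathcal{C}}^{ijk\ell}$ to this matrix is a simple index lookup; the sketch below uses the ordering $(11,22,33,12,13,23)$ adopted above (everything else is illustrative):
\begin{verbatim}
import numpy as np

VOIGT = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]

def voigt_matrix(C):
    # C: array of shape (3,3,3,3) with minor and major symmetries.
    M = np.empty((6, 6))
    for I, (i, j) in enumerate(VOIGT):
        for J, (k, l) in enumerate(VOIGT):
            M[I, J] = C[i, j, k, l]
    return M
\end{verbatim}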
In a similar manner to the derivation of Eq.\,(\ref{deriv_energy_general}), the directional derivative of $\boldsymbol{\munderbar S}$ can be derived as
\begin{equation}\label{dir_deriv_2pk}
D{\boldsymbol{\munderbar S}}\cdot \Delta {\boldsymbol{y}}= {\boldsymbol{\munderbar{\munderbar {\mathcal{C}}}}}{\boldsymbol{\munderbar D}}\left(D{\boldsymbol{\munderbar \varepsilon }}\cdot \Delta {\boldsymbol{y}}\right).
\end{equation}
Then, the directional derivative of $\boldsymbol{R}$ is obtained by using Eq.\,(\ref{dir_deriv_2pk}), as
\begin{equation}\label{beam_lin_strs_resultant_R}
D{\boldsymbol{R}} \cdot \Delta {\boldsymbol{y}} = {\Bbb{C}}\left(D{\boldsymbol{\munderbar \varepsilon }} \cdot \Delta {\boldsymbol{y}}\right),
\end{equation}
where $\Delta {\boldsymbol{y}} \coloneqq {\left[ {\Delta {{\boldsymbol{\varphi}}}^{\mathrm{T}},\Delta {{\boldsymbol{d}}_1}^{\mathrm{T}},\Delta {{\boldsymbol{d}}_2}^{\mathrm{T}}} \right]^{\mathrm{T}}}$, and $\Bbb{C}$ represents the symmetric constitutive matrix, defined by
\begin{equation}
{\Bbb{C}} \coloneqq \int_{\mathcal{A}} {\left( {{{\boldsymbol{\munderbar D}}^{\mathrm{T}}}
{\boldsymbol{\munderbar{\munderbar{\mathcal{C}}}}}{\boldsymbol{\munderbar D}}{j_0}} \right){\mathrm{d}}{\mathcal{A}}}.
\end{equation}
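Since $\boldsymbol{\munderbar E}$ has six components and $\boldsymbol{\munderbar\varepsilon}$ has fifteen, $\boldsymbol{\munderbar D}$ is a $6\times15$ matrix, and the reduction can be sketched as a cross-section quadrature loop (the callback interface is an assumption of this illustration):
\begin{verbatim}
def constitutive_matrix(pts, wts, D_at, C_at, j0_at):
    # C_beam = int_A D^T C_voigt D j0 dA;
    # D_at(xi) -> (6,15) array, C_at(xi) -> (6,6) array,
    # j0_at(xi) -> scalar Jacobian of the initial mapping.
    Cb = 0.0
    for xi, w in zip(pts, wts):
        D = D_at(xi)
        Cb = Cb + w * j0_at(xi) * (D.T @ C_at(xi) @ D)
    return Cb
\end{verbatim}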
\begin{definition}
\label{num_integ_polar_circ}
\textit{Numerical integration over the circular cross-section}. In this paper, we restrict our discussion to rectangular and circular cross-sections. In the case of a circular cross-section of radius $R$, we can simply parameterize the domain by polar coordinates, as
\begin{equation}
{\xi^1}=r\,{\mathrm{cos}}\,\theta\,\,\text{and}\,\,{\xi^2}=r\,{\mathrm{sin}}\,\theta\,\,\text{with}\,\,0 \le r \le R,\,\,\text{and}\,\,0 \le \theta < 2\pi.
\end{equation}
Then, the infinitesimal area simply becomes
\begin{equation}
{\mathrm{d}}{\mathcal{A}} = r\,{\mathrm{d}}r\,{\mathrm{d}}\theta,\,\,r = \sqrt {{{\left( {{\xi ^1}} \right)}^2} + {{\left( {{\xi ^2}} \right)}^2}}.
\end{equation}
\end{definition}
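A concrete quadrature rule following this parameterization is sketched below (the choice of a Gauss rule in $r$ and a midpoint rule in $\theta$, as well as the point counts, are arbitrary choices of the illustration):
\begin{verbatim}
import numpy as np

def circle_quadrature(R, nr=4, nt=8):
    # Points (xi1, xi2) and weights for int_A f dA, dA = r dr dtheta.
    xr, wr = np.polynomial.legendre.leggauss(nr)
    r = 0.5 * R * (xr + 1.0)                        # map [-1, 1] -> [0, R]
    wr = 0.5 * R * wr
    th = 2.0 * np.pi * (np.arange(nt) + 0.5) / nt   # midpoint rule in theta
    wth = 2.0 * np.pi / nt
    pts, wts = [], []
    for ri, wi in zip(r, wr):
        for t in th:
            pts.append((ri * np.cos(t), ri * np.sin(t)))
            wts.append(wi * wth * ri)   # the extra r stems from dA
    return np.array(pts), np.array(wts)
# Sanity check: sum(wts) approaches the area pi R^2.
\end{verbatim}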
\subsubsection{St.\,Venant-Kirchhoff material}
In the St.\,Venant-Kirchhoff material model, the strain energy density is expressed by
\begin{equation}\label{mat_stvk_def_energy_func}
\Psi = \frac{1}{2}\lambda {\left( {{\mathrm{tr}}{\boldsymbol{E}}} \right)^2} + \mu {\boldsymbol{E}}:{\boldsymbol{E}},
\end{equation}
where $\lambda$ and $\mu$ are the Lam{\'{e}} constants, which are related to Young's modulus $E$ and Poisson's ratio $\nu$ by
\begin{equation}\label{mat_lame_cnst_emod_shear_mod}
\lambda = \frac{{E\nu }}{{(1 + \nu )(1 - 2\nu )}}\,\,\text{and}\,\,\mu = \frac{E}{{2(1 + \nu )}}.
\end{equation}
The second Piola-Kirchhoff stress tensor is then obtained by
\begin{equation}\label{def_2nd_pk_stress_tilde}
{\boldsymbol{S}} = \frac{{\partial \Psi}}{{\partial {\boldsymbol{E}}}} = \lambda \left( {\mathrm{tr}{\boldsymbol{E}}} \right){\boldsymbol{1}} + 2\mu {\boldsymbol{E}}.
\end{equation}
Note the linearity of the constitutive relation in Eq.\,(\ref{def_2nd_pk_stress_tilde}), which restricts the applicability of this material law to moderate strains. The contravariant components of $\boldsymbol{S}$ follow as
\begin{equation}
{S^{ij}} = {\boldsymbol{S}}:{{\boldsymbol{G}}^i} \otimes {{\boldsymbol{G}}^j} = {C^{ijk\ell }}{E_{k\ell }},
\end{equation}
where
\begin{align}\label{c_comp_st_venant}
{C^{ijk\ell }} =\lambda {G^{ij}}{G^{k\ell }} + \mu \left( {{G^{ik}}{G^{j\ell }} + {G^{i\ell }}{G^{jk}}} \right).
\end{align}
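Since Eq.\,(\ref{c_comp_st_venant}) involves only the contravariant metric coefficients, its evaluation is straightforward; the following Python sketch (illustrative only, with \texttt{Ginv} holding $G^{ij}$ as a $3\times3$ NumPy array) also indicates the stress recovery $S^{ij}={C^{ijk\ell}}{E_{k\ell}}$.
\begin{verbatim}
import numpy as np

def C_svk(Ginv, lam, mu):
    """Contravariant components C^{ijkl} of the St. Venant-
    Kirchhoff elasticity tensor; Ginv[i,j] = G^{ij}."""
    return (lam * np.einsum('ij,kl->ijkl', Ginv, Ginv)
            + mu * (np.einsum('ik,jl->ijkl', Ginv, Ginv)
                    + np.einsum('il,jk->ijkl', Ginv, Ginv)))

# stress recovery S^{ij} = C^{ijkl} E_{kl} for covariant strains E[k,l]:
# S = np.einsum('ijkl,kl->ij', C_svk(Ginv, lam, mu), E)
\end{verbatim}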
\subsubsection{Compressible Neo-Hookean material}
The stored energy function of the three-dimensional compressible Neo-Hookean material is defined as
\begin{align}\label{nh_mat_stored_efunc}
\Psi = \frac{\mu }{2}({\mathrm{tr}}{\boldsymbol{C}} - 3) - \mu \ln J + \frac{\lambda }{2}{(\ln J)^2},
\end{align}
where $\boldsymbol{C}\coloneqq{{\boldsymbol{F}}^{\mathrm{T}}}{\boldsymbol{F}}$ is the right Cauchy-Green deformation tensor. The second Piola-Kirchhoff stress tensor follows as \citep{bonet2010nonlinear}
\begin{align}\label{nh_mat_2nd_pk}
{\boldsymbol{S}} = \frac{{\partial \Psi }}{{\partial {\boldsymbol{E}}}} = \mu ({\boldsymbol{1}} - {{\boldsymbol{C}}^{ - 1}}) + \lambda (\ln J){{\boldsymbol{C}}^{ - 1}}.
\end{align}
The contravariant components of $\boldsymbol{S}$ can then be derived as
\begin{align}
{S^{ij}} = {\boldsymbol{S}}:{{\boldsymbol{G}}^i} \otimes {{\boldsymbol{G}}^j} = \mu \left\{ {{G^{ij}} - {{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{ij}}} \right\} + \lambda (\ln J){\left( {{{\boldsymbol{C}}^{ - 1}}} \right)^{ij}}.
\end{align}
The corresponding Lagrangian elasticity tensor follows as \citep{bonet2010nonlinear}
\begin{align}\label{nh_mat_lag_elas_tensor}
{\boldsymbol{\mathcal{C}}} = \lambda {{\boldsymbol{C}}^{ - 1}} \otimes {{\boldsymbol{C}}^{ - 1}} + 2(\mu - \lambda \ln J){\boldsymbol{\mathcal{I}}},
\end{align}
where
\begin{align}\label{nh_mat_inv_c_tensor}
{{\boldsymbol{C}}^{ - 1}} \otimes {{\boldsymbol{C}}^{ - 1}} = {\left( {{{\boldsymbol{C}}^{ - 1}}} \right)^{ij}}{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)^{k\ell }}{{\boldsymbol{G}}_i} \otimes {{\boldsymbol{G}}_j} \otimes {{\boldsymbol{G}}_k} \otimes {{\boldsymbol{G}}_\ell},
\end{align}
and the fourth order tensor $\boldsymbol{\mathcal{I}}$ can be expressed in terms of the covariant basis, as (see Appendix \ref{app_constitutive_nh} for the derivation)
\begin{align}\label{nh_mat_fourth_order_I}
{\boldsymbol{\mathcal{I}}} \coloneqq - \frac{{\partial {{\boldsymbol{C}}^{ - 1}}}}{{\partial {\boldsymbol{C}}}} = \frac{1}{2}\left\{ {{{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{ik}}{{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{j\ell}} + {{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{i\ell}}{{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{jk}}} \right\}{{\boldsymbol{G}}_i} \otimes {{\boldsymbol{G}}_j} \otimes {{\boldsymbol{G}}_k} \otimes {{\boldsymbol{G}}_\ell}.
\end{align}
Then the contravariant components of $\boldsymbol{\mathcal{C}}$ are obtained as
\begin{align}\label{nh_mat_lag_c_cont_comp}
{C^{ijk\ell }} = \lambda {\left( {{{\boldsymbol{C}}^{ - 1}}} \right)^{ij}}{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)^{k\ell }} + (\mu - \lambda \ln J)\left\{ {{{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{ik}}{{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{j\ell }} + {{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{i\ell }}{{\left( {{{\boldsymbol{C}}^{ - 1}}} \right)}^{jk}}} \right\}.
\end{align}
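As a consistency sketch (illustrative only, not part of the formulation), both $S^{ij}$ and $C^{ijk\ell}$ can be evaluated from the covariant metric coefficients alone, using the standard relations $C_{ij}=g_{ij}$ (the current metric), $J=\sqrt{\det[g_{ij}]/\det[G_{ij}]}$, and the fact that $\left(\boldsymbol{C}^{-1}\right)^{ij}$ is the matrix inverse of $[C_{ij}]$.
\begin{verbatim}
import numpy as np

def S_and_C_nh(g_cov, G_cov, lam, mu):
    """Neo-Hookean S^{ij} and C^{ijkl} from the covariant metrics
    g_cov[i,j] = g_i . g_j (current) and G_cov[i,j] = G_i . G_j
    (reference), using C_ij = g_ij and J = sqrt(det g / det G)."""
    Ginv = np.linalg.inv(G_cov)        # G^{ij}
    Cinv = np.linalg.inv(g_cov)        # (C^{-1})^{ij}
    lnJ = 0.5 * np.log(np.linalg.det(g_cov) / np.linalg.det(G_cov))
    S = mu * (Ginv - Cinv) + lam * lnJ * Cinv
    C4 = (lam * np.einsum('ij,kl->ijkl', Cinv, Cinv)
          + (mu - lam * lnJ) * (np.einsum('ik,jl->ijkl', Cinv, Cinv)
                                + np.einsum('il,jk->ijkl', Cinv, Cinv)))
    return S, C4
\end{verbatim}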
\subsection{Isogeometric discretization}
\subsubsection{NURBS curve}
The geometry of the beam's central axis can be represented by a NURBS curve. Here we summarize the construction of a NURBS curve. A more detailed explanation of the properties of NURBS and of geometric algorithms like knot insertion and degree elevation can be found in \citet{piegl2012nurbs}. Further discussions of the important properties of NURBS in analysis can be found in \citet{hughes2005isogeometric}. For a given knot vector ${\tilde \varXi}={\left\{{\xi_1},{\xi_2},...,{\xi_{{n_{\mathrm{cp}}}+p+1}}\right\}}$, where ${\xi_i}\in{\Bbb R}$ is the $i$th knot, $p$ is the degree of the basis functions, and ${n_{\mathrm{cp}}}$ is the number of basis functions (or control points), B-spline basis functions are recursively defined \citep{piegl2012nurbs}. For $p=0$, they are defined by
\begin{equation} \label{Bspline_basis_0}
B_I^0(\xi ) =
\begin{cases}
1&{{\rm{if~~ }}{\xi _I} \le \xi < {\xi _{I + 1}}},\\
0&{{\text{otherwise, }}}
\end{cases}
\end{equation}
and for $p=1,2,3,...,$ they are defined by
\begin{equation} \label{Bspline_basis_p}
B_I^p(\xi ) = \frac{{\xi - {\xi _I}}}{{{\xi _{I + p}} - {\xi _I}}}B_I^{p - 1}(\xi ) + \frac{{{\xi _{I + p + 1}} - \xi }}{{{\xi _{I + p + 1}} - {\xi _{I + 1}}}}B_{I + 1}^{p - 1}(\xi ),
\end{equation}
where $\xi\in\varXi\subset{\Bbb R}$ denotes the parametric coordinate, and $\varXi\coloneqq\left[\xi_{1},\xi_{{n_{\mathrm{cp}}}+p+1}\right]$ represents the parametric space. From the B-spline basis functions the NURBS basis functions are defined by
\begin{equation}\label{nurbs_basis_1d_def}
{N_I}(\xi ) = \frac{{B_I^p(\xi )\,{w_I}}}{{\sum\limits_{J = 1}^{n_{\mathrm{cp}}} {B_J^p(\xi )\,{w_J}} }},
\end{equation}
where ${w_I}$ denotes the given weight of the $I$th control point. If all weights are equal, the NURBS basis reduces to the B-spline basis. The geometry of the initial beam central axis can be represented by a NURBS curve, as
\begin{equation}\label{beam_curve_pos_nurbs}
{\boldsymbol{X}}(\xi ) = \sum\limits_{I = 1}^{n_{\mathrm{cp}}} {{N_I}(\xi )\,{{\boldsymbol{X}}_{\!I}}},
\end{equation}
where $\boldsymbol{X}_{\!I}$ are the control point positions. The arc-length parameter along the initial central axis can be expressed by the mapping $s(\xi):{\varXi}\to{\left[0,L\right]}$, defined by
\begin{equation}\label{beam_curve_pos_nurbs_alen_map}
s(\xi )\coloneqq \int_{{\xi _1}}^{\eta = \xi } {\left\| {{{\boldsymbol{X}}_{\!,\eta }}(\eta )} \right\|{\mathrm{d}}\eta }.
\end{equation}
Then the Jacobian of the mapping is derived as
\begin{align}\label{beam_curve_pos_nurbs_alen_map_jcb}
\tilde j\coloneqq \frac{{{\mathrm{d}}s}}{{{\mathrm{d}}\xi }}= \left\| {{{\boldsymbol{X}}_{\!,\xi }}(\xi )} \right\|.
\end{align}
In the discretization of the variational form, we often use the notation ${N_{I,s}}$ for brevity, which is defined by
\begin{align}\label{def_basis_deriv_arclen}
{N_{I,s}} \coloneqq {N_{I,\xi }}\frac{{{\mathrm{d}}\xi }}{{{\mathrm{d}}s}} = \frac{1}{{\tilde j}}{N_{I,\xi }},
\end{align}
where ${N_{I,\xi }}$ denotes the derivative of the basis function ${N_{I}(\xi)}$ with respect to $\xi$.
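The recursion of Eqs.\,(\ref{Bspline_basis_0}) and (\ref{Bspline_basis_p}) and the rationalization of Eq.\,(\ref{nurbs_basis_1d_def}) translate directly into code. The following Python sketch (illustrative only; as usual, $0/0$ terms in the recursion are treated as zero, and the special treatment of the right end point of the knot vector is omitted) evaluates all basis values at a parametric point $\xi$.
\begin{verbatim}
import numpy as np

def bspline_basis(knots, p, xi):
    """All B-spline basis values B_I^p(xi) by the Cox-de Boor
    recursion; 0/0 terms are dropped."""
    n = len(knots) - p - 1                 # number of basis functions
    B = np.array([1.0 if knots[I] <= xi < knots[I + 1] else 0.0
                  for I in range(len(knots) - 1)])
    for q in range(1, p + 1):
        Bq = np.zeros(len(knots) - q - 1)
        for I in range(len(Bq)):
            left = knots[I + q] - knots[I]
            right = knots[I + q + 1] - knots[I + 1]
            if left > 0.0:
                Bq[I] += (xi - knots[I]) / left * B[I]
            if right > 0.0:
                Bq[I] += (knots[I + q + 1] - xi) / right * B[I + 1]
        B = Bq
    return B[:n]

def nurbs_basis(knots, p, w, xi):
    """Rational basis N_I(xi) with control point weights w."""
    B = bspline_basis(knots, p, xi)
    return B * w / np.dot(B, w)
\end{verbatim}
For the open knot vector $\{0,0,0,1,1,1\}$ with $p=2$, for instance, the sketch returns the Bernstein values $(0.25,0.5,0.25)$ at $\xi=0.5$.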
\subsubsection{Discretization of the variational form}
In the discretization of the variational form using NURBS basis functions, an \textit{element} in one dimension is defined as a nonzero \textit{knot span}, i.e., the span between two distinct knot values. Let $\varXi_{e}$ denote the $e$th nonzero knot span (element); then the entire parametric domain is the union of all knot spans, i.e., $\varXi = {\varXi _1} \cup {\varXi _2} \cup \cdots \cup {\varXi_{n_{\mathrm{el}}}}$, where $n_{\mathrm{el}}$ denotes the total number of nonzero knot spans. Using the NURBS basis of Eq.\,(\ref{nurbs_basis_1d_def}), the variations of the central axis position and the two director vectors at $\xi\in{\varXi}_e$ are discretized as
\begin{equation}\label{beam_disp_director_disc_nurbs}
\delta {{\boldsymbol{y}}^h}(s(\xi )) = \left[ {\begin{array}{*{20}{c}}
{{N_1}(\xi )\,{{\bf{1}}_{9 \times 9}}}& \cdots &{{N_{{n_{{e}}}}}(\xi )\,{{\bf{1}}_{9 \times 9}}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{\delta {{\bf{y}}_1}}\\
\vdots \\
{\delta {{\bf{y}}_{{n_{{e}}}}}}
\end{array}} \right\} \eqqcolon {{\Bbb{N}}_e}\delta {{\bf{y}}^e},\,\,\text{with}\,\,\delta {{\bf{y}}_I} \coloneqq \left\{ {\begin{array}{*{20}{c}}
{\delta {{\boldsymbol{\varphi }}_I}}\\
{\delta {{\boldsymbol{d}}_{1I}}}\\
{\delta {{\boldsymbol{d}}_{2I}}}
\end{array}} \right\},
\end{equation}
where ${{\delta {\boldsymbol{\varphi}}}_I} \in {{\Bbb R}^3}$ and ${{\delta {\boldsymbol{d}}}_{\alpha I}} \in {{\Bbb R}^3}$ denote the displacement and director coefficient vectors, and ${\bf{1}}_{m\times{m}}$ denotes the identity matrix of dimension $m\times{m}$. Here, $n_e$ denotes the number of basis functions having local support in the knot span ${\varXi}_e$.
\begin{definition}
{It is noted that the spatial discretization is applied to the increment (variation) of the director vectors, not to the total director vectors. This is because the initial directors are assumed to be orthonormal, and the spatial discretization by NURBS basis functions does not preserve the orthonormality. The initial orthonormal director vectors at an arbitrary position on the central axis may be calculated in many different ways. For example, for a given $C^1$ continuous curve, the \textit{smallest rotation method} gives a smooth parameterization of initial orthonormal directors. More details on this method can be found in \citet{meier2014objective} and \citet{choi2019isogeometric}.}
\end{definition}
Using Eq.\,(\ref{beam_disp_director_disc_nurbs}) and the standard element assembly operator ${\bf{A}}$, we obtain
\begin{equation}
\label{new_disc_int_force_vec}
{G_{{\mathop{\rm int}} }}({{\boldsymbol{y}}^h},\delta {{\boldsymbol{y}}^h}) = \delta {{\bf{y}}}^{\mathrm{T}}{\bf{F}}_{{\mathop{\rm int}} },\,\,\text{with}\,\,{\bf{F}}_{{\mathop{\rm int}} } \coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} {\bf{F}}_{{\mathop{\rm int}}}^e\,\,{\rm{and}}\,\,\delta {{\bf{y}}} \coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} \delta {{\bf{y}}^e},
\end{equation}
where the element internal force vector is obtained, from Eq.\,(\ref{beam_int_vir_work_compact_Form}), by
\begin{equation}
{\bf{F}}^e_{{\mathop{\rm int}} } \coloneqq \int_{{\varXi _e}} {{{{{\Bbb{B}}_{\rm{total}}^{e\,{\mathrm{T}}}}}}{\boldsymbol{R}}\,\tilde j\,{\mathrm{d}}\xi },
\end{equation}
and the matrix ${{{\Bbb{B}}_{\rm{total}}^e}}$ is defined in Eq.\,(\ref{disc_grad_operator_B_tot}). The external virtual work of Eq.\,(\ref{ext_vir_work_compact_form}) is also discretized as
\begin{equation}
\label{new_disc_ext_vir_work_compact_form}
{G_{{\rm{ext}}}}({{\boldsymbol{y}}^h},\delta {{\boldsymbol{y}}^h}) = \delta {{\bf{y}}}^{\mathrm{T}}{\bf{F}}_{{\rm{ext}}},\,\,\text{with}\,\,{\bf{F}}_{{\rm{ext}}} \coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} {\bf{F}}^{e}_{\text{ext}} + {\bf{A}}{\left[ {{{{\boldsymbol{\bar R}}}_0}} \right]_{{\Gamma _{\text{N}}}}},
\end{equation}
where the second term on the right-hand side represents the assembly of load vector at the boundary $\Gamma_{\text{N}}$, and the element external load vector is obtained by
\begin{equation}
{\bf{F}}^e_{{\rm{ext}}} \coloneqq \int_{{\varXi _e}} {{{\Bbb{N}}_e^\mathrm{T}}{\boldsymbol{\bar R}}\,\tilde j\,{\mathrm{d}}\xi }.
\end{equation}
Similarly, the linearized internal virtual work of Eq.\,(\ref{beam_tangent_stiff_cont_form}) is discretized as
\begin{equation}
\label{new_disc_lin_int_force}
\Delta G_{{\mathop{\rm int}} }({{\boldsymbol{y}}^h};\delta {{\boldsymbol{y}}^h},\Delta {{\boldsymbol{y}}^h}) = \delta {{\bf{y}}}^{\mathrm{T}}{{\bf{K}}_\mathrm{int}}\,\Delta {{\bf{y}}}\,\,\,\text{with}\,\,{{\bf{K}}_\mathrm{int}} \coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} {{\bf{K}}^e_\mathrm{int}}.
\end{equation}
The element tangent stiffness matrix is obtained by
\begin{equation}\label{elem_tan_mat_fin}
{{\bf{K}}^e_\mathrm{int}} = \int_{{\varXi _e}} {\left( {{{{{\Bbb{B}}_{{\rm{total}}}^{e\,{\mathrm{T}}}}}}{\Bbb{C}}{\Bbb{B}}_{{\rm{total}}}^e + {{\Bbb{Y}}_e}^{\mathrm{T}}{{\boldsymbol{k}}_\mathrm{G}}{{\Bbb{Y}}_e}} \right)\tilde j\,{\mathrm{d}}\xi },
\end{equation}
where ${{\Bbb{Y}}_e}$ is defined in Eq.\,(\ref{def_Y_e_g_tan}). It is noted that the global tangent stiffness matrix $\bf{K}_\mathrm{int}$ is symmetric, since ${\Bbb{C}}$ and ${{\boldsymbol{k}}_\mathrm{G}}$ are symmetric. Substituting Eqs.\,(\ref{new_disc_int_force_vec}), (\ref{new_disc_ext_vir_work_compact_form}), and Eq.\,(\ref{new_disc_lin_int_force}) into Eq.\,(\ref{new_config_update_lin_eq}) leads to
\begin{equation}
\label{disc_var_eq_global}
\delta {{\bf{y}}}^{\mathrm{T}}\,\leftidx{^{n + 1}}{{{\bf{K}}}}^{(i - 1)}\,\Delta {{\bf{y}}} = \delta {{\bf{y}}}^{\mathrm{T}}\,\leftidx{^{n + 1}}{{{\bf{R}}}}^{(i - 1)},
\end{equation}
where $\bf{K}\coloneqq{\bf{K}_\mathrm{int}}-{\bf{K}_\mathrm{ext}}$, and the global load stiffness matrix ${\bf{K}_\mathrm{ext}}$ appears, e.g., due to non-conservative follower loads; it is generally unsymmetric (see, for example, Eq.\,(\ref{pure_bend_lstiff_op_disc_new})). The global residual vector is
\begin{equation}
{\bf{R}}\coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} \left( {{\bf{F}}^e_{\rm{ext}} - {\bf{F}}^e_{{\mathop{\rm int}} }} \right).
\end{equation}
After applying the kinematic boundary conditions to Eq.\,(\ref{disc_var_eq_global}), we obtain
\begin{equation}\label{std_global_eq_reduced}
\leftidx{^{n + 1}}{{{\bf{K}}}}^{(i - 1)}_\mathrm{r}\,\Delta {{{\bf{y}}}_{\text{r}}} = \leftidx{^{n + 1}}{{{{\bf{R}}}_{\text{r}}^{(i - 1)}}},
\end{equation}
where $(\bullet)_{\text{r}}$ denotes the \textit{reduced} vector or matrix after applying the kinematic boundary conditions.
\begin{definition} The symmetry of the global tangent stiffness matrix $\bf{K}$ depends solely on whether the external loading is conservative. If a non-conservative load is applied, the load stiffness leads to an unsymmetric tangent stiffness matrix.
\end{definition}
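Schematically, one iteration of the Newton procedure, Eqs.\,(\ref{disc_var_eq_global}) and (\ref{std_global_eq_reduced}), can be sketched as follows (illustrative only; \texttt{assemble} is a hypothetical user routine standing in for the element loops and the assembly operator ${\bf A}$, and \texttt{free} indexes the degrees-of-freedom remaining after the kinematic boundary conditions are applied).
\begin{verbatim}
import numpy as np

def newton_step(y, assemble, free):
    """One Newton iteration: solve K_r dy_r = R_r on the free
    DOFs and update the configuration. `assemble(y)` is a
    hypothetical routine returning the global tangent
    K = K_int - K_ext and the global residual R."""
    K, R = assemble(y)
    dy = np.zeros_like(y)
    dy[free] = np.linalg.solve(K[np.ix_(free, free)], R[free])
    return y + dy, np.linalg.norm(R[free])
\end{verbatim}
The iteration is repeated until the norm of the reduced residual falls below a prescribed tolerance.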
\section{{Alleviation of Poisson locking by the EAS method}}
\label{eas_formulation}
{In order to alleviate Poisson locking, the in-plane strain field in the cross-section should be at least linear. We employ the EAS method, and we modify the Green-Lagrange strain tensor as
\begin{equation}\label{modify_green_lag_strn_enhanced}
{\boldsymbol{E}} = \underbrace {{{\boldsymbol{E}^{\text{c}}}}}_{{\rm{compatible}}} + \underbrace {{\boldsymbol{\tilde E}}}_{{\rm{enhanced}}},
\end{equation}
where the compatible strain part is the same as in Eq.\,(\ref{def_GL_strain_1}), and the additional strain part ${\boldsymbol{\tilde E}}$, which is incompatible, is intended to enhance the in-plane strain components of the cross-section, and is expressed by
\begin{equation}
{{\boldsymbol{\tilde E}}} = {\tilde E_{\alpha \beta }}\,{{\boldsymbol{G}}^\alpha } \otimes {{\boldsymbol{G}}^\beta }.
\end{equation}
The enhanced strain components are assumed to be linear and bilinear in the cross-sectional coordinates $\xi^1$ and $\xi^2$, i.e.,
\begin{equation}\label{enhanced_strn}
\left\{ {\begin{array}{*{20}{c}}
{{{\tilde E}_{11}}}\\
{{{\tilde E}_{22}}}\\
{2{{\tilde E}_{12}}}
\end{array}} \right\} = \left[ {\begin{array}{*{20}{c}}
{{\xi ^1}}&{{\xi ^2}}&{{\xi ^1}{\xi ^2}}&0&0&0&0&0&0\\
0&0&0&{{\xi ^1}}&{{\xi ^2}}&{{\xi ^1}{\xi ^2}}&0&0&0\\
0&0&0&0&0&0&{{\xi ^1}}&{{\xi ^2}}&{{\xi ^1}{\xi ^2}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{\alpha _1}}\\
{{\alpha _2}}\\
\vdots \\
{{\alpha _8}}\\
{{\alpha _9}}
\end{array}} \right\} \eqqcolon {\boldsymbol{\Gamma \alpha }},
\end{equation}
where nine independent enhanced strain parameters $\alpha_i\in{L_2}(0,L)\,\,(i=1,\ldots,9)$ are introduced. ${L_2}(0,L)$ denotes the space of functions that are square-integrable on the domain $(0,L)\ni{s}$. Even though the additional Green-Lagrange strain parts may include quadratic or higher order terms, we enrich only the linear strain field, since the enhanced strain is required to be orthogonal to constant stress fields in order to satisfy the stability condition \citep{betsch19964}.
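For reference, the interpolation matrix $\boldsymbol{\Gamma}$ of Eq.\,(\ref{enhanced_strn}) has a simple block structure; a minimal Python sketch (illustrative only) is
\begin{verbatim}
import numpy as np

def Gamma(xi1, xi2):
    """Interpolation matrix of Eq. (enhanced_strn) relating the
    nine parameters alpha to (E~_11, E~_22, 2 E~_12)."""
    row = np.array([xi1, xi2, xi1 * xi2])
    G = np.zeros((3, 9))
    for k in range(3):
        G[k, 3 * k:3 * k + 3] = row
    return G
\end{verbatim}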
\begin{definition}
\label{remark_5param_form_eas}
In this paper, it is shown that applying the expression of Eq.\,(\ref{inplane_without_bilinear_GL}) may lead to erroneous results. For example, following Eq.\,(\ref{inplane_without_bilinear_GL}), one could define the enhanced strain part as
\begin{align}
\label{eas_strn_5param_form}
\left\{ {\begin{array}{*{20}{c}}
{\tilde E_{11}^ * }\\
{\tilde E_{22}^ * }\\
{2\tilde E_{12}^ * }
\end{array}} \right\} = \left[ {\begin{array}{*{20}{c}}
{{\xi ^1}}&0&0&0&0\\
0&{{\xi ^2}}&0&0&0\\
0&0&{{\xi ^1}}&{{\xi ^2}}&{{\xi ^1}{\xi ^2}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{\alpha^{*}_1}}\\
{{\alpha^{*}_2}}\\
{{\alpha^{*}_3}}\\
{{\alpha^{*}_4}}\\
{{\alpha^{*}_5}}
\end{array}} \right\},
\end{align}
where five enhanced strain parameters $\alpha^{*}_i\in{L_2}(0,L)\,\,(i=1,\ldots,5)$ are introduced. In the numerical examples of Section \ref{ex_cant_b_end_f}, it is shown that the EAS method based on Eq.\,(\ref{eas_strn_5param_form}) still suffers from significant Poisson locking.
\end{definition}
Applying the modified Green-Lagrange strain tensor in the three-field Hu-Washizu variational principle, the total strain energy is written as \citep{bischoff1997shear}
\begin{equation}
\tilde U\left(\boldsymbol{y},\boldsymbol{\tilde {E}},\boldsymbol{\tilde {S}}\right) = \int_0^L {\int_{\mathcal{A}} {\left\{ {\Psi ({{\boldsymbol{E}}^{\rm{c}}} + {\boldsymbol{\tilde E}}) - {\boldsymbol{\tilde S}}:{\boldsymbol{\tilde E}}} \right\}{j_0}\,{\mathrm{d}}{\mathcal{A}}\,{\mathrm{d}}s} }.
\end{equation}
The following condition, that the stress field is $L_2$-orthogonal to the enhanced strain field, allows the stress field to be eliminated from the formulation, which leads to a \textit{two-field variational formulation}:
\begin{equation}
\label{ortho_condition}
\int_0^L {\int_{\mathcal{A}} {\left({\boldsymbol{\tilde S}}:{\boldsymbol{\tilde E}}\,{j_0}\right)\,{\mathrm{d}}{\mathcal{A}}\,{\mathrm{d}}s} }=0.
\end{equation}
The independent stress field $\boldsymbol{\tilde S}$, which satisfies the orthogonality condition of Eq.\,(\ref{ortho_condition}), does not explicitly appear in the subsequent formulation, and is generally different from the stress field $\boldsymbol{S}$ calculated by the constitutive law\footnote{See page 2,557 of \cite{buchter1994three}.}.
The first variation of the total strain energy is obtained by
\begin{align}\label{int_vir_work_modified_enhanced}
\delta {\tilde U} &= \int_0^L {\int_{\mathcal{A}} {\left( {\frac{{\partial \Psi }}{{\partial {\boldsymbol{E}}}}:\delta {{\boldsymbol{E}}^{\text{c}}}\,{j_0}} \right){\mathrm{d}}{\mathcal{A}}}\,{\mathrm{d}}s} + \int_0^L {\int_{\mathcal{A}} {\left( {\frac{{\partial \Psi }}{{\partial {\boldsymbol{E}}}}} :\delta {\boldsymbol{\tilde E}}\,{j_0}\right)\,{\mathrm{d}}{\mathcal{A}}}\,{\mathrm{d}}s}.
\end{align}
We rewrite Eq.\,(\ref{int_vir_work_modified_enhanced}), using Eqs.\,(\ref{tot_strn_energy_beam_time_deriv}), (\ref{del_eps_hat_compact}), and (\ref{enhanced_strn}), as
\begin{align}\label{re_mod_int_virt_work_enhance}
{G_{{\mathop{\rm int}} }}(\boldsymbol{\eta},{\delta \boldsymbol{\eta}}) \equiv{\delta{\tilde U}}= \int_0^L {\left( {\delta {{\boldsymbol{y}}^{\mathrm{T}}}{{{{{\Bbb{B}}_{\text{total}}^{\mathrm{T}}}}}}{\boldsymbol{R}}} \right){\mathrm{d}}s} + \int_0^L {{\delta {{\boldsymbol{\alpha }}^{\mathrm{T}}}\boldsymbol{s}}\,{\mathrm{d}}s},
\end{align}
where $\delta\boldsymbol{\eta}\coloneqq\left[\delta {\boldsymbol{y}^\mathrm{T}}, \delta {\boldsymbol{\alpha}}^\mathrm{T}\right]^\mathrm{T}$, and
\begin{align}
{\boldsymbol{s}} \coloneqq \int_{\mathcal{A}} {\left( {{\boldsymbol{\Gamma }}^\mathrm{T}}{\munderbar{\boldsymbol{\hat S}}}\,{j_0} \right){\mathrm{d}}{\mathcal{A}}}\,\,\mathrm{with}\,\,{\munderbar{\boldsymbol{\hat S}}} \coloneqq {\left[S^{11}, S^{22}, S^{12}\right]^\mathrm{T}} = {\left[ {\frac{{\partial \Psi }}{{\partial {E_{11}}}},\frac{{\partial \Psi }}{{\partial {E_{22}}}},\frac{{\partial \Psi }}{{\partial {E_{12}}}}} \right]^\mathrm{T}}.
\end{align}
\subsection{Linearization}
The directional derivative of the internal virtual work of Eq.\,(\ref{re_mod_int_virt_work_enhance}) in the direction of $\Delta\boldsymbol{\eta}\coloneqq\left[\Delta {\boldsymbol{y}^\mathrm{T}}, \Delta {\boldsymbol{\alpha}}^\mathrm{T}\right]^\mathrm{T}$ is given by
\begin{alignat}{5}
\Delta G_{{\mathop{\rm int}} } ({\boldsymbol{\eta }};\delta {\boldsymbol{\eta }},\Delta {\boldsymbol{\eta }}) &&&\coloneqq D{G_{{\mathop{\rm int}} }} \cdot {\Delta \boldsymbol{\eta }}\nonumber\\
&&&= \int_0^L {\delta {{\boldsymbol{\eta }}^{\mathrm{T}}}\left[ {\begin{array}{*{20}{c}}
{{{\Bbb{B}}^{{\rm{total}}}}^{\mathrm{T}}{\Bbb{C}}{{\Bbb{B}}^{{\rm{total}}}} + {{\boldsymbol{Y}}^{\mathrm{T}}}{{\boldsymbol{k}}_{\rm{G}}}{\boldsymbol{Y}}}&{{{\Bbb{B}}^{{\rm{total}}}}^{\mathrm{T}}{{\Bbb{C}}^{{\rm{ay}}}}^{\mathrm{T}}}\\
{\mathrm{sym.}}&{{{\Bbb{C}}^{{\rm{aa}}}}}
\end{array}} \right]\Delta {\boldsymbol{\eta }}\,{\mathrm{d}}s},
\end{alignat}
where we use the following matrices
\begin{equation}
{{\Bbb{C}}^{\mathrm{ay}}} \coloneqq \int_{\mathcal{A}} {\left( {{{\boldsymbol{\Gamma }}^{\mathrm{T}}}{\Bbb{\bar C}}^{\mathrm{ay}}{{\munderbar {\boldsymbol{D}}}}\,{j_0}} \right){\mathrm{d}}{\mathcal{A}}}\,\,\,\mathrm{with}\,\,\,{\Bbb{\bar C}}^{\mathrm{ay}}\coloneqq{\left[ {\begin{array}{*{20}{c}}
{{C^{1111}}}&{{C^{1122}}}&{{C^{1133}}}&{{C^{1112}}}&{{C^{1113}}}&{{C^{1123}}}\\
{{C^{2211}}}&{{C^{2222}}}&{{C^{2233}}}&{{C^{2212}}}&{{C^{2213}}}&{{C^{2223}}}\\
{{C^{1211}}}&{{C^{1222}}}&{{C^{1233}}}&{{C^{1212}}}&{{C^{1213}}}&{{C^{1223}}}
\end{array}} \right]},
\end{equation}
and
\begin{equation}
{{\Bbb{C}}^{\text{aa}}} \coloneqq \int_{\mathcal{A}} {\left( {{{\boldsymbol{\Gamma }}^{\mathrm{T}}}{\bar {\Bbb{C}}}^{\text{aa}}\,{\boldsymbol{\Gamma }}\,{j_0}} \right){\mathrm{d}}{\mathcal{A}}}\,\,\,\mathrm{with}\,\,\,{{\Bbb{\bar C}}^{\text{aa}}} \coloneqq \left[ {\begin{array}{*{20}{c}}
{{C^{1111}}}&{{C^{1122}}}&{{C^{1112}}}\\
{}&{{C^{2222}}}&{{C^{2212}}}\\
{{\rm{sym}}{\rm{.}}}&{}&{{C^{1212}}}
\end{array}} \right].
\end{equation}
\subsection{Solution update procedure}
The iterative process to find solution ${}^{n + 1}{\boldsymbol{\eta}} \coloneqq {\left[ {{}^{n + 1}{{\boldsymbol{y}}^{\mathrm{T}}},{}^{n + 1}{{\boldsymbol{\alpha}}^{\mathrm{T}}}} \right]^{\mathrm{T}}}$ at the $(n+1)$th load step is stated as: For a given solution ${}^{n + 1}{\boldsymbol{\eta }}^{(i-1)}$ at the $(i-1)$th iteration of the $(n+1)$th load step, find the solution increment $\Delta {\boldsymbol{\eta }}$,
where $\Delta {\boldsymbol{y}} \in \mathcal{V}$ and $\Delta {\boldsymbol{\alpha }} \in \left[{L_2}(0,L)\right]^{d}$, such that
\begin{equation}\label{enhance_strn_lin_var_eq_newton}
{\Delta G}_{{\mathop{\rm int}} }({}^{n + 1}{\boldsymbol{\eta }}^{(i-1)};\delta {\boldsymbol{\eta }},\Delta {\boldsymbol{\eta }}) = {G_{{\rm{ext}}}}(\delta {\boldsymbol{y}}) - {G_{{\rm{int}}}}({}^{n + 1}{\boldsymbol{\eta }}^{(i-1)},\delta {\boldsymbol{\eta }}),\,\,{\forall}{\delta {\boldsymbol{y}}} \in \mathcal{V},\,\,{\rm{and}}\,\,{\forall}\delta {\boldsymbol{\alpha }}\in \left[{L_2}(0,L)\right]^{d},
\end{equation}
where the dimension of the solution space of enhanced strain parameters can be $d=9$ or $d=5$ (see Remark \ref{remark_5param_form_eas}). Since the enhanced strain parameters are chosen to belong to the space ${L_2}(0,L)$, no inter-element continuity is required. Thus, it is possible to condense out those additional degrees-of-freedom at the element level \citep{bischoff1997shear}. The solution is updated by
\begin{alignat}{2}
\left.\begin{array}{c}
\begin{aligned}
{}^{n + 1}{\boldsymbol{y}}^{(i)} &= {}^{n + 1}{\boldsymbol{y}}^{(i-1)} + \Delta {\boldsymbol{y}},&{{}^{n + 1}{\boldsymbol{y}}^{(0)}}&= {}^n{\boldsymbol{y}},\\
{}^{n + 1}{\boldsymbol{\alpha }}^{(i)} &= {}^{n + 1}{\boldsymbol{\alpha }}^{(i-1)} + \Delta {\boldsymbol{\alpha }},&{{}^{n + 1}{\boldsymbol{\alpha }}^{(0)}}&= {}^n{\boldsymbol{\alpha}}.\\
\end{aligned}
\end{array} \right\}
\end{alignat}}
\subsection{{Discretization of the enhanced strain parameters and static condensation}}
{We reparameterize each of the $n_\text{el}$ elements of the central axis by a parametric coordinate ${\tilde \xi} \in \left[ {-1,1}\right]$. We define a linear mapping between the parametric domain of the $e$th element ${\varXi _e} = \left[ {\xi _e^1,\xi _e^2} \right]\ni \xi$ and $\left[ { - 1,1} \right] \ni \tilde \xi$, as
\begin{equation}
\tilde \xi = 1 - 2\left( {\frac{{\xi _e^2 - \xi }}{{\xi _e^2 - \xi _e^1}}} \right).
\end{equation}
Then, within each element the vector of virtual enhanced strain parameters ${\delta\boldsymbol{\alpha }}={\delta\boldsymbol{\alpha }}(\tilde \xi )$ is linearly interpolated as
\begin{equation}\label{interp_var_alpha_beta}
{\delta\boldsymbol{\alpha }^h}(\tilde \xi ) = \left[ {\begin{array}{*{20}{c}}
{{{\tilde N}_1}(\tilde \xi )\,{{\bf{1}}_{9 \times 9}}}&{{{\tilde N}_2}(\tilde \xi )\,{{\bf{1}}_{9 \times 9}}}
\end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{\delta{\boldsymbol{\alpha }}_1}\\
{\delta{\boldsymbol{\alpha }}_2}
\end{array}} \right\} \eqqcolon \,{{\tilde{\Bbb{N}}}_{e}(\tilde \xi)}{\delta{\boldsymbol{\alpha }}^e},
\end{equation}
with nodal vectors of enhanced strain parameters ${\delta\boldsymbol{\alpha}_i}\,(i=1,2)$. In this paper, we use linear basis functions, given by
\begin{equation}
\left. {\begin{array}{lcl}\begin{aligned}
{{\tilde N}_1}(\tilde \xi ) &= (1 - \tilde \xi )/2\\
{{\tilde N}_2}(\tilde \xi ) &= (1 + \tilde \xi )/2
\end{aligned}\end{array}} \right\},\,\,\tilde \xi \in \left[ { - 1,1} \right].
\end{equation}
Similarly, the vector of incremental enhanced strain parameters is interpolated within each element, as
\begin{equation}\label{interp_del_alpha}
{\Delta\boldsymbol{\alpha }^h}(\tilde \xi ) = {{{\tilde{\Bbb{N}}}}_{e}(\tilde \xi)}\,{\Delta{\boldsymbol{\alpha }}^e}.
\end{equation}
Substituting Eq.\,(\ref{interp_var_alpha_beta}) into the internal virtual work of Eq.\,(\ref{re_mod_int_virt_work_enhance}), and using the standard element assembly process, we have
\begin{equation}\label{disc_re_mod_int_vir_work_enhance}
{G_{{\mathop{\rm int}} }}({{\boldsymbol{\eta}}^h},\delta {{\boldsymbol{\eta}}^h}) = \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} {\left\{ {\begin{array}{*{20}{c}}
{\delta {{\bf{y}}^e}}\\
{\delta {{\boldsymbol{\alpha }}^e}}\\
\end{array}} \right\}^{\mathrm{T}}}\left\{ {\begin{array}{*{20}{c}}
{{\bf{F}}^e_{{\mathop{\rm int}} }}\\
{{{\bf{s}}^e}}
\end{array}} \right\},
\end{equation}
where we use
\begin{equation}
{{\bf{s}}^e} \coloneqq \int_{{\varXi _e}} {\left\{ {{\tilde j}\,{{{\tilde{\Bbb{N}}}_e}^{\mathrm{T}}}\int_{\mathcal{A}} {\left( {{{\boldsymbol{\Gamma }}^{\mathrm{T}}}{\munderbar{\boldsymbol{\hat S}}}\,{j_0}} \right){\mathrm{d}}{\mathcal{A}}} } \right\}{\mathrm{d}}\xi }.
\end{equation}
The linearized variational equation (Eq.\,(\ref{enhance_strn_lin_var_eq_newton})) is discretized as follows. For the given solution at the $(i-1)$th iteration of the $(n+1)$th load step, we find the solution increment such that
\begin{align}\label{disc_lin_var_eq_newton}
&\mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} {\left({\left\{ {\begin{array}{*{20}{c}}
{\delta {{\bf{y}}^e}}\\
{\delta {{\boldsymbol{\alpha }}^e}}
\end{array}} \right\}^{\mathrm{T}}}\,\leftidx{^{n+1}}{\left[ {\begin{array}{*{20}{c}}
{{{\bf{K}}_\mathrm{int}^e}}&{{\bf{K}}{{^e_{\text{ay}}}^{\mathrm{T}}}}\\
{\mathrm{sym.}}&{{\bf{K}}^e_{\text{aa}}}
\end{array}} \right]}^{(i-1)}\left\{ {\begin{array}{*{20}{c}}
{\Delta {{\bf{y}}^e}}\\
{\Delta {{\boldsymbol{\alpha }}^e}}
\end{array}} \right\}\right)} \nonumber\\
&= \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} {\left({\left\{ {\begin{array}{*{20}{c}}
{\delta {{\bf{y}}^e}}\\
{\delta {{\boldsymbol{\alpha }}^e}}
\end{array}} \right\}^{\mathrm{T}}}\,\leftidx{^{n+1}}{\left\{ {\begin{array}{*{20}{c}}
{{\bf{F}}^e_{\rm{ext}} - {\bf{F}}^e_{{\mathop{\rm int}} }}\\
{-{{\bf{s}}^e}}
\end{array}} \right\}}^{(i-1)}\right)},
\end{align}
where we use
\begin{equation}
{\bf{K}}^e_{\text{ay}} \coloneqq \int_{{\varXi _e}} {\left( {{\Bbb{\tilde N}}{{_e}^{\mathrm{T}}}{{\Bbb{C}}^{\text{ay}}}{\Bbb{B}}_e^{{\rm{total}}}\,\tilde j} \right){\mathrm{d}}\xi },
\end{equation}
and
\begin{equation}
{\bf{K}}^e_{\text{aa}} \coloneqq \int_{{\varXi _e}} {\left( {{\Bbb{\tilde N}}{{_e}^{\mathrm{T}}}{{\Bbb{C}}^{\text{aa}}}{{\Bbb{\tilde N}}}_e\,\tilde j} \right){\mathrm{d}}\xi }.
\end{equation}
Since we allow for a discontinuity of the enhanced strain field between adjacent elements, Eq.\,(\ref{disc_lin_var_eq_newton}) can be rewritten as
\begin{subequations}
\begin{alignat}{3}
\mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} \left\{ {\delta {{\bf{y}}^e}^{\mathrm{T}}\left( {{{{\bf{K}}_\mathrm{int}^e}}\Delta {{\bf{y}}^e} + {{\bf{K}}{{^e_{\text{ay}}}^{\mathrm{T}}}}\Delta {{\boldsymbol{\alpha }}^e}} \right)} \right\} &= \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} \left\{ {\delta {{\bf{y}}^e}^{\mathrm{T}}\,\left( {{\bf{F}}^e_{\text{ext}} - {\bf{F}}^e_{{\mathop{\rm int}} }} \right)}\right\}\label{enhance_elem_y_eq},\\
\delta {{\boldsymbol{\alpha }}^e}^{\mathrm{T}}\left( {{{ {{\bf{K}}^e_{\text{ay}}}}}\Delta {{\bf{y}}^e} + {{{{\bf{K}}^e_{\text{aa}}}}}\Delta {{\boldsymbol{\alpha }}^e}}\right) &= -\delta {{\boldsymbol{\alpha }}^e}^{\mathrm{T}}{{\bf{s}}^e},\,e =1,2,...,{n_{{\rm{el}}}}.\label{enhance_elem_alpha_eq}
\end{alignat}
\end{subequations}
From Eq.\,(\ref{enhance_elem_alpha_eq}), we obtain
\begin{equation}\label{static_cond_del_alpha_eq}
\Delta {{\boldsymbol{\alpha }}^e} = - \leftidx{^{n + 1}}{\left[ {{\bf{K}}{{^e_{\text{aa}}}^{ - 1}}} \right]}^{(i - 1)}\left( {{}^{n + 1}\left[{{\bf{s}}^e}\right]^{(i - 1)} + {}^{n + 1}{{\left[ {{\bf{K}}^e_{\text{ay}}} \right]}^{(i - 1)}}\Delta {{\bf{y}}^e}} \right),\,e = 1,2,...,{n_{{\rm{el}}}}.
\end{equation}
Substituting Eq.\,(\ref{static_cond_del_alpha_eq}) into Eq.\,(\ref{enhance_elem_y_eq}) leads to
\begin{equation}
\label{eas_var_eq_1}
\delta {{\bf{y}}}^{\mathrm{T}}\leftidx{^{n + 1}}{{{\bf{\tilde K}}}}^{(i - 1)}\Delta {{\bf{y}}} = \delta {{\bf{y}}}^{\mathrm{T}}\leftidx{^{n + 1}}{{{\bf{\tilde R}}}}^{(i - 1)},
\end{equation}
where the global tangent stiffness matrix is defined as
\begin{equation}\label{enhanced_global_eq}
{\bf{\tilde K}} \coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} \left( {{{\bf{K}}_\mathrm{int}^e} - {\bf{K}}{{^e_{\text{ay}}}^{\mathrm{T}}}{\bf{K}}{{^e_{\text{aa}}}^{ - 1}}{\bf{K}}^e_{\text{ay}}} \right),
\end{equation}
and the global residual vector is
\begin{equation}
{\bf{\tilde R}}\coloneqq \mathop {\bf{A}}\limits_{e = 1}^{{n_{{\rm{el}}}}} \left( {{\bf{F}}^e_{\rm{ext}} - {\bf{F}}^e_{{\mathop{\rm int}} } + {\bf{K}}{{^{e\,\mathrm{T}}_{\text{ay}}}}{\bf{K}}{{^{e\,{-1}}_{\text{aa}}}}{{\bf{s}}^e}} \right).
\end{equation}
After applying the kinematic boundary conditions to Eq.\,(\ref{eas_var_eq_1}), we obtain
\begin{equation}\label{enhanced_global_eq_reduced}
{{\leftidx{^{n + 1}}{{\bf{\tilde K}}}_{\text{r}}^{(i - 1)}}}\Delta {{{\bf{y}}}_{\text{r}}} = {{\leftidx{^{n + 1}}{{\bf{\tilde R}}}_{\text{r}}^{(i - 1)}}}.
\end{equation}}
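The element-level elimination of Eqs.\,(\ref{static_cond_del_alpha_eq})--(\ref{enhanced_global_eq}) can be summarized in a short sketch (illustrative only; the element arrays are assumed to be given as NumPy matrices).
\begin{verbatim}
import numpy as np

def condense(K_int_e, K_ay_e, K_aa_e, F_ext_e, F_int_e, s_e):
    """Element-level static condensation: condensed stiffness
    and residual of Eqs. (enhanced_global_eq) and following."""
    Kaa_inv = np.linalg.inv(K_aa_e)
    K_tilde = K_int_e - K_ay_e.T @ Kaa_inv @ K_ay_e
    R_tilde = F_ext_e - F_int_e + K_ay_e.T @ Kaa_inv @ s_e
    return K_tilde, R_tilde

def recover_alpha(K_ay_e, K_aa_e, s_e, dy_e):
    """Post-solve recovery of the enhanced strain increment,
    Eq. (static_cond_del_alpha_eq)."""
    return -np.linalg.solve(K_aa_e, s_e + K_ay_e @ dy_e)
\end{verbatim}
After the global solve for $\Delta{\bf y}$, the enhanced strain increments are recovered element-by-element from Eq.\,(\ref{static_cond_del_alpha_eq}).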
\section{Numerical examples}
\label{num_ex}
We verify the presented beam formulation by comparison with reference solutions from the isogeometric analysis of three-dimensional hyperelasticity using brick elements. The brick elements use different degrees of basis functions in each parametric coordinate direction. We denote this by `$\mathrm{deg.}=({p_{\mathrm{L}}},{p_{\mathrm{W}}},{p_{\mathrm{H}}})$', where $p_{\mathrm{L}}$, $p_{\mathrm{W}}$, and $p_{\mathrm{H}}$ denote the degrees of basis functions along the length (L), width (W), and height (H), respectively. Further, we indicate the number of elements in each of those directions by $n_{\mathrm{el}}={n_{\mathrm{el}}^{\mathrm{L}}}\times{n_{\mathrm{el}}^{\mathrm{W}}}\times{n_{\mathrm{el}}^{\mathrm{H}}}$.
We employ two different hyperelastic material models: the St.\,Venant-Kirchhoff and the compressible Neo-Hookean type, abbreviated as `SVK' and `NH', respectively. We also use the following abbreviations to indicate our three beam formulations.
\begin{itemize}
\item{Ext.-dir.-std.: The standard extensible director beam formulation.}
\item{Ext.-dir.-EAS: The EAS method with nine enhanced strain parameters, i.e., Eq.\,(\ref{enhanced_strn}).}
\item{Ext.-dir.-EAS-5p.: The EAS method with five enhanced strain parameters, i.e., Eq.\,(\ref{eas_strn_5param_form}).}
\end{itemize}
In the beam formulation, the integration over the cross-section is evaluated numerically. We use standard Gauss integration for the central axis and the cross-section, where $(p+1)$ integration points are used for the central axis, and the number of integration points for the cross-section is given in each numerical example. Here, $p$ denotes the degree of the basis functions approximating the central axis displacement and director fields.
\subsection{Uniaxial tension of a straight beam}
In order to verify the capability of the presented beam formulation to represent finite axial and transverse normal strains, we consider uniaxial tension of a straight beam with nonzero Poisson's ratio. The beam has length $L=1\text{m}$ and a circular cross-section with two cases for its radius, $R=0.05\text{m}$ and $R=0.1\text{m}$, while Young's modulus and Poisson's ratio are $E=1\text{GPa}$ and $\nu=0.3$, respectively. Two different kinematic boundary conditions at the two ends of the beam (i.e., $s\in\left\{0,L\right\}$) are considered: first, the cross-section is allowed to deform at both ends (BC{\#}1), and second, this is not allowed (BC{\#}2). A traction ${{\boldsymbol{\bar T}}_0} = {\left[ {{{\bar T}_0},0,0} \right]^{\mathrm{T}}}$, where ${{\bar T}_0} = {10^6}\mathrm{kN}/{\mathrm{m}^2}$, is applied on the undeformed cross-section at $s=L$. In the beam model, these two boundary conditions are implemented as follows.
\begin{itemize}
\item BC{\#}1: Central axis displacement is constrained at one end, but the end directors are free, i.e.,
\begin{equation}
\Delta {\boldsymbol{\varphi }} = {\bf{0}}\,\,\text{at}\,\,s = 0,\,\,\text{and}\,\,{\boldsymbol{d}_1}\,\,\text{and}\,\,{\boldsymbol{d}_2}\,\,\text{are free at}\,\,s \in \left\{ {0,L} \right\}.\nonumber
\end{equation}
\item BC{\#}2: All degrees-of-freedom are constrained at one end, and the directors are fixed at the other end, that is,
\begin{equation}
\Delta {\boldsymbol{\varphi }} = \Delta {{\boldsymbol{d}}_1} = \Delta {{\boldsymbol{d}}_2} = {\bf{0}}\,\,\text{at}\,\,s = 0,\,\,\text{and}\,\,\Delta {{\boldsymbol{d}}_1} = \Delta {{\boldsymbol{d}}_2} = {\bf{0}}\,\,\text{at}\,\,s = L.\nonumber
\end{equation}
\end{itemize}
{In the numerical integration over the circular cross-section of the beam, we employ polar coordinates $r$ and $\theta$ (see Remark \ref{num_integ_polar_circ}), and four Gauss integration points are used for each of the variables $r$ and $\theta$ in each quarter of the domain $0\leq{\theta}<{2\pi}$.} Fig.\,\ref{uniaxial_tens_undeformed} shows the undeformed configuration, and Fig.\,\ref{deform_str_uni_axial_tens} shows the deformed configurations for the different boundary conditions and material models, where the decrease of the cross-sectional area is noticeable. We compare the lateral displacement at surface point\,A, indicated in Fig.\,\ref{uniaxial_tens_undeformed}, with the reference solutions obtained from IGA using brick elements (convergence results for the lateral displacement at point A and the volume change can be found in Tables \ref{app_utens_conv_test_svk_010_fixed} and \ref{app_utens_conv_test_nh_010_fixed}). Tables \ref{str_utens_verif_surf_disp_svk} and \ref{str_utens_verif_surf_disp_nh} compare the lateral ($Y$-directional) displacements. The results from the developed beam model are in excellent agreement with the reference solutions. In Fig.\,\ref{uniaxial_tens_vol_ratio}, we can also verify that the volume change of the beam agrees with the reference solutions in all cases of the selected materials and cross-section radii. As expected, the two material models show similar behavior within the small strain range; however, their behavior becomes different for large strains. Note that the SVK material shows an unphysical volume decrease beyond a certain strain, which confirms the unsuitability of this material model for large strains.
\begin{figure}[htp]
\centering
\includegraphics[width=0.375\linewidth]{Figure/0_num_ex/ex_uniaxial_undeformed_5_low_res.png}
\caption{Uniaxial tension of a straight beam: Undeformed configuration. The directions of ${\xi^1}$ and $\xi^2$ represent the chosen principal directions of the circular cross-section.}
\label{uniaxial_tens_undeformed}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.375\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_uni_tens_free_svk_low_res.png}
\caption{End directors free (SVK).}
\label{uniaxial_tens_deformed_d_free_svk}
\end{subfigure}
\begin{subfigure}[b] {0.525\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_uni_tens_free_nh_low_res.png}
\caption{End directors free (NH).}
\label{uniaxial_tens_deformed_d_free_nh}
\end{subfigure}
\begin{subfigure}[b] {0.325\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_uni_tens_fix_svk_low_res.png}
\caption{End directors fixed (SVK).}
\label{uniaxial_tens_deformed_d_free_svk}
\end{subfigure}
\begin{subfigure}[b] {0.525\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_uni_tens_fix_nh_low_res.png}
\caption{End directors fixed (NH).}
\label{uniaxial_tens_deformed_d_free_nh}
\end{subfigure}\hspace{2.5mm}
\caption{Uniaxial tension of a straight beam: Undeformed and deformed configurations. The color represents the ratio of the current cross-sectional area ($A$) to the initial one ($A_0$). 40 cubic B-spline elements have been used for the analysis.}
\label{deform_str_uni_axial_tens}
\end{figure}
\begin{table}[]
\small
\centering
\caption{Uniaxial tension of a straight beam: Verification of the lateral displacement at surface point\,$\mathrm{A}$ (St.\,Venant-Kirchhoff material). All results are obtained by IGA.}
\label{str_utens_verif_surf_disp_svk}
\begin{tabular}{lcclcclcr}
\Xhline{3\arrayrulewidth}
&\multicolumn{2}{c}{End directors free}& &\multicolumn{2}{c}{End directors fixed}& &\multicolumn{2}{c}{Ratio}
\\ \cline{2-3} \cline{5-6} \cline{8-9}
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$R$\\{[}m{]}\end{tabular}} &\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Brick, deg.=(2,2,2),\\${n_\mathrm{el}}=320\times20\times20$,\\{[}m{]} (a)\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Beam, $p=3$,\\${n_\mathrm{el}}=40$\\{[}m{]} (b)\end{tabular}} & & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Brick, deg.=(3,3,3),\\${n_\mathrm{el}}=320\times15\times15$\\{[}m{]} (c)\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Beam, $p=3$,\\${n_\mathrm{el}}=40$\\{[}m{]} (d)\end{tabular}} & & \begin{tabular}[c]{@{}c@{}}(b)/(a)\\ {[}{\%}{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}(d)/(c)\\ {[}{\%}{]}\end{tabular} \\
\Xhline{3\arrayrulewidth}
0.05& -1.1089E-02 & -1.1089E-02 & & -1.1089E-02 & -1.1089E-02 & & 100.00 & 100.00\\
0.1 & -2.2178E-02 & -2.2178E-02 & & -2.2181E-02 & -2.2177E-02 & & 100.00 & 99.98\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
\begin{table}[]
\small
\centering
\caption{Uniaxial tension of a straight beam: Verification of the lateral displacement at surface point\,$\mathrm{A}$ (compressible Neo-Hookean material). All results are obtained by IGA.}
\label{str_utens_verif_surf_disp_nh}
\begin{tabular}{lcclcclcr}
\Xhline{3\arrayrulewidth}
&\multicolumn{2}{c}{End directors free}& &\multicolumn{2}{c}{End directors fixed}& &\multicolumn{2}{c}{Ratio}
\\ \cline{2-3} \cline{5-6} \cline{8-9}
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$R$\\{[}m{]}\end{tabular}} &\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Brick, deg.=(2,2,2),\\${{n_\mathrm{el}}}=320\times20\times20$,\\{[}m{]} (a)\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Beam, $p=3$,\\${{n_\mathrm{el}}}=40$,\\{[}m{]} (b)\end{tabular}} & & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Brick, deg.=(2,2,2),\\${{n_\mathrm{el}}}=320\times20\times20$,\\{[}m{]} (c)\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Beam, $p=3$,\\${{n_\mathrm{el}}}=40$,\\{[}m{]} (d)\end{tabular}} & & \begin{tabular}[c]{@{}c@{}}(b)/(a)\\ {[}{\%}{]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}(d)/(c)\\ {[}{\%}{]}\end{tabular} \\
\Xhline{3\arrayrulewidth}
0.05& -1.4593E-02 & -1.4593E-02 & & -1.4593E-02 &-1.4593E-02 & & 100.00 & 100.00\\
0.1 & -2.9186E-02 & -2.9186E-02 & & -2.9186E-02 & -2.9186E-02 & & 100.00 & 100.00\\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.495\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex1_uniaxial_force_vol_change_R005.png}
\caption{$R=0.05\mathrm{m}$}
\label{uniaxial_tens_vol_change_r005}
\end{subfigure}
\begin{subfigure}[b] {0.495\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex1_uniaxial_force_vol_change_R010.png}
\caption{$R=0.1\mathrm{m}$}
\label{uniaxial_tens_vol_change_r010}
\end{subfigure}
\caption{Uniaxial tension of a straight beam: Comparison of volume change in uniaxial tension with brick elements and beam elements for the two different material models and cross-section radii with two cases of kinematic boundary conditions.}
\label{uniaxial_tens_vol_ratio}
\end{figure}
\subsection{Cantilever beam under end moment}
\label{ex_beam_end_mnt}
An initially straight beam of length $L=10\text{m}$ with rectangular cross-section of width ${w}=1\text{m}$ and height $h$ is clamped at one end and subjected to a bending moment $M$ at the other end (see Fig.\,\ref{cant_beam_end_moment}). The material properties are Young's modulus $E=1.2\times{10^7}\text{Pa}$ and Poisson's ratio $\nu=0$. Under the assumption of pure bending, an applied moment $M$ deforms the beam central axis into a circle of radius $R=EI/M$, where the $X$- and $Z$-displacements at the tip of the central axis (point $\mathrm{A}$ in Fig.\,\ref{cant_beam_end_moment}) can be derived, respectively, as
\begin{subequations}
\label{beam_end_mnt_exact_sol}
\begin{alignat}{2}
{u_{\mathrm{A}}} &= R\sin \frac{L}{R} - L,\label{beam_end_mnt_exact_sol_x}\\
{w_{\mathrm{A}}} &= R\left( {1 - \cos \frac{L}{{R}}} \right).\label{beam_end_mnt_exact_sol_z}
\end{alignat}
\end{subequations}
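For later reference, the analytical solution of Eq.\,(\ref{beam_end_mnt_exact_sol}) is easily evaluated; the sketch below (illustrative only) uses the data of this example and checks that the moment $M=2\pi EI/L$ rolls the central axis into a full circle, i.e., $u_{\mathrm{A}}=-L$ and $w_{\mathrm{A}}=0$.
\begin{verbatim}
import numpy as np

# data of this example: L = 10 m, E = 1.2e7 Pa, w = 1 m, h = 0.1 m
L, E, w, h = 10.0, 1.2e7, 1.0, 0.1
I = w * h**3 / 12.0

def tip_displacement(M):
    """Analytical tip displacements (u_A, w_A) for end moment M."""
    R = E * I / M                     # radius of the deformed axis
    return R * np.sin(L / R) - L, R * (1.0 - np.cos(L / R))

# M = 2*pi*E*I/L bends the axis into a full circle
uA, wA = tip_displacement(2.0 * np.pi * E * I / L)
assert np.isclose(uA, -L) and abs(wA) < 1e-9
\end{verbatim}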
\begin{figure}[htp]
\centering
\includegraphics[width=0.6\linewidth]{Figure/0_num_ex/beam_end_mnt_initial_desc_1_low_res.png}
\caption{Cantilever beam under end moment: Undeformed configuration and boundary conditions.}
\label{cant_beam_end_moment}
\end{figure}
Since the presented extensible director beam formulation contains no rotational degrees of freedom, the bending moment cannot be applied directly. There are several ways to implement the moment load: a coupling element was introduced in \citet{frischkorn2013solid}, and the virtual work contribution of the boundary moment was directly discretized in the rotation-free thin shell formulation of \citet{duong2017new}. We adopt a further approach, presented in \citet{betsch1995assumed}, which uses a distributed follower load acting on the end face. At the loaded end face, the following linear distribution of the first Piola-Kirchhoff stress is prescribed,
\begin{equation}\label{end_moment_first_pk}
{\boldsymbol{P}} = p\,{{\boldsymbol{\nu }}_t} \otimes {{\boldsymbol{\nu }}_0}\,\,\text{with}\,\,p\coloneqq{-\frac{M}{I}{\xi ^1}}\,\,\mathrm{and}\,\,{I=\frac{wh^3}{12}}\,\,\mathrm{at}\,\,s\in{\Gamma_\mathrm{N}}\,\left(s=L\right),
\end{equation}
where the outward unit normal vector on the initial end face is $\boldsymbol{\nu}_0={\boldsymbol{e}_1}$ since the beam central axis is aligned with the $X$-axis, and the outward unit normal vector on the current end face is
\begin{equation}
{{\boldsymbol{\nu }}_t} = \boldsymbol{d}_3\,\,\text{with}\,\,{{\boldsymbol{d}}_3} = \frac{{{{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}}}{{\left\| {{{\boldsymbol{d}}_1} \times {{\boldsymbol{d}}_2}} \right\|}},\,\,\mathrm{and}\,\,{\boldsymbol{d}_2}=-{\boldsymbol{e}_2}.
\end{equation}
From Eq.\,(\ref{end_moment_first_pk}), we can simply obtain the prescribed traction vector ${\boldsymbol{\bar T}}_0$, as
\begin{equation}
\label{num_ex_end_mnt_prescribed_traction}
{{\boldsymbol{\bar T}}_0} = {\boldsymbol{P}}{{\boldsymbol{\nu }}_0} = p\,{{\boldsymbol{d}}_3}\,\,\mathrm{at}\,\,s\in{\Gamma_\mathrm{N}}.
\end{equation}
Substituting the traction vector of Eq.\,(\ref{num_ex_end_mnt_prescribed_traction}) into Eqs.\,(\ref{app_nbdc_strs_res}) and (\ref{app_nbdc_strs_coup}), we obtain
\begin{subequations}
\begin{gather}
\label{ex_end_mnt_n0_m_condition}
{{\boldsymbol{\bar n}}_0} = \int_{{\mathcal{A}}_0} {{\boldsymbol{\bar T}_0}\,{\mathrm{d}}{{\mathcal{A}}_0}} = {\boldsymbol{0}},\\
{{\boldsymbol{\bar {\tilde m}}}_0^1} = \int_{{\mathcal{A}}_0} {\xi^1}{{\boldsymbol{\bar T}_0}\,{\mathrm{d}}{{\mathcal{A}}_0}} = - M{\boldsymbol{d}_3},\,\,\,\mathrm{and}\,\,{{\boldsymbol{\bar {\tilde m}}}_0^2} = {\boldsymbol{0}}.
\end{gather}
\end{subequations}
That is, the Neumann boundary condition at $s\in{\Gamma_\mathrm{N}}$ is given by
\begin{subequations}
\label{ex_end_mnt_neumann_bdc}
\begin{gather}
\boldsymbol{n}=\boldsymbol{0},\label{nbdc_n_end_mnt}\\
{{\boldsymbol{\tilde m}}^1}=-M{\boldsymbol{d}_3},\,\,\mathrm{and}\,\,{{\boldsymbol{\tilde m}}^2}={\boldsymbol{0}}.\label{nbdc_m_td_end_mnt}
\end{gather}
\end{subequations}
A detailed expression of the external virtual work and the load stiffness operator can be found in Appendix \ref{mnt_load_follower_load_exp}.
\begin{figure}[!htpt]
\centering
\begin{subfigure}[b] {0.435\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_deformed_SVK_th010_beam_qr_164.png}
\caption{Initial cross-section height ${h}=0.1\rm{m}$}
\label{pure_bend_deform_h010}
\end{subfigure}\hspace{2.5mm}
\begin{subfigure}[b] {0.435\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_deformed_SVK_th001_beam_qr_164.png}
\caption{Initial cross-section height ${h}=0.01\rm{m}$}
\label{pure_bend_deform_h001}
\end{subfigure}\hspace{2.5mm}
\caption{Cantilever beam under end moment: Deformed configurations for two different cross-section heights. $n$ denotes the load step number, where the applied end moment is $M=0.1n\pi EI/L$. Figure (b) shows the central axis only, because the cross-section is too thin to clearly visualize. The beam solutions are calculated by IGA with $p=4$ and $n_{\mathrm{el}}=160$.}
\label{cant_beam_end_moment_deformed}
\end{figure}
\begin{figure}[!htpt]
\centering
\begin{subfigure}[b] {0.4875\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_comparison_th010_exact_std.png}
\caption{Initial cross-section height ${h}=0.1\rm{m}$}
\label{pure_bend_compare_exact_h010}
\end{subfigure}
\begin{subfigure}[b] {0.4875\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_comparison_th001_exact_std.png}
\caption{Initial cross-section height ${h}=0.01\rm{m}$}
\label{pure_bend_compare_exact_h001}
\end{subfigure}
\caption{Cantilever beam under end moment: Comparison of the $X$- and $Z$-displacements at the tip of the central axis with the analytical solutions for the different initial cross-section heights. The beam solutions are calculated by IGA with $p=4$ and $n_{\mathrm{el}}=160$.}
\label{cant_beam_end_moment_compare_exact_sol}
\end{figure}
Figs.\,\ref{pure_bend_deform_h010} and \ref{pure_bend_deform_h001} show the deformed configurations of the cantilever for initial heights ${h}=0.1\text{m}$ and ${h}=0.01\text{m}$, respectively. The external load is applied incrementally in 20 uniform steps. The final deformed configurations are very close to circles, but are not \textit{exact} circles. As Fig.\,\ref{cant_beam_end_moment_compare_exact_sol} shows, the $X$- and $Z$-displacements at the end are in very good agreement with the analytical solution of Eq.\,(\ref{beam_end_mnt_exact_sol}). However, a slight difference persists even in the converged solutions. It is attributed to the axial strain of the central axis and the transverse normal strain of the cross-section that are induced by the bending deformation, neither of which is considered in the analytical solution under the pure bending assumption.
\subsubsection{Coupling between bending and axial strains}
\label{ex_end_mnt_subsub_axial}
The axial strain ($\varepsilon$) is not zero, but decreases with $h$. To verify this, we show that the effective stress resultant $\tilde n$, which is work conjugate to the axial strain $\varepsilon$ (see Eq.\,(\ref{tot_strn_energy_beam_time_deriv})), is not zero. From Eq.\,(\ref{nbdc_m_td_end_mnt}), we obtain ${{\tilde m}}^{1}=-M/\left({\boldsymbol{\varphi}_{\!,s}\cdot{\boldsymbol{d}_3}}\right)$ by using Eq.\,(\ref{strs_res_dir_mnt}) and the relations ${\boldsymbol{d}_3}\cdot{\boldsymbol{d}_{\alpha}}=0$ and ${\boldsymbol{\varphi}_{\!,s}}\cdot{\boldsymbol{d}_3}>0$ (postulate of Eq.\,(\ref{beam_th_str_calc_d3_vec})). From Eq.\,(\ref{nbdc_n_end_mnt}), it follows that $n={\boldsymbol{n}}\cdot{\boldsymbol{d}_3}/\left({{\boldsymbol{\varphi}_{\!,s}}\cdot{\boldsymbol{d}_3}}\right)$, obtained by using Eq.\,(\ref{strs_res_f}), vanishes at $s\in{\Gamma_\mathrm{N}}$. Therefore, using Eq.\,(\ref{eff_axial_res}), we obtain the effective axial stress resultant
\begin{equation}
\label{theo_est_axial_strs_res}
\tilde n = - {\tilde m^1}{k_1},
\end{equation}
where the current bending curvature is $k_1\approx{1/R}$. Thus, $\tilde n$ does not vanish at $s\in{\Gamma_\mathrm{N}}$. This is a higher order effect of beam theory that disappears quickly for {decreasing $h$: $\tilde n$ decreases with the initial cross-section height $h$ due to ${\tilde m^1}\sim{M}\sim{h^3}$, i.e., ${\tilde n}\sim{h^3}$. Therefore, since the cross-sectional area is proportional to $h$, the work conjugate axial strain is $\varepsilon \sim {h^2}$. Then, the membrane strain energy is
\begin{equation}
\label{theo_memb_strn_e}
{\Pi _\varepsilon } \coloneqq \int_0^L {\tilde n\,\varepsilon \,{\mathrm{d}}s}\sim{h^5}.
\end{equation}
Further, for the given end moment $M\sim{h^3}$, the bending strain $\rho_1$ is nearly constant with respect to $h$, then the bending strain energy is
\begin{equation}
\label{theo_bend_strn_e}
{\Pi _\rho} \coloneqq \int_0^L {{{\tilde m}^1}{\rho_1}\,{\mathrm{d}}s}\sim{h^3}.
\end{equation}
}Fig.\,\ref{pure_bend_plot_n_ntilde} shows the convergence of the axial stress resultant $n$ and the effective stress resultant $\tilde n$ under mesh refinement of the beam. We calculate $\boldsymbol{n}$ using Eq.\,(\ref{beam_th_str_def_res_force}), from which we extract $n$. It is observed that the condition of vanishing $n$ is satisfied in a weak sense.
We compare the axial strain field on the loaded end face in the presented beam formulation with the following three different reference solutions.
\begin{itemize}
\item {Ref.\#1}: IGA with ${n_\mathrm{el}}=2,560\times1\times20$ brick elements and $\mathrm{deg.}=(2,1,2)$. One element along the beam width is sufficient since $\nu=0$.
\item {Ref.\#2}: IGA with ${{n_\mathrm{el}}}=2,560\times1\times1$ brick elements and $\mathrm{deg.}=(2,1,1)$. {In the calculation of the relative difference of the displacement in the $L^2$ norm in Fig.\,\ref{cant_beam_end_moment_conv_rate}, we use IGA with ${{n_\mathrm{el}}}=2,560\times1\times1$ brick elements and $\mathrm{deg.}=(4,1,1)$ in order to obtain the convergence of the difference to machine precision. {It is noted that three Gauss integration points are used in the direction of cross-section height for brick and beam element solutions.}}
\item {Ref.\#3}: The analytic solution under the pure bending assumption.
\end{itemize}
In the reference solutions using brick elements, we apply the end moment in the same way as in the beam formulation, that is, we apply the distributed follower load on the end face. In the following, we derive the analytical solution of the axial strain under the pure bending assumption (Ref.\,\#3). In pure bending, every material fiber deforms into a circle and is stretched in the axial direction, where the amount of stretch varies linearly through the cross-section height. If the central axis deforms into a full circle with radius $R=L/{2\pi}$, we have the following expression for the axial stretch
\begin{align}
\label{ex_end_mnt_analytic_axial_stretch}
U_{33}^* = \frac{\ell}{L} = \frac{2\pi\left(R-{{\xi}^1}\right)}{L} = 1 - \frac{{2\pi }}{L}{\xi ^1},\,\,{\xi ^1} \in \left[ { - h/2,h/2} \right],
\end{align}
where $\ell$ denotes the current length of each material fiber. Then, the axial component of the Green-Lagrange strain is obtained by
\begin{align}
\label{ex_end_mnt_axial_comp_GL}
E_{33}^* = \frac{1}{2}\left\{ {{{\left( {1 - \frac{{2\pi }}{L}{\xi ^1}} \right)}^2} - 1} \right\} = - 2\pi \frac{{{\xi ^1}}}{L} + 2{\pi ^2}{\left( {\frac{{{\xi ^1}}}{L}} \right)^2},\,\,{\xi ^1} \in \left[ { - h/2,h/2} \right].
\end{align}
In this analytical expression, it should be noted that the axial strain is zero at the central axis ${\xi^1}=0$. Since the cross-section height $h$ is much smaller than the beam length $L$, the quadratic order term in Eq.\,(\ref{ex_end_mnt_axial_comp_GL}) almost vanishes, so that the axial strain has a nearly linear distribution along the coordinate $\xi^1$ (see Fig.\,\ref{pure_bend_analytic_graph}). Fig.\,\ref{dist_GL_strn_axial_diff} shows the differences between $E^*_{33}$ and the axial strains of the reference solutions Ref.\#1 and Ref.\#2 and of the presented beam formulation. It is noticeable that the axial strain is nonzero in the results using brick elements as well. The beam solution agrees very well with that of Ref.\#2, since Ref.\#2 also assumes a linear displacement field along the cross-section height. In particular, in the case of ${h}=0.01\mathrm{m}$, it is observed that, as we increase the number of elements along the central axis, the reference solution (${{n_\mathrm{el}}}=10,240\times1\times1$) approaches the beam solution. The solution of Ref.\#1 shows that the cross-section does not remain plane but undergoes \textit{warping}. Therefore, there are large differences in the axial strain between Ref.\#1 and the beam solution; however, it is remarkable that the average of the solution of Ref.\#1 still agrees very well with the beam solution. Fig.\,\ref{ex_end_mnt_e_field} shows that the axial strain of the beam is nearly constant along the central axis, and decreases with the initial cross-section height $h$. Also, the shear strain is negligible, which is consistent with $-{\tilde m}^1\approx{M}$, shown in Fig.\,\ref{ex_end_mnt_m_field}. The slight shear strain near the clamped boundary is associated with the drastic change of the current cross-section height there. At the clamped boundary, the cross-section does not deform. Thus, the gradient $\boldsymbol{d}_{1,s}$ does not vanish, i.e., $k^1_1\ne0$ (see Remark \ref{remark_curv_k}), which generates the effective shear stress ${\tilde q}^1$ of Eq.\,(\ref{eff_trans_shear_res}). Similarly, the gradient $\boldsymbol{d}_{1,s}$ at the clamped boundary generates the nonvanishing strain $\gamma_{11}$ and its work conjugate $\tilde{m}^{11}$ (see Fig.\,\ref{ex_end_mnt_m_field}). However, $\tilde{m}^{11}$ is almost zero elsewhere in the domain, which means that the current cross-section height is almost uniform (see Remark \ref{remark_bending_mnt} for the relevant explanation).
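A quick numerical check (illustrative only) confirms that the quadratic term of Eq.\,(\ref{ex_end_mnt_axial_comp_GL}) is negligible for the present dimensions: its magnitude relative to the linear term is $\pi|\xi^1|/L\le\pi h/(2L)$, i.e., below about $1.6\,\%$ for $h=0.1\,\mathrm{m}$.
\begin{verbatim}
import numpy as np

# axial Green-Lagrange strain across the cross-section height
# for L = 10 m and h = 0.1 m
L, h = 10.0, 0.1
xi1 = np.linspace(-h / 2, h / 2, 5)
E33 = -2 * np.pi * xi1 / L + 2 * np.pi**2 * (xi1 / L)**2
# ratio |quadratic| / |linear| = pi*|xi1|/L <= pi*h/(2L) ~ 1.6 %
assert np.pi * h / (2 * L) < 0.016
\end{verbatim}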
\begin{figure}[htp]
\centering
\includegraphics[width=0.65\linewidth]{Figure/0_num_ex/beam_end_mnt_axial_force_th010_th001.png}
\caption{Cantilever beam under end moment: Convergence of axial stress resultant $n$ and effective axial stress resultant $\tilde n$ for the two different cross-section heights $h$. {IGA with $p=4$ is used}. As expected, $n$ vanishes, while $\tilde n$ approaches a constant. The applied bending moment is $M=2{\pi}EI/L$. {The converged values of $-\tilde n$ at $n_\mathrm{el}=320$ are $395.7\mathrm{N}$ and $0.3948\mathrm{N}$ for the cases of $h=0.1\mathrm{m}$ and $0.01\mathrm{m}$, respectively, which is consistent with the theoretical scaling ${\tilde n}\sim{h^3}$ discussed below Eq.\,(\ref{theo_est_axial_strs_res}).}}
\label{pure_bend_plot_n_ntilde}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.465\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_axial_comparison_diff_th010.png}
\caption{Initial cross-section height ${h}=0.1\rm{m}$}
\label{dist_GL_strn_axial_diff_h010}
\end{subfigure}
\begin{subfigure}[b] {0.465\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_axial_comparison_diff_th001.png}
\caption{Initial cross-section height ${h}=0.01\rm{m}$}
\label{dist_GL_strn_axial_diff_h001}
\end{subfigure}
\caption{Cantilever beam under end moment: Difference of the axial strain component along the cross-section height at the loaded end ($s=L$), and the applied moment $M=2{\pi}EI/L$. `avg.' represents the average. Note that, in the solid red line of (a), $E_{33}=1.5580\times{10^{-6}}$ at ${\xi^1}=0$, i.e., the central axis is slightly stretched.}
\label{dist_GL_strn_axial_diff}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.4875\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_field_e.png}
\caption{Strain components $\varepsilon$ and $\delta_1$}
\label{ex_end_mnt_e_field}
\end{subfigure}
\begin{subfigure}[b] {0.4875\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_field_m.png}
\caption{Director stress couples ${\tilde m}^1$ and ${\tilde m}^{11}$}
\label{ex_end_mnt_m_field}
\end{subfigure}
\caption{{Cantilever beam under end moment: Distribution of the axial strain ($\varepsilon$), transverse shear strain ($\delta_1$) and director stress couple components (${\tilde m}^{11}$ and ${\tilde m}^{1}$) of the beam along the central axis. The results are from IGA with $p=4$ and $n_{\mathrm{el}}=320$.}}
\label{dist_e_m_field}
\end{figure}
\subsubsection{Coupling between bending and through-the-thickness stretch}
\label{ex_end_mnt_subsub_th}
The through-the-thickness stretch ${\chi}^{11}$ is also coupled with the bending deformation, and diminishes rapidly as the initial cross-section height $h$ decreases. In the absence of an external director stress couple, $\bar{\tilde {\boldsymbol{m}}}^{\gamma}=\boldsymbol{0}$, substituting Eq.\,(\ref{strs_res_dir_mnt}) into Eq.\,(\ref{mnt_bal_eqn_dir_mnt}), and using the fact that torsional deformation is absent, i.e., ${\tilde m}^{21}=0$, we obtain
\begin{align}
\label{ex_end_mnt_l1_vector}
{{\boldsymbol{l}}^1} = {\tilde m^1_{,s}}{{\boldsymbol{\varphi }}_{\!,s}} + {\tilde m^1}{{\boldsymbol{\varphi }}_{\!,ss}} + {\tilde m^{11}_{,s}}{{\boldsymbol{d}}_1} + {\tilde m^{11}}{{\boldsymbol{d}}_{1,s}}\approx{\tilde m^1}{{\boldsymbol{\varphi }}_{\!,ss}},
\end{align}
since ${\tilde m}^1$ is nearly constant, and ${\tilde m}^{11}$ is negligible in the domain $s\in(0,L)$. Let $\tilde s$ be the arc-length coordinate along the current central axis. Then, $\boldsymbol{\varphi}_{\!,{\tilde s}{\tilde s}}$ represents the curvature vector such that $\kappa \coloneqq \left\| {{{\boldsymbol{\varphi }}_{\!,\tilde s\tilde s}}} \right\|$ denotes the curvature of the deformed central axis, which is given by $\kappa\approx{1/R}$ in the example. Using the relation $\mathrm{d}{\tilde s}=\sqrt{1+{2\varepsilon}}\mathrm{d}{s}$ and the chain rule of differentiation, we find
\begin{align}
\label{ex_end_mnt_2nd_mnt_caxis}
{{\boldsymbol{\varphi }}_{\!,ss}} = \frac{\varepsilon _{,s}}{\sqrt{1+2\varepsilon}}{{\boldsymbol{\varphi }}_{\!,\tilde s}} + {(1 + 2\varepsilon )}{{\boldsymbol{\varphi }}_{\!,\tilde s\tilde s}} \approx {\frac{1}{{\lambda_1}R}}{(1 + 2\varepsilon )}{{\boldsymbol{d}}_1},
\end{align}
since $\varepsilon$ is nearly constant, and the shear deformation is negligible such that the unit normal vector of the central axis is approximated by $\boldsymbol{\varphi}_{\!,{\tilde s}{\tilde s}}/\kappa={\boldsymbol{d}_1}/{\lambda_1}$. Substituting Eq.\,(\ref{ex_end_mnt_2nd_mnt_caxis}) into Eq.\,(\ref{ex_end_mnt_l1_vector}) and using the decomposition of Eq.\,(\ref{strs_res_dir_f}), we obtain
\begin{equation}
{{\tilde l}^{11}} \approx \frac{1}{{\lambda_1}{R}}{(1 + 2\varepsilon )}{\tilde m^1}.
\end{equation}
This means that the transverse normal stress resultant ${\tilde l}^{11}$ does not vanish, but decreases with the initial cross-section height $h$ through the relation ${\tilde m}^1\sim{h^3}$, i.e., ${\tilde l}^{11}\sim{h^3}$. {Therefore, since the cross-sectional area is proportional to $h$, the corresponding transverse normal stress scales as ${\tilde l}^{11}/A\sim{h^2}$, so that the work conjugate strain is ${\chi}_{11}\sim{h^2}$, and the in-plane strain energy of the cross-section is
\begin{align}
\label{the_strn_e_inp_cs}
{\Pi _\chi } \coloneqq \int_0^L {{{\tilde l}^{11}}{\chi _{11}}{\rm{d}}s}\sim{h^5}.
\end{align}
}
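Collecting these scalings for fixed width $w$ and length $L$ (so that $I=wh^3/12\sim h^3$ and $A=wh\sim h$), the reasoning chain of this subsection can be summarized as
\begin{equation*}
{\tilde m}^1\sim h^3 \;\;\Rightarrow\;\; {\tilde l}^{11}\sim h^3 \;\;\Rightarrow\;\; \frac{{\tilde l}^{11}}{A}\sim h^2 \;\;\Rightarrow\;\; \chi_{11}\sim h^2 \;\;\Rightarrow\;\; \Pi_\chi\sim h^5 .
\end{equation*}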
Fig.\,\ref{dist_GL_strn_th} compares the change of the cross-sectional area along the axis for the beam and the reference solutions. It is noticeable that the cross-sectional area also decreases when using brick elements. The cross-sectional area in the beam solution agrees very well with that of Ref.\,\#2, since both assume a constant transverse normal (through-the-thickness) strain of the cross-section (see also Fig.\,\ref{dist_GL_through_height_strn_th}). Also, Fig.\,\ref{dist_GL_strn_th} shows that the amount of change in cross-sectional area decreases with $h$. The deformation of the cross-section in Ref.\,\#1 is more complicated than for the other cases, since it allows for warping, i.e., the cross-section does not remain plane after deformation. In particular, at the loaded end face, the cross-sectional area slightly increases, since the central axis is stretched. It is shown in Fig.\,\ref{dist_GL_through_height_strn_th} (red curves) that the cross-section is stretched along the transverse direction at the center (${\xi^1}=0$, $s=L$), so that the average of the transverse normal strain is positive, i.e., the cross-section is stretched on average. On the other hand, away from the boundary, the through-the-thickness compressive force coupled with the bending deformation is dominant, so that the cross-sectional area decreases. In Fig.\,\ref{dist_GL_strn_th}, it is remarkable that the average cross-sectional area of Ref.\,\#1 in the domain $s\in\left(0,L\right)$ coincides with that of the beam and Ref.\,\#2. Further, in Fig.\,\ref{dist_GL_through_height_strn_th}, the average of the transverse normal strain at the middle of the central axis ($s=L/2$) agrees very well with that of the beam and Ref.\,\#2.
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.44\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_th_C_AREA_th010.png}
\caption{Initial cross-section height ${h}=0.1\rm{m}$}
\label{pure_bend_carea_h010}
\end{subfigure}
\begin{subfigure}[b] {0.44\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_th_C_AREA_th001.png}
\caption{Initial cross-section height ${h}=0.01\rm{m}$}
\label{pure_bend_carea_h001}
\end{subfigure}
\caption{Cantilever beam under end moment: Distribution of the current cross-sectional area along the central axis. `average' denotes the average over the whole domain of the central axis, excluding the two boundary points. The applied bending moment is $M=2{\pi}EI/L$.}
\label{dist_GL_strn_th}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.44\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_th_comparison_th010.png}
\caption{Initial cross-section height ${h}=0.1\rm{m}$}
\label{pure_bend_E11_dist_thick_h010}
\end{subfigure}
\begin{subfigure}[b] {0.44\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_th_comparison_th001.png}
\caption{Initial cross-section height ${h}=0.01\rm{m}$}
\label{pure_bend_E11_dist_thick_h001}
\end{subfigure}
\caption{Cantilever beam under end moment: Distribution of the transverse normal (through-the-thickness) component of the Green-Lagrange strain along the cross-section height at the loaded end face. `avg.' denotes the average, and $n_\mathrm{el}^{\mathrm{H}}$ denotes the number of brick elements in the direction of the cross-section height. For brick elements, $\mathrm{deg.}=(2,1,1)$ for $n_{\mathrm{el}}^{\mathrm{H}}=1$, and $\mathrm{deg.}=(2,1,2)$ in the other cases. For beam elements, $p=4$ and $n_{\mathrm{el}}=160$. The applied bending moment is $M=2{\pi}EI/L$.}
\label{dist_GL_through_height_strn_th}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.475\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_ux_th010.png}
\caption{Initial cross-section height ${h}=0.1\rm{m}$}
\label{pure_bend_conv_test_ux_h010}
\end{subfigure}\hspace{2.5mm}
\begin{subfigure}[b] {0.475\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_ux_th001.png}
\caption{Initial cross-section height ${h}=0.01\rm{m}$}
\label{pure_bend_conv_test_ux_h001}
\end{subfigure}
\caption{{Cantilever beam under end moment: Convergence of the relative difference of the $X$-displacement on the central axis. The applied bending moment is $M=2{\pi}EI/L$. The dashed lines represent the theoretical convergence rate of ${\bar h}^{p+1}$, where $\bar h$ denotes the element size.}}
\label{cant_beam_end_moment_conv_rate}
\end{figure}
\subsubsection{Verification of displacements}
Fig.\,\ref{cant_beam_end_moment_conv_rate} compares the relative difference of the $X$-displacement of the beam from the three different reference solutions, where the relative $L^2$ norm of the difference in the $X$-displacement $u$ in the domain of the central axis $(0,L)$ is calculated by
\begin{equation}\label{def_rel_l2_err}
{\left\| {{e_u}} \right\|_{{L^2}}} = \sqrt {\frac{{\int_0^L {{{\left( {u - {u_{{\rm{ref}}}}} \right)}^2}\,{\mathrm{d}}s } }}{{\int_0^L {{u_{{\rm{ref}}}}^2\,{\mathrm{d}}s } }}},
\end{equation}
where $u_\mathrm{ref}$ denotes the reference solution of the displacement component. The convergence test results of the reference solutions are given in Tables \ref{app_conv_test_xdisp_tip_h010} and \ref{app_conv_test_xdisp_tip_h001}. In Fig.\,\ref{cant_beam_end_moment_conv_rate}, Ref.\,\#2 shows the smallest differences from the beam solution. {The difference is even smaller than that from the analytical solution and decreases to machine precision, since Ref.\,\#2 is kinematically equivalent to the beam formulation with Poisson's ratio $\nu=0$. Ref.\,\#1 shows the largest differences, but they are only around $1\%$ and $0.1\%$ in the cases of $h=0.1\mathrm{m}$ and $h=0.01\mathrm{m}$, respectively. We also compare the convergence rate for several orders of the basis functions ($p=3,4,5$) with the asymptotic, optimal convergence rate of ${\bar h}^{p+1}$, where $\bar h$ denotes the element size. The beam solution shows a convergence rate comparable to or even better than the optimal one, especially at coarser levels of mesh discretization.}
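The relative difference of Eq.\,(\ref{def_rel_l2_err}) can be evaluated by numerical quadrature along the central axis. A minimal Python sketch (the function and variable names are our own, hypothetical choices):
\begin{verbatim}
import numpy as np

def rel_l2_diff(u, u_ref, s_gp, w_gp):
    """Relative L2 difference of two fields sampled at quadrature
    points s_gp with weights w_gp along the central axis."""
    num = np.sum(w_gp * (u(s_gp) - u_ref(s_gp)) ** 2)
    den = np.sum(w_gp * u_ref(s_gp) ** 2)
    return np.sqrt(num / den)

# usage: Gauss-Legendre points mapped from (-1, 1) to (0, L)
L = 10.0  # assumed beam length
x, w = np.polynomial.legendre.leggauss(50)
s_gp, w_gp = 0.5 * L * (x + 1.0), 0.5 * L * w
err = rel_l2_diff(lambda s: np.sin(s), lambda s: 1.01 * np.sin(s),
                  s_gp, w_gp)
\end{verbatim}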
\subsubsection{Instability in thin beam limit}
\label{instab_thin_b_lim_end_mnt}
Tables\,\ref{str_end_mnt_iter_history_h010} and \ref{str_end_mnt_iter_history_h001} compare the total number of load steps and iterations in the cases of $h=0.1\mathrm{m}$ and $h=0.01\mathrm{m}$, respectively. Ref.\,\#1 requires a larger number of iterations than Ref.\,\#2 and the beam solution, which is mainly attributed to the more complicated deformations of the cross-section. It is also shown that more iterations are required for the thinner cross-section. It was shown for the shell formulation with extensible director \citep{simo1990stress} that the instability in the thin limit ($h\to0$) is associated with the coupling of bending and through-the-thickness stretching. A couple of methods to alleviate this instability have been presented, for example based on a multiplicative decomposition of the extensible director into an inextensible direction vector and a scalar stretch \citep{simo1990stress}, and based on mass scaling in dynamic problems \citep{hokkanen2019isogeometric}. In this paper, we restrict our application of the developed beam formulation to low to moderate slenderness ratios, and further investigation of the alleviation of this instability remains future work.
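For orientation, the iteration histories reported below follow the usual load-stepped Newton-Raphson structure, monitoring both the Euclidean norm of the residual and the energy norm. A schematic Python sketch (the residual and tangent routines are placeholders for the assembled finite element arrays, not our actual implementation):
\begin{verbatim}
import numpy as np

def newton_load_stepping(residual, tangent, u0, n_steps=20,
                         tol_e=1e-18, max_iter=50):
    # residual(u, lam), tangent(u, lam): placeholder FE callbacks
    u, history = u0.copy(), []
    for step in range(1, n_steps + 1):
        lam = step / n_steps              # uniform load increment
        for it in range(max_iter):
            r = residual(u, lam)
            du = np.linalg.solve(tangent(u, lam), -r)
            res_norm = np.linalg.norm(r)  # Euclidean norm of residual
            e_norm = abs(r @ du)          # energy norm
            history.append((step, it + 1, res_norm, e_norm))
            u += du
            if e_norm < tol_e:
                break
    return u, history
\end{verbatim}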
\begin{table}[!htpt]
\caption{Cantilever beam under end moment: History of the Newton-Raphson iteration for the load $M=0.1n{\pi}EI/L$ at step $n=20$, and the total number of load steps and iterations (initial cross-section height ${h}=0.1\mathrm{m}$).}
\label{str_end_mnt_iter_history_h010}
\centering
\scriptsize
\begin{tabular}{clcclcclcc}
\Xhline{3\arrayrulewidth}
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Iteration\\ number\\ at step\\ $n=20$\end{tabular}} & & \multicolumn{5}{c}{Brick} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Beam} \\ \cline{3-7} \cline{9-10}
& & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}IGA, deg.=(2,1,2),\\${{n_\mathrm{el}}}=2,560\times1\times20$\end{tabular}} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}IGA, deg.=(2,1,1),\\ ${{n_\mathrm{el}}}=2,560\times1\times1$\end{tabular}} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}IGA, $p=4$,\\${{n_\mathrm{el}}}=160$\end{tabular}} \\ \cline{3-4} \cline{6-7} \cline{9-10}
& & \begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular} & \begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular}} \\
\Xhline{3\arrayrulewidth}
1 & & 1.6E+02 & 1.1E+01 & & 3.1E+02 & 9.8E+00 & & 3.1E+01 & 9.8E+00 \\
2 & & 6.4E+04 & 1.5E+04 & & 6.2E+04 & 1.1E+04 & & 6.7E+04 & 1.1E+04 \\
3 & & 4.0E+03 & 6.6E+01 & & 3.1E+03 & 2.7E+01 & & 3.9E+03 & 2.7E+01 \\
4 & & 1.3E+03 & 8.0E+00 & & 1.2E+01 & 1.4E-02 & & 1.8E+01 & 1.4E-02 \\
5 & & 1.1E+03 & 5.8E+00 & & 3.5E+01 & 3.6E-03 & & 4.5E+01 & 3.6E-03 \\
6 & & 5.5E+02 & 1.3E+00 & & 8.7E-01 & 2.0E-04 & & 8.9E-01 & 2.0E-04 \\
7 & & 1.4E+01 & 5.6E-02 & & 1.3E+00 & 5.8E-06 & & 1.2E+00 & 5.8E-06 \\
8 & & 1.6E+02 & 1.1E-01 & & 1.7E-03 & 8.7E-10 & & 1.6E-03 & 8.7E-10 \\
9 & & 8.5E-01 & 5.9E-03 & & 5.8E-06 & 1.2E-16 & & 5.2E-06 & 1.2E-16 \\
10 & & 3.2E+01 & 4.3E-03 & & 1.4E-06 & 4.3E-20 & & 4.5E-08 & 1.3E-22 \\
11 & & 9.3E-03 & 3.2E-06 & & & & & & \\
12 & & 1.8E-02 & 1.4E-09 & & & & & & \\
13 & & 5.8E-07 & 4.9E-19 & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-7} \cline{9-10}
\#load steps & & \multicolumn{2}{c}{20} & & \multicolumn{2}{c}{20} & & \multicolumn{2}{c}{20} \\ \cline{1-1} \cline{3-4} \cline{6-7} \cline{9-10}
\#iterations & & \multicolumn{2}{c}{445} & & \multicolumn{2}{c}{200} & & \multicolumn{2}{c}{200} \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
\begin{table}[!htpt]
\caption{Cantilever beam under end moment: History of the Newton-Raphson iteration for the load $M=0.1n{\pi}EI/L$ at step $n=20$, and the total number of load steps and iterations (initial cross-section height ${h}=0.01\mathrm{m}$).}
\label{str_end_mnt_iter_history_h001}
\centering
\scriptsize
\begin{tabular}{clcclcclcc}
\Xhline{3\arrayrulewidth}
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Iteration\\ number\\ at step\\ $n=20$\end{tabular}} & & \multicolumn{5}{c}{Brick} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Beam} \\ \cline{3-7} \cline{9-10}
& & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}IGA, deg.=(2,1,2),\\${{n_\mathrm{el}}}=2,560\times1\times20$\end{tabular}} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}IGA, deg.=(2,1,1),\\${{n_\mathrm{el}}}=2,560\times1\times1$\end{tabular}} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}IGA, $p=4$,\\${{n_\mathrm{el}}}=160$\end{tabular}} \\ \cline{3-4} \cline{6-7} \cline{9-10}
& & \begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular} & \begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular}} \\
\Xhline{3\arrayrulewidth}
1 & & 1.6E+00 & 1.0E-02 & & \multicolumn{1}{c}{3.1E+00} & \multicolumn{1}{c}{9.9E-03} & & \multicolumn{1}{c}{3.1E-02} & \multicolumn{1}{c}{9.9E-03} \\
2 & & 4.7E+04 & 8.5E+02 & & \multicolumn{1}{c}{5.4E+04} & \multicolumn{1}{c}{1.1E+03} & & \multicolumn{1}{c}{6.7E+03} & \multicolumn{1}{c}{1.1E+03} \\
3 & & 2.4E+03 & 2.4E+00 & & \multicolumn{1}{c}{2.6E+03} & \multicolumn{1}{c}{2.7E+00} & & \multicolumn{1}{c}{4.0E+02} & \multicolumn{1}{c}{2.8E+00} \\
4 & & 1.6E+02 & 1.2E-02 & & \multicolumn{1}{c}{8.7E+00} & \multicolumn{1}{c}{3.8E-05} & & \multicolumn{1}{c}{1.8E+00} & \multicolumn{1}{c}{3.8E-05} \\
& & {\rvdots} & & & {\rvdots} & & & {\rvdots} & \\
10 & & 7.0E+01 & 2.1E-03 & & \multicolumn{1}{c}{1.8E+00} & \multicolumn{1}{c}{1.4E-06} & & \multicolumn{1}{c}{1.6E-01} & \multicolumn{1}{c}{1.2E-06} \\
11 & & 3.7E+01 & 6.8E-04 & & \multicolumn{1}{c}{7.3E-05} & \multicolumn{1}{c}{5.5E-10} & & \multicolumn{1}{c}{3.6E-06} & \multicolumn{1}{c}{4.8E-10} \\
12 & & 8.8E+01 & 3.3E-03 & & \multicolumn{1}{c}{3.2E-03} & \multicolumn{1}{c}{4.4E-12} & & \multicolumn{1}{c}{2.6E-04} & \multicolumn{1}{c}{3.4E-12} \\
13 & & 1.0E+01 & 7.0E-05 & & \multicolumn{1}{c}{1.7E-07} & \multicolumn{1}{c}{1.6E-20} & & \multicolumn{1}{c}{6.5E-09} & \multicolumn{1}{c}{5.9E-21} \\
& & {\rvdots} & & & & & & & \\
29 & & 1.6E+01 & 1.1E-04 & & & & & & \\
30 & & 2.1E-04 & 2.2E-09 & & & & & & \\
31 & & 1.1E-02 & 5.5E-11 & & & & & & \\
32 & & 3.9E-06 & 6.2E-18 & & & & & & \\ \cline{1-1} \cline{3-4} \cline{6-7} \cline{9-10}
\multicolumn{1}{l}{\#load steps} & & \multicolumn{2}{c}{20} & & \multicolumn{2}{c}{20} & & \multicolumn{2}{c}{20} \\
\multicolumn{1}{l}{\#iterations} & & \multicolumn{2}{c}{787} & & \multicolumn{2}{c}{260} & & \multicolumn{2}{c}{260} \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
\subsubsection{{Alleviation of membrane, transverse shear, and curvature-thickness locking}}
\label{ex_beam_end_mnt_allev_lock}
{We investigate the effect of mesh refinement and higher-order basis functions on the alleviation of membrane, transverse shear, and curvature-thickness locking. We compare, in Fig.\,\ref{cant_beam_end_moment_sl_ratio_l2_error_xdisp}, the relative difference of the $X$-displacement from the analytical solution (Ref.\#3) in the $L^2$ norm with increasing slenderness ratio. The difference of the displacement between our beam formulation and Ref.\#3 is attributed to the aforementioned coupling between the bending strain and the axial/through-the-thickness stretching strains. However, both the axial ($\varepsilon$) and through-the-thickness stretching ($\chi^{11}$) strains diminish at the rate of $h^2$. Therefore, the resulting displacement difference from Ref.\#3 is expected to decrease at the rate of $h^2$ as well. In Fig.\,\ref{cant_beam_end_moment_sl_ratio_l2_error_xdisp}, it is seen that mesh refinement improves the convergence rate, and it is noticeable that the solution using $p=5$ with $n_\mathrm{el}=80$ shows the estimated convergence rate of order 2. Further, Fig.\,\ref{end_mnt_strn_e_conv} shows the ratio of the membrane ($\Pi_\varepsilon$), through-the-thickness stretching ($\Pi_\chi$), and transverse shear ($\Pi_\delta$) strain energies to the bending strain energy ($\Pi_\rho$). In Figs.\,\ref{end_mnt_strn_e_memb} and \ref{end_mnt_strn_e_inp}, it is seen that mesh refinement or higher-order basis functions lead to the expected convergence rate of order 2 for the strain energy ratios, i.e., ${\Pi_\varepsilon}/{\Pi_\rho}\sim{h^2}$ and ${\Pi_\chi}/{\Pi_\rho}\sim{h^2}$. This means that membrane-bending and curvature-thickness locking are alleviated. Further, we investigate the transverse shear strain energy, defined by
\begin{equation}
\label{theo_bend_strn_e}
{\Pi _\delta} \coloneqq \int_0^L {{{\tilde q}^1}{\delta_1}{\rm{d}}s}.
\end{equation}
In Fig.\,\ref{end_mnt_strn_e_shear}, it is seen that higher-order basis functions combined with mesh refinement ($p=4,5$ with $n_\mathrm{el}=80$) alleviate the spurious transverse shear strain energy (transverse shear-bending locking). It should be noted that this result does not mean that those locking issues are completely resolved. For example, as discussed in \citet{adam2014improved}, if higher-order basis functions are used, membrane and transverse shear locking are less significant but still present, due to the field-inconsistency paradigm, which is more pronounced at higher slenderness ratios. However, in this paper, we focus on low to moderate slenderness ratios, and further investigation of reduced integration methods and mixed variational formulations remains future work.}
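The convergence orders quoted above can be extracted from computed energy ratios by a least-squares fit in log-log space. A minimal Python sketch with illustrative (made-up) data:
\begin{verbatim}
import numpy as np

h = np.array([1e-1, 1e-2, 1e-3])            # cross-section heights
ratio = np.array([2.1e-3, 2.0e-5, 2.0e-7])  # e.g. Pi_eps/Pi_rho (illustrative)

# slope of log(ratio) vs. log(h) gives the observed order (~2 expected)
order = np.polyfit(np.log(h), np.log(ratio), 1)[0]
print(f"observed convergence order: {order:.2f}")
\end{verbatim}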
\begin{figure}[htp]
\centering
\includegraphics[width=0.5\linewidth]{Figure/0_num_ex/beam_end_mnt_locking_test.png}
\caption{{Cantilever beam under end moment: Change of the relative difference of the $X$-displacement in the $L^2$ norm (w.r.t. Ref.\#3) with increasing slenderness ratio. The dashed line represents the theoretically estimated convergence rate of order 2, which agrees very well with the solution using $p=5$ and $n_\mathrm{el}=80$.}}
\label{cant_beam_end_moment_sl_ratio_l2_error_xdisp}
\end{figure}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b] {0.5\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_e_memb.png}
\caption{Membrane (axial) strain energy}
\label{end_mnt_strn_e_memb}
\end{subfigure}
\begin{subfigure}[b] {0.5\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_e_inp.png}
\caption{In-plane cross-section strain energy}
\label{end_mnt_strn_e_inp}
\end{subfigure}
\begin{subfigure}[b] {0.5\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/beam_end_mnt_strn_e_shear.png}
\caption{Transverse shear strain energy}
\label{end_mnt_strn_e_shear}
\end{subfigure}
\caption{{Cantilever beam under end moment: Comparison of the ratio of the membrane, in-plane cross-section, and transverse shear strain energies to the bending strain energy. It is noticeable that the solution with $p=3$ and $n_\mathrm{el}=80$ recovers the analytically estimated convergence rate of order 2 in (a) and (b). Note that, in (a) and (b), the case $p=3$, $n_\mathrm{el}=80$ shows the same result as the case $p=5$, $n_\mathrm{el}=10$.}}
\label{end_mnt_strn_e_conv}
\end{figure*}
\subsection{Cantilever beam under end force}
\label{ex_cant_b_end_f}
{The third example illustrates Poisson locking in the standard extensible director beam formulation, and its alleviation by the EAS method. We further show that the EAS formulation based on Eq.\,(\ref{eas_strn_5param_form}) (i.e., ``ext.-dir.-EAS-5p.'') still suffers from significant Poisson locking due to its incomplete enrichment of the cross-section strains. A beam of length $L=10\mathrm{m}$ and cross-section dimensions $h=w=1\mathrm{m}$ is clamped at one end, and subjected to a $Z$-directional surface traction of magnitude $F={10^5}\mathrm{N}/\mathrm{m}^2$ acting on the other end (see Fig.\,\ref{thin_bend_undeformed}). A compressible Neo-Hookean material is selected with Young's modulus $E=10^7\mathrm{Pa}$, and two different Poisson's ratios are considered: $\nu=0$ and $\nu=0.3$.
\begin{figure*}[htp!]
\centering
\begin{subfigure}[b] {0.7\textwidth} \centering
\includegraphics[width=0.625\linewidth]{Figure/0_num_ex/straight_end_shear_f_undeformed.png}
\end{subfigure}
\caption{{Cantilever beam under end force: Undeformed configuration and boundary conditions.}}
\label{thin_bend_undeformed}
\end{figure*}
{We determine reference solutions by using IGA brick elements of $\mathrm{deg.}=(2,1,2)$ with ${{n_\mathrm{el}}}=200\times1\times15$ and $\mathrm{deg.}=(3,3,3)$ with ${{n_\mathrm{el}}}=200\times20\times20$ for the cases $\nu=0$ and $\nu=0.3$, respectively (the convergence test result can be found in Table \ref{app_conv_test_xdisp_tip}).} {For beams, we use $4\times4$ Gauss integration points for the integration over the cross-section.} Fig.\,\ref{convergence_test_thin_bend} shows the convergence of the beam solutions based on the presented extensible director kinematics for the two different Poisson's ratios. For zero Poisson's ratio ($\nu=0$), the results of the standard method are very close to the reference solution, and the EAS method gives the same results as the standard method. However, in the case of nonzero Poisson's ratio, since the standard method only allows for constant transverse normal strains, the coupled bending stiffness increases. This leads to a much smaller deflection than the reference solution. This Poisson locking is alleviated by the EAS method, as it enhances the in-plane strain field of the cross-section. The EAS solution gives larger displacements that are much closer to the reference solutions (see also Table \ref{conv_test_beam_xyz_disp_atA}). It is also noticeable that the EAS solution `ext.-dir.-EAS-5p.' gives smaller deflections than `ext.-dir.-EAS', since its enriched linear strain field is incomplete, so that Poisson locking is not effectively alleviated. In the case of nonzero Poisson's ratio, a lateral ($Y$-directional) displacement occurs. Fig.\,\ref{thin_bend_lat_disp_edge_ab_dist} compares the lateral displacement along the edge $\overline {\mathrm{BA}}$, indicated in Fig.\,\ref{thin_bend_undeformed}. In the EAS solution, the magnitude of the lateral displacement increases and becomes closer to the average displacement of the reference solution, compared with the solution by the standard method. Although the lateral displacement at the point A (${\xi^1}=0.5\,\mathrm{m}$) in the standard method is closer to the reference solution than that of the EAS solution (see also Table \ref{conv_test_beam_xyz_disp_atA}), the accuracy of the lateral displacement improves substantially with the EAS method in an average sense (see also the difference in the $L^2$ norm in Fig.\,\ref{thin_bend_lat_disp_edge_ab_l2}). Further, the 5-parameter EAS formulation (ext.-dir.-EAS-5p.) shows a smaller magnitude of the lateral displacement in Fig.\,\ref{thin_bend_lat_disp_edge_ab_dist}, and a larger $L^2$ norm of the difference in Fig.\,\ref{thin_bend_lat_disp_edge_ab_l2}, due to the incomplete enrichment of the in-plane strain field, compared with the 9-parameter formulation (ext.-dir.-EAS). Fig.\,\ref{thin_bend_deformed_config_compare} shows that, in the final deformed configuration, the standard beam formulation exhibits a much smaller deflection than the other formulations due to Poisson locking. Table\,\ref{convergence_test_thin_bend_instab} shows that the beam solutions require fewer load steps and iterations than the brick element solution.
\clearpage
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b] {0.4875\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/straight_end_shear_f_x_disp.png}
\vskip -2pt
\caption{$X$-displacement}
\label{thin_bend_pr0}
\end{subfigure}
\begin{subfigure}[b] {0.4875\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/straight_end_shear_f_z_disp.png}
\vskip -2pt
\caption{$Z$-displacement}
\label{thin_bend_pr03}
\end{subfigure}
\vskip -2pt
\caption{{Cantilever beam under end force: Convergence of the normalized displacements at point $\mathrm{A}$ for two different cases of Poisson's ratio. The displacement is normalized by the reference solution using brick elements, where $\mathrm{deg.}=(2,1,2)$ and ${{n_\mathrm{el}}}=200\times1\times15$ for $\nu=0$, and $\mathrm{deg.}=(3,3,3)$ and ${{n_\mathrm{el}}}=200\times20\times20$ for $\nu=0.3$.} {The beam solutions are obtained by IGA with $p=3$.}}
\label{convergence_test_thin_bend}
\end{figure*}
\begin{figure}[htp!]
\centering
\begin{subfigure}[b] {0.52\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/straight_end_shear_lateral_disp.png}
\caption{Lateral displacement ($u_Y$)}
\label{thin_bend_lat_disp_edge_ab_dist}
\end{subfigure}
\begin{subfigure}[b] {0.4675\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/straight_end_shear_lateral_disp_L2_err.png}
\caption{Relative $L^2$ error of $u_Y$}
\label{thin_bend_lat_disp_edge_ab_l2}
\end{subfigure}
\caption{Cantilever beam under end force: Comparison of the lateral displacement along the edge $\overline {\mathrm{BA}}$ in the case of $\nu=0.3$. {The beam solutions are obtained by IGA with $p=3$. Also, we use $n_\mathrm{el}=40$ for the beam solutions in (a).} }
\label{thin_bend_lateral_disp_edge}
\end{figure}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b] {0.32\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/end_shear_ux_nh_pr03_solid.png}
\caption{Brick element}
\label{thin_bend_instab_beam}
\end{subfigure}
\begin{subfigure}[b] {0.32\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/end_shear_ux_nh_pr03_beam_std.png}
\caption{Beam (ext.-dir.-std.)}
\label{thin_bend_instab_beam_std}
\end{subfigure}
\begin{subfigure}[b] {0.32\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/end_shear_ux_nh_pr03_beam_eas.png}
\caption{Beam (ext.-dir.-EAS)}
\label{thin_bend_instab_beam_eas}
\end{subfigure}
\caption{{Cantilever beam under end force: Comparison of deformed configurations: (a) brick elements with deg.=$(3,3,3)$, ${{n_\mathrm{el}}}=200\times10\times10$, (b) beam elements (ext.-dir.-std.) with $p=3$ and ${{n_\mathrm{el}}}=40$, and (c) beam elements (ext.-dir.-EAS) with the same discretization as (b). The color represents the $X$-displacement.}}
\label{thin_bend_deformed_config_compare}
\end{figure*}
\begin{table}[!htbp]
\small
\begin{center}
\caption{{Cantilever beam under end force: Convergence test of normalized displacements at the point A for $\nu=0.3$. $u_\mathrm{A}$, $v_\mathrm{A}$, and $w_\mathrm{A}$ denote the $X$-, $Y$-, and $Z$-displacements at the point A, respectively. $(\bullet)^\mathrm{ref}$ denotes the reference solution.} All results are obtained by IGA with $p=3$.}
\label{conv_test_beam_xyz_disp_atA}
\begin{tabular}{cccccccc}
\Xhline{3\arrayrulewidth}
\multicolumn{4}{c}{Beam (ext.-dir.-std.)} & & \multicolumn{3}{c}{Beam (ext.-dir.-EAS)} \\ \cline{2-4} \cline{6-8}
${n_\mathrm{el}}$ & ${u_\mathrm{A}}/{u_\mathrm{A}^\mathrm{ref}}$ & ${v_\mathrm{A}}/{v_\mathrm{A}^\mathrm{ref}}$ & ${w_\mathrm{A}}/{w_\mathrm{A}^\mathrm{ref}}$ & & ${u_\mathrm{A}}/{u_\mathrm{A}^\mathrm{ref}}$ & ${v_\mathrm{A}}/{v_\mathrm{A}^\mathrm{ref}}$ & ${w_\mathrm{A}}/{w_\mathrm{A}^\mathrm{ref}}$\\
\Xhline{3\arrayrulewidth}
5 & 9.0455E-01 & 1.0141E+00 & 9.6811E-01 & & 9.9831E-01 & 1.0261E+00 & 9.9886E-01 \\
10 & 9.0588E-01 & 1.0214E+00 & 9.6919E-01 & & 1.0002E+00 & 1.0300E+00 & 1.0003E+00 \\
20 & 9.0598E-01 & 1.0211E+00 & 9.6930E-01 & & 1.0003E+00 & 1.0298E+00 & 1.0004E+00 \\
40 & 9.0599E-01 & 1.0211E+00 & 9.6930E-01 & & 1.0003E+00 & 1.0298E+00 & 1.0004E+00 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{center}
\end{table}
\begin{table}[]
\centering
\small
\caption{{Cantilever beam under end force: Comparison of Newton-Raphson iteration history for $\nu=0.3$. A uniform load increment is used.}}
\label{convergence_test_thin_bend_instab}
\begin{tabular}{ccclcclcc}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Iter.\#\\(last load\\step)\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Brick, deg.=(3,3,3),\\${n_\mathrm{el}}$=$200\times10\times10$\end{tabular}} & & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Beam (ext.-dir.-std.),\\$p=3$, ${n_\mathrm{el}}$=40\end{tabular}} & & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Beam (ext.-dir.-EAS),\\$p=3$, ${n_\mathrm{el}}$=40\end{tabular}} \\ \cline{2-3} \cline{5-6} \cline{8-9}
& \begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular} & \begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular} & & \begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular} & \begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular} & & \begin{tabular}[c]{@{}c@{}}Euclidean\\ norm of residual\end{tabular} & \begin{tabular}[c]{@{}c@{}}Energy\\ norm\end{tabular} \\
\Xhline{3\arrayrulewidth}
1 & 4.4E+02 & 2.9E+02 & & 1.0E+04 & 1.5E+03 & & 1.0E+04 & 1.2E+03 \\
2 & 6.4E+02 & 3.7E+00 & & 2.2E+04 & 1.2E+02 & & 1.7E+04 & 7.7E+01 \\
3 & 8.6E-01 & 2.3E-04 & & 1.0E+02 & 1.7E-01 & & 8.2E+01 & 7.7E-02 \\
4 & 6.9E-04 & 4.3E-12 & & 3.4E+00 & 2.4E-06 & & 1.4E+00 & 4.8E-07 \\
5 & 2.8E-08 & 1.1E-21 & & 2.5E-06 & 9.5E-17 & & 7.6E-07 & 5.4E-18 \\
6 & & & & 7.7E-08 & 1.8E-22 & & 7.2E-08 & 1.3E-22 \\ \cline{1-3} \cline{5-6} \cline{8-9}
\#load steps & \multicolumn{2}{c}{20} & & \multicolumn{2}{c}{10} & & \multicolumn{2}{c}{10} \\
\#iterations & \multicolumn{2}{c}{124} & & \multicolumn{2}{c}{73} & & \multicolumn{2}{c}{78} \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{table}
}
\subsection{Laterally loaded beam}
\label{ex_lat_load_b}
{The fourth example investigates the significance of considering the correct surface load rather than applying an equivalent load directly to the central axis, as is typically assumed in the analysis of thin beams. This significance was also discussed for the shell formulation based on an extensible director in \cite{simo1990stress}. We consider a clamped-clamped straight beam, where a distributed force of magnitude ${{\bar T}_0}={10^8}\mathrm{N}/\mathrm{m}^2$ in the negative $Z$-direction is applied over $0.1\mathrm{m}$ along the middle of the beam, as illustrated in Fig.\,\ref{model_des_str_b_con_f}. The beam has initial length $L=1\mathrm{m}$ and a square cross-section of dimension $h=w=0.1\mathrm{m}$, and a compressible Neo-Hookean material with Young's modulus $E=1\mathrm{GPa}$ and Poisson's ratio $\nu=0.3$ is used. We model the geometry using three NURBS patches such that the basis functions have ${C^0}$-continuity at the boundaries of the loaded area ($s=0.45\mathrm{m}$ and $0.55\mathrm{m}$) in order to represent the discontinuity of the distributed load. {For the beam, $4\times4$ Gauss integration points are used for the integration over the cross-section.} Fig.\,\ref{disp_diff_comp_str_b_con_f} compares the relative difference of the $Z$-displacement at the central axis between the beam formulation and the reference solution obtained by IGA using brick elements with deg.=(3,3,3) and ${{n_\mathrm{el}}}=320\times15\times15$ (Table\,\ref{app_lat_load_conv_test_ref_sol} shows the convergence result of the brick element solution). As expected, it is seen in Fig.\,\ref{disp_diff_comp_str_b_con_f} that the EAS formulation gives much smaller differences than the standard formulation due to the alleviation of Poisson locking. Fig.\,\ref{disp_diff_comp_str_b_con_f} also illustrates the difference between two ways of applying the surface load. The first follows the common practice in the analysis of thin beams, applying an equivalent load directly to the central axis, and is termed the \textit{equivalent central axis load}. The second, termed the \textit{correct surface load}, calculates the external stress resultant ${\bar{\boldsymbol{n}}}_0$ and the external director stress couple ${\bar{\tilde {\boldsymbol{m}}}}_0^1$ by substituting ${\bar{\boldsymbol{T}}}_0=-{{\bar T}_0}{{\boldsymbol{e}}_3}$ into Eqs.\,(\ref{beam_lin_mnt_balance_ext_f}) and (\ref{beam_dir_mnt_balance_ext_m}), respectively. In the \textit{equivalent central axis load}, on the other hand, the force per unit arc-length is calculated as ${\bar{\boldsymbol{n}}}_0=-{{\bar T}_0}w{{\boldsymbol{e}}_3}$, and the effect of the director stress couple is neglected, i.e., ${\bar{\tilde {\boldsymbol{m}}}}_0^1=\boldsymbol{0}$, since the load is assumed to be directly applied to the central axis. In Fig.\,\ref{disp_diff_comp_str_b_con_f}, the beam solutions using the correct surface load show much smaller differences, in both the standard and EAS formulations, than the results using the equivalent central axis load. Further, Fig.\,\ref{deform_str_b_clamped_cf} compares the deformed configurations and the change of cross-sectional area for three different formulations: the brick element solution with the correct surface load, the beam element solution with the EAS method and the correct surface load, and the beam element solution with the EAS method and the equivalent central axis load.
It is noticeable that the beam solution using the equivalent load shows a much smaller change of the cross-sectional area at the loaded part, since it neglects the effect of the external director stress couple. Table\,\ref{newton_iter_lateral_load_compare} shows that the brick element formulation requires a larger number of load steps to achieve convergence.}
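For this load case, assuming that the traction acts on the top surface located at ${\xi^1}=h/2$ (an assumption made here for illustration; cf. the analogous expression in the arch example of section \ref{ex_45_cant_beam_end_f}), the two load models differ only in the director stress couple:
\begin{align*}
\text{correct surface load:}&\quad {\bar{\boldsymbol{n}}}_0=-{\bar T}_0 w\,{\boldsymbol{e}}_3,\quad {\bar{\tilde {\boldsymbol{m}}}}_0^1=-\frac{{\bar T}_0 wh}{2}\,{\boldsymbol{e}}_3,\\
\text{equivalent central axis load:}&\quad {\bar{\boldsymbol{n}}}_0=-{\bar T}_0 w\,{\boldsymbol{e}}_3,\quad {\bar{\tilde {\boldsymbol{m}}}}_0^1=\boldsymbol{0}.
\end{align*}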
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.6\linewidth]{Figure/0_num_ex/cent_f_undeformed.png}
\caption{Laterally loaded beam: Undeformed configuration and boundary conditions.}
\label{model_des_str_b_con_f}
\end{figure*}
\begin{figure}[H] \centering
\includegraphics[width=0.6125\linewidth]{Figure/0_num_ex/cent_f_z-disp_err_ef.png}
\caption{{Laterally loaded beam: Comparison of relative difference in the $Z$-displacement on the central axis.} {The results are obtained by IGA with $p=3$.}}
\label{disp_diff_comp_str_b_con_f}
\end{figure}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b] {0.3125\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_cent_force_deformed_ca_ratio_brick.png}
\caption{Brick element (surface load)}
\label{deform_str_cent_f_deformed_solid}
\end{subfigure}\hspace{2.5mm}
\begin{subfigure}[b] {0.3125\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_cent_force_deformed_ca_ratio_beam_eas.png}
\caption{Beam element (surface load)}
\label{deform_str_cent_f_deformed_beam}
\end{subfigure}\hspace{2.5mm}
\begin{subfigure}[b] {0.3125\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/ex_cent_force_deformed_ca_ratio_beam_eas_equiv_f.png}
\caption{Beam element (central axis load)}
\label{deform_str_cent_f_deformed_beam_equiv_f}
\end{subfigure}\hspace{2.5mm}
\caption{{Laterally loaded beam: Comparison of deformed configurations. The results are obtained by IGA using (a) brick elements with deg.\,=(3,3,3) and ${{n_\mathrm{el}}}=320\times15\times15$, (b) beam elements (ext.-dir.-EAS) with $p=3$ and ${{n_\mathrm{el}}}=40$ with correct surface load, and (c) the same spatial discretization with (b) but with equivalent load directly applied to the central axis. The color represents the ratio of current cross-sectional area ($A$) to the initial one ($A_0$).}}
\label{deform_str_b_clamped_cf}
\end{figure*}
\begin{table}[]
\small
\begin{center}
\caption{{Laterally loaded beam: Comparison of the total number of load steps and iterations in the Newton-Raphson method. A uniform load increment is used. The brick elements use deg.=(3,3,3) and $n_{\mathrm{el}}=320\times15\times15$, and all beam elements use $p=3$ and $n_{\mathrm{el}}=320$.}}
\label{newton_iter_lateral_load_compare}
\begin{tabular}{cccclcc}
\Xhline{3\arrayrulewidth}
& \multicolumn{3}{c}{Correct surface load} & & \multicolumn{2}{l}{Equivalent central axis load} \\ \cline{2-4} \cline{6-7}
& \multicolumn{1}{l}{Brick} & \multicolumn{1}{l}{ext.-dir.-std} & \multicolumn{1}{l}{ext.-dir.-EAS} & & ext.-dir.-std & \multicolumn{1}{l}{ext.-dir.-EAS} \\ \Xhline{3\arrayrulewidth}
\multicolumn{1}{c}{\#load steps} & 5 & 1 & 2 & & \multicolumn{1}{c}{2} & 2 \\
\multicolumn{1}{c}{\#iterations} & 29 & 15 & 16 & & \multicolumn{1}{c}{15} & 16 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{center}
\end{table}
\subsection{{45$^{\circ}$-arch cantilever beam under end force}}
\label{ex_45_cant_beam_end_f}
{We also verify, in a curved beam example, the alleviation of Poisson locking by the EAS method and the significance of the correct surface load. The initial beam central axis lies in the $XY$-plane and describes $1/8$ of a full circle with radius $100\mathrm{m}$, and the cross-section has a square shape of dimension $h=w=5\mathrm{m}$. A $Z$-directional force of magnitude ${{\bar T}_0}=7.5\times{10^4}\mathrm{N/m}$ is applied on the upper edge of the end face, and the other end is clamped (see Fig.\,\ref{model_des_cant45}). We select a compressible Neo-Hookean material with Young's modulus $E=10^7\mathrm{Pa}$ and Poisson's ratio $\nu=0.3$. {For beams, we use $3\times3$ Gauss integration points for the integration over the cross-section.} The surface load ${{\boldsymbol{\bar T}}_0}=\left[0,0,{\bar T}_0\right]^\mathrm{T}$ leads to the external director stress couple ${\boldsymbol{\bar {\tilde m}}}_0^1 = {\left[ {0,0,-{{\bar T}_0}wh/2} \right]^\mathrm{T}}$, since the loaded edge is located at ${\xi^1}=-h/2$. Consequently, the following external stress couple is applied at the loaded end (see Fig.\,\ref{deform_cant45_h5_beam_cor}):
\begin{equation}
{\bar{\boldsymbol{m}}_0} \coloneqq {{\boldsymbol{d}}_\gamma } \times {\boldsymbol{\bar {\tilde m}}}_0^\gamma = {{\boldsymbol{d}}_1} \times {\boldsymbol{\bar {\tilde m}}}_0^1\ne\boldsymbol{0}.
\end{equation}
Fig.\,\ref{cant45deg_conv_norm_disp_graph} shows the beam displacements at the point A normalized by the reference solution based on brick elements with deg.=(3,3,3) and $n_{\mathrm{el}}=240\times15\times15$ (see Tables \ref{app_45deg_conv_test_ref_sol} and \ref{cant45_conv_normalized_disp} for the convergence results of the brick and beam element solutions, respectively). As expected, the results of the standard formulation (black curves), combined with the correct surface load condition, exhibit significantly smaller displacements due to Poisson locking, which is improved by employing the EAS method. Since the \textit{equivalent central axis load} neglects the external director stress couple (i.e., ${\boldsymbol{\bar {\tilde m}}^1_0}=\boldsymbol{0}$), it significantly overestimates the displacement at the point A, while the beam solution based on the \textit{correct surface load} is in very good agreement with the reference solution (see also the comparison of $Z$-displacement contours in Fig.\,\ref{deform_45deg_bend_deformed}). Further, Fig.\,\ref{cant45deg_conv_area_diff_graph} compares the relative difference of the cross-sectional area from the brick element result. The EAS formulation captures the change of cross-sectional area more accurately than the standard formulation, and the \textit{equivalent central axis load} leads to a larger difference from the brick element result than the \textit{correct surface load}. Table \ref{num_iter_45_deg} compares the total number of load steps and iterations in the iterative solution process.}
\begin{figure}[H] \centering
\includegraphics[width=0.6\linewidth]{Figure/0_num_ex/cant45deg_init_undeformed_desc_1.png}
\caption{{45$^{\circ}$-arch cantilever beam: Undeformed configuration and boundary conditions. The axes of $\xi^1$ and $\xi^2$ represent two principal directions of the cross-section.}}
\label{model_des_cant45}
\end{figure}
\begin{figure}[htp]
\centering
\begin{subfigure}[b] {0.49\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/cant45deg_disp_conv_edge_f_pr03.png}
\vskip -4pt
\caption{Normalized displacements}
\label{cant45deg_conv_norm_disp_graph}
\end{subfigure}
\begin{subfigure}[b] {0.49\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/cant45deg_edge_f_pr03_c_area.png}
\vskip -4pt
\caption{Cross-sectional area ($A$)}
\label{cant45deg_conv_area_diff_graph}
\end{subfigure}
\caption{{45$^{\circ}$-arch cantilever beam: (a) Convergence of normalized displacements at the point A, and (b) the relative difference of the cross-sectional area $A$ from the brick element result in $L^2$ norm. `AL' and `SL' denote the equivalent central axis load and the correct surface load, respectively. $u_\mathrm{A}$, $v_\mathrm{A}$, and $w_\mathrm{A}$ denote the $X$-, $Y$-, and $Z$-displacements at the point A, respectively. $(\bullet)^\mathrm{ref}$ denotes the reference solution. Also, `EAS' represents the 9-parameter EAS formulation, i.e., `ext.-dir.-EAS'.} All results are obtained by IGA with $p=3$.}
\label{deform_45deg_conv_disp}
\end{figure}
\begin{table}[]
\small
\begin{center}
\caption{{{45$^{\circ}$-arch cantilever beam: Convergence of the normalized displacements at the point A. $u_\mathrm{A}$, $v_\mathrm{A}$, and $w_\mathrm{A}$ denote the $X$-, $Y$-, and $Z$-displacements at the point A, respectively. $(\bullet)^\mathrm{ref}$ denotes the reference solution.}} All results are obtained by IGA with $p=3$.}
\label{cant45_conv_normalized_disp}
\begin{tabular}{cccclccc}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{${n_\mathrm{el}}$} & \multicolumn{3}{c}{Beam (ext.-dir.-std.)} & & \multicolumn{3}{c}{Beam (ext.-dir.-EAS)} \\ \cline{2-4} \cline{6-8}
& ${u_\mathrm{A}}/{u_\mathrm{A}^\mathrm{ref}}$ & ${v_\mathrm{A}}/{v_\mathrm{A}^\mathrm{ref}}$ & ${w_\mathrm{A}}/{w_\mathrm{A}^\mathrm{ref}}$ & & ${u_\mathrm{A}}/{u_\mathrm{A}^\mathrm{ref}}$ & ${v_\mathrm{A}}/{v_\mathrm{A}^\mathrm{ref}}$ & ${w_\mathrm{A}}/{w_\mathrm{A}^\mathrm{ref}}$ \\
\Xhline{3\arrayrulewidth}
5 & 9.6802E-01 & 9.3753E-01 & 9.7859E-01 & & 9.9660E-01 & 9.9514E-01 & 9.9583E-01 \\
10 & 9.6783E-01 & 9.4193E-01 & 9.8154E-01 & & 9.9633E-01 & 1.0018E+00 & 1.0002E+00 \\
20 & 9.6785E-01 & 9.4222E-01 & 9.8178E-01 & & 9.9635E-01 & 1.0021E+00 & 1.0005E+00 \\
40 & 9.6785E-01 & 9.4224E-01 & 9.8181E-01 & & 9.9635E-01 & 1.0022E+00 & 1.0005E+00 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[!htbp]
\centering
\begin{subfigure}[b] {0.3\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/cant45deg_deform_zdisp_nh_solid_cb_243_18_18.png}
\caption{Brick (correct surface load)}
\label{deform_cant45_h5_beam_solid}
\end{subfigure}
\begin{subfigure}[b] {0.3\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/cant45deg_deform_zdisp_nh_beam_cb_43.png}
\caption{Beam (correct surface load)}
\label{deform_cant45_h5_beam_cor}
\end{subfigure}
\begin{subfigure}[b] {0.3\textwidth} \centering
\includegraphics[width=\linewidth]{Figure/0_num_ex/cant45deg_deform_zdisp_nh_beam_ef_cb_43.png}
\caption{Beam (central axis load)}
\label{deform_cant45_h5_beam_equiv_f}
\end{subfigure}
\caption{{45$^{\circ}$-arch cantilever beam: Comparison of deformed configurations. The color represents the displacement in the $Z$-direction.} The brick element solution is obtained by IGA with $\mathrm{deg.}=(3,3,3)$ and $n_\mathrm{el}=240\times15\times15$, and the beam element solutions are obtained by the 9-parameter EAS formulation and IGA with $p=3$ and $n_\mathrm{el}=40$.}
\label{deform_45deg_bend_deformed}
\end{figure*}
\begin{table}[]
\caption{{45$^{\circ}$-arch cantilever beam: Comparison of the total number of load steps and iterations. In each formulation, the load is uniformly incremented.}}
\label{num_iter_45_deg}
\begin{center}
\begin{tabular}{cccc}
\Xhline{3\arrayrulewidth}
& Brick element & Beam (ext.-dir.-std.) & Beam (ext.-dir.-EAS) \\
& ${\mathrm{deg.}=(3,3,3), n_\mathrm{el}=200\times15\times15}$ & ${p=3, n_\mathrm{el}=40}$ & $p=3$, $n_\mathrm{el}=40$\\
\Xhline{3\arrayrulewidth}
\multicolumn{1}{c}{\#load steps} & 10 & 10 & 10 \\
\multicolumn{1}{c}{\#iterations} & 80 & 74 & 80 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
In this paper, we present an isogeometric finite element formulation of geometrically exact Timoshenko beams with extensible directors. The presented beam formulation has the following advantages.
\begin{itemize}
\item The extensible director vectors allow for the accurate and efficient description of in-plane cross-sectional deformations.
\item They belong to the space $\mathbb{R}^3$, so that the configuration can be updated additively.
\item {In order to alleviate Poisson locking, the complete in-plane strain field has been added in the form of incompatible modes by the EAS method.}
\item The formulation does not require the zero-stress assumption in the constitutive law, and offers a straightforward interface to employ general three-dimensional constitutive laws such as hyperelasticity.
\item In the analysis of beams, the external load is often assumed to be directly applied to the central axis. It is shown that this equivalent central axis load can lead to significant errors.
\item We verify the accuracy and efficiency of the developed beam formulation by comparison with the results of brick elements.
\end{itemize}
The following areas could be interesting future research directions.
\begin{itemize}
\item {Incorporation of out-of-plane deformation of cross-sections: In this paper, cross-section warping has not been considered, which restricts the range of application to compact convex cross-sections, where the out-of-plane cross-sectional deformation is less pronounced. In order to consider open and thin-walled cross-sections, one can incorporate cross-section warping by employing an additional degree-of-freedom as in \citet{simo1991geometrically}, \citet{gruttmann1998geometrical}, and \citet{coda2009solid}.
One can also refer to the works of \citet{wackerfuss2009mixed, wackerfuss2011nonlinear}, where additional strain and stress parameters are eliminated at the element level, so that the finite element formulation finally has three translational and three rotational degrees-of-freedom per node.}
\item Incorporation of the \textit{exact} geometry of the initial boundary surface of beams with non-uniform cross-sections: {Eq.\,(\ref{inf_area_lat_bd_surf}) can be applied to non-uniform cross-sections along the central axis.} IGA has an advantage over conventional finite element formulations in that it can straightforwardly utilize the \textit{exact} geometrical information of the initial boundary NURBS surface.
\item {Numerical instability at high slenderness ratios: There are several factors that limit the slenderness ratio within which the presented beam formulation can be properly utilized. First, the coupling between bending and through-the-thickness stretching can lead to an ill-conditioned stiffness matrix in the thin beam limit (see section \ref{instab_thin_b_lim_end_mnt}). This issue can be alleviated by existing techniques such as the multiplicative decomposition of directors in \citet{simo1990stress} and the mass scaling technique in \citet{hokkanen2019isogeometric}. Second, a mixed variational formulation or an optimal quadrature rule for reduced integration needs to be developed to alleviate membrane, shear, and curvature-thickness locking. Since the developed beam formulation has additional degrees-of-freedom for the in-plane cross-sectional deformation and requires numerical integration over the cross-section, it may not be straightforward to directly employ existing reduced integration methods like the recent development in \citet{zou2021galerkin}.}
\item Enforcement of rotation continuity at intersections with slope discontinuity: As the developed beam formulation does not rely on rotational degrees-of-freedom, describing rigid joint connections between multiple beams becomes a challenge. A selective introduction of rotational degrees-of-freedom associated with the variation (increment) of director vectors can be utilized.
\item Beam contact problems: One can investigate the advantages of incorporating the transverse normal strain in beam contact problems. For example, the coupling between transverse normal stretching and bending deformations was illustrated in \citet{naghdi1989significance}.
\item Incompressible and nearly incompressible hyperelastic materials: One can extend the presented formulation to incorporate the incompressibility constraint.
\item Strain objectivity and energy-momentum conservation: It has been shown in several works including \citet{romero2002objective}, \citet{betsch2002frame}, and \citet{eugster2014director}
that the direct interpolation of director fields satisfies the objectivity of strain measures. Furthermore, this can facilitate the straightforward application of time integration schemes with energy-momentum conservation\footnote{Refer to the comments in \citet{eugster2014director} and references therein.}. An in-depth discussion on the objectivity and energy-momentum conservation property in the developed beam formulation is planned for subsequent work.
Although a relevant numerical study on the objectivity, path-independence, and energy-momentum conservation was performed in \citet{coda2009solid} and \citet{coda2011fem} based on beam kinematics with extensible directors, further investigation including an analytical verification seems still necessary.
\end{itemize}
\nopagebreak[4]
Soon after the discovery of recoil-free emission and absorption of gamma rays
by M\"{o}ssbauer\ in 1958~\cite{Moessbauer:1958,Frauenfelder:1962},
it was suggested by Visscher that a similar effect should also exist for
neutrinos emitted in electron capture processes from unstable nuclei embedded
into a crystal lattice~\cite{Visscher:1959}. In the 1980s, the idea was
further developed by Kells and Schiffer~\cite{Kells:1983,Kells:1984nm},
who showed that bound state beta decay~\cite{Bahcall:1961} could
provide an alternative recoilless production mechanism. In this case, an
antineutrino with a very small energy uncertainty would be emitted, which
could then be absorbed through induced orbital electron
capture~\cite{Mikaelyan:1967}. Recently, there has been a renewed interest
in this idea, inspired by two works by Raghavan~\cite{Raghavan:2005gn,
Raghavan:2006xf}, in which the feasibility of an experiment using the
emission process
\begin{equation}
{}^3\H \ \rightarrow \ {}^3{\rm He}~+~e^- \text{(bound)}~+~\bar{\nu}_e
\label{eq:prod}
\end{equation}
and the detection process
\begin{equation}
{}^3{\rm He}~+~e^- \text{(bound)}~+~\bar{\nu}_e \ \ \rightarrow \ ^3\H
\label{eq:abs}
\end{equation}
has been studied. The $^3\H$ and $^3{\rm He}$ atoms were proposed to be embedded
into metal crystals. The detection process would then have a resonant nature,
leading to an enhancement of the detection cross section by up to a factor of
$10^{12}$ compared to the non-resonant capture of neutrinos of the same
energy. If such an experiment were realized, it could carry out a
very interesting physics program, including neutrino detection with 100 g
scale (rather than ton or kiloton scale) detectors, searching for neutrino
oscillations driven by the mixing angle $\theta_{13}$ at a baseline of
only 10~m, determining the neutrino mass hierarchy without using matter
effects, searching for active-sterile neutrino oscillations and studying the
gravitational redshift of neutrinos~\cite{Raghavan:2005gn,Raghavan:2006xf,
Minakata:2006ne,Minakata:2007tn}.
In this paper we consider recoillessly emitted and captured neutrinos -- which
we will call M\"{o}ssbauer\ neutrinos -- from a theoretical point of view. In our
discussion, we will mainly focus on the $^3\H$\,--$^3{\rm He}$ system of
Eqs.~\eqref{eq:prod} and \eqref{eq:abs}, but most of our results apply also
to other emitters and absorbers of M\"{o}ssbauer\ neutrinos.
One of our main goals is to resolve the recent controversy about the question
of whether M\"{o}ssbauer\ neutrinos would oscillate. It has been
argued~\cite{Bilenky:2006hk}
that the answer to this question depends on whether equal energies or equal
momenta are assumed for different neutrino mass eigenstates -- the assumptions
often made in deriving the standard formula for the oscillation probability.
Moreover, a possible inhibition of oscillations due to the time-energy
uncertainty relation has been brought up~\cite{Bilenky:2007vs}.
To come to definitive conclusions regarding the oscillation phenomenology
of M\"{o}ssbauer\ neutrinos, we employ a quantum field theoretical (QFT) approach,
in which neutrinos are treated as intermediate states in the combined
\mbox{production -- propagation -- detection} process and no {\em a priori}
assumptions on the energies or momenta of the different neutrino mass
eigenstates are made.
We begin in Sec.~\ref{sec:qualitative} by qualitatively discussing how the
peculiar features of M\"{o}ssbauer\ neutrinos, and in particular their very small
energy uncertainty, affect the oscillation phenomenology. We argue that
oscillations do occur, and that the coherence length is infinite if line
broadening is neglected. We then proceed to quantitative arguments in
Sec.~\ref{sec:QM} and discuss a formula for the $\bar{\nu}_e$
survival probability in the quantum mechanical intermediate wave packet
formalism~\cite{Giunti:1997wq}, in which the neutrino is described as a
superposition of three wave packets, one for each mass eigenstate.
In Sec.~\ref{sec:QFT}, we derive our main result, the rate for the
combined process of neutrino production, propagation and detection
in the QFT external wave packet approach. In this framework,
the neutrino is described by an internal line in a Feynman
diagram, while its production and detection partners are described by
wave packets. Also in this section, for the first time, we calculate the
rates for beta decay with production of a bound-state electron and for
the inverse process of stimulated electron capture in the case of nuclei
bound to a crystal lattice. We distinguish between different neutrino line
broadening mechanisms and concentrate on the oscillation phenomenology,
paying special attention to the coherence and localization terms in the
$\bar{\nu}_e$ survival probability and to the M\"{o}ssbauer\ resonance conditions
arising in each case. In Sec.~\ref{sec:discussion}, we discuss the obtained
results and draw our conclusions.
\section{M\"{o}ssbauer\ neutrinos do oscillate}
\label{sec:qualitative}
M\"{o}ssbauer\ neutrinos have very special properties compared to those of neutrinos
emitted and detected in conventional processes. In particular, they are almost
monochromatic because they are produced in two-body decays of nuclei embedded
in a crystal lattice and no phonon excitations of the host crystal accompany
their production, which ensures the recoilless nature of this process.
Therefore the width of the
neutrino line is only limited by the natural linewidth, which is the
reciprocal of the mean lifetime of the emitter, and by solid-state effects,
including electromagnetic interactions of the randomly oriented nuclear spins,
lattice defects and impurities~\cite{Raghavan:2006xf,Potzel:2006ad,
Coussement:1992,Balko:1997}.
For $^3\H$ decay, the natural linewidth is $1.17 \cdot 10^{-24}$~eV, but it
has been estimated that various broadening effects degrade this value
to an experimentally achievable M\"{o}ssbauer\ linewidth of $\gamma =
\mathcal{O}(10^{-11} \ \text{eV})$~\cite{Potzel:2006ad,Coussement:1992}.
Compared to the neutrino energy in bound state $^3\H$ decay, $E = 18.6$~keV,
the achievable relative linewidth is therefore of order $10^{-15}$.
In the standard derivations of the neutrino oscillation formula it is
often assumed that the different neutrino mass eigenstates composing the
produced flavour eigenstate have the same momentum ($\Delta p=0$), while
their energies differ by $\Delta E\simeq \Delta m^2/2 E$. For
bound state tritium beta decay (\ref{eq:prod}) and $\Delta m^2 =\Delta
m_{31}^2\simeq 2.5\times 10^{-3}$ eV$^2$ one has $\Delta E\simeq
7\times 10^{-8}$ eV, which is much larger than $\gamma$.
One may therefore wonder if the extremely small energy uncertainty of
M\"{o}ssbauer\ neutrinos would inhibit oscillations by destroying the coherence of
the different mass eigenstates of which the produced $\bar{\nu}_e$
is composed. Indeed, if neutrinos are emitted with no momentum
uncertainty and their energy uncertainty ($\sim\gamma$) is much smaller
than the energy differences of the different mass eigenstates, in each
decay event one would exactly know which mass eigenstate has been emitted.
This would prevent a coherent emission of different mass eigenstates, thus
destroying neutrino oscillations. If, on the contrary, one adopts the
same energy assumption, the momenta of different mass eigenstates would
differ by $\Delta p\simeq \Delta m^2/2p$, which would not destroy
their coherence provided that the momentum uncertainty of the
emitted neutrino state is greater than $\Delta p$; in that case,
oscillations are possible.
It is well known that in reality neither the same-momentum nor the same-energy
assumption is correct
\cite{Winter:1981kj,Giunti:1991ca,Giunti:2000kw,Giunti:2001kj,Giunti:2003ax};
however, for neutrinos from conventional sources both lead to the correct
result, the reason being that neutrinos are ultra-relativistic and the spatial
size of the corresponding wave packets is small compared to the oscillation
length.%
\footnote{It is also essential that the energy and momentum
uncertainties of these neutrinos are of the same order. }
The above assumptions are thus just shortcuts which allow one to
arrive at
the correct result in an easy (though not rigorous) way. However, M\"{o}ssbauer\
neutrinos represent a very peculiar case, which requires a special
consideration.
Let us discuss the issue of coherence of different mass eigenstates in more
detail. If one knows the values of the neutrino energy $E$ and momentum $p$
with uncertainties $\sigma_E$ and $\sigma_p$, from the energy-momentum relation
of relativistic particles $E^2=p^2+m^2$ one can infer the value of the squared
neutrino mass $m^2$ with the uncertainty
\begin{align}
\sigma_{m^2} = \sqrt{(2 E \sigma_E)^2 + (2 p \sigma_p)^2}\,,
\label{eq:coherence-cond}
\end{align}
where it is assumed that $\sigma_E$ and $\sigma_p$ are independent. By
$\sigma_E$ and $\sigma_p$ we will now understand the intrinsic quantum
mechanical uncertainties of the neutrino energy and momentum, beyond which
these quantities cannot be measured in a given production or detection process;
$\sigma_{m^2}$ is then the quantum mechanical uncertainty of the inferred
neutrino squared mass. A generic requirement for coherent emission of
different mass eigenstates is their indistinguishability: the uncertainty
$\sigma_{m^2}$ has to be larger than the mass squared difference
$\Delta m^2$~\cite{Kayser:1981ye}. From the above discussion, we know that for M\"{o}ssbauer\
neutrinos corresponding to the $^3\H$\,--$^3{\rm He}$ system one has $E
\sigma_E \sim 10^{-8}$~eV$^2$, which is much smaller than $\Delta m^2\sim
10^{-3}$ eV$^2$. Thus, whether or not M\"{o}ssbauer\ neutrinos oscillate depends on
whether or not $2p\sigma_p>\Delta m^2$.
While the energy of M\"{o}ssbauer\ neutrinos is very precisely given by the production
process itself, this is not the case for their momentum. The neutrino momentum
can in principle be determined by measuring the recoil momentum of the crystal
in which the emitter is embedded. The ultimate uncertainty $\sigma_p$ of this
measurement is related to the coordinate uncertainty $\sigma_x$ of the
emitting nucleus through the Heisenberg relation $\sigma_p \sigma_x\ge 1/2$.
Therefore, for the momentum uncertainty to be small enough to destroy the
coherence of different mass eigenstates, $2p\sigma_p < \Delta m^2$, the
coordinate uncertainty of the emitter must satisfy $\sigma_x \gtrsim 2p/\Delta
m^2$. This means that the emitter should be strongly de-localized with the
coordinate uncertainty $\sigma_x$ of order of the neutrino oscillation
length $L^{\rm osc}=4\pi p/\Delta m^2\simeq 20$~m. This is certainly not
the case, because the coordinate uncertainty of the emitter cannot exceed
the size of the source, i.e.~a few cm. In fact, it is even much smaller,
because in principle it is possible to find out which particular nucleus
has undergone the M\"{o}ssbauer\ transition by destroying the crystal and checking which
$^3$H atom has been transformed into $^3$He. Thus, $\sigma_x$ is of the order
of interatomic distances, which by the Heisenberg relation implies $\sigma_p\sim 10$~keV, so that
\begin{equation}
2p\sigma_p \gg \Delta m^2\,.
\label{eq:local}
\end{equation}
This means that M\"{o}ssbauer\ neutrinos will oscillate.
The condition (\ref{eq:local}) is often called the localization
condition, because it requires the neutrino source to be localized in a
spatial region that is small compared to the neutrino oscillation length
$L^{\rm osc}$.
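As a rough numerical check (in units $\hbar = c = 1$, and taking the
representative value $\sigma_x \sim 10^{-10}$~m for the interatomic spacing;
these numbers are illustrative only), the Heisenberg relation gives
$\sigma_p \gtrsim 1/(2\sigma_x) \sim 1$~keV, and hence
\begin{align}
2p\sigma_p \gtrsim 2\times (1.86\times 10^{4}~{\rm eV}) \times (10^{3}~{\rm eV})
\sim 4\times 10^{7}~{\rm eV}^2 \,\gg\, \Delta m^2 \sim 10^{-3}~{\rm eV}^2\,,
\end{align}
so that the localization condition \eqref{eq:local} holds with some ten orders
of magnitude to spare.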
It should be noted that for the observability of neutrino oscillations the
coherence of the emitted neutrino state is not by itself sufficient; in
addition, this state must not lose its coherence until the neutrino is
detected. A coherence loss could occur because of the wave packet separation.
When a neutrino is produced as a flavour eigenstate, the wave packets of its
mass eigenstate components fully overlap; however, since they propagate with
different group velocities, after a time $t^{\rm coh}$ or upon propagating a
distance $L^{\rm coh}\simeq t^{\rm coh}$, these wave packets separate to
such an extent that they can no longer interfere in the detector, and
oscillations become unobservable. The coherence length $L^{\rm coh}$ depends
on the energy uncertainty $\sigma_E$ of the emitted neutrino state and
becomes infinite in the limit $\sigma_E\to 0$.
From the above discussion it follows that the oscillation phenomenology
of M\"{o}ssbauer\ neutrinos should mainly depend on their momentum uncertainty,
whereas their energy uncertainty, though crucial for the M\"{o}ssbauer\ resonance
condition, plays a relatively minor role for neutrino oscillations.
Therefore, the equal energy assumption, though in general incorrect,
should be a good approximation when discussing oscillations of M\"{o}ssbauer\ neutrinos.
Adopting this approach, i.e.~assuming the neutrino energy to be \emph{exactly}
fixed at a value $E$ by the production process, one obtains
for the $\bar{\nu}_e$ survival probability $P_{ee}$ at a distance $L$
\begin{align}
P_{ee}
&= \sum_{j,k} |U_{ej}|^2 |U_{ek}|^2
\exp\Big[ -2\pi i \frac{L}{L^{\rm osc}_{jk}} \Big].
\label{eq:infinite-wp-P1}
\end{align}
Here $U$ is the leptonic mixing matrix, $L^{\rm osc}_{jk}$ are the
partial oscillation lengths,
\begin{align}
L^{\rm osc}_{jk} &= \frac{4\pi E}{\Delta m_{jk}^2}\,,
\label{eq:Losc}
\end{align}
and the neutrinos are assumed to be ultra-relativistic or nearly
mass-degenerate, so that
\begin{align}
\frac{\Delta m_{jk}^2}{2 E} \ll E \,.
\label{eq:relativistic-approx}
\end{align}
Eq.~(\ref{eq:infinite-wp-P1}) is just the standard result for the $\bar{\nu}_e$
survival probability. As expected, we do not obtain any decoherence factors
if the neutrino energy is exactly fixed. We have also taken into account here
that in real experiments the sizes of the source and the detector are much smaller
than the smallest of the oscillation lengths $L^{\rm osc}_{jk}$, so that the
localization condition \eqref{eq:local} is satisfied.
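For orientation, in the two-flavour limit (a single mixing angle $\theta$ and
mass squared difference $\Delta m^2$) Eq.~\eqref{eq:infinite-wp-P1} reduces to
the familiar expression
\begin{align}
P_{ee} = 1 - \sin^2 2\theta\, \sin^2\Big( \pi \frac{L}{L^{\rm osc}} \Big)\,,
\qquad L^{\rm osc} = \frac{4\pi E}{\Delta m^2}\,.
\end{align}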
\section{M\"{o}ssbauer\ neutrinos in the intermediate wave packet formalism}
\label{sec:QM}
Although Eq.~\eqref{eq:infinite-wp-P1} shows that neutrino oscillations are not
inhibited by the energy constraints implied by the M\"{o}ssbauer\ effect, the assumption
of an exactly fixed neutrino energy is certainly unrealistic. Therefore, we
will now proceed to a more accurate treatment of M\"{o}ssbauer\ neutrinos using an
intermediate wave packet model~\cite{Giunti:1991ca,Giunti:1991sx,Kiers:1995zj,
Giunti:1997wq,Giunti:2003ax}. In this approach, the propagating neutrino is
described by a superposition of mass eigenstates, each of which is in turn a
wave packet with a finite momentum width. With the assumption of Gaussian wave
packets, Giunti, Kim and Lee~\cite{Giunti:1991sx,Giunti:1997wq} obtain the
following expression for the $\bar{\nu}_e$ survival probability in the
approximation of ultra-relativistic neutrinos:
\begin{align}
P_{ee}
&= \sum_{j,k} |U_{ej}|^2 |U_{ek}|^2
\exp\bigg[
- 2\pi i \frac{L}{L^{\rm osc}_{jk}}
- \bigg( \frac{L}{L^{\rm coh}_{jk}} \bigg)^2
- 2\pi^2 \xi^2 \bigg( \frac{1}{2 \sigma_p L^{\rm osc}_{jk}} \bigg)^2
\bigg].
\label{eq:QM-P1}
\end{align}
Here
\begin{align}
L^{\rm coh}_{jk} &= \frac{2 \sqrt{2} E^2}{\sigma_p |\Delta m_{jk}^2|}
\label{eq:Lcoh}
\end{align}
are the partial coherence lengths, $\sigma_p$ being the effective momentum
uncertainty of the neutrino state, and the oscillation lengths
$L^{\rm osc}_{jk}$ are given by Eq.~\eqref{eq:Losc}. $E$ is the energy
that a massless neutrino emitted in the same process would have, and the
$\mathcal{O}(1)$ parameter $\xi$ quantifies the deviation of the actual
energies of massive neutrinos from this value. Since the energy uncertainty
is very small for M\"{o}ssbauer\ neutrinos, the mass eigenstates differ in momentum,
but hardly in energy, so that $\xi$ should be negligibly small in our case.
One can see that the first term in the exponent of Eq.~\eqref{eq:QM-P1}
is the standard oscillation phase. The second term yields a decoherence
factor, which describes the suppression of oscillations due to the wave
packet separation. For conventional neutrino experiments with non-negligible
$\xi$, the third term implements a localization condition by suppressing
oscillations if the spatial width $\sigma_x = 1/(2\sigma_p)$ of the neutrino
wave packet is comparable to or larger than the oscillation length $L^{\rm osc}_{jk}$
(cf. Eqs.~(\ref{eq:Losc}) and (\ref{eq:local})). However, we have seen that,
due to the smallness of $\xi$, the intermediate wave packet formalism predicts
this condition to be irrelevant for oscillations of M\"{o}ssbauer\ neutrinos.
\section{M\"{o}ssbauer\ neutrinos in the external wave packet formalism}
\label{sec:QFT}
\begin{figure}
\begin{center}
\includegraphics{feyn.eps}
\end{center}
\caption{Feynman diagram for neutrino emission and absorption
in the $^3$H\,--$^3$He system.}
\label{fig:feyn}
\end{figure}
In the derivation of the quantum mechanical result discussed in the previous
section, certain assumptions had to be made on the properties of the neutrino
wave packets, in particular on the parameters $\sigma_p$ and $\xi$. We will
now proceed to the discussion of a QFT
approach~\cite{Jacob:1961,Sachs:1963,Giunti:1993se,Rich:1993wu,Grimus:1996av,
Grimus:1998uh,Grimus:1999ra,Cardall:1999ze,Beuthe:2001rc,Beuthe:2002ej}, in
which these quantities will be automatically determined from the properties of
the source and the detector.
Our calculation will be based on the Feynman diagram shown in
Fig.~\ref{fig:feyn}, in which the neutrino is described as an internal line.
We take the external particles to be confined by quantum mechanical harmonic
oscillator potentials to reflect the fact that they are bound in a crystal
lattice. Typical values for the harmonic oscillator frequencies are of the
order of the Debye temperature $\Theta_D \sim 600\ \mathrm{K}\simeq 0.05$ eV
of the respective crystals~\cite{Raghavan:2006xf,Potzel:2006ad}. Although this
simplistic treatment neglects the detailed structure of the solid state
lattice, it is known to correctly reproduce the main features of the
conventional M\"{o}ssbauer\ effect~\cite{Lipkin:1973}, and since we are interested
mainly in the oscillation physics and not in the exact overall process rate,
it is sufficient for our purposes. As only recoil-free neutrino emission
and absorption are of interest to us, we can neglect thermal excitations and
consider the parent and daughter nuclei in the source and detector to be in
the ground states of their respective harmonic oscillator potentials.
In Sec.~\ref{sec:QFT-minimal}, we will develop our formalism and derive
an expression for the rate of the combined process of M\"{o}ssbauer\ neutrino emission,
propagation and absorption. In
\mbox{Secs.~\ref{sec:QFT-inhom} -- \ref{sec:QFT-nat}} we will then discuss
in detail the effects of different line broadening mechanisms.
\subsection{The formalism}
\label{sec:QFT-minimal}
Let us denote the harmonic oscillator frequencies for tritium and helium in
the source by $\omega_{\H,S}$ and $\omega_{{\rm He},S}$ and those in the detector
by $\omega_{\H,D}$ and $\omega_{{\rm He},D}$. In general, these are four different
numbers because $^3\H$ and $^3{\rm He}$ have different chemical properties, and
because the different abundances of the two species in the source and in the
detector lead to different lattice environments, so that
$\omega_{\H,S} \neq \omega_{\H,D}$ and $\omega_{{\rm He},S} \neq \omega_{{\rm He},D}$.
We ignore possible anisotropies of the oscillator frequencies because their
inclusion would merely lengthen our formulas without giving new insights into
the oscillation phenomenology. The normalized wave functions of the ground
states of the three-dimensional harmonic oscillators $\ket{\psi_{A,B,0}}$ are
given by
\begin{align}
\psi_{A,B,0}(\vec{x}, t) = \bigg[\frac{m_A \omega_{A,B}}{\pi}\bigg]^\frac{3}{4}
\exp\bigg[\! -\frac{1}{2} m_A \omega_{A,B} |\vec{x} - \vec{x}_B|^2 \bigg] \cdot e^{-i E_{A,B} t},
\label{eq:HO-WF-gs}
\end{align}
where $A = \{ \H, {\rm He} \}$ distinguishes the two types of atoms and $B =
\{ S, D \}$ distinguishes between quantities related to the source and to the
detector. The masses of the tritium and $^3{\rm He}$ atoms are denoted by $m_\H$
and $m_{\rm He}$, and the coordinates of the lattice sites at which the atoms are
localized in the source and in the detector are $\vec{x}_S$ and $\vec{x}_D$.
The energies $E_{A,B}$ of the external particles are not exactly fixed due to
the line broadening mechanisms discussed in Sec.~\ref{sec:qualitative}, but
follow narrow distribution functions, which are centered around $E_{A,B,0} =
m_A + \frac{3}{2} \omega_{A,B}$. For the differences of these mean
energies of tritium and helium atoms in the source and detector we will
use the notation
\begin{align}
E_{S,0} = E_{\H,S,0} - E_{{\rm He},S,0}\,,\qquad\qquad
E_{D,0} = E_{\H,D,0} - E_{{\rm He},D,0}\,.
\label{eq:diff1}
\end{align}
Before proceeding to calculate the overall rate of the process of neutrino
production, propagation and detection, we compute the expected rates of
the M\"{o}ssbauer\ neutrino production and detection treated as separate processes,
ignoring neutrino oscillations. This calculation is very instructive, and
we will use its result as a benchmark for comparison with our subsequent
QFT calculations.
The effective weak interaction Hamiltonians for the neutrino production
and detection $H_S^+$ and $H_D^-$ are given by Eqs. (\ref{eq:Hs+}) and
(\ref{eq:Hd-}) of appendix C. We will first assume that the neutrino emitted
in the recoil-free production process (\ref{eq:prod}) is monochromatic,
i.e.~neglect the natural linewidth as well as all broadening effects. Likewise,
we will for now neglect the absorption line broadening effects in the recoilless
detection process (\ref{eq:abs}). A straightforward calculation gives
for the rate of recoilless neutrino production
\begin{align}
\Gamma_p = \Gamma_0\,X_S\,,
\label{eq:Gammap}
\end{align}
where
\begin{align}
\Gamma_0 = \frac{G_F^2 \cos^2\theta_c}{\pi}\; |\psi_e(R)|^2 \,m_e^2\,
\left(|M_V|^2+g_A^2 |M_A|^2\right)\,\left(\frac{E_{S,0}}{m_e}\right)^2
\kappa_S
\label{eq:Gamma0}
\end{align}
with $G_F$ the Fermi constant, $\theta_c$ the Cabibbo angle, $m_e$
the electron mass, $M_V$ and $M_A$ the vector and axial-vector (or Fermi
and Gamow-Teller) nuclear matrix elements and $g_A\simeq 1.25$ the
axial-vector coupling constant. Note that for the allowed beta transitions in
the $^3$H\,--$^3$He system, $M_V=1$ and $M_A\approx \sqrt{3}$. The quantity
$\psi_e(R)$ is the value of the anti-symmetrized atomic wave function of
$^3{\rm He}$ at the surface of the nucleus. The factor $\kappa_S$ takes into
account that the spectator electron which is initially in the $1s$ atomic
state of $^3\H$ ends up in the $1s$ state of $^3{\rm He}$. It is given by
the overlap integral of the corresponding atomic wave functions:
\begin{align}
\kappa_S~=~\Big|\int \Psi_{Z=2,S}(\vec{r})^* \,\Psi_{Z=1,S}(\vec{r})\, d^3 r
\,\Big|^2\,.
\label{eq:kappaS}
\end{align}
The factor $X_S$ in Eq.~(\ref{eq:Gammap}) is defined as
\begin{align}
X_S = 8\left(\eta_S+\frac{1}{\eta_S}\right)^{-3}
e^{-\frac{p^2}{\sigma_{pS}^2}}\,\equiv\,Y_{S}\,e^{-\frac{p^2}{\sigma_{pS}^2}}\,,
\label{eq:Xs}
\end{align}
where $p=\sqrt{E_{S,0}^2-m^2}$ is the neutrino momentum%
\footnote{Since in this calculation we ignore neutrino oscillations, we
also neglect the neutrino mass differences.},
and
\begin{align}
\eta_S = \sqrt{\frac{m_\H \,\omega_{\H,S}}{m_{\rm He}\, \omega_{{\rm He},S}}}\,,
\qquad\qquad \sigma_{pS}^2 = m_\H \,\omega_{\H,S}+m_{\rm He}\, \omega_{{\rm He},S}\,.
\label{eq:etaS}
\end{align}
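To get a feeling for the numbers, one may insert the atomic masses
$m_\H \simeq m_{\rm He} \simeq 2.8$~GeV together with the representative
oscillator frequencies $\omega_{\H,S} \simeq \omega_{{\rm He},S} \simeq 0.05$~eV
mentioned above (these values are meant as order-of-magnitude inputs only).
One then finds $\eta_S \simeq 1$, hence $Y_S \simeq 1$, and
$\sigma_{pS} \simeq 17$~keV, so that the recoil-free fraction is
\begin{align}
e^{-p^2/\sigma_{pS}^2} \simeq
\exp\bigg[ -\frac{(18.6~{\rm keV})^2}{(17~{\rm keV})^2} \bigg] \simeq 0.3\,.
\end{align}
Given the simplicity of the harmonic model, this estimate should of course
only be trusted at the order-of-magnitude level.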
The energy spectrum $\rho(E)$ of the emitted M\"{o}ssbauer\ neutrinos in the
considered approximation is
\begin{align}
\rho(E) = \Gamma_0 \, X_S \, \delta(E-E_{S,0})\,.
\label{eq:rhoE}
\end{align}
For the cross section of the recoilless detection process (\ref{eq:abs})
we obtain
\begin{align}
\sigma(E) = B_0\,X_D\,\delta(E-E_{D,0})\,,
\label{eq:sigma}
\end{align}
where
\begin{align}
B_0=4\pi G_F^2 \cos^2\theta_c\; |\psi_e(R)|^2
\left(|M_V|^2+g_A^2 |M_A|^2\right)\,\kappa_D\,.
\label{eq:B0}
\end{align}
The factor $\kappa_D$ here is defined similarly to $\kappa_S$ in
Eq.~\eqref{eq:kappaS}. Note that in the approximation of hydrogen-like
atomic wave functions one has
$\kappa_S=\kappa_D=512/729\simeq 0.7$.
The factor $X_D$ in Eq.~\eqref{eq:sigma} is defined similarly to the
corresponding factor for the
production process, i.e.
\begin{align}
X_D = 8\left(\eta_D+\frac{1}{\eta_D}\right)^{-3}
e^{-\frac{p^2}{\sigma_{pD}^2}}\,\equiv\,Y_{D}\, e^{-\frac{p^2}{\sigma_{pD}^2}}
\label{eq:Xd}
\end{align}
with
\begin{align}
\eta_D = \sqrt{\frac{m_\H \,\omega_{\H,D}}{m_{\rm He}\, \omega_{{\rm He},D}}}\,,
\qquad\qquad \sigma_{pD}^2 = m_\H \,\omega_{\H,D}+m_{\rm He}\, \omega_{{\rm He},D}\,.
\label{eq:etaD}
\end{align}
The M\"{o}ssbauer\ neutrino production rate $\Gamma_p$ and detection cross section
$\sigma(E)$ differ from those previously obtained for unbound parent and
daughter nuclei respectively in Refs.~\cite{Bahcall:1961} and
\cite{Mikaelyan:1967} by the factors $X_S$ and $X_D$. Note that in the
limit $m_\H \,\omega_{\H,S}=m_{\rm He}\, \omega_{{\rm He},S}$, $m_\H \,\omega_{\H,D}=
m_{\rm He}\, \omega_{{\rm He},D}$, the pre-exponential factors $Y_S$ and $Y_D$ in
Eqs.~(\ref{eq:Xs}) and (\ref{eq:Xd}) become equal to unity, so that $X_S$ and
$X_D$ reduce to the exponentials, which are merely the recoil-free fractions
in the production and detection processes (see the discussion below).
For unpolarized tritium nuclei in the source the produced neutrino flux is
isotropic; therefore the spectral density of the neutrino flux at the
detector located at a distance $L$ from the source is $\rho(E)/(4\pi L^2)$.
The detection rate is thus
\begin{align}
\Gamma\,=\,\frac{1}{4\pi L^2}\int_0^\infty \rho(E)\sigma(E)\,dE \,=\,
\frac{\Gamma_0\, B_0}{4\pi L^2}\,X_S X_D \;\delta(E_{S,0}-E_{D,0})\,.
\label{eq:rate1}
\end{align}
We see that it is infinite when the M\"{o}ssbauer\ resonance condition $E_{S,0}=E_{D,0}$
is exactly satisfied and zero otherwise, which is a consequence of our
assumption of infinitely sharp emission and absorption lines. This assumption
is certainly unphysical, and a realistic calculation should take into account
the finite linewidth effects. We do that here by assuming Lorentzian energy
distributions for the production and detection processes, which will be
useful for comparison with the results of our subsequent QFT approach. In this
approximation Eqs.~(\ref{eq:rhoE}) and (\ref{eq:sigma}) have to be replaced by
\begin{align}
\rho(E) = \Gamma_0 \,X_S\,
\frac{\gamma_S/2\pi}{(E-E_{S,0})^2+\gamma_S^2/4}\,,
\qquad \sigma(E) =
B_0\,X_D\,\frac{\gamma_D/2\pi}{(E-E_{D,0})^2+\gamma_D^2/4}\,,
\label{eq:rhoEsigma}
\end{align}
where $\gamma_S$ and $\gamma_D$ are the energy widths associated with
production and detection. The combined rate of the neutrino production,
propagation and detection process is then
\begin{align}
\Gamma\,= \,\frac{1}{4\pi L^2}\int_0^\infty \rho(E)\sigma(E)\,dE \,\simeq\,
\frac{\Gamma_0\, B_0}{4\pi L^2}\,X_S X_D\;
\frac{(\gamma_S+\gamma_D)/2\pi}{(E_{S,0}-E_{D,0})^2+(\gamma_S+\gamma_D)^2/4}\,.
\label{eq:rate2}
\end{align}
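The approximate equality here follows from the standard convolution property
of Lorentzians, after extending the lower limit of integration to $-\infty$
(which is justified since $E_{S,0},\,E_{D,0} \gg \gamma_S,\,\gamma_D$):
\begin{align}
\int_{-\infty}^{\infty}
\frac{\gamma_S/2\pi}{(E-E_{S,0})^2+\gamma_S^2/4}\;
\frac{\gamma_D/2\pi}{(E-E_{D,0})^2+\gamma_D^2/4}\; dE
= \frac{(\gamma_S+\gamma_D)/2\pi}{(E_{S,0}-E_{D,0})^2+(\gamma_S+\gamma_D)^2/4}\,.
\end{align}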
As can be seen from this formula, the M\"{o}ssbauer\ resonance condition is
\begin{align}
(E_{S,0}-E_{D,0})^2\ll(\gamma_S+\gamma_D)^2/4\,.
\label{eq:resonance}
\end{align}
If it is satisfied, the neutrino detection cross section is enhanced by a
factor of order $(\alpha Z m_e)^3/[p_e E_e(\gamma_S+\gamma_D)]$ compared to
cross sections of non-resonant capture reactions $\bar{\nu}_e+A\to A'+e^+$
for neutrinos of the same energy (assuming the recoil-free fraction
to be of order 1). For $\gamma_S+\gamma_D\sim 10^{-11}$ eV the enhancement
factor can be as large as $10^{12}$.
We now turn to the QFT treatment of the overall neutrino production,
propagation and detection process, first neglecting the line broadening
effects. We derive the corresponding transition amplitude from the matrix
elements of the weak currents in the standard way by applying the
coordinate-space Feynman rules to the diagram in Fig.~\ref{fig:feyn}.
For the external tritium and helium nuclei, we use the bound state wave
function $\psi_{A,B,0}(\vec{x}, t)$ from Eq.~\eqref{eq:HO-WF-gs}. We
obtain
\begin{align}
i \mathcal{A} &=
\int\! d^3x_1 \, dt_1 \int\! d^3 x_2 \, dt_2 \,
\bigg( \frac{m_\H \omega_{\H, S}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_\H \omega_{\H,S}
|\vec{x}_1 - \vec{x}_S|^2 \bigg] \, e^{-i E_{\H,S} t_1} \nonumber\\
&\hspace{1cm} \cdot
\bigg( \frac{m_{\rm He} \omega_{{\rm He},S}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_{\rm He} \omega_{{\rm He},S}
|\vec{x}_1 - \vec{x}_S|^2 \bigg] \, e^{+i E_{{\rm He},S} t_1} \nonumber\\
&\hspace{1cm} \cdot
\bigg( \frac{m_{\rm He} \omega_{{\rm He}, D}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_{\rm He} \omega_{{\rm He},D}
|\vec{x}_2 - \vec{x}_D|^2 \bigg] \, e^{-i E_{{\rm He},D} t_2} \nonumber\\
&\hspace{1cm} \cdot
\bigg( \frac{m_\H \omega_{\H,D}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_\H \omega_{\H,D}
|\vec{x}_2 - \vec{x}_D|^2 \bigg] \, e^{+i E_{\H,D} t_2} \nonumber\\
&\hspace{1cm} \cdot \sum_j
\mathcal{M}_S^\mu \mathcal{M}_D^{\nu *} |U_{ej}|^2 \, \int \!
\frac{d^4p}{(2\pi)^4}
e^{-i p_0 (t_2 - t_1) + i \vec{p} (\vec{x}_2 - \vec{x}_1)} \nonumber\\
&\hspace{1cm} \cdot
\bar{u}_{e,S} \gamma_\mu (1 - \gamma^5) \,
\frac{i (\slashed{p} + m_j)}{p_0^2 - \vec{p}^2 - m_j^2 + i\epsilon} \,
(1 + \gamma^5) \gamma_\nu u_{e,D}.
\label{eq:QFT-A1}
\end{align}
The Dirac spinors for the external particles are denoted by $u_{A,B}$
with $A = \{ e, \H, {\rm He} \}$ and $B = \{ S, D \}$. Note that all spinors
are non-relativistic, so that we can neglect their momentum dependence.
The matrix elements $\mathcal{M}^\mu_S$ and $\mathcal{M}^\mu_D$ encode
the information on the bound state tritium beta decay and also on the
inverse process, the induced orbital electron capture which takes place
in the detector. They are given by
\begin{align}
\mathcal{M}_{S,D}^\mu &= \frac{G_F \cos\theta_c}{\sqrt{2}} \, \psi_e(R) \,
\bar{u}_{\rm He} (M_V\, \delta^\mu_0 - g_A M_A \sigma_i \,
\delta^{\mu}_i/\sqrt{3} ) u_\H\,\kappa_{S,D}^{1/2}\,.
\label{eq:Mj}
\end{align}
The integrations over $t_1$ and $t_2$ in Eq.~\eqref{eq:QFT-A1} yield
energy-conserving $\delta$-functions at the neutrino production and detection
vertices. The spatial integrals are Gaussian and can be evaluated after making
the transformations $\vec{x}_1 \rightarrow \vec{x}_1 + \vec{x}_S$ and
$\vec{x}_2 \rightarrow \vec{x}_2 + \vec{x}_D$. We obtain
\begin{align}
i \mathcal{A} &= \mathcal{N} \int \! \frac{d^4p}{(2\pi)^4} \,
2\pi \delta(p_0 - E_S) \, 2\pi \delta(p_0 - E_D) \,
\exp\bigg[ -\frac{\vec{p}^2}{2 \sigma_p^2} \bigg] \nonumber\\
&\hspace{1cm} \cdot
\sum_j \mathcal{M}_S^\mu \mathcal{M}_D^{\nu *} |U_{ej}|^2
\bar{u}_{e,S} \gamma_\mu (1 - \gamma^5) \,
\frac{i (\slashed{p} + m_j) e^{i \vec{p} \vec{L}}}{p_0^2 - \vec{p}^2 -
m_j^2 + i\epsilon} \,
(1 + \gamma^5) \gamma_\nu u_{e,D},
\label{eq:QFT-A2}
\end{align}
where we have used the notation
\begin{align}
E_S = E_{\H,S} - E_{{\rm He},S}\,,\qquad\qquad
E_D = E_{\H,D} - E_{{\rm He},D}\,,
\label{eq:EsEd}
\end{align}
and introduced the baseline vector $\vec{L} = \vec{x}_D - \vec{x}_S$. The
quantity $\sigma_p$, which is given by
\begin{align}
\frac{1}{\sigma_p^2} =
\frac{1}{m_\H \omega_{\H,S} + m_{\rm He} \omega_{{\rm He},S}}
+ \frac{1}{m_\H \omega_{\H,D} + m_{\rm He} \omega_{{\rm He},D}}\,,
\label{eq:sigma-p}
\end{align}
can be interpreted as an effective momentum uncertainty of the neutrino.
Note that $\sigma_p^{-2}=\sigma_{pS}^{-2}+\sigma_{pD}^{-2}$.
We have also defined a constant
\begin{align}
\mathcal{N} &=
\bigg( \frac{m_\H \omega_{\H, S}}{\pi} \bigg)^{\frac{3}{4}}
\bigg( \frac{m_{\rm He} \omega_{{\rm He},S}}{\pi} \bigg)^{\frac{3}{4}}
\bigg( \frac{m_{\rm He} \omega_{{\rm He},D}}{\pi} \bigg)^{\frac{3}{4}}
\bigg( \frac{m_\H \omega_{\H, D}}{\pi} \bigg)^{\frac{3}{4}} \nonumber\\
&\hspace{3cm} \cdot
\bigg( \frac{2\pi}{m_\H \omega_{\H,S} + m_{\rm He} \omega_{{\rm He},S}}
\bigg)^\frac{3}{2}
\bigg( \frac{2\pi}{m_\H \omega_{\H,D} + m_{\rm He} \omega_{{\rm He},D}}
\bigg)^\frac{3}{2},
\end{align}
containing the numerical factors from Eq.~\eqref{eq:HO-WF-gs} and coming from
the integrals over $\vec{x}_1$ and $\vec{x}_2$. One of the $\delta$-functions
in Eq.~\eqref{eq:QFT-A2} can now be used to perform the integration over
$p_0$, thereby fixing $p_0$ at the value $p_0 = E_S = E_D$. To compute
the remaining integral over the three-momentum $\vec{p}$, we
use a theorem by Grimus and Stockinger~\cite{Grimus:1996av}, which states the
following: Let $\psi(\vec{p})$ be a three times continuously differentiable
function on $\mathbb{R}^3$, such that $\psi$ itself and all its first and
second derivatives decrease at least as $1/|\vec{p}|^2$ for $|\vec{p}|
\rightarrow \infty$. Then, for any real number $A > 0$,
\begin{align}
\int d^3p \, \frac{\psi(\vec{p}) \, e^{i \vec{p} \vec{L}}}{A - \vec{p}^2 + i\epsilon}
\xrightarrow{|\vec{L}| \rightarrow \infty}
-\frac{2 \pi^2}{L} \psi(\sqrt{A} \tfrac{\vec{L}}{L}) e^{i \sqrt{A} L}
+ \mathcal{O} (L^{-\frac{3}{2}}).
\label{eq:Grimus}
\end{align}
The validity conditions are fulfilled in our case, so that in leading
order in $1/L$ we have
\begin{align}
i \mathcal{A} &= \frac{-i}{2L} \mathcal{N} \, \delta(E_S - E_D) \,
\sum_j \exp\bigg[\! -\frac{E_S^2 - m_j^2}{2 \sigma_p^2} \bigg]
\mathcal{M}_S^\mu \mathcal{M}_D^{\nu *} |U_{ej}|^2 \,
e^{i \sqrt{E_S^2 - m_j^2} L} \nonumber\\
&\hspace{6cm} \cdot
\bar{u}_{e,S} \gamma_\mu (1 - \gamma^5) (\slashed{p}_j + m_j)
(1 + \gamma^5) \gamma_\nu u_{e,D}\,,
\label{eq:QFT-A3}
\end{align}
where the 4-vector $p_j$ is defined as $p_j = (E_S, (E_S^2 - m_j^2)^{1/2} \,
\vec{L}/L)$. The Grimus-Stockinger theorem ensures that for $L\gg E_0^{-1}$,
where $E_0$ is the characteristic neutrino energy, the intermediate-state
neutrino is essentially on mass shell and its momentum points from the
neutrino source to the detector.
The transition probability $\mathcal{P}$ is obtained by summing
$|\mathcal{A}|^2$ over the spins of the final states and averaging it over
the initial-state spins. Note that no integration over final-state momenta
is necessary because we consider transitions into discrete states. The
transition rate is obtained from $\mathcal{P}$ as $\Gamma = d\mathcal{P}/dT$,
where $T$ is the total running time of the experiment. As we shall see, in
the case of inhomogeneous line broadening $\mathcal{P}\propto T$ for large $T$,
so that $\Gamma$ is independent of $T$ in that limit. The same is true for
the homogeneous line broadening, except for the special case of the
natural line width, for which the dependence on $T$ is more complicated
(see Sec.~\ref{sec:QFT-nat}).
\subsection{Inhomogeneous line broadening}
\label{sec:QFT-inhom}
Inhomogeneous line broadening is due to stationary effects, such as
impurities, lattice defects, variations in the lattice constant, etc.
\cite{Potzel:2006ad,Balko:1997}. These effects are taken into
account by summing the probabilities of the process for all possible
energies of the external particles, weighted with the corresponding
probabilities of these energies. In other words, one has to fold the
probability or total rate of the process with the energy distributions of
tritium and helium atoms in the source and detector, $\rho_{{\rm He},S}(E_{{\rm He},S})$,
$\rho_{\H,D}(E_{\H,D})$, $\rho_{\H,S}(E_{\H,S})$ and $\rho_{{\rm He},D}(E_{{\rm He},D})$.
We obtain
\begin{align}
\mathcal{P} &=
\int_0^\infty
\! dE_{\H,S} \, dE_{{\rm He},S} \, dE_{{\rm He},D} \, dE_{\H,D} \, \nonumber\\
&\hspace{3cm} \cdot
\rho_{\H,S}(E_{\H,S}) \, \rho_{{\rm He},D}(E_{{\rm He},D}) \,
\rho_{{\rm He},S}(E_{{\rm He},S}) \, \rho_{\H,D}(E_{\H,D}) \,
\overline{|\mathcal{A}|^2},
\end{align}
where $\overline{|\mathcal{A}|^2}$ is the squared modulus of the amplitude,
averaged over initial spins and summed over final spins. Using the standard
trace techniques to evaluate these spin sums and neglecting the momenta of
the non-relativistic external particles, one finds
\begin{align}
\mathcal{P}
=& T\, \frac{G_F^4\,\cos^4\theta_c}{\pi L^2}\,|\psi_e(R)|^4 E_{S,0}^2\,
(|M_V|^2 + g_A^2 |M_A|^2)^2\,Y_S Y_D \kappa_S \kappa_D
\int_0^\infty \!\!\! dE_{\H,S} \, dE_{{\rm He},S} \, dE_{{\rm He},D} \, dE_{\H,D} \,
\nonumber\\
&\hspace{0.5cm} \cdot \delta(E_S - E_D)
\rho_{\H,S}(E_{\H,S}) \, \rho_{{\rm He},D}(E_{{\rm He},D}) \,
\rho_{{\rm He},S}(E_{{\rm He},S}) \, \rho_{\H,D}(E_{\H,D}) \nonumber\\
&\hspace{0.5cm} \cdot
\sum_{j,k} |U_{ej}|^2 |U_{ek}|^2 \,
\exp\bigg[\! -\frac{2 E_S^2 - m_j^2 - m_k^2}{2 \sigma_p^2} \bigg]
e^{i \big(\sqrt{E_S^2 - m_j^2} - \sqrt{E_S^2 - m_k^2}\big) L}\,,
\label{eq:QFT-Gamma1}
\end{align}
where $Y_S$ and $Y_D$ were defined in Eqs.~(\ref{eq:Xs}) and (\ref{eq:Xd}).
Here we have taken into account that for $T \gg |E_S - E_D|^{-1}$ the
squared $\delta$-function appearing in $\overline{|\mathcal{A}|^2}$ can
be rewritten as%
\footnote{The expression $\delta(E_S-E_D)$ here should be understood as a
$\delta$-like function of very small width. For $|E_S - E_D|\sim 10^{-11}$
eV, the condition $T \gg |E_S-E_D|^{-1}$ would require $T\gg 10^{-4}$~s,
which should be very well satisfied in any realistic experiment. }
\begin{align}
[\delta(E_S - E_D)]^2
\simeq \frac{1}{2\pi} \delta(E_S - E_D)
\int_{-T/2}^{T/2} \! dt \, e^{i (E_S - E_D) t}
= \frac{T}{2\pi} \delta(E_S - E_D)\,.
\label{eq:double-delta}
\end{align}
The overall process rate $\Gamma$ is then obtained from
Eq.~(\ref{eq:QFT-Gamma1}) by simply dividing by $T$. Using the definitions
of $\Gamma_0$ and $B_0$ given in Eqs.~(\ref{eq:Gamma0}) and (\ref{eq:B0}),
one finds
\begin{align}
\Gamma =& \frac{\Gamma_0 \,B_0}{4\pi L^2}\;Y_S Y_D
\int_0^\infty \! dE_{\H,S} \, dE_{{\rm He},S} \, dE_{{\rm He},D} \, dE_{\H,D} \,
\nonumber\\
&\hspace{0.5cm} \cdot \delta(E_S - E_D)
\rho_{\H,S}(E_{\H,S}) \, \rho_{{\rm He},D}(E_{{\rm He},D}) \,
\rho_{{\rm He},S}(E_{{\rm He},S}) \, \rho_{\H,D}(E_{\H,D}) \nonumber\\
&\hspace{0.5cm} \cdot
\sum_{j,k} |U_{ej}|^2 |U_{ek}|^2 \,
\exp\bigg[\! -\frac{2 E_S^2 - m_j^2 - m_k^2}{2 \sigma_p^2} \bigg]
e^{i \big(\sqrt{E_S^2 - m_j^2} - \sqrt{E_S^2 - m_k^2}\big) L}.
\label{eq:QFT-Gamma1a}
\end{align}
Before proceeding to the computation of the remaining integrations over the
energy distributions of the external particles,
let us discuss the expression in the last line in Eq.~\eqref{eq:QFT-Gamma1a}.
In the approximation of ultra-relativistic (or nearly mass-degenerate)
neutrinos, Eq.~\eqref{eq:relativistic-approx}, the last exponential becomes
the standard oscillation phase factor $\exp(-2\pi i L / L^{\rm osc}_{jk})$
with the oscillation length defined in Eq.~\eqref{eq:Losc}. The additional
exponential suppression term $\exp[-(2 E_S^2 - m_j^2 - m_k^2)/2 \sigma_p^2]$
is an analogue of the well-known Lamb-M\"{o}ssbauer\ factor (or recoil-free
fraction)~\cite{Frauenfelder:1962,Lipkin:1973,Raghavan:2005gn}, which
describes the relative probability of recoil-free emission and absorption
compared to the total emission and absorption probability. We see that
for M\"{o}ssbauer\ neutrinos this factor depends not only on their energy, but also on
their masses. Therefore, if two mass eigenstates, $\nu_j$ and $\nu_k$, do not
satisfy the relation $|\Delta m_{jk}^2| \ll \sigma_p^2$, the emission and
absorption of the lighter mass eigenstate will be suppressed compared to
the emission and absorption of the heavier one. This can be viewed as a
reduced mixing of the two states, which in turn leads to a suppression
of oscillations. To stress this point directly in our formulas, we rewrite
the corresponding factor as
\begin{align}
\exp\bigg[ -\frac{(p^{\rm min}_{jk})^2}{\sigma_p^2} \bigg]
\exp\bigg[ -\frac{|\Delta m_{jk}^2|}{2 \sigma_p^2} \bigg],
\label{eq:Lamb-MB}
\end{align}
where $p^{\rm min}_{jk}$ is the smaller of the two momenta of the mass
eigenstates $\nu_j$ and $\nu_k$,
\begin{align}
(p^{\rm min}_{jk})^2 = E_S^2 - \max(m_j^2, m_k^2)\,.
\end{align}
The first exponential in Eq.~\eqref{eq:Lamb-MB} describes the
suppression of the emission rate and the absorption cross section, i.e.~is
a generalized Lamb-M\"{o}ssbauer\ factor, while the second one describes the
suppression of oscillations. The condition $|\Delta m_{jk}^2| \lesssim
2\sigma_p^2$ enforced by this second exponential can also be interpreted as
a localization condition: defining the spatial localization $\sigma_x \simeq
1/(2\sigma_p)$, we can reformulate it as $L^{\rm osc}_{jk} \gtrsim 4\pi
\sigma_x E_S/\sigma_p$. Since the generalized Lamb-M\"{o}ssbauer\ factor (the
first factor in Eq.~(\ref{eq:Lamb-MB})) enforces $E_S \lesssim
\sigma_p$, this inequality is certainly fulfilled if $|L^{\rm osc}_{jk}|
\gtrsim 2\pi \sigma_x$ holds. The latter, stronger, localization condition
is the one obtained in other external wave packet
calculations~\cite{Giunti:1993se,Grimus:1998uh,Beuthe:2001rc} and is also
equivalent to the one obtained in the intermediate wave packet
picture~\cite{Giunti:1991sx,Giunti:1997wq} and discussed in Sec.~\ref{sec:QM}.
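Numerically, this suppression of oscillations is utterly negligible for the
$^3\H$\,--$^3{\rm He}$ system. With the illustrative oscillator parameters
used above, and assuming similar frequencies in the source and the detector,
Eq.~\eqref{eq:sigma-p} gives $\sigma_p \simeq \sigma_{pS}/\sqrt{2} \simeq
12$~keV, so that
\begin{align}
\frac{|\Delta m_{jk}^2|}{2\sigma_p^2} \simeq
\frac{2.5\times 10^{-3}~{\rm eV}^2}{3\times 10^{8}~{\rm eV}^2} \sim 10^{-11}\,,
\end{align}
i.e.~the second exponential in Eq.~\eqref{eq:Lamb-MB} deviates from unity only
at the $10^{-11}$ level.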
Let us now consider the integrations over the spectra of initial and final
states in Eq.~\eqref{eq:QFT-Gamma1a}. To evaluate these integrals, we need
expressions for $\rho_{A,B}$, based on the physics of the inhomogeneous line
broadening mechanisms. To a very good approximation,
these effects cause a Lorentzian smearing of the energies of the external
states~\cite{Potzel:PrivComm}, so that the energy distributions are
\begin{align}
\rho_{A,B}(E_{A,B}) &=
\frac{\gamma_{A,B}/2\pi}{(E_{A,B} - E_{A,B,0})^2 + \gamma_{A,B}^2/4}\,,
\label{eq:QFT-Lorentzian}
\end{align}
where, as before, $A = \{ \H, {\rm He} \}$, $B = \{ S, D \}$ and
$E_{A,B,0} = m_A + \frac{3}{2} \omega_{A,B}$. After evaluating the four
energy integrals in Eq.~\eqref{eq:QFT-Gamma1a} (see appendix
\ref{sec:appendix-inhom} for details), we obtain
\begin{align}
\Gamma =& \frac{\Gamma_0 \,B_0}{4\pi L^2}\;Y_S Y_D \,\frac{1}{2\pi}\,
\sum_{j,k} |U_{ej}|^2 |U_{ek}|^2 \,
\exp\bigg[ -\frac{(p^{\rm min}_{jk})^2}{\sigma_p^2} \bigg]
\exp\bigg[ -\frac{|\Delta m_{jk}^2|}{2 \sigma_p^2} \bigg] \nonumber\\
&\hspace{1cm} \cdot
\frac{1}{E_{S,0} - E_{D,0} \pm i \, \tfrac{\gamma_S - \gamma_D}{2}} \,
\Bigg[
\frac{\gamma_D A_{jk}^{(S)}}
{E_{S,0} - E_{D,0} \pm i \, \tfrac{\gamma_S +
\gamma_D}{2}}
+ \frac{\gamma_S A_{jk}^{(D)}}
{E_{S,0} - E_{D,0} \mp i \, \tfrac{\gamma_S +
\gamma_D}{2}}
\Bigg]\,.
\label{eq:QFT-Gamma3}
\end{align}
In deriving this expression we have used the fact that the generalized
Lamb-M\"{o}ssbauer\ factor is almost constant over the resonance region and can thus
be approximated by its value at $\bar{E} = \frac{1}{2} (E_{S,0} + E_{D,0})$.
The quantities $A_{jk}^{(B)}$ in Eq.~\eqref{eq:QFT-Gamma3} are given by
\begin{align}
A_{jk}^{(B)}
= \exp\bigg[ -i \frac{\Delta m_{jk}^2}{2(E_{B,0} \pm i \,
\tfrac{\gamma_B}{2})}\, L \bigg]
&\simeq \exp\bigg[ -2\pi i \frac{L}{L^{\rm osc}_{B,jk}} \bigg] \,
\exp\bigg[ - \frac{L}{L^{\rm coh}_{B,jk}} \bigg]\,.
\label{eq:inhom-A}
\end{align}
In Eqs.~(\ref{eq:QFT-Gamma3}) and (\ref{eq:inhom-A}) the upper (lower)
signs correspond to $\Delta m_{jk}^2>0$ ($\Delta m_{jk}^2<0$). The oscillation
and coherence lengths in (\ref{eq:inhom-A}) are defined in analogy with
Eqs.~\eqref{eq:Losc} and \eqref{eq:Lcoh}:
\begin{align}
L^{\rm osc}_{B,jk} = \frac{4\pi E_{B,0}}{\Delta m_{jk}^2}\simeq
\frac{4\pi \bar{E}}{\Delta m_{jk}^2}\,,
\qquad\qquad
L^{\rm coh}_{B,jk} = \frac{4 E_{B,0}^2}{\gamma_B |\Delta m_{jk}^2|} \simeq
\frac{4 \bar{E}^2}{\gamma_B |\Delta m_{jk}^2|}\,.
\label{eq:Lcoh2}
\end{align}
We see that Eq.~\eqref{eq:QFT-Gamma3} depends not on the individual energies
and widths of all external states separately, but only on the combinations
$E_{B,0} = E_{\H,B,0} - E_{{\rm He},B,0}$ and $\gamma_B=\gamma_{\H,B} +
\gamma_{{\rm He},B}$. In the limit of no neutrino oscillations, i.e.~when
all $\Delta m_{jk}^2=0$ or $U_{aj}=\delta_{aj}$, Eq.~\eqref{eq:QFT-Gamma3}
reproduces the no-oscillation result (\ref{eq:rate2}) obtained in our
calculation of the M\"{o}ssbauer\ neutrino production and detection rates treated
as separate processes.
If the localization condition $|\Delta m_{jk}^2|\ll 2\sigma_p^2$ is satisfied
for all $j$ and $k$, as is expected to be the case in realistic experiments,
one can pull the generalized Lamb-M\"{o}ssbauer\ factor out of the sum in
Eq.~(\ref{eq:QFT-Gamma3}) and replace the localization exponentials by
unity, which yields
\begin{align}
\Gamma \simeq \frac{\Gamma_0 \,B_0}{4\pi L^2}\;Y_S Y_D \,
\exp\bigg[ -\frac{E_{S,0}^2-m_0^2}{\sigma_p^2} \bigg]\,
\sum_{j,k} |U_{ej}|^2 |U_{ek}|^2 \, I_{jk} \,.
\label{eq:QFT-Gamma3a}
\end{align}
Here $m_0$ is an average neutrino mass and $I_{jk}$ is defined in
Eq.~(\ref{eq:Ijk}). In realistic situations, it is often sufficient to
consider two-flavour approximations to this expression. Indeed, at baselines
$L \simeq 10$~m which
are suitable to search for oscillations driven by $\theta_{13}$, the ``solar''
mass squared difference $\Delta m_{21}^2$ is inessential, whereas for longer
baselines around $L\simeq 300$~m, which could be used to study the oscillations
driven by the parameters $\Delta m_{21}^2$ and $\theta_{12}$, the subdominant
oscillations governed by $\Delta m_{31}^2$ and $\theta_{13}$ are in the
averaging regime, leading to an effective 2-flavour oscillation probability.
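These baseline choices can be checked directly. Taking $\bar{E} = 18.6$~keV,
$|\Delta m^2_{31}| \simeq 2.5\times 10^{-3}$~eV$^2$ and $\Delta m^2_{21}
\simeq 8\times 10^{-5}$~eV$^2$ as rough inputs, Eq.~\eqref{eq:Losc} gives
\begin{align}
L^{\rm osc}_{31} \simeq 20~{\rm m}\,, \qquad
L^{\rm osc}_{21} \simeq 600~{\rm m}\,,
\end{align}
so that the first oscillation maxima indeed lie at $L^{\rm osc}/2 \simeq 10$~m
and $\simeq 300$~m, respectively.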
In both cases one therefore needs to evaluate
\begin{align}
&\sum_{j,k = 1, 2} |U_{ej}|^2 |U_{ek}|^2 \, I_{jk}
= \frac{(\gamma_S + \gamma_D) / 2\pi}
{(E_{S,0} - E_{D,0})^2 + \frac{(\gamma_S + \gamma_D)^2}{4}}
\Bigg\{
(c^4 + s^4) + \frac{c^2 s^2}{2} \Big[ A^{(S)} + A^{(D)} + c.c. \Big]
\Bigg\} \nonumber\\
&\hspace{0.5cm}
- \frac{c^2 s^2 / 4\pi}{(E_{S,0} - E_{D,0})^2 + \frac{(\gamma_S + \gamma_D)^2}{4}}
\Bigg[
\frac{(A^{(S)} - A^{(D)})
\big[
(E_{S,0} - E_{D,0})(\gamma_S - \gamma_D)
+ i \frac{(\gamma_S + \gamma_D)^2}{2}
\big]}
{ E_{S,0} - E_{D,0} + i \frac{\gamma_S - \gamma_D}{2}}
+ c.c.
\Bigg],
\label{eq:inhom-2f}
\end{align}
where $A^{(B)}$ ($B=S,D$) denotes the value of $A^{(B)}_{jk}$ corresponding to
the appropriate fixed $\Delta m_{jk}^2\equiv \Delta m^2$ (which is defined
here to be positive, i.e.~$\Delta m^2=|\Delta m_{31}^2|$ or $\Delta m_{21}^2$),
$s = \sin\theta$ and $c = \cos\theta$, with $\theta$ being the relevant
two-flavour mixing angle.
As in the full three-flavour framework, in the absence of oscillations,
i.e.~for $\Delta m^2 = 0$ or $\theta = 0$, Eqs.~\eqref{eq:QFT-Gamma3a} and
(\ref{eq:inhom-2f}) reproduce the no-oscillation rate of Eq.~(\ref{eq:rate2}).
With oscillations included, the first line of Eq.~\eqref{eq:inhom-2f}
factorizes into the Lorentzian times the $\bar{\nu}_e$ survival probability,
which in general contains decoherence factors. Such a factorization does
\emph{not} occur in the second line because the first term in the numerator
in the square brackets is not proportional to $\gamma_S + \gamma_D$.
This term, containing a product of three small differences, is typically
small compared to the other terms (at least when the M\"{o}ssbauer\ resonance
condition $|E_{S,0} - E_{D,0}| \ll (\gamma_S + \gamma_D)/2$ is satisfied).
Still, it is interesting to observe that a naive factorization of $\Gamma$
into a no-oscillation transition rate and the $\bar{\nu}_e$ survival
probability is not possible when this term is retained.
In all physically relevant situations, however, the whole second line of
Eq.~\eqref{eq:inhom-2f} is negligible because so is $A^{(S)} - A^{(D)}$.
Retaining only the contribution of the first line in Eq.~\eqref{eq:inhom-2f},
from Eq.~\eqref{eq:QFT-Gamma3a} one finds
\begin{align}
\Gamma \simeq & \frac{\Gamma_0 \,B_0}{4\pi L^2}\;Y_S Y_D \,
\exp\bigg[ -\frac{E_{S,0}^2-m_0^2}{\sigma_p^2} \bigg]\,
\frac{(\gamma_S + \gamma_D) / 2\pi}
{(E_{S,0} - E_{D,0})^2 + \frac{(\gamma_S + \gamma_D)^2}{4}}
\nonumber \\
& \hspace*{2cm}
\cdot \bigg\{1 - 2 s^2 c^2\bigg[1-\frac{1}{2}
( e^{-\alpha_S L} + e^{-\alpha_D L})
\cos\bigg(\frac{\Delta m^2 L}{4\bar{E}}\bigg)\bigg]\bigg\}\,,
\label{eq:P-inhom-rel-2f-approx}
\end{align}
where $\alpha_{S,D} = (\Delta m^2/4\bar{E}^2) \gamma_{S,D}$,
so that $\exp[-\alpha_{S,D} L]=\exp[-L/L^{\rm coh}_{S,D}]$ are the decoherence
factors (cf. Eqs.~\eqref{eq:inhom-A} and \eqref{eq:Lcoh2}). For realistic
experiments, one expects the oscillation phase $(\Delta m^2/4\bar{E})L$ to be
of order unity, so that $\alpha_{S,D} L\sim \gamma_{S,D}/\bar{E}\sim 10^{-15}$,
and decoherence effects are completely negligible. The second line in
Eq.~\eqref{eq:P-inhom-rel-2f-approx} then yields the standard 2-flavour
expression for the $\bar{\nu}_e$ survival probability.
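In absolute terms, the coherence lengths of Eq.~\eqref{eq:Lcoh2} are
macroscopically enormous. As a rough estimate, for $\gamma_{S,D} \sim
10^{-11}$~eV, $\bar{E} = 18.6$~keV and $|\Delta m^2| \simeq 2.5\times
10^{-3}$~eV$^2$ one finds
\begin{align}
L^{\rm coh}_{S,D} = \frac{4\bar{E}^2}{\gamma_{S,D}\, |\Delta m^2|}
\sim 10^{16}~{\rm m}\,,
\end{align}
i.e.~of the order of a light year, to be compared with realistic baselines of
tens or hundreds of meters.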
As we have already pointed out, the contribution of the second line in
\eqref{eq:inhom-2f} to $\Gamma$ is of order $(e^{-\alpha_S L}-e^{-\alpha_D L})$
and therefore completely negligible. It is interesting to ask if there are any
conceivable situations in which the decoherence exponentials in
Eq.~\eqref{eq:P-inhom-rel-2f-approx} should be kept, while the contribution of
the second line in \eqref{eq:inhom-2f} can still be neglected. Direct
inspection of Eq.~\eqref{eq:inhom-2f} shows that this is the case when
$|E_{S,0}-E_{D,0}| \lesssim \gamma_S+\gamma_D$ with
$\alpha_{S,D} L\gtrsim 1$ and $|\alpha_S- \alpha_D|L\ll 1$.
\subsection{Homogeneous line broadening}
\label{sec:QFT-hom}
Homogeneous line broadening is caused by various electromagnetic relaxation
effects, including interactions with fluctuating magnetic fields in the lattice
\cite{Potzel:2006ad,Coussement:1992}. Unlike inhomogeneous broadening, it
affects equally all the emitters (or absorbers) and therefore cannot be taken
into account by averaging the unperturbed transition probability over the
appropriate energy distributions of the participating particles, as we have
done in the previous subsection. Instead, one has to modify the
expression for the amplitude itself. Since the homogeneous broadening effects
are stochastic, an averaging procedure appropriate to the broadening
mechanism must then be employed. For the conventional M\"{o}ssbauer\ effect with
long-lived nuclei, a number of models of homogeneous broadening were studied in
\cite{Coussement:1992,Coussement:1992b,Odeurs:1995,Balko:1997,Odeurs:1997}.
In all the cases considered, a Lorentzian shape of the emission and
absorption lines was obtained. The same models can be used
in the case of M\"{o}ssbauer\ neutrinos; one therefore expects that in most
of the cases of homogeneous broadening the overall neutrino production --
propagation -- detection rate will also have the Lorentzian resonance
form, i.e.~will essentially coincide in form with Eq.~\eqref{eq:QFT-Gamma3},
or with its simplified version in which the difference between $A_{jk}^{(S)}$
and $A_{jk}^{(D)}$ is neglected. A notable exception, which we consider next,
is the homogeneous broadening due to the natural linewidth. As we shall see,
this case is special because the time interval during which the source is
produced is small compared with the tritium lifetime.
\subsection{Neutrino M\"{o}ssbauer\ effect dominated by the natural linewidth}
\label{sec:QFT-nat}
Although in a M\"{o}ssbauer\ neutrino experiment with a tritium source and a $^3{\rm He}$
absorber the linewidth is by far dominated by inhomogeneous broadening and by
homogeneous broadening mechanisms other than the natural linewidth, we will now
also consider the case in which the emission and absorption linewidths are
determined by the decay widths of the unstable nuclei. Even though it is not clear if
such a situation can be realized experimentally, it is still very interesting
for theoretical reasons.
To take the natural linewidth of tritium into account, we modify our
expression for the amplitude, Eq.~\eqref{eq:QFT-A1}, by including exponential
decay factors in the $^3$H wave functions. For the tritium in the source,
this factor has the form $\exp(-\gamma t/2)$, describing a decay starting at
$t = 0$, the time at which the experiment starts.%
\footnote{This is also supposed to be the time at which the number of $^3$H
atoms in the source is known. It is assumed that the source is created in
a time interval that is short compared to the tritium mean lifetime
$\gamma^{-1}=17.81$ years.}
For the tritium which is produced in the detector, the decay factor
is $\exp(-\gamma (T - t_2)/2)$, where $t_2$ is the production time and $T$
is the time at which the number of produced $^3$H atoms is counted. Note that
$\gamma$ here is the total decay width of tritium, not the partial width
for bound state beta decay. Since we are taking into account the finite
lifetime of tritium, we also have to restrict the domain of all time
integrations in $\mathcal{A}$ to the interval $[0, T]$ instead of
$(-\infty, \infty)$. We thus have to compute
\begin{align}
i \mathcal{A} &=
\int\! d^3x_1 \int_0^{T}\! dt_1 \int\! d^3x_2 \int_0^{T}\! dt_2 \,
\bigg( \frac{m_\H \omega_{\H, S}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_\H \omega_{\H,S}
|\vec{x}_1 - \vec{x}_S|^2 \bigg] \,
e^{-i E_{\H,S,0} t_1 - \frac{1}{2} \gamma t_1}
\nonumber\\
&\hspace{1cm} \cdot
\bigg( \frac{m_{\rm He} \omega_{{\rm He},S}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_{\rm He} \omega_{{\rm He},S}
|\vec{x}_1 - \vec{x}_S|^2 \bigg] \, e^{+i E_{{\rm He},S,0} t_1}
\nonumber\\
&\hspace{1cm} \cdot
\bigg( \frac{m_{\rm He} \omega_{{\rm He}, D}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_{\rm He} \omega_{{\rm He},D}
|\vec{x}_2 - \vec{x}_D|^2 \bigg] \, e^{-i E_{{\rm He},D,0} t_2}
\nonumber\\
&\hspace{1cm} \cdot
\bigg( \frac{m_\H \omega_{\H,D}}{\pi} \bigg)^{\frac{3}{4}}
\exp\bigg[ -\frac{1}{2} m_\H \omega_{\H,D}
|\vec{x}_2 - \vec{x}_D|^2 \bigg] \,
e^{+i E_{\H,D,0} t_2 - \frac{1}{2} \gamma (T - t_2)}
\nonumber\\
&\hspace{1cm} \cdot \sum_j
\mathcal{M}_S^\mu \mathcal{M}_D^{\nu *} |U_{ej}|^2 \, \int \! \frac{d^4p}{(2\pi)^4}
e^{-i p_0 (t_2 - t_1) + i \vec{p} (\vec{x}_2 - \vec{x}_1)} \nonumber\\
&\hspace{1cm} \cdot
\bar{u}_{e,S} \gamma_\mu (1 - \gamma^5) \,
\frac{i (\slashed{p} + m_j)}{p_0^2 - \vec{p}^2 - m_j^2 + i\epsilon} \,
(1 + \gamma^5) \gamma_\nu u_{e,D}
\label{eq:QFT-A5}
\end{align}
with the same notation as in Sec.~\ref{sec:QFT-minimal}. This form for
$\mathcal{A}$ can also be derived in a more rigorous way using the
Weisskopf-Wigner approximation~\cite{Weisskopf:1930au,Weisskopf:1930ps,
Grimus:1998uh,Cohen:QM2}, as shown in
appendix~\ref{sec:appendix-WeisskopfWigner}. After
a calculation similar to the one described in Sec.~\ref{sec:QFT-minimal}, we
find for the total probability for finding a tritium atom at the lattice
site $\vec{x}_D$ in the detector after a time $T$:
\begin{align}
\mathcal{P}
&= \frac{\Gamma_0 \,B_0}{4\pi L^2}\;Y_S Y_D \,\frac{2}{\pi}\,
\sum_{j,k} \theta(T_{jk}) \, |U_{ej}|^2 |U_{ek}|^2 \,
\nonumber\\
&\hspace{1cm} \cdot
\exp\bigg[\! -\frac{(p^{\rm min}_{jk})^2}{\sigma_p^2} \bigg]
\exp\bigg[\! -\frac{|\Delta m_{jk}^2|}{2 \sigma_p^2} \bigg]
e^{i \big( \sqrt{\bar{E}^2 - m_j^2} - \sqrt{\bar{E}^2 - m_k^2} \big) L}
\nonumber\\
&\hspace{1cm} \cdot
e^{-\gamma T_{jk}} e^{- L / L^{\rm coh}_{jk}} \,
\frac{\sin\big[ \frac{1}{2} (E_{S,0} - E_{D,0}) (T - \frac{L}{v_j})
\big]\sin\big[ \frac{1}{2} (E_{S,0} - E_{D,0}) (T - \frac{L}{v_k})
\big]} {(E_{S,0} - E_{D,0})^2}\,.
\label{eq:QFT-P2}
\end{align}
In the derivation, which is described in more detail in
appendix~\ref{sec:appendix-nat}, we have neglected the energy dependence
of the generalized Lamb-M\"{o}ssbauer\ factor and of the spinorial terms,
approximating them by their values at $\bar{E} = \frac{1}{2} (E_{S,0}
+ E_{D,0})$. Furthermore, we have expanded the oscillation phase around this
average energy. These approximations are justified by the observation that
these quantities are almost constant over the resonance region.
In Eq.~\eqref{eq:QFT-P2} the quantity $v_j =(\bar{E}^2 - m_j^2)^{1/2}/\bar{E}$
denotes the group velocity of the $j$th neutrino mass eigenstate, and the
generalized Lamb-M\"{o}ssbauer\ factor is parameterized in the by now familiar form
with $(p^{\rm min}_{jk})^2 = \bar{E}^2 - \max(m_j^2, m_k^2)$. Moreover, we
have defined the quantity
\begin{align}
T_{jk} = \min\bigg( T - \frac{L}{v_j}, \, T - \frac{L}{v_k} \bigg)\,,
\end{align}
which corresponds to the total running time of the experiment, minus the time
of flight of the heavier of the two mass eigenstates $\nu_j$ and $\nu_k$. The
appearance of the step-function factor $\theta(T_{jk})$ in
Eq.~\eqref{eq:QFT-P2} is related to the finite neutrino time of flight
between the source and the detector and to the fact that the interference
between the $j$th and $k$th mass components leading to oscillations is
only possible if both have already arrived at the detector. As in
Sec.~\ref{sec:QFT-inhom}, decoherence exponentials appear, containing the
characteristic coherence lengths
\begin{align}
\frac{1}{L^{\rm coh}_{jk}} &= \gamma \, \bigg| \frac{1}{v_j} - \frac{1}{v_k}
\bigg|\,.
\end{align}
In the approximation of ultra-relativistic (or nearly mass-degenerate)
neutrinos, this becomes
\begin{align}
L^{\rm coh}_{jk} = \frac{4 \bar{E}^2}{\gamma |\Delta m_{jk}^2|}\,,
\end{align}
and is thus analogous to Eqs.~\eqref{eq:Lcoh} and \eqref{eq:Lcoh2}.
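Even in this idealized case decoherence would be irrelevant at any conceivable
baseline: a rough estimate with the natural linewidth $\gamma = 1.17\times
10^{-24}$~eV and $|\Delta m_{jk}^2| \simeq 2.5\times 10^{-3}$~eV$^2$ gives
\begin{align}
L^{\rm coh}_{jk} = \frac{4\bar{E}^2}{\gamma\, |\Delta m_{jk}^2|}
\sim 10^{29}~{\rm m}\,.
\end{align}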
While the first two lines of Eq.~\eqref{eq:QFT-P2} contain the standard
oscillation terms, the generalized Lamb-M\"{o}ssbauer\ factor and some numerical
factors, the expression in the third line is unique to M\"{o}ssbauer\ neutrinos in the
regime of natural linewidth dominance. To interpret this part of the
probability, it is helpful to consider the approximation of massless
neutrinos, which implies $v_j = 1$ for all $j$ and thus $L^{\rm coh}_{jk} =
\infty$. If we neglect the time of flight $L/v_j$ compared to the total
running time of the experiment $T$, we find that the probability is
proportional to
\begin{align}
e^{-\gamma T} \, \frac{\sin^2 [(E_{S,0} - E_{D,0}) \frac{T}{2}]}{(E_{S,0}
- E_{D,0})^2}.
\label{eq:T-dependence1}
\end{align}
The factor $\exp(-\gamma T)$ accounts for the depletion of $^3\H$ in
the source and for the decay of the produced $^3\H$ in the detector.
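Indeed, in terms of the detuning $x \equiv E_{S,0} - E_{D,0}$, the remaining
factor in Eq.~\eqref{eq:T-dependence1} is the standard $\delta$-like kernel of
time-dependent perturbation theory,
\begin{align}
\frac{\sin^2(x T/2)}{x^2} \;\xrightarrow{\;T \to \infty\;}\;
\frac{\pi T}{2}\,\delta(x)\,,
\end{align}
which makes the limits discussed below transparent.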
It is easy to see that for $\gamma = 0$ and $T \rightarrow \infty$,
Eq.~\eqref{eq:QFT-Gamma1a} is recovered, except for the omitted averaging
over the energies of the initial and final state nuclei. In particular,
we see that in this limit, due to the emerging $\delta$-function, the M\"{o}ssbauer\
effect can only occur if the resonance energies $E_{S,0}$ and $E_{D,0}$ match
exactly. For finite $T$, in contrast, the matching need not be exact because
of the time-energy uncertainty relation, which permits a certain detuning, as
long as $|E_{S,0} - E_{D,0}| \lesssim 1/T$. In the case of strong inequality
$|E_{S,0} - E_{D,0}| \ll 1/T$, Eq.~\eqref{eq:T-dependence1} can be
approximated by
\begin{align}
T^2\, e^{-\gamma T}/4\,.
\label{eq:T-dependence2}
\end{align}
It is crucial to note that the allowed detuning of $E_{S,0}$ and $E_{D,0}$
does \emph{not} depend on $\gamma$, contrary to what one might expect.
Instead, the M\"{o}ssbauer\ resonance condition requires this detuning to be small
compared to the reciprocal of the overall observation time $T$. Therefore, the
natural linewidth is not a fundamental limitation to the energy resolution of
a M\"{o}ssbauer\ neutrino experiment. There is a well-known analogue to this in quantum
optics~\cite{Meystre:1980}, called subnatural spectroscopy. Consider an
experiment, in which an atom is instantaneously excited from its ground
state into an unstable state $\ket{b}$ by a strong laser pulse at $t=0$.
Moreover, the atom is continuously exposed to electromagnetic radiation
with a photon energy $E$, which can eventually excite it further into another
unstable state $\ket{a}$. If, after a time $\tau$, the number of atoms in
state $\ket{a}$ is measured, it turns out that the result is proportional
to $1/[(E - \Delta E)^2 + (\gamma_a - \gamma_b)^2/4]$ rather than to the naively
expected $1/[(E - \Delta E)^2 + (\gamma_a + \gamma_b)^2/4]$, where $\Delta E$
is the energy difference between the two states, and $\gamma_a$, $\gamma_b$
are their respective widths. In our case, the state $\ket{b}$ corresponds to a
$^3\H$ atom in the source and a $^3{\rm He}$ atom in the detector, while $\ket{a}$
corresponds to a $^3{\rm He}$ atom in the source and a $^3\H$ atom in the detector.
The initial excitation of state $\ket{b}$ corresponds to producing the tritium
source and starting the M\"{o}ssbauer\ neutrino experiment, and the transition from
$\ket{b}$ to $\ket{a}$ corresponds to the production, propagation and
absorption of a neutrino. Since the difference of decay widths
$\gamma_a-\gamma_b$ vanishes for M\"{o}ssbauer\ neutrinos,%
\footnote{We assume that the tritium nuclei in the source and detector
have the same mean lifetime.}
we see that $\gamma$ does not have any impact on the
achievable energy resolution, in accordance with Eq.~\eqref{eq:T-dependence1}.
Note that this is only true because the source is produced at one specific
point in time, namely $t=0$ (more generally, during a time interval that
is short compared to the tritium lifetime). In a hypothetical experiment,
in which tritium is continuously replenished in the source, an additional
integration of $\mathcal{P}$ over the production time would be required, and
this would yield proportionality to $1/[(E_{S,0} - E_{D,0})^2 +
\gamma^2]$, in full analogy with the corresponding result in quantum
optics~\cite{Meystre:1980}.
The $T$-dependence of $\mathcal{P}$, as given by Eq.~\eqref{eq:T-dependence2}
can be understood already from a classical argument. If we denote the number
of $^3\H$ atoms in the source by $N_S$ and the corresponding number in
the detector by $N_D$, the latter obeys the following differential equation:
\begin{align}
\dot{N}_D = -\dot{N}_S N_{0} P_{ee} \frac{\sigma(T)}{4\pi L^2} - \gamma N_D\,.
\label{eq:ND-ODE}
\end{align}
Here $P_{ee}$ is the $\bar{\nu}_e$ survival probability, $N_0$ is the number
of $^3$He atoms in the detector, which we consider constant (this is justified
if the number of $^3$H atoms produced in the detector is small compared to the
initial number of $^3$He), and $\sigma(T)$ is the absorption
cross section. It depends on $T$ because, due to the Heisenberg principle,
the accuracy to which the resonance condition has to be fulfilled is
given by $T^{-1}$. If we describe this limitation by assuming the emission and
absorption lines to be Lorentzians of width $1/T$, we find that for
$|E_{S,0}-E_{D,0}|\ll T^{-1}$ the overlap integral is proportional
to $T$, so that we can write $\sigma = s_0\, T$ with $s_0$ a constant. Using
furthermore the fact that $N_S = N_{S,0} \exp(-\gamma T)$, the
solution of Eq.~\eqref{eq:ND-ODE} is found as
\begin{align}
N_D = \frac{N_{S,0} N_0 \gamma P_{ee} s_0}{8\pi L^2}
\, T^2 e^{-\gamma T}.
\end{align}
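This solution can be obtained in one line (a brief consistency check, using only the assumptions already stated above, namely $N_S = N_{S,0}\,e^{-\gamma T}$ and $\sigma = s_0\, T$): multiplying Eq.~\eqref{eq:ND-ODE} by the integrating factor $e^{\gamma T}$ gives
\begin{align}
\frac{d}{dT}\left( N_D\, e^{\gamma T} \right)
= \gamma N_{S,0} N_0 P_{ee} \frac{s_0\, T}{4\pi L^2}\,,
\end{align}
and integrating from $0$ to $T$ with the initial condition $N_D(0)=0$ yields the above expression, since $\int_0^T T'\, dT' = T^2/2$.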
This expression has precisely the $T$-dependence given by
Eq.~\eqref{eq:T-dependence2}.
\section{Discussion}
\label{sec:discussion}
Let us now summarize our results. We have studied the properties of
recoillessly emitted and absorbed neutrinos (M\"{o}ssbauer\ neutrinos) in a plane wave
treatment (Sec.~\ref{sec:qualitative}), in a quantum mechanical wave packet
approach (Sec.~\ref{sec:QM}) and in a full quantum field theoretical
calculation (Sec.~\ref{sec:QFT}). The plane wave treatment corresponds to
the standard derivation based on the same energy approximation. We have pointed
out that for M\"{o}ssbauer\ neutrinos this approximation is justifiable, even though for
conventional neutrino sources it is generally considered to be inconsistent.
The wave packet approach is an extension of the plane wave treatment, which
takes into account the small but non-zero energy and momentum spread of the
neutrino. Finally, the QFT calculation is superior to the other two, in
particular, because no prior assumptions about the energies and momenta
of the intermediate-state neutrinos have to be made. These properties are
automatically determined from the wave functions of the external particles
in the source and in the detector. For these wave functions we used well
established approximations that are known to be good in the theory of
the standard M\"{o}ssbauer\ effect.
In all three approaches that have been discussed, we have consistently arrived
at the prediction that M\"{o}ssbauer\ neutrinos will oscillate, in spite of their very
small energy uncertainty. The plane wave result, Eq.~\eqref{eq:relativistic-approx},
is actually the standard textbook expression for the $\bar{\nu}_e$ survival
probability, and Eqs.~\eqref{eq:QM-P1},
\eqref{eq:QFT-Gamma3} and \eqref{eq:QFT-P2} are extensions of this expression,
containing, in particular, decoherence and localization factors. We have found
that these factors cannot suppress oscillations under realistic experimental
conditions, but are very interesting from the theoretical point of view.
Let us now compare the results of different approaches. First, we observe that
the decoherence exponents in our QFT calculations are linear in
$L/L^{\rm coh}$, while in the quantum mechanical result, Eq.~\eqref{eq:QM-P1},
the dependence is quadratic. This behaviour can be traced back to the fact
that Gaussian neutrino wave packets have been assumed in the quantum
mechanical computation, while in our QFT approach we have employed the
Lorentzian line shapes, which are more appropriate for describing M\"{o}ssbauer\
neutrinos. The linear dependence of the decoherence exponents on
$L/L^{\rm coh}$ in the case of the Lorentzian neutrino energy
distribution has been previously pointed out in~\cite{Grimus:1998uh}.
Even more striking than the differing forms of the decoherence exponentials
is the fact that a localization factor of the form $\exp[-|\Delta m_{jk}^2|
/2\sigma_p^2]$ is present in Eqs.~\eqref{eq:QFT-Gamma3} and \eqref{eq:QFT-P2},
while the localization exponentials disappear from Eq.~\eqref{eq:QM-P1} in the
limit $\xi \to 0$ which is relevant for M\"{o}ssbauer\ neutrinos. This shows that the
naive quantum mechanical wave packet approach does not capture all features of
M\"{o}ssbauer\ neutrinos. In particular, it neglects the differences between the
emission (and absorption) probabilities of different mass eigenstates, which
effectively may lead to a suppression of neutrino mixing. In realistic
experiments, however, this effect should be negligible.
Another interesting feature of the exponential factors implementing the
localization condition in our QFT calculations is that the corresponding
exponents are linear in $|\Delta m_{jk}^2|$, whereas the dependence is
quadratic (for $\xi\ne 0$) in the quantum-mechanical expression
\eqref{eq:QM-P1}. This can be attributed to the fact that we consider the
parent and daughter nuclei in the source and detector to be in bound states
with zero mean momentum (but non-zero {\it rms} momentum). This is also
the reason why the $\sigma_p$-dependence of the localization exponents in
Eqs.~\eqref{eq:QFT-Gamma3} and \eqref{eq:QFT-P2} is different from that in
the quantum mechanical approach (namely, they depend on $|\Delta m_{jk}^2|
/2\sigma_p^2$ rather than $|\Delta m_{jk}^2|/2p\sigma_p$): for the considered
bound states, the {\it rms} momentum is $\bar{p}\sim \sigma_p$, so that
$\bar{p}\sigma_p\sim \sigma_p^2$.
One more point to notice is that while the same quantity, the momentum
uncertainty \mbox{$\sigma_p$}, enters into the decoherence and localization
factors in the quantum mechanical formula~\eqref{eq:QM-P1}, this is not the
case in the QFT approach, where the localization factors depend on $\sigma_p$,
whereas the decoherence exponentials are determined by the (much smaller)
energy uncertainty. In the case of natural line broadening this energy
uncertainty is given by the $^3$H decay width $\gamma$, while in all the other
cases it is given by the widths of the neutrino emission and
absorption lines, which are determined by the homogeneous and inhomogeneous
line broadening effects taking place in the source and detector.
The QFT results of Eqs.~\eqref{eq:QFT-Gamma3} and \eqref{eq:QFT-P2} describe
not only the oscillation physics, but also the production and detection
processes. These results can thus also be used for an approximate prediction
of the total event rate expected in a M\"{o}ssbauer\ neutrino experiment.
Both expressions contain the Lamb-M\"{o}ssbauer\ factor (or recoil-free
fraction), which describes the relative probability of recoilless decay
and absorption of neutrinos. Moreover, they contain factors that suppress
the overall process rate $\Gamma$ unless the emission and absorption lines
overlap sufficiently well. In the case of inhomogeneous line broadening
(Sec.~\ref{sec:QFT-inhom}) as well as for homogeneous broadening different
from the natural linewidth effect, this is a Lorentzian factor, the same as
in the no-oscillation rate \eqref{eq:rate2}. It suppresses the transition rate
if the peak energies of the emission and absorption lines differ by more than
the combined linewidth $\gamma_S+\gamma_D$. We have, however, found that the
factorization of the total rate into the no-oscillation rate including the
overlap factor and the oscillation probability is only approximate. For the
hypothetical case of an experiment in which the neutrino energy uncertainty is
dominated by the natural linewidth $\gamma$ (Sec.~\ref{sec:QFT-nat}), we have
found that the overlap condition does not depend on $\gamma$, but is rather
determined by the reciprocal of the overall duration of the experiment $T$.
Although this result may seem counterintuitive at first sight, it has a
well-known analogy in quantum optics~\cite{Meystre:1980} and is related to
the fact that the initial unstable particles in the source are produced in a
time interval much shorter than their lifetime.
Notice that the overlap factors contained in our QFT-based results for the
neutrino M\"{o}ssbauer\ effect governed by the natural linewidth and by other line
broadening mechanisms, Eqs.~\eqref{eq:QFT-P2} and \eqref{eq:QFT-Gamma3a},
are two well-known limiting representations of the $\delta$-function, which
yield the energy-conserving $\delta$-function $\delta(E_{S,0}-E_{D,0})$ in the
limits $T\to \infty$ or $\gamma_S+\gamma_D\to 0$, respectively.%
\footnote{Eq.~\eqref{eq:QFT-P2} yields $T\delta(E_{S,0}-E_{D,0})$ because it
describes a probability rather than a rate.} One can see that in these limits
both expressions reproduce, if one sets the $\bar{\nu}_e$ survival probability
$P_{ee}$ to unity, the no-oscillation rate~\eqref{eq:rate1} obtained in the
infinitely sharp neutrino line limit by treating the M\"{o}ssbauer\ neutrino production
and detection as separate processes.
Our QFT results thus generalize the results of the standard calculations
and allow a more accurate and consistent treatment of both the production
-- detection rate and the oscillation probability of M\"{o}ssbauer\ neutrinos.
To conclude, we have performed a quantum field theoretic calculation of
the combined rate of the emission, propagation and detection of M\"{o}ssbauer\ neutrinos
for the cases of inhomogeneous and homogeneous neutrino line broadening.
In both cases we found that the decoherence and localization damping factors
present in the combined rate will not play any role in realistic experimental
settings and therefore will not prevent M\"{o}ssbauer\ neutrinos from oscillating.
\begin{acknowledgments}
It is a pleasure to thank F.~v.~Feilitzsch, H.~Kienert, J.~Litterst, W.~Potzel,
G.~Raffelt and V.~Rubakov for very fruitful discussions. This work was in part
supported by the Transregio Sonderforschungsbereich TR27 ``Neutrinos and Beyond''
of the Deutsche Forschungsgemeinschaft. JK would like to acknowledge support from
the Studienstiftung des Deutschen Volkes.
\end{acknowledgments}
\section{Introduction}
\IEEEPARstart{N}{owadays} deep convolutional neural networks (CNNs) have achieved top results in many difficult image classification
tasks. However, the number of parameters in CNN models is high, which limits the use of deep models on devices with limited resources such as smartphones, embedded systems, etc.
Meanwhile, it is known that there is a lot of redundancy between the parameters and the feature maps in deep models, i.e., that CNN models are over-parametrized.
The reason that over-parametrized CNN models are used instead of small-sized CNN models is that the over-parametrization makes the training of the network easier, as has been shown in the experiments in \cite{Livni}. This phenomenon is believed to arise because the gradient flow in networks with many parameters achieves a better trained network than the gradient flow in small networks.
Therefore, a well-known traditional principle of designing good neural networks is to make a network with a large number of parameters, and then use regularization techniques to avoid over-fitting, rather than making a network with a small number of parameters from the beginning.\\
\indent However, it has been shown in \cite{Zhang} that even with the use of regularization methods, there still exists excessive capacity in the trained networks, which means that
the redundancy between the parameters is still large.
This again implies that the parameters or the feature maps can be expressed in a structured subspace with a smaller number of coefficients.
Finding the underlying structure that exists between the parameters in CNN models, and reducing the redundancy of parameters and feature maps, are the topics of the deep compression field.
As has been well summarized in \cite{CompressDeep1}, research on the compression of deep models can be categorized into works which try to eliminate unnecessary weight parameters \cite{CompressDeep2}, works which try to compress the parameters by projecting them onto a low rank subspace \cite{CompressDeep3}\cite{CompressDeep4}\cite{CompressDeep5}, and works which try to group similar parameters and represent them by representative features \cite{CompressDeep6}\cite{CompressDeep7}\cite{CompressDeep8}\cite{CompressDeep9}\cite{CompressDeep10}.
These works follow the common framework shown in Fig. \ref{frameworks}(a), i.e.,
they first train the original uncompressed CNN model by back-propagation to obtain the uncompressed parameters, and then try to find a compressed expression for these parameters to construct a new compressed CNN model.\\
\indent In comparison, research which tries to restrict the number of parameters in the first place by proposing small networks is also actively in progress (Fig. \ref{frameworks}(b)). However, as mentioned above, the reduction in the number of parameters changes the gradient flow, so the networks have to be designed carefully to achieve a trained network with good performance.
For example, MobileNets \cite{Mobilenet} and Xception networks \cite{Xception} use depthwise separable convolution filters, while the Squeezenet \cite{Squeezenet} uses a bottleneck approach to reduce the number of parameters.
Other models use 1-D filters to reduce the size of networks such as the highly factorized Flattened network \cite{Flattened}, or the models in \cite{TrainingLow} where 1-D filters are used together with other filters of different sizes.
Recently, Google's Inception model has also adopted 1-D filters in version 4.
One difficulty in using 1-D filters is that they are not easy to train, and therefore, they are used only partially, as in Google's Inception model or in the models in \cite{TrainingLow}, except for the Flattened network, which consists of consecutive 1-D filters only.
However, even the Flattened network uses only three layers of 1-D filters in its experiments, due to the difficulty of training 1-D filters with many layers.\\
\indent In this paper, we propose a rank-1 CNN, where the rank-1 3-D filters are constructed by the outer products of 1-D vectors.
At the outer product based composition step at each epoch of training, the number of parameters in the 3-D filters becomes the same as in the filters of standard CNNs, allowing a good gradient flow to flow throughout the network. This gradient flow also updates the parameters in the 1-D vectors, from which the 3-D filters are composed. At the next composition step, the weights in the 3-D filters are updated again, not by the gradient flow but by the outer product operation, to be projected onto the rank-1 subspace. By iterating this two-step update, all the 3-D filters in the network are trained to minimize the loss function while maintaining their rank-1 property.
This is different from approaches which try to approximate the trained filters by a low rank approximation
after the training has finished, e.g., the low rank approximation in \cite{Jaderberg}. The composition operation is included in the training phase in our network,
which directs the gradient flow in a different direction from that of standard CNNs, steering the solution to live on a rank-1 subspace.
In the testing phase, we do not need the outer product operation anymore, and can directly filter the input channels with the trained 1-D vectors, treating them now as 1-D filters. That is, we take consecutive 1-D convolutions with the trained 1-D vectors, since the result is the same as being filtered with the 3-D filter constituted of the trained 1-D vectors. Therefore, the inference speed is exactly the same as that of the Flattened network. However, due to the better gradient flow, better parameters for the 1-D filters can be found with the proposed method, and more importantly, the network can be trained even in cases where the Flattened network cannot be trained at all.\\
\indent We will also show that the convolution with rank-1 filters results in rank-deficient outputs, where the rank of the output is upper-bounded by a smaller bound than in normal CNNs.
Therefore, the output feature vectors are constrained to live on a rank-deficient subspace in a high dimensional space. This coincides with the well-known belief that the feature vectors corresponding to images live on a low-dimensional manifold in a high dimensional space, and the fact that we get similar accuracy results with the rank-1 net can be seen as further evidence for this belief.\\
\indent We also explain, in analogy to the bilateral-projection based 2-D principal component analysis (B2DPCA), what the 1-D vectors are trying to learn, and why the redundancy among the parameters is reduced in the rank-1 network.
The reduction of the redundancy between the parameters is expressed by the reduced number of effective parameters, i.e., the number of parameters in the 1-D vectors.
Therefore, the rank-1 net can be thought of as a compressed version of the standard CNN, and the reduced number of parameters as a smaller upper bound for the effective capacity of the standard CNN.
Compared with regularization techniques such as stochastic gradient descent and drop-out, which do not reduce the excessive capacities of deep models as much as expected, the rank-1 projection reduces the capacity proportionally to the ratio of decrease in the number of parameters, and therefore may help to define a better upper bound for the effective capacity of deep networks.
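As a rough illustration of this reduction, consider a single convolution layer with $N$ input channels and $q$ filters of spatial size $d \times d$; the layer sizes in the sketch below are arbitrary toy values, not taken from our experiments:
\begin{verbatim}
# effective parameter count of one convolution layer
N, d, q = 256, 3, 256         # input channels, kernel size, filters
standard = q * N * d * d      # q full 3-D filters: 589824
rank1 = q * (N + d + d)       # q triples of 1-D vectors: 67072
print(standard / rank1)       # ~8.8x fewer effective parameters
\end{verbatim}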
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{fig_compare_with_deep_compression.png}
\caption{Two kinds of approaches trying to achieve small and efficient deep models (a) approach of compressing pre-trained parameters (b) approach of modeling and training a small-sized model directly.}
\label{frameworks}
\end{figure}
\section{Related Works}
The following works are related to ours. The work on the B2DPCA gave us the insight for the rank-1 net; after we designed the rank-1 net, we found out that similar research, i.e., the work on the Flattened network, had been done in the past. We explain both works below.
\subsection{Bilateral-projection based 2DPCA}
In \cite{B2DPCA}, a bilateral-projection based 2D principal component analysis (B2DPCA) has been proposed, which minimizes the following energy functional:
\begin{equation} \label{bilateral}
[{\mathbf P}_{opt}, {\mathbf Q}_{opt}] = \mathop{{\rm argmin}}\limits_{{\mathbf P}, {\mathbf Q}} \| {\mathbf X} - {\mathbf P}{\mathbf C}{\mathbf Q}^T \|^2_F,
\end{equation}
where ${\mathbf X} \in R^{n \times m}$ is the two dimensional image,
${\mathbf P} \in R^{n \times l}$ and ${\mathbf Q} \in R^{m \times r}$ are the left- and right-
multiplying projection matrices, respectively, and ${\mathbf C} = {\mathbf P}^T{\mathbf X}{\mathbf Q}$ is the extracted feature matrix for the image ${\mathbf X}$.
The optimal projection matrices ${\mathbf P}_{opt}$ and ${\mathbf Q}_{opt}$ are simultaneously constructed, where ${\mathbf P}_{opt}$ projects the column vectors of ${\mathbf X}$ to a subspace,
while ${\mathbf Q}_{opt}$ projects the row vectors of ${\mathbf X}$ to another one.
To see why ${\mathbf P}$ is projecting the column vectors of ${\mathbf X}$ to a subspace,
consider a simple example where ${\mathbf P}$ has $l$ column vectors:
\begin{equation} \label{P}
\begin{array}{c} {\mathbf P}=
\left[
\begin{array}{cccc}
| & | & & |\\
{\mathbf p}_1 & {\mathbf p}_2 & ... & {\mathbf p}_l\\
| & | & & |\\
\end{array}
\right],
\end{array}
\end{equation}
Then, left-multiplying ${\mathbf P}$ to the image ${\mathbf X}$, results in:
\begin{equation} \label{PX}
\begin{array}{c}
{\mathbf P}^T{\mathbf X} =\left[
\begin{array}{c}
\leaders\hrule height3pt depth-2.6pt\hskip2em \relax {\mathbf p}_1^T \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \leaders\hrule height3pt depth-2.6pt\hskip2em \relax\\
\leaders\hrule height3pt depth-2.6pt\hskip2em \relax {\mathbf p}_2^T \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \leaders\hrule height3pt depth-2.6pt\hskip2em \relax\\
\vdots \\
\leaders\hrule height3pt depth-2.6pt\hskip2em \relax {\mathbf p}_l^T \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \leaders\hrule height3pt depth-2.6pt\hskip2em \relax\\
\end{array}
\right]
\left[
\begin{array}{cccc}
| & | & & |\\
{\mathbf x}_{col_1} & {\mathbf x}_{col_2} & ... & {\mathbf x}_{col_m}\\
| & | & & |\\
\end{array}
\right]
\\ \\ =
\left[
\begin{array}{cccc}
{\mathbf p}_1^T{\mathbf x}_{col_1} & {\mathbf p}_1^T{\mathbf x}_{col_2}& ... & {\mathbf p}_1^T{\mathbf x}_{col_m} \\
{\mathbf p}_2^T{\mathbf x}_{col_1} & {\mathbf p}_2^T{\mathbf x}_{col_2}& ... & {\mathbf p}_2^T{\mathbf x}_{col_m} \\
\vdots & \vdots & \vdots & \vdots \\
{\mathbf p}_l^T{\mathbf x}_{col_1} & {\mathbf p}_l^T{\mathbf x}_{col_2}& ... & {\mathbf p}_l^T{\mathbf x}_{col_m} \\
\end{array}
\right],
\end{array}
\end{equation}
where it can be observed that all the components in ${\mathbf P}^T{\mathbf X}$ are the projections of the column vectors of ${\mathbf X}$ onto the column vectors of ${\mathbf P}$.
Meanwhile, the right-multiplication of the matrix ${\mathbf Q}$ to ${\mathbf X}$ results in,
\begin{equation} \label{XQ}
\begin{array}{c}
{\mathbf X}{\mathbf Q} = \left[
\begin{array}{cc}
\leaders\hrule height3pt depth-2.6pt\hskip2em \relax {\mathbf x}_{row_1} \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \\
\leaders\hrule height3pt depth-2.6pt\hskip2em \relax {\mathbf x}_{row_2} \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \leaders\hrule height3pt depth-2.6pt\hskip2em \relax\\
\vdots \\
\leaders\hrule height3pt depth-2.6pt\hskip2em \relax {\mathbf x}_{row_n} \leaders\hrule height3pt depth-2.6pt\hskip2em \relax \leaders\hrule height3pt depth-2.6pt\hskip2em \relax\\
\end{array}
\right]
\left[
\begin{array}{cccc}
| & | & & | \\
{\mathbf q}_1 & {\mathbf q}_2 & ... & {\mathbf q}_r\\
| & | & & |\\
\end{array}
\right]
\\ \\ =
\left[
\begin{array}{cccc}
{\mathbf x}_{row_1}{\mathbf q}_1 & {\mathbf x}_{row_1}{\mathbf q}_2 & ... & {\mathbf x}_{row_1}{\mathbf q}_r \\
{\mathbf x}_{row_2}{\mathbf q}_1 & {\mathbf x}_{row_2}{\mathbf q}_2 & ... & {\mathbf x}_{row_2}{\mathbf q}_r \\
\vdots & \vdots & \vdots & \vdots \\
{\mathbf x}_{row_n}{\mathbf q}_1 & {\mathbf x}_{row_n}{\mathbf q}_2 & ... & {\mathbf x}_{row_n}{\mathbf q}_r \\
\end{array}
\right],
\end{array}
\end{equation}
where the components of ${\mathbf X}{\mathbf Q}$ are the projections of the row vectors of ${\mathbf X}$ onto the column vectors of ${\mathbf Q}$.
From the above observation, we can see that the components of the feature matrix ${\mathbf C} = {\mathbf P}^T {\mathbf X} {\mathbf Q} \in R^{l \times r}$ is a result of simultaneously projecting the row vectors of ${\mathbf X}$ onto the column vectors of ${\mathbf P}$, and the column vectors of ${\mathbf X}$ onto the column vectors of ${\mathbf Q}$. It has been shown in \cite{B2DPCA}, that the advantage of the bilateral projection over the unilateral-projection scheme is that ${\mathbf X}$ can be represented effectively with smaller number of coefficients than in the unilateral case, i.e., a small-sized matrix ${\mathbf C}$ can well represent the image ${\mathbf X}$. This means that the bilateral-projection effectively removes the redundancies among both rows and columns of the image.
Furthermore, since
\begin{equation}
\begin{array}{c}
{\mathbf C} = {\mathbf P}^T {\mathbf X} {\mathbf Q}
=
\left[
\begin{array}{cccc}
{\mathbf p}^T_1 {\mathbf X} {\mathbf q}_1 & {\mathbf p}^T_1 {\mathbf X} {\mathbf q}_2 & ...
& {\mathbf p}^T_1 {\mathbf X} {\mathbf q}_r \\
{\mathbf p}^T_2 {\mathbf X} {\mathbf q}_1 & {\mathbf p}^T_2 {\mathbf X} {\mathbf q}_2 & ... & {\mathbf p}^T_2 {\mathbf X} {\mathbf q}_r \\
\vdots & \vdots & \vdots & \vdots\\
{\mathbf p}^T_l {\mathbf X} {\mathbf q}_1 & {\mathbf p}^T_l {\mathbf X} {\mathbf q}_2 & ... & {\mathbf p}^T_l {\mathbf X} {\mathbf q}_r \\
\end{array}\right] \\
= \left[
\begin{array}{cccc}
<{\mathbf X}, {\mathbf p}_1 {\mathbf q}^T_1> & <{\mathbf X}, {\mathbf p}_1 {\mathbf q}^T_2> & ... & <{\mathbf X}, {\mathbf p}_1 {\mathbf q}^T_r> \\
<{\mathbf X}, {\mathbf p}_2 {\mathbf q}^T_1> & <{\mathbf X}, {\mathbf p}_2 {\mathbf q}^T_2> & ... & <{\mathbf X}, {\mathbf p}_2 {\mathbf q}^T_r> \\
\vdots & \vdots & \vdots & \vdots \\
<{\mathbf X}, {\mathbf p}_l {\mathbf q}^T_1> & <{\mathbf X}, {\mathbf p}_l {\mathbf q}^T_2> & ... & <{\mathbf X}, {\mathbf p}_l {\mathbf q}^T_r> \\
\end{array}\right],
\end{array}
\end{equation}
it can be seen that the components of ${\mathbf C}$ are the 2-D projections of the image ${\mathbf X}$ onto the 2-D planes ${\mathbf p}_1 {\mathbf q}^T_1, {\mathbf p}_1 {\mathbf q}^T_2, ...{\mathbf p}_l {\mathbf q}^T_r$ made up by the outer products of the column vectors of ${\mathbf P}$ and ${\mathbf Q}$. The 2-D planes have a rank of one, since they are the outer products of two 1-D vectors. Therefore, the fact that ${\mathbf X}$ can be well represented by a small-sized ${\mathbf C}$ also implies that ${\mathbf X}$ can be well represented by a few rank-1 2-D planes, i.e., only a few
1-D vectors ${\mathbf p}_1, ...{\mathbf p}_l, {\mathbf q}_1, ....{\mathbf q}_r$, where $l << n$ and $r << m$.\\
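The following small numerical sketch (random toy matrices of arbitrary size, using {\tt numpy}) verifies that the entries of ${\mathbf C}$ are exactly these 2-D projections onto the rank-1 planes:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
n, m, l, r = 8, 10, 3, 4
X = rng.standard_normal((n, m))
P = rng.standard_normal((n, l))    # columns p_1 ... p_l
Q = rng.standard_normal((m, r))    # columns q_1 ... q_r
C = P.T @ X @ Q                    # bilateral projection
C2 = np.array([[np.sum(X * np.outer(P[:, i], Q[:, j]))
                for j in range(r)] for i in range(l)])
assert np.allclose(C, C2)          # C[i,j] = <X, p_i q_j^T>
\end{verbatim}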
\indent In the case of (\ref{bilateral}), the learned 2-D planes try to minimize the loss function
\begin{equation}
L= \| {\mathbf X} - {\mathbf P}{\mathbf C}{\mathbf Q}^T \|^2_F,
\end{equation}
i.e., try to learn to best approximate ${\mathbf X}$.
A natural question arises: can good rank-1 2-D planes also be obtained to minimize other loss functions, e.g., loss functions related to the image classification problem, such as
\begin{equation}
L= \| y_{true} - y({\mathbf X},{\mathbf P},{\mathbf Q}) \|^2_F,
\end{equation}
where $y_{true}$ denotes the true classification label for a certain input image ${\mathbf X}$, and
$y({\mathbf X},{\mathbf P},{\mathbf Q})$ is the output of the network
constituted by the outer products of the column vectors in the learned matrices ${\mathbf P}$ and ${\mathbf Q}$.
In this paper, we will show that it is possible to learn such rank-1 2-D planes, i.e., 2-D filters, if they are used in a deep structure. Furthermore, we extend the rank-1 2-D filter case to the rank-1 3-D filter case, where the rank-1 3-D filter is constituted as the outer product of three column vectors from three different learned matrices.
\subsection{Flattened Convolutional Neural Networks}
In \cite{Flattened}, the `Flattened CNN' has been proposed
for fast feed-forward execution by separating the conventional 3-D convolution filter
into three consecutive 1-D filters.
The 1-D filters sequentially convolve the input over different directions, i.e., the lateral, horizontal, and vertical directions.
Figure \ref{flattened-training} shows the network structure of the Flattened CNN.
The Flattened CNN uses the same network structure in both the training and
the testing phases. This is in contrast to our proposed model, which uses a different network structure in the training phase, as will be seen later.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Fig_training1.jpg}
\caption{The structure in Flattened network. The same network structure of sequential use of 1-D filters is used in the training and testing phases.}
\label{flattened-training}
\end{figure}
However, the consecutive use of 1-D filters in the training phase makes the training difficult. This is due to the fact that the gradient path becomes longer than in a normal CNN, so the gradient flow vanishes faster while the error accumulates more.
Another reason is that the reduction in the number of parameters causes a gradient flow
different from that of the standard CNN, which makes it more difficult to find an
appropriate solution. This coincides with the experiments in \cite{Livni}, which show that the gradient flow in
a network with a small number of parameters cannot find good parameters.
Therefore, a particular weight initialization method has to be used in this setting. Furthermore, in \cite{Flattened}, the networks in the experiments have only three layers of convolution, which may be due to the difficulty of training networks with more layers.\\
\section{Proposed Method}
In comparison with other CNN models using 1-D rank-1 filters, we propose
the use of 3-D rank-1 filters (${\mathbf w}$) in the training stage, where the 3-D rank-1 filters are
constructed by the outer product of three 1-D vectors,
say ${\mathbf p}$, ${\mathbf q}$, and ${\mathbf t}$:
\begin{equation}
{\mathbf w} = {\mathbf p} \otimes {\mathbf q} \otimes {\mathbf t}.
\end{equation}
This is an extension of the 2-D rank-1 planes used in the B2DPCA, where the 2-D planes
are constructed by ${\mathbf w} = {\mathbf p} \otimes {\mathbf q} = {\mathbf p}{\mathbf q}^T$.
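A minimal sketch of this composition is given below; the shapes are our own illustrative choices, matching a $3\times3$ spatial filter over 64 input channels, and the check confirms that every lateral slice of the composed filter has rank one:
\begin{verbatim}
import numpy as np
p, q, t = (np.random.randn(3), np.random.randn(3),
           np.random.randn(64))
w = np.einsum('k,i,j->kij', t, p, q)  # w[k,i,j] = t[k] p[i] q[j]
# each slice w[k] equals t[k] * (p q^T), hence has rank <= 1
assert all(np.linalg.matrix_rank(w[k]) <= 1 for k in range(64))
\end{verbatim}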
Figure \ref{proposed-training1} shows the training and the testing phases of the proposed method. The structure of the
proposed network is different for the training phase and the testing phase.
In comparison with the Flattened network (Fig. \ref{flattened-training}), in the training phase,
the gradient flow first flows through the 3-D rank-1 filters and then through the 1-D vectors.
Therefore, the gradient flow is different from that of the Flattened network, resulting in a different and better solution for the parameters in the 1-D vectors.
The solution can be obtained even in large networks with the proposed method,
for which the gradient flow in the Flattened network cannot obtain a solution at all.
Furthermore, at test time, i.e., at the end of optimization, we can use the 1-D vectors directly as 1-D filters in the same manner as in the Flattened network, resulting in the same inference speed as the Flattened network (Fig. \ref{proposed-training1}).
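This equivalence between filtering with the composed rank-1 3-D filter and sequential 1-D filtering can be checked numerically; the sketch below uses {\tt scipy} correlations on a toy input (all sizes are arbitrary choices for illustration):
\begin{verbatim}
import numpy as np
from scipy.signal import correlate2d
rng = np.random.default_rng(3)
N, k, H, W = 2, 3, 6, 6
x = rng.standard_normal((N, H, W))
p, q = rng.standard_normal(k), rng.standard_normal(k)
t = rng.standard_normal(N)
# (a) 2-D correlation with the composed rank-1 3-D filter
w = np.einsum('c,i,j->cij', t, p, q)
a = sum(correlate2d(x[c], w[c], mode='valid') for c in range(N))
# (b) sequential 1-D filtering: lateral, vertical, horizontal
z = np.tensordot(t, x, axes=(0, 0))           # 1x1 lateral
z = correlate2d(z, p[:, None], mode='valid')  # k x 1 vertical
z = correlate2d(z, q[None, :], mode='valid')  # 1 x k horizontal
assert np.allclose(a, z)
\end{verbatim}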
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Fig_training2.jpg}
\caption{Proposed rank-1 neural network with different network structures in training and testing phases.}
\label{proposed-training1}
\end{figure}
Figure \ref{proposed_training2} explains the training process with the proposed network structure in detail. At every epoch of the training phase, we first take the outer product of the three 1-D vectors ${\mathbf p}$, ${\mathbf q}$, and ${\mathbf t}$. Then, we assign the result of the outer product to the weight values of the 3-D convolution filter, i.e., for every weight value in the 3-D convolution filter ${\mathbf w}$, we assign
\begin{equation} \label{FX}
w_{i,j,k} = p_i q_j t_k, \,\, \forall_{i,j,k \in \Omega({\mathbf w})}
\end{equation}
where $i,j,k$ correspond to the 3-D coordinates in $\Omega({\mathbf w})$, the 3-D domain of the 3-D convolution filter ${\mathbf w}$.
Since a filter constructed as the outer product of vectors always has a rank of one, the 3-D convolution filter ${\mathbf w}$ is a rank-1 filter.\\ \indent During the back-propagation phase, every weight value in ${\mathbf w}$ will be updated by
\begin{equation} \label{normal_update}
w'_{i,j,k} = w_{i,j,k} - \alpha \frac{\partial L}{\partial w_{i,j,k}},
\end{equation}
where $\frac{\partial L}{\partial w_{i,j,k}}$ denotes the gradient of the loss function $L$ with respect to the weight $w_{i,j,k}$, and $\alpha$ is the learning rate.
In normal networks, $w'_{i,j,k}$ in (\ref{normal_update}) is the final updated weight value. However, the updated filter ${\mathbf w}'$ normally is not a rank-1 filter. This is due to the fact that the update in (\ref{normal_update}) is done in a direction which considers only the minimization of the loss function and not the rank of the filter.\\
\indent With the proposed training network structure, we take a further update step, i.e., we update the 1-D vectors ${\mathbf p}$, ${\mathbf q}$, and ${\mathbf t}$:
\begin{equation}
p'_{i} = p_{i} - \alpha \frac{\partial L}{\partial p_{i}}, \,\, \forall_{i \in \Omega({\mathbf p})}
\end{equation}
\begin{equation}
q'_{j} = q_{j} - \alpha \frac{\partial L}{\partial q_{j}}, \,\, \forall_{j \in \Omega({\mathbf q})}
\end{equation}
\begin{equation}
t'_{k} = t_{k} - \alpha \frac{\partial L}{\partial t_{k}}, \,\, \forall_{k \in \Omega({\mathbf t})}
\end{equation}
Here, $\frac{\partial L}{\partial p_{i}}$, $\frac{\partial L}{\partial q_{j}}$, and $\frac{\partial L}{\partial t_{k}}$ can be calculated as
\begin{equation}
\frac{ \partial L }{ \partial p_i } = \sum_j \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }\frac{ \partial w_{i,j,k}}{ \partial p_i}= \sum_j \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }q_j t_k,
\end{equation}
\begin{equation}
\frac{ \partial L }{ \partial q_j } = \sum_i \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }\frac{ \partial w_{i,j,k}}{ \partial q_j}= \sum_i \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }p_i t_k,
\end{equation}
\begin{equation}
\frac{ \partial L }{ \partial t_k} = \sum_i \sum_j \frac{ \partial L}{ \partial w_{i,j,k} }\frac{ \partial w_{i,j,k}}{ \partial t_k}= \sum_i \sum_j \frac{ \partial L}{ \partial w_{i,j,k} }p_i q_j.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Fig_training-4.jpg}
\caption{Steps in the training phase of the proposed rank-1 network.}
\label{proposed_training2}
\end{figure}
At the next feed forward step of the back-propagation, an outer product of the updated 1-D vectors ${\mathbf p}$, ${\mathbf q}$, and ${\mathbf t}$ is taken to concatenate them back into the 3-D convolution filter ${\mathbf w}''$:
\begin{equation} \label{next_update}
\begin{array}{ccc}
w''_{i,j,k} \!\!\!\!\!\!\!\!\! & = p'_{i}q'_{j}t'_{k} = (p_{i} - \alpha \frac{\partial L}{\partial p_{i}})(q_{j} - \alpha \frac{\partial L}{\partial q_{j}})(t_{k} - \alpha \frac{\partial L}{\partial t_{k}})& \\
&\!\!\!\!\!\!\!\! = p_{i}q_{j}t_{k} - \alpha (p_{i}q_{j}\frac{\partial L}{\partial t_{k}}+q_{j}t_{k}\frac{\partial L}{\partial p_{i}} + p_{i}t_{k}\frac{\partial L}{\partial q_{j}})&\\
\!\!\!\!\!\!+ \!\!\!\!\!\!\!\!\! & {\alpha}^2 (p_{i}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}+ q_{j}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial t_{k}}+t_{k}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}})-{\alpha}^3\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}& \\
=& w_{i,j,k} - \alpha \Delta_{i,j,k}, \,\, \forall_{i,j,k}, &\!\!\!\!\!\!\!\! \\
\end{array}
\end{equation}
where
\begin{equation} \label{set}
\begin{array}{ccc}
\Delta_{i,j,k}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! & = p_{i}q_{j}\frac{\partial L}{\partial t_{k}}+q_{j}t_{k}\frac{\partial L}{\partial p_{i}} + p_{i}t_{k}\frac{\partial L}{\partial q_{j}}&\\
& \!\!\!\!\!\! - \alpha (p_{i}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}+ q_{j}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial t_{k}}+t_{k}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}})+{\alpha}^2\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}.& \\
\end{array}
\end{equation}
As the outer product of 1-D vectors always results in a rank-1 filter, ${\mathbf w}''$ is a rank-1 filter, in contrast to ${\mathbf w}'$, which in general is not.
Comparing (\ref{normal_update}) with (\ref{next_update}), we get
\begin{equation}
w''_{i,j,k} = w'_{i,j,k} - \alpha (\Delta_{i,j,k}-\frac{\partial L}{\partial w_{i,j,k}}).
\end{equation}
Therefore, $\Delta_{i,j,k}-\frac{\partial L}{\partial w_{i,j,k}}$ is the incremental update vector which projects
${\mathbf w}'$ back onto the rank-1 subspace.
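In practice, this two-step update can be realized with automatic differentiation: if the outer product composition is part of the forward computation, the gradients with respect to ${\mathbf p}$, ${\mathbf q}$, and ${\mathbf t}$ are exactly the sums derived above, and the re-composition at the next forward pass performs the projection back onto the rank-1 subspace. The following PyTorch sketch illustrates one possible implementation; the class name, the use of one $({\mathbf p},{\mathbf q},{\mathbf t})$ triple per output filter, and the initialization scale are our own illustrative choices:
\begin{verbatim}
import torch
import torch.nn.functional as F

class Rank1Conv2d(torch.nn.Module):
    # composes out_ch rank-1 filters at every forward pass,
    # so autograd updates the 1-D vectors p, q, t directly
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.p = torch.nn.Parameter(0.1 * torch.randn(out_ch, k))
        self.q = torch.nn.Parameter(0.1 * torch.randn(out_ch, k))
        self.t = torch.nn.Parameter(0.1 * torch.randn(out_ch, in_ch))

    def forward(self, x):
        # w[o,c,i,j] = t[o,c] p[o,i] q[o,j]: rank-1 by construction
        w = torch.einsum('oc,oi,oj->ocij', self.t, self.p, self.q)
        return F.conv2d(x, w, padding=1)
\end{verbatim}
At test time, the trained vectors would instead be applied as three consecutive 1-D convolutions, as in Fig.~\ref{proposed-training1}.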
\section{Property of rank-1 filters}
Below, we explain some properties of the 3-D rank-1 filters.
\subsection{Multilateral property of 3-D rank-1 filters}
We explain the bilateral property of the 2-D rank-1 filters in analogy to the B2DPCA.
The extension to the multilateral property of the 3-D rank-1 filters is then straightforward.
We first observe that a 2-D convolution can be seen as shifting inner products, where
each component $y({\mathbf r})$ at position ${\mathbf r}$ of the output matrix ${\mathbf Y}$ is computed as the inner product of a 2-D filter
${\mathbf W}$ and the image patch ${\mathbf X}({\mathbf r})$ centered at ${\mathbf r}$:
\begin{equation}
y({\mathbf r}) = <{\mathbf W},{\mathbf X}({\mathbf r})>.
\end{equation}
If ${\mathbf W}$ is a 2-D rank-1 filter, then,
\begin{equation}
y({\mathbf r}) = <{\mathbf W},{\mathbf X}({\mathbf r})> = <{\mathbf p}{\mathbf q}^T, {\mathbf X}({\mathbf r})> = {\mathbf p}^T {\mathbf X}({\mathbf r}) {\mathbf q}
\end{equation}
As has been explained in the case of the B2DPCA, since ${\mathbf p}$ is multiplied with the columns of ${\mathbf X}({\mathbf r})$, ${\mathbf p}$ tries to extract from the columns of ${\mathbf X}({\mathbf r})$ the features which can minimize the loss function. That is, ${\mathbf p}$ searches the columns of all patches ${\mathbf X}({\mathbf r}), \forall_{{\mathbf r}}$ for some common features which can reduce the loss function, while ${\mathbf q}$ looks for such features in the rows of the patches.
This is in analogy to the B2DPCA, where the bilateral projection removes the redundancies among the rows and columns in the 2-D filters.
Therefore, by easy extension, the 3-D rank-1 filters which are learned by the multilateral projection will have fewer redundancies among the rows, columns, and channels than the normal 3-D filters in standard CNNs.
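The identity $<{\mathbf W},{\mathbf X}({\mathbf r})> = {\mathbf p}^T {\mathbf X}({\mathbf r}) {\mathbf q}$ used above is elementary, but easy to confirm numerically (toy sizes, chosen arbitrarily):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)
p, q = rng.standard_normal(3), rng.standard_normal(3)
patch = rng.standard_normal((3, 3))       # X(r)
lhs = np.sum(np.outer(p, q) * patch)      # <p q^T, X(r)>
rhs = p @ patch @ q                       # p^T X(r) q
assert np.allclose(lhs, rhs)
\end{verbatim}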
\subsection{Property of projecting onto a low dimensional subspace}
In this section, we show that the convolution with the rank-1 filters projects the output channels onto a low dimensional subspace.
In \cite{DeepConvolution}, it has been shown via the block Hankel matrix formulation that the auto-reconstructing U-Net with an insufficient number of filters results in a low-rank approximation of its input. Using the same block Hankel matrix formulation for the 3-D convolution, we can show that the 3-D rank-1 filter projects the input
onto a low dimensional subspace in a high dimensional space.
To avoid confusion, we use the same definitions and notations as in \cite{DeepConvolution}.
A wrap-around Hankel matrix $H_{d}({\mathbf f})$ of a function ${\mathbf f} = [f[1], f[2], \hdots ,f[n]]$ with respect to the number of columns $d$ is defined as
\begin{equation}
H_{d}({\mathbf f}) =
\left[
\begin{array}{cccc}
f[1] & f[2] & \hdots & f[d] \\
f[2] & f[3] & \hdots & f[d+1] \\
\vdots & \vdots & \ddots & \vdots\\
f[n] & f[1] & \hdots & f[d-1] \\
\end{array}
\right] \in R^{n \times d}.
\end{equation}
Using the Hankel matrix, a convolution operation with a 1-D filter ${\mathbf w}$ of length $d$ can be expressed in a matrix-vector form as
\begin{equation}
{\mathbf y} = H_{d}({\mathbf f})\bar{{\mathbf w}},
\end{equation}
where $\bar{{\mathbf w}}$ is the flipped version of ${\mathbf w}$, and ${\mathbf y}$ is the output result of the convolution.\\
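A small numerical sketch (arbitrary toy sizes) of this wrap-around construction, checking the matrix-vector form against a circular convolution computed via the FFT:
\begin{verbatim}
import numpy as np

def hankel_wrap(f, d):
    n = len(f)
    return np.array([[f[(i + j) % n] for j in range(d)]
                     for i in range(n)])

rng = np.random.default_rng(0)
f, w = rng.standard_normal(8), rng.standard_normal(3)
y = hankel_wrap(f, 3) @ w[::-1]        # H_d(f) times flipped w
wpad = np.zeros(8); wpad[:3] = w
circ = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(wpad)))
assert np.allclose(y, np.roll(circ, -(3 - 1)))
\end{verbatim}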
\indent The 2-D convolution can be expressed using the block Hankel matrix expression of the input channel. The block Hankel matrix of a 2-D input ${\mathbf X} = [{\mathbf x}_1, ..., {\mathbf x}_{n_2}] \in R^{n_1 \times n_2}$ with ${\mathbf x}_i \in R^{n_1}$ being the columns of ${\mathbf X}$, becomes
\begin{equation}
H_{d_1,d_2}({\mathbf X}) =
\left[
\begin{array}{cccc}
H_{d_1} ({\mathbf x}_1) & H_{d_1} ({\mathbf x}_2) & \hdots & H_{d_1} ({\mathbf x}_{d_2}) \\
H_{d_1} ({\mathbf x}_2) & H_{d_1} ({\mathbf x}_3) & \hdots & H_{d_1} ({\mathbf x}_{d_2 +1}) \\
\vdots & \vdots & \ddots & \vdots\\
H_{d_1} ({\mathbf x}_{n_2}) & H_{d_1} ({\mathbf x}_1) & \hdots & H_{d_1} ({\mathbf x}_{d_2 -1}) \\
\end{array}
\right],
\end{equation}
where $H_{d_1,d_2}({\mathbf X}) \in R^{n_1 n_2 \times d_1 d_2}$ and $H_{d_1}({\mathbf x}_{i}) \in R^{n_1 \times d_1}$.
With the block Hankel matrix, a single-input single-output 2-D convolution with a 2-D filter ${\mathbf W}$ of size $d_1 \times d_2$ can be expressed in matrix-vector form,
\begin{equation}
VEC({\mathbf Y}) = H_{d_1,d_2}({\mathbf X}) VEC({\mathbf W}),
\end{equation}
where $VEC({\mathbf Y})$ denotes the vectorization operation by stacking up the column vectors of the 2-D matrix ${\mathbf Y}$.\\
\indent In the case of multiple input channels ${\mathbf X}^{(1)} \hdots {\mathbf X}^{(N)}$, the block Hankel matrix is extended to
\begin{equation}
H_{d_1,d_2 | N}\left( [{\mathbf X}^{(1)} \hdots {\mathbf X}^{(N)}] \right)= \left[ H_{d_1,d_2}({\mathbf X}^{(1)}) \hdots H_{d_1,d_2}({\mathbf X}^{(N)})\right],
\end{equation}
and a single output of the multi-input convolution with multiple filters becomes
\begin{equation}
VEC({\mathbf Y}^{(i)})=\sum_{j=1}^{N}H_{d_1,d_2}({\mathbf X}^{(j)})VEC({\mathbf W}_{(i)}^{(j)}), \,\,\, i=1,\hdots,q,
\end{equation}
where $q$ is the number of filters.
Last, the matrix-vector form of the multi-input multi-output convolution resulting in multiple outputs ${\mathbf Y}^{(1)} \hdots {\mathbf Y}^{(q)}$ can be expressed as
\begin{equation}
{\mathbf Y} = H_{d_1,d_2 | N}\left( [{\mathbf X}^{(1)} \hdots {\mathbf X}^{(N)}] \right) {\mathbf W},
\end{equation}
where
\begin{equation}
{\mathbf Y} = [VEC({\mathbf Y}^{(1)}) \, \hdots \, VEC({\mathbf Y}^{(q)})]
\end{equation}
and
\begin{equation}
{\mathbf W} =
\left[
\begin{array}{ccc}
VEC({\mathbf W}_{(1)}^{(1)}) & \hdots & VEC({\mathbf W}_{(q)}^{(1)}) \\
\vdots & \ddots & \vdots \\
VEC({\mathbf W}_{(1)}^{(N)}) & \hdots & VEC({\mathbf W}_{(q)}^{(N)}) \\
\end{array}
\right].
\end{equation}
To calculate the upper bound of the rank of ${\mathbf Y}$, we use the rank inequality
\begin{equation}
rank(\mathbf{AB}) \leq min\{rank(\mathbf{A}),rank(\mathbf{B})\}
\end{equation}
on ${\mathbf Y}$ to get
\begin{equation}
rank({\mathbf Y})\!\leq\!min \{rank H_{d_1,d_2|N}\!\left(\![{\mathbf X}^{(1)}\hdots{\mathbf X}^{(N)}]\!\right)\!,rank({\mathbf W})\}.
\end{equation}
Now to investigate the rank of ${\mathbf W}$, we first observe that
\begin{equation}
{\mathbf W} =
\left[
\begin{array}{ccc}
{\mathbf t}_1[1]
VEC({\mathbf p}_1 \otimes {\mathbf q}_1 ) & \hdots & {\mathbf t}_q[1]
VEC({\mathbf p}_1 \otimes {\mathbf q}_1 ) \\
\vdots & \ddots & \vdots \\
{\mathbf t}_1[N]
VEC({\mathbf p}_l \otimes {\mathbf q}_r ) & \hdots & {\mathbf t}_q[N]
VEC({\mathbf p}_l \otimes {\mathbf q}_r ) \\
\end{array}
\right]
\end{equation}
as can be seen in Fig. \ref{HankelAnalysis}.\\
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{ForHankelAnalysis.jpg}
\caption{Convolution filters of the proposed rank-1 network.}
\label{HankelAnalysis}
\end{figure}
\indent Then, expressing ${\mathbf W}$ as the stack of its sub-matrices,
\begin{equation}
{\mathbf W} =
\left[
\begin{array}{ccc}
{\mathbf W}_1 \\
\vdots\\
{\mathbf W}_s \\
\vdots\\
{\mathbf W}_N \\
\end{array}
\right] \in R^{Nd_1d_2 \times q},
\end{equation}
where
\begin{equation}
{\mathbf W}_s =
\left[
\begin{array}{ccc}
{\mathbf t}_1[s]VEC({\mathbf p}_i\otimes{\mathbf q}_j) & \hdots & {\mathbf t}_q[s]VEC({\mathbf p}_i\otimes{\mathbf q}_j)\\
\end{array}
\right],
\end{equation}
whose columns are the vectorized forms of the 2-D slices of the 3-D filters which convolve with the $s$-th input channel. We observe that all the sub-matrices ${\mathbf W}_s \in R^{d_1d_2 \times q}, (s=1,...N)$ have a rank of 1, since all the column vectors in ${\mathbf W}_s$ are in the same direction and differ only in their magnitudes, i.e., by the different values of ${\mathbf t}_1[s], ..., {\mathbf t}_q[s]$.
Therefore, the upper bound of $rank({\mathbf W})$ is $min\{N,q\}$ instead of $min\{Nd_1d_2,q\}$ which is the upper bound we get if we use non-rank-1 filters.\\
\indent As a result, the output ${\mathbf Y}$ is upper bounded as
\begin{equation}
rank({\mathbf Y}) \leq a,
\end{equation}
where
\begin{equation}
\label{ranka}
\begin{array}{cc}
a = min \{rank H_{d_1,d_2|N}\left([{\mathbf X}^{(1)}\hdots{\mathbf X}^{(N)}]\right),\\
\mbox{number of input channels ($N$)},\\
\mbox{number of filters ($q$)}\}.
\end{array}
\end{equation}
As can be seen from (\ref{ranka}), the upper bound is determined by the ranks of Hankel matrices of the input channels or the numbers of input channels or filters.
In common deep neural network structures, the number of filters is normally larger than the number of input channels, e.g., the VGG-16 uses in every layer a number of filters larger than or equal to the number of input channels. So if we use the same structure for the proposed rank-1 network as in the VGG-16 model, the upper bound will be determined mainly by the number of input channels.
Therefore, the outputs of layers in the proposed CNN are constrained to live on sub-spaces having lower ranks than the sub-spaces on which the outputs of layers in standard CNNs live. Since the output of a certain layer becomes the input of the next layer, the difference in the rank between the standard and the proposed rank-1 CNN accumulates in higher layers. Therefore, the final output of the proposed rank-1 CNN lives on a sub-space of much lower rank than the output of the standard CNN.
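This rank bound is easy to verify numerically. The sketch below follows the weight-sharing pattern of Fig.~\ref{HankelAnalysis} (one spatial plane per input channel, shared across the $q$ filters, with per-filter lateral vectors), builds the multi-channel convolution as a patch-matrix product playing the role of the block Hankel matrix, and checks that the stacked outputs have rank at most $N$; all sizes are arbitrary toy values:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
N, q, d, H, W = 4, 16, 3, 12, 12
X = rng.standard_normal((N, H, W))
T = rng.standard_normal((q, N))      # lateral vectors t_o
P = rng.standard_normal((N, d))      # spatial p per channel
Qv = rng.standard_normal((N, d))     # spatial q per channel
Wf = np.einsum('oc,ci,cj->ocij', T, P, Qv)   # q rank-1 filters
oh, ow = H - d + 1, W - d + 1
M = np.array([X[:, i:i+d, j:j+d].ravel()     # patch matrix
              for i in range(oh) for j in range(ow)])
Y = M @ Wf.reshape(q, -1).T                  # q output maps
print(np.linalg.matrix_rank(Y))              # <= min(N, q) = 4
\end{verbatim}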
\section{Experiments}
We compared the performance of the proposed model with the standard CNN and the Flattened CNN model \cite{Flattened}.
We used the same number of layers for all the models, where for the Flattened CNN we regarded the combination of the lateral, vertical, and horizontal 1-D convolutional layers as a single layer. Furthermore, we used the same numbers of input and output channels in each layer for all the models, and also the same ReLU, Batch normalization, and dropout operations.
The code for the proposed rank-1 CNN will be released at https://github.com/petrasuk/Rank-1-CNN.\\
\indent Tables 1-3 show the different structures of the models used for each dataset in the training stage.
The outer product operation of three 1-D filters ${\mathbf p}$, ${\mathbf q}$, and ${\mathbf t}$ into a 3-D rank-1 filter ${\mathbf w}$ is denoted as ${\mathbf w} \doteq {\mathbf p} \otimes {\mathbf q} \otimes {\mathbf t}$ in the tables.
The datasets that we used in the experiments are the MNIST, the CIFAR10, and the `Dog and Cat' (https://www.kaggle.com/c/dogs-vs-cats) datasets.
We used different structures for different datasets.
For the experiments on the MNIST and the CIFAR10 datasets, we trained on 50,000 images, and
then tested on 100 batches each consisting of 100 random images, and calculated the overall average accuracy. The sizes of the images in the MNIST and the CIFAR10 datasets are $28 \times 28$ and $32 \times 32$, respectively.
For the `Dog and Cat' dataset, we trained on 24,900 training images (size $224 \times 224$), and tested on a set of 100 test images.\\
\indent The proposed rank-1 CNN achieved a slightly higher testing accuracy on the MNIST dataset than the other two models (Fig. \ref{MNIST}). This may be due to the fact that the MNIST dataset is low-rank in nature, so the proposed method, which constrains the solution to a low-rank sub-space, can find the best approximation for it. On the CIFAR10 dataset, the accuracy is slightly lower than that of the standard CNN (Fig. \ref{CIFAR10}), which may be due to the fact that the images in the CIFAR10 dataset are of higher rank than those in the MNIST dataset.
However, the testing accuracy of the proposed CNN is higher than that of the Flattened CNN, which shows that the better gradient flow in the proposed CNN model achieves a better solution. The `Dog and Cat' dataset was used in the experiments to verify the performance of the proposed CNN on real-sized images and on a
deep structure. In this case, we could not train the Flattened network due to memory issues: we used the Tensorflow API, which, for reasons we could not determine, requires much more GPU memory for the Flattened network than for the proposed rank-1 network.
We also believe that, even without the memory issue, the Flattened network could not have found good parameters with this deep structure, due to the bad gradient flow in deep structures.
The standard CNN and the proposed CNN achieved similar test accuracy, as can be seen in Fig. \ref{DogAndCat}.
\begin{table}[h]
\begin{center}
\caption{Structure of CNN for MNIST dataset}
\begin{tabular}{c|c|c}
\textbf{Standard CNN} & \textbf{Flattened CNN} & \textbf{Proposed CNN}\\
\hline
\multicolumn{3}{c}{Conv1: 64 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$1\times3\times3$ conv} & $1\times1\times1$ conv &${\mathbf w}_{1} \doteq {\mathbf p}_{1}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{1}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{1}(1\times1\times1)$\\
& & $1\times3\times3$ conv \\
\hline
\multicolumn{3}{c}{Conv2: 64 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$64\times3\times3$ conv} & $64\times1\times1$ conv &${\mathbf w}_{2} \doteq {\mathbf p}_{2}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{2}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{2}(64\times1\times1)$ \\
& & $64\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{Max Pool ($\frac{1}{2})$}\\
\hline
\multicolumn{3}{c}{Conv3: 144 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$64\times3\times3$ conv} & $64\times1\times1$ conv &${\mathbf w}_{3} \doteq {\mathbf p}_{3}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{3}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{3}(64\times1\times1)$ \\
& & $64\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{Conv4: 144 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$144\times3\times3$ conv} & $144\times1\times1$ conv &${\mathbf w}_{4} \doteq {\mathbf p}_{4}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{4}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{4}(144\times1\times1)$ \\
& & $144\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{Max Pool ($\frac{1}{2})$}\\
\hline
\multicolumn{3}{c}{Conv5: 144 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$144\times3\times3$ conv} & $144\times1\times1$ conv &${\mathbf w}_{5} \doteq {\mathbf p}_{5}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{5}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{5}(144\times1\times1)$ \\
& & $144\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{Conv6: 256 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$144\times3\times3$ conv} & $144\times1\times1$ conv &${\mathbf w}_{6} \doteq {\mathbf p}_{6}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{6}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{6}(144\times1\times1)$ \\
& & $144\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{Conv7: 256 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$256\times3\times3$ conv} & $256\times1\times1$ conv &${\mathbf w}_{7} \doteq {\mathbf p}_{7}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{7}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{7}(256\times1\times1)$ \\
& & $256\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{FC 2048 + Batch Normalization + ReLU + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{FC 1024 + Batch Normalization + ReLU + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{FC 10 + ReLU + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{Soft-Max}\\
\hline
\end{tabular}
\end{center}
\label{tab:table2}
\end{table}
\begin{table}[h!]
\label{tab:table1}
\begin{center}
\caption{Structure of CNN for CIFAR-10 dataset}
\begin{tabular}{c|c|c}
\textbf{Standard CNN} & \textbf{Flattened CNN} & \textbf{Proposed CNN}\\
\hline
\multicolumn{3}{c}{Conv1: 64 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$3\times3\times3$ conv} & $3\times1\times1$ conv &${\mathbf w}_{1} \doteq {\mathbf p}_{1}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{1}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{1}(3\times1\times1)$\\
& & $3\times3\times3$ conv \\
\hline
\multicolumn{3}{c}{ReLU + Batch Normalization}\\
\hline
\multicolumn{3}{c}{Conv2: 64 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$64\times3\times3$ conv} & $64\times1\times1$ conv &${\mathbf w}_{2} \doteq {\mathbf p}_{2}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{2}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{2}(64\times1\times1)$ \\
& & $64\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{ReLU + Max Pool ($\frac{1}{2})$ + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{Conv3: 144 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$64\times3\times3$ conv} & $64\times1\times1$ conv &${\mathbf w}_{3} \doteq {\mathbf p}_{3}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{3}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{3}(64\times1\times1)$ \\
& & $64\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{ReLU + Batch Normalization}\\
\hline
\multicolumn{3}{c}{Conv4: 144 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$144\times3\times3$ conv} & $144\times1\times1$ conv &${\mathbf w}_{4} \doteq {\mathbf p}_{4}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{4}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{4}(144\times1\times1)$ \\
& & $144\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{ReLU + Max Pool ($\frac{1}{2})$ +Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{Conv5: 256 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$144\times3\times3$ conv} & $144\times1\times1$ conv &${\mathbf w}_{5} \doteq {\mathbf p}_{5}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{5}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{5}(144\times1\times1)$ \\
& & $144\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{ReLU + Batch Normalization}\\
\hline
\multicolumn{3}{c}{Conv6: 256 filters, each filter constituted as:}\\
\hline
\multirow{4}{*}{$256\times3\times3$ conv} & $256\times1\times1$ conv &${\mathbf w}_{6} \doteq {\mathbf p}_{6}(1\times3\times1)$\\
& $1\times3\times1$ conv & $\otimes{\mathbf q}_{6}(1\times1\times3)$\\
& $1\times1\times3$ conv & $\otimes{\mathbf t}_{6}(256\times1\times1)$ \\
& & $256\times3\times3$ conv\\
\hline
\multicolumn{3}{c}{ReLU + Max Pool ($\frac{1}{2})$ + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{FC 1024 + Batch Normalization + ReLU + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{FC 512 + Batch Normalization + ReLU + Drop Out (Prob. = 0.5)}\\
\hline
\multicolumn{3}{c}{FC 10}\\
\hline
\multicolumn{3}{c}{Soft-Max}\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Structure of CNN for `Dog and Cat' dataset}
\label{tab:table3}
\begin{tabular}{c|c}
\textbf{Standard CNN} & \textbf{Proposed CNN}\\
\hline
\multicolumn{2}{c}{Conv1: 64 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$3\times3\times3$ conv} & ${\mathbf w}_{1} \doteq {\mathbf p}_{1}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{1}(1\times1\times3)\otimes{\mathbf t}_{1}(3\times1\times1)$\\
& $3\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{Conv2: 64 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$64\times3\times3$ conv} & ${\mathbf w}_{2} \doteq {\mathbf p}_{2}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{2}(1\times1\times3)\otimes{\mathbf t}_{2}(64\times1\times1)$\\
& $64\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{Batch Normalization + ReLU + Max Pool ($\frac{1}{2})$}\\
\hline
\multicolumn{2}{c}{Conv3: 144 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$64\times3\times3$ conv} & ${\mathbf w}_{3} \doteq {\mathbf p}_{3}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{3}(1\times1\times3)\otimes{\mathbf t}_{3}(64\times1\times1)$\\
& $64\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{ReLU}\\
\hline
\multicolumn{2}{c}{Conv4: 144 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$144\times3\times3$ conv} & ${\mathbf w}_{4} \doteq {\mathbf p}_{4}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{4}(1\times1\times3)\otimes{\mathbf t}_{4}(144\times1\times1)$\\
& $144\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{Batch Normalization + ReLU + Max Pool ($\frac{1}{2})$}\\
\hline
\multicolumn{2}{c}{Conv5: 256 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$144\times3\times3$ conv} & ${\mathbf w}_{5} \doteq {\mathbf p}_{5}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{5}(1\times1\times3)\otimes{\mathbf t}_{5}(144\times1\times1)$\\
& $144\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{ReLU}\\
\hline
\multicolumn{2}{c}{Conv6: 256 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$256\times3\times3$ conv} & ${\mathbf w}_{6} \doteq {\mathbf p}_{6}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{6}(1\times1\times3)\otimes{\mathbf t}_{6}(256\times1\times1)$\\
& $256\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{Batch Normalization + ReLU + Max Pool ($\frac{1}{2})$}\\
\hline
\multicolumn{2}{c}{Conv7: 256 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$256\times3\times3$ conv} & ${\mathbf w}_{7} \doteq {\mathbf p}_{7}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{7}(1\times1\times3)\otimes{\mathbf t}_{7}(256\times1\times1)$\\
& $256\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{ReLU}\\
\hline
\multicolumn{2}{c}{Conv8: 484 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$256\times3\times3$ conv} & ${\mathbf w}_{8} \doteq {\mathbf p}_{8}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{8}(1\times1\times3)\otimes{\mathbf t}_{8}(256\times1\times1)$\\
& $256\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{ReLU}\\
\hline
\multicolumn{2}{c}{Conv9: 484 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$484\times3\times3$ conv} & ${\mathbf w}_{9} \doteq {\mathbf p}_{9}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{9}(1\times1\times3)\otimes{\mathbf t}_{9}(484\times1\times1)$\\
& $484\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{Batch Normalization + ReLU + Max Pool ($\frac{1}{2}$)}\\
\hline
\multicolumn{2}{c}{Conv10: 484 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$484\times3\times3$ conv} & ${\mathbf w}_{10} \doteq {\mathbf p}_{10}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{10}(1\times1\times3)\otimes{\mathbf t}_{10}(484\times1\times1)$\\
& $484\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{ReLU}\\
\hline
\multicolumn{2}{c}{Conv11: 484 filters, each filter constituted as:}\\
\hline
\multirow{3}{*}{$484\times3\times3$ conv} & ${\mathbf w}_{11} \doteq {\mathbf p}_{11}(1\times3\times1) \otimes$\\
& ${\mathbf q}_{11}(1\times1\times3)\otimes{\mathbf t}_{11}(484\times1\times1)$\\
& $484\times3\times3$ conv \\
\hline
\multicolumn{2}{c}{Batch Normalization + ReLU + Max Pool ($\frac{1}{2}$)}\\
\hline
\multicolumn{2}{c}{FC 1024 + Batch Normalization + ReLU}\\
\hline
\multicolumn{2}{c}{FC 512 + Batch Normalization + ReLU}\\
\hline
\multicolumn{2}{c}{FC 2}\\
\hline
\multicolumn{2}{c}{Soft-Max}\\
\hline
\end{tabular}
\end{center}
\end{table}
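As an aside, the rank-1 factorization listed in the right-hand columns of the tables above, ${\mathbf w} \doteq {\mathbf p}\otimes{\mathbf q}\otimes{\mathbf t}$, is straightforward to assemble in code. The following NumPy sketch is a purely illustrative reconstruction (the function name and array shapes are assumptions, not the actual implementation):
\begin{verbatim}
import numpy as np

def rank1_filter(p, q, t):
    # p: (3,) vertical factor, q: (3,) horizontal factor,
    # t: (C,) channel factor; returns a (C, 3, 3) filter.
    spatial = np.outer(p, q)            # (3, 3) spatial kernel
    return t[:, None, None] * spatial   # broadcast to (C, 3, 3)

# Example: a 64-channel 3x3 filter built from 3 + 3 + 64 numbers
w = rank1_filter(np.random.randn(3), np.random.randn(3),
                 np.random.randn(64))
assert w.shape == (64, 3, 3)
\end{verbatim}
Each such rank-1 term carries $C+6$ free parameters, compared with the $9C$ parameters of an unconstrained $C\times3\times3$ filter.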
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{MNIST.png}
\caption{Comparison of test accuracy on the MNIST dataset.}
\label{MNIST}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{CIFAR10.png}
\caption{Comparison of test accuracy on the CIFAR10 dataset.}
\label{CIFAR10}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{DogAndCat.png}
\caption{Comparison of test accuracy on the `Dog and Cat' dataset.}
\label{DogAndCat}
\end{figure}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
|
1,941,325,220,043 | arxiv | \section{Introduction}
The past two decades have seen the discovery of a number of strongly correlated materials with unconventional physical properties. Due to the competing effects of essentially electronic processes and interactions, these doped Mott insulators typically exhibit complex phase diagrams which include antiferromagnetic phases, generally incommensurate charge-ordered phases, and high temperature superconducting phases. When conducting, these systems do not have well defined electron-like quasiparticles, and their metallic states thus cannot be explained by the conventional theory of metals, the Landau theory of the Fermi liquid; likewise, the associated superconducting states cannot be described in terms of the BCS mechanism for superconductivity.
The startling properties of these materials have led to a number of proposals of non-trivial ground states of strongly correlated systems which share the common feature that they cannot be adiabatically obtained from the physics of non-interacting electrons. One class of proposed ground states comprises the resonating valence bond (RVB) spin liquid phases, quantum liquid ground states in which there is no long range spin order of any kind, and the related valence bond crystal phases of frustrated quantum antiferromagnets\cite{andersonrvb,sr-rvb87} and their descendants.\cite{leenagwen06,senthilfisher1,herm_alg05,mudry94} On the other hand, the presence of competing spatially-inhomogeneous charge-ordered phases in close proximity to both antiferromagnetism and high $T_c$ superconductivity, and the existence of incommensurate low energy fluctuations in the latter phase, strongly suggest that these phases may have a common origin. It has long been suggested that some form of frustration of the charge degrees of freedom may be at work in these systems.\cite{Emery93,emer99,stripes} The explanation of both the existence of a large pairing scale in the superconducting phase and their close proximity to inhomogeneous
charge-ordered phases is one of the central conceptual challenges in the physics of these doped Mott insulators.\cite{chapter}
The most studied class of these strongly correlated materials are the cuprate high temperature superconductors (for a recent review on their behavior and open questions see Ref.~\onlinecite{Kivelson03}.)
Unconventional behaviors have also been seen in other strongly correlated complex oxides.\cite{maeno-mackenzie}
More recently, strong evidence for non-magnetic phases has been discovered in new frustrated quantum magnetic materials, including the quasi-2D triangular antiferromagnetic insulators
such as\cite{coldea01} Cs${}_2$CuCl${}_4$, the quasi-2D triangular organic compounds such
as\cite{Kanoda03} $\kappa$-(BEDT-TTF)${}_2$Cu${}_2$(CN)${}_3$, and the 3D pyrochlore antiferromagnets such as the spin-ice compound\cite{Ramirez99,Snyder04} Dy${}_2$Ti${}_2$O${}_7$ (although quantum effects do not appear to be prominent in spin-ice systems.)
It is thus of interest to develop a theoretical framework to describe quantum frustrated systems in the regime of strong correlation, and to understand their role in the mechanism for inhomogeneous phases in strongly correlated systems. This is the main purpose of this paper. It has long been known that generally incommensurate inhomogeneous phases arise in classical systems with competing short range attractive interactions and long range repulsive interactions. In such systems, the short range attractive interactions (whose physical origin depend on the system in question) favor spatially inhomogeneous phases, {\it i.e.\/} phase separation, which is {\em frustrated} by long range (typically Coulomb) repulsive interactions. Coulomb-frustrated phase separation has been proposed as a mechanism for stripe phases in doped Mott insulators\cite{Emery93,low94,lorenzana2001} and in low density electron gases.\cite{jamkivspiv05} Similar ideas were also proposed to explain the structure of the crust of neutron stars, lightly doped with protons,\cite{Ravenhall83,Ravenhall93} and in soft condensed matter ({\it e.g.\/} block copolymers).\cite{seul95}
In this work we will pursue a different approach and consider mechanisms of quantum stabilization ({\it i.e.\/} quantum order by disorder) of stripe-like phases in frustrated quantum systems. We will specifically consider frustrated versions of two-dimensional quantum dimer models,\cite{Rokhsar88} which provide a qualitative description of the physics of quantum frustrated magnets in their spin-disordered phases. The phases that we will discuss here are essentially valence bond crystals with varying degree of commensurability and become asymptotically incommensurate. Since these systems are charge-neutral, there are no long range interactions. As we will see below, quantum fluctuations resolve the high degeneracies of their naive classical limit leading to a non-trivial phase diagram with phases with different degree of commensurability or {\em tilt}. The resulting phase diagram has the structure of an incomplete devil's staircase similar to that found in classical order-by-disorder systems such as the anisotropic next-nearest neighbor Ising (ANNNI) models\cite{fishselke80,bak82}. In the regime in which quantum fluctuations are weak, which is where our calculations are systematically controlled, only a small fraction of the phase diagram exhibits phases with non-trivial modulations. In this ``classical'' regime the observation of non-trivial phases requires fine tuning of the coupling constants. (In contrast, in systems with long range interactions no such fine tuning is needed at the classical level.\cite{low94}) However, as the quantum fluctuations grow, the fraction of the phase diagram occupied by these non-trivial phases becomes larger. Thus, at finite values of the coupling constants, where our estimates are not accurate, no fine tuning is needed.
Considerable progress has been made towards understanding theoretically
the liquid phases, including the proper field
theoretic description\cite{msf02,ardonne04} which also allows for an analysis of the related valence bond solid phases with varying degrees of commensurability.\cite{vbs04,fhmos04}
Complementary to this effort is the dynamical question of how, or
even if, a phase with exotic non-local properties can
arise in a system where the interactions are purely local.
We emphasize the requirement of locality because many of these exotic
structures have been proposed for experimental systems believed to be
described by a local Hamiltonian (i.e.\ of the Hubbard or
Heisenberg type). An additional question is whether the exotic
physics can be realized in an isotropic model or if it is necessary for
the Hamiltonian to explicitly break symmetries.
For some of these
structures, the dynamical questions have been partially
settled by the discovery of model Hamiltonians\cite{MStrirvb,
kitaev05, ardonne04} which stabilize the exotic
phase over a portion of their quantum phase diagrams. While many of these
models do not (currently) have experimental realizations, their value, in
addition to providing proofs of principle, lies in the
identification of physical mechanisms, which often have
validity beyond the specific case considered. For example,
the existence of short-range RVB phases was first demonstrated
analytically in quantum dimer models\cite{Rokhsar88,MStrirvb}, the essential ingredients being geometric
frustration and ring interactions. The possibility of such phases
being realized in spin systems was subsequently demonstrated by the
construction of a number of spin
Hamiltonians\cite{herm_pyro04,balfishgirv02,fujimoto05,rms05,damle06,isakov06,sengupta06}, including SU(2) invariant
models\cite{fujimoto05,rms05},
all of which reduce to dimer-like models at low energies. The existence of commensurate valence bond solid phases, with low order commensurability, in doped quantum dimer models\cite{Fradkin90} has been studied recently\cite{Balents05a,Balents05b}, as well as modulated phases in doped quantum dimer models within mean field theory.\cite{vojta99}
It is in this context that we ask the following question: is it
possible to realize high order striped phases in a strongly-correlated quantum system with only
local interactions and no explicitly broken symmetries?
While the term {\em stripe} has been used in reference to a number of
spatially inhomogeneous states, here we use the term to
denote a domain wall between two uniform regions. The presence of
domain walls means that rotational symmetry has been broken.
Fig.~\ref{fig:stripe} gives examples of striped phases. Theories
based on the formation of striped phases have been
proposed in a number of experimental contexts, notably the high
$T_c$ cuprates\cite{zaanen,zhang,vojta99,stripes}, where the ``stripes'' are lines of doped holes separating antiferromagnetic domains (see Fig.~\ref{fig:stripe}a). In the simplest
striped states, the domain walls are periodically
spaced (Fig.~\ref{fig:stripe}b) or are part of a repeating unit cell
(Fig.~\ref{fig:stripe}c). We use the term ``high order striped
phase'' in the case where the periodicity is large compared to
the other characteristic lengths in the model.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{stripe.eps}
\caption{Examples of striped phases. (a) shows stripes of holes
separating antiferromagnetic domains. This structure appears in some
theories of the high $T_{c}$ cuprates. (b) shows periodically spaced
domain walls separating regions where the order parameter takes the
uniform value $\phi_{1}$ or $\phi_{2}$. (c) is another example where the
periodicity is associated with a repeating unit cell. If the repeat
distance becomes infinite, then the state is said to be {\em
incommensurate} with the lattice.}
\label{fig:stripe}
\end{center}}
\end{figure}
Our central result is a positive answer to the question posed in the
preceding paragraph. We do this by constructing a two-dimensional
quantum dimer model, with only short range interactions and no explicitly broken
symmetries, that shows an infinite number of periodic striped phases in its $T=0$
phase diagram. The collection of states forms an incomplete devil's staircase. The
phases are separated by first order transitions and the spacing between stripes
becomes arbitrarily large as the staircase is traversed.
Before giving details of the construction, we reiterate that we are
searching for (high-order) striped phases in a Hamiltonian with only {\em local}
interactions and {\em without} explicitly breaking any symmetries. As
alluded to earlier, a number of experimental systems where
stripe-based theories have been proposed are widely believed to be
described by Hamiltonians that meet these restrictions. A notable
example is the Emery model \cite{emery87, kivfradgeb04} of the high-$T_{c}$ cuprates, which is a
generalization of the Hubbard model that includes both Cu and O
sites. Since it is not {\em a priori} obvious that a nontrivial
global ordering such as a high order striped phase can unambiguously arise from such
local, symmetric strongly correlated models, we include these phases in the list of
``exotic'' structures. However, in the
absence of these restrictions, the occurrence of stripe-like
phases is relatively common. Relaxing the requirement of high
periodicity, we note that low order striped quantum phases can occur
in the Bose-Hubbard model at fractional fillings if appropriate
next-nearest-neighbor interactions are added\cite{sach_compord05}. Relaxing the requirement of
locality, we note that stripe phases arise naturally in
systems with long range Coulomb interactions\cite{vojta99,jamkivspiv05}. More generally, if the
Hamiltonian includes a term that is effectively a chemical potential
for domain walls, and if there is a long-range repulsive interaction
between domain walls, then we may generically expect striped phases where the
spacing between domain walls is large (compared with the lattice
spacing).
A guiding principle in constructing models with exotic phases
is {\em frustration}, or the inability of a system to simultaneously
optimize all of its local interactions. Quantum dimer models are
relatively simple models that contain the basic physics of quantum
frustration and have proven to be a useful place to look in the search
for exotic phases\cite{MStrirvb}. This is one reason why we choose to work in the
dimer Hilbert space. A second reason is related to the observation
that each dimer covering may be assigned a winding number and this
divides the Hilbert space into topological sectors that are not
coupled by local dynamics. The ground state wavefunction of a dimer
Hamiltonian will typically live in one of these sectors (ignoring the
complications of degeneracy for the moment). As
parameters in the Hamiltonian are varied, it is possible that at some
critical value, the
sector containing the ground state will change. Such a scenario,
which occurs even in the simplest dimer model introduced by Rokhsar and Kivelson, is
an example of a {\em quantum tilting transition} between a ``flat''
and ``tilted'' state. In
Refs.~\onlinecite{vbs04} and \onlinecite{fhmos04}, it was shown, by
field theoretic arguments, that
when such a transition is perturbed, states of ``intermediate
tilt'' (this will be made more precise below) may be stabilized. As
we will show, these intermediate tilt states may be viewed as
stripe-like states, of the form we are interested in. Taking
inspiration from these ideas, our construction involves perturbing
about a tilting transition in a specially constructed dimer model.
In classical Ising systems, the competition between nearest-neighbor and
next-nearest-neighbor interactions is a
well-known mechanism for generating incommensurate phases
\cite{fishselke80}.
Classical models of striped phases are
based on two complementary principles: soliton formation and competing
interactions\cite{bak82}. Our quantum construction is based on
analogies with two classical models that are representatives of these
two aspects: the Pokrovsky-Talapov model of fluctuating domain
walls\cite{poktal78} and the ANNNI model\cite{fishselke80}. In section
\ref{sec:classical}, we review
the relevant features of these models. In
section \ref{sec:dimer}, we review the salient features about dimer
models and tilting transitions. In section \ref{sec:overview}, we give an
overview of the construction and technical details are presented in
section \ref{sec:details} and the appendices. In section
\ref{sec:spin}, we discuss how these ideas connect to spin models, thus
extending our ``proof of principle'' to systems with physical degrees of
freedom. We discuss implications of these results
in section \ref{sec:discuss}. In two appendices we give technical details of our calculations.
\section{Classical models}
\label{sec:classical}
Our approach builds on principles underlying
stripe formation in classical models, where the problem is also
referred to as a commensurate-incommensurate transition\cite{bak82}.
The classical models are based on two complementary principles: the
insertion of domain walls and competing local interactions.
\begin{figure}[t]
{\begin{center}
\includegraphics[width=3in]{poktal.eps}
\caption{(a) ground and (b) excited states of the
Pokrovsky-Talapov model of fluctuating classical domain walls.
This is a two-dimensional anisotropic model where domain walls form
along the $y$ direction and separate regions where the order parameter
is uniform. While the domain walls cost energy, they
are allowed to fluctuate, which carries entropy. For a range of
parameters, the domain walls are actually favored by the free energy
minimization.}
\label{fig:poktal}
\end{center}}
\end{figure}
\begin{figure}[h!]
{\begin{center}
\includegraphics[width=2in]{annni1.eps}
\caption{The anisotropic next nearest neighbor Ising (ANNNI) model.
Ising spins lie on the sites of a $d$-dimensional cubic lattice. Nearest
neighbor interactions are ferromagnetic ($J_{1}<0$). Along one of the
directions, we also have antiferromagnetic next nearest neighbor
interactions ($J_{2}>0$).}
\label{fig:annni1}
\end{center}}
\end{figure}
A toy model relevant to the present work is the picture of fluctuating
domain walls in two dimensions, introduced by Pokrovsky and
Talapov\cite{poktal78}. The walls are allowed to fluctuate though the
ends are fixed (Fig.~\ref{fig:poktal}), which precludes bubbles. The
free energy minimization is a competition between the creation energy
of having walls, the elastic energy of deviating from the flat state, and
the entropic benefit of allowing the walls to fluctuate. This theory predicts
a transition from a uniform phase to a striped phase. The spacing between
walls depends on the parameters of the theory (including
temperature) and can be large compared with other length scales.
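Schematically, and in our own notation rather than that of Ref.~\onlinecite{poktal78}, the free energy per unit length as a function of the wall density $n$ takes the form
\begin{equation}
f(n)\approx(E_{w}-Ts)\,n+c(T)\,n^{3},
\end{equation}
where $E_{w}$ is the creation energy of a wall per unit length, $s$ is the entropy per unit length gained by the wandering of an isolated wall, and the cubic term encodes the entropic repulsion between neighboring walls. When $E_{w}-Ts<0$, minimizing $f$ gives a finite wall density $n\sim\sqrt{(Ts-E_{w})/3c}$, i.e.\ a striped phase whose period is large close to the transition.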
A second model relevant to the present work is the classical anisotropic
next-nearest-neighbor Ising (ANNNI) model in three (and higher)
dimensions\cite{fishselke80,bakvonboehm80}. This model
(Fig.~\ref{fig:annni1}) describes
Ising spins on a cubic lattice with ferromagnetic nearest-neighbor
interactions $J_{1}<0$ and antiferromagnetic next-nearest-neighbor
interactions $J_{2}>0$ {\em along one of the lattice directions}.
A key feature of
the ANNNI Hamiltonian is a special
point $-J_{1}/J_{2}=2$ where a large number of stripe-like states are
degenerate at zero temperature. As the temperature is raised, the
competition between $J_{1}$ and $J_{2}$ causes an infinite number of
modulated phases to emerge from this degenerate point.
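For definiteness, the Hamiltonian sketched in Fig.~\ref{fig:annni1} may be written (in our sign conventions) as
\begin{equation}
H_{\rm ANNNI}=J_{1}\sum_{\langle ij\rangle}s_{i}s_{j}
+J_{2}\sum_{i}s_{i}s_{i+2\hat{z}},
\end{equation}
where $s_{i}=\pm1$, the first sum runs over nearest-neighbor pairs, and the second sum runs over next-nearest neighbors along the axial direction $\hat{z}$ only. A one-line check of the degenerate point: along the axial direction, the ferromagnetic pattern costs $J_{1}+J_{2}$ per site while the period-2 pattern $({+}{+}{-}{-})$ costs $-J_{2}$, and the two coincide precisely when $-J_{1}/J_{2}=2$.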
The phase diagram in the low $T$ limit was studied analytically in
Ref.~\onlinecite{fishselke80} using a novel perturbative scheme where the
existence of higher order phases was established at successively higher orders
in the perturbation theory. Numerical studies at higher
temperatures\cite{bakvonboehm80} indicated that incommensurate phases
occur near the phase boundaries. Therefore, the collection of phases form an
incomplete devil's staircase\cite{bak82,3dannni}. A quantum version of the ANNNI
model was studied in Refs.~\onlinecite{yeomans95} and \onlinecite{harris95}.
The phase diagram of our model is similar to that of the ANNNI model
and our analytical methods are similar in spirit to
Ref.~\onlinecite{fishselke80}.
However, the basic physics of our model corresponds more clearly to a quantum
version of the energy-entropy balance occurring in the fluctuating domain wall
picture. We now discuss one more ingredient of the construction
before putting the pieces together in section \ref{sec:overview}.
\section{Quantum dimer models and tilting transitions}
\label{sec:dimer}
A hard-core dimer covering of a lattice is a mapping where each site
of the lattice forms a bond with exactly one of its nearest
neighbors. Each dimer covering is a basis vector
in a dimer Hilbert space and the inner product is such that
different dimer coverings are orthogonal. Quantum dimer models are
defined on this dimer Hilbert space through operators that manipulate
these dimers in ways that preserve the hard-core condition. These
models were first proposed as effective descriptions of the strong coupling
regime of quantum spin systems\cite{Rokhsar88} and
Refs.~\onlinecite{balfishgirv02,fujimoto05,rms05}
discuss ways in which this correspondence can be made precise.
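For very small lattices, this Hilbert space can even be generated by brute force. The following Python sketch (purely illustrative, and exponentially slow) enumerates the hard-core coverings of an $n_{x}\times n_{y}$ grid with open boundaries:
\begin{verbatim}
from itertools import combinations

def coverings(nx, ny):
    # All hard-core dimer coverings of an nx-by-ny open grid.
    sites = [(x, y) for x in range(nx) for y in range(ny)]
    bonds = [((x, y), (x + 1, y))
             for x in range(nx - 1) for y in range(ny)]
    bonds += [((x, y), (x, y + 1))
              for x in range(nx) for y in range(ny - 1)]
    result = []
    for choice in combinations(bonds, len(sites) // 2):
        covered = {s for b in choice for s in b}
        if len(covered) == len(sites):  # each site covered once
            result.append(choice)
    return result

print(len(coverings(2, 2)), len(coverings(4, 4)))  # -> 2 36
\end{verbatim}
Each covering returned this way is one basis vector of the dimer Hilbert space described above.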
The space of dimer coverings can be subdivided into different
topological sectors labelled by the pair of winding numbers
$(W_{x},W_{y})$ defined in Fig.~\ref{fig:tilt}. The winding number
is a global property in that it is not affected by any local
rearrangement of dimers. In particular, for any local Hamiltonian,
the matrix element between dimer coverings in different sectors will be zero.
For a given local Hamiltonian, the ground state of the system will
typically lie in one of the topological sectors. A common occurrence in dimer
models is a quantum phase transition in which the topological sector
containing the ground state changes. Such an occurrence is
called a tilting transition because a dimer covering of a 2d bipartite
lattice may be viewed as the surface of a three-dimensional crystal
through the height representation\cite{blote82}. In this language, the
different topological sectors correspond to different values for
the (global) average tilt of the surface. The correspondence between
dimers and heights is reviewed in Fig.~\ref{fig:height} but for the
present purpose, it is sufficient to {\em define} the ``tilt'' of a dimer
covering as its ``winding number per unit length''. The simplest dimer model,
introduced in Ref.~\onlinecite{Rokhsar88}, has a tilting transition between a
flat state (zero tilt) and the staggered state, which is maximally tilted
(Fig.~\ref{fig:height}b). At the critical point, called the
Rokhsar-Kivelson or RK point, the Hamiltonian has a ground state
degeneracy where all tilts are equally favored.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{tilt.eps}
\caption{Winding numbers: draw a reference line that extends through and
around (due to the periodic boundary condition) the system and
label the vertical lines of the lattice alternately as $A$ and $B$
lines. For any dimer configuration, we may define, with regard to
this reference line, the winding
number $W_{x}=N_{A}-N_{B}$, where $N_{A}$ is the number
of $A$ dimers intersecting the line and similarly for $N_{B}$.
We can likewise draw a vertical reference line and define the analogous quantity $W_{y}$.
Note that this particular construction works for a 2d bipartite lattice.
For 2d non-bipartite lattices, the construction is simpler: count the
total number of dimers intersecting the horizontal and vertical reference lines
and there are four sectors corresponding to whether $W_{x,y}$ is even
or odd.}
\label{fig:tilt}
\end{center}}
\end{figure}
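To make this bookkeeping concrete, here is a minimal Python sketch of the count defined in the caption of Fig.~\ref{fig:tilt}; the coordinate conventions, and the choice that vertical lines at even $x$ are labelled $A$, are our own illustrative assumptions:
\begin{verbatim}
def winding_x(dimers, y_ref):
    # W_x = N_A - N_B for the horizontal reference line at
    # height y_ref + 1/2.  A dimer is a pair of adjacent
    # sites ((x1, y1), (x2, y2)); vertical lattice lines at
    # even x are labelled A, those at odd x are labelled B.
    w = 0
    for (x1, y1), (x2, y2) in dimers:
        if x1 == x2 and {y1, y2} == {y_ref, y_ref + 1}:
            w += 1 if x1 % 2 == 0 else -1   # A: +1, B: -1
    return w

# Four vertical dimers in a columnar row: two A, two B dimers
columnar = [((x, 0), (x, 1)) for x in range(4)]
print(winding_x(columnar, 0))  # -> 0
\end{verbatim}
Any local rearrangement of dimers changes $N_{A}$ and $N_{B}$ by equal amounts, so $W_{x}$ is untouched; this is the sense in which the winding numbers are topological labels.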
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{height.eps}
\caption{Sample dimer configurations with corresponding height
mappings. The height mapping involves assigning integers to the squares of the
lattice in the following manner. Divide the bipartite lattice into $A$ and $B$
sublattices. Assign zero height to a reference square and then
moving clockwise around an $A$ site, the height increases by one if a
dimer is not crossed and decreases by three if a dimer is crossed.
The same rule applies moving counterclockwise about a $B$ site. The
integers correspond to local heights of a crystal whose base lies on
the page. In these examples, the lower square in the second column is taken as
the reference square. (a) Dimers are arranged in
columns corresponding to a surface that is flat on average (though
there are fluctuations at the lattice scale). (b) Dimers are
staggered and the corresponding surface is maximally tilted. (c)
The average tilt is nonzero due to the staggered strip in between
the flat columnar regions. (d) Going from left to right, the surface
initially falls and then rises giving an average tilt of zero. This
is because the two staggered regions have opposite orientation.}
\label{fig:height}
\end{center}}
\end{figure}
The recognition of the Rokhsar-Kivelson dimer model as a tilting
transition led to a field theoretic description \cite{hen97,msf02}
of the RK point based on a coarse-grained\cite{action} version of the height field
(Fig.~\ref{fig:height}). The stability of this field theory
was studied in Refs.~\onlinecite{vbs04} and \onlinecite{fhmos04}. These studies
showed that by tuning a small perturbation and non-perturbatively adding irrelevant operators, it is
possible to make the tilt vary continuously from a
flat state to the maximally tilted state. In addition, it was
observed that the system has a tendency to ``lock-in'' at values of the
tilt commensurate with the underlying microscopic lattice,
the specific values depending on details of the perturbation. It was
also noted that while a generic perturbation would make the transition first
order\cite{vbs04}, for a sufficiently small perturbation, the correlation length
was extremely large\cite{fhmos04} which, it was argued, justified the
field theory approach nonetheless. Therefore, the generic effect of
perturbations would be to smooth the Rokhsar-Kivelson
tilting transition by making the system pass through an incomplete devil's
staircase of intermediately tilted states. One may suspect that the structure
of the field theory, including the predictions of
Refs.~\onlinecite{vbs04} and \onlinecite{fhmos04}, would hold for a broader class of tilting
transitions. In particular, one may consider
the case where the critical point is merely a point of large
degeneracy where all tilts are equally favored\cite{tilt, nussinov}, which is analogous
to the classical ANNNI model.
The relevance of all of this to the present work is most easily seen
from Fig.~\ref{fig:ideal}, which shows the simplest examples of states
that have intermediate tilt (i.e.\ winding number). These are
stripe-like states having a finite density of staggered domain walls
and more general tilted states may be obtained by locally rearranging
the dimers. The preceding discussion suggests that these kinds of
structures arise naturally when quantum dimer models are perturbed
around a tilting transition. This observation will guide the
construction outlined in the next section.
\section{Overview of strategy}
\label{sec:overview}
We now combine various ideas presented in sections \ref{sec:classical} and
\ref{sec:dimer} to construct the promised quantum model. In the
present section, we present an overview of the construction with
details and subtleties relegated to section \ref{sec:details}. The basic idea is to
construct a quantum dimer model with a tilting transition and then to
appropriately perturb this model to realize a staircase of striped
states. We design the unperturbed system to have a large degeneracy at the
critical point, with each of the degenerate states having
a domain-wall structure. The perturbation will effectively make
these domain walls fluctuate and the degeneracy will be lifted in a
quantum analog of the energy-entropy competition that drives the
classical Pokrovsky-Talapov transition. Using standard quantum
mechanical perturbation theory, we will
obtain a phase diagram similar to the classical 3D ANNNI model and
will find that phases with increasingly long periodicities
will be stabilized at higher orders in the perturbation theory. This
mathematical approach is similar in spirit to the analysis of the
classical ANNNI model in Ref.~\onlinecite{fishselke80}.
A simple, rotationally invariant Hamiltonian with a tilting transition is given by:
\begin{equation}
\scalebox{0.43}{\includegraphics{parent4.eps}}\\
\label{eq:parent4}
\end{equation}
This model displays a first-order transition between a columnar and fully staggered state at a highly degenerate point, where $2a=b$. In principle, we may perturb this model
with an off-diagonal resonance term and expect phases with intermediate tilt (and
possibly other exotic phases) on the general grounds discussed in the previous
section. However, it is difficult to make precise statements about
the phase diagram even for such fairly simple models. We will study a slightly
constrained version of this model that is convenient for making analytical
progress.
We construct the quantum dimer model in two steps. First, we
construct a diagonal parent Hamiltonian $H_{0}$ (Eq.~\ref{eq:H0}) where the
ground states
are separated from excited states by a tunably large gap. $H_{0}$ will
not break any lattice symmetries, but the preferred ground states will
spontaneously break translational and rotational symmetry.
In particular, we design $H_{0}$ to select ground states having the
domain wall structure shown in Fig.~\ref{fig:generic}.
In these states, the dimers arrange themselves into staggered domains of unit
width separating columnar regions of arbitrary width. The columnar dimers are
horizontal if the staggered dimers are vertical (and vice versa) and the
staggered strips come in two orientations. Notice that the fully columnar
state (a columnar region of infinite width) is included this collection but
the fully staggered state (Fig.~\ref{fig:height}b), which appears in the
Rokhsar-Kivelson phase diagram, is not. In the following, we will
typically draw the staggered strips as vertical domain walls but the
horizontal configurations are equally possible.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{random2.eps}
\caption{A typical domain wall state selected by $H_{0}$.
These states break translational and rotational symmetry. The staggered
strips, which are one column wide and may have one of two orientations, are like
domain walls separating columnar regions, which may have arbitrary width.
When $a=b$ in Eq.~\ref{eq:energy0}, the set of these states spans the
degenerate ground state manifold of $H_{0}$.}
\label{fig:generic}
\end{center}}
\end{figure}
In analogy with the ANNNI model, $H_{0}$ is designed so that all of
these domain wall states are degenerate when the couplings are tuned
to a special point. Away from this point, the system will enter either a flat
or a tilted phase. The unperturbed
phase diagram is sketched in Fig.~\ref{fig:phase0}.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{phase_parent_1.eps}
\caption{Ground state phase diagram of the parent Hamiltonian $H_{0}$ as a
function of the parameter $a-b$ (see Eq.~\ref{eq:H0}). In these
states, each dimer participates in exactly two attractive bonds. When $a-b=0$, the
states of Fig.~\ref{fig:generic} are degenerate ground states. Away from this
point, the system enters a state where dimers either maximize or
minimize the number of staggered interactions. The maximally staggered
configuration is commonly called the ``herringbone'' state.}
\label{fig:phase0}
\end{center}}
\end{figure}
The second step of the construction is to perturbatively add a small,
non-diagonal, resonance term $tV$:
\begin{equation}
\psfrag{t}{$t$}
\scalebox{0.5}{\includegraphics{v.eps}}\\
\label{eq:V}
\end{equation}
The sum is over all plaquettes in the lattice. Depending on the
local dimer configuration of the wavefunction, the individual terms in
this sum will either annihilate the state or flip the local cluster of
dimers as shown in Fig.~\ref{fig:flip}. The action of this operator
on the domain wall states (Fig.~\ref{fig:generic}) is confined to the
boundaries between staggered and columnar regions and effectively makes
the domain walls fluctuate. Notice that Eq.~\ref{eq:V}
is equivalent to two actions of the familiar Rokhsar-Kivelson two-dimer
resonance term. We expect the basic conceptual argument to apply for a more
general class of perturbations, including the two-dimer resonance, but we consider
the specific form of Eq.~\ref{eq:V} to simplify certain technical aspects of the calculation.
We will elaborate on this more in the next section.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{flip.eps}
\caption{One of the terms in the operator of Eq.~\ref{eq:V} will flip
the circled cluster as shown.}
\label{fig:flip}
\end{center}}
\end{figure}
The degenerate point of Fig.~\ref{fig:phase0} may be viewed as the
degeneracy of an individual vertical column having a staggered or
columnar dimer arrangement. The perturbation of Eq.~\ref{eq:V} lifts this
degeneracy by lowering the energy of configurations with
domain walls by an amount of order $\sim Lt^{2}$ per domain wall, where
$L$ is the linear size of the system. Therefore, the system favors
one of the states with a maximal number of domain walls and for technical
reasons discussed below, will choose
the one having maximal tilt: the [11] state in Fig.~\ref{fig:ideal}a.
However, the degeneracy between columnar and staggered strips will be
restored by detuning $H_{0}$ from the $t=0$ degenerate point. This
implies the second order phase diagram sketched in Fig.~\ref{fig:phase2}.
\begin{figure}
\begin{tabular}{cc}
\centering
\begin{minipage}{1.5in}
\includegraphics[width=1.5in]{11.eps}
\end{minipage}&
\begin{minipage}{1.5in}
\includegraphics[width=1.5in]{12.eps}
\end{minipage}\\ (a) & (b) \\
\begin{minipage}{1.5in}
\includegraphics[width=1.5in]{13.eps}
\end{minipage}&
\begin{minipage}{1.5in}
\includegraphics[width=1.5in]{14.eps}
\end{minipage}\\ (c) & (d) \\
\begin{minipage}{1.5in}
\includegraphics[width=1.5in]{15.eps}
\end{minipage}&
\begin{minipage}{1.5in}
\includegraphics[width=1.5in]{16.eps}
\end{minipage}\\ (e) & (f) \\
\end{tabular}
\caption{Examples of ideal tilted states. In these states, the
domain walls have the same orientations and are uniformly spaced. The
notation
[1n] denotes the state where one staggered strip is followed by
$n$ columnar strips and so on. It is understood that [1n]
collectively refers to the above states and those related by translational,
rotational, and reflection (i.e.\ switching the orientation of
the staggering) symmetries. The examples drawn here, where it is understood
that what we are seeing is part of a larger lattice, are (a) [11] (b)
[12] (c) [13] (d) [14] (e) [15] (f) [16].}
\label{fig:ideal}
\end{figure}
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=0.37\textwidth]{stairs_phase2-v2.eps}
\caption{(Color online) Ground state phase diagram of $H=H_{0}+tV$ from second order
perturbation theory.}
\label{fig:phase2}
\end{center}}
\end{figure}
Fig.~\ref{fig:phase2} is correct up to error terms of order $t^{4}$.
To this approximation, states having tilt in between the [11] state
and the (flat) columnar state are degenerate on the phase boundary.
Physically, this boundary occurs when the energy from second order
processes which stabilize the staggered domains is precisely balanced by the
energy of a columnar strip. This degeneracy will be partially lifted by going
to higher orders in perturbation theory. We find that at fourth order, a new
phase is stabilized in a region of width $\sim t^{4}$ between the columnar and
[11] phases. This new phase is the one which maximizes the number of
fourth order resonances shown in Fig.~\ref{fig:four} and is the [12]
state (Fig.~\ref{fig:ideal}b). The corrected phase diagram is given
in Fig.~\ref{fig:phase4}.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{four_long_1.eps}
\caption{(Color online) The excited state on the right is obtained from the initial
state by acting twice with the perturbation in Eq.~\ref{eq:V}.}
\label{fig:four}
\end{center}}
\end{figure}
\begin{figure}[h!]
{\begin{center}
\includegraphics[width=0.37\textwidth]{stairs_phase4C-v2.eps}
\caption{(Color online) Ground state phase diagram of $H=H_{0}+tV$ from fourth order
perturbation theory. The width of the [12] phase is order $t^{4}$.}
\label{fig:phase4}
\end{center}}
\end{figure}
Fig.~\ref{fig:phase4} is accurate up to corrections of order $t^6$.
To this approximation, on the boundaries, states with tilts in between the
bordering phases are degenerate. These degeneracies will be partially
lifted at higher orders in the perturbation. At higher orders, there will be
resonances corresponding to increasingly complicated fluctuations of the
staggered lines but at $n$th order, the competition between the [1,n-1]
and columnar phases will always be decided by the $n$th order
generalization of the long resonance in Fig.~\ref{fig:four}.
The competition will stabilize a new [1n] phase
in a tiny region of width $\sim t^{2n}$ between the [1,n-1] and columnar
phases resulting in the phase diagram of Fig.~\ref{fig:phaseN}.
\begin{figure}[t]
{\begin{center}
\includegraphics[width=0.37\textwidth]{stairs_phaseN-v2.eps}
\
\caption{(Color online) Ground state phase diagram of $H=H_{0}+tV$ from $n$th order
perturbation theory. The width of the [1n] phase is of order $t^{2n}$.}
\label{fig:phaseN}
\end{center}}
\end{figure}
\begin{figure}[t]
{\begin{center}
\includegraphics[width=0.37\textwidth]{stairs_phasefull-v2.eps}
\
\caption{(Color online) The boundaries of the [1n] sequence will typically open into
finer phases and subsequently the fine boundaries can themselves open.
While the detailed structure depends on parameters in the model, generically we expect
an incomplete devil's staircase to be realized. In the figure, we have explicitly
drawn the opening of the [11]-[12] boundary but the other boundaries will behave
similarly.}
\label{fig:staircase}
\end{center}}
\end{figure}
We will also find that at higher orders, the individual boundaries of
the [1n] sequence will themselves open revealing finer phase
boundaries, which themselves can open. This leads to the generic
phase diagram sketched in Fig.~\ref{fig:staircase}.
The steps in the [1n] sequence that are stabilized depend on the values of
parameters in the Hamiltonian. However, the conclusion of arbitrarily long
periods being realized is robust. The fine structure of how the [1,n-1]-[1n]
boundaries open is less certain because the dependence on parameters is more intricate
and increasingly complicated resonances need to be accounted for.
However, general arguments indicate that the boundaries will open
and even periods incommensurate with the lattice\cite{footnote:incomm}
are likely to occur in the model, though such states will not be seen at any
finite order of our perturbation theory.
In the terminology of commensurate-incommensurate phase transitions,
the [1n] sequence forms a (harmless) staircase with a ``devil's
top-step''\cite{fishselke80,bak82}. With the openings of these boundaries, beginning in the
[11] state and moving left in Fig.~\ref{fig:staircase} for $t\neq 0$,
the system traverses an incomplete devil's staircase of periodic
states. The subsequent steps in the staircase have progressively
smaller tilts culminating in the flat columnar state. The phase
boundaries are first order transitions. This phase diagram is similar
to what is seen in the classical ANNNI model, where the transitions
are driven by thermal fluctuations.
We make two remarks before launching into the calculation. First,
when we refer to the ``[11] phase'' (for example) what we
precisely mean is that in this region, the ground state wavefunction is a
superposition of dimer coverings that has relatively large overlap
with the state in Fig.~\ref{fig:ideal}a and much smaller overlaps (of
order $t^{2}$, $t^{4}$, etc.) with excited states obtained by acting
on Fig.~\ref{fig:ideal}a with Eq.~\ref{eq:V}. The coefficients follow
from perturbation theory. Second, since Fig.~\ref{fig:staircase} is
obtained using perturbation theory, we can be confident that this
describes our system only in the limit where $t$ is small. In the
classical ANNNI model, numerical evidence indicates that as the small
parameter (the temperature in that case) is increased, the phase
boundaries close into Arnold tongue structures. We do not currently
know if this will occur in our model as $t$ increases.
\section{Details}
\label{sec:details}
In this section, we construct a Hamiltonian using the strategy
outlined in the previous sections. The Hamiltonian $H=H_{0}+tV$
consists of a diagonal term $H_{0}$ and an off-diagonal term $tV$,
which we treat perturbatively in the small parameter $t$.
\subsection{Parent Hamiltonian}
Our parent Hamiltonian $H_{0}$ is the following operator:
\begin{equation}
\scalebox{0.43}{\includegraphics{parent3_1.eps}}\nonumber
\end{equation}
\vspace{-0.95cm}
\begin{equation}
\scalebox{0.43}{\includegraphics{parent3_2.eps}}\hspace{1.9cm}\nonumber
\end{equation}
\vspace{-0.95cm}
\begin{equation}
\scalebox{0.43}{\includegraphics{parent3_3.eps}}\hspace{2cm}
\label{eq:H0}
\end{equation}
The coefficients $a$, $b$, $c$, $d$, $p$, and $q$ are
positive numbers. The symbols used in Eq.~\ref{eq:H0} are
projection operators referring to configurations of dimers on clusters
of plaquettes and the sums are over all such clusters. The notation
``3 more'', etc.\ refers to the given term as well as the terms related
to it by rotational and/or reflection symmetry; for terms $a$ and $b$
these other terms are written out explicitly. Notice that this
Hamiltonian is a sum of local operators and does not break any
symmetries of the underlying square lattice.
\begin{figure}[t]
{\begin{center}
\includegraphics[width=3in]{34_1.eps}
\caption{(Color online) (a) and (b) are the two ways in which a dimer
can participate in three attractive bonds. (c) is the
one way in which a dimer can participate in four attractive bonds.
The attractive bonds are shown by the blue (dark gray) arrows. However, these
configurations also involve repulsive interactions, which are shown
in red (light gray), from terms $c$ and $d$ in the Hamiltonian. (d) is an example
of a state where every dimer has only two attractive bonds, but where
some dimers have two bonds of different types.
These ``kinks'' in the staggered domain walls involve an energy cost
from term $d$ as indicated by the red (light gray) arrows.}
\label{fig:34}
\end{center}}
\end{figure}
Terms $a$ and $b$ are attractive interactions favoring staggered and columnar
dimer arrangements respectively and we study Eq.~\ref{eq:H0} near
$a=b$. Terms $c$ and
$d$ are repulsive interactions and if $c,d\geq a,b$,
the dimers prefer domain wall patterns
(Fig.~\ref{fig:generic}). Terms $p$ and $q$ are repulsive interactions which
determine the phases on the staircase. If
these terms are sufficiently large\cite{infinite} compared to $a,b,c$ and $d$,
the staircase will include phases with arbitrarily long periods.
We begin by showing that when $a=b$, the ground states of $H_{0}$ are
the domain wall states of Fig.~\ref{fig:generic}. We do this by showing that
competitive states must have higher energy. In the domain wall
states, every dimer participates in exactly two attractive interactions and no
repulsive interactions. The only way to achieve a lower energy is for
some dimers to participate in three or four attractive
interactions. This involves local dimer patterns of the form shown in
Fig.~\ref{fig:34}abc. In
Fig.~\ref{fig:34}a, the central dimer participates in one columnar
and two staggered interactions but also two repulsive interactions from
term $d$ in Eq.~\ref{eq:H0}. Similarly,
Figs.~\ref{fig:34}b and \ref{fig:34}c show that if a dimer
participates in more than two staggered interactions, the extra
bonds are penalized by term $c$. If we require $c,d\geq a,b$, these
patterns will result in higher energy states as the
repulsive terms nullify the advantage of having extra attractive bonds.
This also explains why in Fig.~\ref{fig:generic},
the staggered strips have unit width and the staggered and columnar dimers
have opposite orientation.
Of the states where every dimer has two attractive
bonds, the states where some dimers have two bonds of different type
will also have higher energy as shown in Fig.~\ref{fig:34}d. Of the
remaining states, it is readily seen that states where every dimer
participates in either two $a$ bonds or two $b$ bonds, and where there
are some $b$ bonds, must be of the domain wall form. The only other
possibility is the ``herringbone state'' where every dimer has two $a$
bonds (see Fig.~\ref{fig:phase0}). The latter states are part of the
degenerate manifold at $a=b$ but are dynamically inert because in this
state it is not possible to locally manipulate the dimers (without
violating the hard-core constraint). This establishes that when $a=b$, the ground states have the domain
wall form. It is also clear that when $a<b$, the system will maximize
the number of $b$ bonds and when $a>b$, the number
of $a$ bonds. Therefore, we obtain the zero temperature phase
diagram in Fig.~\ref{fig:phase0}.
It is useful to see this formally by calculating the energy of each domain wall state.
For concreteness, we assume the translational
symmetry is broken in the $x$ direction. A configuration with
$N_{s}$ staggered strips and $N_{c}$ columnar strips has energy:
\begin{eqnarray}
E(N_{s}, N_{c})&=& -a N_{y}N_{s} -b N_{y}N_{c}\nonumber\\&=&
-b\frac{N_{y}N_{x}}{2}+(b-a)N_{y}N_{s}
\label{eq:energy0}
\end{eqnarray}
where $N_{x}$ and $N_{y}$ are the dimensions of the lattice (the
lattice spacing is set to unity). We have used the relation
$N_{s}+N_{c}=\frac{N_{x}}{2}$. When $a=b$, the
domain wall states are degenerate and the energy scales with the total
number of plaquettes $N_{x}N_{y}$. If $a<b$, the system prefers the
minimal number of staggered strips, which is the columnar state. If
$a>b$, the herringbone configuration has lower energy than any
domain wall state.
Notice that all of these states are separated by a gap of order
$a$ or $b$ from the nearest excited states obtainable by
local manipulations of dimers. Since we will be
interested mainly in the difference $a-b$, the individual size of $a$
(or $b$), which sets the scale of this gap, can be made arbitrarily large.
\subsection{Perturbation}
We now consider the effect of perturbing the parent Hamiltonian
(\ref{eq:H0}) with the non-diagonal resonance term given in
Eq.~\ref{eq:V}:
\begin{equation}
\scalebox{0.5}{\includegraphics{v.eps}}\\
\label{eq:V2}
\end{equation}
We assume $t\ll 1,a,b$ and consider $t$ as a
small parameter in perturbation theory. We examine how the
degeneracies in the $t=0$ phase diagram (Fig.~\ref{fig:phase0}) get
lifted when $t\neq 0$. The technical complications of degenerate
perturbation theory do not arise because different domain wall
states are not connected by a finite number of applications of
this operator. Eq.~\ref{eq:V2} is equivalent to two applications of the
familiar two-dimer resonance of Rokhsar-Kivelson. The mechanism
we now discuss can, in principle, be made to work for even
this two-dimer term, but there are additional subtleties which will be
mentioned.
\subsubsection{Second order}
An even number of applications of the operator in Eq.~\ref{eq:V2} is
required to connect a domain wall state back to itself.
Therefore, to linear order in $t$, the energies of these states are
unchanged. To second order in $t$, the energy shift of state
$|n\rangle$ is given by:
\begin{eqnarray}
E_{n}&=&
\epsilon_{n}-t^{2}\sum_{m}' \frac{V_{nm}V_{mn}}
{\epsilon_{m}-\epsilon_{n}}+O(t^{4})
\label{eq:pert2}
\end{eqnarray}
$\epsilon_{n}$ is the unperturbed energy of state $|n\rangle$ as
given by Eq.~\ref{eq:energy0}. The primed summation is over all dimer
coverings except the original state $|n\rangle$. The terms in the
sum which give nonzero contribution correspond to states connected to the
initial state by a single flipped cluster. These terms may be
interpreted as virtual processes taking the initial state to and from higher
energy intermediate states, which may be viewed as quantum
fluctuations of the staggered lines.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{singleflip.eps}
\caption{(Color online) The three types of intermediate states obtained by
acting once with the perturbation of Eq.~\ref{eq:V2} on the domain wall states.
The blue/dark gray (red/light gray) arrows denote attractive (repulsive) interactions that are
present in the initial (left) or excited (right) state but not both. The
circled cluster of the excited state of (3) is another such interaction.
(1) is a staggered line next to a columnar region of greater than unit width.
(2) and (3) are staggered lines separated by one columnar strip
from another staggered line of the same and opposite orientation
respectively. Notice that relative to the excited state of (1), the excited
state of (2) has 2 additional attractive $a$ bonds and 2 additional
repulsive $c$ bonds. Similarly, excited state (3) has 1 additional
attractive $b$ bond, 2 additional repulsive $d$ bonds, and a
repulsive $p$ interaction. Since $c,d\geq a,b$, cases (2) and (3)
involve higher energy intermediate states than (1).}
\label{fig:singleflip}
\end{center}}
\end{figure}
The resonance energy of a particular staggered line depends on how the dimers
next to the line are arranged. Fig.~\ref{fig:singleflip} shows the three
possible arrangements, which determine the energy denominators that
may enter Eq.~\ref{eq:pert2}: the staggered line may be of type (1), next to a
columnar region of greater than unit width
(Fig.~\ref{fig:singleflip}-1), or separated by one columnar strip
from another staggered line, either of type (2), having the same orientation
(Fig.~\ref{fig:singleflip}-2), or of type (3), having the opposite orientation
(Fig.~\ref{fig:singleflip}-3).
From the figure, we see that cases (2) and (3) involve higher energy
intermediate states than case (1) but allow for a denser packing of
lines.
If $p$ is sufficiently large, states with type (3) lines will
be disfavored as ground states. Ignoring such states, we update
Eq.~\ref{eq:energy0} to include second order corrections. For
convenience, we set $c=a$ which removes the distinction between
cases (1) and (2),
\begin{eqnarray}
E(N_{s})&=&
-b\frac{N_{y}N_{x}}{2}+\left(b-a-\frac{t^{2}}{4d+2b}\right)N_{y}N_{s}
\label{eq:energy2}
\end{eqnarray}
We may use this to update the ground state phase diagram.
If $b>a+\frac{t^{2}}{4d+2b}$, the system is optimized when
$N_{s}=0$, which is the columnar phase. If $b<a+\frac{t^{2}}{4d+2b}$,
the best domain wall state is the one with maximal staggering but
without case (3) lines. This corresponds to the [11] state
(Fig.~\ref{fig:ideal}a) in which every staggered strip has the same orientation.
As $a-b$ is further increased, the herringbone state will eventually
be favored. The boundary between the [11] and herringbone state may
be determined by comparing energies. The energy of the [11] state is:
\begin{eqnarray}
E_{[11]}=(-b-a-\frac{t^{2}}{4d+2b})\frac{N_{x}N_{y}}{4}+O(t^{4})
\end{eqnarray}
while the herringbone state has energy $E_{h}=-a\frac{N_{x}N_{y}}{2}$.
From this, it follows that the [11] state will be favored when:
\begin{eqnarray}
a-\frac{t^{2}}{4d+2b}<b<a+\frac{t^{2}}{4d+2b}
\end{eqnarray}
while the herringbone state will occur when
$b<a-\frac{t^{2}}{4d+2b}$.
Therefore, up to corrections of order $t^{4}$, the system has the
phase diagram shown in Fig.~\ref{fig:phase2}. Because the coefficient of
$N_{s}$ in Eq.~\ref{eq:energy2} is
zero on the [11]-columnar boundary, we have that the [11] state,
columnar state, and any domain wall state with intermediate tilt
(that contains only lines of type (1) and (2)) are degenerate on the
boundary. In contrast, on the [11]-herringbone boundary, only the two states
are degenerate.
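These energy comparisons are simple enough to tabulate numerically. A minimal sketch (using Eq.~\ref{eq:energy2} with the choice $c=a$ made above, and with purely illustrative parameter values) is:
\begin{verbatim}
def E_dw(Ns, Nx, Ny, a, b, d, t):
    # Second-order energy of a domain wall state, Eq. (energy2)
    res = t**2 / (4*d + 2*b)        # per-strip resonance gain
    return -b*Nx*Ny/2 + (b - a - res)*Ny*Ns

def phase(a, b, d, t, Nx=8, Ny=8):
    E_col  = E_dw(0, Nx, Ny, a, b, d, t)        # columnar
    E_11   = E_dw(Nx // 4, Nx, Ny, a, b, d, t)  # maximal Ns
    E_herr = -a*Nx*Ny/2                         # herringbone
    states = [("columnar", E_col), ("[11]", E_11),
              ("herringbone", E_herr)]
    return min(states, key=lambda s: s[1])[0]

# Sweeping b across a = 1 reproduces the sequence of phases
# in Fig. (phase2): herringbone, [11], columnar.
for b in (0.9, 1.0, 1.1):
    print(b, phase(a=1.0, b=b, d=1.0, t=0.3))
\end{verbatim}
Here $N_{s}=N_{x}/4$ is the largest number of staggered strips attainable without introducing lines of type (3).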
In Appendix \ref{app:ca}, the more general case, where $c>a$, is
discussed and the resulting phase diagram is shown in
Fig.~\ref{fig:phase2b}. An additional phase is
stabilized in a region of width $\sim (c-a)t^{2}$ (assuming $|c-a|$ is finite)
between the [11] and columnar states. In this new phase, labelled $A_{2}$, any state where
adjacent staggered lines are separated by two columns, including the
[12] state (Fig.~\ref{fig:ideal}b), is a ground state. These are the
states which maximize the number of type (1) staggered lines
(Fig.~\ref{fig:singleflip}-1). On
the [11]-$A_{2}$ boundary, intermediate states where adjacent
staggered lines are separated by one or two columns (and where there
are no type (3) lines) are degenerate.
On the $A_{2}$-columnar boundary, states where
adjacent staggered lines are separated by at least two columns are
degenerate.
\begin{figure}[t]
{\begin{center}
\includegraphics[width=0.37\textwidth]{stairs_phase2B-v2.eps}
\caption{(Color online) Second order phase diagram when $c>a$. The [11] phase has
width $\sim t^{2}$ and the $A_{2}$ phase has
width $\sim (c-a)t^{2}$. On the [11]-$A_{2}$ and the
$A_{2}$-columnar boundaries, intermediate domain wall states are
degenerate as described in the text.}
\label{fig:phase2b}
\end{center}}
\end{figure}
Fig.~\ref{fig:phase2b} may be understood intuitively by noting that
at $a=b$ and $t=0$, a staggered strip has the same energy as a columnar strip.
The resonance terms lower the effective energy of a staggered strip
and since the [11] state involves the most staggered strips, its
energy will be lowered the most. As $b$ increases to the point where
the degeneracy between columnar strips and type (2) lines is restored,
the system will prefer to maximize the number of type (1) lines, which
are individually more stable but loosely packed. This is the
transition to the $A_{2}$ phase. As $b$ is increased further, the
degeneracy between columnar strips and type (1) lines is restored and
there is a transition to the columnar state. Note that if $|c-a|$ is
very large, the [11]-$A_{2}$ boundary can occur in the $a>b$ region.
Before proceeding with the calculation, we clarify two points of
potential confusion. First, when we say the ``[11]
state is stabilized'' over part of the phase diagram, what we precisely
mean is that the ground state wavefunction is a superposition of dimer
coverings that has relatively large overlap with
the literal [11] state of Fig.~\ref{fig:ideal}a and much smaller
overlaps (of order $t^{2}$) with the excited states obtained by acting
on the [11] state once with the perturbation of Eq.~\ref{eq:V2} (Fig.~\ref{fig:flip}).
We use the notation [11] to denote both the perturbed wavefunction, which is an
eigenstate of the perturbed Hamiltonian, and the literal [11] state, which is an
eigenstate of the unperturbed Hamiltonian. Second, the phase
boundaries are based on a competition
between a resonance term, which is a quantum version of ``entropy'', and
{\em part} of the zeroth order piece which, continuing the classical
analogy, is like an internal energy. This does not contradict the spirit of
perturbation theory because the {\em full} zeroth order term,
$-b\frac{N_{y}N_{x}}{2}+(b-a)N_{y}N_{s}$, is always larger than the
second order correction.
A similar phase diagram will occur (though at fourth order in perturbation theory) if we use the
two-dimer move of Rokhsar-Kivelson. However, the bookkeeping will be more complicated
because resonances will be able to originate in the interior of the columnar regions instead of
just at the columnar-staggered boundaries. This may be compensated by tuning
$b$, which will merely move the boundaries, or by adding appropriate (local) repulsive terms to
the parent Hamiltonian.
\subsubsection{Fourth order}
\label{sec:fourth}
We concentrate on Fig.~\ref{fig:phase2b} as it is
more generic than the fine-tuned $c=a$ case of
Fig.~\ref{fig:phase2}. In either case, we expect the degeneracies on
the phase boundaries to be partially lifted by considering higher orders in
perturbation theory. In this section, we focus on the
$A_{2}$-columnar boundary, where adjacent lines are
separated by at least two columnar strips.
We return to
the perturbation series for the energy (Eq.~\ref{eq:pert2}),
this time keeping terms up to fourth order in the small parameter.
\begin{eqnarray}
E_{n}&=&
\epsilon_{n}-t^{2}\sum_{m}'\frac{V_{nm}V_{mn}}{\epsilon_{m}-\epsilon_{n}}
\nonumber\\
&-& t^{4}\Bigl[\sum_{mlk}'\frac{V_{nm}V_{ml}V_{lk}V_{kn}}
{(\epsilon_{m}-\epsilon_{n})(\epsilon_{l}-\epsilon_{n})(\epsilon_{k}-\epsilon_{n})}
\nonumber\\&-&\sum_{ml}'\frac{V_{nm}V_{mn}V_{nl}V_{ln}}{(\epsilon_{m}-\epsilon_{n})^{2}
(\epsilon_{l}-\epsilon_{n})
}\Bigr]+ O(t^{6})
\label{eq:pert4}
\end{eqnarray}
As usual, the primes denote that the sum is over all states except
$|n\rangle$. The two fourth order terms correspond, in
conventional Rayleigh-Schr\"{o}dinger perturbation theory, to the
corrections to the energy expectation value $\langle\psi|H|\psi\rangle$
and wavefunction normalization $\langle\psi|\psi\rangle$. We will
refer to the corresponding terms as ``energy'' and ``wavefunction''
terms. As in the second order case, we may view the terms in
the fourth order sums as virtual resonances connecting the initial state to
itself via a series of high energy intermediate states. For this
reason, we will refer to the summands as (fourth-order) ``resonances''.
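As an aside, the structure of Eq.~\ref{eq:pert4} is easy to verify
numerically. The following Python sketch (our own illustration, not part
of the original analysis) compares the truncated series against exact
diagonalization for a small toy Hamiltonian; the bipartite structure
imposed on $V$ is an assumption that mimics the dimer moves, for which
all odd orders in $t$ vanish.
\begin{verbatim}
import numpy as np

# Toy check of Eq. (pert4): H = H0 + t*V with nondegenerate
# H0 = diag(eps); V only connects even- and odd-indexed states,
# so all odd orders in t vanish (as for the dimer moves in the text).
rng = np.random.default_rng(1)
dim, n, t = 6, 0, 0.05
eps = np.concatenate(([0.0], np.sort(rng.uniform(1.0, 5.0, dim - 1))))
V = rng.normal(size=(dim, dim)); V = (V + V.T) / 2
parity = np.arange(dim) % 2
V[parity[:, None] == parity[None, :]] = 0.0   # keep even<->odd elements

others = [m for m in range(dim) if m != n]
d = {m: eps[m] - eps[n] for m in others}

E2 = sum(V[n, m] * V[m, n] / d[m] for m in others)
A = sum(V[n, m] * V[m, l] * V[l, k] * V[k, n] / (d[m] * d[l] * d[k])
        for m in others for l in others for k in others)  # "energy" term
B = sum(V[n, m] * V[m, n] * V[n, l] * V[l, n] / (d[m]**2 * d[l])
        for m in others for l in others)            # "wavefunction" term

E_pert = eps[n] - t**2 * E2 - t**4 * (A - B)
E_exact = np.linalg.eigvalsh(np.diag(eps) + t * V)[0]
print(abs(E_pert - E_exact))   # residual error should be O(t^6)
\end{verbatim}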
Most terms in the double sum in Eq.~\ref{eq:pert4} correspond to
resonances between disconnected clusters
(Fig.~\ref{fig:disconnect}). Referring to the figure, we use the term
``disconnected'' to indicate that there are no interaction terms connecting
the dimers of clusters 1 and 2. While the number of such resonances
scales as the square of the system size, the contributions from the
energy and wavefunction terms in Eq.~\ref{eq:pert4} precisely cancel
for these disconnected clusters. The details of this cancellation
are discussed in Appendix \ref{app:disconnect}.
The remaining fourth order resonances are extensive in number and may be
grouped into three categories. In the first category, shown in
Fig.~\ref{fig:four_line}, are resonances associated with a single
staggered line and the number of such resonances in the system is proportional
to the number of lines. We refer to these resonances as
``self-energy corrections''.
In the second category, shown in Fig.~\ref{fig:four_suppress}, there are
resonances that contribute to the effective interactions between adjacent lines.
These resonances occur only in states where lines are
separated by two or fewer columnar strips. Because we are interested
in the $A_{2}$-columnar boundary, the only processes to
consider are the ones shown in the figure. The purpose of terms $p$
and $q$ in Eq.~\ref{eq:H0} is to control the processes in
Figs.~\ref{fig:four_suppress}a and b respectively. The net
contribution of these resonances involves both energy and wavefunction
terms. For example, the contribution of Fig.~\ref{fig:four_suppress}b
to the energy is:
\begin{equation}
e=\frac{2t^{4}}{2(4d+2b)^{3}}\Bigl[1-\frac{1}{1+\frac{2q-a}{2(4d+2b)}}\Bigr]
\label{eq:e}
\end{equation}
and likewise for Fig.~\ref{fig:four_suppress}a (replace $2q-a$ with
$2p-b$). If $q>a/2$, the net contribution of this resonance is repulsive.
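As a concrete illustration, Eq.~\ref{eq:e} can be evaluated directly;
the parameter values in the following Python sketch are placeholders of
our own choosing, not values used elsewhere in the text.
\begin{verbatim}
# Net contribution of the resonance of Fig. (four_suppress)b, Eq. (e);
# positive e means a repulsive net contribution, setting in for q > a/2.
def e_net(t, a, b, d, q):
    D = 4 * d + 2 * b
    return 2 * t**4 / (2 * D**3) * (1 - 1 / (1 + (2 * q - a) / (2 * D)))

for q in (0.0, 0.5, 2.0):   # with a = 1, the sign flips at q = a/2
    print(q, e_net(t=0.1, a=1.0, b=1.0, d=1.0, q=q))
\end{verbatim}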
The third type of resonance is the long resonance, shown
in Fig.~\ref{fig:four_long}. These resonances occur in the
energy term of Eq.~\ref{eq:pert4} but do not have corresponding pieces
in the wavefunction term. Therefore, these resonances will always
lower the energy though, as the figure indicates, the precise amount
depends on the way the lines are spaced.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{disconnected.eps}
\caption{(Color online) Most of the fourth order terms in Eq.~\ref{eq:pert4}
involve ``disconnected'' clusters of dimers. In this figure,
the perturbation connects the initial state (0) to excited states, labeled
(1) and (2), depending on whether cluster 1 or 2 has been flipped. Acting
again with the perturbation connects to an excited
state, labeled (12), where both of these clusters are flipped.
Acting two more times with the perturbation brings us back to the
initial state (0) via either of excited states (1) or (2). Such
terms are called disconnected because there are no interactions (in
Eq.~\ref{eq:H0}) between the dimers of clusters 1 and 2. The figure depicts a
particular resonance from the energy term in Eq.~\ref{eq:pert4}.
There is an analogous contribution from the wavefunction term which is
a product of the second order processes $(0)\rightarrow(1)\rightarrow(0)$
and $(0)\rightarrow(2)\rightarrow(0)$. While the number of such disconnected
terms scales as $N^{2}$, where $N=L_{x}L_{y}$ is the system size,
these resonances do not contribute to the energy because the
contributions from the energy and wavefunction terms exactly cancel.}
\label{fig:disconnect}
\end{center}}
\end{figure}
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{four_line.eps}
\caption{(Color online) These are examples of fourth order self-energy resonances.
Each resonance is confined to a single line and the number of
resonances in the system is proportional to the number of lines.
In resonances such as (a), there are interactions
connecting dimers of two flipped clusters on the same line.
Terms such as (b) arise
from the wavefunction term in Eq.~\ref{eq:pert4} but have no analogous
processes in the energy term to cancel against.}
\label{fig:four_line}
\end{center}}
\end{figure}
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{four_suppress.eps}
\caption{(Color online) These are examples of fourth order resonances which are
effective interactions between lines. At fourth order, such interactions are
possible only when lines are separated by two or fewer columns. On
the $A_{2}$-columnar boundary, resonances (a) and (b) are the only
processes to consider. These involve terms $p$ and $q$ in the
Hamiltonian, as indicated by the circles.}
\label{fig:four_suppress}
\end{center}}
\end{figure}
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{four_long.eps}
\caption{(Color online) These are the long fourth order resonances which occur in
states where lines are separated by two or more columnar strips.
These processes are always stabilizing, though the amount depends on
the environment of the line as in the second order (see
Fig.~\ref{fig:singleflip}). The resonance in (a) is strongest because
it involves the lowest energy intermediate state but (b) and (c) allow
for a denser packing of lines. Resonance (c) is especially
suppressed due to term $p$ in Eq.~\ref{eq:H0}.}
\label{fig:four_long}
\end{center}}
\end{figure}
An immediate implication of these resonances is the lifting of the
degeneracy of the $A_{2}$ phase. All of these states have the same
number of lines, so they will receive the same self-energy contribution
(Fig.~\ref{fig:four_line}). If we choose $p,q$ large compared to
$a,b,c$ and $d$, then the repulsive contribution from the interaction
resonances (Fig.~\ref{fig:four_suppress}) is essentially determined
by the wavefunction term, which is the same for all of the $A_{2}$
states. The degeneracy is broken by the long resonances because in
the [12] state, only Fig.~\ref{fig:four_long}b processes occur
while in the other $A_{2}$ states, some of the resonances are the
suppressed Fig.~\ref{fig:four_long}c variety. Therefore, what was
seen as merely an $A_{2}$ phase at second order is revealed, on
closer inspection, as a [12] phase.
To investigate the degeneracy of what we now recognize as the [12]-columnar
boundary, it is useful to update Eq.~\ref{eq:energy2} to include
fourth order corrections:
\begin{eqnarray}
E &=&
-b\frac{N_{y}N_{x}}{2}+(b-a-\frac{t^{2}}{4d+2b}+\alpha
t^{4})N_{y}N_{s}\nonumber\\
&-&\beta t^{4}N_{y}N_{sa}-\gamma t^{4} N_{sb}+O(t^{6})
\label{eq:energy4}
\end{eqnarray}
where $N_{s}$ is the total number of staggered lines and $N_{sa(b)}$ is
the number of staggered lines having the environment of
Fig.~\ref{fig:four_long}a(b). We ignore states with arrangements like
Fig.~\ref{fig:four_long}c since they are disfavored as ground states.
$\alpha$ is a
constant, which may be calculated but whose value is unimportant,
containing the contribution of fourth order self-energy terms.
$\beta=\frac{1}{(4d+2b)^{2}(2(4d+2b)+4(d-a))}>0$ is the
contribution of the most favorable long fourth order resonances
(Fig.~\ref{fig:four_long}a)
whose number is proportional to $N_{sa}$.
$\gamma=\frac{1}{(4d+2b)^{2}(2(4d+2b)+4(d-a)+2(c-a))}-2e$ ($e$ given
by Eq.~\ref{eq:e}) is the coefficient of $N_{sb}$ and includes the
contributions of Figs.~\ref{fig:four_suppress}b and \ref{fig:four_long}b.
Note that $\gamma<\beta$ and the sign of $\gamma$ is determined by
the size of $q$. For convenience, we assume $q$ large enough that
$\gamma<0$.
We may use Eq.~\ref{eq:energy4} to correct the phase diagram.
Similar to the second order case, as $b$ is increased, the extra stability of
the staggered strips in the [12] state is eventually balanced by the
zeroth order energy of the columnar strips. When this occurs, the
system will prefer a state with fewer lines that are individually more
stable. In particular, the states we may label $A_{3}$, where
adjacent lines are separated by three columns, maximize the number of
favorable long resonances (Fig.~\ref{fig:four_long}a) without incurring
any of the repulsive fourth order penalties (i.\ e.\ the analog of
Fig.~\ref{fig:four_suppress} would be a disconnected resonance).
In the next section, higher order perturbation theory will show that this $A_{3}$ phase is
actually a [13] phase (Fig.~\ref{fig:ideal}c) so we
begin using the [13] label immediately. As $b$ is increased further,
the system will enter the columnar state.
The result is the phase diagram in Fig.~\ref{fig:phase4b}.
The phase boundaries are determined by comparing energies.
Ignoring corrections of order $t^{6}$, we have the following:
\begin{eqnarray}
E_{[12]}=-b\frac{N_{y}N_{x}}{2}+(b-a-\frac{t^{2}}{4d+2b}+\alpha
t^{4}-\gamma t^{4})\frac{N_{y}N_{x}}{6}\nonumber\\
\label{en12}
\end{eqnarray}
\begin{eqnarray}
E_{[13]}=-b\frac{N_{y}N_{x}}{2}+(b-a-\frac{t^{2}}{4d+2b}+\alpha
t^{4} - \beta t^{4})\frac{N_{y}N_{x}}{8}\nonumber\\
\label{en13}
\end{eqnarray}
\begin{equation}
E_{col}=-b\frac{N_{y}N_{x}}{2}
\end{equation}
Comparing these expressions, we obtain the updated phase diagram shown in
Fig.~\ref{fig:phase4b}. The [12] state is favored when:
\begin{equation}
b<a+\frac{t^{2}}{4d+2b}-\alpha t^{4}-(4|\gamma|+3\beta)t^4
\label{eq:boundary4}
\end{equation}
The [13] state is favored when:
\begin{eqnarray}
a&+&\frac{t^{2}}{4d+2b}-\alpha t^{4}-(4|\gamma|+3\beta)t^{4}< b\nonumber\\&<&
a+\frac{t^{2}}{4d+2b}-\alpha t^{4}+\beta t^{4}\nonumber\\
\end{eqnarray}
The columnar state is favored when:
\begin{equation}
b>a+\frac{t^{2}}{4d+2b}-\alpha t^{4}+\beta t^{4}
\end{equation}
On the [12]-[13] boundary, there is a degeneracy between intermediate
states where adjacent lines are separated by either two or three
columnar strips. On the [13]-columnar boundary, there is a degeneracy
between states where adjacent staggered lines are separated by
at least two columns.
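The phase-boundary conditions above follow from a direct comparison of
Eqs.~\ref{en12}--\ref{en13} with $E_{col}$. The short Python sketch
below (an illustration of ours; the parameter values are arbitrary
assumptions consistent with the text, e.g. $\gamma<0$) scans $b$ and
reports the [12]--[13]--columnar sequence:
\begin{verbatim}
import numpy as np

# Energies per plaquette; the common -b/2 term drops out of comparisons.
a, d, t = 1.0, 1.0, 0.05
alpha, beta, gamma = 1.0, 0.02, -0.01   # gamma < 0 (large q), as assumed

b = np.linspace(1.0, 1.001, 200001)
X = b - a - t**2 / (4 * d + 2 * b) + alpha * t**4
E = np.stack([(X - gamma * t**4) / 6.0,   # E_[12] (Eq. en12)
              (X - beta * t**4) / 8.0,    # E_[13] (Eq. en13)
              np.zeros_like(b)])          # E_col
phase = np.array(["[12]", "[13]", "columnar"])[np.argmin(E, axis=0)]
for i in np.flatnonzero(phase[1:] != phase[:-1]):
    print(f"b = {b[i + 1]:.7f}: {phase[i]} -> {phase[i + 1]}")
\end{verbatim}
The reported crossings reproduce Eq.~\ref{eq:boundary4} and the
subsequent inequalities, with a [13] window of width $\sim t^{4}$.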
\begin{figure}[t]
{\begin{center}
\includegraphics[width=0.37\textwidth]{stairs_phase4B-v2.eps}
\caption{(Color online) Fourth order phase diagram. The new [13] state has width $\sim
t^{4}$.}
\label{fig:phase4b}
\end{center}}
\end{figure}
\subsubsection{Higher orders and fine structure}
\label{sec:sixth}
The picture of Fig.~\ref{fig:phase4b} will be further refined
by considering sixth order resonances, and new phases
will appear near the phase boundaries in regions of width $\sim t^{6}$,
which is why they were missed at fourth order. The most immediate
consequence will be the lifting of the degeneracy of the $A_{3}$ states in
favor of the state [13]. The latter state, in comparison with the other $A_{3}$
states, is both stabilized maximally by the sixth order analog of
Fig.~\ref{fig:four_long} and, if we choose $p>q$, destabilized minimally
by the sixth order analog of Fig.~\ref{fig:four_suppress}.
The [13]-columnar phase boundary will open to reveal the [14] phase
(Fig.~\ref{fig:ideal}d)\cite{footnote:14}, in which the number of
favorable long resonances, the sixth order analogs of
Fig.~\ref{fig:four_long}a, is maximized and there are no repulsive
contributions (i.\ e.\ the sixth order analogs of
Fig.~\ref{fig:four_suppress} will be disconnected terms if the lines
are more than three columnar strips apart).
The argument may be applied iteratively at higher orders in
perturbation theory. At $2n$-th order, we may ask whether the
[1n]-columnar boundary will open to reveal a new phase.
The transition to less tilted states will again be
driven by processes that connect adjacent lines
(Fig.~\ref{fig:nthorder}). In the competitive states,
adjacent lines are separated by at least $n$ columnar strips so
$2n$-th order resonances connecting the lines must be ``straight''.
This means that the complicated
high order processes, including ``snake-like'' fluctuations that break the
staggered lines, will simply change the self-energy of a staggered
line and do not have any effect on the transition. The [1n] phase will be
destabilized by the process in Fig.~\ref{fig:nthorder}b which will
overwhelm the stabilizing effect of Fig.~\ref{fig:nthorder}a due to
combinatorics. However, these repulsive processes will not contribute when
the lines are separated by more than $n$ columnar strips. Therefore,
the [1,n+1] phase, which maximizes the number of the long $2n$-th
order resonances (Fig.~\ref{fig:nthorder}c), will be stabilized in a region of
width $\sim t^{2n}$ between the [1n] and columnar phases.
Therefore, we obtain the phase diagram of Fig.~\ref{fig:phaseN}.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=2.5in]{nthorder.eps}
\caption{(Color online) These are the 12-th order resonances which drive the transition between
the [16] and [17] states. Process (a) selects the [16] state from
the others in the $A_{6}$ manifold, which were stabilized equally at
10th order. Process (b) destabilizes the [16] state near its
columnar boundary in favor of the [17] state, which maximizes the number of
most favorable resonances (c).}
\label{fig:nthorder}
\end{center}}
\end{figure}
This shows that states with arbitrarily long periods are stabilized
without long range interactions or fine tuning (other than the requirements of perturbation
theory). The situation would be similar if we were to use the two-dimer Rokhsar-Kivelson
resonance. In this case, there will be contributions from resonances occurring only in the
columnar regions, which were ``inert'' in our calculation. These processes will amount to
self-energy corrections that just renormalize the columnar energy scale $b$. Also, additional local terms (i.e. other than $p$ and $q$) may be required to realize the very high-order states because adjacent lines will be able to interact via intermediate states other than the ones shown in Fig.~\ref{fig:four_suppress}. While we have not worked out the exact details of this case, we note that there are
a finite number of such intermediate states, so only a finite number of {\em local} terms will be
required. In particular, we will {\em not} have to add longer terms at each order in perturbation
theory.
So far, we have concentrated on the boundary with the columnar phase but we may also
ask whether a similar lifting may occur on the other phase
boundaries. We consider the [11]-[12] boundary. Both of these
phases are stabilized at second order and occupy regions of width
$\sim t^{2}$ in the phase diagram (assuming $|c-a|\gg t^{2}$). On
their boundary, all states where staggered lines are separated by
either one or two columnar strips are degenerate to second order. We
investigate the effect of fourth order resonances on this boundary.
We need to consider not only the resonances presented in section
\ref{sec:fourth} but also new fourth order processes which become
available once we consider staggered lines that are one column apart.
These are shown in Figs.~\ref{fig:boundary1112} and
\ref{fig:boundary1112b}. The resonances in section \ref{sec:fourth}
will stabilize (or destabilize -- the sign is not important)
each boundary state by an amount proportional
to the number of its ``[12] regions'', i.\ e.\ columnar regions that are
two columns wide and have staggered lines on their boundary, while the resonances in Fig.~\ref{fig:boundary1112}
will contribute an amount proportional to the number of ``[11]
regions'', i.\ e.\ columnar regions that are one column wide. If
these were the only available processes, the [11]-[12] boundary would
be shifted by $\sim t^{4}$ but the degeneracy on the boundary would
remain.
The possibility of a new phase is determined by the resonances in
Fig.~\ref{fig:boundary1112b}. Both of these resonances have an overall stabilizing effect (the contributions to the energy are negative)
and depend on whether a [11] region is adjacent to another [11]
region (Fig.~\ref{fig:boundary1112b}a) or a [12] region
(Fig.~\ref{fig:boundary1112b}b). Because $c>a$, resonance (b) is
stronger than (a), since its intermediate state has lower energy, but requires
a lower density of staggered lines. If the net contribution of
resonance (a) wins, then the degeneracy would be lifted in favor of
the [11] state and the [11]-[12] boundary would be shifted again, but
the degeneracy will only be between the two states.
However, if $c$ is made sufficiently large\cite{foot_csize}, the
contribution of resonance (a) tends to zero while (b) approaches a constant
because the intermediate state can occur without involving $c$ bonds
(i.\ e.\ the fourth order process where the left cluster is flipped
first and last). Therefore, there will be a new phase where resonance
(b) is maximized. This state is the [11-12] state where the label
refers to the repeating unit
``one staggered strip followed by one columnar strip followed by
one staggered strip followed by two columnar strips''.
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=3in]{boundary1112.eps}
\caption{(Color online) These are fourth order processes available between two
staggered lines separated by a single columnar strip.}
\label{fig:boundary1112}
\end{center}}
\end{figure}
\begin{figure}[ht]
{\begin{center}
\includegraphics[width=2in]{boundary1112b.eps}
\caption{(Color online) These are fourth order processes which can occur if a [11]
region is next to (a) another [11] region or (b) a [12] region. Both
of these resonances contribute to the energy with a negative sign because the intermediate state
involves two fewer repulsive $c$ bonds than if the flipped clusters
were farther apart.}
\label{fig:boundary1112b}
\end{center}}
\end{figure}
Continuing the line of thought, we may ask whether the [11-12]-[12]
boundary will open at higher orders. Sixth order resonances will
shift the boundary but there are no processes which break the
degeneracy. However, at eighth order, there is a resonance which will
favor a [11-(12)$^{2}$] phase (Fig.~\ref{fig:boundary1112c}).
We can, in principle, investigate whether the [11-(12)$^{n}$] phases
continue to appear when n is large and also whether the new
boundaries themselves open to reveal even finer details.
The same arguments will hold for all of the other boundaries in the [1n]
sequence. While the structure of our Hamiltonian allowed us to be
definite regarding the [1n] sequence, it is more difficult to draw
conclusions about the fine structure at high orders in perturbation
theory because increasingly complicated resonances need to be
accounted for. Most of these resonances will stabilize the
unit cells of the states on either side of the boundary so the net
effect will be to move the boundary. The more important terms, with
respect to whether boundaries will open, are resonances associated
with interfaces between regions with one or another unit cell.
Even these terms can become complicated at very
high orders in perturbation theory. However, our arguments suggest
that arbitrarily complicated phases can, in principle, be stabilized by
going to an appropriate range in parameter space and/or adding
additional local interactions. Therefore, the most generic situation
is an incomplete devil's staircase, as sketched in
Fig.~\ref{fig:staircase}.
\begin{figure}[t]
{\begin{center}
\includegraphics[width=2in]{boundary1112c.eps}
\caption{(Color online) These are eighth order processes which can lead to a new
phase between the [11-12] and [12] phases. Resonance (a) stabilizes
the [11-12] phase while (b) stabilizes the [11-(12)$^{2}$] phase. In
the limit where $c$ is large, resonance (b) is preferable. The
easiest way to see this is by setting $c=\infty$. Then the energy
term of resonance (a) and all of the wavefunction terms in resonances
(a) and (b) will give zero because the
intermediate states can not form without creating $c$ bonds.
However, the intermediate state in
resonance (b) does not involve $c$ bonds and can be obtained without
creating $c$ bonds (i.\ e.\ in an
eighth
order process where the first and last two
actions involve flipping the cluster on the right and the first
cluster in the
middle). Therefore, the energy term of resonance (b) will give a
stabilizing contribution.}
\label{fig:boundary1112c}
\end{center}}
\end{figure}
\subsection{Order of the transitions between the $[1n]$ phases}
The boundaries between different modulated states are generically first-order.
Intuitively, this is not surprising because of the topological property that ``protects" the states, namely that even with an infinite number of {\em local} dimer moves, it is \emph{impossible} to go from one state to the other since the states are in different topological sectors.
Therefore, we would not expect a transition to be driven by a growing correlation length.
Formally, we determine the order of the transitions that
emerge in the system by calculating the first derivative of
the ground-state energy on either side of the phase boundaries that we found.\cite{sachdev_book}
For example, we treat explicitly the case of the
$[12]\rightarrow[13]$ transition, though the same line of argument applies to the other boundaries
as well. The energies of the two states near the phase
boundary are given by Eqs.~\ref{en12} and \ref{en13}, and the phase boundary is
given by the following condition (as can be seen in
Eq.~\ref{eq:boundary4}):
\begin{equation}
b=a+\frac{t^{2}}{4d+2b}-\alpha t^{4} - (4|\gamma|+3\beta) t^4
\label{b4explicit}
\end{equation}
Let us consider the case where we approach the boundary from the $[12]$ side
by varying $t$ while keeping $a,b$ constant. In the phase diagram of
Fig.~\ref{fig:phaseN}, we ``move'' vertically down; this direction is chosen purely for clarity. Let us call the point where our path crosses the phase boundary $A$, and the critical value of $t$, $t_c$ (obtained by solving Eq.~\ref{b4explicit} for fixed values of $a,b,d$).
The energies of the two states at the phase boundary are exactly equal. Their derivatives are:
\begin{eqnarray}
\frac{\partial E_{[12]}}{\partial t}\Bigg|_{A^{+},t_c}&=&-(\frac{2t_c}{4d+2b}-4\alpha t_{c}^3 -4|\gamma|
t_{c}^{3})\frac{N_{y}N_{x}}{6}\nonumber\\ &&
+O(t_{c}^{5})\label{ender1}\\
\frac{\partial E_{[13]}}{\partial t}\Bigg|_{A^{-},t_c}&=&-(\frac{2t_c}{4d+2b}-4\alpha t_{c}^{3} + 4\beta t_{c}^{3})\frac{N_{y}N_{x}}{8}\nonumber\\ &&
+O(t_{c}^{5})
\label{ender2}
\end{eqnarray}
By Eqs.~\ref{b4explicit}, \ref{ender1} and \ref{ender2}, we have:
\begin{eqnarray}
\frac{\partial E_{[12]}}{\partial t}\Bigg|_{A^{+},t_c}-\frac{\partial E_{[13]}}{\partial t}\Bigg|_{A^{-},t_c} = \nonumber\\
\underbrace{\left(\frac{t_c}{2(4d+2b)}-\frac{b-a}{t_c}\right)}_{<0 {\rm\; to\; order}\; O(t^2)} \frac{N_yN_x}{6} + O(t_{c}^5)
\end{eqnarray}
The derivatives are not equal at the phase boundary, so the transition is discontinuous (first-order). It is clear that \emph{all} the phase transitions we found will likewise be discontinuous, because the above discontinuity comes precisely from the contributions responsible for the phase boundary's presence.
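The discontinuity can also be checked numerically. In the Python sketch
below (our illustration; parameter values are arbitrary assumptions, and
energies are per plaquette with the common $-b/2$ term dropped), we
locate $t_{c}$ by bisection on $E_{[12]}-E_{[13]}$ and evaluate the
one-sided slopes:
\begin{verbatim}
a, b, d = 1.0, 1.0004, 1.0
alpha, beta, gamma = 1.0, 0.02, -0.01

def X(t):            # common coefficient in Eqs. (en12)-(en13)
    return b - a - t**2 / (4 * d + 2 * b) + alpha * t**4

def E12(t):
    return (X(t) - gamma * t**4) / 6.0

def E13(t):
    return (X(t) - beta * t**4) / 8.0

lo, hi = 1e-3, 0.2                     # bracket the level crossing
for _ in range(100):                   # bisection
    mid = 0.5 * (lo + hi)
    if (E12(mid) - E13(mid)) * (E12(lo) - E13(lo)) > 0:
        lo = mid
    else:
        hi = mid
tc, h = 0.5 * (lo + hi), 1e-7
print("tc      =", tc)
print("dE12/dt =", (E12(tc + h) - E12(tc - h)) / (2 * h))
print("dE13/dt =", (E13(tc + h) - E13(tc - h)) / (2 * h))
\end{verbatim}
The two slopes differ at $t_{c}$, confirming the first-order character.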
\section{Connections with frustrated Ising models}
\label{sec:spin}
A natural question to ask is whether the staircase presented above has any connection to the staircase of the 3D ANNNI model or the quantum analogs discussed in Ref.~\onlinecite{yeomans95}. One of the main differences of the present work is the non-perturbative inclusion of frustration by considering hard-core dimers as the fundamental degrees of freedom. In this sense, the present staircase differs from previous work similarly to how the fully frustrated Ising model differs from the conventional Ising model. It is instructive to consider the mapping between dimer coverings and configurations of the fully frustrated Ising model on the square lattice (FFSI) in more detail.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{ffsi_staircase.eps}
\caption{(Color online) One of the four possible $[11]$ configurations in terms of Ising spins on the fully frustrated square lattice. The hard-core dimer constraint corresponds to the requirement that the FFSI ground state has one ``unsatisfied'' bond per plaquette. The columnar-dimer regions correspond to ferromagnetic Ising domains. The staggered-dimer strips correspond to the Ising domain walls, depicted by the red-colored (dashed) curved lines, which separate ferromagnetic domains of different orientation. Clearly, in the other equivalent configurations, even though they share the same principle of domain-wall competition, the separated regions are not ferromagnetic but correspond to one of several metamagnetic choices \cite{liebmann_book}.}
\label{fig:ffsi}
\end{figure}
The FFSI model can be described in terms of Ising degrees of freedom living on the square lattice. The main difference with the usual ferromagnetic Ising model is the following: even though all the bonds in the $x$-direction are ferromagnetic, in the $y$-direction there are alternating ferromagnetic (antiferromagnetic) lines where ferromagnetic (antiferromagnetic) vertical bonds live (we take the absolute values of the couplings of all bonds to be equal). In this way, the product of bonds on a single plaquette is always $-1$ (three ferromagnetic bonds per plaquette) and therefore the ground state of the system cannot be simply the ferromagnetic one. In fact, by mapping each ``unsatisfied'' bond to a dimer living on the dual sublattice, we find that each of the degenerate ground-state configurations maps to a hard-core dimer configuration on the square lattice (see Fig.~\ref{fig:ffsi}) \cite{liebmann_book}.
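The mapping is simple enough to state algorithmically. The Python sketch
below (our own illustration; the lattice size and the spin state are
arbitrary) marks a bond as ``unsatisfied'' when $J_{ij}s_{i}s_{j}<0$ and
checks the one-dimer-per-plaquette rule; the ferromagnetic spin state,
for instance, maps to the columnar dimer covering.
\begin{verbatim}
import numpy as np

L = 4
s = np.ones((L, L), dtype=int)   # spins s[x, y]; ferromagnetic state

def Jh(x, y): return 1                        # horizontal: ferromagnetic
def Jv(x, y): return 1 if x % 2 == 0 else -1  # vertical: alternating lines

def unsat_h(x, y): return Jh(x, y) * s[x, y] * s[(x + 1) % L, y] < 0
def unsat_v(x, y): return Jv(x, y) * s[x, y] * s[x, (y + 1) % L] < 0

# plaquette (x, y) has corners (x, y), (x+1, y), (x, y+1), (x+1, y+1)
for x in range(L):
    for y in range(L):
        n_unsat = (unsat_h(x, y) + unsat_h(x, (y + 1) % L)
                   + unsat_v(x, y) + unsat_v((x + 1) % L, y))
        assert n_unsat == 1   # one dimer per dual site: hard-core covering
print("valid hard-core dimer covering")
\end{verbatim}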
A typical [1n] configuration in the dimer language, as depicted in Fig.~\ref{fig:ffsi} for one of the four dimer structures equivalent under $\pi/2$ rotations and sublattice shifts, appears in the FFSI picture as ferromagnetic stripes of length $4n$ in one direction and infinite in the other, separated by antiferromagnetic domain walls. In this way, these ordered states clearly resemble the modulated phases of the ANNNI model in two dimensions. The other three equivalent dimer structures map again to periodic domains in the Ising model, but with a more complicated metamagnetic structure. The reason for this seemingly large complexity is that the possible equilibrium configurations have to satisfy the FFSI constraint.
As far as the interactions are concerned, we have the following correspondences. The three-dimer resonance term we used in our construction corresponds to a flip of two neighboring spins which must, however, respect the FFSI constraint. We should note that the usual single-plaquette resonance move maps to the single-spin flip, which is the same as the Ising transverse field usually considered. The $a$- and $b$-terms correspond to domain-wall energies in the FFSI. Both of them have a one-to-one mapping to three-spin interactions, but these interactions are also anisotropic (they depend on the distribution of the Ising bonds described above). The additional interactions that we added to the system in order to study it extensively correspond to complicated multi-spin interactions.
\section{Discussion}
\label{sec:discuss}
There are reasons to be optimistic that these
ideas apply more generally. For example, as mentioned earlier, we expect that with
suitable modification of Eq.~\ref{eq:H0}, a similar phase diagram
may be obtained for a wide variety of off-diagonal resonance terms,
including the familiar two-dimer resonance of Rokhsar-Kivelson. This
is because the perturbation theory is structured so that at any
order, most of the nontrivial resonances amount to self-energy
corrections and the resonances driving the transitions are
comparatively simpler. The three-dimer resonance of Eq.~\ref{eq:V} is
analytically convenient as its action is confined to the
domain wall boundaries. The two-dimer resonance would involve more
complicated bookkeeping since we also need to account for
internal fluctuations of the columnar regions.
Another reason to expect these ideas to hold more generally is the
qualitative similarity of this approach to the field theoretic
arguments in Refs.~\onlinecite{vbs04} and \onlinecite{fhmos04}. In
those studies, the following action, in the notation of
Ref.~\onlinecite{fhmos04}, was used to describe the tilting
transition in the Rokhsar-Kivelson quantum
dimer model on the honeycomb lattice (the square lattice is similar
but with some added subtleties -- see Ref.~\onlinecite{fhmos04}):
\begin{eqnarray}
S&=&\frac{1}{2}(\partial_{\tau}h)^{2}+\frac{1}{2}\rho_{2}(\nabla
h)^{2}+\frac{1}{2}\rho_{4}(\nabla^{2}h)^{2}\nonumber\\&+& g_{3}(\partial_{x} h)
(\frac{1}{2}\partial_{x}h-\frac{\sqrt{3}}{2}\partial_{y}h)(\frac{1}{2}\partial_{x}h+\frac{\sqrt{3}}{2}
\partial_{y}h)\nonumber\\&+& g_{4}|\nabla h\cdot \nabla h|^{2} + \ldots
\label{eq:action}
\end{eqnarray}
where ``$\ldots$'' includes terms that are irrelevant to the present
discussion (though maybe not strictly ``irrelevant'' in the RG sense).
In this expression, $h$ is a coarse grained version of the height
field (Fig.~\ref{fig:height}) and the first line of Eq.~\ref{eq:action}
describes the tilting transition at the RK point\cite{action}, which
corresponds to $\rho_{2}=g_{3}=g_{4}=0$. If $g_{3}<0$, the system favors tilted
states and is similar to our parameter $a-b$. However, $g_{4}$ prevents the
tilt from taking its maximal value and in this sense, is similar to
our terms $c$ and $d$. The existence of the devil's staircase in
Ref.~\onlinecite{fhmos04} was established by tuning $g_{3}$ and
$g_{4}$ so as to stabilize an intermediate tilt and then to study the
fluctuations about this state. The staircase arose from a
competition between these quantum fluctuations, analogous to our
term $t$, and lattice interactions, (roughly) analogous to our
terms $c$, $d$, $p$, and $q$.
Another sense in which our calculation is similar to Ref.~\onlinecite{fhmos04} may
be seen by heuristically considering the effect of doping the model. In particular, consider
replacing one of the dimers in a staggered strip with two monomers. If we then
separate the monomers in the direction parallel to the stripe, a string of columnar bonds will
be created. If the staggered and columnar bonds were degenerate, then this would
cost no energy in addition to the cost of creating the monomers in the first place so the
monomers would be deconfined. However, in the [1n] phase, the staggered bonds are
slightly favored so the energy cost $E$ of separating the monomers by a distance $R$ would be
$E \sim R t^{2n}$. Hence, the commensurate phases seen in our model are confining with
a confinement length that becomes arbitrarily large for the high-order structures that appear very close to the columnar phase boundary. This is qualitatively similar to the ``Cantor deconfinement" scenario
proposed in Ref.~\onlinecite{fhmos04}.
However, there are ways in which our calculation is qualitatively different from the above. Our calculation takes place in the limit of
``strong-coupling'' where $t$ is
small compared with other terms but
influences the phase diagram nonetheless because the stronger terms
are competing. In contrast, the RK point of a quantum dimer model, by
definition, occurs in a regime of parameter space where quantum fluctuations
are comparable in strength to the interactions. The field theoretic
prediction requires $g_{3}$ and $g_{4}$ to be nonzero, so it does not
literally apply at the RK point either but, by self-consistency,
should apply somewhat ``near'' it. We may speculate that the tilted states
being predicted by the field theory are
large $t$ continuations of states that emerge in the strong coupling
limit far from the RK point. However, we reemphasize that our
calculation is reliable only in the limit of small $t$ and we cannot
be certain which (if any) of our striped phases survive at larger
$t$. Another issue is that the phase diagram near the RK point depends strongly
on the lattice geometry and the prediction of a devil's staircase in
Ref.~\onlinecite{fhmos04} is for bipartite lattices. In contrast, lattice symmetry
does not play an obvious role in the present work and it is likely that these ideas
can be made to apply on more general lattices.
It is also likely that this calculation can be made to work in the
strong-coupling limits of other frustrated models, for example vertex
models\cite{ardonne04} and even in higher dimensions. For example,
mappings similar to those discussed in Ref.~\onlinecite{rms05} may be used to
construct an SU(2)-invariant spin model on a decorated lattice that
displays the same phases. A more interesting direction would be to study the
strong-coupling limits of more physical models, for example the Emery model
of high $T_{c}$ superconductivity\cite{emery87} which also shows an affinity for
nematic ground states\cite{kivfradgeb04}. It would also be
interesting to see whether nematicity is essential, i.\ e.\ whether
other types of phase separation can occur in a purely {\em local}
model through effective long range interactions that arise, as in the
present calculation, from the interplay of kinetic energy and frustration.
\begin{acknowledgments}
The authors would like to thank David Huse, Steve Kivelson, Roderich Moessner, and
Shivaji Sondhi for many useful discussions. In particular, we thank Steve Kivelson for the observation regarding monomer confinement. This work was supported by the
National Science Foundation through grant NSF DMR 0442537 at the University of Illinois, and by the Department of Energy's Office of Basic Sciences through grant DEFG02-91ER45439, through the Frederick Seitz Materials Research Laboratory at the University of
Illinois at Urbana-Champaign (EF). We are also grateful to the Research Board of the University of Illinois at Urbana-Champaign for its support.
\end{acknowledgments}
\section{Introduction}
There is a growing interest in the commercial and recreational use of
drones. This in turn poses a threat to public safety. The Federal
Aviation Administration (FAA) and NASA have reported numerous cases of
drones disturbing the airline flight operations, leading to near
collisions. It is therefore important to develop a robust drone
monitoring system that can identify and track illegal drones. Drone
monitoring is however a difficult task because of diversified and
complex background in the real world environment and numerous drone
types in the market.
Generally speaking, techniques for localizing drones can be categorized
into two types: acoustic and optical sensing techniques. The acoustic
sensing approach achieves target localization and recognition by using a
miniature acoustic array system. The optical sensing approach processes
images or videos to estimate the position and identity of a target
object. In this work, we employ the optical sensing approach by
leveraging the recent breakthrough in the computer vision field.
The objective of video-based object detection and tracking is to detect
and track instances of a target object from image sequences. In earlier
days, this task was accomplished by extracting discriminant features
such as the scale-invariant feature transform (SIFT) \cite{sift} and the
histograms of oriented gradients (HOG) \cite{hog}. The SIFT feature
vector is attractive since it is invariant to object's translation,
orientation and uniform scaling. Besides, it is not too sensitive to
projective distortions and illumination changes since one can transform an
image into a large collection of local feature vectors. The HOG feature
vector is obtained by computing normalized local histograms of image
gradient directions or edge orientations in a dense grid. It provides
another powerful feature set for object recognition.
In 2012, Krizhevsky {\em et al.} \cite{alexnet} successfully demonstrated
the power of the convolutional neural network (CNN) in the ImageNet grand
challenge, a large-scale object classification task.
This work has inspired a lot of follow-up work on the
developments and applications of deep learning methods. A CNN consists
of multiple convolutional and fully-connected layers, where each layer
is followed by a non-linear activation function. These networks can be
trained end-to-end by back-propagation. There are several variants in
CNNs such as the R-CNN \cite{rcnn}, SPPNet \cite{sppnet} and Faster-RCNN
\cite{fasterrcnn}. Since these networks can generate highly
discriminant features, they outperform traditional object detection
techniques by a large margin. The Faster-RCNN
includes a Region Proposal Network (RPN) to find object proposals, and
it can reach real-time computation.
The contributions of our work are summarized below.
\begin{itemize}
\item To the best of our knowledge, this is the first work to use
deep learning technology for the challenging drone detection and
tracking problem.
\item We propose to use a large number of synthetic drone images, which
are generated by conventional image processing and 3D rendering algorithms,
along with a few real 2D and 3D data to train the CNN.
\item We propose to use the residue information from an image sequence
to train and test a CNN-based object tracker. It allows us to track a
small flying object in a cluttered environment.
\item We propose an integrated drone monitoring system that consists of a
drone detector and a generic object tracker. The integrated system
outperforms the detection-only and the tracking-only sub-systems.
\item We have validated the proposed system on several drone datasets.
\end{itemize}
The rest of this paper is organized as follows. The collected drone
datasets are introduced in Sec. \ref{sec:dataset}. The proposed drone
detection and tracking system is described in Sec. \ref{sec:solution}.
Experimental results are presented in Sec. \ref{sec:results}.
Concluding remarks are given in Sec. \ref{sec:conclusion}.
\section{Data Collection and Augmentation}\label{sec:dataset}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/onlinedrone.png}
\caption{Public-Domain Drone Dataset} \label{fig:dataset2}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/uscdrone.png}
\caption{USC Drone Dataset} \label{fig:dataset1}
\end{subfigure}
\end{center}
\caption{Sampled frames from two collected drone datasets.}\label{fig:dataset}
\end{figure}
\subsection{Data Collection}
The first step in developing the drone monitoring system is to collect
drone flying images and videos for the purpose of training and testing.
We collect two drone datasets as shown in Fig. \ref{fig:dataset}. They
are explained below.
\begin{itemize}
\item Public-Domain drone dataset. \\
It consists of 30 YouTube video sequences captured in an indoor or outdoor
environment with different drone models. Some samples in this dataset are
shown in Fig. \ref{fig:dataset2}. These video clips have a frame
resolution of 1280 x 720 and their duration is about one minute. Some
video clips contain more than one drone. Furthermore, some shots are
not continuous.
\item USC drone dataset. \\
It contains 30 video clips shot at the USC campus. All of them were shot
with a single drone model. Several examples of the same drone in
different appearance are shown in Fig. \ref{fig:dataset1}. To shoot
these video clips, we consider a wide range of background scenes,
shooting camera angles, different drone shapes and weather conditions.
They are designed to capture drone's attributes in the real world such
as fast motion, extreme illumination, occlusion, etc. The duration of
each video approximately one minute and the frame resolution is 1920 x
1080. The frame rate is 15 frames per second.
\end{itemize}
We annotate each drone sequence with a tight bounding box around the
drone. The ground truth can be used in CNN training. It can also be used
to check the CNN performance when we apply it to the testing data.
\subsection{Data Augmentation}\label{sec:augmentation}
The preparation of a wide variety of training data is one of the main
challenges in the CNN-based solution. For the drone monitoring task,
the number of static drone images is very limited and the labeling of
drone locations is a labor intensive job. The latter also suffers from
human errors. All of these factors impose an additional barrier in
developing a robust CNN-based drone monitoring system. To address this
difficulty, we develop a model-based data augmentation technique that
generates training images and annotates the drone location at each frame
automatically.
The basic idea is to cut foreground drone images and paste them on top
of background images as shown in Fig. \ref{fig:augment}. To accommodate
the background complexity, we select related classes such as aircrafts,
cars in the PASCAL VOC 2012 \cite{pascal}. As to the diversity of drone
models, we collect 2D drone images and 3D drone meshes of many drone
models. For the 3D drone meshes, we can render their corresponding
images by changing camera's view-distance, viewing-angle, lighting
conditions. As a result, we can generate many different drone images
flexibly. Our goal is to generate a large number of augmented images to
simulate the complexity of background images and foreground drone models
in a real world environment. Some examples of the augmented drone images of
various appearances are shown in Fig. \ref{fig:augment}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=90mm]{fig/augment.png}
\end{center}
\caption{Illustration of the data augmentation idea, where augmented
training images can be generated by merging foreground drone images and
background images.}\label{fig:augment}
\end{figure}
Specific drone augmentation techniques are described below; a minimal implementation sketch follows the list.
\begin{itemize}
\item Geometric transformations \\
We apply geometric transformations such as image translation, rotation
and scaling. We randomly select the angle of rotation from the range
(-30$^{\circ}$, 30$^{\circ}$). Furthermore, we conduct uniform scaling
on the original foreground drone images along the horizontal and the
vertical direction. Finally, we randomly select the drone location in
the background image.
\item Illumination variation \\
To simulate drones in the shadows, we generate regular shadow maps by
using random lines and irregular shadow maps via Perlin noise
\cite{perlin}. In extreme lighting environments, we observe that
drones tend to appear in monochrome (i.e. gray-scale), so we
convert drone images to gray level ones.
\item Image quality \\
This augmentation technique is used to simulate blurred drones caused
by camera's motion and out-of-focus. We use some blur filters (e.g. the
Gaussian filter, the motion Blur filter) to create the blur effects on
foreground drone images.
\end{itemize}
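The following Python sketch outlines this pipeline; the file names, the
probability values, and the scale range are illustrative assumptions
rather than the exact settings used to build our training set.
\begin{verbatim}
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng()

def augment(drone_path="drone.png", background_path="background.jpg"):
    drone = Image.open(drone_path).convert("RGBA")
    bg = Image.open(background_path).convert("RGB")

    # geometric transformations: rotation in (-30, 30) degrees and scaling
    drone = drone.rotate(rng.uniform(-30, 30), expand=True)
    scale = rng.uniform(0.3, 1.0)
    drone = drone.resize((max(1, int(drone.width * scale)),
                          max(1, int(drone.height * scale))))

    # illumination variation: occasionally switch to gray level
    if rng.random() < 0.2:
        drone = drone.convert("LA").convert("RGBA")

    # image quality: blur to mimic camera motion / out-of-focus
    if rng.random() < 0.3:
        drone = drone.filter(
            ImageFilter.GaussianBlur(radius=rng.uniform(0.5, 2.0)))

    # paste at a random location; the paste box is the ground-truth label
    x = int(rng.integers(0, bg.width - drone.width))
    y = int(rng.integers(0, bg.height - drone.height))
    bg.paste(drone, (x, y), drone)    # alpha channel acts as the mask
    return bg, (x, y, x + drone.width, y + drone.height)
\end{verbatim}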
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/droneimage.png}
\caption{Augmented drone models} \label{fig:augdrone}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/augdrone.png}
\caption{Synthetic training data} \label{fig:augimage}
\end{subfigure}
\end{center}
\caption{Illustration of (a) augmented drone models and (b) synthesized
training images by incorporating various illumination conditions, image qualities,
and complex backgrounds.}\label{fig:augresult}
\end{figure}
Several exemplary synthesized drone images are shown in Fig.
\ref{fig:augresult}, where augmented drone models are given in Fig.
\ref{fig:augdrone}. We use the model-based augmentation technique to
acquire more training images with the ground-truth labels and show them
in Fig. \ref{fig:augimage}.
\section{Drone Monitoring System}\label{sec:solution}
To achieve high performance, the system consists of two modules,
namely, the drone detection module and the drone tracking module. Both
of them are built with the deep learning technology. These two modules
complement each other, and they are used jointly to provide the accurate
drone locations for a given video input.
\subsection{Drone Detection}\label{sec:detection}
The goal of drone detection is to detect and localize the drone in
static images. Our approach is built on the Faster-RCNN
\cite{fasterrcnn}, which is one of the state-of-the-art object detection
methods for real-time applications. The Faster-RCNN utilizes the deep
convolutional networks to efficiently classify object proposals. To
achieve real time detection, the Faster-RCNN replaces the usage of
external object proposals with the Region Proposal Networks (RPNs) that
share convolutional feature maps with the detection network. The RPN is
constructed on the top of convolutional layers. It consists of two
convolutional layers -- one that encodes the convolutional feature maps for each
proposal to a lower-dimensional vector and the other that provides the
classification scores and regressed bounds. The Faster-RCNN achieves
nearly cost-free region proposals and it can be trained end-to-end by
back-propagation. We use the Faster-RCNN to build the drone detector by
training it with synthetic drone images generated by the proposed data
augmentation technique as described in Sec. \ref{sec:augmentation}.
\subsection{Drone Tracking}\label{sec:tracking}
The drone tracker attempts to locate the drone in the next frame based
on its location at the current frame. It searches around the
neighborhood of the current drone's position. This helps detect a drone
in a certain region instead of the entire frame. To achieve this
objective, we use the state-of-the-art object tracker called the
Multi-Domain Network (MDNet) \cite{mdnet}. The MDNet is able to
separate the domain independent information from the domain specific
information in network training. Besides, as compared with other
CNN-based trackers, the MDNet has fewer layers, which lowers the
complexity of an online testing procedure.
To improve the tracking performance furthermore, we propose a video
pre-processing step. That is, we subtract the current frame from the
previous frame and take the absolute values pixelwise to obtain the
residual image of the current frame. Note that we do the same for the
R,G,B three channels of a color image frame to get a color residual
image. Three color image frames and their corresponding color residual
images are shown in Fig. \ref{fig:residueimage} for comparison. If
there is a panning movement of the camera, we need to compensate the
global motion of the whole frame before the frame subtraction operation.
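A minimal sketch of this pre-processing step is given below (in Python
with NumPy; the array layout is an assumption and global-motion
compensation is omitted):
\begin{verbatim}
import numpy as np

def residual_sequence(frames):
    """frames: uint8 array of shape (num_frames, H, W, 3)."""
    f = frames.astype(np.int16)            # avoid uint8 wrap-around
    return np.abs(f[1:] - f[:-1]).astype(np.uint8)
\end{verbatim}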
\begin{figure}[]
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/Original.jpg}
\caption{Raw input images}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/Residue.jpg}
\caption{Corresponding residual images}
\end{subfigure}
\end{center}
\caption{Comparison of three raw input images and their corresponding
residual images.} \label{fig:residueimage}
\end{figure}
Since there exists a strong correlation between two consecutive images,
most of the background in the raw images will cancel out and only the fast moving
object will remain in the residual images. This is especially true when the
drone is at a distance from the camera and its size is relatively small.
The observed movement can be well approximated by a rigid body motion.
We feed the residual sequences to the MDNet for drone tracking after the
above pre-processing step. It does help the MDNet to track the drone
more accurately. Furthermore, if the tracker loses the drone for a short
while, there is still a good probability for the tracker to pick up the
drone in a faster rate. This is because the tracker does not get
distracted by other static objects that may have their shape and color
similar to a drone in residual images. Those objects do not appear in
residual images.
\subsection{Integrated Detection and Tracking System}\label{sec:fusion}
There are limitations in detection-only or tracking-only modules. The
detection-only module does not exploit the temporal information, leading
to huge computational waste. The tracking-only module does not attempt
to recognize the drone object but follow a moving target only. To build
a complete system, we need to integrate these two modules into one. The
flow chart of the proposed drone monitoring system is shown in Fig.
\ref{fig:overview}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=70mm]{fig/overview.png}
\end{center}
\caption{A flow chart of the drone monitoring system.}\label{fig:overview}
\end{figure}
Generally speaking, the drone detector has two tasks -- finding the
drone and initializing the tracker. Typically, the drone tracker is used
to track the detected drone after the initialization. However, the
drone tracker can also play the role of a detector when an object is too
far away to be robustly detected as a drone due to its small size. Then,
we can use the tracker to track the object before detection based on the
residual images as the input. Once the object is near, we can use the
drone detector to confirm whether it is a drone or not.
An illegal drone can be detected once it is within the field of view and
of a reasonable size. The detector will report the drone location to the
tracker as the start position. Then, the tracker starts to work. During
the tracking process, the detector keeps providing the confidence score
of a drone at the tracked location as a reference to the tracker. The
final updated location can be acquired by fusing the confidence scores
of the tracking and the detection modules as follows.
For a candidate bounding box, we can compute the confidence scores of
this location via
\begin{eqnarray} \label{eqn:confidence}
S'_d&=& 1 / ({1+e^{-\beta_1(S_d-\alpha_1)}}),\\
S'_t&=& 1 / ({1+e^{-\beta_2(S_t-\alpha_2)}}),\\
S' &=& \max(S'_d, S'_t),
\end{eqnarray}
where $S_d$ and $S_t$ denote the confidence scores obtained by the
detector and the tracker, respectively, $S'$ is the confidence score
of this candidate location and parameters $\beta_1$, $\beta_2$,
$\alpha_1$, $\alpha_2$ are used to control the acceptance threshold.
We compute the confidence score $S'_i$ of a set of bounding box candidates,
denoted by $BB_i$, $i \in C$, where $C$ denotes the set of candidate
indices. Then, we select the one with the highest score:
\begin{eqnarray}
i^* & = & \underset{i \in C}{\operatorname{argmax}}~S'_i, \\
S_f & = & \underset{i \in C}{\operatorname{max}}~S'_i,
\end{eqnarray}
where $BB_{i^*}$ is the finally selected bounding box and $S_f$ is its
confidence score. If $S_f = 0$, the system will report a message of
rejection.
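A possible implementation of this fusion rule is sketched below in
Python; the calibration parameters are placeholders, not tuned values
from our experiments.
\begin{verbatim}
import numpy as np

BETA1, ALPHA1 = 10.0, 0.5   # detector calibration (assumed)
BETA2, ALPHA2 = 10.0, 0.5   # tracker calibration (assumed)

def fused_score(s_d, s_t):
    s_d2 = 1.0 / (1.0 + np.exp(-BETA1 * (s_d - ALPHA1)))
    s_t2 = 1.0 / (1.0 + np.exp(-BETA2 * (s_t - ALPHA2)))
    return max(s_d2, s_t2)

def select_box(candidates):
    """candidates: list of (box, s_d, s_t); returns (box, S_f)."""
    scored = [(box, fused_score(s_d, s_t))
              for box, s_d, s_t in candidates]
    return max(scored, key=lambda item: item[1])
\end{verbatim}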
\section{Experimental Results}\label{sec:results}
\subsection{Drone Detection}
We test on both the real-world and the synthetic datasets. Each of them
contains 1000 images. The images in the real-world dataset are sampled from
videos in the USC Drone dataset. The images in the synthetic dataset are
generated using different foreground and background images in the
training dataset. The detector can take images of any size as the input.
These images are then re-scaled such that their shorter side has 600
pixels \cite{fasterrcnn}.
To evaluate the drone detector, we compute the precision-recall curve.
Precision is the fraction of the total number of detections that are
true positive. Recall is the fraction of the total number of labeled
samples in positive class that are true positive. The area under the
precision-recall curve (AUC) \cite{auc} is also reported. The
effectiveness of the proposed data augmentation technique is illustrated
in Fig. \ref{fig:detectorR}. In this figure, we compare the performance
of the baseline method that uses simple geometric transformations only
and that of the method that uses all mentioned data augmentation
techniques, including geometric transformations, illumination conditions
and image quality simulation. Clearly, better detection performance can
be achieved by more augmented data. We see around $11\%$ and $16\%$
improvements in the AUC measure on the real-world and the synthetic
datasets, respectively.
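For reference, the curve itself can be computed from ranked detections
as in the Python sketch below (ours; it assumes each detection has a
confidence score and a true-positive flag obtained from an IoU-based
matching rule):
\begin{verbatim}
import numpy as np

def pr_curve(scores, is_tp, num_gt):
    order = np.argsort(-np.asarray(scores))
    tp_flags = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    precision = tp / (tp + fp)   # detections that are true positives
    recall = tp / num_gt         # labeled drones that are recovered
    return precision, recall, np.trapz(precision, recall)   # AUC
\end{verbatim}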
\begin{figure}[t!]
\begin{center}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/synv2.png}
\caption{Synthetic Dataset} \label{fig:detectorRa}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=70mm]{fig/realv2.png}
\caption{Real-World Dataset} \label{fig:detectorRb}
\end{subfigure}
\end{center}
\caption{Comparison of the drone detection performance on (a) the
synthetic and (b) the real-world datasets, where the baseline method
refers to the one that uses geometric transformations to generate training data
only, while the All method uses geometric transformations,
illumination conditions and image quality simulation for data
augmentation.} \label{fig:detectorR}
\end{figure}
\subsection{Drone Tracking}
The MDNet is adopted as the object tracker. We take 3 video sequences
from the USC drone dataset as test sequences. They cover several
challenges, including scale variation, out-of-view, similar objects in
background, and fast motion. Each video sequence has a duration of 30
to 40 seconds with 30 frames per second. Thus, each sequence contains
900 to 1200 frames. Since all video sequences in the USC drone dataset
have relatively slow camera motion, we can also evaluate the advantages
of feeding residual frames (instead of raw images) to the MDNet.
The performance of the tracker is measured with the area-under-the-curve
(AUC) measure. We first measure the intersection over union $(IoU)$ for all
frames in all video sequences as
\begin{equation}
IoU = \frac{Area~ of~ Overlap}{Area~ of~ Union},
\end{equation}
where the ``Area of Overlap" is the common area covered by the predicted
and the ground truth bounding boxes and the ``Area of Union" is the
union of the predicted and the ground truth bounding boxes. The IoU
value is computed at each frame. If it is higher than a threshold, the
success rate is set to 1; otherwise, 0. Thus, the success rate value is
either 1 or 0 for a given frame. Once we have the success rate values
for all frames in all video sequences for a particular threshold, we can
divide the total success rate by the total frame number. Then, we can
obtain a success rate curve as a function of the threshold. Finally, we
measure the area under the curve (AUC) which gives the desired
performance measure.
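The computation is summarized by the Python sketch below (our
illustration; boxes are assumed to be $(x_1, y_1, x_2, y_2)$ tuples
aligned frame by frame):
\begin{verbatim}
import numpy as np

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def success_auc(pred, gt, thresholds=np.linspace(0, 1, 101)):
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    success = [(ious > th).mean() for th in thresholds]
    return np.trapz(success, thresholds)   # area under the curve
\end{verbatim}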
We compare the success rate curves of the MDNet using the original
images and the residual images in Fig. \ref{fig:trackR}. As compared to
the raw frames, the AUC value increases by around 10\% using the
residual frames as the input. It corroborates the intuition that
removing background from frames helps the tracker identify the drones
more accurately. Although residual frames help improve the performance
of the tracker under certain conditions, it still fails to give good
results in two scenarios: 1) movement with fast changing directions and
2) co-existence of many moving objects near the target drone. To
overcome these challenges, we have the drone detector operating in
parallel with the drone tracker to get more robust results.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=90mm]{fig/Residue_result_v2.png}
\end{center}
\caption{Comparison of the MDNet tracking performance using the raw
and the residual frames as the input.} \label{fig:trackR}
\end{figure}
\subsection{Fully Integrated System}
The fully integrated system contains both the detection and the tracking
modules. We use the USC drone dataset to evaluate the performance of the
fully integrated system. The performance comparison (in terms of the
AUC measure) of the fully integrated system, the conventional MDNet (the
tracker-only module) and the Faster-RCNN (the detector-only module) is
shown in Fig. \ref{fig:systemR}. The fully integrated system
outperforms the other benchmarking methods by substantial margins. This
is because the fully integrated system can use detection as the means to
re-initialize its tracking bounding box when it loses the object.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=90mm]{fig/Drone_result.png}
\end{center}
\caption{Detection only (Faster RCNN) vs. tracking only (MDNet tracker)
vs. our integrated system: The performance increases when we fuse the
detection and tracking results.} \label{fig:systemR}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
A video-based drone monitoring system was proposed in this work. The
system consisted of the drone detection module and the drone tracking
module. Both of them were designed based on deep learning networks. We
developed a model-based data augmentation technique to enrich the
training data. We also exploited residue images as the input to the
drone tracking module. The fully integrated monitoring system takes
advantage of both modules to achieve high-performance monitoring.
Extensive experiments were conducted to demonstrate the superior
performance of the proposed drone monitoring system.
\section*{Acknowledgment}
This research is supported by a grant from the Pratt \& Whitney
Institute of Collaborative Engineering (PWICE).
\section{Introduction}
The use of spectroscopy has attracted increasing research interest and practical application with the advancement of spectral measurement techniques and of data analysis studies in the field of chemometrics, in particular measurements in the infrared (IR) and near-infrared (NIR) spectrum, which provide information that is not visible by simple observation. Infrared spectroscopy is considered one of the most important tools in chemometrics \citep{stuart2000infrared}, as it has a low cost, is fast, and can analyze samples in their most diverse states, making it also a non-invasive approach. Besides that, machine learning and deep learning techniques, such as artificial neural networks, have been introduced in areas previously dominated by statistical analysis methods and/or classical artificial intelligence algorithms. With the growth of neural network applications in the last few years, such algorithms began to be explored in problems of different natures, including spectroscopy problems.
Data from spectroscopy present some problems, such as high dimensionality and small datasets, and thus usually require pre-processing techniques such as noise filtering and dimensionality reduction for proper classification. Artificial neural networks (NNs), on the other hand, are known to deal well with high dimensionality and have achieved remarkable results in different applications; however, NNs need a significant amount of data to work properly. In such cases, it becomes necessary to analyze the trade-off between using standard methods or migrating to new approaches.
Convolutional neural networks can be applied to challenging real-world problems, such as the diagnosis of the Sars-Cov-2 virus. With the advance of the COVID-19 pandemic, testing the population on a large scale for sanitary and public health reasons became necessary. Sars-Cov-2 virus testing presents two main barriers: 1) the price of testing, and 2) the time needed to get the results. While the first affects mostly underdeveloped countries, the second influences containment plans around the world. By testing with information acquired via spectrometers and processing the data through a robust artificial intelligence model, information about the patient's diagnosis could be obtained quickly and at a low cost, making it accessible to public and private health services.
With the advances in machine learning techniques, there is a tendency to migrate from standard algorithms used in spectroscopy, such as Support Vector Machines and Partial Least Squares, to recent approaches such as Convolutional Neural Networks \citep{zhang2019deepspectra, yuanyuan:2018quantitative}. However, CNNs commonly require a large amount of data to train the model. Although there are public databases \citep{kaewseekhao2020dataset, chauvergne2020dataset, zyubin2020dataset}, there is still a lack of spectral datasets in the public domain. So, it is necessary to investigate the performance of such algorithms when applied to problems with a small number of training instances. Furthermore, spectral data present some characteristics, such as the overlap of samples at certain wavelengths, which make the classification task difficult, as in the COVID-19 dataset provided by \cite{yin2021efficient}.
The main contributions of this work are twofold:
\begin{itemize}
\item we apply a 1D-CNN to seven publicly available spectral datasets in order to validate its results against conventional machine learning algorithms.
\item we present a case study involving spectral data of COVID-19 and show the feasibility of the 1D-CNN.
\end{itemize}
The remainder of this paper is organized as follows. Section 2 presents related works. Section 3 describes the methodology, covering the pre-processing techniques and machine learning algorithms as well as a 1D convolutional neural network. The results are presented and a comparative analysis with discussions is carried out in Section 4, and Section 5 concludes the article with directions for future research.
\section{Related Works}
Spectroscopy techniques are used in several areas of knowledge and can be applied to several real-world problems, from the \textit{in vivo} classification of skin lesions \citep{mcintosh:2001} to determining the quality of agricultural products \citep{hayati2020enhanced}. \cite{mcintosh:2001} addressed the task of characterizing skin lesions through statistical methods, specifically the paired \textit{t-test}. A classification of carcinogenic lesions was performed using Linear Discriminant Analysis (LDA), showing high accuracy and feasibility for discriminating skin lesions in a non-invasive way.
\cite{gniadecka:2004} combined spectroscopy techniques and Multilayer Perceptron neural networks for the early diagnosis of skin cancer. The study aimed to differentiate melanoma, the skin cancer with the highest lethality rate, from other skin lesions, such as pigmented nevus and basal cell carcinoma. The influence of chemical changes in cancer tissue and their importance for classification was investigated. The results obtained showed high sensitivity and specificity, important metrics in the detection of melanoma.
\cite{xiang:2010} used infrared spectral data to diagnose endometrial cancer. The study contained 77 samples, including carcinogenic material and a control group. Noise reduction techniques were used as the Savitsky-Golay filter and data dimensionality reduction with PCA. Post-processed data were used for training artificial neural networks and the obtained results showed effectiveness for early diagnosis of the disease.
\cite{wu:2012} used spectroscopy data to non-invasively determine the presence of proteins and polysaccharides in powder samples of \textit{coriolus versicolor}, a medicinal mushroom species. The authors compare artificial neural networks applied to original data and data with reduced dimensionality by applying PCA. The neural network was trained with the \textit{Backpropagation} algorithm and presented better results using data with reduced dimensionality.
\cite{liu:2017} used a Deep Auto-Encoder (DAE) to extract and reduce features from infrared spectral data. To evaluate its effectiveness, the algorithm was applied to the Cigarettes dataset and compared with methods established in the literature, such as PCA, with the KNN algorithm applied in turn. The results showed that the model using the DAE was able to extract features with better differentiation capacity than the competing approaches.
\cite{peng:2018} proposed an algorithm for extracting spectral information in the frequency domain. Principal components were determined by the entropic contribution of each component together with a genetic algorithm. The selected components were used to build a regression model with Partial Least Squares (PLS), showing a performance improvement when compared to the results obtained with the original data.
\cite{cui:2018modern} performed tests on three spectroscopy datasets to demonstrate the effectiveness of CNNs in relation to the PLS Regression method. The results obtained showed that the convolution layer of a CNN can act as a preprocessor of the spectrum, not requiring the application of signal processing techniques. In addition, results demonstrated the superiority of CNN in terms of the metrics used to validate the model, even using datasets with a low number of samples.
\cite{yuanyuan:2018quantitative} applied an ensemble of CNNs to the Corn, Gasoline, and Mixed Gases datasets and the approach was compared with classical methods in the spectroscopy field, such as PLS, as well as neural networks trained with Backpropagation. Results showed that the proposed model obtained superior performance in all case studies investigated, showing its relevance for spectrum classification.
\cite{lima:2019} investigated the discrimination between non-melanoma cancer (basal cell carcinoma and squamous cell carcinoma), actinic keratosis, and normal skin. The study used spectral data collected from in-vivo and ex-vivo tissues. The t-test was used to find regions of the spectrum with significant differences between samples. The selected regions were used to discriminate the lesions using Euclidean and Mahalanobis distances. The results obtained show robustness when using ex-vivo tissue data.
\cite{seoni:2019} investigated hemoglobin concentration using infrared spectrum data on normal skin tissue and skin tissue with actinic keratosis, a benign skin lesion that, without treatment, can evolve into a cancerous lesion. The MANOVA statistical analysis was applied to assess possible differences between the healthy skin group and the one containing actinic keratosis. The results showed that, in general, it was possible to differentiate the two groups through the statistical approach.
\cite{zhang2019deepspectra} introduced a new approach based on a CNN with 1D convolutions and the adoption of Inception structure in two convolution layers. The method was evaluated in four public spectroscopy datasets: Corn, Tablet, Wheat, and Soil. The results obtained were superior to other CNN architectures and linear methods such as PLS and Support Vector Regression (SVR). In addition, pre-processing methods applied to the raw spectrum signal have also been applied in order to investigate the influence on the prediction results. The authors concluded that signal noise reduction is a necessary step to improve the performance of the algorithms.
\cite{chen:2019:end} evaluated the ability to extract features from a CNN without applying dimensionality reduction on the spectrum. Experiments were carried out using the dataset Corn and compared with traditional chemometric methods such as PLS and neural network training algorithms such as Extreme Learning Machine (ELM) and Backpropagation. The results obtained in the tests performed show that: 1) If the methods/algorithms use the original spectrum as input, the CNN obtains better performance; and 2) If standard methods/algorithms are combined with dimensionality reduction algorithms and variable selection, the results obtained are similar to those obtained by CNN, with no statistical difference between the methods.
\cite{chen:2020} proposed the use of an ensemble of artificial neural networks trained with the ELM algorithm. The proposed algorithm uses random initialization of the hidden-layer weights to obtain diversity among models. The method was tested on 3 datasets: Tecator, Shootcut\_2002, and Tablet. The approach was compared with PLS and proved superior to this classic chemometric algorithm.
\cite{yan:2020} collected infrared spectrum data during the hydrolysis process of materials used in traditional Chinese medicine. Regression models were developed using a CNN and PLS with real-time data. The proposed CNN architecture presented better results than PLS, but with a higher processing cost and computational time. Therefore, for real-time applications, it is necessary to analyze the maximum latency of the interval between results to decide which method should be used.
\section{Methodology}
\subsection{Data pre-processing }
Spectroscopy is a technique based on the vibrations of atoms in a molecule. The spectrum can be analyzed and classified by its wavelength, such as ultraviolet, visible, and infrared. Each wavelength provides unique information for analyzing a sample, so it is necessary to know the best wavelength range to use when collecting data. For example, the spectrum obtained at the infrared wavelength is widely used in applications that need to identify organic components of a sample \citep{stuart2000infrared}. The data generated by this technique, known as a spectrum, is obtained by emitting an amount of radiation onto a sample and determining which part of the radiation is absorbed or reflected by the sample at a given energy level \citep{stuart2000infrared}.
Data collected through spectrometers can suffer different types of interference, from low-quality equipment to external interference in uncontrolled environments, such as light scattering and noise, among others. The noisy components of the spectrum signal are a challenging issue in spectral analysis, as the instability of the collected signal can lead to wrong conclusions. This instability can be reduced by performing more than one reading of the sample and obtaining the average spectrum, but without eliminating all the existing noise. As a result, signal processing techniques become a crucial step in spectral analysis to remove, or at least reduce, undesired noisy components of the spectral signal.
Selecting an appropriate technique to perform data pre-processing can directly influence the model performance and data interpretability \citep{lee2017contemporary}. The most used pre-processing techniques can be divided into two categories: 1) scatter-correction, and 2) spectral derivatives \citep{rinnan2009review}. Scatter-correction algorithms are used to reduce the variability between samples. Spectral derivative algorithms use convolution operations, usually by fitting a polynomial to the points of the spectrum.
In this work, we use one of the most standard pre-processing filters in spectral analysis, the Savitzky-Golay (SG) filter \citep{savitzky1964smoothing}, and evaluate its influence on the classification results. \cite{savitzky1964smoothing} proposed an algorithm to smooth data based on polynomial interpolation. They demonstrated that fitting a polynomial to a set of points in the spectrum and then evaluating it at the subsequent point, respecting the point approximation interval, is equivalent to more complex convolution operations. Furthermore, they showed that the algorithm can remove noise from the data while keeping the shape and peaks of the spectrum wavelengths, preserving the original characteristics in the smoothed data.
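As an illustration, applying the SG filter takes a single call in SciPy.
The sketch below uses the window size of 11 points, degree-3 polynomial,
and second-order derivative adopted later in our experiments; the spectra
matrix itself is placeholder data.
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

# Placeholder spectra: rows are samples, columns are wavelengths.
spectra = np.random.rand(60, 700)

# Smooth every spectrum: window of 11 points, degree-3 polynomial,
# second-order derivative (the configuration used in Section 4).
smoothed = savgol_filter(spectra, window_length=11, polyorder=3,
                         deriv=2, axis=1)
\end{verbatim}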
%
\subsection{Machine Learning}
One of the fastest growing areas in computing is Artificial Intelligence. Its applications are present in medicine, engineering, and finance, among other areas \citep{pannu2015artificial}. One of the sub-areas of Artificial Intelligence is machine learning, in which algorithms acquire knowledge from already known data and, subsequently, are able to infer solutions for new instances of the problem without being explicitly programmed for it \citep{alpaydin2020introduction}.
Supervised learning problems can be considered the most common among the existing ones. The algorithms look for patterns in order to learn how to associate an input $\textbf{x}$ to its respective output $y$. In general, supervised learning algorithms learn through examples how to classify an output $y$ given an input $\textbf{x}$ by estimating $p(y | \textbf{x})$ \citep{goodfellow2016deep}. The machine learning algorithms used in this work are Partial Least Squares - Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), k-Nearest Neighbors (KNN) \citep{alpaydin2020introduction}, and Many-objective clustering based on Hill Climbing and minimum spanning tree (MOCHM) \citep{esgario2018clustering}.
\subsection{1D Convolutional Neural Networks}
Artificial neural networks (ANNs) are structures composed of simple computing units, called artificial neurons, interconnected with each other \citep{schalkoff1997artificial}. The connections between neurons form complex layers for computing and propagating information. The standard algorithm for training ANNs is known as \textit{Backpropagation}. Through it, it is possible to train ANNs for classification and regression/prediction tasks. Standard ANNs are characterized by three main layers: the data input layer, the hidden layers, and the output layer.
Convolutional neural networks (CNNs), initially proposed by \cite{cnn-1989}, are networks specialized for applications whose data come in grid format, such as images and time series, as they are able to capture spatial and temporal features \citep{goodfellow2016deep}. The architecture gained prominence in 2012 when a CNN-based model won the LSVRC-12 competition on the ImageNet database \citep{krizhevsky2012imagenet}. The main difference between CNNs and traditional ANNs is the presence of convolution and pooling layers in their architecture. CNNs use convolution operations instead of matrix operations in at least one of their layers \citep{goodfellow2016deep}. The main components of a CNN and its working principle during the training process are described in the following.
\paragraph{Convolution Layer:} Convolutions are linear operations on two functions that generate a third one. In CNNs, convolutions are applied to extract features: the first function of a convolution is represented by the input data of the network, and the second by a \textit{kernel}, a sliding filter used to extract features. Applying the convolution operation between these two functions generates feature maps, which contain relevant information from the original input data. CNNs can have convolution layers of one, two, or three dimensions, depending on the problem domain and data characteristics. For example, image recognition applications use 2D convolutions, while applications whose data come from time series use 1D convolutions.
After the convolution operations, the feature maps are passed through an activation function, which aims to add non-linearity to the model \citep{goodfellow2016deep}. When choosing an activation function, some characteristics must be observed: the computational cost must be small, since the operation is performed frequently, and, because the training process of a neural network is based on gradient descent, the activation function must also be differentiable. The Rectified Linear Unit (ReLU) activation function is commonly used in neural networks because it presents such characteristics. The function returns zero for all negative values and the value itself for positive values; it is described by $ReLU(x) = \max(0,x)$.
\paragraph{Pooling Layer:} The pooling layer, normally applied after the convolution operation, reduces the dimensionality of the feature map in order to eliminate redundant information and keep the main features \citep{goodfellow2016deep}. Similar to convolution, a sliding window moves over the feature map generating a new output. However, pooling operations are deterministic, typically calculating the maximum or average value of the elements in the sliding window \citep{zhang2020dive}. When the MaxPooling technique is applied, the maximum value present in the analyzed data group is propagated to the next layers, while the AvgPooling technique propagates the average value of the data.
\paragraph{Fully connected layer:} The combination of convolution and pooling layers alone is not enough to perform the classification/regression task; its main function in the architecture is to extract features from the given data. The extracted features therefore need a classifier to complete the task, which we call the fully connected layer. Typically, the fully connected layer of a CNN is represented by an ANN known as a Multilayer Perceptron (MLP) \citep{zhang2020dive}, a feedforward neural network composed of an input layer, a set of hidden layers, and an output layer.
An MLP maps a function $f: x \rightarrow y$, where $x$ are the features extracted by the convolution and pooling layers, and $y$ is the desired network output. This process, known as training an ANN, is an optimization problem that seeks to minimize a loss function \citep{goodfellow2016deep}. The loss function calculates the error between the desired outputs and those obtained by the neural network, and the error obtained is used to recalculate the weights of the network \citep{zhang2020dive}. The cross-entropy loss function, commonly used by CNNs in classification problems, is described by:
\begin{equation}
\mathrm{CrossEntropy}(pa, pb) = - \sum_{i=1}^{k} pa_{i}\log{pb_{i}}
\end{equation}
where $pa_{i}$ and $pb_{i}$ represent the real and predicted probabilities for class $i$, respectively, and $k$ represents the number of classes. For unbalanced datasets, loss functions that consider the frequency of each class in their calculation, such as the weighted cross-entropy, are often used.
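In PyTorch, for instance, both variants are available through a single
loss class; the sketch below uses placeholder logits, targets, and class
weights.
\begin{verbatim}
import torch
import torch.nn as nn

logits = torch.randn(8, 3)           # batch of 8 samples, 3 classes
targets = torch.randint(0, 3, (8,))  # true class indices

plain = nn.CrossEntropyLoss()(logits, targets)

# Weighted variant for unbalanced data; these weights are placeholders,
# typically set inversely proportional to the class frequencies.
weights = torch.tensor([0.2, 0.3, 0.5])
weighted = nn.CrossEntropyLoss(weight=weights)(logits, targets)
\end{verbatim}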
Along with the parameter adjustment algorithm, weight regularization techniques can be included in order to avoid problems such as overfitting, which occurs when a model presents good results in the training process but is not able to generalize to unknown data. In this work, the dropout regularization technique was applied. This technique consists of removing some of the neuron connections with a certain probability, in order to improve the network's generalization to unknown data \citep{zhang2020dive}.
Combining multiple convolution and pooling layers with a fully connected layer generates CNN-like architectures. Figure \ref{fig-cnn} shows the 1D-CNN architecture used to classify spectral data in this work.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figuras/algoritmos/modelo_final_2.png}
\caption{1D-CNN architecture made up of convolution, pooling, and fully connected layers.}
\label{fig-cnn}
\end{figure}
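A minimal PyTorch sketch of an architecture of this kind is shown below;
the filter counts, kernel sizes, and hidden-layer width are illustrative
assumptions rather than the per-dataset tuned values.
\begin{verbatim}
import torch.nn as nn

class SpectraCNN(nn.Module):
    # Illustrative 1D-CNN: conv/pool feature extractor + MLP classifier.
    def __init__(self, n_wavelengths, n_classes, dropout=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
        )
        flat = 32 * (((n_wavelengths - 6) // 2 - 4) // 2)
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(dropout),
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_wavelengths)
        return self.classifier(self.features(x))
\end{verbatim}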
\section{Experiments and Results}
This section presents the results with discussions. First, the datasets used in the experiments are presented. Second, the algorithms are applied to the datasets and compared with standard machine learning and chemometrics algorithms. Finally, a case study with spectral data from samples of the Sars-Cov-2 virus is presented.
\subsection{Datasets}
Next, we present a brief description of the datasets. \cite{kosmowski2018evaluation} collected infrared spectral data from barley, chickpea, and sorghum crops. Each type of grain originated a dataset. The Barley dataset is composed of 1200 barley grain samples, distributed in 24 classes of grain variants. The Chickpea dataset has 950 chickpea samples distributed in 19 classes, and finally, the Sorghum dataset has 500 sorghum samples distributed in 10 classes. Despite the large number of samples in these datasets, they are distributed over a large number of classes; consequently, there is a small number of samples per class. All three datasets are well balanced.
\cite{zheng2019spectra} presented an approach that uses the Extreme Learning Machine for spectra classification and made the data available. Four datasets were used in this study: Coffee, Meat, Oil, and Fruits. The Coffee dataset contains 56 samples uniformly divided into Arabica and Conilon coffee beans. The Meat dataset is made up of 60 samples of chicken, pork, and turkey, evenly distributed. The Oil dataset contains 60 extra virgin olive oil spectrum samples from 4 different countries: Greece, Italy, Portugal, and Spain. The Fruits dataset consists of 983 samples of fruit purees divided into two classes, strawberry and non-strawberry.
Table \ref{tab-base-de-dados} lists the number of samples and classes present in each dataset.
%
%
\begin{table}[]
\centering
\begin{tabular}{ccc}
\hline
\multicolumn{1}{l}{Dataset} & \multicolumn{1}{l}{Samples} & \multicolumn{1}{l}{N. classes} \\ \hline
Barley & 1200 & 24 \\
Chickpea & 950 & 19 \\
Sorghum & 500 & 10 \\
Coffee & 56 & 2 \\
Meat & 120 & 3 \\
Oil & 120 & 4 \\
Fruits & 983 & 2
\end{tabular}
\caption{Number of samples and classes present in each dataset.}
\label{tab-base-de-dados}
\end{table}
\subsection{Experimental Setting}
The training of the 1D-CNN and of the machine learning algorithms investigated in this work was performed using cross-validation. This method is used to decrease bias and avoid over-estimated models. The learning process consists of dividing the dataset into $k$ \textit{folds} and using each one separately as the test set of a model. At the end, one calculates the average accuracy and standard deviation of the algorithm for the dataset. The process of training and evaluating an algorithm using \textit{cross-validation} is described in Algorithm \ref{alg-cross-validation}. For all experiments a number of $k=5$ folds was adopted; in other words, the dataset was divided into $80\%$ for training and validation, while $20\%$ was used exclusively for testing.
\begin{algorithm}[httb!]
\SetAlgoLined
1 - Shuffle the samples.\\
2 - Split it into $k$ folds. \\
\For{f in folds }{
1 - Keep $f$ as a test set.\\
2 - Train a new model with the remaining folds\\
3 - Evaluate the model with \textit{fold} $f$.\\
4 - Save the model accuracy.\\
}
3 - Get the cross-validation mean accuracy and standard deviation.\\
4 - Return the model within the closest accuracy to the cross-validation mean accuracy.
\caption{Cross-validation evaluation}
\label{alg-cross-validation}
\end{algorithm}
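With scikit-learn, Algorithm \ref{alg-cross-validation} can be sketched
as follows; \texttt{make\_model} is a placeholder factory returning a
fresh, unfitted classifier.
\begin{verbatim}
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

def cross_validate(make_model, X, y, k=5, seed=0):
    folds = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(X, y):
        model = make_model()                   # fresh model per fold
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx],
                                     model.predict(X[test_idx])))
    return np.mean(scores), np.std(scores)
\end{verbatim}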
The PLS-DA, KNN, SVM, and CNN algorithms need hyperparameter adjustment, so that the best configuration can be selected for each dataset to maximize performance.
The PLS-DA algorithm is based on the construction of components that maximize the variance of the variables, where each component explains the degree of variance it represents. For each dataset, $c$ components were selected that, together, represent 95\% of the total variance of the data.
For the KNN and SVM algorithms, a variation of Algorithm \ref{alg-cross-validation} was used in order to find the best set of hyperparameters for each cross-validation round. The data used to train the model undergo a further division into a training set and a validation set, in order to find the best combination of hyperparameters. Thus, for each combination of training and testing folds, one finds a set of optimal hyperparameters for the model. At the end of the process, one calculates the mean and standard deviation obtained by the algorithm and keeps the set of hyperparameters that generated the model with the accuracy closest to the average accuracy, avoiding an overestimated model. The process for finding the best hyperparameters and the average accuracy is described in Algorithm \ref{alg-cross-validation-2}.
Table \ref{tab-espaco-busca-pls-knn} presents the search space used for the KNN and SVM algorithms. Table \ref{tab-hiper-pls-knn} presents the number of components $c$ needed to obtain $95\%$ of the data variance in the PLS-DA algorithm, in addition to the amount of $k$-neighbors and the kernel function that provided the closest result to the average result obtained when executing the algorithm \ref{alg-cross-validation-2}.
\begin{table}[httb!]
\centering
\begin{tabular}{c|c|c}
\hline
Algorithm & Hyperparameter & Search Space \\ \hline
KNN & \textit{k}-neighbors & {[}2, 3, 5, 10, 15, 20, 24{]} \\
SVM & Kernel Function & {[}Linear, Polynomial, Sigmoid, RBF{]} \\
\end{tabular}
\caption{Search space for KNN and SVM hyperparameters. }
\label{tab-espaco-busca-pls-knn}
\end{table}
\begin{algorithm}[httb!]
\SetAlgoLined
1 - Shuffle the samples.\\
2 - Split it into $k$ folds. \\
\For{f in folds }{
1 - Keep $f$ as a test set.\\
\For{each set $\lambda$ of hyperparameters}{
\For{ v in (folds - f)}{
1 - Keep $v$ as a validation set.\\
2 - Train a new model with the remaining folds.\\
3 - Evaluate the model with fold $v$.\\
4 - Save the model accuracy.\\
}
1 - Save the mean accuracy and standard deviation for the $\lambda$ set. \\
}
2 - Train a new model with the remaining folds with the best $\lambda$ set.\\
3 - Evaluate the model with fold $f$.\\
4 - Save the accuracy and the model\\
}
3 - Get the cross-validation mean accuracy and standard deviation.\\
4 - Return the model and $\lambda$ set within the closest accuracy to the cross-validation mean accuracy.
\caption{Cross-validation evaluation - variation to find best set of hyperparameters}
\label{alg-cross-validation-2}
\end{algorithm}
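Algorithm \ref{alg-cross-validation-2} corresponds to nested
cross-validation, which scikit-learn expresses by wrapping a grid search
inside an outer evaluation loop. The sketch below uses the SVM kernel
search space of Table \ref{tab-espaco-busca-pls-knn} and placeholder data.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     cross_val_score)
from sklearn.svm import SVC

X = np.random.rand(120, 700)      # placeholder spectra
y = np.random.randint(0, 4, 120)  # placeholder class labels

# Inner grid search over the kernel functions, outer 5-fold evaluation.
inner = GridSearchCV(SVC(),
                     {"kernel": ["linear", "poly", "sigmoid", "rbf"]},
                     cv=4)
scores = cross_val_score(inner, X, y,
                         cv=StratifiedKFold(5, shuffle=True,
                                            random_state=0))
print(scores.mean(), scores.std())
\end{verbatim}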
\begin{table}[httb!]
\centering
\begin{tabular}{c|c|c|c}
\hline
\multicolumn{1}{l|}{Datasets} & \multicolumn{1}{l|}{N. of components} & \multicolumn{1}{l|}{N. \textit{k}-neighbors} & \multicolumn{1}{l}{Kernel Function} \\ \hline
Barley & 2 & 15 & Sigmoid \\
Chickpea & 2 & 10 & Sigmoid \\
Sorghum & 2 & 10 & Sigmoid \\
Coffee & 2 & 3 & RBF \\
Meat & 3 & 3 & RBF \\
Oil & 6 & 5 & RBF \\
Fruits & 4 & 3 & RBF
\end{tabular}
\caption{Hyperparameters that obtained the best results for the PLS-DA, KNN, and SVM.}
\label{tab-hiper-pls-knn}
\end{table}
Finally, since the 1D-CNN is a recent approach in the field of chemometrics, it does not have pre-defined and pre-trained architectures for applications involving spectral data classification. To choose the architecture and hyperparameters used in this work, the software Optuna\footnote{https://optuna.readthedocs.io/} was used: a hyperparameter optimizer developed for machine learning problems that searches for combinations of hyperparameters and architectures that maximize model performance. Similar to Algorithm \ref{alg-cross-validation-2}, it searches for the best model on the training and validation data and, after finding a suitable set of hyperparameters, tests the model on the test set.
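A search of this kind can be sketched with Optuna as below; the search
ranges and the \texttt{train\_and\_validate} routine are hypothetical
stand-ins for the actual tuning setup.
\begin{verbatim}
import optuna

def train_and_validate(lr, dropout, batch_size):
    # Placeholder for the real routine: it should train the 1D-CNN with
    # these hyperparameters and return the validation accuracy.
    return 1.0 - abs(lr - 1e-3) - abs(dropout - 0.3)

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.1, 0.5)
    batch_size = trial.suggest_categorical("batch_size", [6, 10, 50])
    return train_and_validate(lr, dropout, batch_size)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
\end{verbatim}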
Table \ref{tab-hiperparam-cnn} lists the selected hyperparameters for each dataset involving the training and testing steps. All experiments and algorithms were programmed using utilities from the scikit-learn\footnote{https://scikit-learn.org/} and PyTorch\footnote{https://pytorch.org/} libraries for Python\footnote{https://www.python.org/}. The source code of the experiments is available from the authors upon request.
\begin{table}[h!]
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{c|c|c|c|c|c|c|c}
\cline{2-8}
& \multicolumn{7}{c|}{Dataset} \\ \hline
\multicolumn{1}{c|}{Hyperparameters} & Barley & Chickpea & \multicolumn{1}{l|}{Sorghum} & \multicolumn{1}{l|}{Coffee} & \multicolumn{1}{l|}{Meat} & \multicolumn{1}{l|}{Oil} & \multicolumn{1}{l|}{Fruit} \\ \hline
\multicolumn{1}{c|}{Optimizer} & Adam & Adam & Adam & Adam & Adam & Adam & Adam \\
\multicolumn{1}{c|}{Learning rate} & 0.001 & 0.001 & 0.001 & 0.0001 & 0.001 & 0.001 & 0.001 \\
\multicolumn{1}{c|}{Epochs} & 5000 & 5000 & 5000 & 100 & 700 & 1000 & 1000 \\
\multicolumn{1}{c|}{Dropout Rate} & 0.4 & 0.4 & 0.3 & 0.1 & 0.1 & 0.2 & 0.1 \\
\multicolumn{1}{c|}{Batch Size} & 50 & 50 & 50 & 6 & 10 & 10 & 50 \\
\multicolumn{1}{c|}{Loss Function} & Cross entropy & Cross entropy & Cross entropy & Cross entropy & Cross entropy & Cross entropy & Cross entropy \\
\end{tabular}
\end{adjustbox}
\caption{Hyperparameters that obtained the best results for 1D-CNN.}
\label{tab-hiperparam-cnn}
\end{table}
\newpage
\subsection{Experimental Results}
\label{sec-resultados}
This subsection presents the results obtained by each algorithm when applied to the datasets in this work. The experiments were split into two parts: 1) using the original spectral data, and 2) applying the Savitzky–Golay (SG) filter as a pre-processing step. The division of the experiments into two stages seeks to analyze the influence of data pre-processing on the model results. All experiments were performed on a machine with Windows 10, equipped with an Intel Core I5-8265U processor and 8GB of RAM.
\paragraph{Original Data:} The results obtained for the first configuration of experiments, data without pre-processing, are presented in Table \ref{tab-resultados-1}. The investigated algorithms (MOCHM, PLS-DA, SVM, KNN, and 1D-CNN) were applied to each dataset. The best results are shown in bold.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & MOCHM & PLS-DA & SVM & KNN & 1D-CNN \\ \hline
Barley & - & $10.66 \pm 0.62$ & $7.75 \pm 0.54$ & $16.08 \pm 1.38$ & \textbf{47.4 $\pm$ 4.7} \\
Chickpea & - & $14.73 \pm 1.12$ & $23.58 \pm 1.07$ & $27.16\pm4.48$ & \textbf{54.2 $\pm$ 3.3} \\
Sorghum & - & $18.79 \pm 3.31$ & $15.79 \pm 1.94$ & $34.79 \pm 2.99$ & \textbf{55.8 $\pm$ 3.3} \\
Coffee & $51 \pm 2.1$ & $85.7 \pm 3.4$ & $42.7 \pm 3.4$ & $46.3 \pm1.37$ & \textbf{100 $\pm$ 0} \\
Meat & $36 \pm 0.2$ & $70.83 \pm 6.56$ & $41.6 \pm 3.1$ & $87.5 \pm 7$ & \textbf{95 $\pm$ 3.3} \\
Oil & \multicolumn{1}{l|}{$44 \pm 0.7$} & \multicolumn{1}{l|}{$75 \pm 2.04$} & \multicolumn{1}{l|}{$41.6 \pm 1.02$} & \multicolumn{1}{l|}{$73.3 \pm 3.11$} & \multicolumn{1}{l}{\textbf{88.8 $\pm$ 5.3}} \\
Fruit & \multicolumn{1}{l|}{$95 \pm 1.9$} & \multicolumn{1}{l|}{$91.04 \pm 1.02$} & \multicolumn{1}{l|}{$88.3\pm2.96$} & \multicolumn{1}{l|}{$88.9 \pm1.75$} & \multicolumn{1}{l}{\textbf{96 $\pm$ 0.6}}
\end{tabular}
\caption{Average accuracy obtained by the algorithms when applied to data without pre-processing.}
\label{tab-resultados-1}
\end{table}
The results obtained with the 1D-CNN were superior for all datasets using data without pre-processing. It is observed, however, that the \textit{Barley}, \textit{Chickpea} and \textit{Sorghum} datasets achieved an average accuracy lower than the other datasets: due to the high number of classes present in the data and the small number of samples available for each class in the training step, the model was not able to generalize effectively. Nevertheless, we notice a clear difference when comparing the results obtained by the 1D-CNN with those obtained by the standard algorithms used in chemometrics, i.e., PLS-DA and SVM. The KNN algorithm also obtained better results than the PLS-DA and SVM algorithms, reaching an average accuracy about $20\%$ higher on the \textit{Sorghum} dataset than SVM, the worst-performing algorithm on that dataset. In the other datasets, the PLS-DA algorithm obtained, in general, the best results after the 1D-CNN. The MOCHM algorithm was stopped before finishing its execution for the \textit{Barley}, \textit{Sorghum} and \textit{Chickpea} datasets because it demanded a computation time of over 60 minutes.
\paragraph{Pre-processed data:} The second configuration of experiments was performed with the same algorithms and datasets, but applying the SG filter in the pre-processing step. The SG filter was used with a standard hyperparameter configuration: the window size was set to 11, with a polynomial of degree 3 and a second-order derivative. Table \ref{tab-resultados-sg} presents the results obtained in this experiment.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & MOCHM & PLS-DA & SVM & KNN & 1D-CNN \\ \hline
Barley & - & 11.83 $\pm$ 2.77 & 14.25 $\pm$ 2.3 & \textbf{64.91 $\pm$ 1.3} & 46.16 $\pm$ 0.97 \\
Chickpea & - & 15.78 $\pm$ 1.07 & 20.4 $\pm$ 4.28 & \textbf{71.36 $\pm$ 2.11} & 55.2 $\pm$ 1.1 \\
Sorghum & - & 25.99 $\pm$ 2.63 & 28.38 $\pm$ 5.87 & \textbf{71.39 $\pm$ 3.3} & 59.6 $\pm$ 2.1 \\
Coffee & 83 $\pm$ 1.07 & 87.4 $\pm$ 2.1 & 92.69 $\pm$ 6.1 & 80.11 $\pm$ 4.9 & \textbf{100 $\pm$ 0} \\
Meat & 81 $\pm$ 0.8 & 71.66 $\pm$ 4.24 & 58.33 $\pm$ 3.1 & 92.49 $\pm$ 4.08 & \textbf{96.6 $\pm$ 2.07} \\
Oil & \multicolumn{1}{l|}{ \textbf{91 $\pm$ 1.1 }} & \multicolumn{1}{l|}{75.8 $\pm$ 3.11} & \multicolumn{1}{l|}{41.6 $\pm$ 1.02} & \multicolumn{1}{l|}{79.1 $\pm$ 1.16} & \multicolumn{1}{l}{89.4 $\pm$ 1.12} \\
Fruit & \multicolumn{1}{l|}{ 92 $\pm$ 2.2} & \multicolumn{1}{l|}{79.2 $\pm$ 1.02} & \multicolumn{1}{l|}{93.49 $\pm$ 0.86} & \multicolumn{1}{l|}{91.86 $\pm$ 0.38} & \textbf{94.2 $\pm$ 0.3}
\end{tabular}
\caption{Average accuracy obtained by the algorithms when applied to data with pre-processing.}
\label{tab-resultados-sg}
\end{table}
We can observe that the 1D-CNN no longer dominates the results as in the previous case. For the \textit{Barley}, \textit{Chickpea} and \textit{Sorghum} datasets, KNN was superior to the CNN. For the other datasets, the CNN maintained its superiority, except for the \textit{Oil} dataset, where the MOCHM obtained the best accuracy.
In general, the performance gain of the algorithms when using pre-processed data is noticeable. We can evaluate the gain from two aspects: 1) the increase in the average accuracy of the algorithms across the datasets, with emphasis on the MOCHM and KNN algorithms, and 2) the reduction in standard deviation, as observed for the 1D-CNN.
The MOCHM achieved a gain of approximately $50\%$ in accuracy on the \textit{Oil} dataset, achieving the best overall performance. KNN also achieved significant gains in accuracy, surpassing the CNN on the \textit{Barley}, \textit{Chickpea} and \textit{Sorghum} datasets. Regarding the decrease in standard deviation, the CNN presents this characteristic for all datasets. Despite not presenting as high a gain in accuracy as other algorithms, the reduction in standard deviation indicates greater stability, generating models with a higher degree of reliability. This fact is explicit in the results obtained by the 1D-CNN on the \textit{Barley} dataset, where it reached an accuracy of $47.4 \pm 4.7$ in the experiments without pre-processing against $46.16 \pm 0.97$ in the experiments with pre-processing.
Despite the superior results obtained by the 1D-CNN in terms of accuracy for the experiments performed, it is necessary to analyze the trade-off in the results. As the 1D-CNN demands a higher computational cost than the other algorithms, it is necessary to analyze in which cases its use is preferred. Table \ref{tab-tempo-treinamento} presents the training time required for a model of each algorithm per dataset. It is clear that the larger the dataset, the longer the time needed to train the 1D-CNN. Furthermore, it was the only algorithm that, in most cases, needs minutes to train the model. In problems with a large number of samples and high dimensionality, the efficient use of pre-processing algorithms combined with simple algorithms, such as KNN, might be a viable alternative to the 1D-CNN.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & MOCHM & PLS-DA & SVM & KNN & 1D-CNN \\ \hline
Barley & - & 0.1328 & 1.7173 & 0.3353 & 603.12 \\
Chickpea & - & 0.0855 & 0.8775 & 0.2436 & 485.30 \\
Sorghum & - & 0.0255 & 0.2826 & 0.1020 & 443.05 \\
Coffee & 22.10 & 0.1196 & 0.0083 & 0.0795 & 4.22 \\
Meat & 18.30 & 0.0377 & 0.0136 & 0.0277 & 45.27 \\
Oil & 31.55 & 0.0152 & 0.0194 & 0.0231 & 106.07 \\
Fruit & 32.26 & 0.0377 & 0.3920 & 0.2325 & 81.13
\end{tabular}
\caption{Model Training time for each algorithm in seconds.}
\label{tab-tempo-treinamento}
\end{table}
\paragraph{Data visualization:} In order to obtain a 2D visualization of the influence of data pre-processing, t-Distributed Stochastic Neighbor Embedding (t-SNE) \citep{van2008visualizing} was applied to the cases without and with pre-processing.
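A minimal scikit-learn sketch of this projection is given below; the
spectra and labels are placeholders shaped like the Coffee dataset.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

X = np.random.rand(56, 700)      # placeholder spectra
y = np.random.randint(0, 2, 56)  # placeholder class labels

embedded = TSNE(n_components=2, random_state=0).fit_transform(X)
plt.scatter(embedded[:, 0], embedded[:, 1], c=y)
plt.show()
\end{verbatim}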
Figure \ref{feature-coffee} shows the original and the pre-processed data for the \textit{Coffee} dataset. In the original data it is not possible to observe clear groups that distinguish the two classes present in the dataset, so a simple classifier would have difficulty separating the samples correctly through a linear function. However, looking at the pre-processed data, we see the data grouped in a concise way, showing that the SG filter helps the model to discriminate the data.
\begin{figure}[httb!]
\begin{subfigure}[httb!]{0.5\textwidth}
\includegraphics[width=\textwidth]{figuras/feature_visualization/coffee_raw.png}
\caption{}
\end{subfigure}
%
\begin{subfigure}[httb!]{0.5\textwidth}
\includegraphics[width=\textwidth]{figuras/feature_visualization/coffee_filter.png}
\caption{}
\end{subfigure}
\caption{Visualization of the spatial arrangement of (a) original data and (b) data after pre-processing using t-SNE for the Coffee dataset.}
\label{feature-coffee}
\end{figure}
%
The results show the importance of applying pre-processing algorithms to spectral data for noise reduction and how this positively impacts the results obtained by each algorithm, whether in terms of accuracy gain or model stability. Convolutional neural networks are promising in the analysis of spectral data, since the results obtained indicate superior performance over traditional algorithms. It is worth mentioning the ability of the 1D-CNN to generalize well for small datasets. Furthermore, algorithms such as KNN prove to be a viable and simple alternative in most cases when combined with a data pre-processing algorithm, since they obtain good results and do not demand high computational costs. Based on the results obtained, we extend our work to a case study using the 1D-CNN, PLS-DA, and KNN in the next section, since these algorithms obtained the best average performance in the experiments previously investigated.
\subsection{Covid-19 case study}
The Sars-Cov-2 virus (COVID-19) pandemic has emerged as the biggest and most challenging health problem facing the world in recent decades. With the first case confirmed in December 2019 in the Wuhan District, China \citep{zhu2020novel}, the virus spread rapidly within China and to other countries, and in January 2020 the World Health Organization (WHO) declared a state of global emergency \citep{WHO2019_COVID}.
Due to its rapid transmission characteristics and high transmission rate, within a few months of the first confirmed case the virus had been reported in 144 countries around the world \citep{world2020world}. In order to carry out sanitary restrictions and to assess and report on the spread of the virus, mass testing was started in several territories that had cases of COVID-19. However, the available testing methods have undesirable characteristics that can influence the decisions taken to contain the virus: first, the high cost of available tests, causing the unavailability of testing in underdeveloped countries; second, the time needed to obtain definitive results on the presence of the virus in the patient; third, the high rate of false-negative tests. Among the mentioned problems, the high rate of false negatives is a crucial factor with a direct impact on virus containment. Individuals tested as false negatives may relax the sanitary measures imposed by health organizations and increase the spread of the virus \citep{west2020covid}. Recently, there has been growing research interest in the automated diagnosis of COVID-19 using artificial intelligence \citep{khana2021ESWA}.
In this context, automated diagnostic applications using data from spectrometers may become an alternative to traditional tests. Since spectrometers are portable and easy-to-use devices, the data collection performed by such devices, in conjunction with computational classification models, might provide a diagnosis within minutes. Such characteristics would mitigate two of the undesirable characteristics of current testing: the high cost and the time required for the result. Finally, in order to address false-negative results, the third problem found in current testing, two additional metrics common in biomedical applications will be included in the analysis of results along with accuracy (ACC): specificity (ESPEC) and sensitivity (SE).
The specificity is calculated by:
\begin{equation}\label{eq-precision}
ESPEC = \frac{TN}{TN + FP}
\end{equation}
where $TN$ represents the true negatives, i.e., the samples that belonged to the control group and were classified correctly, and $FP$ the false positives, representing the control group samples that were classified as positive for Covid-19.
The sensitivity metric (also known as recall) is calculated by:
\begin{equation}\label{eq-recall}
SE = \frac{TP}{TP + FN}
\end{equation}
where $TP$ represents the true positives, i.e., the virus samples that were correctly classified, and $FN$ represents the false negatives, i.e., the samples that belonged to the Covid-19 group but were classified as belonging to the control group.
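Both metrics follow directly from the confusion matrix of a binary
classifier; a short sketch with placeholder labels is given below.
\begin{verbatim}
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])  # 1 = COVID-19, 0 = control
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])  # placeholder predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)  # ESPEC as defined above
sensitivity = tp / (tp + fn)  # SE: fraction of true cases detected
\end{verbatim}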
By using such metrics, we can answer two important questions:
\begin{itemize}
\item Among those samples labeled as a control group, in this case, samples that did not carry COVID-19, how many did the model correctly classify?
\item For all the samples that are positive for COVID-19, how many did the model classify as carrying the virus?
\end{itemize}
Especially in the case of COVID-19, due to its high dispersion and lethality, we are interested in answering the second question, as this information is directly linked to the sensitivity metric. By obtaining a high sensitivity rate in the model, we ensure that cases in which COVID-19 is present in the sample are classified as positive for the virus, i.e., the number of false-negative diagnoses is reduced, a current problem faced in the test protocols used for detecting the virus.
\cite{yin2021efficient} collected spectral samples from a control group and from patients infected with COVID-19 and made them available for public use. The dataset provided includes 309 samples, of which 150 belong to the control group and 159 to the group of patients infected by the virus. The collected spectra are shown in Figure \ref{spectra-covid}.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figuras/spectra_plots/covid_spectra.png}
\caption{Absorption levels for each wavelength in nanometers for the spectrum collected from the Covid-19 dataset \citep{yin2021efficient}.}
\label{spectra-covid}
\end{figure}
Table \ref{tab-resultados-covid} presents the accuracy, specificity, and sensitivity results obtained by each algorithm when applied to the \textit{Covid} dataset \citep{yin2021efficient}, with original data and data pre-processed by the SG filter. Similar to the results obtained for the other datasets reported in the previous section, the 1D-CNN obtained the best results, with an average accuracy of $96.5 \pm 1.1$ when using the SG filter in the pre-processing step. Furthermore, it is important to highlight the high values of specificity and sensitivity, with averages of $98.06 \pm 1.86$ and $94.06 \pm 2.43$, respectively. As explained before, low sensitivity values indicate a high number of false negatives, considered the worst scenario in the diagnosis of COVID-19. Such behavior can be observed in the results obtained by the KNN, which produced, on average, a rate higher than $25\%$ of false negatives for data without pre-processing and about $12\%$ for pre-processed data. This indicates the infeasibility of the algorithm for real applications in the context of COVID-19. The PLS-DA algorithm presented, in general, promising results for all the analyzed metrics, although inferior to those presented by the 1D-CNN.
\begin{table}[httb!]
\centering
\begin{tabular}{c|c|l|l}
\hline
COVID-19 & ACC & \multicolumn{1}{c|}{ESPEC} & \multicolumn{1}{c}{SE} \\ \hline
1D-CNN & 94.19 $\pm$ 1.6 & \multicolumn{1}{c|}{96.82 $\pm$ 2.3} & \multicolumn{1}{c}{94.72 $\pm$ 3.39} \\
SG + 1D-CNN & \textbf{96.5 $\pm$ 1.1} & \multicolumn{1}{c|}{\textbf{98.06 $\pm$ 1.86}} & \multicolumn{1}{c}{\textbf{94.06 $\pm$ 2.43}} \\
PLS-DA & 93.7 $\pm$ 1.81 & 92.85 $\pm$ 1.8 & 90.3 $\pm$ 2.1 \\
SG + PLS-DA & 94.17 $\pm$ 0.7 & 95.12 $\pm$ 1.15 & 92.1 $\pm$ 1.9 \\
KNN & 83.17 $\pm$ 4.84 & 92.78 $\pm$ 7.1 & 73.59 $\pm$ 10.1 \\
SG + KNN & 93.2 $\pm$ 1.58 & 97.23 $\pm$ 0.9 & 88.71 $\pm$ 1.1
\end{tabular}
\caption{Results obtained for the Covid dataset.}
\label{tab-resultados-covid}
\end{table}
Figure \ref{fig-confusao} shows the confusion matrices for each algorithm using pre-processing. Each matrix corresponds to the model whose accuracy was closest to the average accuracy obtained through the \textit{cross-validation} technique. The confusion matrix of the 1D-CNN presents a high hit rate and only two false negatives, which confirms the results presented previously. The KNN confusion matrix presents a high number of false negatives, in agreement with the results presented in Table \ref{tab-resultados-covid}.
\begin{figure}[h!]
\centering
\subfloat[SG + 1D-CNN - Confusion matrix ]{\includegraphics[width=0.45\linewidth]{figuras/confusion_matrix/covid_sg_cnn.png}}
\subfloat[SG + PLS-DA - Confusion matrix]{\includegraphics[width=0.45\linewidth]{figuras/confusion_matrix/covid_sg_pls.png}}\\
\subfloat[SG + KNN - Confusion matrix]{\includegraphics[width=0.45\linewidth]{figuras/confusion_matrix/covid_sg_knn.png}}
\caption{Confusion matrices obtained by the models.}
\label{fig-confusao}
\end{figure}
The spatial arrangement of the data is shown in Figure \ref{feature-covid}. Figure \ref{fig-a} shows the visualization of the original data, Figure \ref{fig-b} the data after pre-processing, and Figures \ref{fig-c} and \ref{fig-d} the features extracted by the 1D-CNN from original and pre-processed data, respectively. Figure \ref{fig-a} shows that the original data present overlapping classes and are not well grouped. In Figure \ref{fig-b}, although the application of the SG filter improves the separation of the data into better defined groups, there is still an overlap of data in one of the groups, which makes the classification task difficult for algorithms that do not have feature extraction mechanisms, like KNN. This problem can be solved by feature extractors, present in the 1D-CNN. The visualization of the features extracted by the 1D-CNN can be observed in Figures \ref{fig-c} and \ref{fig-d}, where we notice a better arrangement of groups than in Figures \ref{fig-a} and \ref{fig-b}. In particular, in Figure \ref{fig-c}, we note that the 1D-CNN was able to group the data even without using the pre-processing technique.
\begin{figure}[h!]
\begin{subfigure}[httb!]{0.5\textwidth}
\includegraphics[width=\textwidth]{figuras/feature_visualization/covid_raw.png}
\caption{Spatial arrangement of original data.}
\label{fig-a}
\end{subfigure}
%
\begin{subfigure}[httb!]{0.5\textwidth}
\includegraphics[width=\textwidth]{figuras/feature_visualization/covid_filter.png}
\caption{Spatial arrangement of pre-processed data.}
\label{fig-b}
\end{subfigure}
\begin{subfigure}[httb!]{0.5\textwidth}
\includegraphics[width=\textwidth]{figuras/feature_visualization/covid_feat_raw.png}
\caption{Spatial arrangement of features \\ extracted by 1D-CNN from original data.}
\label{fig-c}
\end{subfigure}
%
\begin{subfigure}[httb!]{0.5\textwidth}
\includegraphics[width=\textwidth]{figuras/feature_visualization/covid_feat_filter.png}
\caption{Spatial arrangement of features \\ extracted by 1D-CNN from pre-processed data.}
\label{fig-d}
\end{subfigure}
\caption{Data spatial arrangement visualization using t-SNE for the Covid-19 dataset.}
\label{feature-covid}
\end{figure}
Recent studies have applied spectroscopy techniques in conjunction with classification algorithms to analyze data collected from COVID-19 samples. \cite{yin2021efficient} applied the SVM algorithm to their dataset to perform the classification between the control group and the virus-infected group. \cite{Barauna2020} used ATR-FTIR spectroscopy to collect data from COVID-19 samples; their study used 111 samples for the control group and 70 positive for COVID-19, and the authors applied a Genetic Algorithm with Linear Discriminant Analysis (GA-LDA) to the data pre-processed by the SG filter to perform the classification task. \cite{carlomagno2021covid} also proposed the use of CNN-type networks for the classification of spectral samples from COVID-19, with a dataset containing 33 control samples and 30 positive samples for the virus. Table \ref{covid-final} presents the results obtained in this work and in the works previously mentioned.
\begin{table}[httb!]
\centering
\begin{threeparttable}
\begin{tabular}{c|c|l|l}
\hline
Algorithm & ACC & \multicolumn{1}{c|}{ESPEC} & \multicolumn{1}{c}{SE} \\ \hline
SG + 1D-CNN* & 96.5 $\pm$ 1.1 & \multicolumn{1}{c|}{98.06 $\pm$ 1.86} & \multicolumn{1}{c}{94.06 $\pm$ 2.43} \\
\cite{yin2021efficient}* & {91 $\pm$ 4} & 93 $\pm$ 6 & {89 $\pm$ 7} \\
\cite{Barauna2020}** & {90 $\pm$ 0} & 89 $\pm$ 0 & {95 $\pm$ 0} \\
\cite{carlomagno2021covid}** & {97.8 $\pm$ 0} & 98 $\pm$ 0 & {97.5 $\pm$ 0}
\end{tabular}
\begin{tablenotes}\footnotesize
\small
\item[*] Tests performed using the same database.
\item[**] Tests reporting author's own dataset and not yet publicly available.
\end{tablenotes}
\end{threeparttable}
\caption{Positioning of our results in respect to other studies in the literature for different spectral datasets obtained from Sars-Cov2 spectroscopy samples.}
\label{covid-final}
\end{table}
Although the dataset used was not the same for all studies, the approaches applied in this work and in the work of \cite{carlomagno2021covid} using 1D-CNNs obtained superior results with respect to the metrics used. As noted earlier in the comparative study, steps such as data pre-processing can significantly influence the final classification results. It is worth mentioning the superiority of the 1D-CNN in relation to the SVM presented by \cite{yin2021efficient} using the same Covid-19 dataset. The results presented by \cite{Barauna2020} using the GA-LDA algorithm show promising sensitivity values, surpassing those obtained in this work by $1\%$; however, they do not report standard deviation values. The results presented by \cite{carlomagno2021covid} and \cite{Barauna2020} might change if evaluated through methods such as \textit{cross-validation}. In general, the results obtained using data pre-processed by the SG filter and the 1D-CNN are promising in terms of the accuracy and sensitivity metrics for the dataset investigated, indicating the feasibility of automated systems to aid in the diagnosis of COVID-19 using spectroscopy data.
\section{Conclusion}
In this work, we presented a 1D-CNN to tackle the classification of spectral data. Firstly, we carried out a comparative study between the 1D-CNN and machine learning algorithms such as SVM, PLS-DA, KNN, and also MOCHM, with the objective of evaluating the performance of these algorithms on chemometrics problems. We investigated the impact of data pre-processing and analyzed how this technique influences the performance of each algorithm. The results indicate that the 1D-CNN obtained an average performance superior to the other algorithms investigated when applied to spectroscopy datasets. Furthermore, data pre-processing proved to be a crucial step in the problem modeling process, influencing the gain in average accuracy and the reduction of the standard deviation, which impacts the reliability of the results. Besides that, the reduced number of samples present in the datasets did not appear to be an obstacle to the generalization of the models for the investigated datasets. Given the characteristics that favor the use of spectroscopy techniques in various everyday problems, a case study with data obtained from samples of patients with COVID-19 was carried out using the 1D-CNN, PLS-DA, and KNN for comparison. The algorithms were applied to a public dataset of samples from control-group patients and patients infected with COVID-19. The 1D-CNN together with the Savitzky–Golay filter obtained the best results and was used for comparison with other studies involving the diagnosis of COVID-19 from spectral data. The results show that approaches involving 1D-CNNs provided the best performance in terms of accuracy, specificity, and sensitivity. In particular, in problems such as COVID-19 detection, a high sensitivity rate is much desired and needed, since false negatives represent the worst scenario. In this work, the results obtained were, on average, $96\%$ accuracy, $98\%$ specificity, and $94\%$ sensitivity, indicating the feasibility of solutions for virus diagnosis using data obtained through spectroscopy with a 1D-CNN as classifier. In future works, the influence of the signal processing algorithms used in the data pre-processing step can be further investigated. Also, many problems present information beyond what the spectrum provides, which may be exploited.
\section*{Acknowledgments}
R.A. Krohling thanks the Brazilian research agency Conselho Nacional de Desenvolvimento Científico e Tecnólogico (CNPq) - grant n.304688/2021-5.
\bibliographystyle{plainnat}
\section{Introduction}
Plasmids are highly common in natural bacterial strains and are widely used in studies
of gene expression~\cite{Solar1998}. They have been seen as a model for genomic replication and partition~\cite{Solar1998,Nordstroem2006} and studied as genetic control systems, possibly subject to noise~\cite{Paulsson2001}. A number of techniques have been used to measure plasmid copy numbers (PCN). DNA titration is the simplest, but least precise. Quantitative polymerase chain reaction (qPCR)~\cite{Lee2006} is often used and gives access to mean PCN in a population. Two \emph{in vivo} labeling techniques may \emph{a priori} give access to PCN distributions when applied to single cells: fusions of a fluorescent protein with a transcription factor that binds the plasmids~\cite{Belmont2001,Pogliano2001} or insertion of a gene coding for a fluorescent protein into the plasmids~\cite{Bagh2008}. However, both have limitations that prevent them from giving access to more than the mean PCN~\cite{WongNg2010}.
In the remainder of this Introduction we briefly recall the setup of the experiments reported previously, making use of dual fluorescence reporters, that allowed us to infer the second moment of PCN distributions~\cite{WongNg2010}. In Section \ref{SimpleModel} we derive the expression for PCN mean and noise in a simple case, where only fluctuations of gene expression are considered. The realistic case, taking into account all sources of fluctuations of the actual experiment, is presented in Section \ref{CompleteModel}. Section \ref{Results} presents the values obtained for PCN mean and noise when one uses the experimentally measured quantities. These results and the principle of this work are then discussed. Appendixes present some computations in greater detail.
The gene {\it egfp}~\cite{Tsien1998}, coding for the green fluorescent protein EGFP, was fused to the inducible, strong promoter {\it PtacI}~\cite{Boer1983} and then inserted in the chromosome of an {\it E. coli} strain. The bacteria were then transformed with either one of the four plasmids studied here, which contained the fusion {\it PtacI-mOrange}~\cite{Shaner2004}: we thus obtained strains expressing EGFP and the orange fluorescent protein mOrange at the same time, under the same transcriptional control. After one hour of induction with IPTG, all protein expression was blocked. Cells were incubated overnight so that all fluorescent proteins could acquire their mature form. For each of the four strains, green and orange fluorescence intensities of individual cells were then measured. In each experiment at least 10,000 cells were observed, and at least three experiments were done in each condition.
In general, disentangling the various contributions to the final distribution of fluorescence would be a difficult problem. However, making some assumptions on the gene expression processes, we will be able to express the first and second moments of the number of fluorescent proteins as functions of those of copy numbers and to invert these relations to find how to relate the experimental measurements to the distribution of PCN. The next section presents this strategy in a simple case.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.35]{Plasmides_Experience_Annote.jpg}
\caption{(Color online) Cartoon of the lineage of a bacterium during protein production induction, here depicted with one division (only one of the two final cells is shown). Fluorescence intensities of single cells are measured at the end of induction. The orange, resp. green, intensities are proportional to the number of orange proteins $P_O$, resp. green proteins $P_G$, in the observed cell, shown as orange (dark gray), resp. green (light gray), dots. These proteins were produced during all the induction by a varying number of {\it mOrange} or {\it egfp} copies ($n_O$ and $n_G$) and randomly distributed among daughter cells at each division.}
\label{FigExp}
\end{center}
\end{figure*}
\section{Simple model}
\label{SimpleModel}
We suppose here that during the induction, bacteria do not grow, the plasmids and chromosomes do not replicate, the protein production does not depend on time \footnote{This hypothesis is not necessary (supposing that it does not depend on time \emph{on average} would lead to the same result), but it makes the notation simpler.} and the age distribution of bacteria is uniform.
We denote by $P_a^i$ the contribution of the copy $i$ of the gene $a$ ($a=O$ or $G$ for the genes {\it mOrange} or {\it egfp}) to the total number of proteins $P_a$ at the end of induction in one cell, and by $n_a$ the number of copies of the gene $a$ in that cell (see Fig.~\ref{FigExp}). One can write:
\begin{displaymath}
P_a =\sum_{i=1}^{n_a}P_a^i.
\end{displaymath}
The average (over the population) of $P_a$ can thus be written:
\begin{displaymath}
\langle P_a\rangle = \sum_{n_a}\sum_{i=1}^{n_a}\sum_{P_a^i}p(n_a, P_a^i)\, P_a^i,
\end{displaymath}
where $p(n_a, P_a^i)$ is the joint probability of $n_a$ and $P_a^i$. We can suppose that the distribution of the number of proteins produced by each copy does not depend on the particular copy considered nor on the number of copies (we measured the same distributions of green fluorescence, i.e. of expression from the chromosome, for strains bearing both high and low copy number plasmids~\cite{WongNg2008}). Thus:
\begin{eqnarray*}
\langle P_a\rangle & = & \sum_{n_a}p(n_a)\, n_a\sum_{P_a^1}p(P_a^1)\, P_a^1 \\
& = & \langle n_a\rangle\langle P_a^1\rangle.
\end{eqnarray*}
Moreover we can suppose that on average the number of proteins produced by a copy of a gene does not depend on the gene (both genes are under the same promoter). Hence, as expected:
\begin{equation}
\frac{\langle n_O\rangle}{\langle n_G\rangle} =
\frac{\langle P_O\rangle}{\langle P_G\rangle}.
\label{EqAvSimple}
\end{equation}
The moments of order 2 can similarly be written:
\begin{displaymath}
\langle P_aP_b\rangle = \sum_{n_a, n_b}
\sum_{i=1}^{n_a}\sum_{j=1}^{n_b}\sum_{P_a^i, P_b^j}p(n_a, n_b, P_a^i, P_b^j)
\, P_a^i P_b^j.
\end{displaymath}
where $P_a$ and $P_b$ are evaluated in the same cell.
In the case of different genes, we can suppose that the correlation does not depend on the particular copies considered, nor on their numbers. Thus:
\begin{eqnarray*}
\langle P_OP_G\rangle & = &\!\! \sum_{n_O, n_G}\!\! p(n_O, n_G)\, n_O n_G
\!\!\!\sum_{P_O^1, P_G^1}\!\! p(P_O^1, P_G^1)\, P_O^1 P_G^1\\
& = & \langle n_On_G\rangle\langle P_O^1P_G^1\rangle.
\end{eqnarray*}
In the case of the same gene, we can suppose that two different copies correlate like two copies of different genes ($\langle P_a^iP_a^j\rangle = \langle P_O^1P_G^1\rangle,\;\forall i\neq j$) and that the auto-correlation of one copy does not depend on the particular copy or gene considered ($\langle (P_a^i)^2\rangle = \langle (P^1)^2\rangle,\;\forall a, i$). Then:
\begin{displaymath}
\langle P_a^2\rangle = \langle n_a\rangle\langle (P^1)^2\rangle
+ \langle n_a(n_a -1)\rangle\langle P_O^1P_G^1\rangle.
\end{displaymath}
Combining these last two expressions with Eq.~\ref{EqAvSimple}, we obtain:
\begin{displaymath}
\langle n_O^2\rangle\! =\! \frac{\langle P_O\rangle}{\langle P_G\rangle}
\langle n_G^2\rangle + \frac{1}{\langle P_OP_G\rangle}\!\! \left(\!\! \langle P_O^2\rangle
\!-\!\frac{\langle P_O\rangle}{\langle P_G\rangle}\langle P_G^2\rangle\!\! \right)
\!\! \langle n_On_G\rangle.
\end{displaymath}
Since the replication of the chromosome is well controlled~\cite{Skarstad1986,Nordstroem2006}
we can suppose
that the variance of the chromosome copy number vanishes ($\langle n_G^2\rangle\approx\langle n_G\rangle^2$) and that the plasmid and chromosome copy numbers are uncorrelated
($\langle n_On_G\rangle\approx\langle n_O\rangle\langle n_G\rangle$). Let $\eta$ be the PCN noise, defined by: $\eta^2=(\langle n_O^2\rangle-\langle n_O\rangle^2)/\langle n_O\rangle^2$. Then:
\begin{equation}
\eta^2=\frac{\langle P_G\rangle}{\langle P_O\rangle}
+ \frac{1}{\langle P_OP_G\rangle}
\!\left(\!\frac{\langle P_G\rangle}{\langle P_O\rangle}\langle P_O^2\rangle
-\langle P_G^2\rangle\!\right)\!-\!1,
\end{equation}
which, it turns out, does not depend on the chromosome copy number or any other external inputs, but solely on quantities directly measured in this experiment.
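For concreteness, this expression can be evaluated directly from the single-cell intensities; the unknown calibration factor between intensity and protein number, common to both colors, cancels in all the ratios involved. A minimal sketch (not part of the original analysis):
\begin{verbatim}
# Sketch: simple-model PCN noise from per-cell fluorescence
# intensities P_O, P_G (NumPy arrays, arbitrary common units).
import numpy as np

def pcn_noise_simple(P_O, P_G):
    mO, mG = P_O.mean(), P_G.mean()
    eta2 = (mG / mO
            + ((mG / mO) * (P_O**2).mean() - (P_G**2).mean())
              / (P_O * P_G).mean()
            - 1.0)
    return np.sqrt(eta2)
\end{verbatim}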
\section{Complete model}
\label{CompleteModel}
We want now to also take into account sources of fluorescence fluctuation other than gene expression. We assume that \emph{all cells have exactly the same division time $T$}. Two studies report a small variability of division times, with a standard deviation of the growth time constant of $\sim$10\% of the average~\cite{Megerle2008,Wang2010}.
We denote by $t_0$ the age of a cell at the beginning of induction. Under this hypothesis, the distribution of ages $t_0$ is exponential~\cite{Neidhart1996}: $p(t_0)=(2\ln 2/T)\,2^{-t_0/T}$. We will also consider that the induction time (one hour) is a multiple of the division time. This is true at $30$ and $37\,^{\circ}\mathrm{C}$, where we measured cell cycles of 1 h and 30 min respectively, but not for intermediate temperatures (this is discussed in Section \ref{Results}). We will present calculations with cells dividing twice during the induction, i.e. a cell cycle of 30 min; more or fewer divisions only change the numerical pre-factors~\cite{Ghozzi2009}.
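For completeness, ages can be drawn from this distribution (as needed, for instance, in the simulations discussed later) by inverting the cumulative distribution function $F(t_0)=2(1-2^{-t_0/T})$; a minimal sketch:
\begin{verbatim}
# Sketch: sampling cell ages t_0 from p(t_0) = (2 ln2 / T) 2^(-t_0/T)
import numpy as np

def sample_ages(T, size, rng=np.random.default_rng(0)):
    u = rng.uniform(size=size)          # uniform CDF values in [0, 1)
    return -T * np.log2(1.0 - u / 2.0)  # inverse of F(t0) = 2(1 - 2^(-t0/T))
\end{verbatim}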
At each cell division, fluorescent proteins are randomly inherited by one of the two daughter cells, thus adding to the fluorescence fluctuations.
As discussed in Appendix~\ref{App:Div}, this contribution turns out to be small: to a good precision, half of the fluorescent proteins are inherited by each daughter cell.
Following one lineage during the induction, we can now express the number of fluorescent proteins at the end of induction in a given cell:
\begin{displaymath}
P_a=\left(\frac{1}{4}\int_{t_0}^T +\frac{1}{2}\int_{T}^{2T} +\int_{2T}^{2T+t_0}\right)\!\!\sum_{i=1}^{n_a(t)} \alpha_a(i,t)\,dt,
\end{displaymath}
where we took the age of the cell at the beginning of induction $t_0$ as the initial time and introduced $\alpha_a(i,t)$, the rate of protein production at time $t$ from the copy $i$ of the gene $a$ \footnote{Here both $\alpha_a$ and $n_a$ are particular realizations of rate and copy number, and thus are ``noisy'' functions. One can fix an arbitrarily small time step and consider the \emph{initiations of transcription} to give a precise, well defined meaning to $\alpha_a$ without disregarding the discrete replication, transcription, translation and maturation steps.}.
\subsection{Fluorescence averages}
To compute the average of $P_a$ we introduce the joint probability $p[t_0,n_a,\alpha_a]$, which is now a functional; the integral is performed over all possible functions $n_a$ and $\alpha_a$:
\begin{equation}
\langle P_a \rangle = \int dt_0 \mathcal{D}[n_a] \mathcal{D}[\alpha_a]\;
p[t_0,n_a,\alpha_a]\, P_a[t_0,n_a,\alpha_a]. \label{Eq:Av1}
\end{equation}
A number of assumptions on gene expression and replication, similar to those presented in the Section~\ref{SimpleModel}, are detailed in Appendix~\ref{App:Assump}. We use here the hypotheses \emph{(i)} to \emph{(iv)} to simplify Eq.~\ref{Eq:Av1} \emph{without having to postulate explicit models for gene expression or replication}. We then find:
\begin{displaymath}
\langle P_a\rangle=\frac{3}{4}\, T \langle \alpha \rangle \left( \langle\overline{n_a}\rangle + \frac{1}{T}\int_0^T\!\!dt_0\, p(t_0) \int_0^{t_0}\!\!dt\, \langle n_a(t) \rangle \right),
\end{displaymath}
where $\overline{\bullet}$ is the average over one cycle, which commutes with the average over the population.
In general we cannot invert this relation so as to express the average copy number as a function of the average protein number, and we do not know the plasmid replication systems well enough to evaluate the second term in the parentheses. It is nevertheless possible to bound its ratio to the mean copy number. We thus define $\mathcal{R}_a = ((1/T)\int_0^T dt_0\, p(t_0) \int_0^{t_0} dt\,\langle n_a(t) \rangle)/\langle\overline{n_a}\rangle$, and use it to express the mean PCN per chromosome:
\begin{equation} \label{nOnG}
\frac{\langle\overline{n_O}\rangle}{\langle\overline{n_G}\rangle} =\left(\frac{1+\mathcal{R}_G}{1+\mathcal{R}_O}\right)\frac{\langle P_O\rangle }{\langle P_G\rangle }.
\end{equation}
We show in Appendix~\ref{App:RST} that $\mathcal{R}_a \in [0.15,0.45]$. We also computed it after postulating various shapes for $\langle n_a \rangle$ as a function of time and propose that this interval can be reduced to $[0.36,0.44]$ (see Appendix~\ref{App:Test}). The results for the four plasmids we studied, at various temperatures, are presented in Section \ref{Results}.
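As an illustration, $\mathcal{R}_a$ is easy to evaluate numerically for any postulated copy-number history; the sketch below does so for a hypothetical profile in which the average copy number doubles linearly over the cycle (in the spirit of, but not necessarily representative of, the test functions of Appendix~\ref{App:Test}):
\begin{verbatim}
# Sketch: numerical evaluation of R_a for a hypothetical profile
# <n_a(t)> doubling linearly over the cycle (T set to 1).
import numpy as np
from scipy.integrate import quad, dblquad

T = 1.0
n = lambda t: 1.0 + t / T                            # test profile
p = lambda t0: (2 * np.log(2) / T) * 2.0**(-t0 / T)  # age distribution

nbar = quad(n, 0, T)[0] / T
num = dblquad(lambda t, t0: p(t0) * n(t),            # inner t in [0, t0]
              0, T, lambda t0: 0.0, lambda t0: t0)[0] / T
print("R =", num / nbar)   # ~0.39, inside the interval [0.36, 0.44]
\end{verbatim}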
\subsection{Fluorescence cross-correlations}
We will follow the same strategy for the correlations, namely bound terms related to plasmid or chromosome replication and partition. Besides those already mentioned, we use the assumptions \emph{(v)} and \emph{(vi)} presented in Appendix~\ref{App:Assump} and introduce:
\begin{displaymath}
\mathcal{S}_{ab}=\frac{1}{\langle\overline{n_a}\;\overline{n_b}\rangle}\frac{1}{T^2}\!\!\int_0^T\!\!\!dt_0\, p(t_0)\!\!\int_0^{t_0}\!\!\!dt\!\!\int_0^{t_0}\!\!\!dt' \langle n_a(t)n_b(t')\rangle.
\end{displaymath}
We can now write:
\begin{equation} \label{POPG}
\langle P_OP_G\rangle = \frac{9}{16}T^2\langle \alpha_O\alpha_G\rangle
(1+\mathcal{R}_O +\mathcal{R}_G + \mathcal{S}_{OG})
\langle\overline{n_O}\rangle\langle\overline{n_G}\rangle,
\end{equation}
where $P_O$ and $P_G$ are evaluated in the same cell.
We show in Appendix~\ref{App:RST} that $\mathcal{S}_{OG}\in [0,0.45]$, and argue that this interval can be reduced to $[0.20,0.28]$ (see Appendix~\ref{App:Test}).
\subsection{Fluorescence auto-correlations}
We consider now the moment of order 2 for the same gene, i.e. $\langle P_a^2\rangle$, with $a=O$ or $G$.
We make two more assumptions, \emph{(vii)} and \emph{(viii)} in Appendix~\ref{App:Assump}, and denote by $C_\alpha(|t-t'|)$ the auto-correlation function at two times $t$ and $t'$ of the rate of fluorescent protein production $\alpha$.
We expect the results not to be affected by the particular form this auto-correlation function takes; to test this we make two extreme hypotheses: (A) of very short ``memory'', (B) of infinite (over the whole induction time) ``memory''.
In the hypothesis (A), we suppose that after a very short time $\tau$ the expression of a copy of a gene correlates with itself the same way it correlates with other copies. This makes sense if $\tau$ is small compared to the replication time; and indeed, we expect a particular copy auto-correlation to stem from multiple translations of a given mRNA, which has a typical life time of the order of the minute in bacteria, or from transcriptional bursts, which were shown to happen over short time scales~\cite{Golding2005}. In contrast, genes are on average replicated once per cell cycle, i.e. every few tens of minutes.
In this hypothesis we consider $C_\alpha$ to be a function peaked at 0, with a non-zero value beyond a small time $\tau$ that does not depend on whether a previous copy was the ancestor of the considered copy or not:
\begin{eqnarray*}
C_\alpha^{\rm A}(|t-t'|) & = & \langle \alpha^2 \rangle\times\tau\delta(t-t')\\
&&+\langle \alpha_O\alpha_G\rangle(1-\tau\delta(t-t')) -\langle\alpha\rangle^2.
\end{eqnarray*}
This gives:
\begin{eqnarray}
\langle P_a^2\rangle_{\mathrm{A}}
&=& \frac{9}{16}T^2\langle \alpha_O\alpha_G\rangle (1+2\mathcal{R}_a + \mathcal{S}_{aa}) \langle(\overline{n_a})^2\rangle\nonumber\\
&+& \frac{5}{16}\tau T(\langle \alpha^2\rangle\!-\!\langle \alpha_O\alpha_G\rangle )
(1\!+\!3\mathcal{R}_a)\langle\overline{n_a}\rangle. \label{P2A}
\end{eqnarray}
In the hypothesis (B), we suppose that $C_\alpha$ is constant:
\begin{displaymath}
C_\alpha^{\rm B}(|t-t'|)=\langle \alpha^2 \rangle-\langle\alpha\rangle^2.
\end{displaymath}
(We expect the actual form of $C_\alpha$ to be intermediate between those two, namely a smooth declining function on a time scale of a few minutes.)
The hypothesis (B) is less realistic. It could correspond to mutations distinguishing different copies of a given gene. By noting that at any previous time each copy has exactly one ancestor, this translates into:
\begin{eqnarray*}
\sum_{i=1}^{n_a(t)}\sum_{i'=1}^{n_a(t')}\!\!\langle\alpha_a(i,t)\alpha_a(i',t')\rangle_{\mathrm{B}}=\langle \alpha_O\alpha_G\rangle n_a(t)n_a(t')\\
+ (\langle \alpha^2\rangle\!-\!\langle\alpha_O\alpha_G\rangle )
(n_a(t)\theta(t\!-\!t')+n_a(t')\theta(t'\!-\!t)),
\end{eqnarray*}
where $\theta$ is the Heaviside function.
We then introduce a third quantity, $\mathcal{T}_a$, which is defined in Appendix~\ref{App:RST}, and can be shown to lie in the interval $[0,9.9]$. (We will argue in Appendix~\ref{App:Test} that this interval can be reduced to $[5.7,6.1]$.) Then:
\begin{eqnarray}
\langle P_a^2\rangle_{\mathrm{B}} & = & \frac{9}{16}T^2\langle \alpha_O\alpha_G\rangle
(1+2\mathcal{R}_a + \mathcal{S}_{aa}) \langle(\overline{n_a})^2\rangle\nonumber\\
& + & \frac{1}{8}T^2(\langle \alpha^2\rangle\!-\!\langle \alpha_O\alpha_G\rangle )(1\!+\!\mathcal{T}_a)\langle\overline{n_a}\rangle. \label{P2B}
\end{eqnarray}
The two hypotheses (A) and (B) thus only lead to different factors for the contribution of the average copy number \footnote{The ratio of these factors is $\frac{5}{2}\!\!\left(\!\frac{1+3\mathcal{R}_a}{1+\mathcal{T}_a}\!\right)\!\!\frac{\tau}{T}\in[0.01,0.2]$ if we consider that $\mathcal{R}_a$ and $\mathcal{T}_a$ are independent. (This interval is reduced to $[0.02,0.03]$ if we let these two quantities vary in the intervals found with a set of test functions, see Appendix~\ref{App:Test}.)}. This term is expected to be small, even in the hypothesis (B), where $1\!+\!\mathcal{T}_a$ can be of the order of 10: the numerical pre-factor cancels it, one can expect $\langle \alpha^2\rangle$ and $\langle \alpha_O\alpha_G\rangle$ to be of the same order of magnitude and, already for the plasmid of lowest copy number and for the chromosome, $\langle\overline{n_a}\rangle$ is significantly smaller than $\langle(\overline{n_a})^2\rangle$. Moreover, if we let $\tau$ tend to the time of induction $2T$, we recover terms of the same order of magnitude, thus suggesting a low sensitivity to the actual mathematical translation of the hypotheses.
The results will be presented and discussed considering only the more realistic hypothesis (A); full computations with test functions confirmed that very close values for the PCN noise are found under hypothesis (B) (data not shown).
\section{Results and discussion}
\label{Results}
By combining Eq.~\ref{nOnG}, \ref{POPG} and \ref{P2A} so as to eliminate the gene expression rates, and assuming that the replication of the chromosome is well controlled, we can now express the PCN noise:
\begin{eqnarray}\label{Eq:eta}
\eta^2\!\! & = &\!\! \left(\frac{1+\mathcal{R}_O}{1+\mathcal{R}_G}\right)\!\!\left(\frac{1+3\mathcal{R}_O}{1+3\mathcal{R}_G}\right)\!\!\left(\frac{1+2\mathcal{R}_G+\mathcal{S}_{GG}}{1+2\mathcal{R}_O+\mathcal{S}_{OO}}\right)\!\!\frac{\langle P_G\rangle}{\langle P_O\rangle}\nonumber\\
&&\!\! \!\! \!\! \!\! \!\! \!\! -1 +\! \left(\frac{1+\mathcal{R}_O+\mathcal{R}_G+\mathcal{S}_{OG}}{1+2\mathcal{R}_O+\mathcal{S}_{OO}}\right)\!\! \frac{1}{\langle P_OP_G\rangle}\times\nonumber\\
&&\!\! \!\! \!\! \!\! \!\! \!\! \! \left\{\left(\frac{1+\mathcal{R}_O}{1+\mathcal{R}_G}\right)\!\! \frac{\langle P_G\rangle}{\langle P_O\rangle} \langle P_O^2 \right.\rangle\!\left. -\! \left(\frac{1+3\mathcal{R}_O}{1+3\mathcal{R}_G}\right)
\langle P_G^2\rangle\right\}.
\end{eqnarray}
Note that both the auto-correlation time $\tau$ introduced previously and the cell cycle length $T$ have also been eliminated. Only terms related to replication and partition of genes, which we can bound, and the experimentally measured moments of protein numbers remain \footnote{The number of proteins can be determined up to a constant multiplicative factor, the same for both colors (namely, the fluorescent intensity per EGFP molecule in the selected green channel)~\cite{WongNg2010}; here, the contribution of random partition at divisions having been neglected, only ratios of the same order show up, thus canceling this unknown factor.}.
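For illustration, the intervals of Table~\ref{tab:Res} can be reproduced by scanning Eq.~\ref{Eq:eta} over the allowed ranges of the replication and partition terms. The sketch below does this on a coarse grid; the fluorescence moments are placeholders to be replaced by the measured values, and values of $\eta^2$ below zero correspond to the lower bound $\eta=0$:
\begin{verbatim}
# Sketch: bounding the PCN noise by a coarse grid scan over
# R_O, R_G, S_OO, S_GG, S_OG; moment values are placeholders.
import numpy as np
from itertools import product

mPO, mPG, mPO2, mPG2, mPOPG = 1.0, 0.14, 1.1, 0.021, 0.145  # placeholders

def eta2(RO, RG, SOO, SGG, SOG):
    t1 = ((1+RO)/(1+RG)) * ((1+3*RO)/(1+3*RG)) \
         * ((1+2*RG+SGG)/(1+2*RO+SOO)) * mPG/mPO
    t2 = ((1+RO+RG+SOG)/(1+2*RO+SOO)) / mPOPG \
         * (((1+RO)/(1+RG)) * (mPG/mPO) * mPO2
            - ((1+3*RO)/(1+3*RG)) * mPG2)
    return t1 + t2 - 1.0

R = np.linspace(0.15, 0.45, 5)   # general interval for R_a
S = np.linspace(0.00, 0.45, 5)   # general interval for S_ab
vals = [eta2(rO, rG, sOO, sGG, sOG)
        for rO, rG, sOO, sGG, sOG in product(R, R, S, S, S)]
print("eta^2 in [%.2f, %.2f]" % (min(vals), max(vals)))
\end{verbatim}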
By making the conservative assumption that $\mathcal{R}_a$ and $\mathcal{S}_{ab}$ can independently take any value in their intervals, we can compute intervals in which the mean PCN per chromosome and the PCN noise must lie. They are presented in Table~\ref{tab:Res}
\begin{table*}
\caption{\label{tab:Res}
Mean PCN per chromosome $\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$ and PCN noise $\eta$ computed with data from experiments at $37\,^{\circ}\mathrm{C}$, using the simple model or the complete one, either with a set of test functions or within a general analysis. Only the hypothesis (A) of short ``memory'' was considered. We assumed that cells divided twice during the induction.}
\begin{ruledtabular}
\begin{tabular}{lcccc}
& mini-F & mini-R1-$par^-$ & mini-R1-$par^+$ & mini-ColE1\\
\hline
$\langle n_O\rangle/\langle n_G\rangle$ simple & 0.9 &7.2 & 6.5 & 87\\
$\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$ complete/test & [0.84, 0.95] & [6.8, 7.7] & [6.1, 6.9] & [82, 93]\\
$\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$ complete/general & [0.71, 1.13] & [5.7, 9.1] & [5.1, 8.2] & [69, 110]\\
\hline
$\eta \times 10^2$ simple &58 & 36 & 30 & 28\\
$\eta \times 10^2$ complete/test & [50, 67] & [25, 45] & [16, 39] & [13, 38] \\
$\eta \times 10^2$ complete/general & [0, 100] & [0, 74] & [0, 71] & [0, 68] \\
\end{tabular}
\end{ruledtabular}
\end{table*}
for experiments at $37\,^{\circ}\mathrm{C}$, and in Fig.~\ref{fig:Av}
\begin{figure}
\begin{center}
\includegraphics[clip, viewport=160 275 440 510,scale=0.75]{PlasmidesTheo_mean.pdf}
\caption{\label{fig:Av}
(Color online) Mean PCN per chromosome $\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$, computed using
Eq.~\ref{nOnG} and the measured average protein numbers $\langle P_O\rangle$ and $\langle P_G\rangle$. Results are shown for cells grown at $30$, $32$, $35$, $37$ and $39\,^{\circ}\mathrm{C}$ and for the four plasmids studied here, from bottom to top: mini-F (red), mini-R1-$par^+$(blue), mini-R1-$par^-$(green), mini-ColE1 (magenta). The values obtained in three cases are plotted: with the simple model (squares, solid line), with the complete model and test functions (upper and lower bounds of the interval: circles, dashed lines) or within a general analysis (upper and lower bounds of the interval: triangles, dotted lines). The mini-R1 plasmids used here have a synthetic, thermo-sensitive origin of replication, the control of which is inactivated at high temperature~\cite{Gerdes1985,Rodionov2004}.}
\end{center}
\end{figure}
and Fig.~\ref{fig:Noise}
\begin{figure}
\begin{center}
\includegraphics[clip, viewport=135 250 460 535,scale=0.7]{PlasmidesTheo_noise.pdf}
\caption{\label{fig:Noise}
(Color online) PCN noise $\eta$ computed using
Eq.~\ref{Eq:eta}, and the measured average protein numbers $\langle P_O\rangle$, $\langle P_G\rangle$, and protein number correlations $\langle P_O^2\rangle$, $\langle P_G^2\rangle$, $\langle P_O P_G\rangle$. Results are shown at various temperatures for the four plasmids studied here (see the caption of Fig.~\ref{fig:Av}). We considered that cells divided once during the induction at $30$ and $32\,^{\circ}\mathrm{C}$, twice at $35$, $37$ and $39\,^{\circ}\mathrm{C}$. Only the hypothesis (A) of short ``memory'' was considered. The results obtained with the simple model are fully recovered if we suppose a similar behavior for the plasmids and for the chromosome, see the main text.}
\end{center}
\end{figure}
for various temperatures. We report both the intervals estimated with a general analysis and with a set of test functions for the moments of copy numbers. Values computed with the simple model are also shown. Both for means and noises, the values computed with the simple model fall in the middle of the intervals computed with the more realistic model.
As Fig.~\ref{fig:Av} shows, we can clearly distinguish the plasmids by their mean PCN per chromosome. Moreover, these results agree with previous, independent estimates, as discussed in~\cite{WongNg2010}. For the noises the picture is less clear. In the general study, the intervals found are too large for the results to be meaningful; but we know that we have largely overestimated them. In contrast, using test functions allows one to distinguish the plasmids by their PCN noises. In particular, we can notice that the partition system reduces the noise (compare mini-R1-$par^+$ and mini-R1-$par^-$), and that a plasmid with a high copy number (mini-ColE1) has a lower noise than a plasmid with a small copy number (mini-F), even though the latter has a partitioning system \footnote{We notice however that for the mini-R1 plasmids, both averages and noises increase at high temperature; this could come from fluctuations in the number of mature thermo-sensitive replication control proteins.}.
We tested the quality of the inference with simple computer simulations, where stochastic gene expression and plasmid replication were implemented (see Appendix~\ref{App:Sim} for more details). Table~\ref{tab:Sim}
\begin{table*}
\caption{\label{tab:Sim}
Test of the inference method with computer simulations, in four cases: 1. a synchronized population of bacteria with fixed division time, equal to half the induction time; 2. as 1. with an exponential age distribution; 3. as 2. with a distribution of division times, with mean equal to half the induction time; 4. as 3. with the mean division time equal to one third of the induction time.}
\begin{ruledtabular}
\begin{tabular}{lcccc}
& case 1 & case 2 & case 3 & case 4\\
\hline
$\langle \overline{n_O}\rangle/\langle\overline{n_G}\rangle$ true & 11.9 & 9.6 & 10.1 & 10.1\\
$\langle n_O\rangle/\langle n_G\rangle$ simple & 12.8 & 10.1 & 11.0 & 11.7\\
$\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$ complete/test & [12.0, 13.6] & [9.5, 10.7] & [10.4, 11.6] & [11.0, 12.4]\\
$\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$ complete/general & [10.1, 16.1] & [8.0, 12.7] & [8.7, 13.8] & [9.3, 14.7]\\
\hline
$\eta \times 10^2$ true &63 & 66 & 75 & 75\\
$\eta \times 10^2$ simple &60 & 66 & 74 & 83\\
$\eta \times 10^2$ complete/test & [54, 67] & [59, 72] & [68, 80] & [77, 89] \\
$\eta \times 10^2$ complete/general & [0, 93] & [19, 98] & [35, 106] & [47, 114] \\
\end{tabular}
\end{ruledtabular}
\end{table*}
compares the true and inferred values of the mean PCN per chromosome and the PCN noise in four cases, corresponding to different assumptions on the age and cell cycle duration distributions. In each case we find a very good agreement.
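As a minimal illustration of such a test (far cruder than the simulations of Appendix~\ref{App:Sim}: synchronized cells, no divisions, Poisson expression noise, invented copy numbers and rates), the simple-model estimator can be applied to synthetic intensities:
\begin{verbatim}
# Toy end-to-end check of the simple-model estimator; all numbers
# are invented and much cruder than the simulations of the Appendix.
import numpy as np

rng = np.random.default_rng(1)
ncell, rate = 20000, 50.0                  # cells; proteins per copy
nG = np.full(ncell, 2)                     # chromosomal copies, noiseless
nO = rng.poisson(10.0, size=ncell)         # "plasmid" copies
PG = rng.poisson(rate * nG)                # Poisson expression noise
PO = rng.poisson(rate * nO)

mO, mG = PO.mean(), PG.mean()
eta2 = (mG/mO + ((mG/mO)*(PO**2).mean() - (PG**2).mean())
        / (PO*PG).mean() - 1.0)
print(nO.mean()/nG.mean(), mO/mG)          # true vs inferred mean PCN
print(nO.std()/nO.mean(), np.sqrt(eta2))   # true vs inferred noise
\end{verbatim}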
As it appears in Eq.~\ref{nOnG}, \ref{POPG} and \ref{P2A}, what we call here ``plasmid copy number'', or ``chromosome copy number'', is precisely the average over one cell cycle of the number of copies of the gene coding for a fluorescent protein. A quantitative PCR (qPCR) measures $\langle n_O\rangle_{\mathrm{q}}/\langle n_G\rangle_{\mathrm{q}}$, where $\langle n_a\rangle_{\mathrm{q}}=\left\langle\int\!\!dt_0\,p(t_0)n_a(t_0)\right\rangle=\int dt_0\,p(t_0)\,\langle n_a(t_0)\rangle$. This quantity and the ratio $\langle\overline{n_O}\rangle/\langle\overline{n_G}\rangle$ reported here in general take different values. We have indeed noticed a discrepancy between the two approaches, but other explanations are likely~\cite{WongNg2010}.
We have made strong but reasonable hypotheses on gene expression. We made intuitive notions explicit and gave them a well-defined mathematical translation.
A deeper mathematical analysis could significantly reduce the general intervals found, though not below the intervals found with test functions.
Here again, the experimental approach and derivation of the PCN mean and noise are self-consistent: there are no external inputs, even in the bounded ``correction'' quantities $\mathcal{R}_a$ or $\mathcal{S}_{ab}$, which depend only on the way the genes are replicated and inherited by daughter cells at divisions.
Using the chromosome as a reference allowed us to get rid of global fluctuations: the number of divisions considered does not affect the results, fluctuations from protein partition at division are suppressed, all fluctuations of gene expression are cancelled, and even the division time does not appear in the final result. This suggests that the assumptions that the induction time is a multiple of the division time and that the variability in division times can be neglected do not affect the results. The simulations further confirm the robustness of this strategy; in particular, the values inferred with the crudest assumptions (``simple model'') are strikingly close to the true ones, both for the mean PCN per chromosome and the PCN noise (see Table \ref{tab:Sim}).
The only source of uncertainty that remains stems from the replication and partition of the plasmids and chromosome. The use of test functions suggests that it does not affect the results much. Moreover, if we suppose that both are similar, i.e. $\mathcal{R}_O\approx\mathcal{R}_G$ and $\mathcal{S}_{OO}\approx\mathcal{S}_{GG}$, we fully recover the simple model presented at the beginning and in the previous article~\cite{WongNg2010}.
The next obvious step would be to consider correlations \emph{between} cells, which could in particular inform us about plasmid partition. Here, however, we lack the information on the lineage (which cells share a common induced ancestor) necessary to make practical use of these quantities.
The use of dual reporters to dissect sources of noise was first proposed and demonstrated in a simple framework: steady state of fully induced bacteria, with both reporters in positions as similar as possible~\cite{Swain2002,Elowitz2002}. Here we pushed a similar approach further, and made sense of an intuitive setup: by changing one element, namely the {\it locus} of insertion of the genes coding for fluorescent proteins, we were able to measure one particular source of noise.
The analysis proposed here could serve as a model for other derivations of this strategy.
\section*{Acknowledgements}
J.W.N. acknowledges financial support from the Minist\`ere de la Recherche and CNRS. This work was supported by the Grant No. 05-BLAN-0026-01 from the ANR.
\section{Introduction}
The discovery of cosmic acceleration \cite{Riess:1998May, Perlmuter:1999Dec} has inspired the development of a wide range of dark energy (DE) and modified gravity models. The simplest DE candidate, the cosmological constant $\Lambda$, is a parameter in General Relativity and is consistent with all current observations \cite{Planck:2015Cos}. The main problem with $\Lambda$ is not why it has a particular value, but the fact that the vacuum energy contribution to $\Lambda$ is sensitive to the ultraviolet cutoff scale and requires a technically unnatural fine-tuning \cite{Burgess:2013ara}, which is the long-standing cosmological constant problem (CCP) \cite{Weinberg:1988cp}. Most of the dynamical DE and modified gravity models proposed in the literature do not offer a solution to this old problem. The CCP would be surmountable if there was a dynamical mechanism by which vacuum energy could decay from its initially large value and settle at an attractor consistent with the observed value of $\Lambda$. While a full theory of the quantum vacuum that would contain such a mechanism does not exist, this idea has motivated phenomenological models of decaying vacuum energy \cite{Bertolami:1986bg,Freese:1986dd,Chen:1990jw,Carvalho:1991ut,Berman:1991zz,Shapiro:2000dz,Sola:2011qr}.
One way to have a nonconstant vacuum energy is to introduce a new dynamical degree of freedom, {\frenchspacing\it e.g.}, a scalar field. An alternative approach, which avoids explicitly introducing new degrees of freedom while still preserving the general covariance of the evolution of cosmological perturbations, is to allow for the exchange of momentum between the vacuum energy and other species \cite{Wands:2012Mar}. Both the minimally \cite{Ratra:1998apj, Ratra:1998prd, Caldwell:1997ii} and nonminimally \cite{Wetterich:1994bg, Holden:1999hm, Amendola:1999er} coupled quintessence models can be cast in this general framework. In the former case, the vacuum (the potential energy) exchanges energy with the kinetic energy of the scalar field. In the latter, there is an additional exchange with matter. In this sense, one can say that a time-dependent vacuum energy is necessarily interacting. Since additional interactions in the visible matter sector are strongly constrained while the nature of dark matter (DM) is largely unknown, we will consider models in which the vacuum interacts only with DM.
In this paper we adopt the phenomenological model of vacuum energy evolution introduced in \cite{Wands:2012Mar} which avoids dealing with explicit additional degrees of freedom. The vacuum equation of state is by definition equal to $-1$, but the vacuum energy density $V$ is allowed to vary because of the interaction with DM, $\nabla_{\mu}V=-Q_{\mu}$.
\section{The model}
In the interacting vacuum energy model the background DM and vacuum energy densities obey continuity equations
\begin{eqnarray}
\label{rhodmdot} \dot{\rho}_{\rm dm}+3H\rho_{\rm dm} = -Q \,; \ \dot{V} = Q \,,
\label{Vdot}
\end{eqnarray}
where $Q$ denotes the energy transfer between DM and vacuum energy.
An arbitrary energy transfer $Q$ can in principle reproduce an arbitrary background cosmology, with energy density $\rho=\rho_{\rm dm}+V$ and pressure $P=-V$ \cite{Wands:2012Mar}. This reduces to $\Lambda$CDM when $Q=0$.
To calculate the linear perturbations, we need to specify a covariant form of the energy-momentum transfer 4-vector. Following \cite{Wang:2013Jan, Wang:2014Apr}, we assume that the covariant interaction is parallel to the 4-velocity of DM, $Q^{\mu} = {Q}u^{\mu}_{(\mathrm{dm})}$. There are other choices \cite{Wands:2012Mar}, but this covariant form for the interaction means that there is no momentum transfer in the DM comoving-orthogonal frame and hence the DM particles follow geodesics, as in $\Lambda$CDM. Although the interacting vacuum does allow inhomogeneous perturbations, one can always choose a frame in which the vacuum is spatially homogeneous, $\delta V=0$. For geodesic DM this coincides with the comoving-orthogonal frame \cite{Wang:2013Jan, Wang:2014Apr} and the perturbation equations are particularly simple in this gauge. Note that in this case the energy transfer is a potential flow, $Q^\mu = -\nabla^\mu V$, and therefore the matter velocity $u^{\mu}_{(\mathrm{dm})}$ must be irrotational \cite{Sawicki:2013wja}. The DM density contrast evolves according to
\begin{eqnarray}
\label{dmenergy}
\dot{\delta}_{\rm dm}=-\vartheta+\frac{Q(a_i)}{\rho_{\rm dm}}\delta_{\rm dm} \,,
\end{eqnarray}
where the divergence of the matter 3-velocity is given by the extrinsic curvature of the metric, $\vartheta=\dot{h}/2$ in the comoving-synchronous gauge, and $h$ is the scalar mode of metric perturbations \cite{Ma:1995Jun}.
We take $Q$ to be of a form inspired by the generalized Chaplygin gas model \cite{Kamenshchik:2001cp,Bento:2002ps}, $Q = 3\alpha H \rho_{\rm dm} V / (\rho_{\rm dm} +V)$, where $\alpha$ is a dimensionless coupling parameter \cite{Wands:2012Mar}. This form naturally reproduces a conventional matter-dominated solution at early times, and a vacuum dominated solution at late times, while allowing more general evolution in between. Previous studies \cite{Wang:2013Jan, Wang:2014Apr} have constrained the interaction assuming $\alpha = {\rm const.}$\footnote{Salvatelli {\frenchspacing\it et al.} \cite{Salvatelli:2014Jun} considered a related dimensionless parameter, $q=-3\alpha\rho_{\rm dm}/(\rho_{\rm dm} +V)$, in four redshift intervals.}
In this paper, we make no assumptions about the time dependence of $\alpha$ and directly reconstruct it from data using the nonparametric Bayesian approach introduced in \cite{Crittenden:2011Dec}. We are thus able to describe a general background cosmology and reproduce any equation of state seen in quintessence models, but the perturbations are of a restricted form with vanishing sound speed but nonzero energy transfer to or from DM, determined by the background cosmology. Thus our model is degenerate with quintessence models in terms of the background cosmology, but distinguished by the evolution of perturbations.
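For a given coupling history $\alpha(a)$, the background equations, Eq.~(\ref{rhodmdot}), can be integrated directly; using $\ln a$ as the integration variable, the Hubble rate drops out of $Q/H=3\alpha\rho_{\rm dm}V/(\rho_{\rm dm}+V)$. The sketch below is only an illustration, with an invented $\alpha(a)$ and placeholder initial densities:
\begin{verbatim}
# Sketch: background evolution for an invented alpha(a); densities
# in units of the critical density today, started at a = 1e-3.
import numpy as np
from scipy.integrate import solve_ivp

alpha = lambda a: 0.1 * np.sin(2 * np.pi * a)   # illustrative coupling

def rhs(lna, y):                                # y = [rho_dm, V]
    rho_dm, V = y
    Q_over_H = 3 * alpha(np.exp(lna)) * rho_dm * V / (rho_dm + V)
    return [-3 * rho_dm - Q_over_H, Q_over_H]

sol = solve_ivp(rhs, [np.log(1e-3), 0.0], [0.26e9, 0.69],
                rtol=1e-8, dense_output=True)
rho_dm0, V0 = sol.y[:, -1]                      # densities today
\end{verbatim}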
\section{The reconstruction method}
We first discretize $\alpha(a)$ into bins $\alpha_i = \alpha(a_i)$, $i=1,...,N$, distributed uniformly in the interval $[a_{\rm min}, a_{\rm max}]$, giving us $N$ parameters $\alpha_i$ that can be fit to data. Any representation of a function with a finite number of parameters is necessarily inaccurate and, in the absence of additional priors, the outcome of the fit would directly depend on $N$. As shown by Crittenden {\frenchspacing\it et al.} \,\cite{Crittenden:2011Dec}, it is possible to eliminate the dependence on $N$ and explicitly control the reconstruction bias by adding a prior that correlates the nearby bins.
In the correlated prior approach one assumes that $\alpha(a)$ is a smooth function, so that its values at neighboring points in $a$ are not entirely independent. More specifically, $\alpha(a)$ is taken to be a Gaussian random variable with a given correlation $\xi$ between its values at $a$ and $a'$, $\xi (|a - a'|) \equiv \left\langle [\alpha(a) - \alpha^{\rm fid}(a)][\alpha(a') - \alpha^{\rm fid}(a')] \right\rangle$, which is nonzero for $|a-a'|$ below a given ``correlation length'' but vanishes at much larger separations, and $\alpha^{\rm fid}(a)$ is a reference fiducial model. Given a particular functional form of $\xi (|a - a'|)$, one can calculate the corresponding covariance matrix for the $N$ parameters $\alpha_i$:
\begin{equation}
\label{eq:Cij} C_{ij}=\frac{1}{\Delta^2}\int_{a_i}^{a_i+\Delta}{\rm d}a \int_{a_j}^{a_j+\Delta}{\rm d}a'~\xi (|a - a'|) \,,
\end{equation}
where $\Delta$ is the bin width. This covariance matrix defines a (Gaussian) prior probability distribution for parameters $\alpha_i$ and its product with the likelihood, according to Bayes' theorem, gives the desired posterior distribution. Thus, the reconstruction amounts to finding the minimum of $\chi^2=\chi_{\rm prior}^2+\chi_{\rm data}^2$, where $\chi_{{\rm prior}}^2=\left(\mathbf{\alpha}-\mathbf{\alpha}^{{\rm fid}}\right)^T \mathbf{C}^{-1} \left(\mathbf{\alpha}-\mathbf{\alpha}^{{\rm fid}}\right)$ and $\mathbf{\alpha}^{{\rm fid}}$ is
some fiducial model. To avoid dependence on $\mathbf{\alpha}^{{\rm fid}}$, we take it to be constant and marginalize over its value following \cite{Crittenden:2011Dec}.
We adopt the CPZ form \cite{Crittenden:2011Dec, Crittenden:2009Dec} for the correlation function, $ \xi(|a - a'|)=\xi(0)/[1+(|a - a'|/a_c)^2] $, where $a_c$ determines the correlation length and $\xi(0)$ sets the strength of the prior. This form has the advantage of making it possible to evaluate integrals in Eq.~(\ref{eq:Cij}) analytically, while also having a transparent dependence on its two parameters. We also note that, as shown in \cite{Crittenden:2011Dec}, the outcome of reconstruction is largely insensitive to the particular function chosen to represent $\xi(|a - a'|)$.
The correlation length $a_c$ sets the effective number of degrees of freedom allowed by the prior: $N_{\rm eff} \simeq (a_{\rm max}-a_{\rm min})/a_c$. As long as $N>N_{\rm eff}$, the reconstructed result is independent of $N$. Rather than adjusting $\xi(0)$ to control the strength of the prior, we use the variance of the mean given by $\sigma^2_{\bar{\alpha}} \simeq \pi\xi(0){a_c}/(a_{\rm max}-a_{\rm min})$ in the limit $a_c\ll a_{\rm max}-a_{\rm min}$ \cite{Crittenden:2011Dec, Crittenden:2009Dec}.
Namely, given $a_c$, one can adjust $\xi(0)$ to keep the prior on the uncertainty in the mean of $\alpha(a)$ independent of $N_{\rm eff}$. In what follows, we take $\sigma_{\bar{\alpha}}=0.04$ based on the constraint on a constant $\alpha$ obtained in
\cite{Wang:2014Apr}, and set $a_c=0.06$. As we show later using principal component analysis (PCA), this choice of $a_c$ effectively separates the signal from noise, thus allowing a high-resolution reconstruction with minimal contamination from the noise.
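For reference, the construction is straightforward numerically. The sketch below builds the bin covariance of Eq.~(\ref{eq:Cij}) for the CPZ correlation function by subgrid averaging (the integrals can also be done analytically), and forms $\chi^2_{\rm prior}$; the constant-fiducial marginalization shown is the standard rank-one update of the inverse covariance, which we assume is equivalent to the procedure of \cite{Crittenden:2011Dec}:
\begin{verbatim}
# Sketch: CPZ prior covariance for N bins by numerical
# bin-averaging; constant-fiducial marginalization via the
# standard rank-one update of the inverse covariance.
import numpy as np

N, amin, amax, ac, sig_mean = 40, 0.001, 1.0, 0.06, 0.04
xi0 = sig_mean**2 * (amax - amin) / (np.pi * ac)
edges = np.linspace(amin, amax, N + 1)
sub = np.array([np.linspace(e0, e1, 8) for e0, e1
                in zip(edges[:-1], edges[1:])])   # subgrid per bin

C = np.empty((N, N))
for i in range(N):
    for j in range(N):
        d = np.abs(sub[i][:, None] - sub[j][None, :])
        C[i, j] = np.mean(xi0 / (1.0 + (d / ac)**2))

Cinv = np.linalg.inv(C)
one = np.ones(N)
Cinv_m = Cinv - np.outer(Cinv @ one, one @ Cinv) / (one @ Cinv @ one)

def chi2_prior(alpha_bins):
    return alpha_bins @ Cinv_m @ alpha_bins
\end{verbatim}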
\begin{figure*}[htp]
\begin{center}
\includegraphics[scale=0.39]{alpha-compare.eps}
\setlength{\abovecaptionskip}{-2.9cm}
\caption{Reconstruction of $\alpha(a)$ from four different data combinations: Data I: CMB+SN+BAO (without LyaF)+RSD (with AP effect); Data II: CMB+SN+BAO (without LyaF)+RSD (All); Data III: CMB+SN+BAO (All)+RSD (with AP effect) and Data IV: CMB+SN+BAO (All)+RSD (All). The best-fit model (central white solid curves) with the 68, 95\% CL errors (dark and light blue shaded bands) is shown in each panel. We denote the redshift in each panel on the top x-axis. The horizontal dashed line denotes the $\Lambda$CDM model.}\label{fig:alpha}
\end{center}
\end{figure*}
\section{Data}
Our data sets include the CMB temperature and polarization power spectra from Planck \cite{Planck:2013Mar:cos} and WMAP9 \cite{WMAP9:2012} respectively; the JLA supernovae sample \cite{Sako:2014Jan}; the BAO measurements of 6dFGRS \cite{Beutler:2011Jun}, SDSS DR7 \cite{Padmanabhan:2012Feb}, BOSS LOWZ \cite{Anderson:2013Dec} and BOSS Lyman-$\alpha$ Forest (LyaF) \cite{Aubourg:2014Nov}; the redshift space distortion (RSD) measurements which probe both the expansion and the growth history, namely ($D_V/r_s(z_d), F_{\rm AP}, f\sigma_8$) from BOSS CMASS \cite{Beutler:2013Dec} and ($A, F_{\rm AP}, f\sigma_8$) from WiggleZ \cite{Blake:2012Apr}, where $F_{\rm AP}$ quantifies the ``Alcock-Paczynski" effect \cite{AP}. We denote these data sets as ``RSD (with AP effect)," while we also use a ``RSD (All)" data set that includes additional RSD measurements without the AP effect: 6dFGRS \cite{Beutler:2012Apr}, 2dFGRS \cite{Percival:2004Jun}, SDSS LRG \cite{Samushia:2011Feb} and VIPERS \cite{Torre:2013Mar}. We present the constraints from four data combinations as shown in Fig.~\ref{fig:alpha}.
The RSD measurements constrain $(f\sigma_8)^2$, the product of the growth rate and the variance of the total matter density contrast. Note that the continuity equation of DM density contrast in our model [Eq.~(\ref{dmenergy})] is different from that in $\Lambda$CDM and, therefore, the growth rate probed by the RSD is no longer simply that of the DM component. Namely, in $\Lambda$CDM, the velocity divergence of DM, $\vartheta\equiv\vec{\nabla}\cdot\vec{v}$, is simply $f_{\rm dm} H\delta_{\mathrm{dm}}$ where $f_{\mathrm{dm}} \equiv d \ln \delta_{\mathrm{dm}} /d \ln a$ is the DM growth rate. In the interacting vacuum model, Eq.~(\ref{dmenergy}) implies that
\begin{equation}
\nonumber
\vartheta = -\left[f_{\mathrm{dm}}-\frac{Q(a_i)}{H\rho_{\mathrm{dm}}}\right]H\delta_{\mathrm{dm}} \equiv -f_i H\delta_{\mathrm{dm}} \equiv -f_{\vartheta}H\delta_{\rm m} \,.
\end{equation}
If $Q(a_i)\neq0$, the effective DM growth rate $f_i$, the appropriately weighted total matter growth rate $f_{\vartheta}$ \cite{BruniWandsinprep} and $f_{\rm dm}$ are different from each other. It can be shown that
\begin{equation}
\label{eq:fvartheta} f_{\vartheta}=\left( \frac{1}{f_i}\frac{\rho_{\rm dm}}{\rho_{\rm m}}+\frac{1}{f_{\rm b}}\frac{\rho_{\rm b}}{\rho_{\rm m}}\right)^{-1}\,,
\end{equation}
where $f_{\rm b}\equiv {d \ln \delta_{\rm b}}/{d \ln a}$ is the growth rate of baryons. In the absence of an interaction we recover the standard result: $f_{\vartheta}=f_i=f_{\rm b}$, while, in general, it is $(f_{\vartheta}\sigma_8)^2$ that is measured by RSD.
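A direct transcription of Eq.~(\ref{eq:fvartheta}) as a function:
\begin{verbatim}
# Sketch: the weighted total-matter growth rate f_theta from the
# effective DM rate f_i, baryon rate f_b, and background densities.
def f_theta(f_i, f_b, rho_dm, rho_b):
    rho_m = rho_dm + rho_b
    return 1.0 / ((rho_dm / rho_m) / f_i + (rho_b / rho_m) / f_b)
\end{verbatim}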
\section{Results}
We discretize $\alpha(a)$ into $40$ bins uniform in $a$ within [0.001, 1], and use Monte Carlo Markov chains (MCMC) implemented in a modified version of {\tt CosmoMC} \cite{ref:MCMC} to fit them to data along with all other relevant cosmological parameters. A total $\chi^2$ is minimized and the joint posterior probability distribution for all the parameters is obtained after the MCMC has converged.
\begin{figure*}[htp]
\begin{center}
\includegraphics[scale=0.45]{bao-fs8-pca.eps}
\setlength{\abovecaptionskip}{-2.0cm}
\caption{Panels (A) and (B): The theoretical prediction of $D_V/r_s$ and $f_{\vartheta}\sigma_8$ by the best-fit $\alpha(a)$ model (solid line), and measurements (data with error bars), rescaled by the best-fit $\Lambda$CDM model. The rescaled BAO measurements are: 6dFGRS (filled circles), BOSS LOWZ (open circles), SDSS DR7 (filled squares), WiggleZ (open squares), CMASS (filled star), and LyaF (open star). The RSD points from WiggleZ (open squares) and CMASS (filled star) are shown in panel (B). The dashed horizontal line denotes the $\Lambda$CDM model. Panel (C): The eigenvalues of the covariance matrix obtained using data plus prior for four different data combinations, and using prior alone. Panel (D): The first three eigenvectors of the best-fit $\alpha(a)$. See the text for more details.}\label{fig:bao-fs8-pca}
\end{center}
\end{figure*}
The best-fit reconstructed models of $\alpha(a)$ (with 68\% and 95\% C.L. errors) are shown in Fig.~\ref{fig:alpha}. The $\Lambda$CDM model fits the observations well when the LyaF BAO measurements are not included (Data I) and the reconstruction remains almost unchanged after adding more RSD data (Data II). However, when the LyaF data are included, we see evidence for an evolving $\alpha$: Data III (IV) shows a $1.8\sigma$ ($1.9\sigma$) improvement in the fit for the $\alpha\ne0$ model. In these cases, the best-fit $\alpha(a)$ model changes sign during its evolution, {\frenchspacing\it i.e.}, it is positive at $z\gtrsim2.1$, implying an energy transfer from DM to vacuum energy, but negative at $0.6\lesssim z\lesssim2.1$, implying vacuum decay. At $z\lesssim0.6$, $\alpha$ is consistent with $\Lambda$CDM. The variations at higher and lower redshifts compensate to make the deceleration-acceleration transition redshift, $z_{\rm t}=0.6$ from Data III, close to the $\Lambda$CDM value of $z_{\rm t}=0.65$, which agrees with the value extracted from the $H(z)$ data \cite{Farooq:2013hq} using a Gaussian prior of $H_0=68\pm2.8\,{\rm km\,s^{-1}Mpc^{-1}}$, but is smaller than the value extracted using a prior of $H_0=73.8\pm2.4\,{\rm km\,s^{-1}Mpc^{-1}}$ \cite{Farooq:2013hq}. Again, adding more RSD data points (Data IV versus Data III) does not change the reconstruction significantly.
To understand this result, in Fig.~\ref{fig:bao-fs8-pca} we show the theoretical predictions for $D_V/r_s$ and $f_{\vartheta}\sigma_8$ for the best-fit binned model from Data III rescaled by the corresponding best-fit $\Lambda$CDM model predictions, together with the actual measurements. In panel (A), the LyaF measurement (open star) has the smallest error bar and pulls the fit down, also making it more consistent with the CMASS measurement (filled star). This reduces the BAO $\chi^2$ by $4.1$. This fit is favored by the $f_{\vartheta}\sigma_8$ measurements as well (panel B), further reducing the $\chi^2$ by $1.7$. The remaining data from CMB and SN slightly disfavor this model; namely, they increase $\chi^2$ by $\sim2$, but not enough to compensate for the reduction in $\chi^2$ from BAO and RSD.
The improvement in the fit achieved by the binned model must be weighed against the increased number of degrees of freedom. This can be quantified via the Bayes factor, or the ratio of the Bayesian evidences of the interacting model and the $\Lambda$CDM model. For one model to be preferred over the other, the Bayes factor should be significantly greater than 1.
Since the evidence depends on the prior assumed for the binned coupling $\alpha(a_i)$, we follow the method in \cite{Zhao:2012Jul} to examine the dependence of the Bayes factor on the variance in each bin, $\sigma_{\rm bin}$, which controls the strength of the prior. In Fig.~\ref{fig:bayes-evi}, we plot the logarithm of the evidence ratio ($\Delta \ln \rm E$) as well as the logarithms of the ratios of the volumes of parameter space ($\Delta \ln \rm V$) and the likelihood ($\Delta \ln \rm L$) as a function of $\sigma_{\rm bin}$. For $\sigma_{\rm bin} \rightarrow 0$, the prior effectively forces $\alpha(a_i)$ to become equal to the fiducial model, which is $\Lambda$CDM, and all three ratios approach unity. Increasing $\sigma_{\rm bin}$ allows for more freedom in the variation of $\alpha(a_i)$ and improves the fit. However, as evident from Fig.~\ref{fig:bayes-evi}, the improvement in the fit is not sufficient to compensate for the increase in the parameter volume. The Bayes factor is only marginally greater than 1 for a limited range of $\sigma_{\rm bin}$ around the value of $0.1$, which was the value used in the reconstruction in Fig.~\ref{fig:alpha}. Thus, we conclude that the interacting model is not preferred over $\Lambda$CDM.
\begin{figure}[tbp]
\includegraphics[scale=0.32]{bayes-evi.eps}
\caption{The logarithms of the ratios of volumes of parameter space ($\Delta \ln \rm V$), of likelihoods ($\Delta \ln \rm L$), and of the evidences ($\Delta \ln \rm E$) for the binned model and the $\Lambda$CDM as a function of the strength of the prior set by the variance in each bin, $\sigma_{\rm bin}$.}\label{fig:bayes-evi}
\end{figure}
Even with the lack of strong preference for the evolving model, it is still interesting to know to what extent our reconstruction constrains the evolution of the vacuum energy. For instance, one could ask which part of the information is informed by the data and which part is informed by the prior. The errors shown in the reconstruction in Fig.~\ref{fig:alpha} are highly correlated, making their direct interpretation difficult. Principal component analysis (PCA) is a useful tool that can be used to analyze and tune the prior (for applications of PCA to DE studies, see {\frenchspacing\it e.g.}, \cite{Huterer:2002Jul,Crittenden:2009Dec,Crittenden:2011Dec}). The PCA seeks the orthonormal eigenmodes of the inverse covariance matrix $F_{\alpha}$ of the $\alpha$ bins after marginalizing over other cosmological parameters. Namely, $F_{\alpha}=W^T\, \Lambda \,W$, with eigenvectors defined by the decomposition matrix $W$ and the eigenvalues given by the elements of the diagonal matrix $\Lambda$. Eigenmodes define independent linear combinations of the original parameters ($\alpha$ bins) that have uncorrelated errors and, thus, are easier to interpret. We can also use PCA to compare the eigenmodes with and without the prior which, as we explain below, allows us to estimate the effective number of degrees of freedom of $\alpha(a)$ constrained by data. Performing the PCA is straightforward, since $F_{\alpha}$ is one of the products of our MCMC calculation.
The eigenvalues of both the prior and the data+prior covariance are shown in panel (C) of Fig.~\ref{fig:bao-fs8-pca}. There are three data eigenmodes that are not affected by the prior, {\frenchspacing\it i.e.}, these three best-constrained modes pass the prior with almost no penalty. The shapes of these modes are shown in panel (D), and we find that they are similar for all data combinations. The remaining modes, on the other hand, are dominated by the prior. Thus, even though our model has many bins of $\alpha$ as parameters, there are effectively only three additional degrees of freedom. As mentioned earlier, we find that the total $\chi^2$ is reduced by $\sim4$, which is somewhat greater than expected for a model with three additional parameters.
The eigenvectors provide a natural basis onto which an arbitrary $\alpha(a)$ can be expanded, {\frenchspacing\it i.e.}, $\alpha(a)=\sum_i \beta_i e_i(a)$. Given any $\alpha(a)$, the coefficients $\beta$ can then be found using the orthogonality of the modes. The $\beta$'s corresponding to the three best constrained eigenmodes of the best-fit models from the four data combinations are listed in Table \ref{tab:coeff}. Since the uncertainty in the third eigenmode is large, adding more modes by relaxing the prior [either by reducing $\xi(0)$ or $a_c$] would not notably change the fit.
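Both operations are simple in practice; the following sketch (with the inverse covariance and the best-fit bins as placeholder arrays) diagonalizes $F_{\alpha}$ and projects a binned model onto the first three modes:
\begin{verbatim}
# Sketch: PCA of the marginalized inverse covariance F_alpha
# (placeholder array) and projection onto the first three modes.
import numpy as np

N = 40
F_alpha = np.eye(N)                        # placeholder MCMC product
alpha_best = np.zeros(N)                   # placeholder best-fit bins

evals, W = np.linalg.eigh(F_alpha)         # columns of W = eigenvectors
order = np.argsort(evals)[::-1]            # best-constrained first
evals, W = evals[order], W[:, order]
mode_sigma = 1.0 / np.sqrt(evals)          # per-mode uncertainties
beta = W[:, :3].T @ alpha_best             # first three coefficients
\end{verbatim}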
\begin{table}
\begin{center}
\begin{tabular}{c|c|c|c|c}
\hline \hline
&Data I & Data II & Data III & Data IV \\ \hline
&$-0.17\pm 0.30$ &
$-0.19 \pm 0.30$ &
$-0.34 \pm 0.30$&
$-0.40 \pm 0.29$ \\
$\beta_i$ &$0.34 \pm 0.61$&
$0.34 \pm 0.63$ &
$0.47 \pm 0.65$&
$0.48 \pm 0.64$ \\
&$-0.25 \pm 1.35$&
$-0.21 \pm 1.35$&
$-0.19 \pm 1.34$&
$-0.10 \pm 1.31$\\
\hline\hline
\end{tabular}
\caption{The coefficients (best-fit and 68\% C.L. uncertainty) of the first three best-determined modes, $\beta_i$, of the best-fit models using different data combinations.}\label{tab:coeff}
\end{center}
\end{table}
\section{Conclusion and discussions}
We performed a high-resolution reconstruction of $\alpha$, the coupling between DM and vacuum energy, as a function of the scale factor using the latest observations including CMB, SN, BAO and RSD. Our model is degenerate with standard quintessence models in terms of the background cosmology, but is distinguished by the growth of perturbations. We found that, when the BAO measurement using the BOSS LyaF sample \cite{Aubourg:2014Nov} is used, an evolving $\alpha$ is mildly favored by the joint data set. Interestingly, the best-fit $\alpha(a)$ model changes sign during its evolution: $\alpha>0$ at higher redshifts, implying an energy transfer from DM to vacuum energy, while $\alpha<0$ at lower redshifts, corresponding to a decaying vacuum energy. A PCA study of our result shows that we have extracted three informative eigenmodes from the data.
The LyaF BAO measurement, which is the best existing BAO measurement at such a high redshift, is in tension with the $\Lambda$CDM model at the $2-2.5\sigma$ level. As noted in \cite{Aubourg:2014Nov,Delubac:2014aqe}, it can be interpreted as favoring a DE component with a negative energy density at $z\sim2.3$. The RSD measurements from BOSS and WiggleZ are also in tension with $\Lambda$CDM: the RSD measurements favor a lower growth rate than the $\Lambda$CDM prediction at low redshifts. The interacting vacuum model provides another physical interpretation of these tensions if $\alpha$ is allowed to change sign during its evolution. We note that extracting BAO from LyaF data is a relatively new field, and the current measurement could be subject to systematic issues \cite{Aubourg:2014Nov}. Our method opens a new window into the investigation of the interacting vacuum model that can be applied to improved future datasets as they become available.
~
\acknowledgements{We thank M. Bruni, H. Borges and V. Salvatelli for discussions. Y. W. is supported by the NSFC grant No. 11403034 and the China Postdoctoral Science Foundation Grant No. 2014M550091. G. B. Z. is supported by the 1000 Young Talents program in China, by the Strategic Priority Research Program ``The Emergence of Cosmological Structures" of the CAS, Grant No. XDB09000000. L. P. is supported by NSERC and acknowledges the hospitality at the ICG. The work was supported by STFC Grants No. ST/H002774/1 and No. ST/L005573/1.}
\section{Introduction}\label{sect:intro}
Far-infrared astronomy, defined broadly as encompassing science at wavelengths of $30-1000\,\mu$m, is an invaluable tool in understanding all aspects of our cosmic origins. Tracing its roots to the late 1950's, with the advent of infrared detectors sensitive enough for astronomical applications, far-infrared astronomy has developed from a niche science, pursued by only a few teams of investigators, to a concerted worldwide effort by hundreds of astronomers, targeting areas ranging from the origins of our Solar System to the ultimate origin of the Universe.
By their nature, far-infrared observations study processes that are mostly invisible at other wavelengths, such as young stars still embedded in their natal dust clouds, or the obscured, rapid assembly episodes of supermassive black holes. Moreover, the $30-1000\,\mu$m wavelength range includes a rich and diverse assembly of diagnostic features. The most prominent of these are:
\begin{itemize}
\item Continuum absorption and emission from dust grains with equilibrium temperatures approximately in the range 15-100\,K. The dust is heated by any source of radiation at shorter wavelengths, and cools via thermal emission (a minimal model of this emission is sketched after this list).
\item Line emission and absorption from atomic gas, the most prominent lines including [O I], [N II], [C I], [C II], as well as several hydrogen recombination lines.
\item A plethora of molecular gas features, including, but not limited to: CO, H$_2$O, H$_2$CO, HCN, HCO$^+$, CS, NH$_3$, CH$_3$CH$_2$OH, CH$_3$OH, HNCO, HNC, N$_2$H$^+$, H$_3$O$^+$, their isotopologues (e.g. $^{13}$CO, H$_2$$^{18}$O), and deuterated species (e.g. HD, HDO, DCN).
\item Amorphous absorption and emission features arising from pristine and processed ices, and crystalline silicates.
\end{itemize}
\noindent This profusion and diversity of diagnostics allows for advances across a wide range of disciplines. We briefly describe four examples:
\vspace{0.2cm}
\noindent {\bfseries Planetary systems and the search for life:} Far-infrared continuum observations in multiple bands over $50-200\,\mu$m measure the size distributions, distances, and orbits of both Trans-Neptunian Objects \citep{backman95,santos12,lebret12,eiroa13} and of zodiacal dust \citep{nesvor10}, which gives powerful constraints on the early formation stages of our Solar System, and of others. Molecular and water features determine the composition of these small bodies, provide the first view of how water pervaded the early Solar System via deuterated species ratios, and constrain how water first arrived on Earth \citep{morb00,mumma11,harto11}. Far-infrared observations are also important for characterizing the atmospheric structure and composition of the gas giant planets and their satellites, especially Titan.
Far-infrared continuum observations also give a direct view of the dynamics and evolution of protoplanetary disks, thus constraining the early formation stages of other solar systems \citep{holland98,andwil05,bryden06,wyatt08}. Deuterated species can be used to measure disk masses, ice features and water lines give a census of water content and thus the earliest seeds for life \citep{hoger11}, while the water lines and other molecular features act as bio-markers, providing the primary tool in the search for life beyond Earth \citep{kalt07,hedelt13}.
\vspace{0.1cm}
\noindent {\bfseries The early lives of stars:} The cold, obscured early stages of star formation make them especially amenable to study at far-infrared wavelengths. Far-infrared continuum observations are sensitive to the cold dust in star forming regions, from the filamentary structures in molecular clouds \citep{andre10} to the envelopes and disks that surround individual pre-main-sequence stars \citep{brog15}. They thus trace the luminosities of young stellar objects and can constrain the masses of circumstellar structures. Conversely, line observations such as [O I], CO, and H$_2$O probe the gas phase, including accretion flows, outflows, jets, and associated shocks \citep{motte98,evans09,schul09,kristen12,manoj13,vandish11,watson16}.
For protostars, since their SEDs peak in the far-infrared, photometry in this regime is required to refine estimates of their luminosities and evolutionary states \citep{dunham08,furl16,fisch17}, and can break the degeneracy between inclination angle and evolutionary state\footnote{At mid-infrared and shorter wavelengths a more evolved protostar seen through its edge-on disk has an SED similar to a deeply embedded protostar viewed from an intermediate angle \citep{whit03}.}. With {\itshape Herschel}, it became possible to measure temperatures deep within starless cores \citep{launh13}, and young protostars were discovered that were only visible at far-infrared and longer wavelengths \citep{stutz13}. These protostars have ages of $\sim25,000$\,yr, only 5\% of the estimated protostellar lifetime.
In the T Tauri phase, where the circumstellar envelope has dispersed, far-infrared observations probe the circumstellar disk \citep{howard13}. At later phases, the far-infrared traces extrasolar analogs of the Kuiper belt in stars such as Fomalhaut \citep{acke12}.
Future far-infrared observations hold the promise to understand the photometric variability of protostars. {\itshape Herschel} showed that the far-infrared emission from embedded protostars in Orion could vary by as much as 20\% over a time scale of weeks \citep{billot12}, but such studies were limited by the $<4$ year lifetime of {\itshape Herschel}. Future observatories will allow for sensitive mapping of entire star-forming regions several times over the durations of their missions. This will enable a resolution to the long-running question of whether protostellar mass accretion happens gradually over a few hundred thousand years, or more stochastically as a series of short, episodic bursts \citep{kenyon90}.
\vspace{0.1cm}
\noindent {\bfseries The physics and assembly history of galaxies:} The shape of the mid/far-infrared dust continuum is a sensitive diagnostic of the dust grain size distribution in the ISM of our Milky Way, and nearby galaxies, which in turn diagnoses energy balance in the ISM\citep{cal00,li01,dale01,moli10}. Emission and absorption features measure star formation, metallicity gradients, gas-phase abundances and ionization conditions, and gas masses, all independently of extinction, providing a valuable perspective on how our Milky Way, and other nearby galaxies, formed and evolved\citep{craw85,panu10,fisch10,diaz13,far13}. Continuum and line surveys at far-infrared wavelengths measure both obscured star formation rates and black hole accretion rates over the whole epoch of galaxy assembly, up to $z\gtrsim7$, and are essential to understand why the comoving rates of star formation and supermassive black hole accretion both peaked at redshifts of $z = 2 - 3$, when the Universe was only 2 or 3 billion years old, and have declined strongly since then \citep{lag05,madau14}.
Of particular relevance in this context are the infrared-luminous galaxies, in which star formation occurs embedded in molecular clouds, hindering the escape of optical and ultraviolet radiation; however, the radiation heats dust, which reradiates infrared light, enabling star-forming galaxies to be identified and their star formation rates to be inferred. These infrared-luminous galaxies may dominate the comoving star formation rate density at $z>1$ and are optimally studied via infrared observations \citep{lon06,rodig11,lutz11,oliver12,cas14}. Furthermore, far-infrared telescopes can study key processes in understanding stellar and black hole mass assembly, whether or not they depend directly on each other, and how they depend on environment, redshift, and stellar mass \citep{gen10,fabian12,far12}.
\vspace{0.1cm}
\noindent {\bfseries The origins of the Universe:} Millimeter-wavelength investigations of primordial B- and E-modes in the cosmic microwave background provide the most powerful observational constraints on the very early Universe, at least until the advent of space-based gravitational-wave observatories\citep{page07,planck16}. However, polarized dusty foregrounds are a pernicious barrier to such observations, as they limit the ability to measure B-modes produced by primordial gravitational waves, and thus to probe epochs up to $10^{-30}$ seconds after the Big Bang. Observations at far-infrared wavelengths are the only way to isolate and remove these foregrounds. CMB instruments that also include far-infrared channels thus allow for internally consistent component separation and foreground subtraction.
\vspace{0.2cm}
The maturation of far-infrared astronomy as a discipline has been relatively recent, in large part catalyzed by the advent of truly sensitive infrared detectors in the early 1990s. Moreover, the trajectory of this development over the last two decades has been steep, going from one dedicated satellite and a small number of other observatories by the mid-1980's, to at least eight launched infrared-capable satellites, three airborne facilities, and several balloon/sub-orbital and dedicated ground based observatories by 2018. New detector technologies are under development, and advances in areas such as mechanical coolers enable those detectors to be deployed within an expanding range of observing platforms. Even greater returns are possible in the future, as far-infrared instrumentation capabilities remain far from the fundamental sensitivity limits of a given aperture.
This recent, rapid development of the far-infrared is reminiscent of the advances in optical and near-infrared astronomy from the 1940s to the 1990s. Optical astronomy benefited greatly from developments in sensor, computing, and related technologies that were driven in large part by commercial and other applications, and which by now are fairly mature. Far-infrared astronomers have only recently started to benefit from comparable advances in capability. The succession of rapid technological breakthroughs, coupled with a wider range of observing platforms, means that far-infrared astronomy holds the potential for paradigm-shifting advances in several areas of astrophysics over the next decade.
We here review the technologies and observing platforms for far-infrared astronomy, and discuss potential technological developments for those platforms, including in detectors and readout systems, optics, telescope and detector cooling, platform infrastructure for ground-based, sub-orbital, and space-based observing, and software development and community cohesion. We aim to identify the technologies needed to address the most important science goals accessible in the far-infrared. We do not review the history of infrared astronomy, as informative historical reviews can be found elsewhere \citep{soipi78,lrg07,rie07,siegel07,price2008history,price09,rieke09,leq09,rowan2013night,cle14}. We focus on the $30-1000\,\mu$m wavelength range, though we do consider science down to $\sim10\,\mu$m, and into the millimeter range, as well. We primarily address the US mid/far-infrared astronomy community; there also exist roadmaps for both European\citep{rigo17} and Canadian\citep{webb2013roadmap} far-infrared astronomy, and for THz technology covering a range of applications\citep{walker2015terahertz,lee2009principles,dhillon17}.
\begin{figure*}
\centering
\includegraphics[width=14cm,angle=0]{afig_atm_ALMA_SOFIA_balloon_final.png}
\vspace{-1cm}
\caption[Atmospheric transmission]{Atmospheric transmission over $1-1000\,\mu$m\citep{lord92}. The curves for ALMA and SOFIA were computed with a 35$^{\circ}$ telescope zenith angle. The two balloon profiles were computed with a 10$^{\circ}$ telescope zenith angle. The precipitable water vapor values for ALMA (5060\,m), SOFIA (12,500\,m), and the 19,800\,m and 29,000\,m altitudes are 500, 7.3, 1.1, and 0.2\,$\mu$m, respectively. The data were smoothed to a resolution of $R = 2000$.}
\label{fig:atmos}
\end{figure*}
\section{Observatories: Atmosphere-Based}
\subsection{Ground-Based}\label{ssect:ground}
Far-infrared astronomy from the ground faces the fundamental limitation of absorption in Earth's atmosphere, primarily by water vapor. The atmosphere is mostly opaque in the mid- through far-infrared, with only a few narrow wavelength ranges with modest transmission. This behavior can be seen in Figure \ref{fig:atmos}, which compares atmospheric transmission for ground-based observing, observing from SOFIA (\S\ref{ssect:sofia}), and two higher altitudes that are accessible by balloon-based platforms. The difficulties of observing from the ground at infrared wavelengths are evident. Moreover, the transmissivity and widths of these windows are heavily weather dependent. Nevertheless, there do exist spectral windows at $34\,\mu$m, $350\,\mu$m, $450\,\mu$m, $650\,\mu$m and $850\,\mu$m with good, albeit weather-dependent transmission at dry, high-altitude sites, with a general improvement towards longer wavelengths. At wavelengths longward of about 1\,mm there are large bands with good transmission. These windows have enabled an extensive program of ground-based far-infrared astronomy, using both single-dish and interferometer facilities.
\subsubsection{Single-dish facilities}\label{ssect:singdish}
Single-dish telescopes dedicated to far-infrared through millimeter astronomy have been operated for over 30 years. Examples include the 15-m James Clerk Maxwell Telescope (JCMT), the 10.4-m Caltech Submillimeter Observatory (CSO, closed September 2015), the 30-m telescope operated by the Institut de Radioastronomie Millimetrique (IRAM), the 12-m Atacama Pathfinder Experiment (APEX), the 50-m Large Millimeter Telescope (LMT) in Mexico, the 10-m Submillimeter Telescope (SMT, formerly the Heinrich Hertz Submillimeter Telescope) in Arizona, and the 10-m South Pole Telescope (SPT). These facilities have made major scientific discoveries in almost every field of astronomy, from planet formation to high-redshift galaxies. They have also provided stable development platforms, resulting in key advances in detector technology, and pioneering techniques that subsequently found applications in balloon-borne and space missions.
There is an active program of ground-based single-dish far-infrared astronomy, with current and near-future single-dish telescopes undertaking a range of observation types, from wide-field mapping to multi-object wideband spectroscopy. This in turn drives a complementary program of technology development. In general, many applications for single-dish facilities motivate development of detector technologies capable of very large pixel counts (\S\ref{directdetect}). Similarly large pixel counts are envisioned for planned space-based far-infrared observatories, including the Origins Space Telescope (OST, \S\ref{origins}). Since far-infrared detector arrays have few commercial applications, they must be built and deployed by the science community itself. Thus, ground-based instruments represent a vital first step toward achieving NASA's long-term far-infrared goals.
We here briefly describe two new ground-based facilities; CCAT-prime (CCAT-p), and the Large Millimeter Telescope (LMT):
\noindent {\bfseries CCAT-p}: will be a 6\,m telescope at 5600\,m altitude, near the summit of Cerro Chajnantor in Chile \citep{koop17}. CCAT-p is being built by Cornell University and a German consortium that includes the universities of Cologne and Bonn, and in joint venture with the Canadian Atacama Telescope Corporation. In addition, CCAT-p collaborates with CONICYT and several Chilean universities. The project is funded by a private donor and by the collaborating institutions, and is expected to achieve first light in 2021.
The design of CCAT-p is an optimized crossed-Dragone \citep{niemack16} system that delivers an $8\hbox{$^\circ$}$ field of view (FoV) with a nearly flat image plane. At 350\,$\mu$m the FoV with adequate Strehl ratio reduces to about $4\hbox{$^\circ$}$. The anticipated instruments will span wavelengths of 350\,$\mu$m to 1.3\,mm. With the large FoV and a telescope surface RMS of below 10.7\,$\mu$m, CCAT-p is an exceptional observatory for survey observations. Since the 200\,$\mu$m zenith transmission is $\geq10$\% in the first quartile at the CCAT-p site \citep{radford16}, a 200\,$\mu$m observing capability will be added in a second generation upgrade.
The primary science drivers for CCAT-p are 1) tracing the epoch of reionization via [CII] intensity mapping, 2) studying the evolution of galaxies at high redshifts, 3) investigating dark energy, gravity, and neutrino masses via measurements of the Sunyaev-Zel'dovich effect, and 4) studying the dynamics of the interstellar medium in the Milky Way and nearby galaxies via high spectral resolution mapping of fine-structure and molecular lines.
CCAT-p will host two facility instruments, the CCAT Heterodyne Array Instrument (CHAI), and the direct detection instrument Prime-Cam (P-Cam). CHAI is being built by the University of Cologne and will have two focal plane arrays that simultaneously cover the 370\,$\mu$m and 610\,$\mu$m bands. The arrays will initially have $8\times8$ elements, with a planned expansion to 128 elements each. The direct detection instrument P-Cam, which will be built at Cornell University, will encompass seven individual optics-tubes. Each tube has a FoV of about $1.3\hbox{$^\circ$}$. For first light, three tubes will be available: 1) a four-color, polarization-sensitive camera with 9000 pixels that simultaneously covers the 1400, 1100, 850, and 730\,$\mu$m bands, 2) a 6000-pixel Fabry-Perot spectrometer, and 3) an 18,000-pixel camera for the 350\,$\mu$m band.
\vspace{0.1cm}
\noindent {\bfseries LMT}: The LMT is a 50-m diameter telescope sited at 4600\,m on Sierra Negra in Mexico. The LMT has a FoV of $4\,\hbox{$^\prime$}$ and is optimized for maximum sensitivity and small beamsize at far-infrared and millimeter wavelengths. It too will benefit from large-format new instrumentation in the coming years. A notable example is TolTEC, a wide-field imager operating at 1.1\,mm, 1.4\,mm, and 2.1\,mm, and with an anticipated mapping speed at 1.1\,mm of 14\,deg$^2$\,mJy$^{-2}$\,hr$^{-1}$ (Table \ref{tbl:requirements}). At 1.1\,mm, the TolTEC beam size is anticipated to be $\sim5\hbox{$^{\prime\prime}$}$, which is smaller than the $6\hbox{$^{\prime\prime}$}$ beamsize of the 24\,$\mu$m {\itshape Spitzer} extragalactic survey maps. As a result, the LMT confusion limit at 1.1\,mm is predicted to be $\sim0.1$\,mJy, making LMT capable of detecting sources with star formation rates below 100\,M$_{\odot}$yr$^{-1}$ at $z\sim6$. This makes TolTEC an excellent ``discovery machine'' for high-redshift obscured galaxy populations. As a more general example of the power of new instruments mounted on single-aperture ground-based telescopes, a $\sim$100-object steered-beam multi-object spectrometer mounted on LMT would exceed the abilities of any current ground-based facility, including ALMA, for survey spectroscopy of galaxies, and would require an array of $\sim10^{5.5}$ pixels.
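As a rough consistency check on the quoted beam size, the diffraction limit of a 50-m aperture at 1.1\,mm is
\begin{equation}
\theta \approx 1.22\,\frac{\lambda}{D} = 1.22\times\frac{1.1\,{\rm mm}}{50\,{\rm m}} \approx 2.7\times10^{-5}\,{\rm rad} \approx 5.5\hbox{$^{\prime\prime}$},
\end{equation}
in line with the anticipated $\sim5\hbox{$^{\prime\prime}$}$ TolTEC beam; the exact value depends on the illumination pattern of the optics, so the prefactor of 1.22 should be taken as indicative only.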
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_SOFIACombined.png}
\caption[The SOFIA observatory, and example flight paths]{{\itshape Left:} The world's largest flying infrared astronomical observatory, SOFIA (\S\ref{ssect:sofia}). {\itshape Right:} Two flight plans, originating from SOFIA's prime base in Palmdale, California. In a typical 8-10 hour flight, SOFIA can observe 1-5 targets.}
\label{fig:sofflight}
\end{figure*}
\subsubsection{Interferometry}\label{ssect:gbinter}
Interferometry at far-infrared wavelengths is now routinely possible from the ground, and has provided order-of-magnitude improvements in spatial resolution and sensitivity over single-dish facilities. Three major ground-based far-infrared/millimeter interferometers are currently operational. The NOEMA array (the successor to the IRAM Plateau de Bure interferometer) consists of nine 15-m dishes at 2550\,m elevation in the French Alps. The Submillimeter Array (SMA) consists of eight 6-m dishes on the summit of Mauna Kea in Hawaii (4200\,m elevation). Both NOEMA and the SMA are equipped with heterodyne receivers. NOEMA has up to 16\,GHz instantaneous bandwidth, while the SMA has up to 32\,GHz of instantaneous bandwidth (or 16\,GHz with dual polarization) with 140\,kHz uniform spectral resolution.
Finally, the Atacama Large Millimeter/submillimeter Array (ALMA) is sited on the Chajnantor Plateau in Chile at an elevation of 5000\,m. It operates from 310\,$\mu$m to 3600\,$\mu$m in eight bands covering the primary atmospheric windows. ALMA uses heterodyne receivers based on Superconductor-Insulator-Superconductor (SIS) mixers in all bands, with 16\,GHz maximum instantaneous bandwidth split across 2 polarizations and 4 basebands. ALMA consists of two arrays: the main array of fifty 12-m dishes (of which typically 43 are in use at any one time), and the Morita array (also known as the Atacama Compact Array), which consists of up to twelve 7-m dishes and up to four 12-m dishes equipped as single dish telescopes.
At the ALMA site (which is the best of the three ground-based interferometer sites), the Precipitable Water Vapor (PWV) is below 0.5\,mm for 25\% of the time during the five best observing months (May-September). This corresponds to a transmission of about 50\% in the best part of the 900-GHz window (ALMA Band 10). In more typical weather (PWV = 1\,mm) the transmission at 900\,GHz is 25\%.
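The two transmission values quoted above are mutually consistent under the simplifying assumption, adequate for a sketch, that the 900\,GHz zenith opacity scales linearly with PWV and that the dry-air contribution can be neglected:
\begin{equation}
t = e^{-\tau}, \qquad \tau(0.5\,{\rm mm}) = -\ln(0.5) \approx 0.69 \;\Rightarrow\; \tau(1\,{\rm mm}) \approx 1.39, \quad t \approx e^{-1.39} \approx 0.25,
\end{equation}
i.e. doubling the water vapor column squares the fractional transmission.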
There are plans to enhance the abilities of ALMA over the next decade, by (1) increasing bandwidth, (2) achieving finer angular resolutions, (3) improving wide-area mapping speeds, and (4) improving the data archive. The primary improvement in bandwidth is expected to come from an upgrade to the ALMA correlator, which will effectively double the instantaneous bandwidth, and increase the number of spectral points by a factor of eight. This will improve ALMA's continuum sensitivity by a factor $\sqrt{2}$, as well as making ALMA more efficient at line surveys. Further bandwidth improvements include the addition of a receiver covering 35-50\,GHz (ALMA Band 1, expected in 2022), and 67-90\,GHz (ALMA Band 2). To improve angular resolution, studies are underway to explore the optimal number and placement of antennas for baseline lengths of up to tens of km. Other possible improvements include increasing the number of antennas in the main array to 64, the incorporation of focal-plane arrays to enable wider field imaging, and improvements in the incorporation of ALMA into the global Very Long Baseline Interferometry (VLBI) network.
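The $\sqrt{2}$ continuum gain quoted above follows directly from the radiometer equation, in which the attainable noise level scales as
\begin{equation}
\sigma \propto \frac{T_{\rm sys}}{\sqrt{\Delta\nu\,t_{\rm int}}},
\end{equation}
so doubling the instantaneous bandwidth $\Delta\nu$ at fixed system temperature $T_{\rm sys}$ improves continuum sensitivity by $\sqrt{2}$, or equivalently halves the integration time $t_{\rm int}$ required to reach a given continuum depth.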
\subsection{Stratospheric Observatory for Infrared Astronomy}\label{ssect:sofia}
The Stratospheric Observatory for Infrared Astronomy (SOFIA\citep{temi14}) is a 2.5-m effective diameter telescope mounted within a Boeing 747SP aircraft that flies to altitudes of 13,700\,m to get above more than 99.9\% of the Earth's atmospheric water vapor. The successor to the Learjet observatory and NASA's Kuiper Airborne Observatory (KAO), SOFIA saw first light in May 2010, began prime operations in May 2014, and offers approximately 600 hours per year for community science observations\citep{young2012early}. SOFIA is the only existing public observatory with access to far-infrared wavelengths inaccessible from the ground, though CMB polarization studies at millimeter wavelengths have also been proposed from platforms at similar altitudes to SOFIA\citep{feen17}.
SOFIA's instrument suite can be changed after each flight, and is evolvable with new or upgraded instruments as capabilities improve. SOFIA is also a versatile platform, allowing for (1) continuous observations of single targets of up to five hours, (2) repeated observations over multiple flights in a year, and (3) in principle, observations in the visible through millimeter wavelength range. Example flight paths for SOFIA are shown in Figure \ref{fig:sofflight}. Each flight path optimizes observing conditions (e.g., elevation, percentage water vapor, maximal on-target integration time). SOFIA can be deployed wherever the science requires, enabling all-sky access. Annually, SOFIA flies from Christchurch, New Zealand to enable southern hemisphere observations.
SOFIA's instruments include the $5-40\,\mu$m camera and grism spectrometer FORCAST\citep{adams2010forcast,herter2012first}, the high-resolution (up to $R=\lambda/\Delta\lambda=100,000$) $4.5-28.3\,\mu$m spectrometer EXES\citep{richter2003high}, the $51-203\,\mu$m integral field spectrometer FIFI-LS\citep{colditz2012sofia}, the $50-203\,\mu$m camera and polarimeter HAWC\citep{harper2000hawc}, and the $R\sim10^8$ heterodyne spectrometer GREAT\citep{heyminck2012great,klein12}. The first-generation HIPO\citep{dunham2004hipo} and FLITECAM\citep{mclean2006flitecam} instruments were retired in early 2018. The sensitivities of these instruments as a function of wavelength are presented in Figure \ref{fig:sofia}. Upgrades to instruments over the last few years have led to new science capabilities, such as adding a polarimetry channel to HAWC (HAWC+\citep{dowell2013hawc+}), and including larger arrays and simultaneous channels on GREAT (upGREAT\citep{risacher2016upgreat} \& 4GREAT, commissioned in 2017), making it into an efficient mapping instrument. Figure \ref{fig:sofiascience} shows early polarimetry measurements from HAWC+.
\begin{figure*}
\begin{center}
\includegraphics[width=16cm,angle=0]{afig_SOFIA_INST_cont_sens_March2018.png}
\caption[SOFIA instrument sensitivities as a function of wavelength]{The continuum sensitivities, as a function of wavelength, of SOFIA's mid- to far-infrared instrument suite. Shown are the 4$\sigma$ Minimum Detectable Continuum Flux (MDCF) densities for point sources in Janskys for 900\,s of integration time.}
\label{fig:sofia}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{afig_RhoOph_sofia_cropped.png}
\caption[An example of a HAWC+ polarimetric image from SOFIA]{The 89\,$\mu$m image (intensity represented by color) of $\rho$ Ophiuchi, with polarization measurements at the same wavelength (black lines), taken using HAWC+ on SOFIA (courtesy of Fabio Santos, Northwestern University). The length of each line is proportional to the degree of polarization. The SOFIA beam size of 7.8$^{\prime\prime}$ is indicated by the black circle (lower right).}
\label{fig:sofiascience}
\end{center}
\end{figure}
Given the versatility and long-term nature of SOFIA, there is a continuous need for more capable instruments throughout SOFIA's wavelength range. However, the unique niche of SOFIA, given its warm telescope and atmosphere, the imminent era of the James Webb Space Telescope (JWST), and ever more capable ground-based platforms, is high-resolution spectroscopy. This is presently realized with two instruments (GREAT and EXES). An instrument under development, the HIgh Resolution Mid InfrarEd Spectrometer (HIRMES), scheduled for commissioning in 2019, will enhance SOFIA's high resolution spectroscopy capabilities. HIRMES covers $25-122\,\mu$m, with three spectroscopic modes ($R = 600$, $R=10,000$, and $R=100,000$), and an imaging spectroscopy mode ($R=2,000$).
As SOFIA can renew itself with new instruments, it provides both new scientific opportunities and maturation of technology to enable future far-infrared space missions. SOFIA offers a 20\,kVA cryo-cooler with two compressors able to service two cold heads. The heads can be configured to operate two cryostats, or in parallel within one cryostat to increase heat pumping capacity, with 2nd stage cooling capacity Q2$\geq$800\,mW at 4.2\,K and 1st stage cooling capacity Q1$\geq$15\,W at 70\,K. Instruments aboard SOFIA can weigh up to 600\,kg, excluding the instrument electronics in the counterweight rack and PI rack(s), and can draw up to 6.5\,kW of power. Instrument volume is limited by the aircraft's door access, and instruments must fit within the telescope assembly constraints.
Capabilities that would be invaluable in a next-generation SOFIA instrument include, but are not limited to:
\begin{itemize}
\item Instruments with $\geq100$ beams that enable low- to high-resolution spectroscopy (up to sub-km s$^{-1}$ velocity resolution; see the relation following this list) from 30 to 600\,$\mu$m. This would enable large-area, velocity-resolved spectral line maps of key fine-structure transitions in giant molecular clouds, and complement the wavelengths accessible by JWST and ground-based telescopes. The current state of the art on SOFIA is upGREAT LFA: 14 beams, 44\,kHz channel spacing.
\item Medium to wide-band imaging and imaging polarimetry from 30 to 600\,$\mu$m, with $10^{4-5}$ pixels and FoV's of tens of arcminutes. The current state of the art on SOFIA is HAWC+, with a $64\times40$ pixel array and a largest possible FoV of $8.0'\times6.1'$.
\item High spectral resolution (R=4,000-100,000) 5-30\,$\mu$m mapping spectroscopy with a factor of $\geq3$ greater observing efficiency and sensitivity than EXES. This would complement JWST, which observes in the same wavelength range but at $R<3300$ with the mid-infrared instrument (MIRI). Such an instrument on SOFIA could then identify the molecular lines that JWST may detect but not spectrally resolve. The current state of the art on SOFIA is EXES, with R$\sim$100,000 and sensitivities ($10\sigma, 100s$) of 10\,Jy at 10\,$\mu$m and 20\,Jy at 20\,$\mu$m (NELB, $10\sigma, 100s$: $1.4\times10^{-6}\,$W m$^{-2}$ sr$^{-1}$ at 10\,$\mu$m; $7.0\times10^{-7}\,$W m$^{-2}$ sr$^{-1}$ at $20\,\mu$m).
\item High resolution (R$\sim$100,000) spectroscopy at $2.5-8\,\mu$m, to identify several gas-phase molecules. These molecules are not readily accessible from the ground (Fig \ref{fig:atmos}), and cannot be reliably identified by JWST as its near-infrared spectrometer NIRSpec only goes up to $R=2700$. Currently, SOFIA does not have such an instrument.
\item A wide-field, high-resolution integral-field spectrometer covering $30-600$\,$\mu$m. This would allow rapid, large-area spectrally-resolved mapping of fine structure lines in the Milky Way, and integral-field spectroscopy of nearby galaxies. The current state of the art on SOFIA is FIFI-LS, with a FoV of $12\hbox{$^{\prime\prime}$}$ over $115-203\,\mu$m and $6\hbox{$^{\prime\prime}$}$ over $51-120\,\mu$m.
\item A broadband, wide-field, multi-object spectrograph, with resolution $R=10^3 - 10^4$ and up to $1000$ beams, over $30-300\,\mu$m. Such an instrument could map velocity fields in galaxies or star-forming regions, with enough beams to allow mapping of complex regions. SOFIA currently does not have any multi-object spectroscopic capability.
\item An instrument to characterize exoplanet atmospheres: an ultra-precise spectro-imager optimized for bands not available from the ground and with sufficient FoV to capture simultaneous reference stars to decorrelate time-variable effects. JWST and ESA's ARIEL mission will both also contribute to this science. SOFIA currently does not have this capability. However, during Early Science with first-generation instruments, SOFIA demonstrated that it could measure the atmospheres of transiting exoplanets with performance similar to that of existing ground-based assets.
\item A mid/far-infrared spectropolarimeter. Spectropolarimetric observations of the relatively unexplored $20\,\mu$m silicate feature with SOFIA would be a unique capability, and allow for e.g. new diagnostics of the chemistry and composition of protoplanetary disks. SOFIA currently does not have polarimetry shortward of $50\,\mu$m.
\end{itemize}
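For reference, the resolving powers quoted in the list above translate into velocity resolution via
\begin{equation}
R \equiv \frac{\lambda}{\Delta\lambda} = \frac{c}{\Delta v} \;\Rightarrow\; \Delta v \approx \frac{3\times10^{5}\,{\rm km\,s^{-1}}}{R},
\end{equation}
so $R=10^{5}$ corresponds to $\Delta v \approx 3$\,km\,s$^{-1}$, while the sub-km\,s$^{-1}$ resolution sought in the first item requires $R\gtrsim3\times10^{5}$, in practice the province of heterodyne instruments such as GREAT.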
Other possible improvements to the SOFIA instrument suite include: (1) upgrading existing instruments (e.g. replacing the FIFI-LS germanium photoconductors to achieve finer spatial sampling through higher multiplexing factors), and (2) instruments that observe in current gaps in SOFIA wavelength coverage (e.g., $1-5\,\mu$m, $90-150\,\mu$m, $210-310\,\mu$m).
More general improvements include the ability to swap instruments faster than a two-day timescale, or the ability to mount multiple instruments. Mounting multiple instruments improves observing efficiency if both instruments can be used on the same source, covering different wavelengths or capabilities. This would also allow for flexibility to respond to targets of opportunity, time domain or transient phenomena, and increase flexibility as a development platform to raise Technology Readiness Levels (TRLs\citep{mankins1995technology,mankins2009technology}) of key technologies.
\begin{figure*}
\centering
\includegraphics[width=16cm,angle=0]{afig_BalloonsCombined.png}
\caption[The balloon-based Stratospheric Terahertz Observatory]{{\itshape Left:} The STO balloon observatory and science team, after the successful hang test in the Columbia Scientific Balloon Facility in Palestine, Texas, in August 2015. This image originally appeared on the SRON STO website. {\itshape Right:} The second science flight of the STO took place from McMurdo in Antarctica on December 8th, 2016, with a flight time of just under 22 days. This image was taken from https://www.csbf.nasa.gov/antarctica/payloads.htm.}
\label{fig:stoii}
\end{figure*}
\subsection{Scientific Ballooning}\label{ssect:uldb}
Balloon-based observatories allow for observations at altitudes of up to $\sim40,000$\,m ($130,000$\,ft). At these altitudes, less than 1\% of the atmosphere remains above the instrument, with negligible water vapor. Scientific balloons thus give access, relatively cheaply, to infrared discovery space that is inaccessible to any ground-based platform, and in some cases even to SOFIA. For example, several key infrared features are inaccessible even at aircraft altitudes (Figure \ref{fig:atmos}), including low-energy water lines and the [N II]122\,$\mu$m line. Scientific ballooning is thus a valuable resource for infrared astronomy. Both standard balloons, with flight times of $\lesssim24$ hours, and Long Duration Balloons (LDBs) with typical flight times of $7-15$ days (though flights have lasted as long as 55 days) have been used. Balloon projects include the Balloon-borne Large Aperture Submillimeter Telescopes (BLAST\citep{tuck04,fiss10,galit14}), PILOT \citep{bern16}, the Stratospheric Terahertz Observatory\citep{walk10} (Figure \ref{fig:stoii}), and FITE\citep{shib10} \& BETTII\citep{rine14}, both described in \S\ref{ssect:firint}. Approved future missions include GUSTO, scheduled for launch in 2021. With the development of Ultra-Long Duration Balloons (ULDB), with potential flight times of over 100 days, new possibilities for far-infrared observations become available.
A further advantage of ballooning, in a conceptually similar manner to SOFIA, is that the payloads are typically recovered and available to refly on $\sim$ one year timescales, meaning that balloons are a vital platform for technology development and TRL raising. For example, far-infrared direct detector technology shares many common elements (detection approaches, materials, readouts) with CMB experiments, which are being conducted on the ground\citep{bicep14,flauger14,bikec15}, from balloons\citep{kogut12,gand16,ebex17}, and in space. These platforms have been useful for developing bolometer and readout technology applicable to the far-infrared.
All balloon projects face challenges, as the payload must include both the instrument and all of the ancillary equipment needed to obtain scientific data. For ULDBs, however, there are two additional challenges:
\vspace{0.1cm}
\noindent {\bfseries Payload mass}: While zero-pressure balloons (including LDBs) can lift up to about 2,700\,kg, ULDBs have a mass limit of about 1,800\,kg. Designing a payload to this mass limit is non-trivial, as science payloads can have masses in excess of 2,500\,kg. For example, the total mass of the GUSTO gondola is estimated to be 2,700\,kg.
\vspace{0.1cm}
\noindent {\bfseries Cooling}: All far-infrared instruments must operate at cryogenic temperatures. Liquid cryogens have been used for instruments on both standard and LDB balloons, with additional refrigerators (e.g. $^3$He, adiabatic demagnetization) to bring detector arrays down to the required operating temperatures, which can be as low as $100\,$mK. These cooling solutions typically operate on timescales commensurate with LDB flights. For the ULDB flights however it is not currently possible to achieve the necessary cryogenic hold times. Use of mechanical coolers to provide first-stage cooling would solve this problem, but current technology does not satisfy the needs of balloon missions. Low-cost cryocoolers for use on the ground are available, but have power requirements that are hard to meet on balloons, which currently offer total power of up to about $2.5$\,kW. Low-power cryocoolers exist for space use, but their cost (typically $\gtrsim\$1$M) does not fit within typical balloon budgets. Cryocoolers are discussed in detail in \S\ref{ssect:cryoc}.
\vspace{0.1cm}
In addition to addressing the challenges described above, there exist several avenues of development that would enhance many balloon experiments. Three examples are:
\begin{itemize}
\item Large aperture, lightweight mirrors for $50-1000\,\mu$m observing (see also \S\ref{ssect:general}).
\item Common design templates for certain subsystems such as star cameras, though attempting to standardize on {\itshape gondola} designs would be prohibitively expensive since most systems are still best implemented uniquely for each payload.
\item Frameworks to enhance the sharing of information, techniques and approaches. While balloon experiments are in general more ``PI driven'' than facility class observatories (since much of the hardware is custom-built for each flight), there does exist a thriving user community in ballooning, within which information and ideas are shared. Nurturing the sharing of information would help in developing this community further. The PI-driven balloon missions also serve as pathfinders for larger facilities, as was the case for BLAST and {\itshape Herschel}, and thus may lay the groundwork for a future ``General Observatory'' class balloon mission.
\end{itemize}
\begin{figure}
\includegraphics[width=8cm,angle=0]{afig_SatellitesCombined.png}
\caption[Four examples of infrared satellites.]{Four examples of satellites that observe at mid/or far-infrared wavelengths (\S\ref{spaceobs}). {\itshape Top row:} {\itshape Spitzer} and {\itshape Herschel}. {\itshape Bottom row:} {\itshape Planck} and JWST, which also use V-groove radiators (thermal shields) to achieve passive cooling to $< 40$\,K.}
\label{fig:launchir}
\end{figure}
\subsection{Short Duration Rocket Flights}\label{ssect:rockets}
Sounding rockets inhabit a niche between high-altitude balloons and fully orbital platforms, providing $5{-}10$ minutes of observation time above the Earth's atmosphere, at altitudes of $50$\,km to $\sim1500$\,km. They have been used for a wide range of astrophysical studies, with a heritage in infrared astronomy stretching back to the 1960's \citep{shiv68,houck69,afgl76,seibert2006history}.
Though an attractive way to access space for short periods, the mechanical constraints of sounding rockets are limiting in terms of the size and capability of instruments. However, sounding rockets observing in the infrared are flown regularly\citep{zemc13}, and rockets are a viable platform for both technology maturation and certain observations in the far-infrared. In particular, measurements of the absolute brightness of the far-infrared sky, intensity mapping, and development of ultra-low noise far-infrared detector arrays are attractive applications of this platform.
Regular access to sounding rockets is now a reality, with the advent of larger, more capable Black Brant XI vehicles launched from southern Australia via the planned Australian NASA deployment in 2019-2020. Similarly, there are plans for recovered flights from Kwajalein Atoll using the recently tested NFORCE water recovery system. Long-duration sounding rockets capable of providing limited access to orbital trajectories and $> 30 \,$min observation times have been studied \citep{NAP12862}, and NASA is continuing to investigate this possibility. However, no missions using this platform are currently planned, and as a result the associated technology development is moving slowly.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_plot_backgrounds.png}
\caption[A comparison between astrophysical infrared background, and thermal backgrounds from telescope optics]{A comparison between the primary astrophysical continuum backgrounds at infrared wavelengths (the Cosmic Microwave Background\citep{fixs09}, the Cosmic Infrared Background\citep{cobefiras}, Galactic ISM emission\citep{parad11}, and Zodiacal emission from interplanetary dust\citep{lein98}) and representative thermal emission from telescope optics at three temperatures, assuming uniform thermal emissivity of 4\%. The astrophysical backgrounds assume observations outside the atmosphere towards high ecliptic and galactic latitudes, and at a distance of 1\,AU from the Sun. The advantages of ``cold'' telescope optics are apparent; at $300\,\mu$m the thermal emission from a 4\,K telescope is five orders of magnitude lower than for a telescope at 45\,K, and enables the detection of the CIRB, Galactic ISM, and zodiacal light.}
\label{fig:backgr}
\end{figure*}
\section{Observatories: Space-Based}\label{spaceobs}
All atmospheric-based observing platforms, including SOFIA and balloons, suffer from photon noise from atmospheric emission. Even at balloon altitudes, residual water vapor contributes an average emissivity of order 1\% through the far-infrared, and can contaminate astrophysical water lines unless they are shifted by velocities of at least a few tens of km s$^{-1}$. The telescope optics are another source of loading, with an unavoidable $2-4$\% emissivity. Though the total emissivity can be less than 5\%, these ambient-temperature ($\sim$250\,K) background sources dominate over the emission from zodiacal and galactic dust. Space-based platforms are thus, for several paths of inquiry, the only way to perform competitive infrared observations.
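The severity of this ambient-temperature loading can be sketched from the power absorbed by a single-mode detector (throughput $A\Omega=\lambda^{2}$) viewing optics of emissivity $\epsilon$ at temperature $T$; per polarization,
\begin{equation}
P \approx \frac{\epsilon\,h\nu\,\Delta\nu}{e^{h\nu/k_{\rm B}T}-1}.
\end{equation}
For illustration (the bandwidth here is an assumption, not an instrument value), $\epsilon=0.04$, $T=250$\,K, and a 10\% bandwidth at $\lambda=100\,\mu$m give $P\approx3\times10^{-11}$\,W per pixel, orders of magnitude above the corresponding astrophysical sky loadings implied by Figure \ref{fig:backgr}.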
There exists a rich history of space-based mid/far-infrared observatories (Figure \ref{fig:launchir}), including IRAS\citep{neugebauer1984infrared}, MSX \citep{mill94}, the IRTS\citep{mura96}, ISO\citep{kessler1996infrared}, SWAS\citep{meln00}, {\itshape Odin}\citep{nordh03}, AKARI \citep{murakami2007infrared}, {\itshape Herschel}\citep{pilb10}, WISE\citep{wright10}, and {\itshape Spitzer}\citep{wer04}. Far-infrared detector arrays are also used on space-based CMB missions, with past examples including {\itshape Planck} \citep{plan11}, WMAP\citep{benn03}, and COBE\citep{bogg92,cobefiras}, as well as concepts such as PIXIE\citep{pixie16}, LiteBIRD\citep{litebird14}, and CORE\citep{core17}.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_confnoise.png}
\caption[A summary of confusion noise levels for selected telescopes]{A summary of literature estimates of confusion noise levels for selected telescopes. The confusion levels are not calculated with a uniform set of assumptions, but are comparable in that they are applicable to regions of sky away from the galactic plane, and with low galactic cirrus emission. Shown are estimates for IRAS at 60\,$\mu$m\citep{hahou87}, ISO\citep{kiss05}, {\itshape Spitzer}\citep{dole04}, {\itshape Herschel}\citep{nguyen10}, {\itshape Planck}\citep{negrel04,fernan08}, AKARI, SPICA\citep{takeuchi2018estimation}, and JCMT. The confusion limits for interferometers such as ALMA and the SMA are all below $10^{-6}$\,mJy.}
\label{fig:confnoise}
\end{figure*}
It is notable, however, that the performance of many past and present facilities is limited by thermal emission from telescope optics (Figure \ref{fig:backgr}). The comparison between infrared telescopes operating at 270\,K vs. temperatures of a few kelvins is analogous to the comparison between the sky brightness during the day and at night in the optical. Even with {\itshape Herschel} and its $\sim$85\,K telescope, the telescope emission was the dominant noise term for both its Photodetector Array Camera and Spectrometer (PACS\citep{pog10}) and Spectral and Photometric Imaging REceiver (SPIRE\citep{gri10}). Thus, the ultimate scientific promise of the far-infrared is in orbital missions with actively-cooled telescopes {\itshape and} instruments. Cooling the telescope to a few kelvins effectively eliminates its emission through most of the far-infrared band. When combined with appropriate optics and instrumentation, this results in orders-of-magnitude improvement in discovery speed over what is achievable from atmospheric-based platforms (Figures \ref{fig:detnep} \& \ref{fig:speed}). A ``cold'' telescope can bring sensitivities at observed-frame $30-500\,\mu$m into parity with those at shorter (JWST) and longer (ALMA) wavelengths.
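The scaling behind this improvement is simple: for background- or detector-limited observations, the integration time required to reach a flux limit $S$ with per-pixel sensitivity NEP goes as
\begin{equation}
t_{\rm int} \propto \left(\frac{\rm NEP}{S}\right)^{2},
\end{equation}
so each factor of ten reduction in NEP buys a factor of one hundred in per-pixel observing speed, before any further gains from larger detector arrays.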
A further limiting factor is source confusion: the fluctuation level in image backgrounds below which individual sources can no longer be detected. Unlike instrument noise, confusion noise cannot be reduced by increasing integration time. Source confusion can arise from both smooth diffuse emission and fluctuations on scales smaller than the beamsize of the telescope. Outside of the plane of the Milky Way, the primary contributors to source confusion are structure in Milky Way dust emission, individually undetected extragalactic sources within the telescope beam, and individually detected sources that are blended with the primary source. Source confusion is thus a strong function of the location on the sky of the observations, the telescope aperture, and observed wavelength. Source confusion is a concern for all previous and current single-aperture infrared telescopes, especially space-based facilities whose apertures are modest compared to ground-based facilities. A summary of the confusion limits of some previous infrared telescopes is given in Figure \ref{fig:confnoise}.
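Although the details depend on the source counts and the map-making method, a common rule of thumb places the classical confusion limit at the flux density $S_{\rm conf}$ where the cumulative source density reaches roughly one source per $\sim$30 beams:
\begin{equation}
N(>S_{\rm conf})\,\Omega_{\rm beam} \approx \frac{1}{30}, \qquad \Omega_{\rm beam}\propto\left(\frac{\lambda}{D}\right)^{2},
\end{equation}
which makes explicit why $S_{\rm conf}$ rises steeply with wavelength at fixed aperture $D$, and why modest increases in aperture can substantially lower the confusion limit when the counts are steep. The ``one per 30 beams'' criterion is indicative only; values between one per 20 and one per 40 beams appear in the literature.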
A related concept is line confusion, caused by the blending and overlapping of individual lines in spectral line surveys. While this is barely an issue in e.g. H\,I surveys as the 21\,cm H\,I line is bright and isolated\citep{jonesmg16}, it is potentially a pernicious source of uncertainty at far-infrared wavelengths, where there are a large number of bright spectral features. This is true in galactic studies\citep{terce10} and in extragalactic surveys. Carefully chosen spatial and spectral resolutions are required to minimize line confusion effects\citep{kogut15}.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_senslimreq.pdf}
\caption[Sensitivity requirements to meet photon background levels in the far-infrared]{Detector sensitivity requirements to meet photon background levels in the far-infrared. With a cryogenic space telescope, the fundamental limits are the zodiacal dust and galactic cirrus emission, and the photon noise level scales as the square root of bandwidth. Of particular interest is the requirement for moderate-resolution dispersive spectroscopy (blue). Also shown are detector sensitivity measurements for the TES, KID, and QCD technologies described in \S\ref{directdetect}. The magenta dotted line shows the photon counting threshold at 100\,Hz: a device which can identify individual photons at this rate (photon counting) at high efficiency is limited by the dark count rate rather than the classical NEP.}
\label{fig:detnep}
\end{figure*}
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_calisto_speed.png}
\caption[A comparison of far-infrared discovery speeds as a function of wavelength]{A comparison of the times required to perform a blank-field spatial-spectral survey reaching a depth of 10$^{-19}\,\rm W\,m^{-2}$ over one square degree, as a function of wavelength, for various facilities. This figure uses current estimates for sensitivity, instantaneous bandwidth covered, telescope overheads, and instantaneous spatial coverage on the sky. The OST curves assume $R=500$ grating spectrometers with $60-200$ beams (depending on wavelength), 1:1.5 instantaneous bandwidth. Pixels are assumed to operate with a NEP of $2\times10^{-20}\,$W\,Hz$^{-1/2}$. The SPICA/SAFARI-G curve is for a 2.5-m telescope with $R=300$ grating spectrometer modules with 4 spatial beams, and detector arrays with a NEP of $2\times10^{-19}\,$W\,Hz$^{-1/2}$. ST30 is a ground-based 30-m telescope with 100 spectrometer beams, each with 1:1.5 bandwidth, ALMA band-averaged sensitivity, and survey speed based on 16\,GHz bandwidth in the primary beam.}
\label{fig:speed}
\end{figure*}
Several approaches have been adopted to extract information on sources below the standard confusion limit. They include: novel detection methods applied to single-band maps\citep{knud06}, the use of prior positional information from higher spatial resolution images to deconvolve single far-infrared sources\citep{rose10,safar15}, and combining priors on positions with priors from spectral energy distribution modelling\citep{macken16,hurl17}. Finally, the spatial-spectral surveys from upcoming facilities such as SAFARI on SPICA or the OSS on the OST should push significantly below the classical confusion limit by including spectral information to break degeneracies in the third spatial dimension\citep{raym10}.
There are two further challenges that confront space-based far-infrared observatories, which are unfamiliar to sub-orbital platforms:
\vspace{0.1cm}
\noindent {\bfseries Dynamic range}: In moving to ``cold'' telescopes, sensitivity is limited only by the far-infrared sky background. We enter a regime where the dominant emission arises from the sources under study, and the sky has genuinely high contrast. This imposes a new requirement on the detector system, namely to observe the full range of source brightnesses, that is simple to meet from sub-orbital platforms but challenging for cooled space-based platforms, since the saturation powers of currently proposed high-resolution detector arrays are within $\sim2$ orders of magnitude of their Noise Equivalent Powers (NEP\footnote{The Noise Equivalent Power (NEP) is, briefly, the input signal power that results in a signal-to-noise ratio of unity in a 1\,Hz bandwidth: the minimum detectable power per square root of bandwidth. Thus, a lower NEP is better. In-depth discussions of the concept of NEP can be found in \citep{lamarre86,richards94bol,benford98}.}). This would limit observations to relatively faint sources. Dynamic range limitations were even apparent for previous-generation instruments such as the Multiband Imaging Photometer onboard {\itshape Spitzer} and PACS onboard {\itshape Herschel}, with saturation limits at 70\,$\mu$m of 57\,Jy and 220\,Jy, respectively. Thus, we must either design detector arrays with higher dynamic range, or populate the focal plane with multiple detector arrays, each suited to part of the range of intensities.
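To illustrate the problem with representative, assumed numbers (not mission-specific values): a detector with ${\rm NEP}=2\times10^{-19}$\,W\,Hz$^{-1/2}$ and saturation power $P_{\rm sat}=2\times10^{-17}$\,W offers, in a one-second integration (post-detection bandwidth of order 1\,Hz), a usable dynamic range of only
\begin{equation}
\frac{P_{\rm sat}}{{\rm NEP}\times\sqrt{1\,{\rm Hz}}} \approx \frac{2\times10^{-17}\,{\rm W}}{2\times10^{-19}\,{\rm W}} = 100,
\end{equation}
far smaller than the sky contrast between faint high-redshift galaxies and bright galactic star-forming regions.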
\vspace{0.1cm}
\noindent {\bfseries Interference}: The susceptibility of cooled detector arrays to interference from ionizing radiation in space was noted in the development of microcalorimeter arrays for X-ray telescopes \citep{stahl99,stahl04,saab04}. Moreover, this susceptibility was clearly demonstrated by the bolometers on {\itshape Planck}. An unexpectedly high rate and magnitude of ionizing radiation events were a major nuisance for this mission, requiring corrections to be applied to nearly all of the data. Had this interference been a factor of $\sim2$ worse, it would have caused significant loss of science return from {\itshape Planck}. Techniques are being developed and demonstrated to mitigate this interference for X-ray microcalorimeters by the addition of a few micron thick layer of gold on the back of the detector frame. It is likely that a similar approach can mitigate interference in high-resolution far-infrared detector arrays as well. Moreover, work on reducing interference in far-infrared detector arrays is being undertaken in the SPACEKIDS program (\S\ref{ssect:kids}).
\vspace{0.1cm}
NASA, the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA), in collaboration with astronomers and technologists around the world, are studying various options for cryogenic space observatories for the far-infrared. There are also opportunities to broaden the far-infrared astrophysics domain to new observing platforms. We give an overview of these space-based observing platforms in the following sections. We do not address the James Webb Space Telescope, as comprehensive overviews of this facility are given elsewhere\citep{gard06}. We also do not review non-US/EU projects such as Millimetron/Spektr-M\citep{smirn12,kard14}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{afig_conceptspica.png}
\caption[Satellite concept - SPICA]{A concept image for the proposed SPICA satellite (\S\ref{sec:spica}).}
\label{fig:satconceptspica}
\end{center}
\end{figure}
\subsection{The Space Infrared Telescope for Cosmology and Astrophysics}\label{sec:spica}
First proposed by JAXA scientists in 1998, the Space Infrared Telescope for Cosmology and Astrophysics (SPICA\citep{naka98,nakagawa2004spica,swinyard2009space,naka17,sibth15}) garnered worldwide interest due to its sensitivity in the mid- and far-infrared, enabled by the combination of the actively-cooled telescope and the sensitive far-infrared detector arrays. Both ESA and JAXA have invested in a concurrent study, and an ESA-JAXA collaboration structure has been established. ESA will provide the 2.5-m telescope, science instrument assembly, satellite integration and testing, and the spacecraft bus. JAXA will provide the passive and active cooling system (supporting a telescope cooled to below 8\,K), cryogenic payload integration, and launch vehicle. JAXA has indicated commitment to its portion of the collaboration, and ESA selected SPICA as one of three candidates for the Cosmic Vision M5 mission. The ESA phase-A study is underway now, and the downselect among the three candidate missions will occur in 2021. Launch is envisioned for 2031. An example concept design for SPICA is shown in Figure \ref{fig:satconceptspica}.
SPICA will have three instruments. JAXA's SPICA mid-infrared instrument (SMI) will offer imaging and spectroscopy from $12$ to $38\,\mu$m. It is designed to complement JWST-MIRI with wide-field mapping (broad-band and spectroscopic), R$\sim$30,000 spectroscopy with an immersion grating, and an extension to $38\,\mu$m with antimony-doped silicon detector arrays. A polarimeter from a French-led consortium will provide dual-polarization imaging in 2-3 bands using high-impedance semiconductor bolometers similar to those developed for {\itshape Herschel}-PACS, but modified for the lower background and to provide differential polarization. A sensitive far-infrared spectrometer, SAFARI, is being provided by an SRON-led consortium \citep{roelf14,past16}. It will provide full-band instantaneous coverage over $35-230\,\mu$m, with a longer wavelength extension under study, using four $R=300$ grating modules. A Fourier-transform module which can be engaged in front of the grating modules will offer a boost to the resolving power, up to R=3000. A US team is working in collaboration with the European team and aims to contribute detector arrays and spectrometer modules to SAFARI\citep{brad17} through a NASA Mission of Opportunity.
\begin{figure*}
\centering
\includegraphics[width=14cm,angle=0]{afig_gep_params.png}
\caption[Photometric redshifts in the infrared from the Galaxy Evolution Probe]{A mid/far-infrared galaxy spectrum, the GEP photometric bands, and notional survey depths. The spectrum is a model of a star-forming galaxy\citep{dale02} exhibiting strong PAH features and far-infrared dust continuum emission. The black spectrum is the galaxy at a redshift of $z = 0$, but scaled vertically by a luminosity distance corresponding to $z = 0.1$ to reduce the plot range. The same spectrum is shown at redshifts $z = 1$, 2, and 3. The vertical dashed lines mark the GEP photometric bands. As the galaxy spectrum is redshifted, the PAH features move through the bands, enabling photometric redshift measurements. This figure does not include the effects of confusion noise.}
\label{fig:probegep}
\end{figure*}
\subsection{Probe-class Missions}
Recognizing the need for astronomical observatories beyond the scope of Explorer class missions but with a cadence more rapid than flagship observatories such as the Hubble Space Telescope (HST), JWST, and the Wide Field Infrared Survey Telescope (WFIRST), NASA announced a call for Astrophysics Probe concept studies in 2017. Ten Probe concepts were selected in Spring 2017 for 18-month studies. Probe study reports will be submitted to NASA and to the Astro 2020 Decadal Survey to advocate for the creation of a Probe observatory line, with budgets of \$400M to \$1B.
Among the Probe concepts under development is the far-infrared Galaxy Evolution Probe (GEP), led by the University of Colorado Boulder and the Jet Propulsion Laboratory. The GEP concept is a two-meter-class, mid/far-infrared observatory with both wide-area imaging and followup spectroscopy capabilities. The primary aim of the GEP is to understand the roles of star formation and black hole accretion in regulating the growth of stellar and black hole mass. In the first year of the GEP mission, it will detect $\geq10^{6}$ galaxies, including $\gtrsim10^5$ galaxies at $z>3$, beyond the peak in redshift of cosmic star formation, by surveying several hundred square degrees of the sky. A unique and defining aspect of the GEP is that it will detect galaxies by bands of rest-frame mid-infrared emission from polycyclic aromatic hydrocarbons (PAHs), which are indicators of star formation, while also using the PAH emission bands and silicate absorption bands to measure photometric redshifts.
The GEP will achieve these goals with an imager using approximately 25 photometric bands spanning $10\,\mu$m to at least $230\,\mu$m, giving a {\itshape spectral} resolution of $R \simeq 8$ (Figure \ref{fig:probegep}). Traditionally, an imager operating at these wavelengths on a 2-m telescope would be significantly confusion-limited, especially at the longer wavelengths (see e.g. the discussion in the introduction to \S\ref{spaceobs}). However, the combination of many infrared photometric bands, and advanced multi-wavelength source extraction techniques, will allow the GEP to push significantly below typical confusion limits. The GEP team is currently simulating the effects of confusion on their surveys, with results expected in early 2019. The imaging surveys from the GEP will thus enable new insights into the roles of redshift, environment, luminosity and stellar mass in driving obscured star formation and black hole accretion, over most of the cosmic history of galaxy assembly.
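As a consistency check on the quoted resolution: if the $\sim$25 GEP bands were spaced logarithmically over $10-230\,\mu$m (an assumption made here for illustration; the flight band centers need not be exactly logarithmic), then
\begin{equation}
\frac{\lambda_{i+1}}{\lambda_{i}} = \left(\frac{230}{10}\right)^{1/24} \approx 1.14 \;\Rightarrow\; R = \frac{\lambda}{\Delta\lambda} \approx \frac{1}{0.14} \approx 7,
\end{equation}
in line with the quoted $R\simeq8$.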
In the second year of the GEP survey, a grating spectrometer will observe a sample of galaxies from the first-year survey to identify embedded AGN. The current concept for the spectrometer includes four or five diffraction gratings with $R \simeq 250$, and spectral coverage from $23\,\mu$m to at least $190\,\mu$m. The spectral coverage is chosen to enable detection of the high-excitation [NeV] $24.2\,\mu$m line, which is an AGN indicator, over $0<z<3.3$, and the [OI] $63.2\,\mu$m line, which is predominantly a star formation indicator, over $0 < z \lesssim 2$.
Recent advances in far-infrared detector array technology have made an observatory like the GEP feasible. It is now possible to fabricate large arrays of sensitive kinetic inductance detectors (KIDs, see \S\ref{ssect:kids}) that have a high frequency multiplex factor. The GEP concept likely will employ Si BIB arrays (similar to those used on JWST-MIRI) for wavelengths from $10\,\mu$m to $24\,\mu$m and KIDs at wavelengths longer than $24\,\mu$m. Coupled with a cold ($\sim4\,$K) telescope, such that the GEP's sensitivity would be photon-limited by astrophysical backgrounds (Figure \ref{fig:backgr}), the GEP will detect the progenitors of Milky Way-type galaxies at $z = 2$ ($\geq10^{12}$\,L$_{\odot}$). Far-infrared KID sensitivities have reached the NEPs required for the GEP imaging to be background limited ($3\times10^{-19}\,\rm W\,Hz^{-1/2}$\citep{Baselmans2016,bueno2017full}), although they would need to be lowered further, by a factor of at least three, for the spectrometer to be background limited. The GEP would serve as a pathfinder for the Origins Space Telescope (\S\ref{origins}), which would have a greater reach in redshift by virtue of its larger telescope. SOFIA and balloons will also serve as technology demonstrators for the GEP and OST.
The technology drivers for the GEP center on detector array size and readout technology. While KID arrays with $10^4 - 10^5$ pixels are within reach, investment must be made in development of low power-consumption readout technology (\S\ref{readschemes}). Large KID (or other direct detection technology) arrays with low-power readouts on SOFIA and balloons would raise their respective TRLs, enabling the GEP and OST.
\subsection{The Origins Space Telescope}\label{origins}
As part of preparations for the 2020 Decadal Survey, NASA is supporting four studies of flagship astrophysics missions. One of these studies is for a far-infrared observatory. A science and technology definition team (STDT) is pursuing this study with support from NASA GSFC. The STDT has settled on a single-dish telescope, and coined the name ``Origins Space Telescope'' (OST). The OST will trace the history of our origins, starting with the earliest epochs of dust and heavy element production, through to the search for extrasolar biomarkers in the local universe. It will answer textbook-altering questions, such as: ``How did the universe evolve in response to its changing ingredients?'' and ``How common are planets that support life?''
Two concepts for the OST are being investigated; both assume an Earth-Sun L2 orbit and a telescope and instrument module actively cooled with 4\,K-class cryocoolers. Concept 1 (Figure \ref{fig:satconceptost}) has an open architecture, like that of JWST. It has a deployable segmented 9-m telescope with five instruments covering the mid-infrared through the submillimeter. Concept 2 is smaller and simpler, and resembles the Spitzer Space Telescope architecturally. It has a 5.9-m diameter telescope (with the same light collecting area as JWST) with no deployable components. Concept 2 has four instruments, which span the same wavelength range and have comparable spectroscopic and imaging capabilities to the instruments in Concept 1.
Because OST would commence in the middle of the next decade, improvements in far-infrared detector arrays are anticipated, both in per-pixel sensitivity and array format, relative to what is currently mature for spaceflight (\S\ref{directdetect}). Laboratory demonstrations, combined with initial OST instrument studies which consider the system-level readout requirements, suggest that total pixel counts of 100,000 to 200,000 will be possible, with each pixel operating at the photon background limit. This is a huge step forward over the $3200$ pixels total on {\itshape Herschel} PACS and SPIRE, and the $\sim4000$ pixels anticipated for SPICA.
The OST is studying the impact of confusion on both wide and deep survey concepts. Their approach is as follows. First, a model of the far-infrared sky is used to generate a three-dimensional hyperspectral data cube. Each slice of the cube is then convolved with the telescope beam, and the resulting cube is used to conduct a search for galaxies with no information given on the input catalogs. Confusion noise is then estimated by comparing the input galaxy catalog to the recovered galaxy catalog. The results from this work are not yet available, but this approach is a significant step forward in robustness compared to prior methods\citep{kogut15}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{afig_conceptosthires.png}
\caption[Satellite concept - OST]{A concept image for the proposed Origins Space Telescope (OST, \S\ref{origins}). This image shows a design for the more ambitious ``Concept 1''. The design includes nested sunshields and a boom, in which the instrument suite is located. The color coding of the image gives a qualitative indication of telescope temperature.}
\label{fig:satconceptost}
\end{center}
\end{figure}
\subsection{CubeSats}\label{ssect:cube}
CubeSats are small satellites built in multiples of 1U (10\,cm $\times$ 10\,cm $\times$ 10\,cm, $<$1.33 kg). Because they are launched within containers, they are safe secondary payloads, reducing the cost of launch for the payload developer. In addition, a large ecosystem of CubeSat vendors and suppliers is available, which further reduces costs. CubeSats thus provide quick, affordable access to space, making them attractive technology pathfinders and risk mitigation missions towards larger observatories. Moreover, according to a 2016 National Academies report\citep{NAP23503}, CubeSats have demonstrated their ability to perform high-value science, especially via missions that make a specific measurement, and/or that complement a larger project. To date, well over 700 CubeSats have been launched, most of them 3U.
Within general astrophysics, CubeSats can produce competitive science, although the specific area needs to be chosen carefully\citep{ardila2017,shk18}. For example, long-duration pointed monitoring is a unique niche. So far the Astrophysics division within NASA's Science Mission Directorate has funded four CubeSat missions: in $\gamma$-rays (BurstCube, \cite{perkins2018}), X-rays (HaloSat, \cite{kaar16}), and the ultraviolet (SPARCS, \cite{shkolnik2018}; CUTE, \cite{fleming2018}).
For the far-infrared, the CubeSat technology requirements are daunting. Most far-infrared detectors require cooling to reduce the thermal background to acceptable levels, to 4\,K or even 0.1\,K, although CubeSats equipped with Schottky-based instruments that do not require active cooling may be sufficiently sensitive for certain astronomical and Solar System applications (see also e.g. \citep{jos18}). CubeSat platforms are thus constrained by the lack of low-power, high-efficiency cryocoolers. Some applications are possible at 40\,K, and small Stirling coolers can provide 1\,W of heat lift at this temperature (see also \S\ref{ssect:cryoc}). However, this would require the majority of the volume and power budget of even a large CubeSat (which typically has a total power budget of a few tens of watts), leaving little for further cooling stages, electronics, detector systems, and telescope optics.
CubeSats are also limited by the large beam size associated with small optics. A diffraction-limited $10$\,cm aperture operating at $100\,\mu$m would have a beam size of about $3.5\hbox{$^\prime$}$. There are concepts for larger, deployable apertures \citep{agasid2013}, up to $\sim$20\,cm, but none have been launched.
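The quoted beam size follows directly from diffraction; a minimal check in Python:
\begin{verbatim}
import numpy as np

# Diffraction-limited beam for a 10 cm aperture at 100 microns.
lam = 100e-6     # wavelength [m]
D = 0.10         # aperture diameter [m]

fwhm_rad = 1.02 * lam / D                  # Airy-pattern FWHM
fwhm_arcmin = np.degrees(fwhm_rad) * 60.0
print(f"FWHM = {fwhm_arcmin:.1f} arcmin")  # ~3.5 arcmin
\end{verbatim}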
For these reasons, it is not currently feasible to perform competitive far-infrared science with CubeSats. However, CubeSats can serve to train the next generation of space astronomers, as platforms for technology demonstrations that may be useful to far-infrared astronomy, and as complements to larger observing systems. For example, the CubeSat Infrared Atmospheric Sounder (CIRAS) is an Earth Observation 6U mission with a $4.78 - 5.09\,\mu$m imaging spectrograph payload. The detector array will be cooled to 120\,K, using a Lockheed Martin Coaxial MPT Cryocooler, which provides a 1\,W heat lift (Figure \ref{fig:lm_cryo}). At longer wavelengths, the Aerospace Corporation's CUMULOS \citep{ardila2016} has demonstrated $8-15\,\mu$m Earth imaging with an uncooled bolometer from a CubeSat. CubeSats can also serve as support facilities: in the sub-millimeter range, the {\itshape CalSat} concept would use a CubeSat as a calibration source for CMB polarization observatories \citep{johnson15}.
\begin{figure}
\includegraphics[width=8cm,angle=0]{afig_LM_cryo.png}
\caption[A example of a 120\,K cryocooler for a CubeSat]{The Lockheed Martin Coaxial Micro Pulse Tube Cryocooler, which will provide cooling to 120\,K for the CubeSat Infrared Atmospheric Sounder (CIRAS), scheduled for launch in 2019 \citep{pag16}. This cooler weighs less than 0.4\,kg, and has reached TRL $\geq6$.}
\label{fig:lm_cryo}
\end{figure}
\subsection{The International Space Station}\label{ssect:iss}
The International Space Station (ISS) is a stable platform for both science and technology development. Access to the ISS is currently provided to the US astronomical community through Mission of Opportunity calls which occur approximately every two years and have $\sim\$60$M cost caps. Several payload sites are available for hosting US instruments, with typically $1 \,$m$^{3}$ of volume, at least 0.5 and up to 6\,kW of power, wired and wireless ethernet connectivity, and at least 20\,kbps serial data bus downlink capability\citep{ISSPropGuide}.
In principle, the ISS is an attractive platform for astrophysics, as it offers a long-term platform at a mean altitude of 400\,km, with the possibility for regular instrument servicing. Infrared observatories have been proposed for space station deployment at least as far back as 1990\citep{brown90}. There are however formidable challenges in using the ISS for infrared astronomy. The ISS environment is, for infrared science, highly unstable, with sixteen sunrises every 24-hour period, ``glints'' from equipment near the FoV, and vibrations and electromagnetic fields from equipment in the ISS. Furthermore, the external instrument platforms are not actively controlled, and are subject to various thermal instabilities over an orbit, which would require active astrometric monitoring.
Even with these challenges, there are two paths forward for productive infrared astronomy from the ISS:
\begin{itemize}
\item For hardware that can tolerate and mitigate the dynamic environment of the ISS, there is ample power and space for the deployment of instruments, potentially with mission lifetimes of a year or more. Example applications that may benefit from this platform include monitoring thermal emission from interplanetary dust, or time domain astronomy.
\item The long-term platform, freely available power, and opportunities for direct servicing by astronauts, make the ISS an excellent location to raise TRLs of technologies so that they can be deployed on other space-based platforms.
\end{itemize}
\noindent Efforts thus exist to enable infrared observing from the ISS. For example, the Terahertz Atmospheric/Astrophysics Radiation Detection in Space (TARDiS) is a proposed infrared experiment that would observe both the upper atmosphere of Earth and the ISM of the Milky Way.
\section{New Instruments and Methods}\label{sect:newmod}
Continuing advances in telescope and detector technology will enable future-generation observatories to have much greater capabilities than their predecessors. Technological advancement also raises the possibility of new observing techniques in the far-infrared, with the potential for transformational science. We discuss two such techniques in this section; interferometry, and time-domain astronomy.
\subsection{Interferometry}\label{ssect:firint}
Most studies of future far-infrared observatories focus on single-aperture telescopes. There is however enormous potential for interferometry in the far-infrared (Figure \ref{fig:firresgap}). Far-infrared interferometry is now routine from the ground (as demonstrated by ALMA, NOEMA, and the SMA), but has been barely explored from space- and balloon-based platforms. However, the combination of access to the infrared without atmospheric absorption and angular resolutions that far exceed those of any single-aperture facility, enables entirely new areas of investigation\citep{sauvage2013sub,leis13,juanola2016far}.
In our solar system, far-infrared interferometry can directly measure the emission from icy bodies in the Kuiper belt and Oort cloud. Around other stars, far-infrared interferometry can probe planetary disks to map the spatial distribution of water, water ice, gas, and dust, and search for structure caused by planets. At the other end of the scale, far-infrared interferometry can measure the rest-frame near/mid-infrared emission from high-redshift galaxies without the information-compromising effects of spatial confusion. This was recognized within NASA's 2010 long-term roadmap for Astrophysics, {\it Enduring Quests/Daring Visions} \citep{endques14}, which stated that, within the next few decades, scientific goals will begin to outstrip the capabilities of single aperture telescopes. For example, imaging of exo-Earths, determining the distribution of molecular gas in protoplanetary disks, and directly observing the event horizon of a black hole all require single aperture telescopes with diameters of hundreds of meters, over an order of magnitude larger than is currently possible. Conversely, interferometry can provide the angular resolution needed for these goals with much less difficulty.
Far-infrared interferometry is also an invaluable technology development platform. Because certain technologies for interferometry, such as ranging accuracy, are more straightforward for longer wavelengths, far-infrared interferometry can help enable interferometers operating in other parts of the electromagnetic spectrum\footnote{Interferometer technology has however been developed for projects outside the infrared; examples include the Keck Interferometer, CHARA, LISA Pathfinder, and the Terrestrial Planet Finder, as well as several decades of work on radio interferometry.}. This was also recognized within {\itshape Enduring Quests/Daring Visions}: ``...the technical requirements for interferometry in the far-infrared are not as demanding as for shorter wavelength bands, so far-infrared interferometry may again be a logical starting point that provides a useful training ground while delivering crucial science.'' Far-infrared interferometry thus has broad appeal, beyond the far-infrared community, as it holds the potential to catalyze development of space-based interferometry across multiple wavelength ranges.
The 2000 Decadal Survey\citep{NAP9839} recommended development of a far-infrared interferometer, and the endorsed concept (the Submillimeter Probe of the Evolution of Cosmic Structure: SPECS) was subsequently studied as a ``Vision Mission'' \citep{har06}. Recognizing that SPECS was extremely ambitious, a smaller, structurally-connected interferometer was studied as a potential Origins Probe -- the Space Infrared Interferometric Telescope (SPIRIT\citep{leis07}, Figure \ref{fig:firinter}). At around the same time, several interferometric missions were studied in Europe, including FIRI\citep{helm09} and the heterodyne interferometer ESPRIT\citep{wild08}. Another proposed mission, TALC\citep{dur14,sauv16talc}, is a hybrid between a single-aperture telescope and an interferometer and thus demonstrates technologies for a structurally connected interferometer. There are also concepts using nanosats\citep{dohlena2014design}. Recently, the European community carried out the Far-Infrared Space Interferometer Critical Assessment (FP7-FISICA), resulting in a design concept for the Far-Infrared Interferometric Telescope (FIRIT). Finally, the ``double Fourier'' technique that would enable simultaneous high spatial and spectral observations over a wide FoV is maturing through laboratory experimentation, simulation, and algorithm development\citep{elias07,leis12,grain12,7460625,brack16}.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_FIR_scene-v1.png}
\caption[The far-infrared resolution ``gap'', and the potential for far-infrared interferometry]{The angular resolutions of selected facilities as a function of wavelength. Very high spatial resolutions are achievable at millimeter to radio wavelengths using ground-based interferometers, while current and next-generation large-aperture telescopes can achieve high spatial resolutions in the optical and near-infrared. In the mid/far-infrared however the best achievable spatial resolutions still lag several orders of magnitude behind those achievable at other wavelengths. Far-infrared interferometry from space will remedy this, providing an increase in spatial resolution shown by the yellow arrow. A version of this figure originally appeared in the FISICA (Far-Infrared Space Interferometer Critical Assessment) report, courtesy of Thijs de Graauw.}
\label{fig:firresgap}
\end{figure*}
Two balloon payloads have been developed to provide scientific and technical demonstration of interferometry: the Far-Infrared Interferometric Telescope Experiment (FITE\citep{shib10}), and the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII\citep{rine14}), which first launched in June 2017. The first BETTII launch resulted in a successful engineering flight, demonstrating nearly all of the key systems needed for future science flights. Unfortunately, an anomaly at the end of the flight resulted in complete loss of the payload. A rebuilt BETTII should fly before 2020.
Together, BETTII and FITE will serve as an important development step towards future space-based interferometers, while also providing unique scientific return. Their successors, taking advantage of many of the same technologies as other balloon experiments (e.g. new cryocoolers, lightweight optics), will provide expanded scientific capability while continuing the path towards space-based interferometers.
Far-infrared interferometers have many of the same technical requirements as their single aperture cousins. In fact, an interferometer could be used in ``single aperture'' mode, with instruments similar to those on a single aperture telescope. However, in interferometric mode, the development requirements for space-based far-infrared interferometry are:
\begin{itemize}
\item Detailed simulations, coupled with laboratory validation, of the capabilities of interferometers. For example, imaging with an interferometer is sometimes assumed to require full coverage of the synthetic aperture; however, for many science cases, partial coverage (akin to coverage of ground-based radio interferometers) may be sufficient.
\item High speed detector arrays are desirable for interferometry missions, to take advantage of fast-scanning techniques.
\item Free-flying interferometers can benefit from advances in sub-newton thruster technology, as well as techniques for efficient formation flying.
\item Structurally connected interferometers can benefit from studying deployment of connected structures and boom development.
\item Demonstration of the system-level integration of interferometers. Balloon-borne pathfinders provide an ideal platform for doing this.
\end{itemize}
Finally, we comment on temporal performance requirements. The temporal performance requirements of different parts of an interferometer depend on several factors, including the FoV, sky and telescope backgrounds, rate of baseline change, and desired spectral resolution. We do not discuss these issues in detail here, as they are beyond the scope of a review paper. We do however give an illustrative example: a $1\hbox{$^\prime$}$ FoV, with a baseline of 10\,m, spectral resolution of $R=100$, and 16 points per fringe, results in a readout speed requirement of 35\,Hz. However, increasing the spectral resolution to $R=1000$ (at the same scan speed) raises the readout speed requirement to 270\,Hz. These correspond to detector time constants of 17\,ms and 3\,ms. A baseline requirement for a relatively modest interferometer (e.g. SHARP-IR \citep{rine16}) is thus a detector time constant of a few milliseconds. The exact value is however tied tightly to the overall mission architecture and operations scheme.
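The scaling between these two readout rates can be reproduced with a simple model; in the Python sketch below, the reference wavelength ($100\,\mu$m) and the assumption of a fixed $\sim60$\,s dwell per optical path difference (OPD) scan are ours, chosen to match the quoted numbers:
\begin{verbatim}
import numpy as np

# Rough model of the readout rate for a scanning "double Fourier"
# interferometer. The 100 um reference wavelength and the fixed
# ~60 s dwell per OPD scan are our assumptions.
lam = 100e-6                    # reference wavelength [m]
baseline = 10.0                 # interferometric baseline [m]
fov = np.radians(1.0 / 60.0)    # 1 arcmin field of view [rad]
samples_per_fringe = 16
t_dwell = 60.0                  # time to complete one OPD scan [s]

for R in (100, 1000):
    # OPD stroke: R*lam for spectral resolution, plus baseline*FoV
    stroke = R * lam + baseline * fov
    n_samples = samples_per_fringe * stroke / lam
    print(f"R = {R:4d}: readout rate ~ {n_samples / t_dwell:.0f} Hz")
# -> ~34 Hz and ~274 Hz, close to the 35 Hz and 270 Hz quoted above
\end{verbatim}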
\begin{figure*}
\centering
\includegraphics[width=12cm,angle=0]{afig_SPIRIT_orig5.png}
\vspace{-1cm}
\caption[SPIRIT as an example of a structurally connected interferometer]{The SPIRIT structurally connected interferometer concept \citep{leis07}. SPIRIT is a spatio-spectral ``double Fourier'' interferometer that has been developed to Phase A level. SPIRIT has sub-arcsecond resolution at $100\,\mu$m, along with $R\sim4,000$ spectral resolution. The maximum interferometric baseline is 36\,m.}
\label{fig:firinter}
\end{figure*}
\subsection{Time-domain \& Rapid Response Astronomy}\label{ssect:timed}
Time domain astronomy is an established field at X-ray through optical wavelengths, with notable observations including {\it Swift}'s studies of transient high-energy events, and the {\it Kepler} mission using optical photometry to detect exoplanets. Time domain astronomy in the far-infrared holds the potential for similarly important studies of phenomena on timescales of days to years: (1) searches for infrared signatures of (dust-obscured) $\gamma$-ray bursts, (2) monitoring the temporal evolution of waves in debris disks to study the earliest stages of planet formation, and (3) monitoring supernova light curves to study the first formation stages of interstellar dust. To date however such capabilities in the far-infrared have been limited. For example, {\itshape Spitzer} was used to measure secondary transits of exoplanets\citep{dem07}, but only when the ephemeris of the target was known.
The limitations of far-infrared telescopes for time-domain astronomy are twofold. First, to achieve high photometric precision in the time domain, comparable to that provided by {\it Kepler}, the spacecraft must be extremely stable, to requirements beyond those typically needed for cameras and spectrographs. This is not a fundamental technological challenge, but the stability requirements must be taken into consideration from the earliest design phase of the observatory. Second, if the intent is to {\itshape discover} transient events in the far-infrared (rather than monitor known ones) then the FoV of the telescope must be wide, since most transient events cannot be predicted and thus must be found via observations of a large number of targets.
\section{Technology Priorities}\label{sect:firprior}
The anticipated improvements in existing far-infrared observatories, as well as the realization of next-generation space-based far-infrared telescopes, all require sustained, active development in key technology areas. We here review the following areas: direct detector arrays (\S\ref{directdetect}), medium-resolution spectroscopy (\S\ref{ssect:medresspec}), heterodyne spectroscopy (\S\ref{ssect:het}), Fabry-Perot interferometry (\S\ref{ssect:fabry}), cooling systems (\S\ref{ssect:cryoc}), and mirrors (\S\ref{ssect:mirr}). We briefly discuss a selection of other topics in \S\ref{ssect:general}.
\subsection{Direct Detector Arrays}\label{directdetect}
A key technical challenge for essentially any future far-infrared space observatory (whether single-aperture or interferometer) is the development of combined direct detector + multiplexer readout systems. These systems are not typically developed by the same industrial teams that build near- and mid-infrared devices. Instead, they are usually developed by dedicated groups at universities or national labs. These systems have two core drivers:
\begin{enumerate}
\item {\bfseries Sensitivity:} The per-pixel sensitivity should meet or exceed the photon background noise set by the unavoidable backgrounds: zodiacal light, galactic cirrus, and the microwave background (Figure \ref{fig:backgr}). An especially important target is that for moderate-resolution (R$\sim$1000) spectroscopy, for which the per-pixel NEP is 3$\times$10$^{-20}\rm\,W Hz^{-1/2}$. For the high-resolution direct detection spectrometers considered for the OST, the target NEP is $\sim10^{-21}\rm\,W Hz^{-1/2}$. A representative set of direct detector sensitivities and requirements is given in Table \ref{tbl:requirements}.
\vspace{0.1cm}
\item {\bfseries High pixel counts:} Optimal science return from a mission like the OST demands total pixel counts (in all instruments) in the range $10^{5-6}$. This is still a small number compared with arrays for the optical and near-infrared, for which millions of pixels can be fielded in a single chip, but $\sim$100$\times$ larger than the total number of pixels on {\itshape Herschel}. Moreover, mapping speed is also influenced by the per-pixel aperture efficiency. Large, high-efficiency feedhorn systems (such as that used on {\itshape Herschel} SPIRE), can offer up to twice the mapping speed {\itshape per detector}, though such systems are slower per unit focal plane area than more closely packed horns or filled arrays \citep{griffin02}.
\end{enumerate}
\noindent There are also the challenges of interference and dynamic range (\S\ref{spaceobs}).
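To make the sensitivity target concrete, the photon-noise NEP can be estimated from the absorbed background power; in the sketch below, the background power and wavelength are hypothetical values for an $R\sim1000$ cold-telescope channel, and photon bunching is neglected:
\begin{verbatim}
import numpy as np

h, c = 6.626e-34, 3.0e8    # SI units

# Shot-noise-only photon NEP: NEP^2 = 2 * P * h * nu. The absorbed
# background power P and the wavelength are illustrative assumptions.
lam = 150e-6               # wavelength [m]
P = 3.4e-19                # absorbed background power [W]

nep = np.sqrt(2.0 * P * h * c / lam)
print(f"NEP ~ {nep:.1e} W/sqrt(Hz)")   # ~3e-20 W/sqrt(Hz)
\end{verbatim}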
The world leaders in far-infrared detector technology include SRON in the Netherlands, Cambridge and Cardiff in the UK, and NASA in the USA, with at least three approaches under development. In order of technical readiness they are:
\begin{itemize}
\item {\bfseries Superconducting transition-edge-sensed (TES) bolometers}, which have been used in space-based instruments, as well as many atmosphere-based platforms.
\item {\bfseries Kinetic inductance detectors (KIDs)}, which have progressed rapidly, and have been used on several ground- and atmosphere-based instruments. The best KID sensitivities are comparable to TES detectors and have been demonstrated at larger (kilopixel) scales, though the sensitivities needed for spectroscopy with future large space missions remain to be demonstrated. While KIDs lead in some areas (e.g., pixel count), overall they are a few years behind TES-based systems in technological maturity.
\item {\bfseries Quantum capacitance detectors (QCDs)}, which have demonstrated excellent low-background sensitivity but at present have modest yield, and are substantially behind both TES and KID-based systems in terms of technological maturity.
\end{itemize}
\noindent All are potentially viable for future far-infrared missions. We consider each one in turn, along with a short discussion of multiplexing.
\begin{table*}
\begin{threeparttable}[b]
\caption[Far-infrared to mm-wave detector arrays: examples and requirements]{Selected examples of sensitivities achieved by far-infrared to mm-wave detector arrays, along with some required for future missions} \label{tbl:requirements}
{\scriptsize
\begin{tabular}{lcccccccl}
\hline
\hline
Observatory \& & Waveband & Aperture & $T_{aper}$ & $T_{det}$ & NEP & Detector & Detector & Notes \\
instrument & $\mu$m & meters & K & K & $\mathrm{W}/\sqrt{\mathrm{Hz}}$ & Technology & Count & \\
\hline
JCMT - SCUBA & 450/850 & 15 & 275 & 0.1 & $2\times 10^{-16}$ & Bolometers & 91/36 & \\
JCMT - SCUBA2 & 450/850 & 15 & 275 & 0.1 & $2\times 10^{-16}$ & TES & 5000/5000 & \\
APEX - ArTeMis & 200-450 & 12 & 275 & 0.3 & $4.5\times 10^{-16}$ & Bolometers & 5760 & \\
APEX - A-MKID & 350/850 & 12 & 275 & 0.3 & $1\times 10^{-15}$ & KIDS & 25,000 & \\
APEX - ZEUS-2 & 200-600 & 12 & 275 & 0.1 & $4\times 10^{-17}$ & TES & 555 & $R\sim1000$ \\
CSO - MAKO & 350 & 10.4 & 275 & 0.2 & $7\times 10^{-16}$ & KIDS & 500 & Low-\$/pix \\
CSO - Z-Spec & 960-1500 & 10.4 & 275 & 0.06 & $3\times 10^{-18}$ & Bolometers & 160 & \\
IRAM - NIKA2 & 1250/2000 & 30 & 275 & 0.1 & $1.7\times 10^{-17}$ & KIDS & 4000/1000 & \\
LMT - TolTEC & 1100 & 50 & 275 & 0.1 & $7.4\times 10^{-17}$ & KIDS & 3600 & Also at 1.4\,mm, 2.1\,mm \\
SOFIA - HAWC+ & 40-250 & 2.5 & 240 & 0.1 & $6.6\times 10^{-17}$ & TES & 2560 & \\
SOFIA - HIRMES & 25-122 & 2.5 & 240 & 0.1 & $2.2\times 10^{-17}$ & TES & 1024 & Low-res channel \\
BLAST-TNG & 200-600 & 2.5 & 240 & 0.3 & $3\times 10^{-17}$ & KIDS & 2344 & \\
{\itshape Herschel} - SPIRE & 200-600 & 3.5 & 80 & 0.3 & $4\times 10^{-17}$ & Bolometers & 326 & \\
{\itshape Herschel} - PACS bol. & 60-210 & 3.5 & 80 & 0.3 & $2\times 10^{-16}$ & Bolometers & 2560 & \\
{\itshape Herschel} - PACS phot. & 50-220 & 3.5 & 80 & 1.7 & $5\times 10^{-18}$ & Photoconductors & 800 & $R\sim2000$ \\
{\itshape Planck} - HFI & 300-3600 & 1.5 & 40 & 0.1 & $1.8\times 10^{-17}$ & Bolometers & 54 & \\
SuperSpec & 850-1600 & -- & N/A & 0.1 & $1.0\times 10^{-18}$ & KIDS & $\sim10^{2}$ & $R\lesssim700$ \\
SPACEKIDS & -- & -- & N/A & 0.1 & $3\times 10^{-19}$ & KIDS & 1000 & \\
\hline
SPICA - SAFARI & 34-210 & 3.2 & $<6$ & 0.05 & $2\times 10^{-19}$ & & 4000 & \\
SPIRIT & 25-400 & 1.4 & $4$ & 0.05 & $1\times 10^{-19}$ & & $\sim10^{2}$ & \\
OST - imaging & 100-300 & 5.9-9.1 & $4$ & 0.05 & $2\times 10^{-19}$ & & $\sim10^{5}$ & \\
OST - spectroscopy & 100-300 & 5.9-9.1 & $4$ & 0.05 & $2\times 10^{-20}$ & & $\sim10^{5}$ & $R\sim500$ \\
\hline
\hline
\end{tabular}
}
\begin{tablenotes}
{\footnotesize
\item Requirements for the SPICA/SAFARI instrument are taken from \citep{jackson2012spica}. Requirements for the SPIRIT interferometer (whose aperture is the effective aperture diameter for an interferometer with two 1-m diameter telescopes) are taken from \citep{benford2007cryogenic}.
}
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsubsection{Transition Edge Sensors}\label{ssect:tes}
A transition edge sensor (TES, Figure \ref{fig:dettes}) consists of a superconducting film operated near its superconducting transition temperature. This means that the functional form of the temperature dependence of resistance, $R(T)$, is very sharp. The sharpness of the $R(T)$ function allows for substantially better sensitivity than semi-conducting thermistors (though there are other factors to consider, such as readout schemes, see \S\ref{readschemes}). Arrays of transition-edge-sensed (TES) bolometers have been used in CMB experiments \citep{bicep15,hend16,hubm16,thorn16,denis16}, as well as in calorimeters in the $\gamma$-ray\citep{noro13}, X-ray\citep{irwin1996x,woll00}, ultraviolet, and optical bands. They are also anticipated for future X-ray missions, such as Athena\citep{smith16,gott16}.
In the infrared, TES bolometers are widely used. A notable ground-based example is SCUBA2 on the JCMT\citep{holl13} (Table \ref{tbl:requirements}). Other sub-orbital examples include HAWC+ and the upcoming HIRMES instrument, both on SOFIA. TES bolometers are also planned for use in the SAFARI instrument for SPICA\citep{tessfara,gold16,khos16,hijm16}. In terms of sensitivity, groups at SRON and JPL have demonstrated TES sensitivities of 1$\times$10$^{-19}\,\rm W\,Hz^{-1/2}$\citep{beyer12,karas14,khos16}.
The advantages of TES arrays over KID and QCD arrays are technological maturity and versatility in readout schemes (see \S\ref{readschemes}). However, TES detector arrays do face challenges. The signal in TES bolometers is a current through a (sub-$\Omega$) resistive film at sub-kelvin temperatures, which cannot readily be impedance matched to conventional low-noise amplifiers with high input impedance. Instead, superconducting quantum interference devices (SQUIDs) are used as first-stage amplifiers, and SQUID-based circuits have been fashioned into a switching time-domain multiplexer (the TDM, from NIST and UBC\citep{irwin2005transition}), which has led to array formats of up to $\sim$10$^4$ pixels. While this time-domain multiplexing system is mature and field tested in demanding scientific settings, it is not an approach that can readily scale above $\sim10^4$ pixels, due primarily to wire count considerations. Other issues with TES arrays include: (1) challenging array fabrication, and (2) relatively complex SQUID-based readout systems with (as yet) no on-chip multiplexing.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_wfig_tes.png}
\caption[Transition-edge sensed (TES) bolometers]{Transition-edge sensed (TES) bolometers developed at SRON (left) and JPL (center), targeting high sensitivity for far-infrared spectroscopy from cold telescopes. These are silicon-nitride suspensions, similar to the {\itshape Herschel} and {\itshape Planck} bolometers, but they feature long ($\sim1\,$mm), narrow ($\sim0.4\,\mu$m) suspension legs, and are cooled to below 100\,mK. Both programs have demonstrated NEPs of $1-3\times10^{-19}\,$W\,Hz$^{-1/2}$\citep{suz16dev}. An example NEP measurement of the JPL system is shown at right.}
\label{fig:dettes}
\end{figure*}
\subsubsection{Kinetic Inductance Detectors}\label{ssect:kids}
The simplest approach to high-multiplex-factor frequency domain multiplexing (FDM, see also \S\ref{readschemes}) thus far is the kinetic inductance detector (KID\citep{day03,zmu12}, Figure \ref{fig:detkid}). In a KID, photons incident on a superconducting film break Cooper pairs, which results in an increase in the inductance of the material. When embedded in a resonant circuit, the inductance shift creates a measurable frequency shift, which is encoded as a phase shift of the probe tone. KIDs originated as far-infrared detector arrays, with on-telescope examples including MAKO \citep{mako12} and MUSIC\citep{maloney2010music} at the CSO, A-MKID\citep{heym10} at APEX, NIKA/NIKA2\citep{monf10,monf11,adam18} at IRAM, the extremely compact $\mu$-Spec \citep{cataldo2014micro,barrentine2016design}, SuperSpec\citep{shirokoff2014design}, and the submillimeter wave imaging spectrograph DESHIMA\citep{endo12}. KIDs were later adapted for the optical / near-infrared\citep{mazin12}, where they provide advances in time resolution and energy sensitivity. Examples include ARCONS\citep{arcons13}, DARKNESS \& MEC\citep{mee15,maz15}, the KRAKENS IFU\citep{mazkr15}, and PICTURE-C\citep{picc}. KIDs are also usable for millimeter-wave/CMB studies \citep{calvo10,kara12,mcc14,low14,oguri2016}, although there are challenges in finding materials with suitably low $T_c$'s when operating below 100\,GHz. KIDs are now being built in large arrays for several ground-based and sub-orbital infrared observatories, including the BLAST-Pol2 balloon experiment.
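The detection principle can be illustrated numerically; in the minimal sketch below, all component values are illustrative rather than taken from any flight design:
\begin{verbatim}
import numpy as np

# Minimal KID readout sketch: the resonance of an LC circuit shifts
# when absorbed photons increase the kinetic inductance. Component
# values are illustrative only.
L0 = 10e-9       # total inductance [H]
C = 0.25e-12     # capacitance [F]
f0 = 1.0 / (2 * np.pi * np.sqrt(L0 * C))

dL = 1e-13       # photon-induced kinetic-inductance increase [H]
df = -f0 * dL / (2 * L0)   # since f ~ 1/sqrt(L), df/f ~ -dL/(2L)
print(f"f0 = {f0/1e9:.2f} GHz, shift = {df/1e3:.1f} kHz")
\end{verbatim}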
There exist three primary challenges in using KIDs in space-based infrared observatories:
\vspace{0.1cm}
\noindent {\bfseries Sensitivity}: Sub-orbital far-infrared observatories have relatively high backgrounds, and thus sensitivities that are $2-3$ orders of magnitude above those needed for background-limited observations from space. For space-based KID instruments, better sensitivities are needed. The state of the art is from SPACEKIDs, for which NEPs of 3$\times$10$^{-19}\,\rm W\,Hz^{-1/2}$ have been demonstrated in aluminum devices coupled via an antenna \citep{devis14,griff16,Baselmans2016}. This program has also demonstrated 83\% yield in a 961-pixel array cooled to $120\,$mK. A further, important outcome of the SPACEKIDs program was the demonstration that the effects of cosmic ray impacts can be effectively minimised\citep{Baselmans2016,monfabas16}. In the US, the Caltech / JPL group and the SuperSpec collaboration have demonstrated sensitivities below 1$\times$10$^{-18}\,\rm W\,Hz^{-1/2}$ in small-volume titanium nitride devices at $100\,$mK, also with radiation coupled via an antenna.
\vspace{0.1cm}
\noindent {\bfseries Structural considerations}: KIDs must have both small active volume (to increase response to optical power) and a method of absorbing photons directly without using superconducting transmission lines. Options under development include:
\begin{itemize}
\item Devices with small-volume meandered absorbers / inductors, potentially formed via electron-beam lithography for small feature widths.
\item Thinned-substrate devices, in which the KID inductor is patterned on a very thin (micron or sub-micron) membrane which may help increase the effective lifetime of the photo-produced quasiparticles, thereby increasing the response of the device.
\end{itemize}
\vspace{0.1cm}
\noindent {\bfseries Antenna coupling at high frequencies}: While straightforward for the submillimeter band, the antenna coupling becomes non-trivial for frequencies above the superconducting cutoff of the antenna material (e.g., $\sim714\,$GHz for Nb and $1.2\,$THz for NbTiN). To mitigate this, one possible strategy is to integrate the antenna directly into the KID, using only aluminium for the parts of the detector that interact with the THz signal. This approach has been demonstrated at $1.55\,$THz, using a thick aluminium ground plane and a thin aluminium central line to limit ground plane losses to 10\% \citep{bueno2017full,Baselmans2016}. This approach does not rely on superconducting stripline technology and could be extended to frequencies up to $\sim10\,$THz.
\vspace{0.1cm}
A final area of research for KIDs, primarily for CMB experiments, is the KID-sensed bolometer, in which the thermal response of the KID is used to sense the temperature of a bolometer island. These devices will be limited by the fundamental phonon transport sensitivity of the bolometer, so are likely to have sensitivity limits comparable to TES bolometers, but may offer advantages including simplified readout, on-array multiplexing, lower sensitivity to magnetic fields, and larger dynamic range.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_wfig_kids.png}
\caption[Kinetic Inductance Detectors (KIDs).]{Kinetic Inductance Detectors (KIDs). The schematic at left is reprinted from\citep{day03}; (a) Photons are absorbed in a superconducting film operated below its transition temperature, breaking Cooper pairs to create free electrons; (b) The increase in free electron density increases the inductance (via the kinetic inductance effect) of an RF or microwave resonator, depicted schematically here as a parallel LC circuit which is capacitively coupled to a through line. (c) On resonance, the LC circuit loads the through line, producing a dip in its transmission. The increase in inductance moves the resonance to lower frequency ($f\sim 1/\sqrt{L}$), which produces a phase shift (d) of a RF or microwave probe signal transmitted though the circuit. Because the resonators can have high quality factor, many hundreds to thousands can be accessed on a single transmission line. Center shows the 432-pixel KID array in the Caltech / JPL MAKO camera, and right shows an image of SGR B2 obtained with MAKO at the CSO.}
\label{fig:detkid}
\end{figure*}
\subsubsection{Quantum Capacitance Detectors}\label{ssect:qcd}
The Quantum Capacitance Detector (QCD\citep{Shaw09,Bueno10,Bueno11,Stone12,Echternach13}) is based on the Single Cooper Pair Box (SCB), a superconducting device initially developed as a qubit for quantum computing applications. The SCB consists of a small island of superconducting material connected to a ground electrode via a small (100\,nm $\times$ 100\,nm) tunnel junction. The island is biased with respect to ground through a gate capacitor, and because it is sufficiently small to exhibit quantum behaviour, its capacitance becomes a strong function of the presence or absence of a single free electron. By embedding this system capacitively in a resonator (similar to that used for a KID), a single electron entering or exiting the island (via tunneling through the junction) produces a detectable frequency shift.
To make use of this single-electron sensitivity, the QCD is formed by replacing the ground electrode with a superconducting photon absorber. As with KIDs, photons with energy larger than the superconducting gap break Cooper pairs, establishing a density of free electrons in the absorber that then tunnel onto (and rapidly back out of) the island through the tunnel junction. The rate of tunneling into the island, and thus the average electron occupation in the island, is determined by the free-electron density in the absorber, set by the photon flux. Because each photo-produced electron tunnels back and forth many times before it recombines, and because these tunneling events can be detected individually, the system has the potential to be limited by the photon statistics with no additional noise.
This has indeed been demonstrated. QCDs have been developed to the point where a 25-pixel array yields a few devices which are photon noise limited for $200\,\mu$m radiation under a load of 10$^{-19}\,\rm W$, corresponding to a NEP of $2\times10^{-20}\,\rm W Hz^{-1/2}$. The system seems to have good efficiency as well, with inferred detection of 86\% of the expected photon flux for the test setup. As an additional demonstration, a fast-readout mode has been developed which can identify individual photon arrival events based on the subsequent increase in tunneling activity for a timescale on the order of the electron recombination time (Figure \ref{fig:detqcd}).
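These figures are consistent with simple photon statistics; a rough check (shot noise only, neglecting occupation-number and detector-specific terms) is:
\begin{verbatim}
import numpy as np

h, c = 6.626e-34, 3.0e8   # SI units

# Photon shot-noise NEP for a 1e-19 W load at 200 um; detector terms
# (e.g. tunneling statistics) are neglected, so this is a lower bound.
lam = 200e-6
P = 1e-19                 # optical load [W]

nep_shot = np.sqrt(2 * P * h * c / lam)
print(f"shot-noise NEP ~ {nep_shot:.1e} W/sqrt(Hz)")
# ~1.4e-20, within a factor ~sqrt(2) of the quoted 2e-20 W/sqrt(Hz)
\end{verbatim}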
With its demonstrated sensitivity and natural frequency-domain multiplexing, the QCD is promising for future far-infrared space systems. Optical NEPs of below $10^{-20}\,\rm W Hz^{-1/2}$ at $200\,\mu$m have been demonstrated, with the potential for photon counting at far-infrared wavelengths \citep{echter18}. However, QCDs are some way behind both TES and KID arrays in terms of technological maturity. To be viable for infrared instruments, challenges in (1) yield and array-level uniformity, (2) dark currents, and (3) dynamic range must all be overcome. The small tunnel junctions are challenging to fabricate, but it is hoped that advances in lithography and processing will result in improvements.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_wfig_qcd.png}
\vspace{-1cm}
\caption[Quantum capacitance detector]{The Quantum Capacitance Detector (QCD). (a) Schematic representation showing the mesh absorber QCD with its LC resonator coupling it to the readout circuit. The single Cooper-pair box (SCB) island is formed between the tunnel junction and the gate capacitor. The tunnel junction is connected to the mesh absorber which in turn is connected to ground plane. The SCB presents a variable capacitance in parallel with an LC resonator. (b) Optical microscope picture of a device, showing the feedline, the inductor, the interdigitated capacitor, all fabricated in Nb and the Al mesh absorber. (c) SEM picture of mesh absorber consisting of 50nm wide aluminum lines on a $5\,\mu$m pitch grid. (d) Detail of the SCB showing the aluminum island (horizontal line) in close proximity to the lowest finger of the interdigitated capacitor and the tunnel junction (overlap between the island and vertical line connecting to the mesh absorber below). (e) Optical setup schematic showing temperature-tunable blackbody, aperture, and filters which define the spectral band. This device has demonstrated an optical NEP of 2$\times$10$^{-20}\,\rm W\,Hz^{-1/2}$ at $200\,\mu$m, as well as the ability to count individual photons\citep{Echternach13,echter18}.}
\label{fig:detqcd}
\end{figure*}
\subsubsection{System Considerations for Direct Detector Readouts}\label{readschemes}
There exist three commonly used multiplexing (muxing) schemes\citep{ramas09} for readout of arrays: Frequency Domain Muxing (FDM), Time Domain Muxing (TDM), and Code Division Muxing (CDM). In this section we briefly review their applicability and advantages.
FDM is a promising path to reading out the large arrays anticipated in future infrared observatories. In FDM, a single readout circuit services up to $\sim1000$ pixels, each coupled through a micro-resonator tuned to a distinct frequency. Each pixel is then probed individually with an RF or microwave tone at its particular frequency. The warm electronics must create the suite of tones which is transmitted to the array for each circuit, then digitize, Fourier-transform, and channel the output data stream to measure the phase and amplitude shifts of each tone independently. The number of resonators (and thus pixels) that can be arrayed onto a single readout circuit depends on the quality factor (Q) of the resonators and the bandwidth available in the circuit. For micro-resonators patterned in superconducting films, resonator Q's exceeding $10^7$ are possible but more typical values are around $10^5$, which permits approximately $10^3$ pixels per octave of readout bandwidth to be operated with sufficiently low cross-talk.
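The quoted multiplexing density follows from the resonator linewidth; in the sketch below, the per-resonator slot width is our assumption, chosen to reproduce the $\sim10^3$ pixels per octave figure:
\begin{verbatim}
import numpy as np

# Resonators per readout octave, assuming each resonator is allotted
# a slot of k linewidths (f/Q); the value of k is our assumption.
Q = 1e5        # resonator quality factor
k = 70         # slot width per resonator, in linewidths

# Integrating df / (k * f / Q) from f to 2f gives N = (Q/k) * ln 2
n_per_octave = (Q / k) * np.log(2)
print(f"~{n_per_octave:.0f} resonators per octave")   # ~10^3
# Each resonator occupies only ~1 linewidth of its slot, so just a
# few percent of the readout bandwidth is actually used.
\end{verbatim}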
In these systems, all of the challenging electronics are on the warm side, and the detector array is accessed via low-loss RF / microwave lines (one from the warm side down through the cryostat stages, another for the return signal). Moreover, FDM readout schemes can be applied to both TES and KID arrays, while other multiplexing schemes are TES-only. An example of recent progress is the development of an FDM scheme that can read out 132 TES pixels simultaneously, using a single SQUID, without loss of sensitivity \citep{hijm16}. This is very close to the 160 detectors per SQUID targeted for SPICA/SAFARI.
There are, however, limitations to FDM schemes:
\begin{enumerate}
\item {\bfseries Thermal constraints:} While the detector arrays themselves are essentially passive, the conductors, whether coaxial or twisted pair, will have thermal conduction from the warm stages, impacting the overall thermal design. Additionally these systems require a single low-noise amplifier (LNA) on each circuit, likely deployed somewhere between 4\,K and 20\,K, and the LNAs will have some dissipation.
\item {\bfseries Signal processing:} FDM schemes pose significant challenges for backend electronics processing capability: they must digitize the returning waveforms, then Fourier transform in real time at the science sampling rate and extract the full array of tone phases which encode the pixel signal levels. These hurdles become non-trivial for the large arrays envisaged for future missions.
\end{enumerate}
A further challenge, which applies to readout schemes for any far-infrared resonant detector array (including TES, KID, and QCD systems), is the power required to read out arrays of $10^{4-5}$ detectors, due in part to the signal processing requirements. The power requirements are such that they may pose a significant obstacle to reading out $\sim10^5$-pixel detector arrays on {\it any} balloon- or space-based platform.
For the OST, power dissipation in the warm electronics will be a particular challenge. An example is the medium-resolution survey spectrometer (MRSS), which targets 200,000 pixels among all six spectrometer bands. The concept assumes resonator frequencies between 75\,MHz and 1\,GHz, and that $1500$ pixels can be arrayed in this bandwidth (a relatively comfortable multiplexing density assuming $400$ per readout octave). This requires 130 readout circuits, each with two coaxial lines all the way to the cold stage, and a cold amplifier on the output. The conducted loads through the coaxial lines, as well as reasonable assumptions about the LNA dissipation (1\,mW at 4\,K plus 3\,mW at 20\,K for each circuit), do not stress the observatory thermal design. However, the electronics for each circuit require a 2 giga-sample per second analog to digital converter (ADC) working at $\sim$12 bits depth, followed by FFTs of this digital signal stream in real time: 1024-point FFTs every $0.5\,\mu$s. Systems such as these implemented in FPGAs in the laboratory dissipate $\sim$100\,W for each readout circuit, which is not compatible with having 130 such systems on a space mission.
For these reasons, development of muxing schemes is a high priority for large-format arrays, irrespective of the detector technology used. A promising path for such development is to employ a dedicated application specific integrated circuit (ASIC), designed to combine the digitization, FFT, and tone extraction in a single chip. Power dissipation estimates obtained for the MRSS study based on custom spectrometer chips developed for flight systems, and extrapolating to small-gate CMOS technology, suggest that such a custom chip could have a power dissipation of $\sim$14\,W per circuit, including all aspects. At this level, the total scales to $\sim1.8$\,kW. This power dissipation is well within the range of that of other sub-systems on future missions - for example, such missions will require several kW to operate the cryocoolers - and thus does not pose a unique problem.
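The arithmetic behind this example is easily reproduced; all numbers in the sketch below are taken from, or rounded consistently with, the figures quoted above:
\begin{verbatim}
# Scaling check for the MRSS-like readout example discussed above.
n_pixels = 200_000
pixels_per_circuit = 1_500
n_circuits = -(-n_pixels // pixels_per_circuit)   # ceil -> 134 (~130)

adc_rate = 2e9     # samples/s per circuit
fft_len = 1024
print(f"{n_circuits} circuits; one {fft_len}-point FFT "
      f"every {fft_len / adc_rate * 1e6:.2f} us")

for label, watts in (("lab FPGA", 100.0), ("custom ASIC", 14.0)):
    print(f"{label}: ~{n_circuits * watts / 1e3:.1f} kW total")
# -> ~13 kW with lab FPGAs versus ~1.9 kW with a dedicated ASIC
\end{verbatim}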
\vspace{0.1cm}
\noindent Finally, we make four observations:
\vspace{0.2cm}
\noindent (1) While the power scaling calculations are straightforward, the development of this silicon ASIC is a substantial design effort, in large part because of the 12-bit depth; most fast digital spectrometers implemented in CMOS operate at 3 or 4 bits depth.
\vspace{0.1cm}
\noindent (2) The power dissipation scales as the total bandwidth, so the per-pixel electronics power dissipation could be reduced if lower resonant frequencies were used. The downside of this though is that the physical size of the resonators scales approximately as $1/\sqrt{f}$, and (with current designs) becomes several square millimeters per resonator for frequencies below $\sim50\,$MHz.
\vspace{0.2cm}
\noindent (3) Hybrid schemes, such as combining CDM with frequency domain readout, are attractive for their power efficiency, both at 4\,K due to the lower number of high electron mobility transistors (HEMTs) or parametric amplifiers, and for the warm electronics due to lower bandwidths and lower wire counts. These schemes however are only applicable to TES-based systems.
\vspace{0.2cm}
\noindent (4) With $Q = 10^5$ and 1000 resonators per octave, the FDM scheme utilizes only a few percent of readout bandwidth. Factors of 10 or more improvement in multiplexing density and reduction in readout power are possible if the resonator frequency placement could be improved to avoid collisions, e.g. through post-fabrication trimming\footnote{Post-fabrication trimming (PFT) is a family of techniques that permanently alter the refractive index of a material to change the optical path length\citep{spara05,schrau08,atab13}. The advantage of PFT is that it does not require complex control electronics, but concerns have been raised over the long-term stability of some of the trimming mechanisms.}.
\subsection{Medium-resolution spectroscopy}\label{ssect:medresspec}
A variety of spectrometer architectures can be used to disperse light at far-infrared wavelengths. Architectures that have been successfully used on air-borne and space instruments include grating dispersion like FIFI-LS on SOFIA \citep{Klein2006} and PACS on \textit{Herschel} \citep{pog10}, Fourier Transform spectrometers like the \textit{Herschel}/SPIRE-FTS \citep{gri10}, and Fabry-Perot etalons like FIFI on the KAO telescope \citep{Poglitsch1990}. These technologies are well understood and can achieve spectral resolutions of $R = 10^2 - 10^4$. However, future spectrometers will need to couple large FoVs to many thousands of imaging detectors, a task for which all three of these technologies have drawbacks. Grating spectrometers are mechanically simple devices that can achieve $R \sim 1000$, but are challenging to couple to wide FoVs since the spectrum is dispersed along one spatial direction on the detector array. FTS systems require moving parts and suffer from noise penalties associated with the need for spectral scanning. They are also not well-suited to studies of faint objects because of systematics associated with long-term stability of the interferometer and detectors \citep{zmuid03}. Fabry-Perot systems are also mechanically demanding, requiring tight parallelism tolerances of mirror surfaces, and typically have restricted free spectral range due to the difficulty of manufacturing sufficiently precise actuation mechanisms \citep{Parshley2014}. A new technology that can couple the large FoVs anticipated in next-generation far-infrared telescopes to kilo-pixel or larger detector arrays would be transformative for far-infrared spectroscopy.
A promising approach to this problem is far-infrared filter bank technology\citep{Kovacs2012,Wheeler2016}. This technology has been developed as a compact solution to the spectral dispersion problem, and has potential for use in space. These devices require the radiation to be dispersed to propagate down a transmission line or waveguide. The radiation encounters a series of tuned resonant filters, each of which consists of a section of transmission line of length $\lambda_{i}/2$, where $\lambda_{i}$ is the resonant wavelength of channel $i$. These half-wave resonators are evanescently coupled to the feedline with designable coupling strengths described by the quality factors $Q_{\rm feed}$ and $Q_{\rm det}$ for the feedline and detector, respectively. The filter bank is formed by arranging a series of channels monotonically increasing in frequency, with a spacing between channels equal to an odd multiple of $\lambda_{i}/4$. The ultimate spectral resolution $R=\lambda / \Delta \lambda$ is given by:
\begin{equation}
\frac{1}{R} = \frac{1}{Q_{\rm filt}} = \frac{1}{Q_{\rm feed}} +
\frac{1}{Q_{\rm det}} + \frac{1}{Q_{\rm loss}},
\end{equation}
\noindent where $Q_{\rm loss}$ accounts for any additional sources of dissipation in the circuit and $Q_{\rm filt}$ is the net quality factor. This arrangement has several advantages in low and medium-resolution spectroscopy from space, including: (1) compactness (fitting on a single chip with area of tens of square cm), (2) integrated on-chip dispersion and detection, (3) high end-to-end efficiency equal to or exceeding existing technologies, and (4) a mechanically stable architecture. A further advantage of this architecture is the low intrinsic background in each spectrometer, which only couples to wavelengths near its resonance. This means that very low backgrounds can be achieved, requiring detector NEPs below $10^{-20}$ W Hz$^{-1/2}$. Filter banks do however have drawbacks\citep{Kovacs2012}. For example, while filter banks are used in instruments operating from millimeter to radio wavelengths, they are currently difficult to manufacture for use at wavelengths shortward of about 500\,$\mu$m.
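In practice the net resolving power is set by the designed coupling strengths; a minimal sketch, with hypothetical but representative $Q$ values for an $R\sim300$ design such as SuperSpec:
\begin{verbatim}
# Net resolving power of one filter-bank channel from its quality
# factors (equation above). The Q values here are hypothetical.
def filter_bank_R(Q_feed, Q_det, Q_loss=float("inf")):
    return 1.0 / (1.0 / Q_feed + 1.0 / Q_det + 1.0 / Q_loss)

print(filter_bank_R(Q_feed=600, Q_det=600, Q_loss=1e4))   # ~291
\end{verbatim}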
Two ground-based instruments are being developed that make use of filter banks. A prototype transmission-line system has been fabricated for use in SuperSpec \citep{Shirokoff2012,SHD2014} for the LMT. SuperSpec will have $R \sim 300$ near 250\,GHz and will allow photon-background limited performance. A similar system is WSPEC, a 90\,GHz filter bank spectrometer that uses machined waveguide to propagate the radiation \citep{Che2015}. This prototype instrument has 5 channels covering the $130 {-} 250$\,GHz band. Though neither instrument is optimized for space applications, this technology can be adapted to space, and efforts are underway to deploy it on sub-orbital rockets.
\subsection{High-resolution spectroscopy}\label{ssect:het}
Several areas of investigation in mid/far-infrared astronomy call for spectral resolution of $R\geq10^{5}$, higher than can be achieved with direct detection approaches. At this very high spectral resolution, heterodyne spectroscopy is routinely used \citep{Schieder2008,golds17}, with achievable spectral resolution of up to $R\simeq10^{7}$. In heterodyne spectroscopy, the signal from the ``sky'' source is mixed with a spectrally-pure, large-amplitude, locally-generated signal, called the ``Local Oscillator (LO)'', in a nonlinear device. The nonlinearity generates the sum and difference of the sky and LO frequencies. The latter, the ``Intermediate Frequency (IF)'', is typically in the $1-10$\,GHz range, and can be amplified by low-noise amplifiers and subsequently sent to a spectrometer, which is now generally implemented as a digital signal processor. A heterodyne receiver is a coherent system, preserving the phase and amplitude of the input signal. While the phase information is not used for spectroscopy, it is available and can be used for e.g. interferometry.
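The frequency bookkeeping is straightforward; a short illustration (the LO frequency and source redshift below are hypothetical):
\begin{verbatim}
# Downconverting a (redshifted) [CII] 158 um line into a 1-10 GHz IF
# band. The LO frequency and source redshift are hypothetical.
nu_rest = 1900.537e9    # [CII] rest frequency [Hz]
z = 0.002               # source redshift
nu_sky = nu_rest / (1.0 + z)

nu_lo = 1892.0e9        # local oscillator frequency [Hz]
nu_if = abs(nu_sky - nu_lo)
print(f"sky: {nu_sky/1e9:.1f} GHz -> IF: {nu_if/1e9:.2f} GHz")
\end{verbatim}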
The general requirements for LOs are as follows: narrow linewidth, high stability, low noise, tunability over the required frequency range, and sufficient output power to couple effectively to the mixer. For far-infrared applications, LO technologies are usually one of two types: multiplier chain, and Quantum Cascade Laser (QCL). Multiplier chains offer relatively broad tuning, high spectral purity, and known output frequency. The main limitation is reaching higher frequencies ($>3\,$THz). QCLs are attractive at higher frequencies, as their operating frequency range extends to 5\,THz and above, opening up the entire far-infrared range for high resolution spectroscopy.
For mixers, most astronomical applications use one or more of three technologies: Schottky diodes, Superconductor-Insulator-Superconductor (SIS) mixers, and Hot Electron Bolometer (HEB) mixers\citep{klapw17}. Schottky diodes function at temperatures of $>70\,$K, can operate at frequencies as high as $\sim3\,$THz ($100\,\mu$m), and provide large IF bandwidths of $>8\,$GHz, but offer sensitivities that can be an order of magnitude or more poorer than either SIS or HEB mixers. They also require relatively high LO power, of order 1\,mW. SIS and HEB mixers, in contrast, have operating temperatures of $\sim4\,$K and require LO powers of only $\sim1\,\mu$W. SIS mixers are most commonly used at frequencies up to about 1\,THz, while HEB mixers are used over the 1-6\,THz range. At present, SIS mixers offer IF bandwidths and sensitivities both a factor of 2-3 better than HEB mixers. All three mixer types have been used on space-flown hardware: SIS and HEB mixers in the {\itshape Herschel} HIFI instrument\citep{deg10,roelf12}, and Schottky diodes on instruments in SWAS and Odin.
Heterodyne spectroscopy can currently achieve spectral resolutions of $R\simeq10^{7}$, and in principle the achievable spectral resolution is limited only by the purity of the signal from the LO. Moreover, heterodyne spectroscopy preserves the phase of the sky signal as well as its frequency, lending itself naturally to interferometric applications. Heterodyne arrays are used on SOFIA, as well as many ground-based platforms. They are also planned for use in several upcoming observatories, including GUSTO. A further example is FIRSPEX, a concept study for a small-aperture telescope with heterodyne instruments to perform several large-area surveys targeting bright far-infrared fine-structure lines, using a scanning strategy similar to that used by {\itshape Planck}\citep{rig16}.
There are however challenges for the heterodyne approach. We highlight five here:
\begin{itemize}
\item {\bfseries The antenna theorem:} Coherent systems are subject to the antenna theorem that allows them to couple to only a single spatial mode of the electromagnetic field. The result is that the product of the solid angle subtended by the beam of a heterodyne receiver system ($\Omega$) and its collecting area for a normally incident plane wave ($A_e$) is fixed: $A_e\Omega = \lambda^2$ \citep{gold02} (see the numerical sketch after this list).
\item {\bfseries The quantum noise limit:} A heterodyne receiver, being a coherent system, is subject to the quantum noise limit on its input noise temperature, $T \ge hf/k$ (e.g.,~\citep{zmuid03}); this limit is also evaluated in the sketch after this list. While SIS mixers have noise temperatures only a few times greater than the quantum noise limit, HEB mixer receivers typically have noise temperatures $\sim10$ times the quantum noise limit, e.g.\ $\sim910$\,K ($10\times91$\,K) at $f = 1900$\,GHz. Improved sensitivity for HEB mixers, and for SIS mixers operating at higher frequencies, will offer significant gains in astronomical productivity.
\item {\bfseries Limited bandwidth:} There is a pressing need to increase the IF bandwidth of HEB mixers, with a minimum of 8\,GHz bandwidth required at frequencies of $\sim3$\,THz. This will allow for complete coverage of Galactic spectral lines with a single LO setting, as well as the lines of nearby galaxies. Simultaneous observation of multiple lines also becomes possible, improving both efficiency and relative calibration accuracy.
\item {\bfseries Array size:} The largest arrays currently deployed (such as in upGREAT on SOFIA) contain fewer than 20 pixels, although a 64-pixel ground-based array operating at 850\,$\mu$m has been constructed \citep{groppi10}. Increasing array sizes to hundreds or even thousands of pixels will require SIS and HEB mixers that can be reliably integrated into these new large-format arrays, low-power IF amplifiers, and efficient distribution of LO power.
\item {\bfseries Power requirements:} Existing technology typically demands significantly more power per pixel than is available for large-format arrays on satellite-based platforms.
\end{itemize}
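The first two items above can be made quantitative with a short sketch; the 2.5\,m aperture and 150\,$\mu$m wavelength are assumed purely for illustration:
\begin{verbatim}
import math

h, k = 6.626e-34, 1.381e-23     # Planck and Boltzmann constants (SI)

# Antenna theorem: A_e * Omega = lambda**2 for a single-mode receiver.
lam = 150e-6                     # wavelength (m), illustrative
a_e = math.pi * (2.5 / 2.0)**2   # idealized collecting area, 2.5 m dish
omega = lam**2 / a_e             # beam solid angle (sr)
print(omega)                     # ~4.6e-9 sr: a single spatial mode

# Quantum noise limit on input noise temperature: T >= h*f/k.
f = 1900e9                       # Hz
print(h * f / k)                 # ~91 K quantum limit at 1900 GHz
print(10 * h * f / k)            # ~910 K, typical of current HEB receivers
\end{verbatim}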
On a final note: for the higher frequency ($>3$\,THz) arrays, high-power ($5-10$\,mW) QCL LOs are a priority for development, along with power division schemes (e.g., Fourier phase gratings) to utilize QCLs effectively \citep{hayton20144,richter20154,richter2015performance}. At $<3\,$THz, frequency-multiplied sources remain the system of choice, and have been successfully used in missions including SWAS, {\itshape Herschel}-HIFI, STO2, and in GREAT and upGREAT on SOFIA. However, to support large-format heterodyne arrays, and to allow operation with reduced total power consumption for space missions, further development of this technology is necessary. Further valuable developments include SIS and HEB mixers that can operate at temperatures of $>20$\,K, and integrated focal planes of mixers and low-noise IF amplifiers.
\subsection{Fabry-Perot Interferometry}\label{ssect:fabry}
Fabry-Perot Interferometers (FPIs) have been used for astronomical spectroscopy for decades, with examples such as FIFI\citep{pogli91}, KWIC\citep{stacey93}, ISO-SWS/LWS\citep{degraa96,clegg96}, and SPIFI\citep{bradford02}. FPIs similar to the one used in ISO have also been developed for balloon-borne telescopes\citep{pepe94}.
FPIs consist of two parallel, highly reflective (typically with reflectivities of $\sim96\%$), very flat mirror surfaces. These two mirrors create a resonator cavity. Any radiation for which twice the mirror separation is an integral multiple of the wavelength meets the condition for constructive interference and passes the FPI with high transmission. Since the radiation bounces many times between the mirrors before passing, FPIs can be fabricated very compactly, even for high spectral resolution, making them attractive for many applications. In addition, FPIs allow for large FoVs, making them an excellent choice as devices for spectroscopic survey instruments.
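A short sketch of the resonance condition and the resulting finesse may be useful; the mirror separation is arbitrary, and the finesse expression is the textbook formula for ideal mirrors rather than a property of any instrument discussed here:
\begin{verbatim}
import math

d = 100e-6                 # mirror separation (m), illustrative
for m in (1, 2, 3, 4):     # resonance orders: m * lam = 2 * d
    print(m, 2 * d / m)    # 200, 100, 66.7, 50 um all transmitted

r = 0.96                   # mirror reflectivity
print(math.pi * math.sqrt(r) / (1.0 - r))   # ideal finesse, ~77
\end{verbatim}
Note that several orders satisfy the resonance condition simultaneously, which is the origin of the order-sorting problem discussed below.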
Observations with FPIs are most suitable for extended objects and surveys of large fields, where moderate to high spectral resolution ($R\sim10^2 - 10^5$) is required. For example:
\begin{itemize}
\item Mapping nearby galaxies in multiple molecular transitions and atomic or ionic fine-structure lines in the far-infrared. This traces the properties of the interstellar medium, and relates small-scale effects like star-forming regions to the larger-scale environment of their host galaxies.
\item For high-redshift observations, FPIs are well suited to surveying large fields and obtaining a 3D data cube by stepping a given emission line through a sequence of redshift bins. This results in line detections from objects located in the corresponding redshift bins and allows, e.g., probing ionization conditions or metallicities for large samples simultaneously.
\end{itemize}
FPIs do however face challenges. We highlight four examples:
\vspace{0.2cm}
\noindent (1) To cover a certain bandwidth, the FPI mirror separation has to be continuously or discretely changed, i.e. the FPI has to be scanned, which requires time, and may result in poor channel-to-channel calibration in the spectral direction if the detector system is not sufficiently stable.
\vspace{0.2cm}
\noindent (2) Unwanted wavelengths that fulfill the resonance criteria also pass through the FPI and need to be filtered out. Usually, additional FPIs operated in a lower order, combined with band-pass or blocking/edge filters, are used for order sorting. However, since most other spectrometers need additional filters to remove undesired bands, the filtering of unwanted orders in FPIs is not a serious disadvantage.
\vspace{0.2cm}
\noindent (3) In current far-infrared FPIs, the reflective components used for the mirrors are free-standing metal meshes. The finesse\footnote{The free spectral range divided by the FWHM of individual resonances, see e.g.\citep{Ismail16}.} of the meshes changes with wavelength and therefore an FPI is only suitable over a limited wavelength range. Also, the meshes can vibrate, which requires special attention especially for high spectral resolution, where the diameters can be large. Replacing the free-standing metal meshes with a different technology is therefore enabling for broader applications of FPIs. For example, flat silicon wafers with an anti-reflection structure etched on one side and the other side coated with a specific thin metal pattern, optimized for a broader wavelength range, can substitute for a mirror. This silicon wafer mirror is also less susceptible to vibrations and could be fabricated with large enough diameters.
\vspace{0.2cm}
\noindent (4) Improved cryogenic scanning devices are needed. Currently, FPIs usually use piezoelectric elements (PZTs) for scanning. However, PZTs have limited travel range, especially at 4\,K. Moreover, mechanical devices or PZT-driven motors are still not reliable enough at cryogenic temperatures, or too large to be used in the spaces available inside the instruments. It is thus important to develop either smaller PZT-driven devices which can travel millimeters with resolutions of nanometers at a temperature of 4\,K, or an alternative scanning technology that overcomes the limitations of PZT devices and satisfies the requirements of FPIs.
\begin{table*}
\begin{threeparttable}[b]
\caption[On-orbit Mechanical cryocooler lifetimes]{Long-life space cryocooler operating experiences as of May 2016.} \label{tab:coolers}
{\scriptsize
\begin{tabular}{cr|ccl}
\hline
\hline
\multicolumn{2}{l}{Cooler, Mission, \& Manufacturer} & T (K) & Hours/unit & Notes \\
\hline
\hline
\multicolumn{2}{|l|}{\cellcolor{gray!30}{\bfseries Turbo Brayton}} & & & \\
\multicolumn{2}{r} {International Space Station - MELFI (Air Liquide)} & 190 & 85,600 & Turn-on 7/06, ongoing, no degradation \\
\multicolumn{2}{r} {HST - NICMOS (Creare)} & 77 & 57,000 & 3/02 thru 10/09, off, coupling to load failed \\
\multicolumn{2}{|l|}{\cellcolor{gray!30}{\bfseries Stirling}} & & & \\
\multicolumn{2}{r} {HIRDLS: 1-stage (Ball Aerospace)} & 60 & 83,800 & 8/04 thru 3/14, instrument failed 03/08, data turned off 3/14 \\
\multicolumn{2}{r} {TIRS: 2-stage (Ball Aerospace)} & 35 & 27,900 & Turn-on 6/13, ongoing, no degradation \\
\multicolumn{2}{r} {ASTER-TIR (Fujitsu)} & 80 & 141,700 & Turn-on 3/00, ongoing, no degradation \\
\multicolumn{2}{r} {ATSR-1 on ERS-1 (RAL)} & 80 & 75,300 & 7/91 thru 3/00, satellite failed \\
\multicolumn{2}{r} {ATSR-2 on ERS-2 (RAL)} & 80 & 112,000 & 4/95 thru 2/08, instrument failed \\
\multicolumn{2}{r} {Suzaku: one stage (Sumitomo)} & 100 & 59,300 & 7/05 thru 4/12, mission end, no degradation \\
\multicolumn{2}{r} {SELENE/Kaguya GRS: one stage (Sumitomo)} & 70 & 14,600 & 10/07 thru 6/09, mission end, no degradation \\
\multicolumn{2}{r} {\cellcolor{green!15}AKARI: two stage (Sumitomo)} &\cellcolor{green!15} 20 &\cellcolor{green!15} 39,000 &\cellcolor{green!15} 2/06 thru 11/11, mission end \\
\multicolumn{2}{r} {RHESSI (Sunpower)} & 80 & 124,600 & Turn-on 2/02, ongoing, modest degradation \\
\multicolumn{2}{r} {CHIRP (Sunpower)} & 80 & 19,700 & 9/11 thru 12/13, mission end, no degradation \\
\multicolumn{2}{r} {ASTER-SWIR (Mitsubishi)} & 77 & 137,500 & Turn-on 3/00, ongoing, load off at 71,000 hours \\
\multicolumn{2}{r} {ISAMS (Oxford/RAL)} & 80 & 15,800 & 10/91 thru 7/92, instrument failed \\
\multicolumn{2}{r} {HTSSE-2 (Northrop Grumman)} & 80 & 24,000 & 3/99 thru 3/02, mission end, no degradation \\
\multicolumn{2}{r} {HTSSE-2 (BAe)} & 80 & 24,000 & 3/99 thru 3/02, mission end, no degradation \\
\multicolumn{2}{r} {MOPITT (BAe)} & 50-80 & 138,600 & Turn-on 3/00, lost one disp. at 10,300 hours \\
\multicolumn{2}{r} {\cellcolor{green!15}Odin (Astrium)} &\cellcolor{green!15} 50-80 &\cellcolor{green!15} 132,600 &\cellcolor{green!15} Turn-on 3/01, ongoing, no degradation \\
\multicolumn{2}{r} {Envisat: AATSR \& MIPAS (Astrium)} & 50-80 & 88,200 & 3/02 thru 4/12, no degradation, satellite failed \\
\multicolumn{2}{r} {INTEGRAL (Astrium)} & 50-80 & 118,700 & Turn-on 10/02, ongoing, no degradation \\
\multicolumn{2}{r} {Helios 2A (Astrium)} & 50-80 & 96,600 & Turn-on 4/05, ongoing, no degradation \\
\multicolumn{2}{r} {Helios 2B (Astrium)} & 50-80 & 58,800 & Turn-on 4/10, ongoing, no degradation \\
\multicolumn{2}{r} {SLSTR (Airbus)} & 50-80 & 1,400 & Turn-on 3/16, ongoing, no degradation \\
\multicolumn{2}{|l|}{\cellcolor{gray!30}{\bfseries Pulse-Tube}} & & & \\
\multicolumn{2}{r} {CX (Northrop Grumman)} & 150 & 161,600 & Turn-on 2/98, ongoing, no degradation \\
\multicolumn{2}{r} {MTI (Northrop Grumman)} & 60 & 141,600 & Turn-on 3/00, ongoing, no degradation \\
\multicolumn{2}{r} {Hyperion (Northrop Grumman)} & 110 & 133,600 & Turn-on 12/00, ongoing, no degradation \\
\multicolumn{2}{r} {SABER on TIMED (Northrop Grumman)} & 75 & 129,600 & Turn-on 1/02, ongoing, no degradation \\
\multicolumn{2}{r} {AIRS (Northrop Grumman)} & 55 & 121,600 & Turn-on 6/02, ongoing, no degradation \\
\multicolumn{2}{r} {TES (Northrop Grumman)} & 60 & 102,600 & Turn-on 8/04, ongoing, no degradation \\
\multicolumn{2}{r} {JAMI (Northrop Grumman)} & 65 & 91,000 & 4/05 thru 12/15, mission end, no degradation \\
\multicolumn{2}{r} {IBUKI/GOSAT (Northrop Grumman)} & 65 & 63,300 & Turn-on 2/09, ongoing, no degradation \\
\multicolumn{2}{r} {OCO-2 (Northrop Grumman)} & 110 & 14,900 & Turn-on 8/14, ongoing, no degradation \\
\multicolumn{2}{r} {Himawari-8 (Northrop Grumman)} & 65 & 12,800 & Turn-on 12/14, ongoing, no degradation \\
\multicolumn{2}{|l|}{\cellcolor{gray!30}{\bfseries Joule-Thomson}} & & & \\
\multicolumn{2}{r} {International Space Station - SMILES (Sumitomo)} & 4.5 & 4,500 & 10/09 thru 04/10, instrument failed \\
\multicolumn{2}{r} {\cellcolor{green!15}{\itshape Planck} (RAL/ESA)} &\cellcolor{green!15} 4 &\cellcolor{green!15} 38,500 &\cellcolor{green!15} 5/09 thru 10/13, mission end, no degradation \\
\multicolumn{2}{r} {\cellcolor{green!15}{\itshape Planck} (JPL)} &\cellcolor{green!15} 18 &\cellcolor{green!15} 27,500 &\cellcolor{green!15} FM1: 8/10-10/13 (EOM), FM2: failed at 10,500 hours \\
\hline
\hline
\end{tabular}
}
\begin{tablenotes}
{\footnotesize
\item Almost all cryocoolers have continued to operate normally until turned off at end of instrument life. Mid/far-infrared \& CMB astrophysics observatories are highlighted in green. The data in this table are courtesy of Ron Ross, Jr. }
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Small and Low-Power Coolers}\label{ssect:cryoc}
For any spaceborne observatory operating at mid/far-infrared wavelengths, achieving high sensitivity requires that the telescope, instrument, and detectors be cooled, with the level of cooling dependent on the detector technology, the observation wavelength, and the goals of the observations. Cooling technology is thus fundamentally enabling for all aspects of mid/far-infrared astronomy.
The cooling required for the telescope depends on the wavelengths being observed (Figure \ref{fig:backgr}). For some situations, cooling the telescope to $30-40$\,K is sufficient. At these temperatures it is feasible to use radiative (passive) cooling solutions if the telescope is space-based, {\itshape and} if the spacecraft orbit and attitude allow for a continuous view of deep space \citep{haw92}. Radiative coolers typically resemble a set of thermal/solar shields in front of a black radiator to deep space (Figure \ref{fig:launchir}). This is a mature technology, having been used on {\itshape Spitzer}, {\itshape Planck}, and JWST (for an earlier proposed example, see\citep{thron95}).
For many applications however, cooling the telescope to a few tens of kelvins is sub-optimal. Instead, cooling to of order 4\,K is required for, e.g., zodiacal-background-limited observations (see also \S\ref{spaceobs}). Moreover, detector arrays require cooling to at least this level. For example, SIS and HEB mixers need cooling to 4\,K, while TES, KID, and QCD arrays need cooling to 0.1\,K or below. Achieving cooling at these temperatures requires a cooling chain: a staged series of cooling technologies selected to maximize the cooling per mass and per input power.
To achieve temperatures below $\sim40$\,K, or where a continuous view of deep space is not available, cryocoolers are necessary. In this context, the Advanced Cryocooler Technology Development Program (ACTDP\citep{ross06}), initiated in 2001, has made excellent progress in developing cryogen-free multi-year cooling for low-noise detector arrays at temperatures of 6\,K and below (Figure \ref{fig:cryod}). The state-of-the-art for these coolers includes those on board {\itshape Planck}, JWST, and {\itshape Hitomi}\citep{shirron16hit}. Similar coolers that could achieve 4\,K are at TRL 4-5, having been demonstrated as a system in a laboratory environment\citep{ross04}, or as a variant of a cooler that has a high TRL (JWST/MIRI). Mechanical cryocoolers for higher temperatures have already demonstrated impressive on-orbit reliability (Table \ref{tab:coolers}). The moving components of a 4\,K cooler are similar (expanders) or the same (compressors) as those that have flown. Further development of these coolers to maximize cooling per input power for small cooling loads ($<100$\,mW at 4\,K) and lower mass is however needed. There is also a need to minimize the vibration from the cooler system. The miniature reverse Brayton cryocoolers in development by Creare are examples of reliable coolers with negligible exported vibration. These coolers are at TRL 6 for 80\,K and TRL 4 for 10\,K operation.
For cooling to below 0.1\,K, adiabatic demagnetization refrigerators (ADRs) are currently the only proven technology, although work has been funded by ESA to develop a continuously recirculating dilution refrigerator. A single-shot dilution refrigerator (DR) was flown on {\itshape Planck}, producing $0.1\,\mu$W of cooling at $100$\,mK for about 1.5 years, while a three-stage ADR was used on {\itshape Hitomi}, producing $0.4\,\mu$W of cooling at 50\,mK with an indefinite lifetime. In contrast, a TRL 4 continuous ADR (CADR) has demonstrated $6\,\mu$W of cooling at $50$\,mK with no life-limiting parts\citep{shirron01} (Figure \ref{fig:cryoe}). This technology is being advanced toward TRL 6 by 2020 via funding from the NASA SAT/TPCOS program\citep{tuttle17}. Demonstration of a 10\,K upper stage for this machine, as is planned, would enable coupling to a higher temperature cryocooler, such as that of Creare, that has near-zero vibration. The flight control electronics for this ADR are based on the flight-proven {\itshape Hitomi} ADR control, and have already achieved TRL 6. ADR coolers are the current reference design for the {\itshape Athena} X-ray observatory. For the OST, all three of the above technologies are required to maintain the telescope near 4\,K and the detector arrays near 50\,mK.
Continued development of 0.1\,K and $4\,$K coolers with cooling powers of tens of mW, high reliability, and lifetimes of 10+ years is of great importance for future far-infrared observatories. Moreover, the development of smaller, lighter, vibration-resistant, power-efficient cryocoolers enables expansion of infrared astronomy to new observing platforms. An extremely challenging goal would be the development of a 0.1\,K cooler with power, space, and vibration envelopes that enable its use inside a 6U CubeSat, while leaving adequate resources for detector arrays, optics, and downlink systems (see also \S\ref{ssect:cube}). More generally, the ubiquity of cooling in infrared astronomy means that development of low-mass, low-power, and low-cost coolers will reduce mission costs and development time across all observational domains.
\begin{figure*}
\includegraphics[width=16cm,angle=0]{afig_cryo_fig4.png}
\caption[Three cryocoolers for 6\,K cooling]{Three cryocoolers for 6\,K cooling developed through the Advanced Cryocooler Technology Development Program (ACTDP).}
\label{fig:cryod}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm,angle=0]{afig_cryo_fig5.png}
\caption[The Continuous Adiabatic Demagnetization Refrigerator (CADR) under development at GSFC]{The Continuous Adiabatic Demagnetization Refrigerator (CADR) under development at NASA GSFC. This will provide $6\,\mu$W of cooling at 50\,mK. It also has a precooling stage that can be operated from 0.3 to 1.5\,K. The picture also shows a notional enclosing magnetic shield for a $<1\,\mu$T fringing field.}
\label{fig:cryoe}
\end{figure}
\subsection{High Surface Accuracy Lightweight Mirrors}\label{ssect:mirr}
As far-infrared observing platforms mature and develop, there emerge new opportunities to use large aperture mirrors for which the only limitations are (1) mirror mass, and (2) approaches to active control and correction of the mirror surface. This raises the possibility of a high-altitude, long-duration far-infrared observing platform with a mirror 2-5 times larger than on facilities such as SOFIA or {\itshape Herschel}.
The key enabling technology for such an observing platform is the manufacture of lightweight, high surface accuracy mirrors, and their integration into observing platforms. This is especially relevant for ULDBs, which are well-suited to this activity. Lightweight mirrors with apertures of three meters to several tens of meters are ideal for observations from balloon-borne platforms. Carbon-fiber mirrors are an attractive option; they are low mass and can offer high sensitivity in the far-infrared, at low cost of manufacture. Apertures of 2.5\,m are used on projects such as BLAST-TNG\citep{galitzki2014next}. Apertures of up to $\sim$10-m are undergoing ground-based tests, including the phase 2 NIAC study for the Large Balloon Reflector \citep{walk14,less15,cort16}.
A conceptually related topic is the physical size and mass of optical components. The physical scale of high resolution spectrometers in the far-infrared is determined by the optical path difference required for the resolution. For resolutions of $R\gtrsim10^5$, this implies scales of several meters for a grating spectrometer. This scale can be reduced by folding, but mass remains a potentially limiting problem. Moreover, larger physical sizes are needed for optical components to accommodate future large format arrays, posing challenges for uniformity, thermal control, and antireflection coatings. The development of low-mass optical elements suitable for diffraction limited operation at $\lambda \geq 25\,\mu$m would open the range of technical solutions available for the highest performance instruments.
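The quoted scale follows from requiring an optical path difference of order $R\lambda$; a one-line check, with an assumed far-infrared wavelength, reads:
\begin{verbatim}
# Optical path difference needed for resolving power R: OPD ~ R * lambda.
R, lam = 1e5, 60e-6    # resolving power and wavelength (m), illustrative
print(R * lam)         # -> 6.0 m, hence "scales of several meters"
\end{verbatim}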
\subsection{Other Needs}\label{ssect:general}
There exist several further areas for which technology development would be beneficial. We briefly summarize them below:
\vspace{0.2cm}
\noindent {\bfseries Lower-loss THz optics:} lenses, polarizers, filters, and duplexers.
\vspace{0.1cm}
\noindent {\bfseries Digital backends:} Low-power (of order a few watts or less) digital backends with $>1000$ channels covering up to several tens of GHz of bandwidth.
\vspace{0.1cm}
\noindent {\bfseries Wide-field imaging Fourier transform spectrometers:} Expanding on the capabilities of e.g. SPIRE on {\itshape Herschel}, balloon or space-based IFTS with FoVs of tens of square arcminutes\citep{mail13}. Examples include the concept H2EX\citep{boul09}.
\vspace{0.1cm}
\noindent {\bfseries Deployable optics:} Development of deployable optics schemes across a range of aperture sizes would be enabling for a range of platforms. Examples range from 20-50\,cm systems for CubeSats to 5-10\,m systems for JWST.
\vspace{0.1cm}
\noindent {\bfseries Data downlinking and archiving:} The advent of infrared observatories with large-format detector arrays presents challenges in downlinking and archiving. Infrared observatories have, to date, not unduly stressed downlinking systems, but this could change in the future with multiple instruments each with $10^4 - 10^5$ pixels on a single observatory. Moreover, the increasing number and diversity of PI and facility-class infrared observatories poses challenges to data archiving, in particular for enabling investigators to efficiently use data from multiple observatories in a single study. One way to mitigate this challenge is increased use of on-board data processing and compression, as is already done for missions operating at shorter wavelengths.
\vspace{0.1cm}
\noindent {\bfseries Commonality and community in instrument software:} Many tasks are similar across a single platform, and even between platforms (e.g., pointing algorithms, focus, data download). Continued adherence to software development best practices, code sharing via repositories such as GitHub, and fully open-sourcing software, will continue to drive down associated operating costs, speed up development, and facilitate ease of access.
\section{Conclusions: The Instrument Development Landscape for Infrared Astronomy}\label{sect:conc}
The picture that coalesces from this review is that far-infrared astronomy is still an emerging field, even after over forty years of development. Optical and near-infrared astronomy has a mature and well-understood landscape in terms of technology development for different platforms. In contrast, far-infrared astronomy has more of the ``Wild West'' about it; there are several observing platforms that range widely in maturity, all with overlapping but complementary domains of excellence. Moreover, considering the state of technology, {\itshape all} areas have development paths where huge leaps forward in infrared observing capability can be obtained. In some cases, entirely new platforms can be made possible.
To conclude this review, we bring together and synthesize this information in order to lay out how the capabilities of each platform can be advanced. To do so, we use the following definitions:
\begin{itemize}
\item {\bfseries Enabling:} Enabling technologies satisfy a capability need for a platform, allowing that platform to perform science observations in a domain that was hitherto impossible with that platform.
\item {\bfseries Enhancing:} Enhancing technologies provide significant benefits to a platform over the current state of the art, in terms of e.g., observing efficiency or cost effectiveness, but do not allow that platform to undertake observations in new science domains.
\end{itemize}
\noindent These definitions correspond closely to the definitions of Enabling (a pull technology) and Enhancing (a push technology) as used in the 2015 NASA Technology Roadmap.
Since different technology fields vary in relevance for different platforms, technologies can be enabling for some platforms and enhancing for others. In Table \ref{tab:summary} we assess the status of selected technology areas as enabling or enhancing, as a function of observing platform. This table is solely the view of the authors, and not obtained via a community consultation.
With this caveat in mind, based on Table \ref{tab:summary}, we present a non-exhaustive list of important technology development areas for far-infrared astronomy:
\vspace{0.3cm}
\noindent {\bfseries Large format detectors:} Existing and near-future infrared observatories include facilities with large FoVs, or those designed to perform extremely high resolution spectroscopy. These facilities motivate the development of large-format arrays that can fill telescope FoVs, allowing for efficient mapping and high spatial resolutions. A reference goal is to increase the number of pixels in arrays to $10^5$ for direct detectors, and $10^2$ for heterodyne detectors. This is a small number compared with arrays for optical and near-infrared astronomy, for which millions of pixels can be fielded in a single chip, but is still $1-2$ orders of magnitude larger than any array currently used in the far-infrared.
\vspace{0.2cm}
\noindent {\bfseries Detector readout electronics:} Increases in detector array sizes are inevitably accompanied by increases in complexity and power required for the readout electronics, and power dissipation of the cold amplifiers for these arrays. At present, the power requirements for $\gtrsim10^4$ detector array readout systems are a key limitation for their use in any space-based or sub-orbital platform, restricting them to use in ground-based facilities. For these reasons, development of multiplexing schemes is a high priority for large-format arrays, irrespective of the technology used.
The main driver for power dissipation is the bandwidth of the multiplexers. Low-power cryogenic amplifiers, in particular parametric amplifiers, can mitigate this problem at 4\,K. Application Specific Integrated Circuits (ASICs), which combine digitization, FFT, and tone extraction in a single chip, can greatly reduce the power required for the warm readout system. A reference goal for the use of $\gtrsim10^4$ pixel arrays on space-based observatories such as the OST is a total power dissipation in the readout system of below $2\,$kW. This requires a denser spacing of individual channels in frequency domain multiplexers. For balloon-based facilities, sub-kW power dissipation is desirable.
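The per-pixel budget implied by these reference goals can be made explicit; the numbers below are simply those quoted above:
\begin{verbatim}
# Per-pixel readout power implied by the space-observatory goal.
p_total = 2e3            # W, total readout dissipation target
n_pix = 1e4              # number of pixels
print(p_total / n_pix)   # -> 0.2 W per pixel for the full readout chain
\end{verbatim}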
\vspace{0.2cm}
\noindent {\bfseries Direct detector sensitivity \& dynamic range:} The performance of $4\,$K-cooled space-based and high-altitude sub-orbital telescopes will be limited by astrophysical backgrounds such as zodiacal light, galactic cirrus, and the microwave background, rather than telescope optics or the atmosphere. Increasing pixel sensitivity to take advantage of this performance is of paramount importance to realize the potential of future infrared observatories. A reference goal is large-format detector arrays with per-pixel NEP of $2\times 10^{-20}\,\mathrm{W}/\sqrt{\mathrm{Hz}}$. This sensitivity is enabling for all imaging and medium resolution spectroscopy applications. It meets the requirement for R$\sim$1000 spectroscopy for the OST, and exceeds the medium resolution spectroscopy requirement for SPICA by a factor of five. However, for high spectral resolutions ($R>10^5$, e.g. the proposed HRS on the OST), even greater sensitivities are required, of $\sim 10^{-21}\,\mathrm{W}/\sqrt{\mathrm{Hz}}$, and ideally photon-counting.
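The payoff of lower NEP can be illustrated with the usual background-limited scaling, in which mapping speed improves as the square of the sensitivity gain; the current-generation NEP below is an assumed round number, not a measured value:
\begin{verbatim}
# Rule of thumb: observing speed scales as (NEP_old / NEP_new)**2.
nep_goal = 2e-20   # W/sqrt(Hz), reference goal quoted above
nep_now  = 1e-19   # W/sqrt(Hz), assumed current-generation value
print((nep_now / nep_goal)**2)   # -> 25x faster mapping
\end{verbatim}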
Turning to dynamic range: the dynamic range of detector arrays for high-background applications, such as ground-based observatories, is sufficient. However, the situation is problematic for the low background of cold space-based observatories. This is particularly true of observatories with $\gtrsim5\,$m apertures, since the saturation powers of currently proposed high-resolution detector arrays are within $\sim2$ orders of magnitude of their NEPs. It would be advantageous to increase the dynamic range of detector arrays to five or more orders of magnitude of their NEPs, as this would mitigate the need to populate the focal plane with multiple detector arrays, each with different NEPs.
\vspace{0.2cm}
\noindent {\bfseries Local Oscillators for heterodyne spectroscopy:} The extremely high spectral resolutions achievable by heterodyne spectroscopy at mid/far-infrared wavelengths are of great value, both for scientific investigations in their own right, and for complementarity with the moderate spectral resolutions of facilities like JWST. This motivates continued development of high quality Local Oscillator sources to increase the sensitivity and bandwidth of heterodyne receivers. An important development area is high spectral purity, narrow-line, phase-locked, high-power ($5-10$\,mW) Quantum Cascade Laser (QCL) LOs, since QCL LOs operate effectively for the higher frequency ($>3$\,THz) arrays. A complementary development area is power division schemes (e.g., Fourier phase gratings) to utilize QCLs effectively.
\vspace{0.2cm}
\noindent {\bfseries High bandwidth heterodyne mixers:} The current bandwidth of heterodyne receivers means that only very small spectral ranges can be observed at any one time, so that some classes of observation, such as multiple line scans of single objects, are often prohibitively inefficient. There is thus a need to increase the IF bandwidth of $1-5\,$THz heterodyne mixers. A reference goal is a minimum bandwidth of $8\,$GHz at frequencies of $\sim3$\,THz. This will allow for simultaneous observation of multiple lines, improving both efficiency and calibration accuracy. A related development priority is low-noise $1-5\,$THz mixers that can operate at temperatures of $>20$\,K. At present, the most promising paths towards such mixers align with the HEB and SIS technologies.
\vspace{0.2cm}
\noindent {\bfseries Interferometry:} Ground-based observations have conclusively demonstrated the extraordinary power of interferometry in the centimeter to sub-millimeter, with facilities such as the VLA and ALMA providing orders of magnitude increases in spatial resolution and sensitivity over any existing single-dish telescope. As Table \ref{tab:summary} illustrates, the technology needs for space-based far-infrared interferometry are relatively modest, and center on direct detector developments. For interferometry, high-speed readout is more important than a large pixel count or extremely low NEP. For example, SPIRIT requires $14\times14$ pixel arrays of detectors with a NEP of $\sim 10^{-19}\,\mathrm{W}/\sqrt{\mathrm{Hz}}$ and a detector time constant of $\sim185\,\mu$s \citep{benford2007cryogenic}. Detailed simulations, coupled with rigorous laboratory experimentation and algorithm development, are the greatest priorities for interferometry.
\vspace{0.2cm}
\noindent {\bfseries Cryocoolers:} Since cooling to $4\,$K and $0.1\,$K temperatures is required for all far-infrared observations, improvements in the efficiency, power requirements, size, and vibration of cryocoolers are valuable for all far-infrared space- and sub-orbital-based platforms. For $<0.1\,$K coolers, there is a need for further development of both CADRs and DRs that provide up to tens of $\mu$W of cooling at $<0.1\,$K, in order to support larger arrays. For $4\,$K coolers, further development to maximize cooling power per input power for small cooling loads ($<100$\,mW at $4\,$K) and lower mass is desirable, along with minimizing the exported vibration from the cooler system. For $\sim30\,$K coolers, development of a cooling solution with power, space, and vibration envelopes that enable its use inside a 6U CubeSat, while leaving adequate resources for detector arrays, optics, and downlink systems, would enable far-infrared observations from CubeSat platforms, as well as enhancing larger observatories.
\vspace{0.2cm}
\noindent {\bfseries Deployable and/or Light-weight telescope mirrors:} The advent of long-duration high-altitude observing platforms, and the expanded capabilities of future launch vehicles, enable the consideration of mirrors for far-infrared observatories with diameters 2-5 times larger than on facilities such as SOFIA and {\itshape Herschel}. The most important limitations on mirror size are then (a) mass, and (b) approaches to active control of the mirror surface. The development of large-aperture, lightweight, high surface accuracy mirrors is thus an important consideration, including those in a deployable configuration. A related area is the development of optical components that accommodate large-format arrays, or very high resolution spectroscopy.
\vspace{0.2cm}
\noindent {\bfseries Technology maturation platforms:} Sub-orbital far-infrared platforms including ground-based facilities, SOFIA, and balloon-borne observatories, continue to make profound advances in all areas of astrophysics. However, they also serve as a tiered set of platforms for technology maturation and raising TRLs. The continued use of all of these platforms for technology development is essential to realize the long-term ambitions of the far-infrared community for large, actively cooled, space-based infrared telescopes. A potentially valuable addition to this technology maturation tier is the International Space Station, which offers a long-term, stable orbital platform with abundant power.
\vspace{0.2cm}
\noindent {\bfseries Software and data archiving:} In the post-{\itshape Herschel} era, SOFIA and other sub-orbital platforms will play a critical role in mining the information-rich far-infrared spectral range, and in keeping the community moving forward. For example, the instruments flying on SOFIA and currently under development did not exist when {\itshape Herschel} instrumentation was defined. During this time, and henceforth, there is an urgent need to ensure community best-practices in software design, code sharing, and open sourcing via community-wide mechanisms. It is also important to maintain and enhance data archiving schemes that effectively bridge multiple complex platforms in a transparent way, and which enable access to the broadest possible spectrum of the community.
\section{Acknowledgements}\label{sect:ack}
We thank George Nelson and Kenol Jules for help on the capabilities of the International Space Station, and Jochem Baselmans for insights into KIDs. We also thank all speakers who took part in the FIR SIG Webinar series. This report developed in part from the presentations and discussions at the Far-Infrared Next Generation Instrumentation Community Workshop, held in Pasadena, California in March 2017. It is written as part of the activities of the Far-Infrared Science Interest Group. This work was supported by CNES.
\begin{landscape}
\begin{table}[ht]
\begin{threeparttable}[b]
\caption{A summary of enabling and enhancing technologies for far-infrared observing platforms.} \label{tab:summary}
{\footnotesize
\begin{tabular}{@{}|crr|p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}|p{1.4cm}p{1.4cm}p{1.4cm}p{1.4cm}|@{}}
\hline
\hline
\multicolumn{3}{|l|}{} & \multicolumn{7}{c|}{\cellcolor{blue!30}{\bfseries SPACE BASED}} & \multicolumn{4}{c|}{\cellcolor{blue!30}{\bfseries ATMOSPHERE BASED}} \\
\hline
\multicolumn{3}{|l|}{} & \cellcolor{yellow!35}OST\tnote{(a)} & \cellcolor{yellow!15}SPICA & \cellcolor{yellow!35}Probe & \cellcolor{yellow!15}CubeSats & \cellcolor{yellow!35}ISS & \cellcolor{yellow!15}Interfer- & \cellcolor{yellow!35}Sounding & \cellcolor{lime!25}SOFIA & \multicolumn{2}{c}{\cellcolor{lime!15}Balloons\tnote{(b)}} & \cellcolor{lime!25}Ground \\
\multicolumn{3}{|l|}{} & \cellcolor{yellow!35} & \cellcolor{yellow!15} & \cellcolor{yellow!35}Class & \cellcolor{yellow!15} & \cellcolor{yellow!35} & \cellcolor{yellow!15}ometry & \cellcolor{yellow!35}Rockets & \cellcolor{lime!25} & \cellcolor{lime!15}ULDB & \cellcolor{lime!15}LDB & \cellcolor{lime!25}Based \\
\hline
\hline
\multicolumn{3}{|l|}{\cellcolor{gray!50}{\bfseries Direct Detectors\tnote{(c)}}} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Array size ($10^{4+}$pix)} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Sensitivity} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Speed} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Dynamic Range} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Readout: $10^{4}$pix} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Readout: $10^{5}$pix} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} \\
\hline
\multicolumn{3}{|l|}{\cellcolor{gray!50}{\bfseries Heterodyne Detectors}\tnote{(d)}} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Array size ($10^{2+}$pix)} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}LO bandwidth\tnote{(e)}} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}LO mass} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}LO power draw} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Mixer bandwidth} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{Sienna2} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Mixer sensitivity} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\hline
\multicolumn{3}{|l|}{\cellcolor{gray!50}{\bfseries Cryocoolers\tnote{(f)}}} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Low-Power} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Low-Mass} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\hline
\multicolumn{3}{|l|}{\cellcolor{gray!50}{\bfseries Mirrors/optics}} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Low areal density} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Large aperture} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Deployable} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\hline
\multicolumn{3}{|l|}{\cellcolor{gray!50}{\bfseries Other}} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\multicolumn{3}{|r|}{\cellcolor{green!25}Backend electronics} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{Sienna2} & \cellcolor{gray!50} & \cellcolor{Sienna2} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} & \cellcolor{SteelBlue1} \\
\multicolumn{3}{|r|}{\cellcolor{green!15}Downlink systems} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{SteelBlue1} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} & \cellcolor{gray!50} \\
\hline
\hline
\multicolumn{5}{r}{} & \multicolumn{2}{c}{\cellcolor{Sienna2}\textcolor{white}{{\bfseries Enabling}}}& \multicolumn{2}{c}{\cellcolor{SteelBlue1}\textcolor{white}{{\bfseries Enhancing}}}& & & & \\
\hline
\hline
\end{tabular}
}
\begin{tablenotes}
{\footnotesize
\item[(a)] For the OST (\S\ref{origins}), the table refers to ``concept 1'', the more ambitious of the concepts investigated, with greater dependence on technology development.
\item[(b)] For balloons (\S\ref{ssect:uldb}): ULD balloons have flight times of 100+ days and carry payloads up to $\sim1800\,$kg. The $<50$ day LD balloons can carry up to $\sim2700\,$kg.
\item[(c)] Fiducial targets for direct detectors (\S\ref{directdetect}) used for space-based imaging are a NEP of $1\times 10^{-19}\,\mathrm{W}/\sqrt{\mathrm{Hz}}$ and a readout system with $<3\,$kW power dissipation. They should also be compatible with an observatory cryogenic system.
\item[(d)] For heterodyne instruments (\S\ref{ssect:het}): none are planned for SPICA (\S\ref{sec:spica}). For interferometers (\S\ref{ssect:firint}): all those proposed by the US community are direct detection; heterodyne interferometer needs have however been studied in Europe.
\item[(e)] The assumed operating frequency range is 1-5\,THz.
\item[(f)] For cryocoolers (\S\ref{ssect:cryoc}) we do not distinguish between 4\,K and 0.1\,K coolers, since the choice is detector dependent.
}
\end{tablenotes}
\end{threeparttable}
\end{table}
\end{landscape}
\section{Introduction}
Hidden populations are groups of individuals which i) have strong privacy concerns due to illicit or stigmatized behaviour, and ii) lack a sampling frame, i.e., their size and composition are unknown. Examples of hidden populations include several groups that are at high risk for contracting and spreading HIV, e.g., men who have sex with men, sex workers, and injecting drug users~\cite{Beyrer2012,Kerrigan2012,Aceijas2004}; it is therefore of great importance to obtain reliable sampling methods for hidden populations in order to plan and evaluate interventions in the global HIV epidemic~\cite{Magnani2005,Lamptey2008}.
Respondent-driven sampling (RDS)~\cite{Heckathorn1997,Heckathorn2002} is a sampling methodology that utilizes the relationships between individuals in order to sample from the population. By combining an effective sampling scheme and the ability to produce unbiased population estimates, RDS has become perhaps the most popular method for sampling from hidden populations. A typical RDS study starts with the selection of a group of seed individuals. Each seed is provided with a number of coupons, typically between three and five, to distribute to his or her peers in the population. An individual is eligible for participation upon presenting a coupon at the study site. Because recruitment takes place by coupons, participants remain anonymous throughout the study, but each coupon is numbered with a unique ID to keep track of who recruited whom. Incentives are given both for the participation of an individual as well as for the participation of those to whom he or she passed coupons. After participation, which commonly includes survey questions and possibly being tested for diseases, newly recruited individuals (i.e., respondents) are also given coupons to disperse among their contacts in the population. This procedure is then repeated until the desired sample size has been reached. The sampled individuals form a tree-like structure which is obtained from tracing the coupons. Recently, online-based RDS methods (webRDS), where recruitment takes place via email and a survey is filled out at a designated web site, have also been put into use~\cite{Wejnert2008,Wejnert2009,Bengtsson2012}. There are several procedures available for estimating population characteristics from RDS data, most of which use a Markov model in order to approximate the actual recruitment process~\cite{Salganik2004,Volz2008,Gile2011JASA,Gile2011arXiv,LuEtal,malmros2013}; this is not the focus of the present paper.
A frequent problem in RDS studies is the inability of the recruitment process to reach the desired sample size due to premature failure of the recruitment chains started by the seeds~\cite{malekinejad2008}. This is often mitigated by additional seeds that enter the study as the rate of recruitment declines; e.g., in~\cite{malekinejad2008}, 43\% of reviewed RDS studies with available data reported that additional seeds were used. Relatedly, it has been observed in webRDS studies, where recruitment is allowed to go on until it stops by itself, that the recruitment process fails to reach a large proportion of the population despite additional seeds joining in at a later time~\cite{Bengtsson2012,Stein2014}. While there are most likely several reasons behind recruitment chain failure, such as community structure in the population causing chains to become stuck in a sub-network, or clustering, which has a similar but more local effect, an important reason is the limited number of coupons in the RDS recruitment process. This is the main focus of this paper. Furthermore, recruitment chain failure is highly associated with the ability of the recruitment process to start successful recruitment chains, the probability of such chains occurring, and the relative size of the population that is reached by an RDS study, all of which are related to quantities typically studied in epidemic modelling. As it turns out, it is possible to use models of infectious disease spread on social networks to describe coupon distribution in RDS, where the disease is defined as ``participation in the study'' and spreads by the RDS coupon distribution mechanism.
The simplest model of infectious disease spread is the Reed-Frost model, see e.g.~\cite[][p. 11-18]{andersson2000}, where in each generation $i$, each infectious individual independently infects each susceptible individual with the same probability. The individuals that were infected by the individuals in generation $i$ make up generation $i+1$ of infectious individuals in the epidemic. After spreading the disease, the individuals in generation $i$ are considered recovered (or dead) from the disease and are removed from the process. In the original version of the model, an infectious individual attempts to infect all susceptible individuals in the population. The model is however easily modified to the more realistic case when the structure of the population is described by a social network, hence imposing the restriction that an infectious individual may only spread the disease to his or her contacts in the social network, independently of each other with the same probability. Infectious diseases are usually able to spread to all contacts of an individual, and consequently, the Reed-Frost model and other epidemic models defined on social networks do not impose any restrictions on the number of individuals that an infectious individual can infect other than those given by population structure. The RDS recruitment process differs from infectious diseases in that its spread is restricted by the limited number of coupons. Consequently, individuals with more population contacts than the number of coupons distributed to them have less capability of recruiting than if RDS recruitment were to spread in the usual manner of an epidemic, i.e.\ without any limitations. Depending on how the number of contacts (i.e., degrees) of population members is distributed, this may have a large effect on the capability of the RDS recruitment process to sustain and initiate recruitment. Furthermore, it may affect the ability of the recruitment process to reach a substantial proportion of the population, as the sampling procedure can limit recruitment to parts of the population.
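As an illustration of the classical model just described (this is not code from any of the cited works; the population size and infection probability are arbitrary), a generation-based Reed-Frost simulation in a homogeneously mixing population can be sketched as:
\begin{verbatim}
import random

def reed_frost(n, p, initial=1, seed=1):
    # Final size of a Reed-Frost epidemic in a homogeneously mixing
    # population: each infective infects each susceptible
    # independently with probability p.
    rng = random.Random(seed)
    susceptible, infectious, total = n - initial, initial, initial
    while infectious > 0 and susceptible > 0:
        q = (1.0 - p) ** infectious   # per-susceptible escape probability
        new = sum(1 for _ in range(susceptible) if rng.random() > q)
        susceptible -= new
        total += new
        infectious = new              # the next generation
    return total

print(reed_frost(n=1000, p=0.002))    # here R_0 = (n-1)*p is roughly 2
\end{verbatim}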
In this paper, we model RDS as an epidemic taking place on a social network by defining a Reed-Frost type model which has an upper limit on the number of individuals that an infectious individual can infect. We will use both infectious disease terminology and RDS terminology when referring to this model. In order to be able to specify the degree distribution of the social network, we use the configuration model~\cite{Molloy1995,Molloy1998} to describe the structure of the population. We calculate the \emph{basic reproduction number}, i.e., the number of individuals that are infected by a typical infectious individual during the early stages of the epidemic. This is often denoted by $R_0$. We say that there is a \emph{major outbreak} if a non-negligible proportion of the population is infected and calculate the probability $\tau$ of such outbreaks occurring. If $R_0\le1$, it is not possible for a major outbreak to occur, while if $R_0>1$, a major outbreak may occur. The critical value of $R_0=1$ is often referred to as the \emph{epidemic threshold}. We also calculate the relative size $z$ of an outbreak in the case of a major outbreak, using so-called \emph{susceptibility sets}~\citep{ball2001,ball2002}. Note that $\tau$ and $z$ are positive only if $R_0$ is larger than the epidemic threshold. We compare the RDS recruitment process to corresponding epidemics with unrestricted spread and investigate the effect of varying the number of coupons and the coupon transfer probability. To our knowledge, there are no previous studies of epidemics on networks that describe behaviour similar to that of the present model, although the model in \cite{martin1986} allows for a restriction on the number of individuals that an infectious individual can infect in a homogeneously mixing population (i.e.\ a population without network structure).
\section{Models}
\subsection{Network model}\label{Subsec:NetworkModel}
We consider a configuration model network consisting of $n$ vertices. In later calculations, we will assume that $n\to\infty$. Each
individual $i, i=1,\ldots,n,$ is assigned an i.i.d.\ number of stubs (half-edges) $d_i$ from a prescribed distribution
$D$ having support on the non-negative integers. The network is then formed by pairing stubs together uniformly at
random. If $\sum_{i=1}^nd_i$ is odd, a stub is added to the $n$:th vertex (this does not influence our results in the
limit of infinite population size). This construction allows the formation of multiple edges and self-loops; it is
however well known that the fraction of these is small if $D$ has finite second moment. Specifically, the probability of
the resulting graph being simple is bounded away from 0 as $n\to\infty$; see \citep[Theorem 7.8]{hofstad2009} and
\citep[Lemma 5.3]{britton2007}. Hence we can condition on the graph being simple given that $E(D^2)<\infty$.
Alternatively, we may proceed by removing multiple edges and self-loops from the generated graph since asymptotically
this does not change the degree distribution if $D$ has finite second moment; see \citep[Theorem 7.9]{hofstad2009}.
Hence, we will from now on assume that the resulting graph is simple. Moreover, the graph is locally tree-like when
$E(D^2)<\infty$, meaning that, with high probability, it does not contain short cycles \citep{britton2007}. Hence, we can
take advantage of the branching process \citep[e.g.,][]{athreya2011} approximations that are often used for epidemics, see e.g.~\citep[][ch. 3]{andersson2000}. In what follows, we will assume that the degree distributions considered have finite second moment.
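As a minimal illustration of this construction (our own sketch, not part of the analysis), the following Python code draws i.i.d.\ Poisson degrees, pairs stubs uniformly at random via the \texttt{networkx} package, and then erases multiple edges and self-loops; the Poisson parameter and the seed are arbitrary choices.
\begin{verbatim}
import networkx as nx
import numpy as np

rng = np.random.default_rng(seed=1)

n = 5000
degrees = rng.poisson(8, size=n)        # i.i.d. degrees from D ~ Poisson(8)
if degrees.sum() % 2 == 1:              # parity fix: one extra stub on the last vertex
    degrees[-1] += 1

G = nx.configuration_model(degrees, seed=1)  # uniform pairing of stubs (multigraph)
G = nx.Graph(G)                              # erase multiple edges
G.remove_edges_from(nx.selfloop_edges(G))    # erase self-loops
\end{verbatim}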
\subsection{Epidemic model}\label{Subsec:EpidemicModel}
On this graph, describing the social structure in a community, we define an epidemic model mimicking the RDS recruitment process. In this model, becoming infected corresponds to participating in the RDS study. Initially, all members of the population (vertices) are susceptible. The epidemic starts with one randomly selected individual (vertex), the index case, being infected from the outside. The infected individual uniformly selects $c$ of his or her neighbours in the population and infects them
independently of each other with the same probability $p$. The parameter $c$ corresponds to the number of coupons in RDS and the parameter $p$ to the probability of being successfully recruited to the RDS study. If the infected individual has fewer than $c$ contacts, he or she infects all his or her contacts independently of each other with probability $p$. The newly infected individuals
make up the first generation of the epidemic. After spreading the disease, the initially infected individual recovers
and becomes immune (or dies) and has no further role in the epidemic. The individuals in the first generation each in
turn select $c$ of their neighbours excluding the one who infected them (which for the first generation is the index
case), regardless of whether they are susceptible or not. If an individual has fewer than $c$ neighbours excluding the
one who infected him or her, he or she selects all of his or her neighbours. Then, they infect the selected contacts
that are susceptible, independently of each other with probability $p$, and then recover; contacts with already infected individuals have no effect. The now infected individuals
form the second generation of the epidemic. The disease continues to spread in the same fashion from the
second generation and onward until there are no newly infected individuals in a generation. The individuals that were
infected during the course of the epidemic make up the outbreak, and the number of ultimately infected individuals is
the final size of the outbreak. Note that if we let $c=\infty$, we get the standard Reed-Frost epidemic taking place on the configuration model network~\cite{britton2007}.
Because an individual only tries to infect those he or she selected, the spread of the disease, or coupon distribution mechanism, in our model is more similar to that of webRDS than physical RDS. We discuss this further and present other possible coupon distribution mechanisms in Section~\ref{Sec:Discussion}.
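The recruitment dynamics can be sketched in Python as follows; this is a minimal illustration building on the graph \texttt{G} and generator \texttt{rng} from the sketch in Subsection~\ref{Subsec:NetworkModel}, and the function name is our own.
\begin{verbatim}
def simulate_rds(G, c, p, rng, index_case=None):
    """Coupon-restricted Reed-Frost process; returns the set of infected."""
    if index_case is None:
        index_case = rng.choice(list(G.nodes))
    infected = {index_case}
    generation = [(index_case, None)]   # pairs (individual, his or her infector)
    while generation:
        next_generation = []
        for v, infector in generation:
            nbrs = [u for u in G.neighbors(v) if u != infector]
            rng.shuffle(nbrs)
            for u in nbrs[:c]:          # select at most c contacts at random
                if u not in infected and rng.random() < p:
                    infected.add(u)
                    next_generation.append((u, v))
        generation = next_generation
    return infected

final_size = len(simulate_rds(G, c=3, p=0.5, rng=rng))
\end{verbatim}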
\section{Calculations}\label{Sec:Calculations}
\subsection{The basic reproduction number $R_0$}\label{Subsec:R0}
Assume that we have a configuration model graph $G$ of size $n$, where $n$ is large, and let the degree distribution of $G$ be
$D$, where $P(D=k)=p_k$. The degree of a given neighbour of an individual follows the \emph{size-biased} degree distribution $\tilde D$, where $P(\tilde
D=k)=\tilde p_k=kp_k/E(D)$. Assume that we have an epidemic spreading on this graph according to the description in
Subsection \ref{Subsec:EpidemicModel}. The degree of the index case is then distributed as $D$, and the degree of
infected individuals in later generations during the early stages of an outbreak is distributed as $\tilde D$. As previously mentioned in Subsection \ref{Subsec:NetworkModel}, the graphs generated by the configuration model will with high probability not contain short cycles, meaning that we can approximate the spread of the epidemic with a
(forward) branching process. Let $X$ and $\tilde X$ be the offspring of the ancestor (i.e., the index case) and of the
later generations in this branching process, respectively. Given that the index case has degree $k\le
c$, he or she can at most infect $k$ neighbours. If the index case has degree larger than or equal to $c+1$, he
or she infects at most $c$ neighbours. Because infections happen independently with the same probability $p$, we
have that, conditionally on the degree, the probability that the index case infects $j$ neighbours is
\begin{equation}\label{Eq:CondNumberInfectedIndex}
P(X=j|D=k)=\binom{c\wedge k}{j}p^j(1-p)^{(c\wedge k)-j},
\end{equation}
where $j=0,\ldots,c\wedge k$. Infectious individuals in later generations have one less contact available for
infection (the one that infected them). Hence, we get that, conditionally on the degree, the probability that an
infectious individual in later generations infects $j$ neighbours is
\begin{equation}\label{Eq:CondNumberInfectedLaterGen}
P(\tilde X=j|\tilde D=k)=\binom{c\wedge (k-1)}{j}p^j(1-p)^{(c\wedge (k-1))-j},
\end{equation}
where $j=0,\ldots,c\wedge (k-1)$.
Because the ability of an individual to spread the disease will depend on its degree, the offspring distributions are obtained by conditioning on
the degree:
\begin{align}
P(X=j) &=\sum_{k=j}^\infty P(X=j|D=k)p_k;\\
P(\tilde X=j) &=\sum_{k=j+1}^\infty P(\tilde X=j|\tilde D=k)\tilde p_k,
\end{align}
where $j=0,\ldots,c$, and the probabilities $P(X=j|D=k)$ and $P(\tilde X=j|\tilde D=k)$ come
from Eqs.~\eqref{Eq:CondNumberInfectedIndex} and \eqref{Eq:CondNumberInfectedLaterGen}, respectively. From standard
branching process theory~\cite{athreya2011} we have that $R_0$ is the expected number of individuals
that get infected by an infectious individual in the second and later generations; hence
\begin{align}\label{Eq:R0}
R_0 &=E(\tilde X) = \sum_{j=0}^c j\sum_{k=1}^\infty P(\tilde X=j|\tilde D=k)\tilde p_k\\
&=\sum_{j=0}^c j\left(\sum_{k=j+1}^{c} \binom{k-1}{j}p^j(1-p)^{k-1-j}\tilde p_k +
\binom{c}{j}p^j(1-p)^{c-j}\left(1-\sum_{k=1}^c\tilde p_k\right)\right).\notag
\end{align}
The obtained $R_0$ is increasing in $p$ and $c$, and for a fixed $p$, $R_0\to R_0^{\rm (unrestricted)}$ as $c\to\infty$, where $R_0^{\rm (unrestricted)}$ is the $R_0$ value for the standard Reed-Frost epidemic on a configuration model network, given by \citep{britton2007}
\[R_0^{\rm (unrestricted)}=p\left(E(D)+\frac{\text{Var}(D)-E(D)}{E(D)}\right).\]
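As an illustration, Eq.~\eqref{Eq:R0} can be evaluated numerically as in the following Python sketch. It uses the simplification $E(\tilde X\mid\tilde D=k)=p\,(c\wedge(k-1))$, which follows from the mean of the binomial distribution in Eq.~\eqref{Eq:CondNumberInfectedLaterGen}; the truncation point \texttt{K} of the degree support is an arbitrary numerical choice.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def r0_restricted(pk, p, c):
    """R_0 = E(X~), with X~ | D~ = k ~ Bin(min(c, k - 1), p); pk[k] = P(D = k)."""
    k = np.arange(len(pk))
    pk_tilde = k * pk / np.dot(k, pk)   # size-biased degree distribution
    return p * np.dot(np.minimum(c, np.maximum(k - 1, 0)), pk_tilde)

K = 200                                 # truncation of the degree support
pk = poisson.pmf(np.arange(K), 8)       # D ~ Poisson(8), as in Section 4
print(r0_restricted(pk, p=0.5, c=3))
\end{verbatim}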
\subsection{Probability of major outbreak}\label{Subsec:ProbMaj}
When $R_0>1$, it is possible for a major outbreak to occur. The probability $\tau$ of such an outbreak occurring is given
by the survival probability of the approximating branching process, which we get by standard techniques. We first
consider a branching process with offspring distribution $\tilde X$ for all individuals, i.e.\ also for the index case.
Let the extinction probability of this process be $\tilde\pi$. For the process to die out, all the branching processes
initiated by the offspring of the ancestor must die out; hence by conditioning on the number of offspring in the first
generation of the process, we get
\begin{equation}\label{Eq:pitilde}
\tilde\pi =\sum_{j=0}^c\tilde\pi^j P(\tilde X=j)=\tilde\rho(\tilde\pi),
\end{equation}
where $\tilde\rho$ is the probability generating function of $\tilde X$. The solution to Equation~\eqref{Eq:pitilde} is obtained numerically. In our original branching process the ancestor has
offspring distribution $X$ and later generations have offspring distribution $\tilde X$. Again by conditioning on the
number of individuals in the first generation, we get that the extinction probability $\pi$ of the original branching
process is
\begin{align}\label{Eq:pi}
\pi &=\rho(\tilde\pi),
\end{align}
where $\tilde \pi$ is the solution to Equation~\eqref{Eq:pitilde} and $\rho$ is the probability generating function of $X$. The solution to Equation~\eqref{Eq:pi} is given
by numerical calculations, and we obtain the probability of a major outbreak $\tau=1-\pi$.
Note that if we have $1<s<\infty$ initially infected individuals in the epidemic, the probability of a major outbreak is $1-\pi^s$, which approaches 1 as $s$ becomes large. The number of initially infected individuals does not affect $R_0$ or the relative size of a major outbreak calculated in Subsection~\ref{Subsec:RelativeSize}.
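For completeness, a Python sketch of this computation is given below (function names are our own, continuing the earlier sketches). The conditional probabilities come from Eqs.~\eqref{Eq:CondNumberInfectedIndex} and \eqref{Eq:CondNumberInfectedLaterGen}, and the fixed point of Equation~\eqref{Eq:pitilde} is found by iterating from $\tilde\pi=0$, which converges to the smallest root, i.e.\ the extinction probability.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def offspring_pmfs(pk, p, c):
    """P(X = j) and P(X~ = j) for j = 0, ..., c."""
    k = np.arange(len(pk))
    pk_tilde = k * pk / np.dot(k, pk)
    px = np.array([np.dot(binom.pmf(j, np.minimum(c, k), p), pk)
                   for j in range(c + 1)])
    px_tilde = np.array([np.dot(binom.pmf(j, np.minimum(c, np.maximum(k - 1, 0)), p),
                                pk_tilde) for j in range(c + 1)])
    return px, px_tilde

def outbreak_probability(pk, p, c, tol=1e-12):
    """tau = 1 - rho(pi~), with pi~ the smallest root of pi~ = rho~(pi~)."""
    px, px_tilde = offspring_pmfs(pk, p, c)
    j = np.arange(c + 1)
    pi_t, prev = 0.0, 1.0
    while abs(pi_t - prev) > tol:       # iterate Eq. (pi~) starting from 0
        prev, pi_t = pi_t, np.dot(pi_t ** j, px_tilde)
    return 1.0 - np.dot(pi_t ** j, px)

print(outbreak_probability(pk, p=0.5, c=3))   # pk from the previous sketch
\end{verbatim}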
\subsection{Relative size of a major outbreak}\label{Subsec:RelativeSize}
The relative size $z$ of an outbreak in the event of a major outbreak can be obtained using susceptibility sets, constructed as follows. For each individual $i$, we can obtain a random list of the
neighbours that $i$ would infect, given that $i$ were to be infected. By combining the lists from all individuals in the
population, it is possible to construct a directed graph with all vertices (individuals) in which there is an arc from
vertex $i$ to vertex $j$ if $j$ is in $i$'s list. The susceptibility set of an individual $j$ consists of all
individuals in this directed graph, including $j$ itself, from which there is a directed path to $j$. Hence, $j$'s
susceptibility set is such that the infection of any individual in the set would result in the ultimate infection of
$j$. Note that $j$ will be infected in the epidemic if and only if the initially infected individual is in $j$'s susceptibility set.
The susceptibility set of a randomly chosen individual, $i_0$ say, can be approximated with a (backward) branching process in which $i_0$ is the only member of the zeroth generation. We consider the number of neighbours that, if they were to be infected, would infect $i_0$ (as opposed to previously, when we considered the number of neighbours that an individual would infect were it to be infected). Suppose that $i_0$ has degree $d$. Because all neighbours of $i_0$ contact him or her with the same probability $\theta$ independently of each other, the number of neighbours that contact him or her is $\bin(d,\theta)$-distributed; hence, the unconditional distribution of the number of neighbours that contact him or her is a mixed binomial distribution with parameters $D$ and $\theta$. We now derive an equation for the contact probability $\theta$. The degree distribution of the neighbouring individuals is $\tilde D$, so we obtain
\begin{equation}
\theta =\sum_{k=0}^\infty\theta_k\tilde p_k,
\end{equation}
where $\theta_k$ is the probability that a neighbour with degree $k$ contacts $i_0$. Because a neighbour of $i_0$ with degree
$k$ has to be contacted first in order to become infected, only $k-1$ edges are available for him or her to spread the
disease. Therefore, a neighbour must have at least degree two in order to first become infected and then contact
$i_0$. If a neighbour has degree $k\ge c+2$, he or she first selects $c$ of the available $k-1$ contacts and then attempts to spread the disease to them. Hence, the contact probabilities are
\begin{equation}
\theta_k=\left\{
\begin{array}{ll}
0, &k=0,1;\\
p, &k=2,\ldots,c+1;\\
\frac{c}{k-1}p, &k=c+2,c+3,\ldots.
\end{array}\right.
\end{equation}
The probability that a neighbour makes contact with $i_0$ depends on his or her degree. Hence, the degree distribution
of individuals in the first generation, i.e.\ those neighbours of $i_0$ that make contact with $i_0$, and of individuals in later generations in the backward branching process is altered by the fact that they have contacted another individual. Conditionally on the event that a contact has been made, call it $C$, the distribution of the degree $D^*$ of an
individual in the first and later generations of the susceptibility set process is given by
\begin{align}
P(D^*=k) &= P(\tilde D=k|C)\notag\\
&=\frac{P(C|\tilde D=k)P(\tilde D=k)}{\sum_{l=0}^\infty P(C|\tilde D=l)P(\tilde D=l)}\notag\\
&= \frac{\theta_k\tilde p_k}{\theta},
\end{align}
so
\begin{equation}
P(D^*=k)=\left\{
\begin{array}{ll}
0,&k=0,1;\\
\frac{p\tilde p_k}{\theta}, &k=2,\ldots,c+1;\\
\frac{cp\tilde p_k}{(k-1)\theta}, &k=c+2,c+3,\ldots.
\end{array}\right.
\end{equation}
An individual in later generations of the process will be contacted by any of his or her neighbours independently of other neighbours with
the same probability $\theta$. Given that this individual has degree $k$, the number of neighbours that contact him or her is binomially distributed with parameters $k-1$ and $\theta$. Hence, the unconditional distribution of the number of neighbours that contact an individual in later generations is mixed binomial with parameters $D^*-1$ and $\theta$.
If the approximating backward branching process contains few individuals, it is unlikely that $i_0$ will be infected, whereas if the process reaches a large number of individuals (i.e.\ grows infinitely large), there is a positive probability that $i_0$ will not escape infection. More specifically, the probability that $i_0$ will be infected during a major outbreak is given by the survival probability of the backward branching process. Because $i_0$ is chosen randomly, we also have that the relative size of an outbreak in case of a major outbreak is given by the survival probability of the backward branching process. Let $Y$ be the number of offspring of the ancestor and $Y^*$ the number of offspring of individuals in later generations in the approximating branching process, respectively. Hence, $Y\sim\mixbin(D,\theta)$ and $Y^*\sim\mixbin(D^*-1,\theta)$. We obtain the survival probability of the process similarly as in Subsection~\ref{Subsec:ProbMaj}. Let the extinction probability of a branching process with offspring distribution $Y^*$ be $\pi^*$. We have
\begin{align}\label{Eq:pistar}
\pi^* &= \sum_{j=0}^\infty(\pi^*)^jP(Y^*=j) = E((\pi^*)^{Y^*})\notag\\
&= E(E((\pi^*)^{Y^*}|D^*)) = E\left[(1-\theta+\theta\pi^*)^{D^*-1}\right]\notag\\
&=\sum_{k=2}^\infty(1-\theta+\theta\pi^*)^{k-1}P(D^*=k)\notag\\
&=\frac{p}{\theta}\sum_{k=2}^{c+1}(1-\theta+\theta\pi^*)^{k-1}\tilde p_k+\frac{cp}{\theta}\sum_{k=c+2}
^\infty(1-\theta+\theta\pi^*)^{k-1}\frac{\tilde p_k}{k-1}.
\end{align}
The solution to Equation~\eqref{Eq:pistar} for $\pi^*$ is obtained numerically. Let the extinction probability of the approximating branching process be $\pi'$. Then,
\begin{align}\label{Eq:piprime}
\pi' &= \sum_{j=0}^\infty(\pi^*)^jP(Y=j) = E((\pi^*)^Y)\notag\\
&= E(E((\pi^*)^Y|D)) = E\left[(1-\theta+\theta\pi^*)^D\right]\notag\\
&= f_D(1-\theta+\theta\pi^*),
\end{align}
where $f_D(\cdot)$ is the probability generating function of $D$ and $\pi^*$ is the solution to Equation~\eqref{Eq:pistar}. The solution to Equation~\eqref{Eq:piprime} is obtained numerically, and the relative final size of the epidemic in case of a major outbreak is $z=1-\pi'$.
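A Python sketch of this backward computation, continuing the earlier sketches, might read as follows; the function name is our own, and \texttt{fD} must be supplied as the probability generating function of $D$.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def relative_outbreak_size(pk_tilde, fD, p, c, tol=1e-12):
    """z = 1 - pi', via the fixed point of Eq. (pi*)."""
    k = np.arange(len(pk_tilde))
    theta_k = np.where(k >= c + 2, c * p / np.maximum(k - 1, 1),
                       np.where(k >= 2, p, 0.0))
    theta = np.dot(theta_k, pk_tilde)           # contact probability
    pi_s, prev = 0.0, 1.0
    while abs(pi_s - prev) > tol:               # iterate Eq. (pi*) starting from 0
        prev = pi_s
        s = 1.0 - theta + theta * pi_s
        pi_s = np.dot(s ** np.maximum(k - 1, 0) * theta_k, pk_tilde) / theta
    return 1.0 - fD(1.0 - theta + theta * pi_s)

k = np.arange(200)
pk = poisson.pmf(k, 8)
pk_tilde = k * pk / np.dot(k, pk)
fD = lambda s: np.exp(8.0 * (s - 1.0))          # pgf of D ~ Poisson(8)
print(relative_outbreak_size(pk_tilde, fD, p=0.5, c=3))
\end{verbatim}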
A rigorous proof that $z=1-\pi'$ is beyond the scope of this paper. It has been proved for Reed-Frost epidemics on random intersection graphs~\citep{ball2014} and on configuration model graphs~\citep{ball2013} that the proportion of the population infected during the epidemic converges in probability to the survival probability of the backward branching process. Similar arguments could also be used for our process to provide a formal proof. Additionally, we believe that the techniques described in~\cite{barbour2013} could be used to obtain stronger results for the whole epidemic process.
\section{Numerical results and simulations}\label{Sec:Results}
We now numerically examine the analytical results obtained in Section~\ref{Sec:Calculations}. In particular, we examine the relation between $R_0$, $\tau$, and $z$ and the parameters $c$ and $p$, and compare the RDS recruitment process with unrestricted epidemics. We use two different degree distributions in our calculations, the Poisson degree distribution and a variant of the power-law degree distribution with exponential cut-off given by $p_k\propto k^{-\alpha}\exp(-k/\kappa)$, $k=1,2,\ldots$, where $\alpha$ is the power-law exponent and $\kappa$ refers to the exponential cut-off~\cite[e.g.][]{newman2002epidemics}.
In Figure~\ref{Fig:R0}, we show the $R_0$ values for the RDS recruitment process with $c=3,5,10$ and the unrestricted epidemic for $p\in[0,1]$. Figure~\ref{Fig:R0}~(a) shows the results for the Poisson degree distribution with parameter $\lambda=8$ and Figure~\ref{Fig:R0}~(b) shows the results for the power-law degree distribution with parameters $\alpha=2$ and $\kappa=100$. For both degree distributions and a fixed value of $p$, the limitation imposed by the number of coupons on disease spread yields smaller $R_0$ values for the RDS recruitment process when compared to the unrestricted epidemic for all values of $c$. Especially for the power-law degree distribution, all values of $c$ give much smaller $R_0$ values than those of the unrestricted epidemic, and the value of $p$ for which $R_0$ becomes larger than 1 (i.e., the epidemic threshold) is larger than that of the unrestricted epidemic for all values of $c$.
\begin{figure}
\centering
\includegraphics[width=6cm]{R0Po8.pdf}
\includegraphics[width=6cm]{R0PLK100tau2.pdf}
\caption{Comparison of $R_0$ for unrestricted epidemics and RDS recruitment processes with 10, 5, and 3 coupons and $p\in[0,1]$. Plot (a) shows the results for the Poisson degree distribution with parameter $\lambda=8$ and plot (b) shows the results for the power-law degree distribution with parameters $\alpha=2$ and $\kappa=100$. The dashed horizontal lines show the threshold value $R_0=1$.}\label{Fig:R0}
\end{figure}
Figure~\ref{Fig:ProbMajSizeOutbreak} shows the values of $\tau$ and $z$ for the RDS recruitment process with $c=3,5,10$ and the unrestricted epidemic for $p\in[0,1]$. Figures~\ref{Fig:ProbMajSizeOutbreak}~(a) and \ref{Fig:ProbMajSizeOutbreak}~(b) show the results for $\tau$ and $z$, respectively, for the Poisson degree distribution with parameter $\lambda=8$ and Figures~\ref{Fig:ProbMajSizeOutbreak}~(c) and \ref{Fig:ProbMajSizeOutbreak}~(d) show the results for $\tau$ and $z$, respectively, for the power-law degree distribution with parameters $\alpha=2$ and $\kappa=100$. The relative size of a major outbreak is always smaller than the probability of a major outbreak for both degree distributions. For both degree distributions, the probability of a major outbreak for the RDS recruitment process is smaller than that of the unrestricted epidemic for small values of $p$ and approaches that of the unrestricted epidemic when $p\to1$. For the power-law degree distribution, the size of a major outbreak is much smaller than that of the unrestricted epidemic for all values of $c$ and $p$.
\begin{figure}
\centering
\includegraphics[width=6cm]{MajorOutbreakPo8.pdf}
\includegraphics[width=6cm]{SizeOutbreakPo8.pdf}\vspace{10pt}
\includegraphics[width=6cm]{MajorOutbreakPLK100tau2.pdf}
\includegraphics[width=6cm]{SizeOutbreakPLK100tau2.pdf}
\caption{Comparison of the asymptotic probability of a major outbreak and relative size of a major outbreak for
unrestricted epidemics and RDS recruitment processes with 10, 5, and 3 coupons and $p\in[0,1]$. Plots (a) and (b) show
the results for the Poisson degree distribution with parameter $\lambda=8$ and plots (c) and (d) show the results for
the power-law degree distribution with parameters $\alpha=2$ and $\kappa=100$.}\label{Fig:ProbMajSizeOutbreak}
\end{figure}
We also make a brief evaluation of the adequacy of our asymptotic results in finite populations by means of
simulations. From simulated RDS recruitment processes (as described by the model), we estimate the probability of a major outbreak and the relative
size of a major outbreak in case of a major outbreak by the relative proportion of major outbreaks and the mean relative size of major outbreaks, respectively. Given a degree distribution and number of coupons $c$, let $p_c$ be the smallest value of $p$ for which the process is above the epidemic threshold. Each simulation run consists of generating a network of size 5000 by an erased configuration model approach \citep{britton2006}, for which we make use of the iGraph R package~\citep{igraph}. Then, RDS recruitment processes are run on the generated network for values of $p\in[p_c,1]$. In Figure~\ref{Fig:Simulations}, we show the estimated probability of a major outbreak $\hat\tau$ and the estimated relative size of a major outbreak in case of a major outbreak $\hat z$ for varying $p$, together with the corresponding asymptotic results. Figure~\ref{Fig:Simulations}~(a) shows the results for the Poisson degree distribution with parameter $\lambda=12$ from 5000 simulation runs of RDS recruitment processes with 3 coupons. Figure~\ref{Fig:Simulations}~(b) shows the results for the power-law degree distribution with parameters $\alpha=2.5$ and $\kappa=50$ from 5000 simulation runs of RDS recruitment processes with 10 coupons. In both Figures~\ref{Fig:Simulations}~(a) and (b), we show error bars for the estimates based on $\pm2$ standard errors, where the standard error for $\hat\tau$ is estimated as $SE(\hat\tau)=(\hat\tau(1-\hat\tau)/m)^{1/2}$, where $m$ is the number of simulations, and the standard error for $\hat z$ is estimated as $SE(\hat z)=(\hat\sigma^2/m_{maj})^{1/2}$, where $\hat\sigma^2$ is the sample variance of the relative final sizes of major outbreaks and $m_{maj}$ is the number of simulations resulting in a major outbreak.
\begin{figure}
\centering
\includegraphics[width=6cm]{SimulationsPo12c3N5000Runs5000Treshold100.pdf}
\includegraphics[width=6cm]{SimulationsPLTAu25K50c10N5000Runs5000Treshold100.pdf}
\caption{Comparison of results from simulations of RDS recruitment processes and the asymptotic probability and relative size of a major outbreak.
Plot (a) shows the results for the Poisson degree distribution with parameter $\lambda=12$ for processes with $c=3$ and plot (b) shows the
results for the power-law degree distribution with parameters $\alpha=2.5$ and $\kappa=50$ for processes with $c=10$. Note that the error bars for the simulated relative size are very narrow and not visible for most simulated values. Also note that the horizontal scales are different.}\label{Fig:Simulations}
\end{figure}
Note that it is not well defined what constitutes a major outbreak in small, finite populations. Usually, the threshold for when an outbreak constitutes a major outbreak is determined by inspecting the distribution of outbreak sizes. Typically, this distribution is bimodal with modes at 0 and $z$, corresponding to small and major outbreaks. In our model, outbreak sizes will depend on $p$. For $p$ close to $p_c$, where ``close'' depends on the degree distribution, small and major outbreaks are indistinguishable. Consequently, it is difficult to estimate $\tau$ and $z$ for such values of $p$. In Figure~\ref{Fig:Simulations}, we have chosen to set the (relatively small) threshold for major outbreaks to 2\% of the population over the whole interval $[p_c,1]$. This yields fairly accurate estimates for $p$ close to $p_c$ and does not affect estimates for $p$ further away from $p_c$.
We see that both the estimated probability of a major outbreak and the estimated relative size of major outbreak in case
of a major outbreak are very well approximated by the asymptotic results for both the evaluated degree distributions. As
pointed out in \citep{ball2009}, the relative size of the epidemic is more efficiently estimated than the probability of
a major outbreak because each simulation yields many (correlated) observations of the backward process and only one
observation of the forward process.
\section{Discussion and conclusions}\label{Sec:Discussion}
When the RDS recruitment process is compared to the corresponding unrestricted epidemic, it is clear that the limited number of coupons has a large impact on $R_0$ and the value of $p_c$ corresponding to the epidemic threshold, the probability of a major outbreak, and the relative size of a major outbreak in case of a major outbreak. This is especially true for the power-law degree distribution, for which in particular $R_0$ and $z$ are much smaller than for the corresponding unrestricted epidemic. In social networks with power-law degree distribution, the vast majority of individuals will have small degrees. For these individuals, the probability of being infected in an epidemic will be small. Also, such an individual will, once infected, have few or no contacts to spread the disease to. Hence, the spread of an epidemic in such networks will be highly dependent on a few individuals with very large degrees that have the capacity to infect many of their (small-degree) neighbours. Because of the relatively small value of $c$, the potential of large-degree individuals to spread the disease is greatly impaired in RDS compared to an unrestricted epidemic with the same $p$, hence impairing the spread of the epidemic as a whole.
The impact of the number of coupons on the RDS recruitment process may in part explain why some RDS studies experience difficulties in obtaining the desired sample size and/or recruiting a substantial proportion of the study population. Given $p$, the number of coupons will be crucial to whether $R_0$ is above or below the epidemic threshold for the recruitment process; in the latter case all recruitment chains will eventually fail. Moreover, the proportion of the population recruited by the RDS recruitment process may be small even when $p$ is relatively large and a major outbreak occurs. For some parameter combinations, the proportion reached can be very small; this is especially important to consider in webRDS. We illustrate this by considering the webRDS studies in~\cite{Bengtsson2012} and~\cite{Stein2014}. In both studies, each respondent was allowed to make 4 recruitments. In the latter study, 66\% of started recruitment chains had a depth of one generation (i.e.\ index case and one generation of recruitments) and 11\% had a depth of three generations or more. This indicates that $R_0$ is below the epidemic threshold for this study and that, therefore, recruitment never takes off. In the former study, the majority of recruitments came from long recruitment chains, implying that $R_0$ is above the epidemic threshold. Still, recruitment eventually declined and stopped completely before reaching a large part of the population, despite additional seeds joining the study. As we see in Section~\ref{Sec:Results}, however, relatively many parameter combinations with $R_0>1$ yield small $z$ values, which could explain the observed behaviour. For both studies, heterogeneity in network structure, such that locally $R_0<1$, may also be an explanation. It would be of interest to find proper inference procedures for our model to be used in further evaluation of actual RDS studies with respect to the quantities studied in this paper.
One might consider other ways to distribute coupons. The coupon distribution mechanism in our model, where a respondent selects some of his or her neighbours for attempted coupon transfer while ignoring those neighbours that were not selected, is most similar to a webRDS process. In a physical RDS study where coupons are handed over from person to person, a respondent may attempt to distribute a coupon to another neighbour if the originally intended recipient declines (here, distributing a coupon implies study participation). This modified mechanism is given as follows. A respondent first attempts to give a coupon to a randomly chosen neighbour. If the coupon is rejected, the respondent may try to distribute the same coupon to another neighbour, randomly chosen among those who previously have not been offered a coupon. When the coupon is accepted, the procedure is repeated starting by randomly selecting among those neighbours that have not been offered a coupon. When there are no more neighbours and/or coupons left, no further distribution attempts are made. The offspring probabilities in the branching process are the same as previously for individuals with degree less than the number of coupons, but the distribution of the number of coupons given out by an individual with degree larger than $c$ will be tilted towards larger values compared to the previous model. The probabilities in Eq.~\eqref{Eq:CondNumberInfectedIndex} now become
\begin{equation}
P(X=j|D=k)=\left\{
\begin{array}{ll}
\binom{k}{j}p^j(1-p)^{k-j},&j<c;\\
\sum_{i=c}^k\binom{k}{i}p^i(1-p)^{k-i},&j=c.
\end{array}\right.
\end{equation}
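For illustration, these probabilities amount to a binomial distribution with the upper tail lumped at $j=c$, which might be computed as in the following sketch (in the style of the earlier ones):
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def offspring_pmf_modified(k, p, c):
    """P(X = j | D = k), j = 0, ..., c, for the modified coupon mechanism."""
    j = np.arange(c + 1)
    pmf = binom.pmf(j, k, p)
    pmf[c] = binom.sf(c - 1, k, p)   # P(Bin(k, p) >= c), lumped at j = c
    return pmf
\end{verbatim}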
It is straightforward to calculate $R_0$ and $\tau$ using the same techniques as in Sections~\ref{Subsec:R0} and~\ref{Subsec:ProbMaj}. Figure~\ref{Fig:R0ModifiedRDS} shows the $R_0$ values for the modified RDS recruitment process with $c=3,5,10$ and the unrestricted epidemic for $p\in[0,1]$. In Figure~\ref{Fig:R0ModifiedRDS}~(a), we show the results for the Poisson degree distribution with $\lambda=8$ and in Figure~\ref{Fig:R0ModifiedRDS}~(b) we show the results for the power-law degree distribution with parameters $\alpha=2$ and $\kappa=100$. It is clear that $R_0$ is larger for the modified recruitment process for all $p$ compared to the process described in Subsection~\ref{Subsec:EpidemicModel}, and the $p$ value corresponding to the epidemic threshold is considerably smaller. When $p\to1$, the $R_0$ values converge to those seen in Figure~\ref{Fig:R0}.
\begin{figure}
\centering
\includegraphics[width=6cm]{R0ModRDSPo8.pdf}
\includegraphics[width=6cm]{R0ModRDSPLK100tau2.pdf}
\caption{Comparison of $R_0$ for unrestricted epidemics and RDS recruitment processes where a recruiter tries to distribute a coupon until success. Plot (a) shows the results for the Poisson degree distribution with parameter $\lambda=8$ and plot (b) shows the results for the power-law degree distribution with parameters $\alpha=2$ and $\kappa=100$.}\label{Fig:R0ModifiedRDS}
\end{figure}
Because the modified process has similar epidemic threshold values in terms of $p$ for different $c$, the corresponding $\tau$ values (not shown) are close to those for the unrestricted epidemic when $R_0>1$. For the final size of the epidemic, the calculations are much harder to derive and are thus out of the scope of this paper. There are several other complications that could be considered in terms of coupon distribution. For example, it is not likely that all coupon distribution attempts of a respondent will have the same success probability, both because the respondent may act differently depending on how many attempts he or she has previously made and because the relations to his or her neighbours may differ. Other complications include different respondent behaviour depending on (measurable) individual characteristics, geographical variations, and time dependence.
Overall, our results indicate that RDS studies which experience difficulties with respect to recruitment chain failure could benefit from an increased number of coupons, which would reduce the number of additional seeds needed. Furthermore, the longer recruitment chains obtained as a result of an increased number of coupons are more likely to reach remote parts of the population and meet equilibrium criteria for inference. As the recruitment potential of RDS increases from an increased number of coupons, the time to reach the desired sample size is shortened. Additionally, the study time is not subject to unexpected prolongation due to the addition of seeds. Hence, an increased number of coupons may result in lower and more predictable study costs. For webRDS studies in particular, the increase in the proportion of the population reached due to increasing the number of coupons facilitates larger sample sizes. We therefore advise that the recruitment potential of a planned RDS study should be considered beforehand so that the number of coupons could be chosen large enough to facilitate sustained recruitment and an acceptable sample size. Other factors may also increase recruitment potential. The coupon transfer probability $p$ could be increased by e.g.\ larger incentives or improved information about the study; this has an immediate effect on $R_0$, $\tau$, and $z$. Additionally, the selection of seeds could also affect recruitment capability, see e.g.\ \cite{wylie2013} where different seed selection methods produce very different recruitment scenarios. In general, it is of interest to further study why certain RDS studies are more successful in reaching the desired sample size with a modest number of seeds.
The presented epidemic model is a novel contribution to the area of stochastic epidemic models and although many results from Reed-Frost epidemics on configuration model networks are expected to hold for this model, several properties of it remain to be studied. There are a number of extensions that can be considered, e.g.\ different recruitment probabilities through unequally weighted edges, controlling for network structural properties, e.g.\ clustering, and modifying the coupon distribution mechanism as previously described.
\section*{Acknowledgements}
J.M. was supported by the Swedish Research Council, grant no. 2009-5759. T.B. and F.L. are grateful to Riksbankens jubileumsfond (contract
P12-0705:1) for financial support.
\bibliographystyle{JRSI}
\section{Introduction}
\label{sec:introduction}
In a plethora of problems in science and engineering, one needs to decide which action to take next, based on partial information about the options available: a doctor must prescribe a medicine to a patient, a manager must allocate resources to competing projects, an ad serving algorithm must decide where to place ads, etc. In practice, the underlying properties of each choice are only partially known at the time of the decision, but one hopes that the understanding of the caveats involved will improve as time passes.
This set of problems has an illustrative gambling analogy, where a person facing a row of slot machines needs to devise a playing strategy (policy): which arms to play and in which order. The aim is to maximize the expected reward after a certain set of actions. Statisticians have studied this abstraction under the name of the multi-armed bandit problem for decades, e.g., in the seminal works by \citet{j-Robbins1952,j-Robbins1956}. The multi-armed bandit setting consists of sequential interactions with the world with rewards that are independent and identically distributed, or the related contextual bandit case, in which the reward distribution depends on different information or `context' presented with each interaction. It has played an important role in many fields across science and engineering.
Several algorithms have been proposed to overcome the exploration-exploitation tradeoff in such problems, mostly based on heuristics, on upper confidence bounds, or on the Gittins index. From the former, the $\epsilon$-greedy approach (randomly pick an arm with probability $\epsilon$, otherwise be greedy) has become very popular due to its simplicity, while nonetheless retaining often good performance \cite{j-Auer2002}. In the latter case, \citet{j-Gittins1979} formulated a method based on computing the optimal strategy for certain types of bandits, where geometrically discounted future rewards are considered. There are several difficulties inherent to the exact computation of the Gittins index and thus, approximations have been developed as well \cite{j-Brezzi2002}. These and other intrinsic challenges of the method have limited its applicability \cite{b-Sutton1998}.
\citet{j-Lai1985} introduced another class of algorithms, based on upper confidence intervals of the expected reward of each arm, for which strong theoretical guarantees were proved \cite{j-Lai1987}. Nevertheless, these algorithms might be far from optimal in the presence of dependent and more general reward distributions \cite{j-Scott2010}. Bayesian counterparts of UCB-type algorithms have been proposed in \cite{ip-Kaufmann2012}, where it is shown that they provide a unifying framework for other variants of the UCB algorithm for distinctive bandit problems.
Recently, the problem has re-emerged both from a practical (importance in e-commerce and web applications, e.g., \cite{j-Li2010}) and a theoretical (research on probability matching algorithms and their regret bounds, e.g., \cite{j-Agrawal2011} and \cite{ip-Maillard2011}) point of view.
Contributing to this revival was the observation that
one of the oldest heuristics to address the exploration-exploitation tradeoff, i.e., Thompson sampling \cite{j-Thompson1935,j-Thompson1933}, has been empirically proven to perform satisfactorily (see \cite{ic-Chapelle2011} and \cite{j-Scott2015} for details). Contemporaneously, theoretical study established several performance bounds, both for problems with and without context \cite{j-Agrawal2012,j-Agrawal2012a,ic-Korda2013,j-Russo2014,j-Russo2016}.
In this work, we are interested in the randomized probability matching approach, as it connects to the Bayesian learning paradigm. It readily facilitates not only generative and interpretable modeling, but sequential and batch processing algorithm development too.
Specifically, we investigate the benefits of fully harnessing the posteriors obtained via the Bayesian sequential learning process. We hereby avoid distributional assumptions to allow for complicated relationships among action rewards, as long as Bayesian posterior updates are computable.
We explore the benefits of sampling the model posterior to estimate the sufficient statistics that drive randomized probability matching algorithms. Our motivation is cases where sampling from the model posterior is inexpensive relative to interacting with the world, which may be expensive or invasive or, as in the medical application domain, both. The goal is that, with informative posterior sufficient statistics, better decisions can be made, leading to a lower cumulative regret.
We propose a double sampling technique for the multi-armed bandit problem, based on (1) Monte Carlo sampling, to approximate otherwise unsolvable integrals, and (2), a sampling-based arm-selection policy.
The policy is driven by the uncertainty in the learning process, as it favors exploitation when certain about the properties of the arms, exploring otherwise. Due to this autonomous exploration-exploitation balancing technique, the proposed algorithm achieves improved average performance, with important regret reductions.
We formally introduce the problem in Section \ref{sec:problem_formulation}, before providing all the details of our proposed double sampling method in Section \ref{sec:proposed_method}. The performance of double sampling is compared to the Thompson sampling and Bayes-UCB alternatives in Section \ref{sec:evaluation}, and we conclude with final remarks in Section \ref{sec:conclusion}.
\section{Problem formulation}
\label{sec:problem_formulation}
We mathematically formulate the multi-armed bandit problem as follows. Let $a\in\{1,\cdots,A\}$ indicate the arms of the bandit (possible actions to take), and $f_{a}(y|\theta)$ the stochastic reward distribution of each arm. For every time instant, the observed reward $y_t$ is independently drawn from the reward distribution corresponding to the played arm. We denote as $a_t$ the arm played at time instant $t$; $a_{1:t} \equiv (a_1, \cdots , a_t)$ refers to the sequence of arms played up to time $t$, and similarly, $y_{1:t} \equiv (y_1, \cdots , y_t)$ to the sequence of observed rewards.
In the multi-armed bandit setting one must decide, based on observed rewards $y_{1:t}$ and actions $a_{1:t}$, which arm to play next in order to maximize rewards. Due to the stochastic nature of the rewards, their expectation under the arm's distribution is the statistic of interest. We denote each arm's expected reward as $\mu_{a}(\theta)=\mathbb{E}_{a}\{y|\theta\}$, which is parameterized by the arm-dependent parameters $\theta$.
When the properties of the arms (i.e., their parameters) are known, one can readily determine the optimal selection policy, i.e.,
\begin{equation}
a^*(\theta)=\mathop{\mathrm{argmax}}_{a}\mu_{a}(\theta) \; .
\end{equation}
However, the optimal solution for the multi-armed bandit is only computable in closed form in very few special cases \cite{j-Bellm1956, j-Gittins1979}, and it fails to generalize to more realistic reward distributions and scenarios \cite{j-Scott2010}. The biggest challenge occurs when the parameters are unknown, as one might end up playing the wrong arm forever if incomplete learning occurs \cite{j-Brezzi2000}.
Amongst the different algorithms to overcome these issues, randomized probability matching, i.e., playing each arm in proportion to its probability of being optimal, is a particularly appealing one. It has been shown to be easy to implement, efficient, and broadly applicable.
Given the parameters $\theta$, the expected reward of each arm is deterministic and, thus, one must pick the arm with the maximum expected reward
\begin{equation}
\mathrm{Pr}\left[a=a_{t+1}^*|a_{1:t}, y_{1:t}, \theta \right] = \mathrm{Pr}\left[a=a_{t+1}^*|\theta \right] = I_a(\theta),
\label{eq:theta_known_pr_arm_optimal}
\end{equation}
where we use the indicator function
\begin{equation}
I_a(\theta) = \begin{cases}
1, \; \mu_{a}(\theta)=\max\{\mu_1(\theta), \cdots, \mu_A(\theta)\} \;, \\
0, \; \text{otherwise} \;.
\end{cases}
\label{eq:indicator_arm_optimal}
\end{equation}
Under random probability matching, the aim is to compute the probability of a given arm $a$ being optimal for the next time instant, $p_{a,t+1}\in [0,1]$, even with unknown parameters. Mathematically,
\begin{equation}
\begin{split}
p_{a,t+1} &\equiv \mathrm{Pr}\left[a=a_{t+1}^* \big| a_{1:t}, y_{1:t}\right] \\
&\equiv \mathrm{Pr}\left[ \mu_{a}(\theta) = \max\{\mu_1(\theta), \cdots, \mu_A(\theta)\} \big| a_{1:t}, y_{1:t}\right].
\end{split}
\label{eq:theta_unknown_pr_arm_optimal}
\end{equation}
Note that there is an inherent uncertainty about the unknown properties of the arms, as Eqn.~\eqref{eq:theta_unknown_pr_arm_optimal} is parameterized by $\theta$. In order to compute a solution to this problem, recasting it as a Bayesian learning problem, where $\theta$ is a random variable, is of great help. It allows for computation of posterior and marginal distributions, with direct connection to sampling techniques.
\section{Proposed method: double sampling}
\label{sec:proposed_method}
The multi-armed bandit problem consists of two separate but intertwined tasks: (1) learning about the properties of the arms, and (2) deciding what arm to play next. The problem is sequential in nature, as one makes a decision on which arm to play and learns from the observed reward, one observation at a time.
We cast the multi-armed bandit problem as a sequential Bayesian learning task. By doing so, we capture the full state of knowledge about the world at every time instant. We incorporate any available prior information into the learning process, and update our knowledge about the unknown parameter $\theta$ as we sequentially play arms and observe rewards. This learning can be done either sequentially or in batches, as Bayesian posterior updates are computable in both cases \cite{b-Bernardo2009}.
However, the solution to the probability matching equation in \eqref{eq:theta_unknown_pr_arm_optimal} is analytically intractable, so we approximate it via Monte Carlo sampling. For balancing the exploration-exploitation tradeoff, we propose a sampling-based probability matching technique too. The proposed arm-selection policy is a function of the uncertainty in the learning process. The intuition is that we exploit only when certain about the properties of the arms, while we keep exploring otherwise.
We elaborate on the foundations of the proposed double sampling method in the following sections, before presenting it formally in Algorithm~\ref{alg:bayesianDoubleSampling}.
\subsection{Bayesian multi-armed bandits}
\label{ssec:bayesian_multi_armed_bandit}
We are interested in computing, after playing arms $a_{1:t}$ and observing rewards $y_{1:t}$, the probability $p_{a,t+1}$ of each arm $a$ being optimal for the next time instant. In practice, one needs to account for the lack of knowledge of each arm's properties, i.e., the unknown parameter $\theta$ in Eqn.~\eqref{eq:theta_unknown_pr_arm_optimal}.
We do so by following the Bayesian methodology, where the parameters are considered to be another set of random variables. The uncertainty over the parameters can be accounted for by marginalizing over their probability distribution.
Specifically, we marginalize over the posterior of the parameters after observing rewards and actions up to time $t$,
\begin{equation}
\begin{split}
p_{a,t+1} \equiv \mathrm{Pr}\left[a=a_{t+1}^* \big| a_{1:t}, y_{1:t}\right] &= f(a=a^*_{t+1}|a_{1:t}, y_{1:t}) \\
&=\int f(a=a^*_{t+1}|a_{1:t}, y_{1:t}, \theta) f(\theta|a_{1:t}, y_{1:t}) \mathrm{d}\theta \;.
\end{split}
\label{eq:pr_arm_optimal_bayes}
\end{equation}
Given a prior for the parameters $f(\theta)$ and the per-arm reward distribution $f_{a}(y|\theta)$, one can compute the posterior of each arm's parameters by
\begin{equation}
\begin{split}
f(\theta|a_{1:t}, y_{1:t}) &\propto f_{a_t}(y_t | \theta)f(\theta | a_{1:t-1}, y_{1:t-1}) \\
& \propto \left[\prod_{\tau=1}^t f_{a_{\tau}}(y_{\tau}|\theta)\right] f(\theta) \; .
\end{split}
\label{eq:seq_param_posterior}
\end{equation}
This posterior provides information (with uncertainty) about the characteristics of the arm. Note that the updates can usually be written in both sequential and batch forms. This flexibility is of great help in many practical scenarios, as one can learn from historic observations, as well as process data as it comes.
Even if analytical expressions for the parameter posteriors are available for many models of interest, computing the probability of any given arm being optimal is analytically intractable, due to the nonlinearities induced by the indicator function as in Eqn.~\eqref{eq:indicator_arm_optimal}
\begin{equation}
\begin{split}
p_{a,t+1} &=\int f(a=a^*_{t+1}|a_{1:t}, y_{1:t}, \theta) f(\theta|a_{1:t}, y_{1:t}) \mathrm{d}\theta = \int I_a(\theta) f(\theta|a_{1:t}, y_{1:t}) \mathrm{d}\theta \; .
\label{eq:pr_arm_optimal_bayes_indicator}
\end{split}
\end{equation}
\subsection{Monte-Carlo integration}
\label{ssec:mc_integration}
We harness the power of Monte Carlo sampling to compute the otherwise analytically intractable integral in Eqn.~\eqref{eq:pr_arm_optimal_bayes_indicator}. We obtain a Monte Carlo based random measure approximation to compute estimates of $p_{a,t+1}\in [0,1]$ as follows:
\begin{enumerate}
\item Draw $M$ parameter samples from the updated posterior distribution
\begin{equation}
\theta^{(m)}\sim f(\theta|a_{1:t}, y_{1:t}), \; \; m=\{1, \cdots, M\} \; .
\end{equation}
\item For each parameter sample $\theta^{(m)}$, compute the expected reward and determine the best arm
\begin{equation}
a_{t+1}^*(\theta^{(m)})=\mathop{\mathrm{argmax}}_{a}\mu_{a}(\theta^{(m)}) \; .
\end{equation}
\item Define the random measure approximation
\begin{equation}
f(a =a_{t+1}^*|a_{1:t}, y_{1:t}) \approx f_M(a =a_{t+1}^*|a_{1:t}, y_{1:t}) = \frac{1}{M} \sum_{m=1}^M \delta\left(a - a_{t+1}^*(\theta^{(m)}) \right) ,
\label{eq:pr_arm_optimal_bayes_MC}
\end{equation}
where $\delta(\cdot)$ denotes the Dirac delta function.
\item Estimate the first- and second-order sufficient statistics of $f_M(a =a_{t+1}^*|a_{1:t}, y_{1:t})$, i.e.,
\begin{equation}
\begin{cases}
\hat{p}_{a,t+1}=\eValue{f_M(a =a_{t+1}^*|a_{1:t}, y_{1:t})} =\frac{1}{M}\sum_{m=1}^M I_a\left(\theta^{(m)}\right) \; , \\
\hat{\sigma}^2_{a,t+1}=\mathbb{V}\mathrm{ar}\{f_M(a =a_{t+1}^*|a_{1:t}, y_{1:t})\} =\frac{1}{M} \sum_{m=1}^M \left(I_a\left(\theta^{(m)}\right)- \hat{p}_{a,t+1} \right)^2 \; .
\end{cases}
\label{eq:pr_arm_optimal_bayes_MC_suff_statistics}
\end{equation}
\item Estimate which is the optimal arm and with what probability
\begin{equation}
\begin{cases}
\hat{a}_{t+1}^* =\mathop{\mathrm{argmax}}_{a} \hat{p}_{a,t+1} \; , \\
\hat{p}^*_{a,t+1}=\max_{a} \hat{p}_{a,t+1} \; .
\end{cases}
\end{equation}
\end{enumerate}
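For concreteness, steps 1--4 above can be sketched in Python as follows, for the case where the per-arm expected rewards are directly computable from each posterior draw; the function names are our own, and the Beta posterior in the usage example is an arbitrary illustration.
\begin{verbatim}
import numpy as np

def mc_arm_statistics(theta_samples, expected_reward):
    """Monte Carlo estimates of p_hat and sigma_hat from M posterior draws."""
    mu = np.array([expected_reward(th) for th in theta_samples])  # shape (M, A)
    ind = np.eye(mu.shape[1])[np.argmax(mu, axis=1)]              # I_a(theta^(m))
    p_hat = ind.mean(axis=0)
    sigma_hat = np.sqrt(((ind - p_hat) ** 2).mean(axis=0))
    return p_hat, sigma_hat

# usage: Bernoulli arms, where the expected reward is theta itself
rng = np.random.default_rng(0)
theta_samples = rng.beta([2, 5, 6], [4, 3, 2], size=(1000, 3))    # M = 1000, A = 3
p_hat, sigma_hat = mc_arm_statistics(theta_samples, lambda th: th)
\end{verbatim}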
\subsection{Sampling-based policy}
\label{ssec:sampling_policy}
In any bandit setting, given the available information at time $t$, one needs to decide which arm to play next. A randomized probability matching technique would pick the next arm $a$ with probability $p_{a,t+1}$. On the contrary, a greedy approach would choose the arm with the highest probability of being optimal, i.e., $p^*_{a,t+1}$.
We present an alternative sampling-based probability matching arm-selection policy that finds a balance between these two cases. We rely on the Monte Carlo approximation to Eqn.~\eqref{eq:pr_arm_optimal_bayes_indicator}, and leverage the estimated sufficient statistics in Eqn.~\eqref{eq:pr_arm_optimal_bayes_MC_suff_statistics} to balance the exploration-exploitation tradeoff. We draw candidate arm samples from the random measure in Eqn.~\eqref{eq:pr_arm_optimal_bayes_MC}, and automatically adjust the probability matching technique according to the accuracy of this approximation.
The number of candidate arm samples drawn is instrumental for our sampling-based policy. We automatically adjust its value according to the uncertainty on the optimality of each arm, i.e., $\sigma^2_{a,t+1}$. By doing so, we account for the uncertainty of the learning process in the arm-selection policy, dynamically balancing exploration and exploitation.
The number of candidate arm samples to draw is inversely proportional to the probability of not picking the optimal arm. We denote this probability as $p_{FA}$, which is computed for each arm as
\begin{equation}
p_{FA}^{(a)} =Pr\left(p_{a,t+1} > p^*_{a,t+1} \right) = 1 - F_{p_{a,t+1}}(p^*_{a,{t+1}}) \; ,
\end{equation}
where $p_{a,t+1}^*=\max_{a}p_{a,t+1}$. The true cumulative distribution function $F_{p_{a,t+1}}(\cdot)$ is analytically intractable as well, but we approximate it (based on the central limit theorem guarantees of the MC estimates) with a Gaussian truncated to the CDF's range
\begin{equation}
F_{p_{a,t+1}}(p^*_{a,{t+1}}) \approx \Phi_{[0,1]}\left(\frac{p^*_{a,{t+1}}-p_{a,t+1}}{\sigma_{a,t+1}}\right) \;.
\end{equation}
Since we can not exactly evaluate $p_{a,t+1}$ and $\sigma_{a,t+1}$, we resort to our Monte Carlo estimates in Eqn.~\eqref{eq:pr_arm_optimal_bayes_MC_suff_statistics} instead.
All in all, the proposed sampling policy proceeds as follows:
\begin{enumerate}
\item Determine $N_{t+1}$, the number of candidate arm samples to draw
\begin{equation}
\begin{split}
N_{t+1} \propto & \log\left(\frac{1}{p_{FA}}\right), \; \; p_{FA}=\frac{1}{A-1}\sum_{a \neq \hat{a}_{t+1}^*}p_{FA}^{(a)}\;, \\
& p_{FA}^{(a)} \approx 1- \Phi_{[0,1]}\left(\frac{\hat{p}^*_{a,{t+1}}-\hat{p}_{a,t+1}}{\hat{\sigma}_{a,t+1}}\right) \; .
\end{split}
\label{eq:policy_n_samples}
\end{equation}
\item Draw $N_{t+1}$ candidate arm samples
\begin{equation}
\hat{a}_{t+1}^{(n)} \sim \Cat{\hat{p}_{a,t+1}}, \; \; \; n=1,\cdots, N_{t+1} \; .
\end{equation}
\item Pick the most probable optimal arm, given drawn candidate arm samples $\hat{a}_{t+1}^{(n)}$
\begin{equation}
a_{t+1} = \text{Mode}\left(\hat{a}_{t+1}^{(n)}\right), \; \; \; n=1,\cdots,N_{t+1} \;.
\end{equation}
\end{enumerate}
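A Python sketch of steps 1--3 is given below. The additive constant and the cap \texttt{n\_max} on $N_{t+1}$ are our own assumptions, since the policy only specifies proportionality, and SciPy's truncated normal is one reading of $\Phi_{[0,1]}$ in Eq.~\eqref{eq:policy_n_samples}.
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def double_sampling_policy(p_hat, sigma_hat, rng, n_max=1000):
    """Pick the next arm from the MC estimates (p_hat, sigma_hat)."""
    a_star = int(np.argmax(p_hat))
    p_star = p_hat[a_star]
    p_fa = []
    for a in range(len(p_hat)):
        if a == a_star:
            continue
        sd = max(sigma_hat[a], 1e-12)
        lo, hi = (0.0 - p_hat[a]) / sd, (1.0 - p_hat[a]) / sd
        p_fa.append(1.0 - truncnorm.cdf(p_star, lo, hi, loc=p_hat[a], scale=sd))
    p_fa = max(np.mean(p_fa), 1e-12)
    n = min(int(np.ceil(np.log(1.0 / p_fa))) + 1, n_max)  # N prop. log(1/p_FA)
    draws = rng.choice(len(p_hat), size=n, p=p_hat / p_hat.sum())
    return int(np.bincount(draws, minlength=len(p_hat)).argmax())  # mode
\end{verbatim}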
By allowing for $N_{t+1}$ to be adjusted based upon the uncertainty of the learning process, we balance the exploration-exploitation tradeoff. We present full details of the proposed double sampling technique in Algorithm~\ref{alg:bayesianDoubleSampling}.
\begin{algorithm}
\begin{algorithmic}[1]
\REQUIRE Number of arms $A$, number of MC samples $M$, and horizon $T$
\REQUIRE Prior over model parameters $f(\theta)$ and per-arm reward distributions $f_a(y|x,\theta)$
\STATE $D=\emptyset$
\FOR{$t=1, \cdots, T$}
\STATE Draw $M$ posterior parameter samples
\begin{equation}
\theta_{t+1}^{(m)}\sim f(\theta|a_{1:t}, x_{1:t}, y_{1:t}), \; \; \; m=\{1, \cdots, M\}
\end{equation}
\STATE If applicable, receive context $x_{t+1}$
\FOR{$a=1, \cdots, A$}
\STATE Compute expected reward, \\
\qquad \qquad per parameter sample
\begin{equation}
\mu_{a,t+1}(\theta_{t+1}^{(m)})=\mu_{a}(x_{t+1},\theta_{t+1}^{(m)})
\end{equation}
\STATE Compute sufficient statistics
\begin{equation}
\begin{split}
&\qquad \; \; \hat{p}_{a,t+1}=\frac{1}{M}\sum_{m=1}^M I_a\left(\theta_{t+1}^{(m)}\right) \\
&\qquad \; \; \hat{\sigma}^2_{a,t+1}=\frac{1}{M} \sum_{m=1}^M \left(I_a\left(\theta_{t+1}^{(m)}\right)- \hat{p}_{a,t+1} \right)^2
\end{split}
\end{equation}
\ENDFOR
\STATE Compute estimates
\begin{equation}
\begin{cases}
\hat{a}_{t+1}^* =\mathop{\mathrm{argmax}}_{a} \hat{p}_{a,t+1} \; , \\
\hat{p}^*_{a,t+1}=\max_{a} \hat{p}_{a,t+1} \; .
\end{cases}
\end{equation}
\FOR{$a=1,\cdots, A$}
\STATE Compute probability of arm not being optimal
\begin{equation}
\hat{p}_{FA}^{(a)}=Pr\left(\hat{p}_{a,t+1} > \hat{p}^*_{a,t+1} \right)
\end{equation}
\ENDFOR
\STATE Compute the number of candidate arm samples
\begin{equation}
N_{t+1} \propto \log\left(\frac{1}{\hat{p}_{FA}}\right), \; \; \; \hat{p}_{FA}=\frac{1}{A-1}\sum_{a \neq \hat{a}_{t+1}^*}\hat{p}_{FA}^{(a)}
\end{equation}
\STATE Draw $N_{t+1}$ candidate arm samples
\begin{equation}
\hat{a}_{t+1}^{(n)} \sim \Cat{\hat{p}_{a,t+1}}, \; \; \; n=1,\cdots, N_{t+1}
\end{equation}
\STATE Play arm
\begin{equation}
a_{t+1} = \text{Mode}\left(\hat{a}_{t+1}^{(n)}\right), \; \; \; n=1,\cdots,N_{t+1}
\end{equation}
\STATE Observe reward $y_{t+1}$
\STATE Update $D=D \cup \left\{x_{t+1}, a_{t+1}, y_{t+1}\right\}$
\ENDFOR
\end{algorithmic}
\caption{Double sampling algorithm}
\label{alg:bayesianDoubleSampling}
\end{algorithm}
The proposed sampling policy reduces to a probabilistic matching regime when uncertain about the arms (i.e., $N_t \approx 1$), but favors exploitation ($N_t \gg 1$) when the probability of picking a suboptimal arm is low. In other words, double sampling exploits only when confident about the learned probabilities ($\hat{\sigma}_{a,t+1} \rightarrow 0$, $N_t \gg 1$), and picks the arm with the highest probability $\hat{p}_{a,t+1}$. However, for $N_t \approx 1$, a randomized probability matching is in play. Note that Thompson sampling is a special case of double sampling, when $N_{t}=1 \;, \forall t$.
To conclude, note that the sampling-based policy decides on the action to take next by drawing from an approximation to the posterior density $p_{a,t+1}$; more precisely, by probability matching on the expected return of each arm, estimated via Monte Carlo as in Eqn.~\eqref{eq:pr_arm_optimal_bayes_MC_suff_statistics}. For the derivation of performance bounds in multi-armed bandit problems and, in particular, regret bounds for posterior sampling techniques, one studies the expected returns of the arms. Due to the Monte Carlo guarantees on the convergence of the computed estimates ($\lim_{M \to \infty} \hat{p}_{a,t+1} = p_{a,t+1}$), and the randomized probability matching nature of double sampling, the regret bounds for our proposed technique are of the same order as those of any posterior sampling technique \cite{j-Agrawal2012a,j-Russo2016}. We argue that the discrepancies are in the multiplicative constants, which we evaluate in the following section.
\section{Evaluation}
\label{sec:evaluation}
We now empirically evaluate the performance of double sampling in both discrete and continuous contextual multi-armed bandit settings. We compare the performance of our proposed algorithm, to that of Thompson sampling \cite{ic-Chapelle2011} and Bayes-UCB \cite{ip-Kaufmann2012}.
On the one hand, \cite{ic-Chapelle2011} empirically show the significant advantages Thompson sampling offers for the Bernoulli and other cases, while theoretical guarantees are provided in \cite{j-Agrawal2012,j-Agrawal2012a,ic-Korda2013,j-Russo2014,j-Russo2016}. On the other hand, \cite{ip-Kaufmann2012} prove the asymptotic optimality of Bayes-UCB's finite-time regret bound for the Bernoulli case, and argue that it provides a unifying framework for several variants of the UCB algorithm for different bandit problems: parametric multi-armed bandits and linear Gaussian bandits.
We compare double sampling as in Algorithm \ref{alg:bayesianDoubleSampling} to these two state-of-the-art algorithms, in order to provide empirical evidence of the reduced cumulative regret of our proposed approach. We define cumulative regret as
\begin{equation}
R_t=\sum_{\tau=0}^t \eValue{\left(y^*_{\tau}-y_{\tau} \right)} = \sum_{\tau=0}^t \left(\mu_\tau^*-\bar{y}_{\tau}\right) \; ,
\end{equation}
where for each time instant $t$, $\mu_t^*$ denotes the expected reward of the optimal arm and $\bar{y}_{t}$ the empirical mean of the observed rewards under the executed policy. Note that even if the bandits considered are stationary (i.e., parameters are not dynamic), the expected rewards are indexed with time to accommodate their dependence on potentially time-dependent contexts $x_t$.
\subsection{Bernoulli bandits}
\label{ssec:bernoulli_bandits}
Bernoulli bandits are well suited for applications with binary rewards (i.e., success or failure of an action). The rewards of each arm are modeled as independent draws from a Bernoulli distribution with success probabilities $\theta_a$, i.e.,
\begin{equation}
f_a(y|\theta)=\theta_a^{y}(1-\theta_a)^{(1-y)} \; .
\end{equation}
For a Bernoulli reward distribution, the posterior parameter update can be computed using the conjugate prior distribution $f(\theta_a|\alpha_{a,0}, \beta_{a,0})=\Beta{\theta_a|\alpha_{a,0}, \beta_{a,0}}$. After observing actions $a_{1:t}$ and rewards $y_{1:t}$, the posterior parameter distribution follows an updated Beta distribution
\begin{equation}
f(\theta_a|a_{1:t}, y_{1:t}, \alpha_{a,0}, \beta_{a,0}) =f(\theta_a|\alpha_{a,t}, \beta_{a,t}) =\Beta{\theta_a|\alpha_{a,t}, \beta_{a,t}} \; ,
\end{equation}
with sequential updates
\begin{equation}
\begin{cases}
\alpha_{a,t}=\alpha_{a,t-1} + y_{t} \cdot \mathds{1}[a_t=a] \; ,\\
\beta_{a,t}=\beta_{a,t-1} + (1 - y_{t}) \cdot \mathds{1}[a_t=a] \; ,
\end{cases}
\end{equation}
or, alternatively, batch updates of the following form
\begin{equation}
\begin{cases}
\alpha_{a,t}=\alpha_{a,0} + \sum_{t|a_t=a} y_{t} \; ,\\
\beta_{a,t}=\beta_{a,0} + \sum_{t|a_t=a} (1-y_{t}) \; .
\end{cases}
\end{equation}
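Both update rules are straightforward to implement; a minimal Python sketch (with hypothetical per-arm arrays \texttt{alpha} and \texttt{beta} of hyperparameters):
\begin{verbatim}
def update_beta_posterior(alpha, beta, a, y):
    # Sequential conjugate update: only the played arm a changes;
    # alpha accumulates successes and beta accumulates failures.
    alpha[a] += y
    beta[a] += 1 - y
    return alpha, beta

# A posterior draw for probability matching is then simply
# theta ~ Beta(alpha[a], beta[a]), e.g. numpy.random.beta(alpha[a], beta[a]).
\end{verbatim}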
The sequential Bayesian learning process for a three-armed Bernoulli bandit with parameters $\theta=\left(0.4 \; 0.7 \; 0.8 \right)$ is illustrated in Fig. \ref{fig:pred_action_density}. We show the evolution of the probability of each arm being optimal as computed by our proposed algorithm, i.e., the Monte Carlo approximation to $p_{a,t+1}$. For all results to follow, we use $M=1000$ Monte Carlo samples, as larger values of $M$ do not significantly improve regret performance. In Fig. \ref{fig:n_samples}, we illustrate how double sampling is {\em automatically} adjusted according to the uncertainty of the learning process, via the number of arm samples to draw (i.e., $N_{t+1}$ in Eqn.~\eqref{eq:policy_n_samples}).
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{./figs/bernoulli/pred_action_density.pdf}
\caption{$\hat{p}_{a,t+1}$ over time.}
\label{fig:pred_action_density}
\end{subfigure}%
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{./figs/bernoulli/n_samples.pdf}
\caption{$N_{t+1}$ over time.}
\label{fig:n_samples}
\end{subfigure}
\caption{Illustrative execution of double sampling.}
\label{fig:approach_intuition}
\end{figure}
Let us elaborate on the exploration-exploitation tradeoff by following the double sampling bandit execution shown in Fig. \ref{fig:approach_intuition}. Observe how arm 0 is quickly ruled out as a candidate, while deciding which of the other two arms is optimal requires further learning. For some time ($t<200$), there is high uncertainty about the properties of these two arms (high variance in Fig. \ref{fig:pred_action_density}). Thus, double sampling favors exploration ($N_{t+1}\approx 1$ in Fig. \ref{fig:n_samples}), until the uncertainty about which arm is best is reduced. Once the algorithm becomes more certain about the better reward properties of arm 2 ($t>200$), double sampling gradually favors a greedier policy ($N_{t+1}>1$).
All in all, within periods of high uncertainty, the number of samples $N_{t+1}$ is kept low (i.e., exploration); on the contrary, when the learning is more accurate, it increases (i.e., exploitation). By means of the double sampling technique, we account for the uncertainty in the learning process and thus, the proposed algorithm can reduce the variance over the actions taken.
We plot in Fig. \ref{fig:bernoulli_correct_actions_compare} the empirical probabilities of each algorithm playing the optimal arm over 5000 realizations\footnote{All averaged results in this work are computed over 5000 realizations of the same set of parameters.} of a Bernoulli bandit with $A=2, \theta=\left(0.4 \; 0.8\right)$. Observe how, even if in expectation all algorithms take the same actions, the action variability of double sampling is considerably smaller.
\begin{figure}[!h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{./figs/bernoulli/correct_actions_TS_DS.pdf}
\label{fig:bernoulli_correct_actions_TS_DS}
\end{subfigure}%
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{./figs/bernoulli/correct_actions_BUCB_DS.pdf}
\label{fig:bernoulli_correct_actions_BUCB_DS}
\end{subfigure}
\caption{Averaged correct action probability (standard deviation as shaded region) with $A=2, \theta=\left(0.4 \; 0.8\right)$.}
\label{fig:bernoulli_correct_actions_compare}
\end{figure}
As a result, the cumulative regret of our proposed technique is lower than that of the compared alternatives, i.e., Thompson sampling and Bayes-UCB (see the averaged cumulative regrets in Fig. \ref{fig:bernoulli_cumulative_regret}).
\begin{figure}[!h]
\centering
\includegraphics[width=0.55\textwidth]{./figs/bernoulli/cumulative_regret.pdf}
\caption{Averaged cumulative regret (standard deviation shown as shaded region) with $A=2, \theta=\left(0.4 \; 0.8\right)$.}
\label{fig:bernoulli_cumulative_regret}
\end{figure}
For any bandit problem, the difficulty of learning the properties of each arm is a key factor in its regret performance. Intuitively, this difficulty relates to how close the expected rewards of the arms are. Mathematically, it can be captured by the divergence between arm reward distributions. By computing the minimum Kullback-Leibler (KL) divergence between arms, one quantifies how ``difficult'' a multi-armed bandit problem is, as established by the regret lower bound in \cite{j-Lai1985}.
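For Bernoulli arms this divergence has a closed form; an illustrative Python helper for the difficulty proxy used below (assuming all $\theta_a$ lie strictly in $(0,1)$):
\begin{verbatim}
import numpy as np

def bernoulli_kl(p, q):
    # KL( Bernoulli(p) || Bernoulli(q) ), with p, q strictly in (0, 1).
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def min_pairwise_kl(theta):
    # Minimum KL divergence over ordered pairs of distinct arms;
    # small values flag "hard" bandits (the regret lower bound grows).
    theta = np.asarray(theta, dtype=float)
    return min(bernoulli_kl(p, q)
               for i, p in enumerate(theta)
               for j, q in enumerate(theta) if i != j)
\end{verbatim}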
We evaluate the relative difference between the averaged cumulative regret of our proposed double sampling technique and the alternatives, i.e.,
\begin{equation}
\Delta_t^{(TS)} =\frac{R_{t}^{(DS)}}{R_{t}^{(TS)}}-1 \; \text{ and } \; \Delta_t^{(B-UCB)} =\frac{R_{t}^{(DS)}}{R_{t}^{(B-UCB)}}-1 \; ,
\label{eq:relative_cum_reg_dif}
\end{equation}
where $R_t^{(DS)}$ denotes the regret of the proposed double sampling approach at time $t$, $R_t^{(TS)}$, that of Thompson sampling, and $R_t^{(B-UCB)}$, that of Bayes-UCB.
We show in Fig. \ref{fig:bernoulli_relative_cumregret_kl} results for the above metric indexed by the KL divergence of a wide range of Bernoulli bandit parameterizations\footnote{Bernoulli bandits with $A=2$ and $A=3$ arms, for all per-arm parameter permutations in the range $\theta_a\in[0,1]$ with grid size $0.05$.}. Note that the KL metric may map many parameter combinations to the same point in Fig. \ref{fig:bernoulli_relative_cumregret_kl}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{./figs/bernoulli/min_KL_relDiff_Nmax1000_t1499_09.pdf}
\caption{Relative average cumulative regret differences at $t=1500$.}
\label{fig:bernoulli_relative_cumregret_kl}
\end{figure}
Double sampling performs significantly better than the alternatives when it is certain about the learned arm parameters. We obtain cumulative regret reductions of around 25\% and 40\% when compared to B-UCB (with optimal quantile parameter $\alpha_t=1/t$ as in \cite{ip-Kaufmann2012}) and Thompson sampling, respectively. However, when the best arms are very similar ($KL<0.25$), performance worsens. First, recall that the regret lower bound increases for all bandits with small KL values \cite{j-Lai1985}. Second, note that when the properties of the arms are very similar to each other, our algorithm resorts to a Thompson sampling-like policy ($N_{t+1}\approx1$), yielding near-equivalent performance.
Finally, we observe that for the challenging cases (i.e., $KL<0.25$) cumulative regret shows high variance for the three alternatives (points are scattered in Fig. \ref{fig:bernoulli_relative_cumregret_kl}). On the contrary, when the difference between arm properties is distinguishable ($KL>0.25$), the proposed double sampling technique considerably reduces cumulative regret for Bernoulli multi-armed bandits.
\subsection{Contextual linear Gaussian bandits}
\label{ssec:contextLinearGaussian_bandits}
Another set of well studied bandits are those with continuous reward functions and, in particular, those with contextual dependencies. That is, the reward distribution of each arm is dependent on a time-varying $d$-dimensional context vector $x_t\in\Real^{d}$.
The contextual linear Gaussian bandit model is suited for these scenarios, where the expected reward of each arm is linearly dependent on the context $x\in\Real^{d}$, and the idiosyncratic parameters of the bandit $\theta\equiv\{w, \sigma\}$. That is, the per-arm reward distribution follows
\begin{equation}
f_a(y|x,\theta)=\N{y|x^\top w_a, \sigma_a^2} \; .
\end{equation}
For such reward distribution, the posterior can be computed with the Normal Inverse Gamma conjugate prior distribution
\begin{equation}
\begin{split}
f(w_a, \sigma_a^2|u_{a,0}, V_{a,0},\alpha_{a,0}, \beta_{a,0}) &= \NIG{w_a, \sigma_a^2|u_{a,0}, V_{a,0},\alpha_{a,0}, \beta_{a,0}} \\
& = \N{w_a|u_{a,0}, \sigma_a^2 V_{a,0}} \cdot \Gamma^{-1}\left(\sigma_a^2|\alpha_{a,0}, \beta_{a,0}\right) \;. \\
\end{split}
\end{equation}
Given previous actions $a_{1:t}$, contexts $x_{1:t}$ and rewards $y_{1:t}$, one obtains the following posterior
\begin{equation}
f(w_a, \sigma_a^2|a_{1:t},y_{1:t},u_{a,0}, V_{a,0},\alpha_{a,0}, \beta_{a,0}) =\NIG{w_a, \sigma_a^2|u_{a,t}, V_{a,t},\alpha_{a,t}, \beta_{a,t}} \; ,
\end{equation}
where the parameters of the posterior are sequentially updated as
\begin{equation}
\begin{cases}
V_{a,t}^{-1} = V_{a,t-1}^{-1} + x_t x_t^\top \cdot \mathds{1}[a_t=a] \; ,\\
u_{a,t}= V_{a,t} \left( V_{a,t-1}^{-1} u_{a,t-1} + x_t y_{t}\cdot \mathds{1}[a_t=a]\right) \; ,\\
\alpha_{a,t}=\alpha_{a,t-1} + \frac{\mathds{1}[a_t=a]}{2} \; ,\\
\beta_{a,t}=\beta_{a,t-1} + \frac{\mathds{1}[a_t=a]\left(y_{t}-x_t^\top u_{a,t-1}\right)^2}{2\left(1+x_t^\top V_{a,t-1} x_t\right)} \; .
\end{cases}
\end{equation}
Alternatively, if data is collected in batches, one updates the posterior with
\begin{equation}
\begin{cases}
V_{a,t}^{-1}= V_{a,0}^{-1}+x_{{1:t}|t_a} x_{{1:t}|t_a}^\top \; ,\\
u_{a,t}=V_{a,t}\left(V_{a,0}^{-1}u_{a,0}+x_{{1:t}|t_a} y_{{1:t}|t_a}\right) \; ,\\
\alpha_{a,t}=\alpha_{a,0} + \frac{|t_a|}{2} \; ,\\
\beta_{a,t}=\beta_{a,0} + \frac{\left(y_{{1:t}|t_a}^\top y_{{1:t}|t_a} + u_{a,0}^\top V_{a,0}^{-1}u_{a,0} - u_{a,t}^\top V_{a,t}^{-1}u_{a,t} \right)}{2} \; ,
\end{cases}
\end{equation}
where $t_a=\{t|a_t=a\}$ indicates the set of time instances when arm $a$ is played.
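For reference, the sequential update for the played arm can be sketched as follows (illustrative Python; it inverts $V$ explicitly for clarity, whereas an implementation would track $V^{-1}$ directly):
\begin{verbatim}
import numpy as np

def update_nig_posterior(u, V, alpha, beta, x, y):
    # Rank-one Normal-Inverse-Gamma update for the played arm,
    # given context x of shape (d,) and the observed reward y.
    x = np.asarray(x, dtype=float)
    V_inv = np.linalg.inv(V)
    V_new = np.linalg.inv(V_inv + np.outer(x, x))
    u_new = V_new @ (V_inv @ u + x * y)
    alpha_new = alpha + 0.5
    beta_new = beta + (y - x @ u) ** 2 / (2.0 * (1.0 + x @ V @ x))
    return u_new, V_new, alpha_new, beta_new
\end{verbatim}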
We evaluate double sampling for the multi-armed contextual Gaussian bandit with uniform and uncorrelated context, i.e., $x_{i,t}\sim \U{0,1}, i \in \{1, \cdots, d\}, t \in \Natural$.
We again use the minimum KL divergence as a proxy for bandit complexity. The divergence is model agnostic, as many parameter combinations for any model may map to the same KL divergence value.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{./figs/linearGaussian/cumulative_regret.pdf}
\caption{Averaged cumulative regret comparison (standard deviation shown as shaded region) with $A=2$, $w_0=(0.4 \; 0.4)^\top$, $w_1=(0.8 \; 0.8)^\top$, $\sigma_0=\sigma_1=0.2$.}
\label{fig:linearGaussian_cumulative_regret_compare}
\end{figure}
We provide results for a specific two-armed contextual Gaussian bandit in Fig. \ref{fig:linearGaussian_cumulative_regret_compare}, and in Fig. \ref{fig:linearGaussian_relative_cumregret_kl}, average cumulative regret relative differences (as in Eqn.~\eqref{eq:relative_cum_reg_dif}) for a wide range of parameterizations\footnote{Contextual linear Gaussian bandits with per-dimension parameter $w_i \in [-1,1], i\in\{1,2\}$ with gaps of $0.1$, and $\sigma \in [0.1,1]$ with step size of $0.1$.} of two-dimensional contextual linear Gaussian bandits with two arms.
Again, when the reward difference between arms is easy to learn ($KL>0.25$), double sampling attains significant cumulative regret reductions. The regret improvement is most evident for models with large KL divergence, with cumulative regret reductions of up to 40\% and 50\% when compared to Thompson sampling and Bayes-UCB, respectively.
\begin{figure}[!h]
\centering
\includegraphics[width=0.75\textwidth]{./figs/linearGaussian/min_KL_relDiff_Nmax1000_t1499_09.pdf}
\caption{Relative average cumulative regret differences at $t=1500$.}
\label{fig:linearGaussian_relative_cumregret_kl}
\end{figure}
We argue that the comparative performance loss of Bayes-UCB in the contextual Gaussian case relates to the $\alpha_t = 1/t$ quantile value proposed by \cite{ip-Kaufmann2012}. Its justification comes from the bounds established for the Bernoulli bandit case, but no guarantees are provided for other bandits. That is, the optimal quantile values for Bayes-UCB are problem dependent and require careful analytical derivations. In contrast, our proposed double sampling algorithm does not require any manual tuning, as it autonomously balances the exploration-exploitation tradeoff by adjusting the number of candidate arm samples $N_{t+1}$ based on the learning uncertainty.
\section{Conclusion}
\label{sec:conclusion}
We have presented a new sampling-based probability matching technique for the multi-armed bandit setting. We formulated the problem as one of Bayesian sequential learning, and leveraged random sampling to overcome two of its main challenges: approximating the analytically intractable integrals, and automatically balancing the exploration-exploitation tradeoff. We have shown empirically that additional sampling from the model, which in many application domains is inexpensive in comparison with interacting with the world, can provide improved statistics and, ultimately, reduced regret. Encouraged by these findings, we aim to apply this technique to other reward distributions and to extend it to real application datasets.
\subsection{Software and Data}
The implementation of the proposed method is available in \href{https://github.com/iurteaga/bandits}{this public repository}. It contains all the software required for replication of the findings of this study.
\subsubsection*{Acknowledgments}
This research was supported in part by NSF grant SCH-1344668. We thank Hang Su and Edward Peng Yu for their feedback and comments on this work.
\section*{Introduction}
Since they appeared in a celebrated counterexample to the
Cancellation Problem due to W. Danielewski \cite{Dan89}, the surfaces
defined by the equations $xz-y\left(y-1\right)=0$ and $x^{2}z-y\left(y-1\right)=0$
in $\mathbb{C}^{3}$ and their natural generalisations, such as surfaces
defined by the equations $x^{n}z-P\left(y\right)=0$, where $P\left(y\right)$
is a nonconstant polynomial, have been studied in many different
contexts. Of particular interest is the fact
that they can be equipped with nontrivial actions of the additive
group $\mathbb{C}_{+}$. The general orbits of these actions coincide with the general fibers of $\mathbb{A}^{1}$-fibrations $\pi:S\rightarrow\mathbb{A}^{1}$,
that is, surjective morphisms with generic fiber isomorphic to an
affine line. Normal affine
surfaces $S$ equipped with an $\mathbb{A}^{1}$-fibration $\pi:S\rightarrow\mathbb{A}^{1}$
can be roughly classified into two classes according to the following
alternative: either $\pi:S\rightarrow\mathbb{A}^{1}$ is the unique
$\mathbb{A}^{1}$-fibration on $S$ up to automorphisms of the base,
or there exists a second $\mathbb{A}^{1}$-fibration $\pi':S\rightarrow\mathbb{A}^{1}$
with general fibers distinct from the ones of $\pi$.
Due to the symmetry between the variables $x$ and $z$, a surface
defined by the equation $xz-P\left(y\right)=0$ admits two distinct $\mathbb{A}^1$-fibrations
over the affine line. In contrast, it was established by L. Makar-Limanov \cite{ML01} that
on a surface $S_{P,n}$ defined by the equation $x^{n}z-P\left(y\right)=0$
in $\mathbb{C}^{3}$, where $n\geq2$ and where $P\left(y\right)$
is a polynomial of degree $r\geq2$, the projection ${\rm pr}_{x}:S_{P,n}\rightarrow\mathbb{C}$
is a unique $\mathbb{A}^{1}$-fibration up to automorphisms of the
base. In his proof, L. Makar-Limanov used the correspondence between algebraic $\mathbb{C}_+$-actions on an affine surface $S$ and locally nilpotent derivations of the algebra of regular functions on $S$. It turns out that his proof is essentially independent of the base field $k$ provided that we replace locally nilpotent derivations by suitable systems of Hasse-Schmidt derivations when the characteristic of $k$ is positive (see e.g., \cite{Cra06}).
The fact that an affine surface $S$ admits a unique $\mathbb{A}^{1}$-fibration
$\pi:S\rightarrow\mathbb{A}^{1}$ makes its study simpler. For instance,
every automorphism of $S$ must preserve this fibration. In this context,
a result due to J. Bertin \cite{Ber83} asserts that the identity
component of the automorphism group of such a surface is an algebraic
pro-group obtained as an increasing union of solvable algebraic subgroups
of rank $\leq1$. For surfaces defined by the equations $x^{n}z-P\left(y\right)=0$
in $\mathbb{C}^{3}$, the picture has been completed by L. Makar-Limanov
\cite{ML01} who gave explicit generators of their automorphism
groups. Similar results have been obtained over arbitrary base fields by A. Crachiola
\cite{Cra06} for surfaces defined by the equations $x^{n}z-y^{2}-\sigma\left(x\right)y=0$, where $\sigma\left(x\right)$ is a polynomial such that $\sigma\left(0\right)\neq 0$.
The latter surfaces are particular examples of a general class of
$\mathbb{A}^{1}$-fibred surfaces called \emph{Danielewski surfaces} \cite{DubG03}, that is, normal integral affine surfaces $S$ equipped with an $\mathbb{A}^{1}$-fibration
$\pi:S\rightarrow\mathbb{A}_{k}^{1}$ over an affine line with a fixed $k$-rational point
$o$, such that every fiber $\pi^{-1}\left(x\right)$, where $x\in\mathbb{A}_{k}^{1}\setminus\left\{ o\right\} $, is geometrically integral, and such that every irreducible component
of $\pi^{-1}\left(o\right)$ is geometrically integral. In this article, we consider Danielewski surfaces $S_{Q,n}$ in $\mathbb{A}_{k}^{3}$ defined by an equation of the form $x^{n}z-Q\left(x,y\right)=0$, where $n\geq2$ and where $Q\left(x,y\right)\in k\left[x,y\right]$
is a polynomial such that $Q\left(0,y\right)$ splits with $r\geq2$
simple roots in $k$. This class contains most of the surfaces considered by L. Makar-Limanov and A. Crachiola.
The paper is organised as follows. First, we briefly recall definitions about weighted rooted trees and the notion of equivalence of algebraic surfaces in an affine $3$-space.
In section $2$, we recall from \cite{DubG03} the
main facts about Danielewski surfaces and we review the correspondence between these surfaces and certain classes of weighted trees in a form appropriate to our needs. We also
generalise to arbitrary base fields $k$ some results which are only
stated for fields of characteristic zero in \cite{Dub02} and \cite{DubG03}.
In particular, the case of Danielewski surfaces which admit two $\mathbb{A}^{1}$-fibrations with distinct general
fibers is studied in Theorem \ref{thm:Comb_ML_Trivial}. We show that these surfaces correspond to Danielewski surfaces $S\left(\gamma\right)$
defined by the fine $k$-weighted trees $\gamma$ which are called \emph{combs} and we give explicit embeddings of them. This result generalises Theorem 4.2 in \cite{DubEmb05}.
In section $3$, we classify Danielewski surfaces $S_{Q,h}$ in $\mathbb{A}_{k}^{3}$
defined by equations of the form $x^{h}z-Q\left(x,y\right)=0$ and determine their automorphism groups. We remark that such a surface admits many embeddings as a surface $S_{Q,h}$. In particular, we establish in Theorem \ref{thm:Equivalent_charac} that these surfaces can always be embedded as a surface $S_{\sigma,h}$ defined by an equation of the form $x^{h}z-\prod_{i=1}^r\left(y-\sigma_i\left(x\right)\right)=0$ for a suitable collection of polynomials $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots,r}$. We say that these surfaces $S_{\sigma,h}$ are \emph{standard forms} of the Danielewski surfaces $S_{Q,h}$.
Next, we compute (Theorem \ref{thm:Main_auto_thm}) the automorphism groups of Danielewski surfaces in standard form. We show in particular that every such automorphism arises as the restriction of an algebraic automorphism of the ambient space.
Finally we consider the problem of extending automorphisms of a given Danielewski surface $S_{Q,h}$ to automorphisms of the ambient space $\mathbb{A}^3_k$. We show that this is always possible in the holomorphic category but not in the algebraic one. We give explicit examples which come from the study of multiplicative group actions on Danielewski surfaces. For instance, we prove that
every surface $S\subset\mathbb{A}_{\mathbb{C}}^{3}$ defined by the equation $x^{h}z-\left(1-x\right)P\left(y\right)=0$, where $h\geq2$ and where
$P\left(y\right)$ has $r\geq2$ simple roots, admits a nontrivial
$\mathbb{C}^{*}$-action which is algebraically inextendable but holomorphically
extendable to $\mathbb{A}_{\mathbb{C}}^{3}$. In particular, even the involution of the surface $S$ defined by the equation $x^{2}z-\left(1-x\right)P\left(y\right)=0$ induced by the endomorphism
$J\left(x,y,z\right)=\left(-x,y,\left(1+x\right)\left(\left(1+x\right)z+P\left(y\right)\right)\right)$
of $\mathbb{A}^3_{\mathbb{C}}$ does not extend to an algebraic automorphism of $\mathbb{A}_{\mathbb{C}}^{3}$.
\section{Preliminaries}
\subsection{Basic facts on weighted rooted trees}
\begin{defn}
A \emph{tree} is a nonempty, finite, partially ordered set $\Gamma=\left(\Gamma,\leq\right)$
with a unique minimal element $e_{0}$ called the \emph{root}, and
such that for every $e\in\Gamma$ the subset $\left(\downarrow e\right)_{\Gamma}=\left\{ e'\in\Gamma,\, e'\leq e\right\} $
is a chain for the induced ordering.
\end{defn}
\begin{enavant} \label{txt-def:subchain_def} A minimal sub-chain
$\overleftarrow{e'e}=\left\{ e'<e\right\} $ with two elements of
a tree $\Gamma$ is called \emph{an edge} of $\Gamma$. We denote
the set of all edges in $\Gamma$ by $E\left(\Gamma\right)$. An element
$e\in\Gamma$ such that $\textrm{Card}\left(\downarrow e\right)_{\Gamma}=m+1$
is said to be \emph{at level} $m$. The maximal elements $e_{i}=e_{i,m_{i}}$,
where $m_{i}=\textrm{Card}\left(\downarrow e_{i}\right)_{\Gamma}-1$,
of $\Gamma$ are called the \emph{leaves} of $\Gamma$. We denote
the set of those elements by $L\left(\Gamma\right)$. The maximal
chains of $\Gamma$ are the chains \begin{equation}
\Gamma_{e_{i,m_{i}}}=\left(\downarrow e_{i,m_{i}}\right)_{\Gamma}=\left\{ e_{i,0}=e_{0}<e_{i,1}<\cdots<e_{i,m_{i}}\right\} ,\quad e_{i,m_{i}}\in L\left(\Gamma\right).\label{eq:Maximal_Chain_Notation}\end{equation}
We say that $\Gamma$ has \emph{height} $h=\max\left(m_{i}\right)$.
The \emph{children} of an element $e\in\Gamma$ are the elements of
$\Gamma$ at relative level $1$ with respect to $e$, i.e.,
the minimal elements of the subset $\left\{ e'\in\Gamma,\: e'>e\right\} $
of $\Gamma$.
\end{enavant}
\begin{defn}
\label{def:fine_weighted_def1} A \emph{fine} $k$\emph{-weighted
tree} $\gamma=\left(\Gamma,w\right)$ is a tree $\Gamma$ equipped
with a weight function $w:E\left(\Gamma\right)\rightarrow k$ with
values in a field $k$, which assigns an element $w\left(\overleftarrow{e'e}\right)$
of $k$ to every edge $\overleftarrow{e'e}$ of $\Gamma$, in such
a way that $w\left(\overleftarrow{e'e_{1}}\right)\neq w\left(\overleftarrow{e'e_{2}}\right)$
whenever $e_{1}$ and $e_{2}$ are distinct children of the same element $e'$.
\end{defn}
\noindent In what follows, we frequently consider the following classes
of trees.
\begin{defn} \label{CombDef} Let $\Gamma$ be a rooted tree.
a) If all the leaves of $\Gamma$ are at the same level $h\geq 1$ and if there exists a unique element $\bar{e}_{0}\in\Gamma$ for which $\Gamma\setminus\left\{ \bar{e}_{0}\right\} $ is a nonempty disjoint union of chains then we say that $\Gamma$ is a \emph{rake}.
b) If $\Gamma\setminus L\left(\Gamma\right)$ is a chain then we say that $\Gamma$ is a \emph{comb}. Equivalently, $\Gamma$ is a comb if and only if every $e\in\Gamma\setminus L\left(\Gamma\right)$
has at most one child which is not a leaf of $\Gamma$.
\end{defn}
\begin{figure}[h]
\begin{pspicture}(1,0.6)(8,-1.5)
\rput(1,0){
\pstree[treemode=R,radius=2.5pt,treesep=0.5cm,levelsep=0.6cm]{\Tc{3pt}~[tnpos=a]{$e_0$}}{
\pstree{\TC*}{\skiplevels{1}\pstree{\TC*~[tnpos=a]{$\bar{e}_0$}}{
\pstree{\TC*}{\skiplevels{2} \TC*\endskiplevels}
\pstree{\TC*}{\skiplevels{2} \TC*\endskiplevels}
\pstree{\TC*}{\skiplevels{2} \TC*\endskiplevels}
}\endskiplevels}
}
\rput(-1.5,-1.2){ A rake rooted in $e_0$.}
\rput(5,0){
\pstree[treemode=R,radius=2.5pt,treesep=0.5cm,levelsep=1cm]{\Tc{3pt}~[tnpos=a]{$e_0$}}{
\pstree{\TC*}{ \pstree{\TC*} {\Tn \pstree{\TC*} {\TC*\TC*\TC*} \TC* } }
}
}
}
\rput(8,-1.2){ A comb rooted in $e_0$.}
\end{pspicture}
\end{figure}
\subsection{Algebraic and analytic equivalence of closed embeddings}
\indent\newline\noindent Here we briefly discuss the notions of algebraic
and analytic equivalences of closed embeddings of a given affine algebraic
surface in an affine $3$-space.
Let $S$ be an irreducible affine surface and let $i_{P_1}:S\hookrightarrow \mathbb{A}^3_{k}$ and $i_{P_2}:S\hookrightarrow \mathbb{A}^3_{k}$ be embeddings of $S$ in the same affine $3$-space as closed subschemes defined by polynomial equations $P_{1}=0$ and $P_{2}=0$, respectively.
\begin{defn}
\label{def:Algebraic_equiv_def} In the above setting, we say that the closed embeddings $i_{P_1}$ and $i_{P_2}$ are \emph{algebraically equivalent}
if one of the following equivalent conditions is satisfied:
1) There exists an automorphism $\Phi$ of $\mathbb{A}^3_{k}$ such that $i_{P_2}=i_{P_1}\circ\Phi$.
2) There exists an automorphism $\Phi$ of $\mathbb{A}_{k}^{3}$ and a nonzero constant
$\lambda\in k^{*}$ such that $\Phi^{*}P_{1}=\lambda P_{2}$.
3) There exist automorphisms $\Phi$ and $\phi$ of $\mathbb{A}_{k}^{3}$
and $\mathbb{A}_{k}^{1}$ respectively such that $P_{2}\circ\Phi=\phi\circ P_{1}$.
\end{defn}
\begin{enavant} \label{txt:Analytic_equivalence_def} Over the field $k=\mathbb{C}$ of complex numbers, one can also consider holomorphic automorphisms. With the notation of definition \ref{def:Algebraic_equiv_def}, two closed algebraic embeddings $i_{P_1}$ and $i_{P_2}$ of a given affine surface $S$ in $\mathbb{A}^3_{\mathbb{C}}$ are called \emph{holomorphically equivalent} if there exists a biholomorphism $\Phi:\mathbb{A}_{\mathbb{C}}^{3}\rightarrow\mathbb{A}_{\mathbb{C}}^{3}$ such that $i_{P_2}=i_{P_1}\circ\Phi$. Clearly, the embeddings $i_{P_2}$ and $i_{P_1}$ are holomorphically equivalent if and only if there exists a biholomorphism $\Phi:\mathbb{A}_{\mathbb{C}}^{3}\rightarrow\mathbb{A}_{\mathbb{C}}^{3}$ such that $\Phi^{*}\left(P_1\right)=\lambda P_2$ for a certain nowhere vanishing holomorphic function $\lambda$. Since there are many nonconstant holomorphic functions with this property on $\mathbb{A}_{\mathbb{C}}^{3}$, $\Phi$ need not preserve the algebraic families of level surfaces $P_{1}:\mathbb{A}_{\mathbb{C}}^{3}\rightarrow\mathbb{A}_{\mathbb{C}}^{1}$
and $P_{2}:\mathbb{A}_{\mathbb{C}}^{3}\rightarrow\mathbb{A}_{\mathbb{C}}^{1}$. So holomorphic equivalence is a weaker requirement than algebraic equivalence.
\end{enavant}
\section{Danielewski surfaces }
For certain authors, a Danielewski surface
is an affine surface $S$ which is algebraically isomorphic to a surface
in $\mathbb{C}^{3}$ defined by an equation of the form $x^{n}z-P\left(y\right)=0$,
where $n\geq1$ and $P\left(y\right)\in\mathbb{C}\left[y\right]$. These surfaces come
equipped with a surjective morphism $\pi={\rm pr}_{x}\mid_{S}:S\rightarrow\mathbb{A}^{1}$
restricting to a trivial $\mathbb{A}^{1}$-bundle over the complement
of the origin. Moreover, if the roots $y_{1},\ldots,y_{r}\in\mathbb{C}$
of $P\left(y\right)$ are simple, then the fibration $\pi={\rm pr}_{x}\mid_{S}:S\rightarrow\mathbb{A}^{1}$
factors through a locally trivial fiber bundle over the affine line
with an $r$-fold origin (see e.g., \cite{Dan89} and \cite{Fie94}).
In \cite{DubG03}, the first author used the term Danielewski surface
to refer to an affine surface $S$ equipped with a morphism $\pi:S\rightarrow\mathbb{A}^{1}$
which factors through a locally trivial fiber bundle in a similar
way as above. In what follows, we keep this point of view, which leads
to a natural geometric generalisation of the surfaces constructed
by W. Danielewski \cite{Dan89}. We recall that an $\mathbb{A}^{1}$\emph{-fibration}
over an integral scheme $Y$ is a faithfully flat (i.e., flat and
surjective) affine morphism $\pi:X\rightarrow Y$ with generic fiber
isomorphic to the affine line $\mathbb{A}_{K\left(Y\right)}^{1}$
over the function field $K\left(Y\right)$ of $Y$. The following
definition is a generalisation to arbitrary base fields $k$ of the
one introduced in \cite{DubG03}.
\begin{defn}
\label{def:DanSurf_Def} A \emph{Danielewski surface} is an integral
affine surface $S$ defined over a field $k$, equipped with an $\mathbb{A}^{1}$-fibration
$\pi:S\rightarrow\mathbb{A}_{k}^{1}$ restricting to a trivial $\mathbb{A}^{1}$-bundle
over the complement of a $k$-rational point $o$ of $\mathbb{A}_{k}^{1}$ and
such that the fiber $\pi^{-1}\left(o\right)$ is reduced, consisting
of a disjoint union of affine lines $\mathbb{A}_{k}^{1}$ over $k$.
\end{defn}
\begin{notation} In what follows, we fix an isomorphism $\mathbb{A}^1_k\simeq {\rm Spec}\left(k\left[x\right]\right)$ and we assume that the $k$-rational point $o$ is simply the ``origin'' of $\mathbb{A}^1_k$, that is, the closed point $(x)$ of ${\rm Spec}\left(k\left[x\right]\right)$.
\end{notation}
\begin{enavant} In the following subsections, we recall the correspondence between Danielewski surfaces and weighted rooted trees established by the first author in \cite{DubG03} in a form
appropriate to our needs. Although the results given in \emph{loc}.
\emph{cit}. are formulated for surfaces defined over a field of characteristic
zero, most of them remain valid without any changes over a field of
arbitrary characteristic. We provide full proofs only when additional
arguments are needed. Then we consider Danielewski surfaces
$S$ with a trivial canonical sheaf $\omega_{S/k}=\Lambda^{2}\Omega_{S/k}^{1}$.
We call them \emph{special Danielewski surfaces}. We give a complete
classification of these surfaces in terms of their associated weighted
trees.
\end{enavant}
\subsection{Danielewski surfaces and weighted trees}
\indent\newline\noindent Here we review the correspondence which associates to every fine $k$-weighted tree $\gamma=\left(\Gamma,w\right)$
a Danielewski surface $\pi:S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{1}={\rm Spec}\left(k\left[x\right]\right)$
which is the total space of an $\mathbb{A}^{1}$-bundle over
the scheme $\delta:X\left(r\right)\rightarrow\mathbb{A}_{k}^{1}$
obtained from $\mathbb{A}_{k}^{1}$ by replacing its origin $o$ by
$r\geq1$ $k$-rational points $o_{1},\ldots,o_{r}$.
\begin{notation}
In what follows we denote
by $\mathcal{U}_{r}=\left(X_{i}\left(r\right)\right)_{i=1,\ldots,r}$
the canonical open covering of $X\left(r\right)$ by means of the
subsets $X_{i}\left(r\right)=\delta^{-1}\left(\mathbb{A}_{k}^{1}\setminus\left\{ o\right\} \right)\cup\left\{ o_{i}\right\} \simeq\mathbb{A}_{k}^{1}$.
\end{notation}
\begin{enavant} \label{txt:Abstract_DS_morph}\label{pro:WeightedTree_2_DanSurf}
Let $\gamma=\left(\Gamma,w\right)$ be a fine $k$-weighted tree of
height $h$, with leaves $e_{i}$ at levels $n_{i}\leq h$, $i=1,\ldots,r$. To every
maximal sub-chain $\gamma_{i}=\left(\downarrow e_{i}\right)$
of $\gamma$ (see \ref{txt-def:subchain_def} for the notation) we associate a polynomial
\[
\sigma_{i}\left(x\right)=\sum_{j=0}^{n_{i}-1}w\left(\overleftarrow{e_{i,j}e_{i,j+1}}\right)x^{j}\in k\left[x\right],\quad i=1,\ldots,r.\]
We let $\rho:S\left(\gamma\right)\rightarrow X\left(r\right)$ be
the unique $\mathbb{A}^{1}$-bundle over $X\left(r\right)$ which
becomes trivial on the canonical open covering $\mathcal{U}_{r}$, and is
defined by pairs of transition functions \[
\left(f_{ij},g_{ij}\right)=\left(x^{n_{j}-n_{i}},x^{-n_{i}}\left(\sigma_{j}\left(x\right)-\sigma_{i}\left(x\right)\right)\right)\in k\left[x,x^{-1}\right]^{2},\quad i,j=1,\ldots,r.\]
This means that $S\left(\gamma\right)$ is obtained by gluing $r$
copies $S_{i}=\textrm{Spec}\left(k\left[x\right]\left[u_{i}\right]\right)$
of the affine plane $\mathbb{A}_{k}^{2}$ over $\mathbb{A}_{k}^{1}\setminus\left\{o\right\} \simeq\textrm{Spec}\left(k\left[x,x^{-1}\right]\right)$
by means of the transition isomorphisms induced by the $k\left[x,x^{-1}\right]$-algebras
isomorphisms\[
k\left[x,x^{-1}\right]\left[u_{i}\right]\stackrel{\sim}{\rightarrow}k\left[x,x^{-1}\right]\left[u_{j}\right],\quad u_{i}\mapsto x^{n_{j}-n_{i}}u_{j}+x^{-n_{i}}\left(\sigma_{j}\left(x\right)-\sigma_{i}\left(x\right)\right)\qquad i\neq j,\, i,j=1,\ldots,r.\]
This definition makes sense as the transition functions
$g_{ij}$ satisfy the twisted cocycle relation $g_{ik}=g_{ij}+x^{n_{j}-n_{i}}g_{jk}$
in $k\left[x,x^{-1}\right]$ for every triple of distinct indices $i$, $j$ and $k$. Since $\gamma$ is a fine weighted tree, it follows that for every pair of distinct indices $i$ and $j$, the rational function $g_{ij}=x^{-n_i}\left(\sigma_j\left(x\right)-\sigma_i\left(x\right)\right)\in k\left[x,x^{-1}\right]$ does not extend to a regular function on $\mathbb{A}^1_k$. This condition guarantees that
$S\left(\gamma\right)$ is a separated scheme, whence an affine surface by
virtue of Fieseler's criterion (see proposition 1.4 in \cite{Fie94}).
Therefore, $\pi_{\gamma}=\delta\circ\rho:S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{1}=\textrm{Spec}\left(k\left[x\right]\right)$
is a Danielewski surface, the fiber $\pi^{-1}\left(o\right)$ being
the disjoint union of affine lines \[
C_{i}=\pi_{\gamma}^{-1}\left(o\right)\cap S_{i}\simeq\textrm{Spec}\left(k\left[u_{i}\right]\right),\quad i=1,\ldots,r.\]
\end{enavant}
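\begin{example}
As an illustration of the construction, let $\gamma$ be the fine $k$-weighted tree consisting of a root $e_{0}$ with two leaves $e_{1}$ and $e_{2}$ at level $1$, with weights $w\left(\overleftarrow{e_{0}e_{1}}\right)=0$ and $w\left(\overleftarrow{e_{0}e_{2}}\right)=1$. Then $r=2$, $n_{1}=n_{2}=1$, $\sigma_{1}\left(x\right)=0$ and $\sigma_{2}\left(x\right)=1$, and $S\left(\gamma\right)$ is obtained by gluing two copies $S_{i}=\textrm{Spec}\left(k\left[x\right]\left[u_{i}\right]\right)$ of $\mathbb{A}_{k}^{2}$ over $\mathbb{A}_{k}^{1}\setminus\left\{ o\right\}$ via $u_{1}\mapsto u_{2}+x^{-1}$. The functions $x$, $y=xu_{i}+\sigma_{i}\left(x\right)$ and $z=xu_{1}u_{2}$ are regular on $S\left(\gamma\right)$ and satisfy \[
xz=\left(xu_{1}\right)\left(xu_{2}\right)=y\left(y-1\right),\]
so that $S\left(\gamma\right)$ is nothing but the first Danielewski surface $xz-y\left(y-1\right)=0$ recalled in the Introduction.
\end{example}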
\begin{enavant} \label{txt:canonical_morphism} A Danielewski surface $\pi_{\gamma}:S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{1}$
as above comes canonically equipped with a birational morphism $\left(\pi_{\gamma},\psi_{\gamma}\right):S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{1}\times\mathbb{A}_{k}^{1}=\textrm{Spec}\left(k\left[x\right]\left[t\right]\right)$
restricting to an isomorphism over $\mathbb{A}_{k}^{1}\setminus\left\{ o\right\} $. Indeed, this morphism corresponds to the unique regular function $\psi_{\gamma}$ on $S\left(\gamma\right)$
whose restrictions to the open subsets $S_i \simeq\textrm{Spec}\left(k\left[x\right]\left[u_{i}\right]\right)$ of $S$ are given by the polynomials \[\psi_{\gamma,i}=x^{n_i}u_{i}+\sigma_{i}\left(x\right)\in k\left[x\right]\left[u_i\right],\quad i=1,\ldots,r.\]
This function is referred to as the \emph{canonical
function} on $S\left(\gamma\right)$. The morphism $\left(\pi_{\gamma},\psi_{\gamma}\right):S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{2}$
is called the \emph{canonical birational morphism} from $S\left(\gamma\right)$
to $\mathbb{A}_{k}^{2}$.
\end{enavant}
\begin{enavant} It turns out that there exists a one-to-one correspondence between pairs $\left(S,\left(\pi,\psi\right)\right)$ consisting of a Danielewski surface $\pi:S\rightarrow\mathbb{A}_{k}^{1}$
and a birational morphism $\left(\pi,\psi\right):S\rightarrow\mathbb{A}_{k}^{2}$
restricting to an isomorphism outside the fiber $\pi^{-1}\left(o\right)$
and fine $k$-weighted trees $\gamma$. In particular, Proposition
3.4 in \cite{DubG03}, which remains valid over arbitrary base fields
$k$, implies the following result.
\end{enavant}
\begin{thm}
\label{thm:GenDanMor_2_Tree} For every pair consisting of a Danielewski
surface $\pi:S\rightarrow\mathbb{A}_{k}^{1}$ and a birational morphism
$\left(\pi,\psi\right):S\rightarrow\mathbb{A}_{k}^{1}\times\mathbb{A}_{k}^{1}$
restricting to an isomorphism over $\mathbb{A}_{k}^{1}\setminus\left\{ o\right\} $,
there exists a unique fine $k$-weighted tree $\gamma$ and an isomorphism
$\phi:S\stackrel{\sim}{\rightarrow}S\left(\gamma\right)$ such that
$\psi=\psi_{\gamma}\circ\phi$.
\end{thm}
\begin{rem}
\label{rem:Psi_2_Level1} If $\gamma=\left(\Gamma,w\right)$ is not
the trivial tree with one element then the canonical function $\psi_{\gamma}:S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{1}$
on the corresponding Danielewski surface $\pi:S\left(\gamma\right)\rightarrow\mathbb{A}_{k}^{1}$
is locally constant on the fiber $\pi^{-1}\left(o\right)$. It takes
the same value on two distinct irreducible components of $\pi^{-1}\left(o\right)$
if and only if the corresponding leaves of $\gamma$ belong to a same
subtree of $\gamma$ rooted in an element at level $1$. Since every Danielewski surface nonisomorphic
to $\mathbb{A}_{k}^{2}$ admits a birational morphism $\left(\pi,\psi\right)$
for which $\psi$ is locally constant but not constant on the fiber
$\pi^{-1}\left(o\right)$, it follows that every such surface corresponds to a tree $\gamma$ with at least two elements at level $1$.
\end{rem}
\subsection{$\mathbb{A}^{1}$-fibrations on Danielewski surfaces}
\indent\newline\noindent Suppose that the structural $\mathbb{A}^{1}$-fibration $\pi:S\rightarrow\mathbb{A}_{k}^{1}$ on a Danielewski surface $S$ is unique up to automorphisms of the base. Then a second Danielewski surface $\pi':S'\rightarrow\mathbb{A}_{k}^{1}$ will be isomorphic to $S$ as an abstract surface if and only if it is isomorphic to $S$ as a fibered surface, that is, if and only if there exists a commutative diagram
\[\xymatrix{ S \ar[r]^{\sim}_{\Phi} \ar[d]_{\pi} & S' \ar[d]^{\pi'} \\ \mathbb{A}^1_k \ar[r]^{\sim}_{\phi} & \mathbb{A}^1_k \;, }\]
where $\Phi:S\stackrel{\sim}{\rightarrow} S'$ is an isomorphism and $\phi$ is an automorphism of $\mathbb{A}^1_k$ preserving the origin $o$.
\begin{enavant} So it is useful to have a characterisation of those Danielewski surfaces admitting two $\mathbb{A}^1$-fibrations with distinct general fibers. The first result toward such a classification has been obtained by T. Bandman and L. Makar-Limanov \cite{BML01}, who established that a complex Danielewski
surface $S$ with a trivial canonical sheaf $\omega_{S}$ admits two
$\mathbb{A}^{1}$-fibrations with distinct general fibers if and only
if it is isomorphic to a surface $S_{P,1}$ in $\mathbb{A}_{\mathbb{C}}^{3}$
defined by the equation $xz-P\left(y\right)=0$, where $P$ is a polynomial
with simple roots. Over a field of characteristic zero, a complete classification has been given by the first author in \cite{DubG03} and \cite{DubEmb05}. It turns out that the main result of \cite{DubEmb05} remains valid over arbitrary base fields. This leads to the following characterisation.
\end{enavant}
\begin{thm}
\label{thm:Comb_ML_Trivial} For a Danielewski surface $\pi:S\rightarrow\mathbb{A}_{k}^{1}$, the following are equivalent:
1) $S$ admits two $\mathbb{A}^{1}$-fibrations with distinct general
fibers.
2) $S$ is isomorphic to a Danielewski surface $S\left(\gamma\right)$
defined by a fine $k$-weighted comb $\gamma=\left(\Gamma,w\right)$.
3) There exists an integer $h\geq1$ and a collection of monic polynomials
$P_{0},\ldots,P_{h-1}\in k\left[t\right]$ with simple roots $a_{i,j}\in k^{*}$,
$i=0,\ldots,h-1$, $j=1,\ldots,\deg_{t}\left(P_{i}\right)$, such
that $S$ is isomorphic to the surface $S_{P_{0},\ldots,P_{h-1}}\subset\textrm{Spec}\left(k\left[x\right]\left[y_{-1},\ldots,y_{h-2}\right]\left[z\right]\right)$
defined by the equations \[
\left\{ \begin{array}{lll}
xz-y_{h-2}{\displaystyle \prod_{l=0}^{h-1}}P_{l}\left(y_{l-1}\right)=0\\
zy_{i-1}-y_{i}y_{h-2}{\displaystyle \prod_{l=i+1}^{h-1}}P_{l}\left(y_{l-1}\right)=0 & xy_{i}-y_{i-1}{\displaystyle \prod_{l=0}^{i}}P_{l}\left(y_{l-1}\right)=0 & 0\leq i\leq h-2\\
y_{i-1}y_{j}-y_{i}y_{j-1}{\displaystyle \prod_{l=i+1}^{j}}P_{l}\left(y_{l-1}\right)=0 & & 0\leq i<j\leq h-2\end{array}\right.\]
\end{thm}
\begin{proof}
One checks in a similar way as in the proof of Theorem 2.9 in \cite{DubEmb05}
that a surface $S=S_{P_{0},\ldots,P_{h-1}}$ is a Danielewski surface $\pi={\rm pr}_{x}\mid_{S}:S\rightarrow\mathbb{A}_{k}^{1}$. Furthermore, the projection $\pi'={\rm pr}_{z}\mid_{S}:S\rightarrow\mathbb{A}_{k}^{1}$ is a second $\mathbb{A}^{1}$-fibration on $S$ restricting to
a trivial $\mathbb{A}^{1}$-bundle $\left(\pi'\right)^{-1}\left(\mathbb{A}_{k}^{1}\setminus\left\{ 0\right\} \right)\simeq\textrm{Spec}\left(k\left[z,z^{-1}\right]\left[y_{h-2}\right]\right)$
over $\mathbb{A}_{k}^{1}\setminus\left\{ 0\right\} $. So 3) implies
1). To show that 1) implies 2) we use the following fact, which is a consequence of a result due to M.H. Gizatullin \cite{Giz71}: if a nonsingular affine surface $S$ defined over an algebraically closed field $k$ admits an $\mathbb{A}^1$-fibration $q:S\rightarrow \mathbb{A}^1_k$, then this fibration is unique up to automorphisms of the base if and only if $S$ does not admit a completion by a nonsingular projective surface $\bar{S}$ for which the boundary divisor $\bar{S}\setminus S$ is a \emph{zigzag}, that is, a chain of nonsingular proper rational curves. In \cite{DubG03}, the first author constructed canonical
completions $\bar{S}$ of a Danielewski surface $S\left(\gamma\right)$
defined by a fine $k$-weighted tree $\gamma=\left(\Gamma,w\right)$
for which the dual graph $\Gamma'$ of the boundary divisor $\bar{S}\setminus S\left(\gamma\right)$
is isomorphic to the tree obtained from $\Gamma$ by deleting its
leaves and replacing its root by a chain with two elements. Clearly, $\bar{S}\setminus S\left(\gamma\right)$ is a zigzag if and only if $\Gamma$ is a comb. The construction given in \emph{loc}. \emph{cit}.
only depends on the existence of an $\mathbb{A}^{1}$-bundle
structure $\rho:S\left(\gamma\right)\rightarrow X\left(r\right)$
on a Danielewski surface $S\left(\gamma\right)$. So it remains valid over
an arbitrary base field $k$. Now let $S=S\left(\gamma\right)$ be a Danielewski surface
admitting two distinct $\mathbb{A}^{1}$-fibrations. Given an algebraic closure $\bar{k}$ of $k$, the surface $S_{\bar{k}}=S\times_{\textrm{Spec}\left(k\right)}\textrm{Spec}\left(\bar{k}\right)$
is a Danielewski surface isomorphic to the one defined by the
tree $\gamma$ considered as a fine $\bar{k}$-weighted tree via the
inclusion $k\subset\bar{k}$. Since every $\mathbb{A}^{1}$-fibration
$\pi:S\rightarrow\mathbb{A}_{k}^{1}$ lifts to an $\mathbb{A}^{1}$-fibration
$\pi_{\bar{k}}:S_{\bar{k}}\rightarrow\mathbb{A}_{\bar{k}}^{1}$ it
follows that $S_{\bar{k}}$ admits two $\mathbb{A}^{1}$-fibrations
with distinct general fibers. So we deduce from Gizatullin's criterion above
that $\gamma$ is a comb. Thus 1) implies 2).
It remains to show that every Danielewski surface $\pi:S=S\left(\gamma\right)\rightarrow \mathbb{A}^1_k$ defined by a fine $k$-weighted comb $\gamma$ of height $h\geq 1$ admits a closed embedding in an affine space as a surface $S_{P_0,\ldots, P_{h-1}}$. This follows from a general construction described in §4.6 of \cite{DubEmb05} that can be simplified in our more restrictive context. For the convenience of the reader, we indicate below the main steps of the proof. If $\gamma$ is a chain, then $S\left(\gamma\right)$ is isomorphic to the affine plane $\mathbb{A}^2_k$ which embeds in $\mathbb{A}^{h+2}_k$ as a surface $S_{P_0,\ldots ,P_{h-1}}$ for which all the polynomials $P_i$, $i=0,\ldots, h-1$ have degree one. We assume from now on that $\gamma$ has at least two elements at level $1$ (see Remark \ref{rem:Psi_2_Level1} above). We denote by $e_{0,0}<e_{1,0}<\cdots<e_{h-1,0}$ the
elements of the sub-chain $C=\Gamma\setminus L\left(\Gamma\right)$
of $\Gamma$ consisting of elements of $\Gamma$ which are not leaves
of $\Gamma$. For every $l=1,\ldots,h$, the elements of $\Gamma$
at level $l$ distinct from $e_{l,0}$ are denoted by $e_{l,1},\ldots,e_{l,r_{l}}$
provided that they exist. Since $\gamma$ is a comb, it follows from \ref{txt:Abstract_DS_morph} above that $S$ is isomorphic to the surface associated with a certain fine $k$-weighted tree with the same underlying tree $\Gamma$ as $\gamma$ and equipped with a weight function $w$ such that $w\left(\overleftarrow{e_{i,0}e_{i+1,0}}\right)=0$ for every index $i=0,\ldots,h-2$ and such that $w\left(\overleftarrow{e_{h-1,0}e_{h-1,1}}\right)=0$. We consider $S$ as an $\mathbb{A}^{1}$-bundle $\rho:S\rightarrow X\left(r\right)$
and we denote by $S_{i}=\textrm{Spec}\left(k\left[x\right]\left[u_{i}\right]\right)$
the trivialising open subsets of $S$ over $X\left(r\right)$. For every
$l=1,\ldots,h$ and every $i=1,\ldots,r_{l}$, we let $\tau_{l,i}\left(x,u_{i}\right)=xu_{i}+w\left(\overleftarrow{e_{l-1,0}e_{l,i}}\right)\in k\left[x\right]\left[u_{i}\right]$.
With this notation, the canonical function $\psi$ on $S$
restricts on an open subset $S_{i}$ corresponding to a leaf $e_{l,i}$
of $\Gamma$ at level $l$ to the polynomial $x^{l-1}\tau_{l,i}\left(x,u_{i}\right)\in k\left[x\right]\left[u_{i}\right]$. Therefore, $y_{-1}=\psi$ is constant with
the value $a_{0,i}=w\left(\overleftarrow{e_{0,0}e_{1,i}}\right)\in k^{*}$
on the irreducible component $\pi^{-1}\left(o\right)$ corresponding
to a leaf $e_{1,i}$, $i=1,\ldots,r_{1}$, at level $1$. It vanishes
identically on every irreducible component of $\pi^{-1}\left(o\right)$
corresponding to a leaf of $\gamma$ at level $l\geq2$. More generally, direct computations show that there exists a unique datum consisting of regular functions $y_{-1},\ldots,y_{h-2}$ and $y_{h-1}$ on $S$ and polynomials $P_i\in k\left[t\right]$, $i=0,\ldots, h-1$, satisfying the following conditions:
a) For every $l=0,\ldots,h-1$, and every $l\leq m\leq h$ , $y_{l-1}$
restricts on an open subset $S_{i}$ corresponding to a leaf $e_{m,i}$
of $\gamma$ at level $m$ to a polynomial $y_{l-1,i}\in k\left[x\right]\left[u_{i}\right]$
such that \begin{eqnarray*}
y_{l-1,i} & = & \left\{ \begin{array}{lll}
L_{l,i}\left(u_{i}\right) & \textrm{mod }x & \textrm{if }m=l\\
a_{l,i}+xL_{l+1,i}\left(u_{i}\right) & \textrm{mod }x^{2} & \textrm{if }m=l+1\\
\xi_{m}x^{m-l-1}\tau_{m,i}\left(x,u_{i}\right)+\nu_{m,i}x^{m-l} & \textrm{mod }x^{m-l+1} & \textrm{if }m>l+1,\end{array}\right.\end{eqnarray*}
where $L_{l,i}\left(u_{i}\right),L_{l+1,i}\left(u_{i}\right)\in k\left[u_{i}\right]$
are polynomials of degree $1$, $a_{l,i},\xi_{m} \in k^{*}$ and $\nu_{m,i}\in k$. Furthermore $a_{l,i}\neq a_{l,j}$ for every pair of distinct indices $i$ and $j$.
b) For every $l=0,\ldots,h-1$, $P_{l}$ is the unique monic polynomial
with simple roots $a_{l,1},\ldots,a_{l,r_{l}}$ such that $x^{-1}y_{l-1}\prod_{i=0}^{l-1}P_{i}\left(y_{i-1}\right)P_{l}\left(y_{l-1}\right)$ is a regular function on $S$.
By construction, these functions $y_{-1},\ldots,y_{h-2},y_{h-1}=z$ distinguish the irreducible
components of the fiber $\pi^{-1}\left(o\right)$ and induce coordinate
functions on them. It follows that the morphism $i=\left(\pi,y_{-1},\ldots,y_{h-1},z\right):S\hookrightarrow\mathbb{A}_{k}^{h+2}$
is an embedding. The same argument as in the proof of Lemma 3.6 in
\cite{DubEmb05} shows that $i$ is actually a closed embedding whose
image is contained in the surface $S_{P_{0},\ldots,P_{h-1}}\subset\mathbb{A}_{k}^{h+2}$
defined in Theorem \ref{thm:Comb_ML_Trivial} above. One checks that the induced morphism $\phi:S\rightarrow S_{P_{0},\ldots,P_{h-1}}$
defines a bijection between the sets of closed points of $S$
and $S_{P_{0},\ldots,P_{h-1}}$. Furthermore, $\phi$ is also birational as $y_{-1}$ induces an isomorphism $\pi^{-1}\left(\mathbb{A}^{1}\setminus\left\{ o\right\} \right)\stackrel{\sim}{\rightarrow}{\rm Spec}\left(k\left[x,x^{-1}\right]\left[y_{-1}\right]\right)$. Since $S_{P_{0},\ldots,P_{h-1}}$ is nonsingular, we conclude that $\phi$ is an isomorphism by virtue of Zariski
Main Theorem (see e.g., 4.4.9 in \cite{EGAIII}).
\end{proof}
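\begin{rem}
For instance, for $h=2$ the last family of equations in Theorem \ref{thm:Comb_ML_Trivial} is empty, and $S_{P_{0},P_{1}}\subset\textrm{Spec}\left(k\left[x\right]\left[y_{-1},y_{0}\right]\left[z\right]\right)$ is the surface defined by the three equations \[
xz-y_{0}P_{0}\left(y_{-1}\right)P_{1}\left(y_{0}\right)=0,\quad zy_{-1}-y_{0}^{2}P_{1}\left(y_{0}\right)=0,\quad xy_{0}-y_{-1}P_{0}\left(y_{-1}\right)=0,\]
obtained by letting $i=0$ in the second family. The projections ${\rm pr}_{x}$ and ${\rm pr}_{z}$ restrict on it to the two $\mathbb{A}^{1}$-fibrations of the statement.
\end{rem}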
\subsection{Special Danielewski surfaces }
\indent\newline\noindent It follows from the Adjunction
Formula that every Danielewski surface $S$ in $\mathbb{A}_{k}^{3}$
has a trivial canonical sheaf $\omega_{S/k}=\Lambda^2\Omega^1_{S/k}$. More generally, a Danielewski surface $\pi:S\rightarrow\mathbb{A}_{k}^{1}$
with a trivial canonical sheaf, or equivalently with
a trivial sheaf of relative differential forms $\Omega_{S/\mathbb{A}_{k}^{1}}^{1}$,
will be called \emph{special}.
\begin{enavant} \label{pro:Spec_DS_charac} These surfaces correspond
to a distinguished class of weighted trees $\gamma$. Indeed, it follows from the gluing construction given in \ref{txt:Abstract_DS_morph} above that a Danielewski surface $S\left(\gamma\right)$ admits a nowhere vanishing differential $2$-form if and only if all the leaves of $\gamma$ are at the same level. In turn,
this means that these surfaces $S$ are the total space of $\mathbb{A}^{1}$-bundles
$\rho:S\rightarrow X\left(r\right)$ over $X\left(r\right)$ defined
by means of transition isomorphisms \[
\tau_{ij}:k\left[x,x^{-1}\right]\left[u_{i}\right]\rightarrow k\left[x,x^{-1}\right]\left[u_{j}\right],\quad u_{i}\mapsto u_{j}+g_{ij}\left(x\right),\quad i,j=1,\ldots,r,\]
where $g=\left\{ g_{ij}\right\} _{i,j}\in C^1\left(X\left(r\right),\mathcal{O}_{X\left(r\right)}\right)\simeq k\left[x,x^{-1}\right]^{2r}$ is a \v{C}ech cocycle with
values in the sheaf $\mathcal{O}_{X\left(r\right)}$ for the canonical open
covering $\mathcal{U}_{r}$. So they can be equivalently characterised among Danielewski surfaces by the fact that the underlying $\mathbb{A}^1$-bundle $\rho:S\rightarrow X\left(r\right)$ is actually the structural morphism of a principal homogeneous $\mathbb{G}_a$-bundle.
\end{enavant}
\begin{enavant}To determine isomorphism classes of special
Danielewski surfaces, we can exploit the fact that
the group $\textrm{Aut}\left(X\left(r\right)\right)\simeq\textrm{Aut}\left(\mathbb{A}_{k}^{1}\setminus\left\{ o\right\} \right)\times\mathfrak{S}_{r}$
acts on the set $\mathbb{P}H^{1}\left(X\left(r\right),\mathcal{O}_{X\left(r\right)}\right)$
of isomorphism classes of $\mathbb{A}^{1}$-bundles as above. Indeed, for every $\phi\in\textrm{Aut}\left(X\left(r\right)\right)$,
the image $\phi\cdot\left[g\right]$ of a class $\left[g\right]\in\mathbb{P}H^{1}\left(X\left(r\right),\mathcal{O}_{X\left(r\right)}\right)$
represented by a bundle $\rho:S\rightarrow X\left(r\right)$ is the
isomorphism class of the fiber product bundle ${\rm pr}_{2}:\phi^{*}S=S\times_{X\left(r\right)}X\left(r\right)\rightarrow X\left(r\right)$.
The following criterion generalises a result of J. Wilkens \cite{Wil98}.
\end{enavant}
\begin{thm}
\label{thm:Iso_classes} Two special Danielewski surfaces $\pi_{1}:S_{1}\rightarrow\mathbb{A}_{k}^{1}$
and $\pi_{2}:S_{2}\rightarrow\mathbb{A}_{k}^{1}$ with underlying
$\mathbb{A}^{1}$-bundle structures $\rho_{1}:S_{1}\rightarrow X\left(r_{1}\right)$
and $\rho_{2}:S_{2}\rightarrow X\left(r_{2}\right)$ are isomorphic
as abstract surfaces if and only if $r_{1}=r_{2}=r$ and their isomorphism
classes in $\mathbb{P}H^{1}\left(X\left(r\right),\mathcal{O}_{X\left(r\right)}\right)$
belongs to the same orbit under the action of $\textrm{Aut}\left(X\left(r\right)\right)$.
\end{thm}
\begin{proof}
The condition guarantees that $S_{1}$ and $S_{2}$ are isomorphic.
Suppose conversely that there exists an isomorphism $\Phi:S_{1}\stackrel{\sim}{\rightarrow}S_{2}$.
The divisor class group of a special Danielewski surface $\pi:S\rightarrow\mathbb{A}_{k}^{1}$
is generated by the classes of the connected components $C_{1},\ldots,C_{r}$
of $\pi^{-1}\left(o\right)$ modulo the relation $C_{1}+\cdots+C_{r}=\pi^{-1}\left(o\right)\sim0$,
whence is isomorphic to $\mathbb{Z}^{r-1}$. Therefore, $r_{1}=r_{2}=r$
for a certain $r\geq1$. If one of the $S_{i}$'s, say $S_{1}$, is
isomorphic to a surface $S_{P,1}\subset\mathbb{A}_{k}^{3}$ defined by the equation
$xz-P\left(y\right)=0$, then the result follows from \cite{ML01}.
Otherwise, we deduce from Theorem \ref{thm:Comb_ML_Trivial} that
the $\mathbb{A}^{1}$-fibrations $\pi_{1}:S_{1}\rightarrow\mathbb{A}_{k}^{1}$
and $\pi_{2}:S_{2}\rightarrow\mathbb{A}_{k}^{1}$ are unique up to
automorphisms of the base. In turn, this implies that $\Phi$ induces
an isomorphism $\phi:X\left(r\right)\stackrel{\sim}{\rightarrow}X\left(r\right)$
such that $\phi\circ\rho_{1}=\rho_{2}\circ\Phi$. Therefore, $\Phi:S_{1}\stackrel{\sim}{\rightarrow}S_{2}$
factors through an isomorphism of $\mathbb{A}^{1}$-bundles $\tilde{\phi}:S_{1}\stackrel{\sim}{\rightarrow}\phi^{*}S_{2}$,
where $\phi^{*}S_{2}$ denotes the fiber product $\mathbb{A}^{1}$-bundle
${\rm pr}_{2}:\phi^{*}S_{2}=S_{2}\times_{X\left(r\right)}X\left(r\right)\rightarrow X\left(r\right)$.
This completes the proof as $\phi^{*}S_{2}\simeq S_{2}$.
\end{proof}
\section{Danielewski surfaces in $\mathbb{A}_{k}^{3}$ defined by an equation
of the form $x^{h}z-Q\left(x,y\right)=0$ and their automorphisms}
In this section, we study Danielewski surfaces $\pi:S\rightarrow\mathbb{A}_{k}^{1}$ not isomorphic to $\mathbb{A}^2_k$ admitting a closed embedding $i:S\hookrightarrow\mathbb{A}_{k}^{3}$ in the affine $3$-space as a surface $S_{Q,h}$ defined by the equation $x^{h}z-Q\left(x,y\right)=0$. We show that the same abstract Danielewski surface may admit many such closed embeddings. In particular, we establish that $S$ can be embedded as a surface $S_{\sigma,h}$ defined by an equation of the form $x^{h}z-\prod_{i=1}^r\left(y-\sigma_i\left(x\right)\right)=0$ for a suitable collection of polynomials $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots,r}$. Next we study the automorphism groups of the above surfaces $S$. We show that, in a closed embedding as a surface $S_{\sigma,h}$, every automorphism of $S$ explicitly arises as the restriction of an automorphism of the ambient space.
We will show in the next section that, on the contrary, this is not true for a general embedding as a surface $S_{Q,h}$.
\subsection{Danielewski surfaces $S_{Q,h}$}
\indent\newline\noindent A surface $S=S_{Q,h}$ in $\mathbb{A}_{k}^{3}$
defined by the equation $x^{h}z-Q\left(x,y\right)=0$ is a Danielewski surface
$\pi={\rm pr}_{x}\mid_{S}:S\rightarrow\mathbb{A}_{k}^{1}$
if and only if the polynomial $Q\left(0,y\right)$ splits with simple
roots $y_{1},\ldots,y_{r}\in k$, where $r=\deg_{y}\left(Q\left(0,y\right)\right)$.
If $r=1$, then $\pi^{-1}\left(o\right)\simeq\mathbb{A}_{k}^{1}$
and $\pi:S\rightarrow\mathbb{A}_{k}^{1}$ is isomorphic to a trivial
$\mathbb{A}^{1}$-bundle. Thus $S$ is isomorphic to the affine plane.
Otherwise, if $r\geq2$, then $S$ is not isomorphic to $\mathbb{A}_{k}^{2}$,
as follows for instance from the fact that the divisor class group
$\textrm{Div}\left(S\right)$ of $S$ is isomorphic to $\mathbb{Z}^{r-1}$,
generated by the classes of the connected components $C_{1},\ldots,C_{r}$
of $\pi^{-1}\left(o\right)$, with a unique relation $C_{1}+\ldots+C_{r}=\textrm{div}\left(\pi^{*}x\right)\sim0$.
The above class of Danielewski surfaces contains affine surfaces $S_{P,h}$ in $\mathbb{A}^3_k$ defined by an equation of the form $x^hz-P\left(y\right)=0$, where $P\left(y\right)$ is a polynomial which splits with simple roots $y_1,\ldots, y_r$ in $k$. Replacing the constants $y_{i}\in k$ by suitable polynomials $\sigma_{i}\left(x\right)\in k\left[x\right]$ leads to the following more general class of examples.
\begin{example}
\label{exa:Main_example} Let $h\geq1$ be an integer and let $\sigma=\left\{ \sigma_{i}\left(x\right)\right\} _{i=1,\ldots,r}$
be a collection of $r\geq2$ polynomials $\sigma_{i}\left(x\right)=\sum_{j=0}^{h-1}\sigma_{i,j}x^{j}\in k\left[x\right]$ such that
$\sigma_{i}\left(0\right)\neq\sigma_{j}\left(0\right)$ for every
$i\neq j$. The surface $S=S_{\sigma,h}$ in $\mathbb{A}_{k}^{3}={\rm Spec}\left(k\left[x,y,z\right] \right)$
defined by the equation \[x^{h}z-\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)=0\]
is a Danielewski surface $\pi={\rm pr}_{x}\mid_{S}:S\rightarrow\mathbb{A}_{k}^{1}$.
The fiber $\pi^{-1}\left(o\right)$ consists of $r$ copies
$C_{i}$ of the affine line defined by the equations $\left\{ x=0,y=\sigma_{i}\left(0\right)\right\} _{i=1,\ldots,r}$ respectively.
For every index $i=1,\ldots,r$, the open subset $S_{i}=S\setminus\bigcup_{j\neq i}C_{j}$
of $S$ is isomorphic to the affine plane $\mathbb{A}_{k}^{2}=\textrm{Spec}\left(k\left[x,u_{i}\right]\right)$,
where $u_{i}$ denotes the regular function on $S_{i}$ induced by
the rational function \[ u_{i}=x^{-h}\left(y-\sigma_{i}\left(x\right)\right)=z\prod_{j\neq i}\left(y-\sigma_{j}\left(x\right)\right)^{-1}\in k\left(S\right) \]
on $S$. It follows that $\pi:S\rightarrow\mathbb{A}_{k}^{1}$
factors through an $\mathbb{A}^{1}$-bundle $\rho:S\rightarrow X\left(r\right)$
isomorphic to the one with transition pairs $\left(f_{ij},g_{ij}\right)=\left(1,x^{-h}\left(\sigma_{j}\left(x\right)-\sigma_{i}\left(x\right)\right)\right)$,
$i,j=1,\ldots,r$.
The collection $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots,r}$ is exactly the one associated with the following fine $k$-weighted tree $\gamma=\left(\Gamma,w\right)$.
\begin{pspicture}(-4.6,2.5)(8,-2.7)
\rput(2,0){
\pstree[treemode=D,radius=2.5pt,treesep=1.2cm,levelsep=0.8cm]{\Tc{3pt}}{
\pstree{\TC*\mput*{{\scriptsize $\sigma_{1,0}$}}} {
\pstree{\TC*\mput*{$\sigma_{1,1}$}}{ \skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]}{
\TC*\mput*{$\sigma_{1,h-1}$}
}
\endskiplevels
}
}
\pstree{\TC*\mput*{{\scriptsize $\sigma_{2,0}$}}}{
\pstree{\TC*\mput*{$\sigma_{2,1}$}}{\skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]} {
\TC*\mput*{$\sigma_{2,h-1}$}
}
\endskiplevels
}
}
\pstree{\TC*\mput*{{\scriptsize $\sigma_{r-1,0}\;$}}} {
\pstree{\TC*\mput*{$\sigma_{r-1,1}$}}{ \skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]}{
\TC*\mput*{$\sigma_{r-1,h-1}$}
}
\endskiplevels
}
}
\pstree{\TC*\mput*{{\scriptsize $\sigma_{r,0}$} }}{
\pstree{\TC*\mput*{$\sigma_{r,1}$}}{\skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]} {
\TC*\mput*{$\sigma_{r,h-1}$}
}
\endskiplevels
}
}
}
}
\pnode(-0.2,-2.2){A}\pnode(4.2,-2.2){B}
\ncbar[angleA=270, arm=3pt]{A}{B}\ncput*[npos=1.5]{$r$}
\pnode(5,2){C}\pnode(5,-2){D}
\ncbar[arm=3pt]{C}{D}\ncput*[npos=1.5]{$h$}
\end{pspicture}
\noindent So $S$ is isomorphic to the corresponding Danielewski surface $\pi_{\gamma}:S\left(\gamma\right)\rightarrow \mathbb{A}^1_k$. By definition (see \ref{txt:canonical_morphism} above), the canonical function $\psi_{\gamma}$ on $S\left(\gamma\right)$ is the unique regular function restricting to the polynomial function $\psi_{\gamma,i}=x^hu_i+\sigma_i\left(x\right)\in k\left[x,u_i\right]$ on the trivialising open subsets $S_i\simeq\mathbb{A}^2_k$, $i=1,\ldots,r$ of $S\left(\gamma\right)$. So it coincides with the restriction of $y$ to $S$ under the above isomorphism. In the setting of Theorem \ref{thm:GenDanMor_2_Tree}, this means that $\gamma$ corresponds to the Danielewski surface $S$ equipped with the birational morphism ${\rm pr}_{x,y}:S\rightarrow \mathbb{A}^2_k$.
\end{example}
It turns out that, up to isomorphism, the above class of Danielewski surfaces $S_{\sigma,h}$ contains all possible Danielewski surfaces $S_{Q,h}$, as shown by the following result.
\begin{thm}
\label{thm:Equivalent_charac} Let $S_{Q,h}$ be a Danielewski surface in $\mathbb{A}^3_k$ defined by the equation $x^hz-Q\left(x,y\right)=0$, where $Q\left(x,y\right)\in k\left[x,y\right]$ is a polynomial such that $Q\left(0,y\right)$ splits with $r\geq 2$ simple roots in $k$. Then there exists a collection $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots, r}$ of polynomials of degree $\deg\left(\sigma_i\left(x\right)\right)< h$ such that $S_{Q,h}$ is isomorphic to the surface $S_{\sigma,h}$ defined by the equation
$x^hz-\prod_{i=1}^r\left(y-\sigma_i\left(x\right)\right)=0$.
\end{thm}
\begin{proof}
Since $Q\left(0,y\right)$ splits with simple roots $y_1,\ldots, y_r$ in $k$, a variant of the classical Hensel Lemma (see e.g., Theorem 7.18 p. 208 in \cite{Eis95}) guarantees that the polynomial $Q(x,y)$ can be written in a unique way as \[Q\left(x,y\right)=R_{1}\left(x,y\right)\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)+x^{h}R_{2}\left(x,y\right), \]
where $R_{1}\left(x,y\right)\in k\left[x,y\right]\setminus\left(x^{h}k\left[x,y\right]\right)$
is a polynomial such that $R_{1}\left(0,y\right)$ is a nonzero constant and where $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots, r}$ is a collection of polynomials of degree strictly lower than $h$ such that $\sigma_i\left(0\right)=y_i$ for every index $i=1,\ldots, r$. Since $y_i\neq y_j$ for every $i\neq j$ and $R_1\left(0,y\right)$ is a nonzero constant, it follows that for every index $i=1,\ldots,r$, the rational function
\[ u_i=x^{-h} \left(y-\sigma_i\left(x\right)\right)=\prod_{j\neq i}\left(y-\sigma_j\left(x\right)\right)^{-1}R_1\left(x,y\right)^{-1}\left(z-R_2\left(x,y\right)\right)\]
on $S_{Q,h}$ restricts to a regular function on the complement $S_i$ in $S_{Q,h}$ of the irreducible components of the fiber ${\rm pr}_x^{-1}\left(0\right)$ defined by the equations $\left\{x=0,y=y_j\right\}_{j\neq i}$ and induces an isomorphism $S_i\simeq{\rm Spec}\left(k\left[x,u_i\right]\right)$.
Therefore, the collection $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots,r}$ is precisely the one associated with the fine $k$-weighted rake $\gamma=\left(\Gamma,w\right)$ with all
its leaves at the same level $h$ corresponding to the Danielewski surface ${\rm pr}_x:S_{Q,h}\rightarrow \mathbb{A}^1_k$ equipped with the birational morphism $\psi={\rm pr}_{x,y}:S_{Q,h}\rightarrow\mathbb{A}^2_k$ (see \ref{thm:GenDanMor_2_Tree} and \ref{pro:Spec_DS_charac} above). In turn, we deduce from Example \ref{exa:Main_example} that the Danielewski surface $S\left(\gamma\right)$ associated with $\gamma$ embeds as the surface $S_{\sigma,h}$ in $\mathbb{A}^3_k$ defined by the equation $x^hz-\prod_{i=1}^r\left(y-\sigma_i\left(x\right)\right)=0$. This completes the proof.
\end{proof}
\begin{defn}
\label{def:Embed_def}
Given a Danielewski surface $S$ isomorphic to a certain surface $S_{Q,h}$ in $\mathbb{A}^3_k$, a closed embedding $i_s:S\hookrightarrow\mathbb{A}_{k}^{3}$ of $S$ in $\mathbb{A}^3_k$ as a surface $S_{\sigma,h}$ defined by the equation \[x^hz-\prod_{i=1}^r\left(y-\sigma_i\left(x\right)\right)=0\] is called a \emph{standard embedding of} $S$. We say that $S_{\sigma,h}$ is a \emph{standard
form} \emph{of} $S$ \emph{in} $\mathbb{A}_{k}^{3}$.
\end{defn}
\begin{enavant} \label{lem:Hensel_lemma} \label{rem:def phi_s}
It follows from the above discussion that every Danielewski surface $S$ isomorphic to a certain surface $S_{Q,h}$ in $\mathbb{A}^3_k$ admits a standard embedding in $\mathbb{A}^3_k$. Following the proof of Theorem \ref{thm:Equivalent_charac}, we can in fact construct explicitly the isomorphisms between a Danielewski surface $S_{Q,h}$ and one of its standard forms $S_{\sigma,h}$. Let $Q(x,y)=R_{1}(x,y)\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)+x^{h}R_{2}(x,y)$ be as in the proof of Theorem \ref{thm:Equivalent_charac}. Then, the endomorphism $\Phi^s$ of $\mathbb{A}^3_k$ defined by $\left(x,y,z\right)\mapsto\left(x,y,R_{1}\left(x,y\right)z+R_{2}\left(x,y\right)\right)$ induces an isomorphism $\phi^s$ between $S_{\sigma,h}$ and $S_{Q,h}$. One checks conversely that for every pair $\left(f,g\right)$ of polynomials such that
$R_{1}\left(x,y\right)f\left(x,y\right)+x^{h}g\left(x,y\right)=1$, the endomorphism $\Phi_s$ of $\mathbb{A}^3_k$ defined by \[\left(x,y,z\right)\mapsto\left(x,y,f\left(x,y\right)z+g\left(x,y\right)\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)-f\left(x,y\right)R_{2}\left(x,y\right)\right)\] induces an isomorphism $\phi_s$ between $S_{Q,h}$ and $S_{\sigma,h}$ such that $\phi^s\circ\phi_s={\rm id}_{S_{Q,h}}$ and $\phi_s\circ\phi^s={\rm id}_{S_{\sigma,h}}$. Note that since $R_1\left(0,y\right)$ is a nonzero constant, the regular function $\xi=x^{-h}(R_{1}\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right))+R_{2}$ on $S_{\sigma,h}$ still induces a coordinate function on every irreducible component of the fiber $\pi^{-1}\left(o \right)$ of the morphism $\pi={\rm pr}_x:S_{\sigma,h}\rightarrow \mathbb{A}^1_k$, and the regular functions $\pi$, $y$ and $\xi$ define a new closed embedding of $S_{\sigma,h}$ in $\mathbb{A}_{k}^{3}$ inducing an isomorphism between $S_{\sigma,h}$ and the surface $S_{Q,h}$. This can be interpreted by saying that a closed embedding $i_{Q,h}:S\hookrightarrow\mathbb{A}_{k}^{3}$ of a Danielewski surface $S$ in $\mathbb{A}_{k}^{3}$ as a surface $S_{Q,h}$ is a twisted form of a standard embedding of $S$ obtained by modifying the function inducing a coordinate on every irreducible component of the fiber $\pi^{-1}\left(o\right)$.
\end {enavant}
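To illustrate the above construction on a concrete example, consider the surface $S_{Q,2}$ in $\mathbb{A}^3_k$ defined by the equation $x^{2}z-\left(1-x\right)\left(y^{2}-1\right)=0$. Here $Q\left(x,y\right)=\left(1-x\right)\left(y^{2}-1\right)$ factors as in the proof of Theorem \ref{thm:Equivalent_charac} with $R_{1}\left(x,y\right)=1-x$, $R_{2}\left(x,y\right)=0$ and $\sigma=\left\{-1,1\right\}$. Since $\left(1-x\right)\left(1+x\right)+x^{2}\cdot 1=1$, we may choose $f\left(x,y\right)=1+x$ and $g\left(x,y\right)=1$, and the endomorphisms above specialise to \[\Phi^{s}\left(x,y,z\right)=\left(x,y,\left(1-x\right)z\right)\quad\textrm{and}\quad\Phi_{s}\left(x,y,z\right)=\left(x,y,\left(1+x\right)z+y^{2}-1\right),\] which induce mutually inverse isomorphisms between the standard form $S_{\sigma,2}$ defined by $x^{2}z-\left(y^{2}-1\right)=0$ and $S_{Q,2}$. This pair of surfaces is studied further in \ref{txt:Non_algeb_equiv} below.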
\begin{enavant} Using standard forms makes the study of isomorphism classes of Danielewski surfaces $S_{Q,h}$ simpler. For instance, we have the following characterisation which generalises a result due to L. Makar-Limanov
\cite{ML01} for complex surfaces $S_{P,h}$ defined by the equations $x^{h}z-P\left(y\right)=0$.
\end{enavant}
\begin{prop} \label{thm:Normal_forms_iso} Two Danielewski surfaces $S_{\sigma_1,h_1}$ and $S_{\sigma_2,h_2}$ in $\mathbb{A}_{k}^{3}$ defined by the equations \[
x^{h_{1}}z=P_{1}\left(x,y\right)=\prod_{i=1}^{r_{1}}\left(y-\sigma_{1,i}\left(x\right)\right)\quad\textrm{and}\quad x^{h_{2}}z=P_{2}\left(x,y\right)=\prod_{i=1}^{r_{2}}\left(y-\sigma_{2,i}\left(x\right)\right)\]
are isomorphic if and only if $h_{1}=h_{2}=h$, $r_{1}=r_{2}=r$
and there exists a triple $\left(a,\mu,\tau\left(x\right)\right)\in k^{*}\times k^{*}\times k\left[x\right]$
such that $P_{2}\left(ax,y\right)=\mu^{r}P_{1}\left(x,\mu^{-1}y+\tau\left(x\right)\right)$.
\end{prop}
\begin{proof}
The condition is sufficient. Indeed, one checks that the automorphism \[\left(x,y,z\right)\mapsto \left(ax,\mu \left(y-\tau\left(x\right)\right), \mu^r a^{-h}z\right)\] of $\mathbb{A}^3_k$ induces an isomorphism between $S_{\sigma_1,h}$ and $S_{\sigma_2,h}$. Conversely,
suppose that $S_1=S_{\sigma_1,h_1}$ and $S_2=S_{\sigma_2,h_2}$ are isomorphic. Then $h_1=h_2=h$ and $r_1=r_2=r$ by virtue of Theorem \ref{thm:Iso_classes} above. If $h=1$ then the result follows from \cite{ML01}. Otherwise, if $h\geq 2$ then it follows from Theorem \ref{thm:Comb_ML_Trivial} and example \ref{exa:Main_example} above that the underlying $\mathbb{A}^{1}$-bundle structures $\rho_1:S_1\rightarrow X\left(r\right)$ and $\rho_2:S_2\rightarrow X\left(r\right)$ corresponding to the transition functions
\[\left\{g_{1,ij}=x^{-h}\left(\sigma_{1,j}\left(x\right)-\sigma_{1,i}\left(x\right)\right)\right\}_{i,j=1,\ldots,r} \textrm{ and } \left\{g_{2,ij}=x^{-h}\left(\sigma_{2,j}\left(x\right)-\sigma_{2,i}\left(x\right)\right)\right\}_{i,j=1,\ldots,r}\]
respectively are unique such structures on $S_1$ and $S_2$ up to automorphisms of the base $X\left(r\right)$. Therefore, every isomorphism $\Phi:S_1\stackrel{\sim}{\rightarrow} S_2$ induces an automorphism $\phi$ of $X\left(r\right)$ such that $\rho_2\circ\Phi=\phi\circ \rho_1$. Consequently, every such isomorphism $\Phi$ is determined by a collection of local isomorphisms $\Phi_{i}:S_{1,i}\stackrel{\sim}{\rightarrow}S_{2,\alpha\left(i\right)}$ where $\alpha\in\mathfrak{S}_{r}$, defined by $k$-algebra isomorphisms
\[
\Phi_{i}^{*}:k\left[x\right]\left[u_{2,\alpha\left(i\right)}\right]\longrightarrow k\left[x\right]\left[u_{1,i}\right],\quad x\mapsto a_{i}x,\quad u_{2,\alpha\left(i\right)}\mapsto\lambda_{i}u_{1,i}+b_{i}\left(x\right),\quad i=1,\ldots,r\]
where $a_{i},\lambda_{i}\in k^{*}$ and where $b_{i}\in k\left[x\right]$. These local isomorphisms glue to a global one if and only if $a_{i}=a$ and $\lambda_{i}=\lambda$ for every index $i=1,\ldots , r$,
and the relation $\lambda g_{1,ij}\left(x\right)+b_{i}\left(x\right)=g_{2,\alpha\left(i\right)\alpha\left(j\right)}\left(ax\right)+b_{j}\left(x\right)$
holds in $k\left[x,x^{-1}\right]$ for all indices $i,j=1,\ldots,r$. Since the $\sigma_{1,i}$'s and $\sigma_{2,i}$'s have degrees strictly lower than $h$, we conclude that the latter condition is equivalent to the fact that $b_{i}\left(x\right)=b\left(x\right)$ for every $i=1,\ldots,r$ and that
the polynomial $c\left(x\right)=\sigma_{2,\alpha\left(i\right)}\left(ax\right)-\lambda a^{h}\sigma_{1,i}\left(x\right)$ does not depend on the index $i$. Letting $\mu=\lambda a^h$ and $\tau\left(x\right)=\mu^{-1}c\left(x\right)$, this means exactly that $P_2\left(ax,y\right)=\mu^{r}P_1\left(x,\mu^{-1}y+\tau\left(x\right)\right)$.
\end{proof}
\begin{enavant} \label{rem:standard embeddings} \label{txt:Non_algeb_equiv} The proof above implies in particular that all standard embeddings of the same Danielewski surface are algebraically equivalent. It is natural to ask if a closed embedding $i_{Q,h}:S\hookrightarrow \mathbb{A}^3_k$ of a Danielewski surface $S$ as a surface $S_{Q,h}$ is algebraically equivalent to a standard one. If so, then we say that the embedding $i_{Q,h}$ is \emph{rectifiable}. The fact that the endomorphisms $\Phi^s$ and $\Phi_s$ of $\mathbb{A}^3_k$ constructed in \ref{rem:def phi_s} are not invertible in general may lead one to suspect that there exist non-rectifiable embeddings of Danielewski surfaces nonisomorphic to the affine plane. This is actually the case, and the first known examples have been recently discovered by G. Freudenburg and L. Moser-Jauslin \cite{FrMo02}. For instance, they established that the surface $S_1$ in $\mathbb{A}^3_{\mathbb{C}}$ defined by the equation $f_1=x^2z-\left(1-x\right)\left(y^2-1\right)=0$ defines a non-rectifiable embedding of a Danielewski surface. Indeed, a standard form for $S_1$ would be the Danielewski surface $S_0$ defined by the equation $f_0=x^2z-\left(y^2-1\right)=0$. We observe that the level surface $f_{0}^{-1}\left(1\right)$ of $f_{0}$ is a singular
surface. On the other hand, all the level surfaces of $f_{1}$ are
nonsingular as follows for instance from the Jacobian Criterion. Therefore, condition 3) in Definition \ref{def:Algebraic_equiv_def} cannot be satisfied and so, it is impossible to find an automorphism of $\mathbb{A}^3_{\mathbb{C}}$ mapping $S_1$ isomorphically onto $S_0$.
The classification of these embeddings up to algebraic equivalence is a difficult problem in general (see \cite{MoP05} for the case $h=r=2$).
However, if $k=\mathbb{C}$, the following result shows that things become simpler if one works in the holomorphic category.
\end{enavant}
\begin{thm}
\label{thm:Embed_analytic_equiv} The embeddings $i_{Q,h}:S\hookrightarrow\mathbb{A}_{\mathbb{C}}^{3}$
of a Danielewski surface $S$ as a surface defined by the equation $x^{h}z-Q\left(x,y\right)=0$
are all \emph{analytically} equivalent.
\end{thm}
\begin{proof}
It suffices to show that every embedding $i_{Q,h}$ is analytically
equivalent to a standard one $i_{\sigma,h}$. In view of the proof of Theorem \ref{thm:Equivalent_charac}, we can let $Q\left(x,y\right)=R_1\left(x,y\right)\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)+x^hR_2(x,y)$.
It is enough to construct a holomorphic automorphism $\Psi$ of $\mathbb{A}_{\mathbb{C}}^{3}$
such that $$\Psi^{*}\left(x^{h}z-Q\left(x,y\right)\right)=\alpha\left(x^{h}z-\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)\right)$$
for a suitable invertible holomorphic function $\alpha$ on $\mathbb{A}_{\mathbb{C}}^{3}$.
We let $R_1\left(0,y\right)=\lambda\in\mathbb{C}^{*}$ and we let $f\left(x,y\right)\in\mathbb{C}\left[x,y\right]$
be a polynomial such that $\lambda\exp\left(xf\left(x,y\right)\right)\equiv R_1\left(x,y\right)$
mod $x^{h}$. Now the result follows from the fact that the holomorphic
automorphism $\Psi$ of $\mathbb{A}_{\mathbb{C}}^{3}$ defined by
$$\Psi\left(x,y,z\right)= \left(x,y,\lambda\exp\left(xf\left(x,y\right)\right)z-x^{-h}[\lambda\exp\left(xf\left(x,y\right)\right)-R_1\left(x,y\right)]\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)+R_2(x,y)\right)$$
satisfies $\Psi^{*}\left(x^{h}z-Q\left(x,y\right)\right)=\lambda\exp\left(xf\left(x,y\right)\right)\left(x^{h}z-\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right)\right)$.
\end{proof}
\begin{example}
We observed in \ref{txt:Non_algeb_equiv} that the
surfaces $S_{0}$ and $S_{1}$ defined by the equations $f_0=x^2z-\left(y^2-1\right)=0$
and $f_1=x^2z-\left(1-x\right)\left(y^2-1\right)=0$ are algebraically
inequivalent embeddings of the same surface $S$. However, they are
analytically equivalent via the automorphism $\left(x,y,z\right)\mapsto\left(x,y,e^{-x}z-x^{-2}\left(e^{-x}-1+x\right)(y^2-1)\right)$
of $\mathbb{A}_{\mathbb{C}}^{3}$.
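Indeed, denoting this automorphism by $\Psi$, a direct computation gives \[\Psi^{*}f_{1}=x^{2}\left(e^{-x}z-x^{-2}\left(e^{-x}-1+x\right)\left(y^{2}-1\right)\right)-\left(1-x\right)\left(y^{2}-1\right)=e^{-x}f_{0},\] so that $\Psi$ maps $S_{0}=f_{0}^{-1}\left(0\right)$ isomorphically onto $S_{1}=f_{1}^{-1}\left(0\right)$.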
\end{example}
\subsection{Automorphisms of Danielewski surfaces $S_{Q,h}$ in $\mathbb{A}_{k}^{3}$ }
\indent\newline\noindent In \cite{ML90} and \cite{ML01}, Makar-Limanov
computed the automorphism groups of surfaces in $\mathbb{A}^{3}$
defined by the equation $x^{h}z-P\left(y\right)=0$, where $h\geq1$ and where
$P\left(y\right)$ is an arbitrary polynomial. In particular, he established
that every automorphism of such a surface is induced by the restriction
of an automorphism of the ambient space. Recently, A. Crachiola \cite{Cra06} established
that this also holds for surfaces defined by the equations $x^{h}z-y^{2}-r\left(x\right)y=0$,
where $h\geq1$ and where $r\left(x\right)$ is an arbitrary polynomial
such that $r\left(0\right)\neq0$. This subsection is devoted to the proof of the more general structure Theorem \ref{thm:S_Q,h autos} below. We begin with the case of Danielewski surfaces in standard form.
\begin{thm}
\label{thm:Main_auto_thm} The automorphism group of a Danielewski surface
$S_{\sigma,h}$ defined by the equation \[ x^hz-P\left(x,y\right)=0,\qquad \textrm{ where } \quad P\left(x,y\right)=\prod_{i=1}^r\left(y-\sigma_i\left(x\right)\right)\] is induced by the restriction of an automorphism of $\mathbb{A}^3_k$ belonging to the subgroup $G_{\sigma,h}$ of ${\rm Aut}\left(\mathbb{A}_{k}^{3}\right)$
generated by the following automorphisms:
\emph{(a)} $\Delta_{b}\left(x,y,z\right)=\left(x,y+x^{h}b\left(x\right),z+x^{-h}\left(P\left(x,y+x^{h}b\left(x\right)\right)-P\left(x,y\right)\right)\right)$,
where $b\left(x\right)\in k\left[x\right]$.
\emph{(b)} If there exists a polynomial $\tau\left(x\right)$ such
that $P\left(x,y+\tau\left(x\right)\right)=\tilde{P}\left(y\right)$
then the automorphisms $H_{a}\left(x,y,z\right)=\left(ax,y+\tau\left(ax\right)-\tau\left(x\right),a^{-h}z\right)$,
where $a\in k^{*}$ should be added.
\emph{(c)} If there exists a polynomial $\tau\left(x\right)$ such
that $P\left(x,y+\tau\left(x\right)\right)=\tilde{P}\left(x^{q_{0}},y\right)$,
then the cyclic automorphisms $\tilde{H}_{a}\left(x,y,z\right)=\left(ax,y+\tau\left(ax\right)-\tau\left(x\right),a^{-h}z\right)$,
where $a\in k^{*}$ and $a^{q_{0}}=1$ should be added.
\emph{(d)} If there exists a polynomial $\tau\left(x\right)$ such
that $P\left(x,y+\tau\left(x\right)\right)=y^{i}\tilde{P}\left(x,y^{s}\right)$,
where $i=0,1$ and $s\geq2$, then the cyclic automorphisms $S_{\mu}\left(x,y,z\right)=\left(x,\mu y+\left(1-\mu\right)\tau\left(x\right),\mu^{i}z\right)$,
where $\mu\in k^{*}$ and $\mu^{s}=1$ should be added.
\emph{(e)} If $\textrm{char}\left(k\right)=s>0$ and $P\left(x,y\right)=\tilde{P}\left(y^{s}-c\left(x\right)^{s-1}y\right)$
for a certain polynomial $c\left(x\right)\in k\left[x\right]$ such
that $c\left(0\right)\neq0$, then the automorphism $T_{c}\left(x,y,z\right)=\left(x,y+c\left(x\right),z\right)$
should be added.
\emph{(}f\emph{)} If $h=1$, then the involution $I\left(x,y,z\right)=\left(z,y,x\right)$
should be added.
\end{thm}
\begin{rem}\label{rem: k+-actions}
Automorphisms of type a) in Theorem \ref{thm:Main_auto_thm} correspond to algebraic actions of the additive group $\mathbb{G}_a$ on the surface $S_{\sigma,h}$. More precisely, for every polynomial $b\in k\left[x\right]$, the subgroup $\left\{\Delta_{tb\left(x\right)}, t\in k \right\}$ of ${\rm Aut}\left(S_{\sigma,h}\right)$ is isomorphic to $\mathbb{G}_a$, the corresponding $\mathbb{G}_a$-action on $S_{\sigma,h}$ being defined by $t\star\left(x,y,z\right)=\Delta_{tb\left(x\right)}\left(x,y,z\right)$.
Similarly, automorphisms of type b) correspond to algebraic actions of the multiplicative group $\mathbb{G}_m$.
\end{rem}
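For instance, for the surface $S_{\sigma,h}$ with $P\left(x,y\right)=y^{2}-1$, that is, $\sigma=\left\{-1,1\right\}$, one gets \[\Delta_{b}\left(x,y,z\right)=\left(x,y+x^{h}b\left(x\right),z+2b\left(x\right)y+x^{h}b\left(x\right)^{2}\right),\] the third coordinate being polynomial since $P\left(x,y+x^{h}b\left(x\right)\right)-P\left(x,y\right)=2x^{h}b\left(x\right)y+x^{2h}b\left(x\right)^{2}$ is divisible by $x^{h}$. The same divisibility holds for an arbitrary $P\left(x,y\right)$, which is why automorphisms of type (a) are well defined.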
\begin{proof} It is clear that every automorphism of $\mathbb{A}_{k}^{3}$ of types
(a)-(f) above leaves $S_{\sigma,h}$ invariant, and hence induces an
automorphism of $S_{\sigma,h}$. If $h=1$, then the converse
follows from \cite{ML90}. Otherwise, if $h\geq2$, then the same argument as the one used in the proof of Proposition \ref{thm:Normal_forms_iso} above shows that every automorphism of $S_{\sigma,h}$
is determined by a datum $\mathcal{A}_{\Phi}=\left(\alpha,\mu,a,b\left(x\right)\right)$
such that the polynomial $c\left(x\right)=\sigma_{\alpha\left(i\right)}\left(ax\right)-\mu\sigma_{i}\left(x\right)+x^{h}b\left(x\right)$
does not depend on the index $i=1,\ldots,r$. Furthermore, it follows from the construction of the closed embedding of $S_{\sigma,h}$ in $\mathbb{A}^3_k$ given in Example \ref{exa:Main_example} that every such datum corresponds to an automorphism of $S_{\sigma,h}$ induced by the restriction of the following
automorphism $\Psi$ of $\mathbb{A}_{k}^{3}$: \[
\Psi\left(x,y,z\right)=\left(ax,\mu y+c\left(x\right),a^{-h}\mu^rz+\left(ax\right)^{-h}(\prod_{i=1}^{r}\left(\mu y+c\left(x\right)-\sigma_{i}\left(ax\right)\right)-\mu^r\prod_{i=1}^{r}\left(y-\sigma_{i}\left(x\right)\right))\right).\]
One checks easily using this description that the composition of two automorphisms $\Phi_{1}$ and $\Phi_{2}$ of $S_{\sigma,h}$ defined by data $\mathcal{A}_{\Phi_{1}}=\left(\alpha_{1},\mu_{1},a_{1},b_{1}\right)$ and $\mathcal{A}_{\Phi_{2}}=\left(\alpha_{2},\mu_{2},a_{2},b_{2}\right)$
is the automorphism with corresponding datum $\mathcal{A}_{\Phi}=\left(\alpha_{2}\circ\alpha_{1},\mu_{2}\mu_{1},a_{2}a_{1},a_2^{-h}\mu_{2}b_{1}\left(x\right)+b_{2}\left(a_{1}x\right)\right)$.
Clearly, automorphisms of type (a) coincide with the
ones determined by data $\mathcal{A}=\left(\textrm{Id},1,1,b\left(x\right)\right)$, where $b\left(x\right)\in k\left[x\right]$. In view of the composition rule above, it suffices to consider from now on automorphisms corresponding to
data $\mathcal{A}=\left(\alpha,\mu,a,0\right)$.
1°) If $\alpha$ is trivial, then $\mu=1$ by virtue of Lemma
\ref{pro:auto_data} below, and so $\mathcal{A}=\left(\textrm{Id},1,a,0\right)$. Then, the relation $c\left(x\right)=\sigma_{i}\left(ax\right)-\sigma_{i}\left(x\right)$ holds for every $i=1,\ldots,r$.
1°a) If $a^{q}\neq1$ for every $q=1,\ldots,h-1$, then there exists
a polynomial $\tau\left(x\right)\in k\left[x\right]$ such that $\sigma_{i}\left(x\right)=\sigma_{i}\left(0\right)+\tau\left(x\right)$
for every $i=1,\ldots,r$. Thus $c\left(x\right)=\tau\left(ax\right)-\tau\left(x\right)$
and $P\left(x,y+\tau\left(x\right)\right)=\tilde{P}\left(y\right)=\prod_{i=1}^{r}\left(y-\sigma_{i}\left(0\right)\right)$ and the corresponding automorphism is of type (b).
1°b) If $a\neq1$ but $a^{q_{0}}=1$ for a minimal $q_{0}=2,\ldots,h-1$,
then there exist polynomials $\tau\left(x\right)$ and $\tilde{\sigma}_{i}\left(x\right)$,
$i=1,\ldots,r$, such that $\sigma_{i}\left(x\right)=\tilde{\sigma}_{i}\left(x^{q_{0}}\right)+\tau\left(x\right)$
for every $i=1,\ldots,r$. So there exists a polynomial $\tilde{P}$
such that $P\left(x,y+\tau\left(x\right)\right)=\tilde{P}\left(x^{q_{0}},y\right)$.
Moreover, $c\left(x\right)=\tau\left(ax\right)-\tau\left(x\right)$
and the corresponding automorphism is of type (c).
2°) If $\alpha$ is not trivial then $\mu^{s}=1$. Since $\Phi=\Phi_{2}\circ\Phi_{1}$,
where $\Phi_{1}$and $\Phi_{2}$ denote the automorphisms with data
$\mathcal{A}_{\Phi_{1}}=\left(\textrm{Id},1,a,0\right)$ and $\mathcal{A}_{\Phi_{2}}=\left(\alpha,\mu,1,0\right)$
respectively, it suffices to consider the situation that $\Phi$ is
determined by a datum $\mathcal{A}_{\Phi}=\left(\alpha,\mu,1,0\right)$,
where $\mu\in k^{*}$ and $\mu^{s}=1$. So the relation $\sigma_{\alpha\left(i\right)}\left(x\right)=\mu\sigma_{i}\left(x\right)+c\left(x\right)$
holds for every $i=1,\ldots,r$.
2°a) If $\mu^{s}=1$ but $\mu^{s'}\neq1$ for every $s'=1,\ldots,s-1$, then,
letting $\tau\left(x\right)=\left(1-\mu\right)^{-1}c\left(x\right)$
and $\tilde{\sigma}_{i}\left(x\right)=\sigma_{i}\left(x\right)-\tau\left(x\right)$
for every $i=1,\ldots,r$, we arrive at the relation $\tilde{\sigma}_{\alpha\left(i\right)}\left(x\right)=\mu\tilde{\sigma}_{i}\left(x\right)$
for every $i=1,\ldots,r$. Furthermore, if $i_{0}$ is the unique fixed
point of $\alpha$ then $\tilde{\sigma}_{i_{0}}\left(x\right)=0$
as $\sigma_{i_{0}}\left(x\right)=\tau\left(x\right)$. So we conclude
that $P\left(x,y+\tau\left(x\right)\right)=y^{i}\tilde{P}\left(x,y^{s}\right)$
where $i=0,1$ and where $s$ denotes the length of the nontrivial
cycles in $\alpha$. The corresponding automorphism is of type (d).
2°b) If $\mu=1$ then $\alpha$ is fixed point free by virtue of Lemma
\ref{pro:auto_data} and $\textrm{char}\left(k\right)=s$, where $s$
denotes the common length of the cycles occurring in $\alpha$. Moreover,
$s'\cdot c\left(0\right)\neq0$ for every $s'=1,\ldots,s-1$ and $\sigma_{i_{m}}\left(x\right)=\sigma_{i_{1}}\left(x\right)+\left(m-1\right)\cdot c\left(x\right)$
for every index $i_{m}$ occurring in a cycle $\left(i_{1},\ldots,i_{s}\right)$
of length $s$ in $\alpha$. Letting $r=ds$, we may suppose up to
a reordering that $\alpha$ decomposes as the product of the standard
cycles $\left(is+1,is+2,\ldots,\left(i+1\right)s\right)$, where $i=0,\ldots,d-1$.
Letting $R\left(x,y\right)=\prod_{m=1}^{s}\left(y-m\cdot c\left(x\right)\right)=y^{s}-c\left(x\right)^{s-1}y$,
we conclude that \[
P\left(x,y\right)=\prod_{i=0}^{d-1}R\left(x,y-\sigma_{is+1}\left(x\right)\right)=\tilde{P}\left(x,y^{s}-c\left(x\right)^{s-1}y\right)\]
for a suitable polynomial $\tilde{P}\left(x,y\right)\in k\left[x,y\right]$.
The corresponding automorphism is of type (e).
\end{proof}
\begin{enavant}
In the proof of Theorem \ref{thm:Main_auto_thm} above, we used the fact that every automorphism $\Phi$ of a Danielewski surface $S=S_{\sigma,h}$, where $h\geq 2$, is determined by a certain datum $\mathcal{A}_{\Phi}=\left(\alpha,\mu,a,b\left(x\right)\right)\in\mathfrak{S}_{r}\times k^{*}\times k^{*}\times k\left[x\right]$ for which the polynomial $\tilde{c}(x)=\sigma_{\alpha\left(i\right)}\left(ax\right)-\mu\sigma_{i}\left(x\right)\in k\left[x\right]$ does not depend on the index $i$. Actually, we needed the following more precise result.
\end{enavant}
\begin{lem}
\label{pro:auto_data} The elements in a datum $\mathcal{A}_{\Phi}=\left(\alpha,\mu,a,b\left(x\right)\right)$ corresponding to an automorphism $\Phi$ of $S$ satisfy the following additional properties:
1\emph{)} The permutation $\alpha$ is either trivial or has at most
one fixed point. If it is nontrivial then all nontrivial cycles
with disjoint support occurring in a decomposition of $\alpha$ have
the same length $s\geq2$.
2\emph{)} If $\alpha$ is trivial then $\mu=1$ and the converse
also holds provided that $\textrm{char}\left(k\right)\neq s$. Otherwise,
if $\alpha$ is nontrivial and $\textrm{char}\left(k\right)\neq s$
then $\mu^{s}=1$ but $\mu^{s'}\neq1$ for
every $1\leq s'<s$.\\
\end{lem}
\begin{proof}
To simplify the notation, we let
$y_{i}=\sigma_{i}\left(0\right)$ for every $i=1,\ldots,r$. Note
that by hypothesis, $y_{i}\neq y_{j}$ for every $i\neq j$.
If $\alpha\in\mathfrak{S}_{r}$ has at least two fixed points, say
$i_{0}$ and $i_{1}$, then $y_{i_{0}}\left(1-\mu\right)=y_{i_{1}}\left(1-\mu\right)=\tilde{c}\left(0\right)$,
and so, $\mu=1$ and $\tilde{c}\left(0\right)=0$ as $y_{i_{0}}\neq y_{i_{1}}$. In
turn, this implies that $\alpha$ is trivial. Indeed, otherwise there
would exist an index $i$ such that $\alpha\left(i\right)\neq i$
but $y_{\alpha\left(i\right)}=y_{i}$, in contradiction with our hypothesis.
Suppose from now on that $\alpha$ is nontrivial and let $s\geq2$ be the minimum of
the lengths of the nontrivial cycles occurring in the decomposition of
$\alpha$ into a product of cycles with disjoint supports. We deduce that $y_{i}\left(1-\mu^{s}\right)=y_{j}\left(1-\mu^{s}\right)$ for
every pair of distinct indices $i$ and $j$ in the support of a same
cycle of length $s$. Thus $\mu^{s}=1$ as $y_{i}\neq y_{j}$ for
every $i\neq j$.
If $\mu=1$ then $s'\cdot\tilde{c}\left(0\right)\neq0$ for every
$s'=1,\ldots,s-1$. Indeed, otherwise we would have $y_{\alpha^{s'}\left(i\right)}=y_{i}+s'\cdot\tilde{c}\left(0\right)=y_{i}$
for every index $i=1,\ldots,r$ which is impossible since $\alpha$
is nontrivial. In particular, $\alpha$ is fixed-point free. On the
other hand $s\cdot\tilde{c}\left(0\right)=0$ as $y_{i}=y_{\alpha^{s}\left(i\right)}=y_{i}+s\cdot\tilde{c}\left(0\right)$
for every index $i$ in the support of a cycle of length $s$ in $\alpha$.
This is possible if and only if the characteristic of the base field
$k$ is exactly $s$. We also conclude that every cycle in $\alpha$
has length $s$, for otherwise there would exist an index $i$ such
that $\alpha^{s}\left(i\right)\neq i$ but $y_{\alpha^{s}\left(i\right)}=y_{i}+s\cdot\tilde{c}\left(0\right)=y_{i}$
in contradiction with our hypothesis.
If $\mu\neq1$ then
$\mu^{s'}\neq1$ for every $s'<s$. Indeed, otherwise there would
exist an index $i$ such that $\alpha^{s'}\left(i\right)\neq i$ but
$y_{\alpha^{s'}\left(i\right)}=\mu^{s'}y_{i}+\tilde{c}\left(0\right)\sum_{p=0}^{s'-1}\mu^{p}=y_{i}$,
which is impossible. The same argument also implies that all the nontrivial
cycles in $\alpha$ have length $s$.
\end{proof}
\begin{enavant}
By combining Theorems \ref{thm:Equivalent_charac} and \ref{thm:Main_auto_thm}, we obtain the following description of the automorphism groups of Danielewski surfaces $S_{Q,h}$.
\end{enavant}
\begin{thm}\label{thm:S_Q,h autos} Let $S_{Q,h}$ be the Danielewski surface in $\mathbb{A}^3_k$ defined by the equation $x^hz-Q(x,y)=0$ and let $S_{\sigma,h}$ be one of its standard forms. Then, every automorphism of $S_{Q,h}$ is of the form $\Phi^s\circ \psi \circ \Phi_s$, where $\psi$ belongs to the subgroup $G_{\sigma,h}$ of the automorphism group of $\mathbb{A}^3_k$ defined in Theorem \ref{thm:Main_auto_thm} and $\Phi^s$ and $\Phi_s$ are the endomorphisms of $\mathbb{A}^3_k$ defined in \ref{rem:def phi_s}.
\end{thm}
\begin{enavant} We have seen in \ref{txt:Non_algeb_equiv} that the embeddings $i_{Q,h}$ are not rectifiable in general, hence that the isomorphisms $\phi^s$ and $\phi_s$ do not extend to algebraic automorphisms of $\mathbb{A}^3_k$. Therefore, in contrast with the case of standard embeddings $i_s$, for which every automorphism of a Danielewski surface $S\simeq S_{\sigma,h}$ arises as the restriction of an automorphism of the ambient space $\mathbb{A}^3_k$, the above result may lead one to suspect that for a general embedding $i_{Q,h}$ of $S$ as a surface $S_{Q,h}$, certain automorphisms of $S$ do not extend to algebraic automorphisms of $\mathbb{A}^3_k$. In the next section we give examples of embeddings for which this phenomenon occurs. However, if $k=\mathbb{C}$, Theorem \ref{thm:Embed_analytic_equiv} leads on the contrary to the following result.
\end{enavant}
\begin{cor}\label{cor:analytic extension}
Every algebraic automorphism of a Danielewski surface $S_{Q,h}$ in $\mathbb{A}^3_{\mathbb{C}}$ is extendable to a \emph{holomorphic} automorphism of $\mathbb{A}^3_{\mathbb{C}}$.
\end{cor}
\section{Special Danielewski surfaces and multiplicative group actions }
\indent\newline\noindent In this section, we fix a base field $k$ of characteristic zero and we consider special Danielewski surfaces $S$ admitting a nontrivial action of the multiplicative group $\mathbb{G}_m=\mathbb{G}_{m,k}$. We establish that every such surface is isomorphic to a Danielewski surface $S_{Q,h}$ which admits a standard embedding in $\mathbb{A}^3_k$ as a surface defined by an equation of the form $x^hz-P\left(y\right)=0$ for a suitable polynomial $P\left(y\right)\in k\left[y\right]$. In this embedding, every multiplicative group action on $S$ arises as the restriction of a linear $\mathbb{G}_m$-action on $\mathbb{A}^3_k$. We show on the contrary that this is not the case for a general embedding of $S$ as a surface $S_{Q,h}$.
\subsection{Multiplicative group actions on special Danielewski surfaces}
\indent\newline\noindent
Every Danielewski surface isomorphic to a surface $S_{P,h}$ in $\mathbb{A}^3_k$ defined by an equation of the form $x^hz-P\left(y\right)=0$ for a certain polynomial $P\left(y\right)$ admits a nontrivial action of the multiplicative group $\mathbb{G}_m$ which arises as the restriction of the $\mathbb{G}_m$-action $\Psi$ on $\mathbb{A}_{k}^{3}$ defined by $\Psi\left(a;x,y,z\right)=H_{a}\left(x,y,z\right)=\left(ax,y,a^{-h}z\right)$. In the setting of Lemma \ref{pro:auto_data} above, the automorphisms
$H_{a}$ correspond to data $\mathcal{A}_{\phi_{a}}=\left(1,1,a,0\right)$, where $a\in k^{*}$. Here we establish that Danielewski surfaces isomorphic to a surface $S_{P,h}$ in $\mathbb{A}^3_k$ are characterised by the fact that they admit such a nontrivial $\mathbb{G}_m$-action.
\begin{enavant} \label{txt:OMaKL_surface_Tree_charac} By virtue of example \ref{exa:Main_example}
above, the collection of polynomials $\sigma_{i}\left(x\right)$,
$i=1,\ldots,r$, corresponding to a Danielewski surface $S_{P,h}\subset\mathbb{A}_{k}^{3}$
is given by $\sigma_{i}\left(x\right)=y_{i}$ for every $i=1,\ldots,r$, where $y_1,\ldots,y_r$ denote the roots of the polynomial $P$. In turn, we deduce from Theorem \ref{thm:Iso_classes} and Proposition \ref{thm:Normal_forms_iso}
above that a Danielewski surface $S_{Q,h}$ with a standard form $S_{\sigma,h}$
defined by a datum $\left(r,h,\sigma=\left\{ \sigma_{i}\left(x\right)\right\} _{i=1,\ldots,r}\right)$
is isomorphic to a surface $S_{P,h}$ as above if and only if there
exists a polynomial $\tau\left(x\right)\in k\left[x\right]$ such
that $\sigma_{i}\left(x\right)=\sigma_{i}\left(0\right)+\tau\left(x\right)$
for every $i=1,\ldots,r$. So we conclude that every such surface
corresponds to a fine $k$-weighted rake $\gamma$ of the following
type.
\begin{figure}[h]
\begin{pspicture}(-2.6,2.5)(8,-2.9)
\rput(2,0){
\pstree[treemode=D,radius=2.5pt,treesep=1.2cm,levelsep=1cm]{\Tc{3pt}}{
\pstree{\TC*\mput*{{\scriptsize $y_1$}}} {
\pstree{\TC*\mput*{$\tau_1$}}{ \skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]}{
\TC*\mput*{$\tau_{h-1}$}
}
\endskiplevels
}
}
\pstree{\TC*\mput*{{\scriptsize $y_2$}}}{
\pstree{\TC*\mput*{$\tau_1$}}{\skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]} {
\TC*\mput*{$\tau_{h-1}$}
}
\endskiplevels
}
}
\pstree{\TC*\mput*{{\scriptsize $y_{r-1}\;$}}} {
\pstree{\TC*\mput*{$\tau_1$}}{ \skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]}{
\TC*\mput*{$\tau_{h-1}$}
}
\endskiplevels
}
}
\pstree{\TC*\mput*{{\scriptsize $y_r$} }}{
\pstree{\TC*\mput*{$\tau_1$}}{\skiplevels{1}
\pstree{\TC*[edge=\ncline[linestyle=dashed]]} {
\TC*\mput*{$\tau_{h-1}$}
}
\endskiplevels
}
}
}
}
\pnode(-0.2,-2.8){A}\pnode(4.2,-2.8){B}
\ncbar[angleA=270, arm=3pt]{A}{B}\ncput*[npos=1.5]{$r$}
\pnode(5,2.5){C}\pnode(5,-2.6){D}
\ncbar[arm=3pt]{C}{D}\ncput*[npos=1.5]{$h$}
\end{pspicture}
\end{figure}
\end{enavant}
\begin{enavant} One can easily deduce from the description of the automorphism group of a Danielewski surface $S_{\sigma,h}$ given in Theorem \ref{thm:Main_auto_thm} above that such a surface admits a nontrivial $\mathbb{G}_m$-action if and only if it is isomorphic to a surface $S_{P,h}$. More generally, we have the following result.
\end{enavant}
\begin{thm}
\label{pro:OMaKL_surfaces_action_charac}
A special Danielewski surface $S$ admits a nontrivial action of the multiplicative group $\mathbb{G}_m$ if and only if it is isomorphic to a surface $S_{P,h}$ in $\mathbb{A}^3_k$ defined by the equation $x^hz-P(y)=0$.
\end{thm}
\begin{proof}
We may suppose that $S=S\left(\gamma\right)$ is the Danielewski surface associated with a fine $k$-weighted tree $\gamma=\left(\Gamma,w\right)$ with $r\geq 2$ elements at level $1$ and with all its leaves at level $h\geq 1$. We denote by $\sigma=\left\{\sigma_i\left(x\right)\right\}_{i=1,\ldots,r}$ the collection of polynomials associated with $\gamma$ (see \ref{pro:WeightedTree_2_DanSurf}). By virtue of Theorem \ref{thm:Iso_classes} above, the collection $\tilde{\sigma}$ defined by \[\tilde{\sigma}_i\left(x\right)=\sigma_i\left(x\right)-\frac{1}{r}\sum_{j=1}^r\sigma_j\left(x\right),\quad i=1,\ldots,r,\] leads to a Danielewski surface isomorphic to $S$. So we may suppose from the beginning that $\sigma_1\left(x\right)+\cdots + \sigma_r\left(x\right)=0$. If $h=1$ then it follows that $S$ is isomorphic to a surface in $\mathbb{A}^3_k$ defined by an equation of the form $xz-P\left(y\right)=0$, and so, the assertion follows from the above discussion. Otherwise, if $h\geq 2$ then it follows from Theorem \ref{thm:Comb_ML_Trivial} that the structural $\mathbb{A}^1$-fibration $\pi=\pi_{\gamma}:S=S\left(\gamma\right)\rightarrow \mathbb{A}^1_k$ is unique up to automorphisms of the base. We consider $S$ as an $\mathbb{A}^1$-bundle $\rho:S\rightarrow X\left(r\right)$ defined by the transition cocycle \[g=\left\{g_{ij}=x^{-h}\left(\sigma_j\left(x\right)-\sigma_i\left(x\right)\right)\right\}_{i,j=1,\ldots,r}.\]
The same argument as in the proof of Theorem \ref{thm:Normal_forms_iso} implies that every automorphism $\Phi$ of $S$ is determined by a datum $\mathcal{A}_{\Phi}=\left(\alpha,\mu,a,b\left(x\right)\right)\in\mathfrak{S}_{r}\times k^{*}\times k^{*}\times k\left[x\right]$ for which the polynomial $\sigma_{\alpha\left(i\right)}\left(ax\right)-\mu\sigma_{i}\left(x\right)\in k\left[x\right]$ does not depend on the index $i$. In view of the composition rule given in the same proof, we deduce that an automorphism $\Phi$ of $S$ may belong to a subgroup of $\textrm{Aut}\left(S\right)$ isomorphic to $\mathbb{G}_m$ only if its associated datum is of the form $\mathcal{A}_{\Phi}=\left(\alpha,\mu,a,0\right)$. Suppose that there exists a nontrivial automorphism $\Phi$ determined by such a datum $\mathcal{A}_{\Phi}$. Then, since $\alpha\in\mathfrak{S}_r$, there exists an integer $N\geq 1$ such that
the polynomial $c\left(x\right)=\sigma_i\left(a^Nx\right)-\mu^N\sigma_i\left(x\right)$ does not depend on the index $i=1,\ldots,r$. Since $\sigma_1\left(x\right)+\cdots+\sigma_r\left(x\right)=0$ by hypothesis, we conclude that the identity $\sigma_i\left(a^Nx\right)=\mu^N\sigma_i\left(x\right)$ holds for every index $i=1,\ldots,r$. In particular, it follows that $\sigma_i\left(0\right)=\mu^N\sigma_i\left(0\right)$ for every index $i=1,\ldots,r$. Thus $\mu^N=1$ since the weights $\sigma_i\left(0\right)$ are pairwise distinct, hence not all zero, as $\gamma$ is a fine $k$-weighted tree with at least two elements at level $1$. Suppose that one of the polynomials $\sigma_i$ is not constant. Then the above identity implies that $a^{Np}=1$ for a certain integer $p\geq 1$. Therefore, every automorphism $\Phi$ of $S$ with associated datum $\left(\alpha,\mu,a,0\right)$ has finite order and $\textrm{Aut}\left(S\right)$ cannot contain a subgroup isomorphic to $\mathbb{G}_m$. So, $S$ admits a nontrivial $\mathbb{G}_m$-action only if the polynomials $\sigma_i$, $i=1,\ldots,r$, are constant. This completes the proof since these fine $k$-weighted trees correspond to Danielewski surfaces $S_{P,h}$ by virtue of \ref{txt:OMaKL_surface_Tree_charac} above.
\end{proof}
\subsection{ Extensions of multiplicative group actions on a Danielewski surface}
\indent\smallskip\newline\noindent It follows from Theorem \ref{thm:Main_auto_thm} that every special Danielewski surface $S$ equipped with a nontrivial $\mathbb{G}_m$-action admits an equivariant embedding in $\mathbb{A}^3_k$ as a surface $S_{P,h}$ defined by an equation of the form $x^hz-P\left(y\right)=0$. In this embedding, the $\mathbb{G}_m$-action on $S$ even arises as the restriction of a linear $\mathbb{G}_m$-action on $\mathbb{A}^3_k$ corresponding to automorphisms of type b) in \ref{thm:Main_auto_thm}. On the other hand, a surface $S$ isomorphic to a surface $S_{P,h}$ admits closed embeddings $i_{Q,h}:S\hookrightarrow \mathbb{A}^3_k$ as surfaces $S_{Q,h}$ defined by equations of the form $x^hz-R\left(x,y\right)P\left(y\right)=0$ (see Theorem \ref{thm:Equivalent_charac}). It is natural to ask if there always exist $\mathbb{G}_m$-actions on $\mathbb{A}^3_k$ making these general embeddings equivariant. Clearly, this holds if the embedding $i_{Q,h}$ is algebraically equivalent to a standard embedding of $S$ as a surface $S_{P,h}$. The following result shows that there exist non-rectifiable closed embeddings $i_{Q,h}$ of $S$ for which no nontrivial $\mathbb{G}_m$-action on $S$ can be extended to an action on the ambient space.
\begin{thm}\label{txt:Multiplicative_action}
\label{thm:Multiplicative_auto_non_extension} Every Danielewski surface $S\subset\mathbb{A}_{k}^{3}$
defined by the equation $x^{h}z-\left(1-x\right)P\left(y\right)=0$, where $h\geq2$
and where $P\left(y\right)$ has $r\geq2$ simple roots, admits a nontrivial $\mathbb{G}_m$-action
$\tilde{\theta}:\mathbb{G}_m\times S\rightarrow S$ which is not
algebraically extendable to $\mathbb{A}_{k}^{3}$. More precisely,
for every $a\in k\setminus\left\{ 0,1\right\} $ the automorphism
$\tilde{\theta}_{a}=\tilde{\theta}\left(a,\cdot\right)$ of $S$ does
not extend to an algebraic automorphism of $\mathbb{A}_{k}^{3}$.
\end{thm}
\begin{proof}
The endomorphisms $\Phi^s$ and $\Phi_s$ of $\mathbb{A}^3_k$ defined by $\Phi^s\left(x,y,z\right)=\left(x,y,\left(1-x\right)z\right)$
and $\Phi_s\left(x,y,z\right)=\left(x,y,(\sum_{i=0}^{h-1}x^{i})z+P\left(y\right)\right)$
induce isomorphisms $\phi^s$ and $\phi_s$ between $S$ and the
surface $S_{P,h}$ defined by the equation $x^{h}z-P\left(y\right)=0$ (see \ref{rem:def phi_s}). The
latter admits an action $\theta:\mathbb{G}_m\times S_{P,h}\rightarrow S_{P,h}$
of the multiplicative group $\mathbb{G}_m$ defined by $\theta\left(a,x,y,z\right)=H_{a}\left(x,y,z\right)=\left(ax,y,a^{-h}z\right)$ for every $a\in k^{*}$. The corresponding action $\tilde{\theta}$
on $S$ is therefore defined by $\tilde{\theta}\left(a,x,y,z\right)=\tilde{\theta}_{a}\left(x,y,z\right)$, where $\tilde{\theta}_{a}=\phi^s\circ\left(H_{a}\mid_{S_{P,h}}\right)\circ\phi_s$.
Since by construction, $\tilde{\theta}_{a}^{*}\left(x\right)=ax$
for every $a\in k^{*}$, the assertion is a consequence of the following
Lemma which guarantees that the automorphisms $\tilde{\theta}_{a}$
of $S$ are not algebraically extendable to an automorphism of $\mathbb{A}_{k}^{3}$ for every $a\in k^*\setminus \{ 1 \}$.
\end{proof}
\begin{lem}
\label{lem:Rigid_extension} If $\Phi$ is an algebraic automorphism
of $\mathbb{A}_{k}^{3}$ extending an automorphism of $S$, then $\Phi^{*}\left(x\right)=x$.
\end{lem}
\begin{proof}
Our proof is similar to the one of Theorem 2.1 in \cite{MoP05}. We
let $\Phi$ be an automorphism of $\mathbb{A}_{k}^{3}$ extending
an arbitrary automorphism of $S$. Since $f_{1}=x^{h}z-\left(1-x\right)P\left(y\right)$
is an irreducible polynomial, there exists $\mu\in k^{*}$ such that
$\Phi^{*}\left(f_{1}\right)=\mu f_{1}$. Therefore, for every $t\in k$,
the automorphism $\Phi$ induces an isomorphism between the level
surfaces $f_{1}^{-1}\left(t\right)$ and $f_{1}^{-1}\left(\mu^{-1}t\right)$
of $f_{1}$. There exists an open subset $U\subset\mathbb{A}_{k}^{1}$
such that for every $t\in U$, $f_{1}^{-1}\left(t\right)$ is a special
Danielewski surface isomorphic to one defined by a fine $k$-weighted
rake $\gamma$ whose underlying tree $\Gamma$ is isomorphic to the
one associated with $S$. Since $\Gamma$ is not a comb, it follows
from Theorem \ref{thm:Comb_ML_Trivial} that for every $t\in U$,
the projection ${\rm pr}_{x}:f_{1}^{-1}\left(t\right)\rightarrow\mathbb{A}_{k}^{1}$
is the unique $\mathbb{A}^{1}$-fibration on $f_{1}^{-1}\left(t\right)$
up to automorphisms of the base. Furthermore, ${\rm pr}_{x}:f_{1}^{-1}\left(t\right)\rightarrow\mathbb{A}_{k}^{1}$
has a unique degenerate fiber, namely ${\rm pr}_{x}^{-1}\left(0\right)$.
Therefore, for every $t\in U$, the image of the ideal $\left(x,f_{1}-t\right)$
of $k\left[x,y,z\right]$ by $\Phi^{*}$ is contained in the ideal
$\left(x,\mu f_{1}-t\right)=\left(x,P\left(y\right)+\mu^{-1}t\right)$,
and so $\Phi^{*}\left(x\right)\in\bigcap_{t\in U}\left(x,P\left(y\right)+\mu^{-1}t\right)=\left(x\right)$.
Since $\Phi$ is an automorphism of $\mathbb{A}_{k}^{3}$, we conclude
that there exists $c\in k^{*}$ such that $\Phi^{*}\left(x\right)=cx$.
In turn, this implies that for every $t,u\in k$, $\Phi$ induces
an isomorphism between the surfaces $S_{t,u}$ and $\tilde{S}_{t,u}$
defined by the equations $f_{1}+tx+u=x^{h}z-\left(1-x\right)P\left(y\right)+tx+u=0$
and $f_{1}+\mu^{-1}ctx+\mu^{-1}u=x^{h}z-\left(1-x\right)P\left(y\right)+\mu^{-1}ctx+\mu^{-1}u=0$
respectively. Since $\deg\left(P\right)\geq2$ there exists $y_{0}\in k$
such that $P'\left(y_{0}\right)=0$. Note that $y_{0}$ is not a root of $P$ as the roots of $P$ are simple. We let $t=-u=-P\left(y_{0}\right)$.
Since $h\geq2$, it follows from the Jacobian Criterion that $S_{t,u}$
is singular, and even non normal along the nonreduced component of
the fiber ${\rm pr}_{x}^{-1}\left(0\right)$ defined by the equation $\left\{ x=0;\, y=y_{0}\right\} $.
Therefore $\tilde{S}_{t,u}$ must be singular along a multiple component
of the fiber ${\rm pr}_{x}^{-1}\left(0\right)$. This is the case if
and only if the polynomial $P\left(y\right)-\mu^{-1}cP\left(y_{0}\right)$
has a multiple root, say $y_{1}$, such that $P\left(y_{1}\right)-\mu^{-1}P\left(y_{0}\right)=0$.
Since $P\left(y_{0}\right)\neq0$ this condition is satisfied if and
only if $c=1$. This completes the proof.
\end{proof}
\begin{example}
In particular, even the involution of the surface $S$ defined by the equation $x^{2}z-\left(1-x\right)P\left(y\right)=0$ induced by the endomorphism
$J\left(x,y,z\right)=\left(-x,y,\left(1+x\right)\left(\left(1+x\right)z+P\left(y\right)\right)\right)$
of $\mathbb{A}^3_k$ does not extend to an algebraic automorphism of $\mathbb{A}_{k}^{3}$.
\end{example}
\noindent It turns out that this kind of phenomenon does not occur with additive group actions. More precisely, we have the following result.
\begin{prop}
Let $S_{Q,h}$ be the Danielewski surface in $\mathbb{A}_{k}^{3}$ defined by the equation $x^hz-Q(x,y)=0$.
Then, every $\mathbb{G}_a$-action on $S_{Q,h}$ arises as the restriction of a $\mathbb{G}_a$-action on $\mathbb{A}_{k}^{3}$ defined by
$\tilde{\Delta}\left(t,x,y,z\right)=\left(x, y+x^hb(x)t, z+x^{-h}(Q(x,y+x^hb(x)t)-Q(x,y))\right)$,
for a certain polynomial $b\left(x\right)\in k[x]$.
\end{prop}
\begin{proof}
With the notation of Remark \ref{rem: k+-actions}, it follows from Theorem \ref{thm:S_Q,h autos} that every additive group action on $S_{Q,h}$ is induced by the restriction to $S_{Q,h}$ of a collection of endomorphisms of $\mathbb{A}^3_k$ of the form $\delta_{t,b}=\Phi^s\circ \Delta_{tb(x)} \circ \Phi_s$, where $b\in k\left[x\right]$. One checks that
\[\begin{array}{lcl} \delta_{t,b}(x,y,z) &=& (x, y+x^hb(x)t, z+x^{-h}(Q(x,y+x^hb(x)t)-Q(x,y))+\alpha(x,y)(x^hz-Q(x,y))), \end{array}\]
\noindent for a certain polynomial $\alpha(x,y)\in k[x,y]$. Note that if $\alpha\left(x,y\right)\neq 0$, these endomorphisms $\delta_{t,b}$ do not define a $\mathbb{G}_a$-action on $\mathbb{A}^3_k$. However, they induce an action on $S_{Q,h}$ which coincides with the one induced by the $\mathbb{G}_a$-action $\tilde{\Delta}$ above.
\end{proof}
\begin{enavant} If $k=\mathbb{C}$, Corollary \ref{cor:analytic extension} implies in particular that every automorphism of $S$ extends to a holomorphic
automorphism of $\mathbb{A}_{\mathbb{C}}^{3}$. This leads to the following result, which contrasts
with an example, given by H. Derksen, F. Kutzschebauch and J. Winkelmann
in \cite{DKW}, of a $\mathbb{C}_{+}$-action on a
hypersurface in $\mathbb{A}_{\mathbb{C}}^{5}$ which is not extendable, even holomorphically.
\end{enavant}
\begin{prop}
Every surface $S\subset\mathbb{A}_{\mathbb{C}}^{3}$ defined by the equation
$x^{h}z-\left(1-x\right)P\left(y\right)=0$, where $h\geq2$ and where
$P\left(y\right)$ has $r\geq2$ simple roots, admits a nontrivial
$\mathbb{C}^{*}$-action which is algebraically inextendable but holomorphically
extendable to $\mathbb{A}_{\mathbb{C}}^{3}$.
\end{prop}
\begin{proof}
We let $\tilde{\theta}:\mathbb{C}^{*}\times S\rightarrow S$ be the
$\mathbb{C}^{*}$-action on the surface $S\subset\mathbb{A}_{\mathbb{C}}^{3}$
defined by the equation $x^{2}z-\left(1-x\right)P\left(y\right)=0$ constructed
in the proof of Theorem \ref{txt:Multiplicative_action}.
For every $a\in\mathbb{C}^{*}$, the automorphism $\tilde{\theta}\left(a,\cdot\right)$
of $S$ maps a closed point $\left(x,y,z\right)\in S$ to the point
$\tilde{\theta}\left(a,x,y,z\right)=\left(ax,y,a^{-2}\left(1-ax\right)\left(\left(1+x\right)z+P\left(y\right)\right)\right)$.
One checks that the holomorphic automorphism $\Phi_{a}$ of $\mathbb{A}_{\mathbb{C}}^{3}$
such that $\Phi_{a}\mid_{S}=\tilde{\theta}\left(a,\cdot\right)$ is
the following one: \begin{eqnarray*}
\Phi_{a}\left(x,y,z\right) & = & \left(ax,y,a^{-2}e^{\left(1-a\right)x}z+\left(ax\right)^{-2}P\left(y\right)\left(e^{\left(1-a\right)x}\left(x-1\right)-ax+1\right)\right).\end{eqnarray*}
Clearly, the holomorphic map $\Phi:\mathbb{C}^{*}\times\mathbb{A}_{\mathbb{C}}^{3}\rightarrow\mathbb{A}_{\mathbb{C}}^{3}$,
$\left(a,\left(x,y,z\right)\right)\mapsto\Phi_{a}\left(x,y,z\right)$
defines a $\mathbb{C}^{*}$-action on $\mathbb{A}_{\mathbb{C}}^{3}$
extending the one $\tilde{\theta}$ on $S$.
\end{proof}
\bibliographystyle{amsplain}
|
1,941,325,220,052 | arxiv | \section{Introduction}
Image co-segmentation is a problem of segmenting common and salient objects from a set of related images. Since this concept was first introduced in 2006~\cite{b1}, it has attracted a lot of attention. It has been widely used to support various computer vision applications, such as interactive image segmentation~\cite{b2}, 3D reconstruction~\cite{b3}, object co-localization~\cite{b4,b5}, etc.
Conventional co-segmentation approaches utilize handcrafted features and prior knowledge~\cite{b7,b8,b6}, which makes it difficult for them to achieve good robustness. In recent years, deep learning has been introduced to learn visual representations in a data-driven manner for improving the performance of image co-segmentation~\cite{b10,b11,b9}. It has shown promising results. However, accurate image co-segmentation is still far from our expectation. We still face challenges such as background clutter, appearance variance of the co-object across images, similarity between the co-object and non-common objects, etc. These challenges especially result in unsatisfactory predictions along object edges.
In this work, we propose a new deep learning approach based on the signed normalized distance map (SNDM) for improving the performance of image co-segmentation. The distance map is a special representation of segmentation masks, in which the value of each pixel reflects its spatial proximity to the object boundary and its sign indicates the segmentation result. We transform the segmentation problem into a SNDM regression problem and solve this regression problem by constructing a dense U-shaped Siamese network and learning it with an edge enhanced 3D IOU loss. The proposed approach is evaluated on commonly-used datasets for image co-segmentation.
Our main contributions are summarized as follows.
(1) We introduce the SNDM into image co-segmentation. The segmentation problem is transformed into a SNDM regression problem. Since SNDMs contain much richer information than binary segmentation masks, they have the potential to improve the segmentation results, especially near object boundaries.
(2) A new dense Siamese U-net is presented to complete the SNDM regression, in which the blocks in the decoder part are densely connected to utilize multi-scale features more fully.
(3) A new edge enhanced 3D IOU loss over SNDMs is proposed by taking a SNDM as a 3D shape and penalizing segmentation errors at object boundaries.
The rest of this paper is organized as follows. Section 2 reviews the related work. Section 3 presents our dense Siamese U-net and edge enhanced 3D IOU learning loss for training. The experimental results are reported in Section 4. We conclude in Section 5.
\section{Related work}
Our main contributions are related to distance-map-based image segmentation and dense connections. The corresponding previous works are briefly introduced as follows.
\subsection{Image segmentation based on distance map}
Most image segmentation methods use binary or multi-label masks as ground truth. Distance maps offer an alternative to the classical ground truth. Incorporating the distance maps of image segmentation labels into convolutional neural network (CNN) pipelines has received significant attention.
In Hayder et al.~\cite{b12}, a distance transform-based mask representation was introduced to allow instance segmentation to predict shapes beyond the limits of initial bounding boxes, which allows the network to learn more specific information about the location of the object boundary than a binary mask representation would. Tan et al.~\cite{b13} added a decoder branch to perform mask estimation and boundary distance map regression, where the distance map estimation acts as a supervisor for the mask prediction. Dangi et al.~\cite{b14} proposed a multi-task learning based regularization framework to perform the main task of semantic segmentation and an auxiliary task of pixel-wise distance map regression simultaneously. Yin et al.~\cite{b15} modeled the kidney boundaries as boundary distance maps and predicted them in a regression setting. Subsequently, the predicted boundary distance maps were used to learn pixelwise kidney masks.
Some methods use distance maps to design new loss functions. Jia et al.~\cite{b16} employed a contour loss based on distance map information to obtain the segmentation of boundary regions. Caliva et al.~\cite{b17} used distance maps as the penalty term of the cross-entropy (CE) loss, forcing the network to focus on the hard-to-segment object boundaries. Boundary loss~\cite{b18} assigned weights to the softmax probability outputs based on the ground truth distance map, while Hausdorff distance loss~\cite{b19} introduced not only the ground truth distance map but also the predicted segmentation distance map to weight the softmax probability outputs. SDF loss~\cite{b20} employed the product of the predicted distance map and the ground truth distance map to guide the distance map regression network during training.
\subsection{Dense connection}
For deep CNNs, as information about the input or gradient passes through many layers, it can vanish and “wash out” by the time it reaches the end (or beginning) of the network. Many studies have demonstrated this and related problems and emphasize the importance of using features from shallow layers to optimize features from deep layers~\cite{b21}. Huang et al.~\cite{b22} proposed DenseNet, which utilizes a dense connection method to cope with the vanishing gradients problem. In DenseNet, to preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. To address the issue of preserving spatial information in the U-Net architecture, Dong et al.~\cite{b23} designed a dense feature fusion module using the back-projection feedback scheme. They showed that the dense feature fusion module can simultaneously remedy the missing spatial information from high-resolution features and exploit the non-adjacent features. Zhang et al.~\cite{b24} proposed an edge-preserving densely connected encoder-decoder structure with a multilevel pyramid pooling module for estimating the transmission map in their single image dehazing method.
Dense connections allow feature reuse throughout the network and consequently lead to more compact and more accurate models. They alleviate the vanishing-gradient problem and strengthen feature propagation.
\section{The Proposed Method}
In this section, we present the details of our proposed method. First, the SNDM is introduced in Section 3.1. Then, our dense Siamese U-net for SNDM regression is described in Section 3.2. Finally, our edge enhanced 3D IOU loss of SNDM is presented in Section 3.3.
\begin{figure}[htbp]
\centering
\subfigure[ ]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=1in]{./firure1_a.png}
\end{minipage}%
}%
\subfigure[ ]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=1in]{./firure1_b.png}
\end{minipage}%
}%
\subfigure[ ]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=1in]{./firure1_c.png}
\end{minipage}
}%
\centering
\caption{An example of SNDM: (a) the original image, (b) the binary segmentation mask, and (c) the SNDM generated from the mask in (b)}
\label{fig1}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{./figure2.pdf}
\caption{The architecture of our dense Siamese U-net for image co-segmentation.}
\label{fig2}
\end{center}
\end{figure*}
\subsection{Signed Normalized Distance Map}
We generate a SNDM from the binary segmentation mask to obtain richer information about the structural features of objects. For each pixel in a segmentation mask, we compute the Euclidean distance between this pixel and the closest pixel on the boundary of the target object, which gives the original distance map. Then the values in the original distance map are normalized to the range [0, 1] by the local maximum distance~\cite{b25}. We further add a sign indicating foreground (positive) and background (negative). Let d(x, b) be the Euclidean distance between points x and b, and let B be the set of points on the object boundary; then the SNDM is computed by using
\begin{footnotesize}
\begin{equation}
M(x) = \left\{ {\begin{array}{*{20}{c}}
{ - \frac{{\max (D(x)) + 1 - D(x)}}{{\max (D(x))}},\quad x \in \text{background}}\\
{\frac{{\max (D(x)) + 1 - D(x)}}{{\max (D(x))}},\quad x \in \text{foreground}}
\end{array}} \right.
\label{eq1}
\end{equation}
\end{footnotesize}
where
\begin{footnotesize}
\begin{equation}
D(x) = \min_{b \in B} d(x,b)
\label{eq2}
\end{equation}
\end{footnotesize}
According to Eq.~\ref{eq1}, the value of the SNDM for a pixel far from the object boundary is very close to 0, so the weight of such a pixel is low and its sign is easily predicted wrongly. To solve this problem, we perform a linear transformation on M(x) and normalize the transformed M(x) to [0.1, 1] for the foreground and [-1, -0.1] for the background. Fig.~\ref{fig1} shows an example of SNDM.
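As a concrete illustration, the following Python sketch implements Eqs.~\ref{eq1} and~\ref{eq2} together with the linear rescaling described above. The use of \texttt{scipy} distance transforms and the exact form of the rescaling are illustrative assumptions for this sketch rather than the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def sndm(mask):
    # mask: binary array, 1 = foreground, 0 = background.
    mask = mask.astype(bool)
    # D(x) of Eq. (2): distance of each pixel to the object
    # boundary, approximated by the distance to the opposite region.
    d_fg = distance_transform_edt(mask)
    d_bg = distance_transform_edt(~mask)
    D = np.where(mask, d_fg, d_bg)
    # Eq. (1): close to 1 near the boundary, close to 0 far away.
    m = (D.max() + 1.0 - D) / D.max()
    # Linear transformation: keep magnitudes in [0.1, 1] so the
    # sign of pixels far from the boundary remains easy to learn.
    m = 0.1 + 0.9 * (m - m.min()) / (m.max() - m.min())
    # Sign: positive for foreground, negative for background.
    return np.where(mask, m, -m)
\end{verbatim}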
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{./figure3.pdf}
\centering
\caption{The illustration of dense modules: (a) dense connections; (b) the operations in a dense connection.}
\label{fig3}
\end{figure}
In the SNDM, there is a jump from -1 to 1 across the object boundary, which helps to strengthen the distinction between the foreground and the background around blurry edges.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{./figure4.pdf}
\caption{The illustration of how to compute intersection and union between the predicted SNDM and the target SNDM.}
\label{fig4}
\end{center}
\end{figure}
\subsection{Network Architecture}
As shown in Fig.~\ref{fig2}, the overall structure of our segmentation network is a Siamese U-Net enhanced by dense connections in the decoder path, and it is composed of three parts. The first part is a Siamese encoder, which consists of two identical ResNet50 networks with shared parameters for feature extraction. The second part is the correlation block, through which the correlation maps are calculated from the two feature maps. The third part is a Siamese decoder network that consists of a hierarchy of decoders, each corresponding to an encoder layer. The values in the SNDM lie in the range [-1, 1], so we use the Tanh activation function in the last decoder layer; except for this last layer, the ReLU activation function is used everywhere. Furthermore, we concatenate the input image in the last decoder layer when determining the result, which was ignored in previous U-Net based segmentation methods.
As shown in Fig.~\ref{fig2}, the decoder subnetwork consists of multiple dense modules, and the output of each module is passed to all the following modules. This dense connection is illustrated in Fig.~\ref{fig3}(a), and the computation of each dense module is illustrated in Fig.~\ref{fig3}(b), taking the final module as an example. As shown in Fig.~\ref{fig3}(a), a dense module accepts the features from the corresponding encoder layer and the features from all the previous decoder layers. Feature maps whose spatial resolution differs from that of the current module are transformed to the same resolution by a computation composed of deconvolution, batch normalization and a rectified linear unit (ReLU). Finally, all of the features are fused by a concatenation operation, as sketched below.
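A minimal PyTorch sketch of one such dense module is given below; the channel widths, the single $\times 2$ deconvolution per incoming feature, and the final fusion convolution are illustrative assumptions rather than the exact configuration of Fig.~\ref{fig3}.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseModule(nn.Module):
    # One dense decoder module: features from the matching encoder
    # layer ("skip") and from all previous decoder modules are
    # brought to the current resolution and fused by concatenation.
    def __init__(self, prev_channels, skip_channels, width=64):
        super().__init__()
        # Deconvolution + BN + ReLU per earlier decoder output.
        self.align = nn.ModuleList(
            nn.Sequential(nn.ConvTranspose2d(c, width, 4, 2, 1),
                          nn.BatchNorm2d(width),
                          nn.ReLU(inplace=True))
            for c in prev_channels)
        self.fuse = nn.Conv2d(len(prev_channels) * width + skip_channels,
                              width, 3, padding=1)

    def forward(self, skip, prev_feats):
        ups = [a(f) for a, f in zip(self.align, prev_feats)]
        # Resize once more in case an input sits more than one
        # scale level below the current module.
        ups = [F.interpolate(u, size=skip.shape[-2:]) for u in ups]
        return F.relu(self.fuse(torch.cat([skip] + ups, dim=1)))
\end{verbatim}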
\subsection{Edge Enhanced 3D IOU Loss of SNDM}
To train our SNDM regression network, we present a new loss by adapting the popular Dice loss~\cite{b26,b27} to our problem. The Dice loss can be used for data sets with imbalanced positive and negative samples, and it usually behaves better than the CE loss in segmentation applications. A binary segmentation mask can be viewed as a plane shape; the Dice loss is then defined by the intersection and the union of the two plane shapes corresponding to the labeled and the predicted segmentation, respectively. The values in an SNDM lie in [-1, 1], so an SNDM cannot be seen as a plane shape, but rather as a 3D shape; we illustrate this view in Fig.~\ref{fig4}. Accordingly, we can calculate the intersection and the union of two such 3D shapes to define an intersection over union (IOU) loss for two SNDMs. Let ${{g_i}}$ and ${{p_i}}$ be the values of the i-th foreground pixel in the labeled and the predicted SNDM, respectively, ${{u_j}}$ and ${{v_j}}$ be the values of the j-th background pixel in the labeled and the predicted SNDM, respectively, and N and M be the numbers of foreground and background pixels, respectively; then the 3D IOU loss of SNDM is
\begin{footnotesize}
\begin{equation}
L = 1 - \frac{{\sum\nolimits_i^N {\min ({p_i},{g_i}) + \sum\nolimits_j^M {\min ( - {u_j}, - {v_j})} } }}{{\sum\nolimits_i^N {\max ({p_i},{g_i}) + \sum\nolimits_j^M {\max ( - {u_j}, - {v_j})} } }}
\label{eq3}
\end{equation}
\end{footnotesize}
For computing the 3D IOU, we should use the absolute value measured at each pixel, which is why the values of background pixels are negated in Eq.~\ref{eq3}.
Our final purpose is to obtain accurate segmentation results, and the signs of the values in an SNDM encode the segmentation result. Thus, if the sign of a value in the labeled SNDM is opposite to that of the corresponding value in the predicted SNDM, a misclassification must occur. Based on this observation, we extend our 3D IOU loss by imposing a penalty on this kind of error to improve the segmentation accuracy. We compute this penalty by using
\begin{footnotesize}
\begin{equation}
F = \left\{ {\begin{array}{*{20}{c}}
{1,\quad {p_i}*{g_i} > 0}\\
{\lambda,\quad {p_i}*{g_i} < = 0}
\end{array}} \right.
\label{eq4}
\end{equation}
\end{footnotesize}
and the loss is extended to
\begin{footnotesize}
\begin{equation}
L = 1 - \frac{{\sum\nolimits_i^N {\min ({p_i},{g_i}){F_i} + \sum\nolimits_j^M {\min ( - {u_j}, - {v_j})} } {F_j}}}{{\sum\nolimits_i^N {\max ({p_i},{g_i}){F_i} + \sum\nolimits_j^M {\max ( - {u_j}, - {v_j}){F_j}} } }}
\label{eq5}
\end{equation}
\end{footnotesize}
$\lambda$ in Eq.~\ref{eq4} is a trade-off parameter balancing the objective of segmentation accuracy against the objective of SNDM regression; the factor $F_j$ for background pixels is defined analogously on $u_j$ and $v_j$. Increasing $\lambda$ increases the importance of correcting misclassified pixels. The appropriate $\lambda$ needs to be determined per application; in our experiments, the best $\lambda$ is observed to be 5.
Finally, the pixels closer to the object edge are more likely to be misclassified, so it is helpful to pay more attention to these pixels during training. Based on this idea, we add a weight to each pixel that grows with the magnitude of the labeled SNDM value, namely its square root. The final form of our loss is
\begin{footnotesize}
\begin{equation}
L = 1 - \frac{{\sum\nolimits_{\rm{i}}^N {\min ({p_i},} {g_i}){F_i}*\sqrt {{g_i}} + \sum\nolimits_{\rm{j}}^M {\min ( - {u_j},} - {v_j}){F_j}*\sqrt { - {u_j}} }}{{\sum\nolimits_{\rm{i}}^N {\max ({p_i},} {g_i}){F_i}*\sqrt {{g_i}} + \sum\nolimits_{\rm{j}}^M {\max ( - {u_j},} - {v_j}){F_j}*\sqrt { - {u_j}} }}
\label{eq6}
\end{equation}
\end{footnotesize}
Based on this loss, our dense Siamese U-Net is optimized with the Adam optimizer~\cite{b34}.
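For clarity, a minimal PyTorch sketch of Eq.~\ref{eq6} is shown below. The tensor layout, the small $\epsilon$ guard, and the application of the penalty to background pixels through the product $u_j v_j$ are assumptions of this sketch, not a verbatim extract of our training code.
\begin{verbatim}
import torch

def edge_enhanced_3d_iou_loss(pred, target, lam=5.0, eps=1e-6):
    # pred, target: predicted and labeled SNDM tensors in [-1, 1].
    fg = target > 0                       # foreground pixels
    # Eq. (4): penalty where predicted and labeled signs disagree,
    # i.e. where a misclassification is guaranteed.
    F = torch.where(pred * target > 0,
                    torch.ones_like(pred),
                    torch.full_like(pred, lam))
    # Edge weight of Eq. (6): sqrt of the labeled SNDM magnitude,
    # largest near the object boundary.
    w = torch.sqrt(target.abs().clamp(min=eps))
    # Background values are negated so that min/max act on the
    # positive "heights" of the 3D shape of Fig. 4.
    p = torch.where(fg, pred, -pred)
    g = torch.where(fg, target, -target)
    inter = (torch.minimum(p, g) * F * w).sum()
    union = (torch.maximum(p, g) * F * w).sum()
    return 1.0 - inter / (union + eps)
\end{verbatim}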
\section{Experiment}
\floatsetup[table]{capposition=top}
\newfloatcommand{capbtabbox}{table}[][0.9\FBwidth]
\begin{table*}[t]
\caption{The performance comparisons on the Internet dataset, where bold indicates the best results.}\smallskip
\centering
\resizebox{.7\columnwidth}{!}{
\smallskip\begin{tabular}{lllllllll}
\toprule
\multirow{2}{*}{Method} & \multicolumn{2}{l}{Airplane} & \multicolumn{2}{l}{Car} & \multicolumn{2}{l}{Horse} & \multicolumn{2}{l}{Average} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}\cmidrule(r){8-9}
& Precision & Jaccard & Precision & Jaccard & Precision & Jaccard & Precision & Jaccard \\
\midrule
Jerripothula et al. \cite{b8} &90.5 &0.61 &88.0 &0.71 &88.3 &0.60 &88.9 &0.64\\
Han et al. \cite{b5} &92.3 &0.60 &88.7 &0.68 &89.3 &0.58 &90.1 &0.62 \\
Yuan et al. \cite{b37} &92.6 &0.66 &90.4 &0.72 &90.2 &0.65 &91.1 &0.68\\
Li et al. \cite{b9} &94.6 &0.64 &94.0 &0.83 &91.4 &0.65 &93.3 &0.71\\
Chen et al.~\cite{chen2019show} &94.1 &0.65 &94.0 &0.82 &92.2 &0.63 &95.2 &0.78 \\
Gong et al. ~\cite{b38} &95.5 &0.76 &94.7 &0.87 &93.3 &0.65 &94.5 &0.76\\
OURS & \textbf{97.0} &\textbf{0.81} &\textbf{96.3} &\textbf{0.90} &\textbf{93.7} &\textbf{0.74} &\textbf{95.7} &\textbf{0.82}\\
\bottomrule
\end{tabular}
}
\label{tab1}
\end{table*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{./figure5.png}
\caption{The co-segmentation results generated by our approach on the Internet dataset: the examples of airplane, car and horse are shown in sequence from left to right; the input images, the ground truths, and the segmented results are shown in the top, middle and bottom rows, respectively.}
\label{fig5}
\end{center}
\end{figure*}
\subsection{Experimental Setup}
\textbf{Datasets.}
Natural images and commodity images are tested. For natural images, we use the Pascal VOC 2012 [28] and MSRC [29] datasets to train our image co-segmentation network, and then use Internet [30] and a subset of iCoseg [7] as the test sets. These four datasets are widely used in the image co-segmentation community and are composed of real-world images with large intra-class variations, occlusions and background clutter. MSRC is composed of 591 images of 21 object groups; its ground truth is roughly labeled and does not align exactly with the object boundaries. VOC 2012 includes 11,540 images with ground-truth detection boxes, of which 2913 images carry segmentation masks; only these 2913 images can be considered in our problem. Note that not all the examples in these two datasets can be used. In MSRC, some images contain only stuff without an obvious foreground, such as only sky or grassland. In VOC 2012, the objects of interest in some images vary greatly in appearance and are cluttered among many other objects, so the meaningful correlation between them is ambiguous. We exclude such images from consideration. The remaining 1743 images in VOC 2012 and 507 images in MSRC are used to construct our training and validation sets. From the training images, we sample 13200 pairs of images containing common objects as the training set and 857 pairs of images as the validation set.
The Internet dataset contains images of three object categories: airplane, car and horse. Thousands of images in this dataset were collected from the Internet. Following the same setting as previous work~\cite{b10,b30,b31}, we use the same subset of the Internet dataset where 100 images per class are available. iCoseg consists of 38 groups with 643 images in total, each group containing 17 images on average. It contains images with large variations in viewpoint, multiple co-occurring object instances, and difficult segmentation. Following the compared methods, we evaluate our approach on its widely used subset.
For commodity images, we use the SBCoseg dataset~\cite{b32}, a new challenging image dataset with simple backgrounds for evaluating co-segmentation algorithms. It contains 889 image groups with 18 images each and pixel-wise hand-annotated ground truths. The dataset is characterized by simple backgrounds produced from nearly a single color. It looks simple but is actually very challenging for current co-segmentation algorithms, as it contains four difficult cases: foreground easily confused with the background (ECFB), transparent regions in objects (TP), minor holes in objects (MH), and shadows (SD). We divide SBCoseg into 13842, 720, and 1440 images for training, validation, and testing, respectively. Each subset contains all the ECFB, TP, MH, SD, and normal cases.
\textbf{Evaluation metrics.} We use two commonly used metrics for evaluating image co-segmentation: Precision and the Jaccard index~\cite{b33}.
Precision is the percentage of correctly classified pixels in both background and foreground, which can be defined as
\begin{footnotesize}
\begin{equation}
Precision= \frac{|Segmentation \cap Groundtruth|}{|Segmentation|}
\label{eq8}
\end{equation}
\end{footnotesize}
Background pixels are taken into account in precision, so images with a large background area and a small foreground area tend to score well on it. Therefore, precision alone may not be a faithful measure of the performance of algorithms, and the Jaccard index is used to compensate for this shortcoming. The Jaccard index (denoted by Jaccard in the following) is the overlap rate of the foreground between the segmentation result and the ground truth mask, which can be defined as
\begin{footnotesize}
\begin{equation}
Jaccard = \frac{|Segmentation \cap Groundtruth|}{|Segmentation \cup Groundtruth|}
\label{eq9}
\end{equation}
\end{footnotesize}
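A small Python sketch of how these metrics can be computed from binary masks is given below. Since the text describes precision as the fraction of correctly classified pixels over the whole image, while Eq.~\ref{eq8} as printed is the usual foreground precision, both readings are included for comparison.
\begin{verbatim}
import numpy as np

def metrics(seg, gt):
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    precision = inter / max(seg.sum(), 1)   # precision as printed
    pixel_accuracy = (seg == gt).mean()     # "correctly classified
                                            #  pixels" reading
    jaccard = inter / max(union, 1)         # Jaccard index
    return precision, pixel_accuracy, jaccard
\end{verbatim}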
\textbf{Parameter setting.}
We conduct the experiments on a computer with a GTX 1080Ti GPU and implement the image co-segmentation network in PyTorch. In the experiments, the batch size for training is set to 4; the learning rate is initialized to 0.00001 and is divided by 2 whenever the loss on the validation data does not decrease for 10 epochs. The optimization procedure ends after 120 epochs. We use the Adam optimizer~\cite{b34} with a weight decay of 5e-5. Considering the limited computing resources, we resize the input images to a resolution of $512\times512$ in advance. The co-segmentation results are resized back to the original image resolution for performance evaluation.
\begin{table*}[t]
\caption{The comparisons of the Jaccard index on the iCoseg subset with 8 groups of images. Bold indicates the best results among all methods.}\smallskip
\centering
\resizebox{.70\columnwidth}{!}{
\smallskip\begin{tabular}{llllll}
\toprule
Class & Faktor and Irani ~\cite{b35}& Jerripothula et al.~\cite{b8}& Li et al.~\cite{b9}& Gong et al.~\cite{b38}& Ours\\
\midrule
Bear2& 0.70& 0.68& 0.88& \textbf{0.89}& 0.87\\
Brownbear& 0.92& 0.73& 0.92& \textbf{0.94}& \textbf{0.94}\\
Cheetah& 0.67& 0.78& 0.69& 0.78& \textbf{0.89}\\
Elephant& 0.67& 0.80& 0.85& 0.88& \textbf{0.89}\\
Helicopter& 0.82& 0.80& 0.92& \textbf{0.94}& 0.88\\
Hotballoon& 0.88& 0.80& 0.92& 0.94& \textbf{0.96}\\
Panda1& 0.70& 0.72& 0.83& 0.86& \textbf{0.90}\\
Panda2& 0.50& 0.61& 0.87& \textbf{0.88}& 0.87\\
Average& 0.78& 0.74& 0.84& 0.87& \textbf{0.90}\\
\bottomrule
\end{tabular}
}
\label{tab2}
\end{table*}
\begin{table*}[t]
\caption{The results on the SBCoseg dataset.}
\centering
\resizebox{.95\columnwidth}{!}{
\smallskip\begin{tabular}{lllllllllllll}
\toprule
\multirow{2}{*}{Method} & \multicolumn{2}{l}{Normal} & \multicolumn{2}{l}{TP} & \multicolumn{2}{l}{SD} & \multicolumn{2}{l}{MH} & \multicolumn{2}{l}{ECFB} & \multicolumn{2}{l}{Average}\\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}\cmidrule(r){8-9}\cmidrule(r){10-11}\cmidrule(r){12-13}
& Precision & Jaccard & Precision & Jaccard & Precision & Jaccard & Precision & Jaccard & Precision & Jaccard & Precision & Jaccard\\
\midrule
Gong et al. ~\cite{b38} &99.3 &0.95 &98.9 &0.96 &99.1 &0.95 &99.1 &0.94 &98.9 &0.95 &99.1 &0.95\\
OURS &99.6 &0.97 &99.4 &0.98 &99.5 &0.97 &99.3 &0.94 &99.3 &0.97 &99.4 &0.97\\
\bottomrule
\end{tabular}
}
\label{tab3}
\end{table*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{./figure6.png}
\caption{The co-segmentation results generated by our approach on the subset of iCoseg dataset.}
\label{fig6}
\end{center}
\end{figure}
\subsection{Comparison to the State-of-the-Arts}
The performance of our image co-segmentation network is tested and compared with seven state-of-the-art techniques: Faktor and Irani~\cite{b35}, Jerripothula et al.~\cite{b8}, Han et al.~\cite{b5}, Yuan et al.~\cite{b37}, Li et al.~\cite{b9}, Chen et al.~\cite{chen2019show} and Gong et al.~\cite{b38}. The compared methods include both conventional methods and the most recent deep learning based methods.
The performances on the Internet dataset of our method (denoted by OURS) and of the compared methods are listed in Table~\ref{tab1}. We run the test ten times with our method, and the values in Table~\ref{tab1} are the means over these ten runs. From the results shown in Table~\ref{tab1}, we can conclude that our method outperforms the currently best methods, both those based on deep neural networks and the traditional ones. It achieves the best performance on the airplane and car categories in terms of both Jaccard and Precision, and on the horse category it is second best in terms of Precision and best in terms of Jaccard. In particular, compared with the previous best method (Gong et al.~\cite{b38}), the relative improvements in average Precision and average Jaccard brought by our method are 1.2$\%$ and 7.9$\%$, respectively.
Fig.~\ref{fig5} shows some examples of images segmented by our method. As can be seen, our method can generate promising object segments under different types of intra-class variation, such as colors, shapes, views, scales and backgrounds.
To further evaluate the proposed method, we also test our approach on the subset of iCoseg, which includes eight groups of images: bear2, brown bear, cheetah, elephant, helicopter, hotballoon, panda1 and panda2. Table~\ref{tab2} shows the comparison results for each group in terms of Jaccard. The results show that our method gets the best performance for 5 out of 8 object groups, and it is the best one on average.
Fig.~\ref{fig6} shows some examples of co-segmentation results on the iCoseg dataset produced by our approach. We can see that our method accurately segments the objects of interest.
\subsection{Performance on commodity images}
We use the finely annotated training and validation sets to train and validate the network, respectively, and then test the final model on the test set. As this is the first work to use the SBCoseg dataset as a training set, we can only compare our results with Gong et al.~\cite{b38}. We show the overall performance of our approach and the performance for the Normal, TP, SD, MH, and ECFB cases, respectively, in Table~\ref{tab3}. We then randomly choose some examples from the test set and visualize the segmented results in Fig.~\ref{fig7}. It can be seen that our network segments these five cases very well.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{./figure7.pdf}
\caption{The co-segmentation results generated by our approach on the test data of SBCoseg dataset, where the 1st, 3rd, 5th shows origins and the 2nd, 4th, 6th column shows the segmentation results; the images from top to bottom correspond to ECFB, MH, Normal, SD, and TP, respectively.}
\label{fig7}
\end{center}
\end{figure}
\subsection{Ablation study}
\begin{table}[t]
\caption{The comparisons of ablated methods on the Internet dataset, where bold indicates the best results.}
\begin{center}
\smallskip\begin{tabular}{lll}
\toprule
Method& Precision& Jaccard\\
\midrule
Baseline& 94.5 & 0.76\\
Baseline+& 94.7& 0.78\\
Full& 95.7& 0.82\\
\bottomrule
\end{tabular}
\end{center}
\label{tab4}
\end{table}
The two contributions of this paper are the dense connections and the edge enhanced 3D IOU loss of SNDM. We justify their effectiveness through ablation experiments in this subsection. For this purpose, we make the following changes to our image co-segmentation approach and compare their performance:
1) Baseline: the traditional Siamese U-Net for image co-segmentation, the same as that reported in Gong et al.~\cite{b38}. It uses neither of our contributions: it includes no dense connections and is trained with the traditional IOU loss.
2) Baseline+: we add the dense connections to the traditional Siamese U-Net, but this network still outputs binary segmentation masks and is trained using the traditional Dice loss.
3) Full: the full version of our approach, including the dense connections and the edge enhanced 3D IOU loss of SNDM.
We repeat the experiments ten times on the Internet dataset with each of the above methods and record the mean performance. The training sets are still VOC 2012 and MSRC. The performance of these variants of our network is shown in Table~\ref{tab4}, which demonstrates that both the dense connections and the 3D IOU loss of SNDM are useful.
\section{Conclusions}
This paper has proposed a new approach to image co-segmentation by introducing dense connections into the decoder path of a Siamese U-Net and presenting a new 3D IOU loss measured over distance maps. The approach behaves well in the experiments: to our knowledge, the best performance on Internet and the iCoseg subset is obtained by our approach. Furthermore, the ablation study demonstrates that both the dense connections and the 3D IOU loss are valuable.
\section*{Acknowledgements}
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
\bibliographystyle{elsarticle-num}
\section{Introduction}
While the light quarks and leptons can be regarded as perturbative
spectators to the electroweak symmetry breaking (EWSB), the
massive top quark, with a mass at the EWSB scale, suggests that the top
quark potentially enjoys a more intimate role in the flavor
dynamics and/or horizontal symmetry breaking. A potential
implication of this is the possibility that there exist extra
interactions for the top quark, which distinguish it from
other fermions of the standard model (SM) at the electroweak
scale. If this is true, the top quark physics will be much richer
than that of the SM and possible large deviations of top quark
properties from the SM predictions are expected \cite{sensitive}.
Detailed study of such new physics effects may reveal useful
information about the underlying interactions and the mechanism of
EWSB. Such study is essential when one considers the advancement
in experiments where the forthcoming CERN Large Hadron Collider
(LHC) and the planned Next-generation Linear Collider (NLC) will
serve as top quark factories and thus make the precise
measurements of top quark properties possible~\cite{review}. In
this paper, we restrict our discussion in the framework of the
topcolor-assisted technicolor model
(TC2)\cite{tc2-Hill,tc2-Lane,tc2-2}. In this model, the third
generation is singled out to participate in a special interaction
called topcolor interaction. Such an interaction will cause the
top quark condensation which can partially contribute to EWSB and
also provide the main part of the top quark mass. The TC2 model generally
predicts a number of scalars, and some of them couple very
strongly to the top quark. So, we expect the TC2 corrections to
the top quark properties to be larger than those of other models
which treat generations in an egalitarian manner, such as the
popular minimal supersymmetric standard model (MSSM)\cite{MSSM}.
Although the various exotic production
processes\cite{top-Higgs,pp-tc,ee-tc,cao1} and the rare decay
modes of the top quark\cite{rare-decay} can serve as a robust
probe of the TC2 model, the role of the dominant decay mode $t \to
W b$ should not be underestimated\cite{Nelson}. One advantage of
this decay mode is that it is free of non-perturbative theoretical
uncertainties\cite{Bigi} and future precision experimental data
can be compared with the accurate theoretical predictions. The
other advantage of this channel is that the $W$-boson, as a decay
product, is strongly polarized and the helicity contents
(transverse-plus $W_+$, transverse-minus $W_-$ and longitudinal
$W_L$) of the $W$-boson can be probed through the measurement of
the shape of the lepton spectrum in the $W$-boson decay\cite{CDF}.
Among the three polarizations of the $W$-boson in the top quark
decay, the longitudinal mode is of particular interest since
it is useful for understanding the mechanism of EWSB\cite{equivalance}.
Therefore, the study of the top quark decaying into a polarized
$W$-boson can provide some additional information about both the
$tWb$ coupling and EWSB. On the experimental side, the CDF
collaboration has already performed the measurement of the
helicity components of the $W$-boson in top quark decays from
Run~1 data and obtained the results
\begin{eqnarray}
&&\Gamma_L/\Gamma =0.91 \pm 0.37 (stat.) \pm 0.13 (syst.), \nonumber \\
&&\Gamma_+/\Gamma =0.11 \pm 0.15 \ , \nonumber
\end{eqnarray}
where $\Gamma $ is the total decay rate of $t\to Wb$, and
$\Gamma_L $ and $\Gamma_+ $ denote respectively the rates of top
quark decaying into a longitudinal and transverse-plus $W$-boson.
Although the error of these measurements is quite large at the
present time, it is expected to be reduced significantly during
Run~2 of the Tevatron and may reach $1\% \sim 2\%$ at the LHC
\cite{Willenbrock}. On the theoretical side, the predictions of
these quantities in the SM up to one-loop level are now available
\cite{Groot1,Groot2}. The tree-level results are 0.703 for
$\Gamma_L/\Gamma$, 0.297 for $\Gamma_-/\Gamma$ and
${\cal{O}}(10^{-4})$ for $\Gamma_+/\Gamma$, and the QCD
corrections to these predictions are respectively $-1.07 \% $,
$2.19\%$ and $0.10\%$, while the electroweak corrections are at
the level of a few per mille.
In order to probe the new physics from the future precise
measurement of $\Gamma_L/\Gamma$, $\Gamma_-/\Gamma$ or
$\Gamma_+/\Gamma$, we must know the new physics contributions to
these quantities in various models. By now, the one-loop
corrections to the total width of $t \to b W$ in the framework of
MSSM have been studied in \cite{tbw-MSSM} and the corrections to
$\Gamma_L/\Gamma$, $\Gamma_-/\Gamma$ or $\Gamma_+/\Gamma$ in MSSM
were recently studied in Ref.\cite{cao}, but the similar study in
the TC2 model is absent. Studying the corrections on these
quantities in the TC2 model is the main goal of this paper.
This paper is organized as follows. In section II, we first
briefly introduce the TC2 model, then we calculate the
corrections and discuss our numerical results. The conclusions are
given in section III.
\section{Top quark decays into polarized W-boson in the TC2 model}
\subsection{The TC2 Model}
Among various kinds of dynamical electroweak symmetry breaking
models, the TC2 model \cite{tc2-Hill,tc2-Lane} is especially
attractive since it combines the fancy ideas of
technicolor\cite{technicolor} and top quark
condensation\cite{tc2-2} without conflicting with low energy
experimental data. The basic idea of the TC2 model is to
introduce two strongly interacting sectors. One sector (topcolor
interaction) provides the main part of the top quark mass but makes only a
small contribution to EWSB, while the other sector (technicolor
interaction) is responsible for the bulk of EWSB and the masses of
light fermions. At EWSB scale, this model predicts two groups of
scalars corresponding to the technicolor condensates and topcolor
condensates, respectively\cite{tc2-Hill,tc2-Lane,tc2-2}. Either of
them can be arranged into a $SU(2)$
doublet\cite{2hd,2hd1,Rainwater}, and their roles in TC2 model are
quite analogous to the Higgs fields in the model proposed in
Ref.\cite{special} which is a special two-Higgs-doublet model in
essence. Explicitly speaking, the doublet $\Phi_{TC}$ which
corresponds to the technicolor condensates is mainly responsible
for EWSB and light fermion masses; it also contributes a small
portion of top quark mass. Because its vacuum expectation value
(vev) $v_{TC}$ is near the EWSB scale($v_w$), the Yukawa couplings
of this doublet to the fermions are small. While the doublet
$\Phi_{TOPC}$ which corresponds to the topcolor condensates plays
a minor role in EWSB and only couples to the third generation
quarks, its main task is to generate the large top quark mass.
Since the vev of $\Phi_{TOPC}$ (denoted as $F_t$) cannot be
large (see below), the doublet $\Phi_{TOPC}$ can couple strongly to
top quark to generate the expected top quark mass.
One distinct feature of this model is that there exist tree level
flavor changing couplings for the two scalar
fields ($\Phi_{TC},\Phi_{TOPC}$)\cite{special}, which is
theoretically disfavored. This defect may be partially alleviated
if the mixing angle between two scalar fields, which is a model
dependent parameter, satisfies $ \tan \alpha =\frac{F_t}{v_{TC}}
$\cite{special}. In this case, only one scalar field has the
flavor changing couplings and the rearranged Lagrangian has the
following characteristics: one rearranged doublet is fully
responsible for EWSB, but with small Yukawa coupling to all
fermions; while the other rearranged doublet(denoted as: $\Phi$)
has strong Yukawa coupling with the third generation
quarks\cite{cao1}. The Lagrangian relevant to our calculation then
can be written as \footnote{The Lagrangian in Ref.\cite{Rainwater}
corresponds to the case $ \tan \alpha = 0 $. As far as the process
considered in this paper, these two natural choices of $\tan
\alpha $ do not make any significant difference in numerical
results since in both cases, $h_t $ is top condensates dominant.
But our choice of $\tan \alpha $ will make the calculation
simplified.}
\begin{eqnarray}
{\cal L}= | D_{\mu} \Phi |^2 - Y_t
\frac{\sqrt{v_{w}^{2}-F_{t}^{2}}} {v_{w}} \bar{\Psi}_L \Phi t_R -
Y_t \frac{\sqrt{v_{w}^{2}-F_{t}^{2}}} {v_{w}} \bar{t}_R \Phi
\Psi_L -m_t \bar{t} t \label{laga}
\end{eqnarray}
where, $ v_{w} \equiv v/\sqrt{2} \simeq 174$ GeV, $Y_t =
\frac{(1- \epsilon) m_t}{F_t} $ is the Yukawa coupling , $\Psi_L $
is the $SU(2)_L $ top-bottom doublet as usual, $ \Phi $ is the
rearranged $SU(2) $ doublet and takes the form
\begin{eqnarray}
\Phi =\left ( \begin{array}{c} \frac{1}{\sqrt{2}} ( h_t^0
+ i \pi_t^0 ) \\
\pi_t^- \end{array} \right )
\end{eqnarray}
and the covariant derivative is
\begin{eqnarray}
D_{\mu} = \partial_{\mu}+ i \frac{g_Y}{2} Y B_{\mu} + i
\frac{g}{2} \tau_i W_{\mu}^i
\end{eqnarray}
with the hypercharge of the doublet is $Y =-1 $ and $ g $ is $
g_{weak} $. In Eq.(\ref{laga}), the factor
$\frac{\sqrt{v_{w}^{2}-F_{t}^{2}}} {v_{w}} = \frac{v_{TC}}{v_{w}}$
indicates the mixing effect between the two doublets. The physical
particles ($\pi^0_t,\pi_t^-$) and $h_t^0$ in the $\Phi$ field are
called top-pions and top-Higgs, respectively.
From Eq.(\ref{laga}), one can learn that the TC2 parameters
relevant to our discussion are $\epsilon$, $F_{t} $ and the
masses of the top-pions and top-Higgs. Before numerical
evaluation, we recapitulate the theoretical and experimental
constraints on these parameters.
In the TC2 model, $\epsilon $ parameterizes the portion of the
extended technicolor contribution to the top quark mass. The bare
value of $\epsilon $ is generated at the ETC scale, and can obtain
a large radiative enhancement from topcolor and $U(1)_{Y_1} $ by a
factor of order $10$ at the weak scale\cite{tc2-Hill}. This
$\epsilon $ can induce a nonzero top-pion mass (proportional to
$\sqrt{\epsilon}$ )\cite{Hill} which can ameliorate the problem of
having dangerously light scalars. Numerical analysis shows that,
with reasonable choice of other input parameters, $\epsilon $ with
order $10^{-2} \sim 10^{-1} $ may induce top-pions as massive as
the top quark\cite{tc2-Hill}. Indirect phenomenological
constraints on $\epsilon $ come from low energy flavor changing
processes such as $ b \to s \gamma $ \cite{b-sgamma}. However,
these constraints are very weak. A precise value of $\epsilon$ may
be obtained by accurately measuring the coupling strength between
the top-pions/top-Higgs and the top quark at the next linear colliders.
From a theoretical point of view, $\epsilon $ with value from $ 0.01
$ to $ 0.1 $ is favored. For the mode $t\to Wb$ considered in this
paper, $\epsilon$ affects our results via
$Y_t=\frac{(1-\epsilon)m_t}{F_t}$; we conservatively fix $\epsilon =0.1$
in this paper.
Now, we turn to discuss the parameter $F_t $. The Pagels-Stokar
formula \cite{Pagels} gives the expression of $F_t $ in terms of
the number of quark colors $N_c $, the top quark mass $m_t$, and
the scale $\Lambda $ at which the condensation occurs:
\begin{eqnarray}
F_t^2= \frac{N_c}{16 \pi^2} m_t^2 \ln{\frac{\Lambda^2}{m_t^2}}.
\label{ft}
\end{eqnarray}
From this formula, one can infer that, if $t\bar{t} $ condensation
is fully responsible for EWSB, i.e. $F_t \simeq v_w \equiv
v/\sqrt{2} = 174$ GeV, then, $\Lambda $ is about $10^{13} \sim
10^{14}$ GeV. Such a large value is less attractive since, by the
original idea of technicolor theory\cite{technicolor}, one expects the
new physics scale not to lie far above the weak
scale. On the other hand, if one believes new
physics exists at TeV scale, i.e., $\Lambda \sim 1 $ TeV, then
$F_t \sim 50$ GeV, which means that $t \bar{t} $ condensation
cannot be wholly responsible for EWSB and the breaking of
electroweak symmetry needs the joint effort of topcolor and other
interactions like technicolor. By the way, Eq.(\ref{ft}) should be
only understood as a rough guide, and $F_t $ may be somewhat
lower or higher. In this paper, we use the value $F_t =50$ GeV to
illustrate the numerical results.
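A quick numerical cross-check of Eq.(\ref{ft}), sketched in Python with $N_c=3$ and the value of $m_t$ used later in our numerical evaluation, reproduces both statements above:
\begin{verbatim}
import math

Nc, mt = 3, 178.0          # number of colors, top mass in GeV

def F_t(Lam):
    # Pagels-Stokar relation for a condensation scale Lam in GeV.
    return math.sqrt(Nc / (16 * math.pi ** 2) * mt ** 2
                     * math.log(Lam ** 2 / mt ** 2))

print(F_t(1e3))    # ~ 46 GeV: Lambda ~ 1 TeV gives F_t ~ 50 GeV
print(F_t(1e13))   # ~ 173 GeV: F_t ~ v_w needs Lambda ~ 1e13 GeV
\end{verbatim}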
Finally, we focus on the mass bounds of top-pions and top-Higgs.
On the theoretical side, some estimates have been done. The mass
splitting between the neutral top-pion and the charged top-pions
should be small since such splitting comes only from the
electroweak interactions\cite{mass-pion}. Ref.\cite{tc2-Hill} has
estimated the masses of the top-pions using the quark loop approximation
and showed that the masses are allowed to be a few hundred GeV in
the reasonable parameter space. Like Eq.(\ref{ft}), such estimates
can only be regarded as a rough guide and the precise values of
top-pion masses can only be determined by future experiments. The
mass of the top-Higgs $h_{t}$ can be estimated in the
Nambu-Jona-Lasinio (NJL) model in the large $N_{c}$ approximation
\cite{NJL} and is found to be about $2m_{t}$
\cite{top-Higgs,2hd1}. This estimate is also rather crude and a
mass below the $\overline{t}t$ threshold is quite possible in a
variety of scenarios \cite{y15}. On the experimental side, the
current experiments have restricted the masses of the charged
top-pions. For example, the absence of $t \to \pi_t^+b$ implies
that $m_{\pi_t^+}
> 165$ GeV \cite{t-bpion} and $R_b$ analysis yields $
m_{\pi_t^+}> 220$ GeV \cite{burdman}. For the masses of neutral
top-pion and top-Higgs, the experimental restrictions on them are
rather weak. The direct search for the neutral top-pion
(top-Higgs) via $ p p \to t \bar{t} \pi_t^0 (h_t) $ \footnote{The
production of a top-pion (top-higgs) associated with a single top
quark at hadron colliders, $p p \to t \pi_t^0 (h_t)$, has an
unobservably small rate since there exists severe cancellation
between diagrams contributing to this process \cite{Rainwater}.}
with $\pi_t^0 (h_t) \to b \bar{b} $ has been proved to be hopeless
at Tevatron with the top-pion (top-Higgs) heavier than $120 $ GeV
\cite{Rainwater}. The single production of $\pi_t^0 $ ($h_t $ ) at
Tevatron with $\pi_t^0 $ ($h_t $) mainly decaying to $t \bar{c} $
may shed some light on detecting the neutral top-pion
(top-Higgs)\cite{top-Higgs}, but the potential for the detection
is limited by the size of the mixing between top and charm quarks.
On the other hand, the detailed background analysis is absent now.
Anyhow, these mass bounds will be greatly tightened at the
upcoming LHC \cite{pp-tc,Rainwater}. In our following discussion,
we will neglect the mass difference among the top-pions and denote
the mass of them as $m_{\pi_t}$.
\subsection{Top quark decays into the polarized W boson}
Generally speaking, the effective $tbW $ vertex at one loop level
receives contributions from penguin diagrams, fermion self-energy
diagrams as well as $W$ boson self-energy diagrams. As far as the TC2
model is concerned, the leading part of the first two kinds of
diagrams is $ {\cal{O}}(Y_t^2) $, while that for the last kind of
diagrams is ${\cal{O}} (g^2) $. Considering $Y_t^2 \gg g^2 $, we
can safely neglect the contribution of W boson self-energy. So,
the diagrams we need to calculate are only those shown in
Fig.~\ref{feynman}. The effective $tbW$ vertex can be written as
\begin{equation}
\Gamma^{\mu}=-i\frac{g V_{tb}}{\sqrt{2}}\{\gamma^{\mu}P_{L}
[1+F_{L}+\frac{1}{2}\delta Z_{b}^{L}+\frac{1}{2}\delta Z_{t}^{L}
]+\gamma^{\mu}P_{R}F_{R}+P_{t}^{\mu}P_{L}\widetilde{F}_{L}+P_{t}^{\mu}
P_{R}\widetilde{F}_{R}\}
\end{equation}
Here $P_{R,L}\equiv\frac{1}{2} \left ( 1 \pm \gamma_5 \right )$
are the chirality projectors. The form factors $F_{L,R}$ and
$\widetilde{F}_{L,R}$ represent the contributions from the
irreducible vertex loops. $\delta Z_b^L$ and $\delta Z_t^L$
denote respectively the field renormalization constants for bottom
quark and top quark. The explicit expressions are given by (we
have neglected bottom quark mass)
\begin{eqnarray}
F_L&=& \frac{(1-\epsilon)^2}{16 \pi^2 V_{tb}}
\frac{v_w^2-F_t^2}{v_w^2} \left ( \frac{ m_t}{\sqrt{2} F_t} \right
)^2 (2 C_{24}^e + 2 C_{24}^f ) \label{factor1} \\
F_R&=& 0 \\
\widetilde{F}_L &= & 0 \\
\widetilde{F}_R &=& \frac{(1-\epsilon)^2}{16 \pi^2 V_{tb}}
\frac{v_w^2-F_t^2}{v_w^2} \left ( \frac{ m_t}{\sqrt{2} F_t} \right
)^2 2 m_t ( C_0^e + 2 C_{11}^e + C_{21}^e -C_{12}^e -C_{23}^e
\nonumber \\
& & + C_{11}^f + C_{21}^f -C_0^f - C_{11}^f -C_{23}^f -C_{12}^f )
\\
\delta Z_t^L &=&\frac{(1-\epsilon)^2}{16 \pi^2 }
\frac{v_w^2-F_t^2}{v_w^2} \left ( \frac{ m_t}{\sqrt{2} F_t} \right
)^2 [ B_1^b +B_1^c+2 m_t^2 (B_1^{a \prime} +B_1^{b \prime}+B_1^{c
\prime}+ B_0^{b \prime} -B_0^{c \prime})]
\\
\delta Z_b^L &=& \frac{(1-\epsilon)^2}{16 \pi^2 }
\frac{v_w^2-F_t^2}{v_w^2} \left ( \frac{ m_t}{\sqrt{2} F_t} \right
)^2 2 B_1^d \label{factor2}
\end{eqnarray}
where the functions $B_{0, 1} $ and $ C_{0, i j} $ are
respectively two-point, three-point Feynman integrals defined
in\cite{Axelrod} and their functional dependences are
\begin{eqnarray}
C^e_{0,i j} &= & C_{0, i j} (-p_t, p_w, m_t, m_{\pi_t}, m_{\pi_t}
), \nonumber \\
C^f_{0,i j} &= & C_{0,i j} (-p_t, p_w, m_t, m_{h_t}, m_{\pi_t} ),
\nonumber \\
B^a_{0,1} & = & B_{0,1} (-p_t, m_b, m_{\pi_t} ), \nonumber \\
B^b_{0,1} & = & B_{0,1}(-p_t, m_t, m_{\pi_t} ), \nonumber \\
B^c_{0,1} & = & B_{0,1}(-p_t, m_t, m_{h_t} ), \nonumber \\
B^d_{0,1} & = & B_{0,1}(-p_b, m_t, m_{\pi_t} ), \nonumber
\end{eqnarray}
respectively, and $B^\prime_{0,1} $ denotes $\partial B_{0,
1}/\partial p^2 $.
The rate of the top quark decaying into the polarized $W$-boson
can be obtained either by the helicity amplitude method\cite{helicity}
or by the projection technique introduced in
Ref.\cite{Groot1,Groot2}. Their expressions are given by
\begin{eqnarray}
\Gamma_L =\frac{g^2 m_t |V_{tb} |^2 }{64 \pi } \frac{(1-
x^2)^2}{x^2} &&\left \{ 1+ Re ( \delta Z_b^L + \delta Z_t^L +
2 F_L ) + Re(\widetilde{F}_R) m_t (1- x^2) \right \}, \label{gammal} \\
\Gamma_-=\frac{g^2 m_t |V_{tb} |^2 }{32 \pi } (1-x^2)^2 & & \left
\{ 1+ Re ( \delta Z_b^L + \delta Z_t^L + 2 F_L ) \right \} ,
\label{gammam}
\end{eqnarray}
where $\Gamma_L $ ($\Gamma_- $) denotes the rate of the top quark
decaying into the longitudinal (transverse-minus) $W$-boson and
$x=M_W/m_t $. In deriving Eqs.(\ref{gammal},\ref{gammam}), we
have neglected the $b$-quark mass for simplicity which will
produce an uncertainty of several per mille on $ \Gamma_{L,-}$.
Another consequence of neglecting $m_b$ is $\Gamma_+ =0$ due to
angular momentum conservation \cite{Groot1}. Then, the total decay
rate of $t \to b W$ is obtained by $\Gamma=\Gamma_L+\Gamma_-$. For
convenience, we define the ratios
\begin{eqnarray}
\hat{\Gamma}_{L,-}= \Gamma_{L,-}/\Gamma, \label{def}
\end{eqnarray}
which can be measured in experiments. We present the relative TC2
corrections as: $\delta \hat{\Gamma}_{L,-}/\hat{\Gamma}_{L,-}^0$
with $\delta \hat{\Gamma}_{L,-}$ denoting the TC2 corrections and
$\hat{\Gamma}_{L,-}^0$ denoting the SM predictions. In our
numerical evaluation, we fix $m_t =178$ GeV\cite{nature}, $m_b
=0$, $M_W = 80.451$ GeV and $g_{weak}=0.654$, and vary
$m_{h_t}$ and $m_{\pi_t}$ in the experimentally allowed region.
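As a simple consistency check of Eqs.(\ref{gammal}) and (\ref{gammam}), note that at tree level (and with $m_b=0$) they give $\Gamma_L/\Gamma_-=1/(2x^2)$ and hence $\hat{\Gamma}_L=1/(1+2x^2)$. The following short Python check reproduces the SM tree-level values $0.703$ and $0.297$ quoted in the introduction for $m_t\simeq 175$ GeV:
\begin{verbatim}
MW = 80.451                            # W mass in GeV
for mt in (175.0, 178.0):
    x2 = (MW / mt) ** 2
    GL = 1.0 / (1.0 + 2.0 * x2)        # tree-level Gamma_L / Gamma
    print(mt, round(GL, 3), round(1.0 - GL, 3))
# 175.0 0.703 0.297
# 178.0 0.71 0.29
\end{verbatim}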
Fig.~\ref{wid} shows the relative TC2 correction to
the decay width $\Gamma (t \to W b)$. One distinctive feature of
such correction is that, for fixed $ m_{h_t} $, after the
deviation from the SM predictions reaches its minimum at a certain
value of $m_{\pi_t} $, the relative correction increases
monotonously. This indicates that there are cancellations among
different diagrams\footnote{If different diagram contributions are
constructive, then for fixed $m_{h_t}$, the deviation would decrease
monotonously with increasing $m_{\pi_t} $ to approach a
constant.}. Another feature is that the correction is negative in
all allowed parameter space. Noticing the fact that QCD correction
to $ \Gamma (t \to W b) $ is $-8.54 \% $\cite{Groot1,Groot2}, one
can conclude that the TC2 correction can enlarge the quantum
effects. From Fig.~\ref{wid}, one can see that, for a light top-Higgs,
the relative correction can reach $ -8\% $. Comparing with the
correction in the popular MSSM model where the SUSY-QCD correction
and the SUSY-EW correction tend to cancel each other\cite{cao}, we
find that the TC2 correction is larger than either of SUSY-QCD and
SUSY-EW correction and TC2 correction might be detectable at the
future high energy colliders\cite{top,review}. In Fig.\ref{wid},
we have fixed $F_t$ as 50 GeV. To get the correction for any other
choice of $F_t $, we just multiply the results of Fig.\ref{wid} by
a factor $ \frac{v_w^2 -F_t^2}{v_w^2 -50^2} \frac{ 50^2}{F_t^2} $
(for example, a factor of 1.6 for $F_t =40$ GeV and 0.35
for $F_t =80$ GeV).
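This rescaling factor is elementary to check numerically, e.g. in Python:
\begin{verbatim}
vw = 174.0                             # v_w in GeV, as in the text

def factor(Ft):
    return (vw**2 - Ft**2) / (vw**2 - 50.0**2) * 50.0**2 / Ft**2

print(factor(40.0))    # ~ 1.6
print(factor(80.0))    # ~ 0.34 (quoted above as 0.35)
\end{verbatim}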
In Fig.~\ref{widl} and Fig.~\ref{wid-}, we show the relative TC2
correction to $\hat{\Gamma}_L $ and $ \hat{\Gamma}_- $ as a
function of $ m_{\pi_t} $. One can see that the correction is
below $1 \% $, smaller than the corresponding QCD
correction\cite{Groot1,Groot2} but larger than the corresponding
MSSM corrections\cite{cao}. Comparing with the results in
Fig.\ref{wid}, we can see that the correction to
$\hat{\Gamma}_{L,-} $ is smaller than that to total decay width.
This is due to the cancellation between the correction to $
\Gamma_{L,-} $ and that to $\Gamma$ in Eq.~(\ref{def}). For example, when we
take $m_{h_t}=120$ GeV and $m_{\pi_t}=750$ GeV, the relative
correction to the total width $\delta\Gamma/\Gamma^0$ is about
$-8\%$($\delta\Gamma_L/\Gamma^0=-5.6\%$ and
$\delta\Gamma_{-}/\Gamma^0=-2.4\%$, respectively), but for the
same values of $m_{h_t}$ and $m_{\pi_t}$,
$\delta\hat{\Gamma}_L/\hat{\Gamma}_L^0=0.13\%$ and
$\delta\hat{\Gamma}_{-}/\hat{\Gamma}_{-}^0=-0.34\%$. Comparing
with the experimental data, we can conclude that the theoretical
prediction of $\hat{\Gamma}_{L,-}$ including the TC2 correction
should be within the experimentally allowed region.
\section{Conclusion}
In this paper, we study the TC2 correction to the mode $t \to Wb$.
We find that, due to the cancellations among different diagrams,
the TC2 correction to the width $\Gamma (t \to b W) $ is generally
several percent in the allowed parameter region. The maximum value
of the relative correction can reach $8\%$, which is larger than
that of the minimal supersymmetric model and comparable with the QCD
correction. Such a TC2 correction should be observable at future
high energy colliders. We also study the TC2 correction to the
branching ratios of the top quark decaying into differently polarized $W$
boson states and find that the relative TC2 correction is below $1\%$.
\section*{Acknowledgment}
This work is supported by the National Natural Science Foundation
of China(Grant Nos. 10175017 and 10375017), the Excellent Youth
Foundation of Henan Scientific Committee(Grant No. 02120000300),
and the Henan Innovation Project for University Prominent Research
Talents(Grant No. 2002KYCX009).
\section{Introduction}\label{sec:Introduction}
\input{parts/introduction}
\section{Architecture description}\label{sec:Architecture}
\input{parts/architecture}
\section{Detection, classification, and context identification}\label{sec:GroundTruth}
\input{parts/groundTruth}
\section{Clustering, novelty detection, and active learning}\label{sec:Learning}
\input{parts/learning}
\section{Online model generation and improvement}\label{sec:Model}
\input{parts/model}
\section{Discussion}\label{sec:Conclusion}
\input{parts/conclusion}
\input{parts/acknowledgment}
\bibliographystyle{splncs03}
\section*{\large Acknowledgment}
This preliminary work partly results from the project DeCoInt$^2$, supported by the German Research Foundation (DFG) within the priority program SPP 1835: "Kooperativ interagierende Automobile", grant numbers DO~1186/1-1 and FU~1005/1-1 and SI~674/11-1.
The work is also supported by "Zentrum Digitalisierung Bayern".
\section{Introduction}
\input{introduction}
\section{Preliminaries}
\label{preliminaries}
\input{preliminar}
\section{Product states}
\label{products}
\input{products}
\section{Separable states}
\label{separable}
\input{separable}
\section{Multiple copies}
\label{multiple}
\input{multiple}
\section{1$\times$1 modes}
\label{sec:1x1}
\input{1x1}
\subsection{Thermal states of fermionic chains}
\input{1x1particular}
\section{Detailed proofs}
\label{sec:proofs}
\input{tabsummary}
\input{proofs}
\subsection{$1\times1$-modes system}
\subsection{General states}
With these considerations, we may give the following definitions
of a product state. They are summarized in Table~\ref{tab:prod}.
\input{tabproduct}
\begin{itemize}
\item
We may call a state product if there exists some state
acting on the Fock space
of the form
$\tilde{\rho}=\tilde{\rho}_A\otimes\tilde{\rho}_B$,
and producing the same expectation values for all local observables.
Formally,~\cite{foot2}
\begin{eqnarray}
\nonumber
\product{0} &:=& \left\{ \rho :\, \exists\tilde{\rho}_A,\,\tilde{\rho}_B, \, [\tilde{\rho}_{A(B)},\parity_{A(B)}]=0\quad \mathrm{s.t.} \right.\\
&&\left. \quad \rho(A_\pi \, B_\pi) =\tilde{\rho}_A(A_\pi)\tilde{\rho}_B(B_\pi) \right.\\
\nonumber
&&\left. \quad \quad\forall A_\pi\in {\cal A}_{\pi},\, B_\pi\in {\cal B}_{\pi} \right\}.
\end{eqnarray}
\item
Alternatively, product states may be defined as those for which
the expectation value of products of local observables factorizes,
\begin{eqnarray}
\nonumber
\product{1} &:=& \left\{ \rho : \rho(A_\pi \, B_\pi) =\rho(A_\pi) \rho(B_\pi)
\right.\\
&&\left. \quad \quad \forall A_\pi\in {\cal A}_{\pi},\, B_\pi\in {\cal B}_{\pi} \right\}.
\end{eqnarray}
\item
At the level of the Fock representation, a product state
can be defined as that writable as a tensor product,
\begin{equation}
\product{2} := \left\{ \rho : \rho=\rho^A \otimes \rho^B \right\}.
\end{equation}
\item
From the point of view of the subalgebras of observables for both
partitions, one may ignore the commutation with the parity operator
and require factorization of any product of observables for a product
state~\cite{moriya05}. This yields another set
\begin{equation}
\product{3} :=\left\{ \rho : \rho(A \, B) =\rho(A) \rho(B)
\quad \forall A\in {\cal A},\, B\in {\cal B} \right\}.
\end{equation}
\end{itemize}
The two first definitions are equivalent,
$\product{0}\equiv\product{1}$.
They correspond to states
with a separable projection onto the diagonal blocks that preserve
parity in each of the subsystems.
This means that
$$\sum_{\alpha,\,\beta=e,\,o} \proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\alpha}{A}\otimes \proj{\beta}{B},
$$
is a product in the sense of \product{2}.
The three remaining sets are strictly different.
In particular
$\product{2} \subset \product{1}$
and
$\product{3} \subset \product{1}$,
but $\product{3} \neq \product{2}$.
The inclusion $\product{2},\product{3} \subseteq \product{1}$
is immediate from the definitions.
The non-equality of the sets can be seen from explicit examples such as those
shown in Table~\ref{tab:prod}.
The difference between $\product{3}$ and $\product{2}$, however,
is limited to non-physical states, i.e. those
not commuting with parity~\cite{moriya05}.
\subsection{Physical states}
Since parity is a conserved quantity in the systems of interest,
the only physical states are those commuting with $\parity$.
It makes then sense to restrict the study of entanglement to such
states.
By applying each of the above definitions to the physical states,
$\pset$, we obtain the following sets of physical product states.
\begin{itemize}
\item
$\productpar{1} := \product{1} \cap \pset= \product{0} \cap \pset$
\item
$\productpar{2} := \product{2} \cap \pset$
\item
$\productpar{3} := \product{3} \cap \pset$
\end{itemize}
We notice that
$\rho\in\productpar{2}$ is equivalent to $\rho=\rho_A\otimes\rho_B$
where both factors are also
parity conserving.
With the parity restriction, the three sets are related by
\begin{equation}
\productpar{3}=\productpar{2}\subset\productpar{1}.
\end{equation}
The proofs of all the relations above
are shown in section~\ref{subsec:prod}.
\subsection{Pure states}
For pure states, all $\productpar{i}$ reduce to the same set.
If the state vector is written in a basis of well-defined parity
in each subsystem,
it is possible to show that the condition of
$\productpar{1}$ requires that such an expansion has a single
non-vanishing coefficient, and thus the state can be written as
a tensor product also with the definition of $\product{2}$.
\subsection{Product states}
\label{subsec:prod}
\newtheorem{incl}{}[subsection]
\begin{incl}
{$\product{0}\equiv\product{1}$}
\label{proof1}
\end{incl}
\begin{proof}
States in $\product{0}$ satisfy the restriction
that
$$
\rho(A_\pi \, B_\pi) =\tilde{\rho}_A(A_\pi)\tilde{\rho}_B(B_\pi)
$$
for some product state $\tilde{\rho}$ and all parity conserving operators
$A_\pi, \, B_\pi$.
Since the only elements of $\rho$ contributing to such expectation values
are in the diagonal blocks $\proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\alpha}{A}\otimes \proj{\beta}{B}$, ($\alpha,\,\beta=e,\,o$),
the condition is equivalent to saying that the sum of these blocks is equal to
the (parity commuting) product state
$\tilde{\rho}=\tilde{\rho}_A\otimes\tilde{\rho}_B$.
The condition for $\rho\in\product{1}$ turns out to be equivalent to this.
We may decompose the state as a sum
$$
\rho=\sum_{\alpha,\,\beta=e,o}\proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\alpha}{A}\otimes \proj{\beta}{B}+R := \rho'+R,
$$
where $\rho'$ is a density matrix commuting
with $\parity_A$ and $\parity_B$, and
$R$ contains only the terms that violate parity in some subspace.
It is easy to check that $R$ gives no contribution to expectation
values of the form $\rho(A_\pi \, B_\pi)$,
so that $\rho'(A_\pi \, B_\pi)=\rho'(A_\pi)\rho'(B_\pi)$.
On the other hand, an operator that is odd under parity has the form
$A_{\not\pi}=\proj{e}{A}A_{\not\pi}\proj{o}{A}+\proj{o}{A}A_{\not\pi}\proj{e}{A}$.
Therefore
$\rho'(A_{\not\pi} \, B_{\not\pi})=0=\rho'(A_{\not\pi})\rho'(B_{\not\pi})$.
Since $\rho'$ commutes with parity, and then odd observables give zero
expectation value,
we have checked that all expectation values $\rho'(A\,B)$ factorize
and then $\rho'$ is a product.
\end{proof}
\begin{incl}
{$\product{2}\subset\product{1}$}
\label{proof2}
\end{incl}
\begin{proof}
The inclusion $\product{2}\subseteq\product{1}$ is immediate
from the fact that the
products of even observables in the
$A_\pi\, B_\pi$
correspond, via a Jordan-Wigner transformation, to
products of local even operators $\tilde{A}_e\,\tilde{B}_e$
in the Fock representation, and thus they factorize for any state in
$\product{2}$.
The strict character of the inclusion is shown with an explicit example as
$\ensuremath{\rho_{\product{1}}}$, in Table~\ref{tab:prod}.
\end{proof}
\begin{incl}
{$\product{3}\subset\product{1}$}
\end{incl}
\begin{proof}
The inclusion $\product{3}\subseteq\product{1}$ is immediate
from the definitions of both sets.
The example $\ensuremath{\rho_{\product{1}}}\notin\product{3}$
(Table~\ref{tab:prod}) shows it is strict.
\end{proof}
\begin{incl}
{$\product{2}\neq\product{3}$}
\end{incl}
\begin{proof}
The example
\begin{eqnarray}
\nonumber
\ensuremath{\rho_{\product{2}}} &=&\PiinPiii \\
\nonumber
&=&
\frac{1}{2}\left (
\begin{array}{r r}
1 & -1 \\
-1 & 1
\end{array}
\right ) \otimes
\frac{1}{2}\left (
\begin{array}{r r}
1 & 1 \\
1 & 1
\end{array}
\right ),
\end{eqnarray}
fulfills
$\ensuremath{\rho_{\product{2}}} \in \product{2}$, but $\ensuremath{\rho_{\product{2}}} \notin \product{3}$
because it has a non-vanishing expectation value for products of odd
operators, e.g.
$\langle c_2 c_3 \rangle_{\ensuremath{\rho_{\product{2}}}}=i\neq 0$.
On the other hand, it is also possible to construct a state as
$$
\ensuremath{\rho_{\product{3}}} =\PiiinPii,
$$
satisfying
$\ensuremath{\rho_{\product{3}}} \in \product{3}$ (it is easy to check the explicit
characterization for $1\times1$ modes of Table~\ref{tab:1x1}),
but $\ensuremath{\rho_{\product{3}}} \notin \product{2}$ because it is not possible
to write it as a tensor product.
\end{proof}
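The first statement can also be checked numerically. The following Python sketch reads the $c_k$ as the Majorana operators obtained from the Jordan--Wigner transformation~(\ref{eq:JW}), with the convention $c_{2k-1}=\left(\prod_{j<k}\sigma^z_j\right)\sigma^x_k$ and $c_{2k}=\left(\prod_{j<k}\sigma^z_j\right)\sigma^y_k$; the overall sign of $\langle c_2 c_3\rangle$ depends on this convention, but its non-vanishing does not.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

rhoA = 0.5 * (I2 - X)            # = 1/2 [[1,-1],[-1,1]]
rhoB = 0.5 * (I2 + X)            # = 1/2 [[1, 1],[ 1,1]]
rho = np.kron(rhoA, rhoB)        # the product state rho_{P2}

c2 = np.kron(Y, I2)              # c_2 = sigma^y (x) 1
c3 = np.kron(Z, X)               # c_3 = sigma^z (x) sigma^x
print(np.trace(rho @ c2 @ c3))   # = +/- i, in particular nonzero
\end{verbatim}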
\begin{incl}
{$\productpar{2}\subset\productpar{1}$}
\end{incl}
\begin{proof}
The non-strict inclusion is immediate from the result for general
states~(\ref{proof2}).
Actually, the same example $\ensuremath{\rho_{\product{1}}}$ is parity preserving and thus
shows the non-equivalence of both sets.
\end{proof}
\begin{incl}
{$\productpar{2}\equiv\productpar{3}$}
\end{incl}
\begin{proof}
For any physical state $[\rho,\,\parity]=0$, the expectation value of
any odd operator is null.
On the other hand, all $\product{3}$ states (in particular those in
$\productpar{3}$) fulfill
$\rho(A_{\not\pi} B_{\not\pi})=0$~\cite{moriya05}.
Since a state in $\productpar{2}$ can be written as a product of two factors
each of them commuting with the local parity operator,
then the only non--vanishing expectation values in these
sets of states correspond to
products of parity conserving local observables.
It is then enough to check that
$$
\rho(A_{\pi} B_{\pi})=\rho(A_{\pi})\rho(B_{\pi})
\iff
\rho=\rho_A \otimes \rho_B.
$$
Given the state $\rho$ we can look at the Fock
representation and write it as an expansion in the Pauli operator basis,
where coefficients correspond to expectation values of products
$\sigma_{a_1}^{(1)}\otimes \ldots \otimes \sigma_{a_m}^{(m)}$.
Making use of the Jordan-Wigner transformation~(\ref{eq:JW}),
any product
of even observables in the Fock space is mapped to a product of even
operators in the subalgebras $\cal{A}$, $\cal{B}$.
So it is easy to see that the property of factorization is equivalent
in both languages and thus
$$
\rho\in\productpar{2}\iff\rho\in\productpar{3}.
$$
This equivalence implies also that of the convex hulls,
$\separpar{2}\equiv\separpar{3}$.
\end{proof}
\subsubsection{Pure states}
\begin{incl}
{For pure states $\productpar{1}\iff\productpar{2}$}
\end{incl}
\begin{proof}
A pure state $|\Psi\rangle\langle\Psi| \in \pset$ is
such that $\parity \Psi= \pm\Psi$.
We consider the even case (the same reasoning applies for the odd one).
Since such a state vector is a direct sum of two components,
one of them even with respect to both $\parity_A,\,\parity_B$
and the other one odd
with respect to both local operations,
and applying the Schmidt decomposition to each of those components,
it is always possible to write the state as
$$
|\Psi \rangle =\sum_i \alpha_i |e_i\rangle |\varepsilon_i\rangle +
\sum_i \beta_i|o_i\rangle |\theta_i\rangle,
$$
where $\{|e_i\rangle\}$ ($\{|\varepsilon_i\rangle\}$) are mutually
orthogonal states with $\parity_A |e_i\rangle=+|e_i\rangle$
($\parity_B |\varepsilon_i\rangle=+|\varepsilon_i\rangle$)
and $\{|o_i\rangle\}$ ($\{|\theta_i\rangle\}$)
are mutually
orthogonal states with $\parity_A |o_i\rangle=-|o_i\rangle$
($\parity_B |\theta_i\rangle=-|\theta_i\rangle$).
The condition of $\productpar{1}$ imposes that
$\langle \Psi |A_\pi\,B_\pi|\Psi\rangle=
\langle \Psi |A_\pi|\Psi\rangle \langle \Psi |B_\pi|\Psi\rangle$ for all
parity preserving observables.
In particular, we may consider those of the form
\begin{eqnarray}
A_\pi&=&\sum_k A_k^e|e_k\rangle\langle e_k|+A_k^o |o_k\rangle\langle o_k|,
\nonumber
\\
B_\pi&=&\sum_k B_k^e|\varepsilon_k\rangle\langle \varepsilon_k|+B_k^o |\theta_k\rangle\langle \theta_k|.
\nonumber
\end{eqnarray}
On these observables the restriction reads
\begin{eqnarray}
\left(
\sum_i |\alpha_i|^2 A_i^e
\right.\!\!
&+& \!\!\left.
\sum_i |\beta_i|^2 A_i^o
\right)
\!\left(
\sum_i |\alpha_i|^2 B_i^e\! +\! \sum_i |\beta_i|^2 B_i^o
\right)
\nonumber
\\
&&=\sum_i |\alpha_i|^2 A_i^e B_i^e + \sum_i |\beta_i|^2 A_i^o B_i^o
\nonumber
\end{eqnarray}
Let us assume that the state $\Psi$ has more than one term in the
even-even sector, i.e. $\alpha_1\neq 0$ and $\alpha_2\neq 0$ (we may
reorder the sum, if necessary).
Then we apply the condition to $A=A_1^e|e_1\rangle\langle e_1|$,
$B=B_2^e|\varepsilon_2\rangle\langle \varepsilon_2|$, and applying the equality we deduce
$|\alpha_1|^2 A_1^e |\alpha_2|^2 B_2^e=0$, and thus $|\alpha_1| |\alpha_2|=0$,
so that there can only be a single term in the
$|e_i\rangle |\varepsilon_i\rangle$ sum.
An analogous argument shows that the sum of
$|o_i\rangle |\theta_i\rangle$ terms must also contain at most a single
contribution for the state to be in $\productpar{1}$.
By applying the equality to operators
$A=A_1^o|o_1\rangle\langle o_1|$ and
$B=B_1^e|\varepsilon_1\rangle\langle \varepsilon_1|$
we also rule out the possibility that $\Psi$ has a contribution from
each sector.
Then, if $\Psi\in\productpar{1}$, it has a single term in the Schmidt
decomposition, and therefore it is a product in the sense of $\productpar{2}$.
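As a toy numerical illustration of this step (our own sketch, not part
of the original argument; the basis and coefficients are ad hoc), one can
check that a two-term even--even state already fails the factorization
on the diagonal observables used above:
\begin{verbatim}
import numpy as np

# Toy check (illustrative only):
# |Psi> = a1 |e1, eps1> + a2 |e2, eps2> on C^2 (x) C^2,
# with A = |e1><e1| and B = |eps2><eps2| as in the proof.
a1, a2 = np.sqrt(0.7), np.sqrt(0.3)
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = a1 * np.kron(e1, e1) + a2 * np.kron(e2, e2)

A, B, I2 = np.outer(e1, e1), np.outer(e2, e2), np.eye(2)
lhs = psi @ np.kron(A, B) @ psi                     # <A B> = 0
rhs = (psi @ np.kron(A, I2) @ psi) * (psi @ np.kron(I2, B) @ psi)
print(lhs, rhs)   # 0.0 versus |a1|^2 |a2|^2 = 0.21
\end{verbatim}
The mismatch $0\neq|\alpha_1|^2|\alpha_2|^2$ is exactly the obstruction
exploited above.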
\end{proof}
\subsection{Separable states}
\label{subsec:separ}
\begin{incl}
{$\separ{2}\subset\separ{1}$ and $\separpar{2'}\subset\separpar{1}$}
\label{proof4}
\end{incl}
\begin{proof}
The first (non-strict) inclusion is immediate from the relation between
product states (proof~\ref{proof2}).
To see that both sets are not equal, we use again an explicit example.
It is possible to construct a state
in $\productpar{1}\subset\separpar{1}$ which has non-positive partial
transpose and is thus not in $\separ{2}$.
However, this has to be found in bigger systems than the previous
counterexamples, as in a 2-mode system the conditions for $\separpar{1}$ and
$\separpar{2}$ are identical, as shown in Table~\ref{tab:1x1}.
By constructing random matrices $\rho_A \otimes \rho_B$ in the
parity preserving sector, and adding off-diagonal terms $R$ which are
also randomly chosen, we
find a counterexample $\ensuremath{\rho_{\separpar{1}}}$ in a $2\times 2$-system
such that $\ensuremath{\rho_{\separpar{1}}} \in \productpar{1}$ by construction, but
its partial transposition with respect to the subsystem B,
$\ensuremath{\rho_{\separpar{1}}}^{T_B}$, has a negative eigenvalue.
When taking the intersection with the set of physical states, the inclusion
still holds, and it is again strict, since the counterexample
$\ensuremath{\rho_{\separpar{1}}}$ is in particular in $\productpar{1}$.
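A minimal sketch of such a random search (our own illustration, assuming
\texttt{numpy}; the amplitude of the off-diagonal term and the number of
trials are arbitrary choices, and whether a violation is found depends on
the random draw):
\begin{verbatim}
import numpy as np

# Illustrative random search for an NPT state in P1 (sketch only).
rng = np.random.default_rng(1)

def rand_psd(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return g @ g.conj().T

# one subsystem = 2 fermionic modes -> local dimension 4
P1 = np.diag([1.0, -1.0])            # single-mode parity
P = np.kron(P1, P1)                  # local (two-mode) parity operator
Pe, Po = (np.eye(4) + P) / 2, (np.eye(4) - P) / 2
Pee, Poo = np.kron(Pe, Pe), np.kron(Po, Po)

def parity_part(rho):                # enforce [rho, P] = 0
    return Pe @ rho @ Pe + Po @ rho @ Po

def ptranspose_B(rho, dA=4, dB=4):   # partial transpose on subsystem B
    r = rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1)
    return r.reshape(dA * dB, dA * dB)

found = False
for _ in range(20000):
    rhoA = parity_part(rand_psd(4)); rhoA /= np.trace(rhoA)
    rhoB = parity_part(rand_psd(4)); rhoB /= np.trace(rhoB)
    X = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
    R = Pee @ X @ Poo                # off-diagonal (ee <-> oo) term
    R = R + R.conj().T
    rho = np.kron(rhoA, rhoB) + 0.05 * R
    if np.linalg.eigvalsh(rho).min() < -1e-12:
        continue                     # not a positive (physical) state
    if np.linalg.eigvalsh(ptranspose_B(rho)).min() < -1e-9:
        found = True                 # NPT, hence not separable
        break

print("NPT counterexample found:", found)
\end{verbatim}
By construction the block-diagonal part of each accepted $\rho$ is the
tensor product $\rho_A\otimes\rho_B$, so $\rho\in\productpar{1}$, while a
negative eigenvalue of $\rho^{T_B}$ certifies $\rho\notin\separ{2}$.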
\end{proof}
\begin{incl}
{$\separpar{1}\equiv\separ{1} \cap \pset$}
\end{incl}
\begin{proof}
Obviously,
$\separpar{1}\subseteq\separ{1} \cap \pset$.
To see the converse direction of the inclusion, we consider a state
$\rho\in\separ{1}\cap\pset$. Then there is a decomposition
$\rho=\sum_i\lambda_i\rho_i$ with $\rho_i\in\product{1}$, but not
necessarily in $\pset$.
We may split the sum into the even and odd terms under the parity
operator,
$$
\rho=\rho_{\pi}+\rho_{\not\pi}:=\sum_i \lambda_i \frac{1}{2}(\rho_i+\parity \rho_i \parity)
+\sum_i \lambda_i \frac{1}{2}(\rho_i-\parity \rho_i \parity).
$$
The second term, $\rho_{\not\pi}$, gives no contribution to operators that commute
with $\parity$. Since $\rho$ is physical, this term also gives zero
contribution to odd observables, so that
$$
\rho=\sum_i \lambda_i \frac{1}{2}(\rho_i+\parity \rho_i \parity).
$$
It only remains to be shown that each
$\rho_{i_\pi}:= \frac{1}{2}(\rho_i+\parity \rho_i \parity)$ is still a product state
in $\productpar{1}$.
But for parity commuting observables all the contributions come
from the symmetric part of the density matrix,
$\rho_i(A_\pi B_\pi)=\rho_{i_\pi}(A_\pi B_\pi)$,
so the condition for $\productpar{1}$ also holds for $\rho_{i_\pi}$.
Therefore we have found a convex decomposition of $\rho$ in terms of
product states all of them conforming to the symmetry.
The analogous relation for $\separpar{2}$ was shown in~\cite{moriya05}.
\end{proof}
\begin{incl}
{$\separpar{2}\subset\separpar{2'}$}
\end{incl}
\begin{proof}
Since $\productpar{2}=\product{2}\cap\pset$, taking convex hulls and
intersecting again with $\pset$ implies that
$\separpar{2}\subseteq\separpar{2'}$.
However, not all separable states can be decomposed as a convex sum of
product states all of them conforming to the parity symmetry.
In particular, the state
$$
\ensuremath{\rho_{\separpar{2'}}}=\SiipnSii,
$$
which has PPT and is thus in $\separpar{2'}$, is not in $\separpar{2}$
(recall that for the $1\times1$-system, only density matrices which are
diagonal in the number basis are in $\separpar{2}$).
\end{proof}
\begin{incl}
{$\equivsep{1}\equiv\equivsep{2'}\equiv\equivsep{2}$}
\end{incl}
\begin{proof}
From the relations
$\separpar{2}\subset\separpar{2'}\subset\separpar{1}$ and the definition
of the equivalence classes it is evident that
$\equivsep{2}\subseteq\equivsep{2'}\subseteq\equivsep{1}$.
To show the equivalence of all sets it is enough to prove that any
state $\rho \in \equivsep{1}$ is also in $\equivsep{2}$, i.e. that
there exists a state in $\separpar{2}$ equivalent to $\rho$.
For $\rho\in\equivsep{1}$, there is a $\tilde{\rho} \in \separpar{1}$,
i.e. $\tilde{\rho}=\sum\lambda_k\tilde{\rho}_k$ with each
$\tilde{\rho}_k \in \productpar{1}$,
producing identical expectation values for products of even operators
$A_\pi B_\pi$.
If we define
$$
\rho_k':= \sum_{\alpha,\,\beta=e,\,o}\proj{\alpha}{A}\otimes\proj{\beta}{B}
\tilde{\rho}_k \proj{\alpha}{A}\otimes\proj{\beta}{B},
$$
it is evident that $\rho':=\sum_k\lambda_k \rho_k'$ produces the
same expectation values as $\tilde{\rho}$ for the
relevant operators (see proof~\ref{proof1}).
Therefore, $\rho \sim \rho'$.
Moreover, since $\rho_k'(A_{\not\pi}B_{\not\pi})=0$ for all odd-odd
products, every $\rho_k'\in\productpar{2}$, and so
$\rho'\in\separpar{2}$.
\end{proof}
\input{proofs_mult}
\subsection{Multiple copies}
\label{subsec:mult}
\begin{incl}
{$\rho^{\otimes 2}\in\separpar{1} \Rightarrow \rho\in\separpar{1}$}
\label{mult1}
\end{incl}
\begin{proof}
An arbitrary state can be decomposed in two terms,
$\rho=\rho_E+\rho_O$, where
$$
\rho_E:=
\sum_{\alpha,\,\beta=e,\,o} \proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\alpha}{A}\otimes \proj{\beta}{B},
$$
and
$$
\rho_O:=
\sum_{
\substack{
{\alpha,\,\beta,\,\gamma,\,\delta=e,\,o}\\
{(\alpha,\,\beta)\neq (\gamma,\,\delta)}}
} \proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\gamma}{A}\otimes \proj{\delta}{B}.
$$
For any state in $\separpar{1}$, there exists a decomposition
$\rho_E=\sum_i\lambda_i\rho_E^i$,
$\rho_O=\sum_i\lambda_i\rho_O^i$,
such that $\rho_E^i+\rho_O^i\in\productpar{1}$.
Let us consider two copies of a state such that
$\tilde{\rho}:=\rho^{\otimes 2}\in\separpar{1}$.
Then, using the above decomposition of $\tilde{\rho}$,
and taking the partial trace with respect to the second
system,
we obtain a decomposition of the single copy,
$\rho=\rho_E+\rho_O=\sum_i \lambda_i \mathrm{tr}_2(\tilde{\rho}_E^i)
+\sum_i \lambda_i \mathrm{tr}_2(\tilde{\rho}_O^i)$.
Since $\tilde{\rho}_E^i$ is a tensor product,
$\tilde{\rho}_E^i=\tilde{\rho}_{\tilde{A}}\otimes\tilde{\rho}_{\tilde{B}}$,
with $\tilde{A}\equiv A_1A_2$, $\tilde{B}\equiv B_1B_2$,
so is $\mathrm{tr}_2(\tilde{\rho}_E^i)$, and therefore
$\rho\in\separpar{1}$.
\end{proof}
\begin{incl}
{$\rho^{\otimes 2}\in\equivsep{1} \Rightarrow \rho\in\equivsep{1}$}
\label{mult2}
\end{incl}
\begin{proof}
Using the same decomposition as above,
$\rho=\rho_E+\rho_O$,
a state $\rho\in\equivsep{1}$ satisfies $\rho_E\in\separpar{2'}$.
If we consider
$\tilde{\rho}:=\rho^{\otimes 2}=\tilde{\rho}_E+\tilde{\rho}_O$,
the condition $\equivsep{1}$ on the state of the two copies reads
$$
\tilde{\rho}_E=\rho_E\otimes\rho_E+\rho_O\otimes\rho_O\in\separpar{2'},
$$
in terms of the components of the single copy state.
Taking the trace with respect to one of the copies,
and using the fact that $\rho_O$ is traceless, we obtain
$\rho_E\in\separpar{2'}$,
so that $\rho\in\equivsep{1}$.
\end{proof}
\begin{incl}
{$\rho\ \mathrm{NPPT} \Rightarrow \rho^{\otimes 2}\notin\equivsep{1}$}
\label{mult4}
\end{incl}
\begin{proof}
We may restrict the proof to states such that $\rho \in \equivsep{1}$;
otherwise the implication follows immediately
from the previous result (\ref{mult2}).
Written in a basis of well-defined local parities,
any density matrix that commutes with the parity operator has a block
structure analogous to that of~(\ref{gen1x1even}) for the $1\times 1$
case:
\begin{equation}
\rho=\left(
\begin{array}{c c c c}
\rho_{ee} & 0 & 0 & C\\
0 & \rho_{eo} & D & 0 \\
0 & D^{\dagger} & \rho_{oe} & 0 \\
C^{\dagger} & 0 & 0 & \rho_{oo}
\end{array}
\right).
\label{geneven}
\end{equation}
The diagonal blocks correspond to the projections onto simultaneous
eigenspaces of both parity operators,
$\rho_{\alpha\beta}=\proj{\alpha}{A}\otimes \proj{\beta}{B} \rho \proj{\alpha}{A}\otimes \proj{\beta}{B}$,
whereas
$C=\proj{e}{A}\otimes \proj{e}{B} \rho \proj{o}{A}\otimes \proj{o}{B}$
and $D=\proj{e}{A}\otimes \proj{o}{B} \rho \proj{o}{A}\otimes \proj{e}{B}$.
From the characterization~(\ref{charactZ1}) of separability,
the state is in \equivsep{1} iff
all the diagonal blocks
$\rho_{\alpha \beta}$ are in \separpar{2'}.
It is then enough to prove that the partial transpose of $\rho$
is positive iff
$\proj{e}{A}\otimes \proj{e}{B}
\rho\otimes\rho
\proj{e}{A}\otimes \proj{e}{B}$
has PPT.
Non-positivity of the partial transpose of $\rho$ then implies
the non-separability (in the sense of $\separpar{2'}$)
of one of the diagonal blocks of $\rho\otimes\rho$.
The partial transposition of the above matrix yields
\begin{equation}
\rho^{T_B}=\left(
\begin{array}{c c c c}
\rho_{ee}' & 0 & 0 & D'\\
0 & \rho_{eo}' & C' & 0 \\
0 & C'^{\dagger} & \rho_{oe}' & 0 \\
D'^{\dagger} & 0 & 0 & \rho_{oo}'
\end{array}
\right),
\label{genevenPT}
\end{equation}
where $X':= X^{T_B}$, and the $T_B$ operation acts on each block
transposing the last $m_B-1$ indices.
If we take two copies of the state, we find for the corresponding
uppermost diagonal block
$\tilde{\rho}_{ee}:= \proj{e}{A}\otimes \proj{e}{B}
\rho\otimes\rho
\proj{e}{A}\otimes \proj{e}{B}$,
\begin{equation}
\tilde{\rho}_{ee}=\left(
\begin{array}{c c c c}
\rho_{ee}\otimes\rho_{ee} & 0 & 0 & C\otimes C\\
0 & \rho_{eo}\otimes\rho_{eo} & D\otimes D & 0 \\
0 & D^{\dagger}\otimes D^{\dagger} & \rho_{oe}\otimes\rho_{oe} & 0 \\
C^{\dagger}\otimes C^{\dagger} & 0 & 0 & \rho_{oo}\otimes\rho_{oo}
\end{array}
\right),
\label{rho2ee}
\end{equation}
and for the partial transposition
\begin{equation}
(\tilde{\rho}_{ee})^{T_B}=\left(
\begin{array}{c c c c}
\rho_{ee}'\otimes\rho_{ee}' & 0 & 0 & D'\otimes D'\\
0 & \rho_{eo}'\otimes\rho_{eo}' & C'\otimes C' & 0 \\
0 & C'^{\dagger}\otimes C'^{\dagger} & \rho_{oe}'\otimes\rho_{oe}' & 0 \\
D'^{\dagger}\otimes D'^{\dagger} & 0 & 0 & \rho_{oo}'\otimes\rho_{oo}'
\end{array}
\right).
\label{rho2eePT}
\end{equation}
The matrices (\ref{genevenPT}) and (\ref{rho2eePT}) are direct sums of two
blocks. Thus they are positive semidefinite iff each such block is positive
semidefinite.
Let us consider one of the blocks of (\ref{rho2eePT}), namely
\begin{equation}
\left(
\begin{array}{c c}
\rho_{ee}'\otimes\rho_{ee}'& D'\otimes D'\\
D'^{\dagger}\otimes D'^{\dagger} & \rho_{oo}'\otimes\rho_{oo}'
\end{array}
\right).
\label{B1rho2ee}
\end{equation}
Let us first assume that $\rho_{oo}'$ is non-singular.
Applying a standard theorem in matrix analysis
and making use of the fact that our
$\rho \in \equivsep{1}$, so that each diagonal block is PPT,
we obtain that~(\ref{B1rho2ee}) is positive iff
$$
\rho_{ee}'\otimes\rho_{ee}'\geq (D'\otimes D') (\rho_{oo}'^{-1}\otimes
\rho_{oo}'^{-1}) (D'^{\dagger}\otimes D'^{\dagger}),
$$
which holds iff
$$
\rho_{ee}' \geq D' (\rho_{oo}')^{-1}D'^{\dagger}.
$$
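The `standard theorem' invoked here is presumably the Schur-complement
criterion for positive semidefiniteness (found in standard matrix-analysis
texts such as Horn and Johnson): for Hermitian blocks with $C>0$,
$$
\left(
\begin{array}{c c}
A & B\\
B^{\dagger} & C
\end{array}
\right)\geq 0
\quad\Longleftrightarrow\quad
A\geq B\,C^{-1}B^{\dagger},
$$
applied above with $A=\rho_{ee}'\otimes\rho_{ee}'$, $B=D'\otimes D'$, and
$C=\rho_{oo}'\otimes\rho_{oo}'$.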
Reasoning in the same way for the second block of (\ref{rho2eePT}),
one gets that
\begin{equation}
(\tilde{\rho}_{ee})^{T_B} \geq 0 \Leftrightarrow \rho^{T_B}\geq 0.
\end{equation}
The result holds also if the assumption of non-singularity
of $\rho_{oo}$ ($\rho_{oe}$ for the second block) is not valid.
In that case, we may take $\rho_{oo}$ diagonal and then,
by positivity of $(\tilde{\rho}_{ee})^{T_B}$ (or $\rho^{T_B}$
for the reverse implication), find that $D'$ must have some
null columns. This allows us to reduce both matrices to a similar
block structure, where the reduced $\rho_{oo}$ ($\rho_{oe}$) is
non-singular.
\end{proof}
\begin{incl}
{For $1\times1$ systems, $\rho^{\otimes 2}\in\equivsep{1} \iff \rho\in\separpar{2'}$}
\label{mult3}
\end{incl}
\begin{proof}
One of the directions is immediate, and valid for an arbitrarily large system,
since $\rho\in\separpar{2'}$ implies
$\rho^{\otimes 2}\in\separpar{2'}\subset\separpar{1}\subset\equivsep{1}$.
On the other hand, if we take
$\tilde{\rho}:=\rho^{\otimes 2}\in\equivsep{1}$,
then the diagonal blocks of this state are separable,
in particular
$\proj{e}{\tilde{A}}\otimes\proj{e}{\tilde{B}}\tilde{\rho}
\proj{e}{\tilde{A}}\otimes\proj{e}{\tilde{B}}\in\separpar{2'}$,
which was calculated in (\ref{rho2ee}).
For the case of $1\times1$ modes, with
$\rho$ given by~(\ref{gen1x1even}), this block reads
$$
\left(
\begin{array}{c c c c}
(1-x-y+z)^2 & 0 & 0 & r^2\\
0 & (x-z)^2 & s^2 & 0 \\
0 & (s^*)^2 & (y-z)^2 & 0 \\
(r^*)^2 & 0 & 0 & z^2
\end{array}
\right).
$$
This is in $\separpar{2'}$ iff it has PPT; since its PPT conditions are
precisely the squares of those for $\rho$, this happens if and only if
$\rho$ has PPT, i.e. $\rho\in\separpar{2'}$.
\end{proof}
\subsection{General states}
Taking the convex hull of the general product states,
we define the sets
\begin{itemize}
\item
$\separ{1}:=\mathrm{co}\left(\product{1}\right)$,
\item
$\separ{2}:=\mathrm{co}\left(\product{2}\right)$,
\item
$\separ{3}:=\mathrm{co}\left(\product{3}\right)$.
\end{itemize}
These contain both physical states, commuting with $\parity$,
and non-physical ones.
It can be shown that
$\separ{3} \subset \separ{2} \subset \separ{1}$.
The non-strict inclusion $\separ{2} \subseteq \separ{1}$
is immediate from the inclusion between product sets.
The strict character can be seen with an example, in particular
in the subset of physical states, $\ensuremath{\rho_{\separpar{1}}}$.
$\separ{3}\subset \separ{2}$ was proved in~\cite{moriya05}.
\subsection{Physical states}
From the physical sets of product states we define the
following sets of separable states,
\begin{itemize}
\item
$\separpar{1}:=\mathrm{co}\left(\productpar{1}\right)$,
\item
$\separpar{2}:=\mathrm{co}\left(\productpar{2}\right)$.
\end{itemize}
Obviously, the corresponding $\separpar{3}\equiv\separpar{2}$.
The inclusion relations among product states imply
$\separpar{2}\subseteq\separpar{1}$.
It is easy to see with an example that this inclusion is also strict.
Table~\ref{tab:separ} summarizes the definitions and mutual relations
of the various separability sets.
\input{tabsepar}
As shown in Fig.~\ref{fig:scheme}, we may take the physical states
that satisfy the definitions for separability introduced in
the previous subsection, and hence use
$\separ{i}\cap\pset$ as the definition of separable states.
This yields the sets
\begin{itemize}
\item
$\separ{1}\cap\pset\equiv\separpar{1}$,
\item
$\separpar{2'}:=\separ{2}\cap\pset$,
\item
$\separ{2}\cap\pset\equiv\separpar{2}$.
\end{itemize}
Only $\separpar{2'}$ is different from the
separable sets defined above.
Actually, given an $\separ{1}$ state that commutes with $\parity$,
it is possible to construct a decomposition according to $\separpar{1}$
by taking the parity preserving part of each term in the original
convex combination.
Therefore $\separ{1}\cap\pset\subseteq\separpar{1}$, while the
converse inclusion is evident.
For $\separ{3}\cap\pset$, on the other hand, it was shown in~\cite{moriya05}
that any parity preserving state in $\separ{3}$ has a decomposition in
terms of only parity preserving terms, and is thus in $\separpar{3}$.
All the considerations above leave us with three strictly different
sets of separable physical states,
\begin{equation}
\separpar{2} \subset \separpar{2'} \subset \separpar{1}.
\end{equation}
From the definitions, it
is immediate that $\separpar{2} \subseteq \separpar{2'}$.
The inclusion is strict because
not every state $\rho \in\separpar{2'}$
has a decomposition in terms of products of even states
(see example $\ensuremath{\rho_{\separpar{2'}}}$ in Table~\ref{tab:separ}).
The condition for $\separpar{2}$ is then more restrictive.
From the relation between the product sets it follows that
$\separ{2} \subseteq \separ{1}$,
and hence $\separpar{2'} \subseteq \separpar{1}$.
The strict inclusion can be shown by constructing an explicit example
of a $\productpar{1}$ state without positive
partial transpose (PPT)~\cite{peres96}
in the 2$\times$2-modes system.
The detailed proofs of
the equivalences and inclusions above
are shown in section~\ref{subsec:separ}.
\subsection{Equivalence classes}
If one is only interested in the measurable correlations of the
state, rather than in its properties after further evolution or
processing, it makes sense to define an equivalence relation
between states by
$$
\rho_1 \sim \rho_2 \quad \mathrm{if} \quad
\rho_1(A_\pi B_\pi)=\rho_2(A_\pi B_\pi) \ \ \forall A_\pi\!\in\! \cal{A}_\pi,\,
B_\pi\!\in\!\cal{B}_\pi,
$$
i.e. two states are equivalent if they produce the same
expectation values for all physical local operators. Therefore,
two states that are equivalent cannot be distinguished by means of
local measurements.
With the restriction of parity conservation, the states that can be
locally prepared are of the form $\separpar{2}$, i.e.
$\rho=\sum_k \rho_k^A \otimes \rho_k^B$, where
$[\rho_k^{A(B)},\parity_{A(B)}]=0$.
Since the only
locally accessible observables are local, parity preserving
operators, and hence the measurable quantities are of the form $\rho(A_\pi B_\pi)$,
it makes sense to say that a given
state is separable if it is equivalent to a state that can
be prepared locally.
With this definition, the set of separable states is equal to
the equivalence class of $\separpar{2}$ with respect to the
equivalence relation above.
Generalizing this concept, we may construct the equivalence classes
for each of the relevant separability sets,
$$
\equivsep{i}:= \{\rho\,:\, \exists \tilde{\rho}\in\separpar{i},\,
\rho\sim\tilde{\rho}\},\quad i=1,2',2.
$$
From the inclusion relation among the separability sets,
$\equivsep{2}\subseteq \equivsep{2'}\subseteq\equivsep{1}$.
And, obviously, $\separpar{i}\subseteq\equivsep{i}$.
On the other hand, any state $\rho\in\equivsep{1}$ has also an equivalent state
in $\separpar{2}$ (see section~\ref{subsec:separ}), so that
$$
\equivsep{2}=\equivsep{2'}=\equivsep{1}.
$$
This equivalence class includes then all the separability sets
described in the previous subsection.
However, it is strictly larger, as can be seen by the explicit example
$\ensuremath{\rho_{\equivsep{1}}}$ in Table~\ref{tab:separ}.
\subsection{Characterization}
It is possible to give a characterization of the previously defined
separability sets in terms of the usual mathematical concept of
separability, i.e. with respect to the tensor product.
This allows us to use standard separability criteria
(see~\cite{horodecki07review} for a recent review) in order to decide
whether a given state is in each of these sets.
The definition $\separpar{2'}$ corresponds
to the separability
in the sense of the tensor product, i.e. the standard notion~\cite{Werner1},
applied to parity preserving states.
As convex hull of $\product{2}\cap\pset$, the
set $\separpar{2}$ consists of states with a decomposition
in terms of tensor products, with the additional restriction that
every factor commutes with the local version of the parity operator.
Using the block diagonal structure
$\proj{e}{} \rho \proj{e}{} +\proj{o}{} \rho \proj{o}{}$
of any parity preserving state,
each block
must have independent decompositions in the sense of the tensor product.
Then a state will be in $\separpar{2}$ iff both
$\proj{e}{} \rho \proj{e}{}$ and $\proj{o}{} \rho \proj{o}{}$
are in $\separpar{2'}$.
A state $\rho$ is in $\productpar{0}$ if its diagonal blocks
are a tensor product,
\begin{equation}
\sum_{\alpha,\,\beta=e,\,o} \proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\alpha}{A}\otimes \proj{\beta}{B}=
\tilde{\rho}^A \otimes \tilde{\rho}^B \in \productpar{2}.
\label{charactP0}
\end{equation}
The set $\separpar{1}$ is characterized as
the convex hull of $\productpar{1}\equiv\productpar{0}$, i.e.
it is formed by convex combinations of states that
can be written as the sum of a parity preserving tensor product
plus some off--diagonal terms.
Finally, the equivalence class $\equivsep{1}\equiv\equivsep{2}$
is completely defined in terms of the expectation values of observable
products $A_\pi B_\pi$.
These have no contribution from off-diagonal blocks in $\rho$,
so the class can be characterized in terms of
the diagonal blocks alone.
Therefore a state is in $\equivsep{1}$ iff
\begin{equation}
\sum_{\alpha,\,\beta=e,\,o} \proj{\alpha}{A}\otimes \proj{\beta}{B}
\rho
\proj{\alpha}{A}\otimes \proj{\beta}{B}
\in \separpar{2'}.
\label{charactZ1}
\end{equation}
Since the condition involves only the block diagonal part of the state,
it is equivalent to the individual separability
(with respect to the tensor product) of each of the blocks.
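On the practical side, for small systems the characterization~(\ref{charactZ1})
can be tested numerically. A minimal sketch (our own, assuming
\texttt{numpy}; recall that the PPT test is conclusive only in low
dimensions, e.g. for the $1\times1$-mode case, and otherwise merely
necessary):
\begin{verbatim}
import numpy as np

# Sketch only: PPT as a (generally necessary) proxy for separability.
def ptranspose_B(rho, dA, dB):
    r = rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1)
    return r.reshape(dA * dB, dA * dB)

def block_diagonal_part(rho, PeA, PoA, PeB, PoB):
    # sum over alpha, beta = e, o of (P_alpha x P_beta) rho (P_alpha x P_beta)
    out = np.zeros_like(rho)
    for PA in (PeA, PoA):
        for PB in (PeB, PoB):
            P = np.kron(PA, PB)
            out = out + P @ rho @ P
    return out

def in_Z1_via_ppt(rho, PeA, PoA, PeB, PoB, dA, dB, tol=1e-10):
    bd = block_diagonal_part(rho, PeA, PoA, PeB, PoB)
    return np.linalg.eigvalsh(ptranspose_B(bd, dA, dB)).min() >= -tol
\end{verbatim}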
\section{Introduction}\label{intro}
Recent developments in experimental technology and theoretical modeling have significantly advanced the study of exotic nuclei.
The existence of superheavy nuclei in the limit of a great number of protons has long been a subject of interest in nuclear physics~\cite{sob07,oga15,ada21}.
The shell effect is a key to their stability.
The pairing correlations further play an influential role in determining the stability and shape of the ground state,
and accordingly, the height of the fission barrier~\cite{kar10}.
However, it is not simple to investigate the shell structure and the pairing in such nuclei in the extreme regime.
Strongly deformed actinide nuclei have thus been investigated
as they can reveal the nuclear-structure information in the island of enhanced stability of spherical superheavy nuclei~\cite{ack17}.
Details of the spectroscopic information in the actinide nuclei
have provided an excellent testing ground for the reliability and predicting power of modern nuclear-structure theories~\cite{her08,the15}.
Several deformed shell closures are expected to emerge at $Z=98$ and 100 and at $N=150$ and 152 in the actinide nuclei~\cite{ben03b}.
Correspondingly, beautiful rotational bands have been measured~\cite{her08,the15}, which are indicative of an axially-deformed rotor.
The rotational properties and the high-$K$ isomers have been studied in detail in terms of the pairing~\cite{dug01,afn03,tan06,gre12,afn13}.
In the course of the studies, anomalies have been revealed that are caused by residual interactions beyond the mean-field picture.
A collective phenomenon shows up as a low-lying $K^\pi=2^-$ state in the $N=150$ isotones,
and the energy falls peculiarly in $^{248}$Cf~\cite{yat75,yat75b}.
The nonaxial reflection-asymmetric $\beta_{32}$ (dynamic) fluctuation and (static) deformation
have been suggested by the projected shell-model calculation~\cite{che08}
and the relativistic energy-density-functional (EDF) calculation~\cite{zha12}.
The quasiparticle-random-phase approximation (QRPA) calculations
have also been performed to investigate the vibrational character~\cite{rob08,rez18}.
While the calculation using the Nilsson potential describes the anomalous behavior in $^{248}$Cf,
the self-consistent QRPA calculation employing the Gogny-D1M EDF shows a smooth isotonic dependence.
The observation of a strong $\beta_{32}$ correlation that breaks the axial symmetry
suggests that the non-axial quadrupole deformation, $\gamma$ deformation, may also occur.
Nuclear triaxiality brings about exotic collective modes:
the appearance of the low-energy $K^\pi = 2^+$ band, the $\gamma$ band, is a good indicator for the $\gamma$ deformation~\cite{BM2,rin80}.
The $\gamma$ vibrational mode of excitation is regarded as a precursory soft mode of the permanent $\gamma$ deformation.
Experimentally, the $K^\pi=2^+$ state has not been observed to be as low in energy as the $K^\pi=2^-$ state in $^{244}$Pu and $^{246}$Cm~\cite{mul71,tho75}.
Therefore, it is interesting to investigate a microscopic mechanism that prefers the simultaneous breaking of the reflection and axial symmetry to the breaking of the axial symmetry alone.
Pairing correlations are essential for describing low-energy vibrational modes of excitation~\cite{mat13}
and are a key to understanding the collective nature.
Therefore, in this article, I am going to investigate the role of the pairing correlations in both the static and dynamic aspects,
as done in the studies in exotic nuclei in the neutron-rich region~\cite{yos06b,yos06,yos08b}.
To this end, I employ a nuclear EDF method in which the pairing and deformation are simultaneously considered.
Since the nuclear EDF method is a theoretical model capable of handling nuclides with arbitrary mass numbers in a single framework~\cite{ben03,nak16},
the present investigation in the actinides can give an insight into the veiled nuclear structure of superheavy nuclei.
This paper is organized in the following way:
the theoretical framework for describing the low-lying vibrations is given in Sec.~\ref{model} and
details of the numerical calculation are also given;
Sec.~\ref{result} is devoted to the numerical results and discussion based on the model calculation;
after the discussion on the ground-state properties: the static aspects of pairing and deformation in Sec.~\ref{GS},
the discussion on the $K^\pi=2^-$ and $K^\pi=2^+$ states is given in Sec.~\ref{oct2_mode} and Sec.~\ref{gam_mode}, respectively;
then, a summary is given in Sec.~\ref{summary}.
\section{Theoretical model}\label{model}
\subsection{KSB and QRPA calculations}
Since the details of the framework can be found in Refs.~\cite{yos08,yos13b},
here I briefly recapitulate the basic equations relevant to the present study.
In the framework of the nuclear EDF method I employ,
the ground state is described by solving the
Kohn--Sham--Bogoliubov (KSB) equation~\cite{dob84}:
\begin{align}
\sum_{\sigma^\prime}
\begin{bmatrix}
h^q_{\sigma \sigma^\prime}(\boldsymbol{r})-\lambda^{q}\delta_{\sigma \sigma^\prime} & \tilde{h}^q_{\sigma \sigma^\prime}(\boldsymbol{r}) \\
\tilde{h}^q_{\sigma \sigma^\prime}(\boldsymbol{r}) & -h^q_{\sigma \sigma^\prime}(\boldsymbol{r})+\lambda^q\delta_{\sigma \sigma^\prime}
\end{bmatrix}
\begin{bmatrix}
\varphi^{q}_{1,\alpha}(\boldsymbol{r} \sigma^\prime) \\
\varphi^{q}_{2,\alpha}(\boldsymbol{r} \sigma^\prime)
\end{bmatrix} \notag \\
= E_{\alpha}
\begin{bmatrix}
\varphi^{q}_{1,\alpha}(\boldsymbol{r} \sigma) \\
\varphi^{q}_{2,\alpha}(\boldsymbol{r} \sigma)
\end{bmatrix}, \label{HFB_eq}
\end{align}
where
the single-particle and pair Hamiltonians $h^q_{\sigma \sigma^\prime}(\boldsymbol{r})$ and $\tilde{h}^q_{\sigma \sigma^\prime}(\boldsymbol{r})$ are given by the functional derivative of the EDF
with respect to the particle density and the pair density, respectively.
An explicit expression of the Hamiltonians is found in the Appendix of Ref.~\cite{kas21}.
The superscript $q$ denotes
$\nu$ (neutron, $ t_z= 1/2$) or $\pi$ (proton, $t_z =-1/2$).
The average particle number is fixed at the desired value by adjusting the chemical potential $\lambda^q$.
Assuming the system is axially symmetric,
the KSB equation (\ref{HFB_eq}) is block diagonalized
according to the quantum number $\Omega$, the $z$-component of the angular momentum.
The excited states $| i \rangle$ are described as
one-phonon excitations built on the ground state $|0\rangle$ as
\begin{align}
| i \rangle &= \hat{\Gamma}^\dagger_i |0 \rangle, \\
\hat{\Gamma}^\dagger_i &= \sum_{\alpha \beta}\left\{
f_{\alpha \beta}^i \hat{a}^\dagger_{\alpha}\hat{a}^\dagger_{\beta}
-g_{\alpha \beta}^i \hat{a}_{\bar{\beta}}\hat{a}_{\bar{\alpha}}\right\},
\end{align}
where $\hat{a}^\dagger$ and $\hat{a}$ are
the quasiparticle (qp) creation and annihilation operators that
are defined in terms of the solutions of the KSB equation (\ref{HFB_eq}) with the Bogoliubov transformation.
The phonon states, the amplitudes $f^i, g^i$ and the vibrational frequency $\omega_i$,
are obtained in the quasiparticle-random-phase approximation (QRPA): the linearized time-dependent density-functional theory for superfluid systems~\cite{nak16}.
The EDF gives the residual interactions entering into the QRPA equation.
In the present calculation scheme, the QRPA equation
is block diagonalized according to the quantum number $K=\Omega_\alpha + \Omega_\beta$.
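For orientation, the amplitudes and frequencies solve the QRPA equation,
which in the standard matrix form reads (see, e.g., Ref.~\cite{rin80}; the
matrices $A$ and $B$ of this form are not used elsewhere in this paper)
\begin{equation*}
\begin{pmatrix}
A & B \\
B^\ast & A^\ast
\end{pmatrix}
\begin{pmatrix}
f^i \\ g^i
\end{pmatrix}
=\omega_i
\begin{pmatrix}
1 & 0 \\ 0 & -1
\end{pmatrix}
\begin{pmatrix}
f^i \\ g^i
\end{pmatrix},
\end{equation*}
with $f^i$ and $g^i$ playing the role of the usual forward and backward
amplitudes, and the residual interactions derived from the EDF entering
$A$ and $B$.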
\subsection{Numerical procedures}
I solve the KSB equation in the coordinate space using cylindrical coordinates
$\boldsymbol{r}=(\varrho,z,\phi)$.
Since I assume further the reflection symmetry, only the region of $z\geq 0$ is considered.
I use a two-dimensional lattice mesh with
$\varrho_i=(i-1/2)h$, $z_j=(j-1)h$ ($i,j=1,2,\dots$)
with a mesh size of
$h=0.6$ fm and 25 points for each direction.
The qp states are truncated according to the qp
energy cutoff at 60 MeV, and
the qp states up to the magnetic quantum number $\Omega=23/2$
with positive and negative parities are included.
In the QRPA calculations, the two-quasiparticle (2qp) configurations
are truncated at a 2qp energy of 60 MeV.
For the normal (particle--hole) part of the EDF,
I employ mainly the SkM* functional~\cite{bar82}, and use the SLy4 functional~\cite{cha98} to complement the discussion.
For the pairing energy, I adopt the so-called mixed-type interaction:
\begin{equation}
V_{\rm{pair}}^{q}(\boldsymbol{r},\boldsymbol{r}^\prime)=V_0^{q}
\left[ 1-\frac{\rho(\boldsymbol{r})}{2\rho_0} \right]
\delta(\boldsymbol{r}-\boldsymbol{r}^\prime)
\end{equation}
with $\rho_0=0.16$ fm$^{-3}$, and $\rho(\boldsymbol{r})$ being the isoscalar (matter) particle density.
The same pair interaction is employed for the dynamical pairing
in the QRPA calculation.
The parameter $V_0^q$ is fitted to the three-point formula for the odd-even staggering
centered at the odd-mass system and averaged over the two neighboring nuclei~\cite{sat98,dob02}.
In the present work, I fix the pairing parameters to reproduce reasonably the data for $^{244}$Cm,
which are 0.63 MeV and 0.57 MeV for neutrons and protons, respectively.
The resultant pairing gaps are 0.65 MeV and 0.59 MeV with $V_0^\nu=-270$ MeV fm$^3$ and $V_0^\pi=-310$ MeV fm$^3$.
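For reference, the three-point formula in question is presumably the
standard odd-even mass difference (cf. Refs.~\cite{sat98,dob02}),
\begin{equation*}
\Delta^{(3)}(N)=\frac{(-1)^{N}}{2}\left[E(N+1)-2E(N)+E(N-1)\right],
\end{equation*}
evaluated at the odd particle numbers neighboring the even-even system
and then averaged.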
\section{results and discussion}\label{result}
\subsection{Ground-state properties}\label{GS}
\begin{table}[t]
\caption{\label{tab:def}
Calculated deformation parameters $\beta_2$.
The evaluated data denoted as exp. are taken from Ref.~\cite{pri16}.
Listed in parentheses are the intrinsic quadrupole moments in the unit of $e$b and b for protons and neutrons, respectively.}
\begin{ruledtabular}
\begin{tabular}{lcccccc}
& \multicolumn{3}{c}{protons} & & \multicolumn{2}{c}{neutrons} \\
\cline{2-4} \cline{6-7}
& SkM* & SLy4 & exp. & & SkM* & SLy4 \\
\hline
$^{244}$Pu & 0.29 (11.7) & 0.28 (11.5) & 0.29 && 0.27 (18.3) & 0.27 (18.2) \\
$^{246}$Cm & 0.29 (12.3) & 0.28 (12.2) & 0.30 && 0.27 (18.7) & 0.27 (18.6) \\
$^{248}$Cf & 0.30 (12.9) & 0.30 (12.9) & && 0.27 (19.0) & 0.27 (19.0) \\
$^{250}$Fm & 0.30 (13.5) & 0.30 (13.4) & && 0.28 (19.3) & 0.28 (19.2) \\
$^{252}$No & 0.30 (13.9) & 0.30 (13.7) & && 0.28 (19.4) & 0.28 (19.2)
\end{tabular}
\end{ruledtabular}
\end{table}
Table~\ref{tab:def} summarizes the calculated deformation parameters:
\begin{equation}
\beta_2^q=\dfrac{4\pi}{5N_q \langle r^2\rangle_q}\int \mathrm{d} \boldsymbol{r}\, r^2 Y_{20}(\hat{r})\rho_q(\boldsymbol{r}),
\end{equation}
where the rms radius is given as $\sqrt{\langle r^2\rangle_q}=\sqrt{\int r^2\rho_q(\boldsymbol{r}) \mathrm{d} \boldsymbol{r}/N_q}$,
with $N_q$ being either the neutron number or proton number.
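As an illustration of this quadrature on the mesh of Sec.~\ref{model}
(a toy sketch of my own; the Gaussian density below is a placeholder for
the self-consistent KSB density):
\begin{verbatim}
import numpy as np

# Toy quadrature for beta_2 (sketch; Gaussian density is a placeholder).
h = 0.6                                   # mesh size [fm]
varrho = (np.arange(1, 26) - 0.5) * h     # varrho_i = (i - 1/2) h
z = (np.arange(1, 26) - 1.0) * h          # z_j = (j - 1) h, z >= 0 half-space
R, Z = np.meshgrid(varrho, z, indexing="ij")

a_perp, a_z = 3.0, 4.0                    # prolate toy density (placeholder)
rho = np.exp(-(R / a_perp) ** 2 - (Z / a_z) ** 2)

# volume element 2*pi*varrho*h*h, doubled for z < 0 (reflection symmetry)
w = 2.0 * 2.0 * np.pi * R * h * h
Nq = np.sum(rho * w)                       # particle number
r2 = np.sum((R**2 + Z**2) * rho * w) / Nq  # <r^2>
# r^2 Y_20 = sqrt(5/16 pi) (3 z^2 - r^2) = sqrt(5/16 pi) (2 z^2 - varrho^2)
q20 = np.sum(np.sqrt(5.0 / (16.0 * np.pi)) * (2.0 * Z**2 - R**2) * rho * w)
print("beta_2 =", 4.0 * np.pi * q20 / (5.0 * Nq * r2))
\end{verbatim}
A positive value results for $a_z>a_\perp$, as expected for a prolate shape.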
They are compared with the experimental data~\cite{pri16}, which are evaluated from the $B({\rm E2})$ value
based on the leading-order intensity relations of E2 matrix elements for an axially symmetric rotor~\cite{BM2}.
The calculated results obtained by employing the SLy4 functional are also included.
The strengths of the pair interaction were fitted similarly to the SkM* case;
the strengths $V_0^\nu=-310$ MeV fm$^3$ and $V_0^\pi=-320$ MeV fm$^3$ produce
pair gaps of 0.67 MeV and 0.53 MeV for neutrons and protons, respectively.
Both the SkM* and SLy4 functionals reproduce well the strong deformation of $^{244}$Pu and $^{246}$Cm.
The calculations predict a stronger deformation for higher $Z$ isotones: $^{250}$Fm and $^{252}$No.
\begin{figure}[t]
\includegraphics[scale=0.4]{fig1.pdf}
\caption{\label{fig:gap}
Pair gaps of neutrons and protons in $^{244}$Pu, $^{246}$Cm, $^{248}$Cf, $^{250}$Fm, and $^{252}$No.
The calculated gaps (filled diamonds) are compared with the experimental data (filled squares).
The results obtained by reducing the pairing strength $V_0^\nu=-260$ MeV fm$^3$ while keeping the strength for protons,
and those obtained by increasing the pairing strength $V_0^\pi=-320$ MeV fm$^3$ while keeping the strength for neutrons
are also shown by open triangles for neutrons and protons, respectively.
Depicted also are the results obtained by employing the SLy4 functional (filled circles).
}
\end{figure}
I show in Fig.~\ref{fig:gap} the pair gaps of protons and neutrons:
\begin{equation}
\Delta^q = \left| \dfrac{\int \mathrm{d} \boldsymbol{r} \tilde{\rho}_q(\boldsymbol{r}) \tilde{U}^q(\boldsymbol{r})}{\int \mathrm{d} \boldsymbol{r} \tilde{\rho}_q(\boldsymbol{r})} \right|,
\end{equation}
where $\tilde{U}^q(\boldsymbol{r})$ is the pair potential given by
$\tilde{h}^q_{\sigma \sigma^\prime}(\boldsymbol{r})=\delta_{\sigma \sigma^\prime}\tilde{U}^q(\boldsymbol{r})$
and $\tilde{\rho}_q(\boldsymbol{r})$ is the anomalous (pair) density.
The calculations overestimate the measurements for neutrons.
When I slightly decrease the pair strength for neutrons as $V_0^\nu=-260$ MeV fm$^3$,
the calculation shows a reasonable agreement.
However, the proton-number dependence is opposite to that shown in the measurements.
This indicates that
the deformed shell gap of neutrons at $N=150$ must be more significant than that at $N=148$ in the lower-$Z$ nuclei, and that
the level density around $N=150$ is high in the higher-$Z$ nuclei.
Since the deformed shell gap at 150 is formed by the up-sloping orbitals stemming from the $2g_{9/2}$ shell
and the down-sloping orbitals emanating from the $2g_{7/2}$ shell,
a slight modification of the spin--orbit interaction would impact the shell structure in this region,
and further, the prediction of the magic numbers in the superheavy region, as anticipated in Ref.~\cite{ben03b}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.76]{fig2.pdf}
\caption{\label{fig:shell}
Single-particle energies of protons and neutrons near the Fermi levels in $^{248}$Cf.
The results obtained by using the SkM* and SLy4 functionals are compared.
The solid and dashed lines indicate the positive- and negative-parity states, respectively.
Each orbital is labeled by $\Omega^\pi$.
}
\end{center}
\end{figure}
The resultant pair gaps of protons are compatible with the experimental ones.
The SkM* functional with the mixed-type pairing well reproduces the overall isotonic dependence,
while the SLy4 shows a drop at $Z=98$, which the measurements do not reveal.
One may conjecture that this is because the deformed shell gap is overdeveloped at $Z=98$ in the SLy4 model.
I then show in Fig.~\ref{fig:shell} the single-particle levels in $^{248}$Cf to visualize the shell structure.
The single-particle orbitals were obtained by rediagonalizing the single-particle Hamiltonian:
\begin{equation}
\sum_{\sigma^\prime} h^q_{\sigma \sigma^\prime}[\rho_\ast,\tilde{\rho}_\ast]
\varphi_i^q(\boldsymbol{r} \sigma^\prime)=\varepsilon_i^q \varphi_i^q(\boldsymbol{r} \sigma)
\end{equation}
with $\rho_\ast$ and $\tilde{\rho}_\ast$ obtained as the solution of the KSB equation (\ref{HFB_eq}).
The energy gap at $Z=98$ is indeed larger than that obtained by using the SkM* functional,
as discussed in Ref.~\cite{cha06}.
Since the pair gap is about 0.5 MeV, the orbitals with $\Omega^\pi=7/2^+$ and $3/2^-$ are primarily active
for the pairing correlation. The occupation probability of the orbital with $\Omega^\pi=7/2^+$ is indeed only 0.14,
and that of the orbital with $\Omega^\pi=3/2^-$ is as large as 0.85.
For reference, they are 0.23 and 0.80 in the SkM* model.
A strong deformed-shell closure at $Z=98$ in the SLy4 model
is unfavorable in generating the low-energy $K^\pi=2^-$ state, as questioned in Ref.~\cite{rob08}.
\subsection{Vibrational states}\label{vib}
I consider the response to
the quadrupole ($\lambda=2$) and octupole ($\lambda=3$) operators defined by
\begin{align}
\hat{F}^q_{\lambda K}
=& \sum_{\sigma}\int \mathrm{d} \boldsymbol{r} r^\lambda Y_{\lambda K}
\hat{\psi}^\dagger_q(\boldsymbol{r} \sigma)\hat{\psi}_q(\boldsymbol{r} \sigma), \label{oct_op}
\end{align}
where $\hat{\psi}^\dagger_q(\boldsymbol{r} \sigma), \hat{\psi}_q(\boldsymbol{r} \sigma)$ represent the nucleon field operators.
The reduced transition probabilities can be evaluated with the intrinsic transition strengths as
\begin{align}
B({\rm E\lambda};I^\pi_K\to 0^+_{\rm gs})&=\frac{2-\delta_{K,0}}{2I_K +1} e^2| \langle i |\hat{F}^\pi_{\lambda K} |0 \rangle |^2, \\
B({\rm IS\lambda};I^\pi_K\to 0^+_{\rm gs})&=\frac{2-\delta_{K,0}}{2I_K +1} | \langle i |\hat{F}^\pi_{\lambda K}+\hat{F}^\nu_{\lambda K} |0 \rangle |^2
\end{align}
in the rotational coupling scheme~\cite{BM2}, where $I_K$ represents the nuclear spin of the $K^\pi$-band.
The intrinsic transition matrix element is given as
\begin{align}
\langle i | \hat{F}^q_{\lambda K}|0 \rangle &= \sum_{\alpha \beta} (f^i_{\alpha \beta}+g^i_{\alpha \beta}) \langle \alpha \beta |\hat{F}^q_{\lambda K}|0\rangle \\
& \equiv \sum_{\alpha \beta} M^{q,i}_{\alpha \beta} \label{eq:ME}
\end{align}
with the QRPA amplitudes and the 2qp matrix elements.
The QRPA amplitudes are normalized as
\begin{equation}
\sum_{\alpha \beta} \left[ |f^i_{\alpha \beta}|^2 - |g^i_{\alpha \beta}|^2 \right]=1.
\end{equation}
\subsubsection{$K^\pi=2^-$ state}\label{oct2_mode}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{fig3.pdf}
\caption{\label{fig:QRPA2}
Excitation energies of the (a) $K^\pi=2^-$ and (b) $K^\pi=2^+$ states in the $N=150$ isotones.
The calculations employing the SkM* and SLy4 functionals are compared with the measurements~\cite{rob08,mul71,tho75}.
The results obtained by increasing the pairing for protons ($V_0^\pi=-320$ MeV fm$^3$)
and decreasing the pairing for neutrons ($V_0^\nu=-260$ MeV fm$^3$) are depicted by open triangles.
The SkM* results without the dynamical pairing are given by open diamonds.
Shown are also the calculations using the Gogny D1M functional~\cite{rez18} and the Nilsson potential~\cite{rob08}.
}
\end{center}
\end{figure}
I show in Fig.~\ref{fig:QRPA2}(a) the intrinsic excitation energies of the $K^\pi=2^-$ state in the $N=150$ isotones.
Both the SkM* and SLy4 functionals well reproduce the isotonic trend of the energy, particularly a drop at $Z=98$,
but overestimate the measurements as a whole.
The QRPA calculation in Ref.~\cite{rob08} based on the Nilsson potential~\cite{nak96} also well describes a sharp fall
in energies at $Z=98$, while the one employing the D1M functional shows only a smooth proton-number dependence.
It is noted that in these calculations, the rotational correction and the Coriolis coupling effect~\cite{nee70b} are not included.
One needs to consider these effects for a quantitative description, but it is beyond the scope of the present study.
Rather, I am going to investigate a microscopic mechanism for the unique isotonic dependence of the octupole correlation.
A lowering in energy implies that the collectivity develops from $Z=96$ to $Z=98$.
However, the isoscalar intrinsic transition strengths to the $K^\pi=2^-$ state do not show such a trend:
6.83, 6.18, 6.11, 4.74, and 4.45 (in $10^5$ fm$^6$) in $^{244}$Pu, $^{246}$Cm, $^{248}$Cf, $^{250}$Fm, and $^{252}$No.
The isotonic dependence of the properties of the $K^\pi=2^-$ state is actually governed by the details around the Fermi surface:
the interplay between the underlying shell structures and the pairing correlations.
To see the role of the pairing, shown in Fig.~\ref{fig:QRPA2}(a) are the results obtained by varying the pair interaction.
I reduced the pairing strength $V_0^\nu=-260$ MeV fm$^3$ while keeping the strength for protons,
and I increased the pairing strength $V_0^\pi = -320$ MeV fm$^3$ while keeping the strength for neutrons.
Then, I find that the $K^\pi=2^-$ state in $^{248}$Cf is sensitive to the pairing of protons.
When the strength of the pair interaction for protons is increased,
the excitation energy in $^{248}$Cf rises from 0.84 MeV to 1.00 MeV,
while the excitation energy in the other isotones does not change much.
As seen in Fig.~\ref{fig:gap}(b), $2\Delta$ for protons
rises by about 0.2 MeV upon increasing the pairing strength for protons:
the increase in the pairing gap of protons directly drives the increase in the excitation energy.
On the other hand, the change in the pairing strength for neutrons does not alter the energy in $^{248}$Cf.
This indicates that the $K^\pi=2^-$ state is sensitive to the excitation of protons.
The calculation in Ref.~\cite{rob08} found a high amplitude for the proton configuration of $[633]7/2 \otimes [521]3/2$,
and the quasiparticle phonon model predicted that the main component of the wave function is this proton configuration with a weight of 62\%~\cite{jol11}.
A large component of this proton configuration is also supported by the population of the $K^\pi=2^-$ and $5^-$ states in
the $^{249}$Bk$(\alpha, \rm{t})$$^{250}$Cf reaction~\cite{yat76}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.28]{fig4.pdf}
\caption{\label{fig:ME_oct}
Matrix elements $M_{\alpha \beta}$ (\ref{eq:ME}) of the 2qp excitations near the Fermi levels
for the $K^\pi=2^-$ state as functions of the unperturbed 2qp energy $E_\alpha + E_\beta$.
}
\end{center}
\end{figure}
Let me investigate the isotonic evolution of the microscopic structure.
Figure~\ref{fig:ME_oct} shows the matrix elements $M_{\alpha \beta}$ defined in Eq.~(\ref{eq:ME}) of the 2qp excitations near the Fermi levels for the $K^\pi=2^-$ state.
One can see that 2qp excitations near the Fermi levels have a coherent contribution to generate the $K^\pi=2^-$ state with the same sign.
It should be noted that not only the 2qp excitations near the Fermi levels but those in the giant-resonance region have
a coherent contribution for the enhancement in the transition strength to the low-frequency mode, as discussed in Ref.~\cite{yos08b}.
In $^{244}$Pu, the neutron 2qp excitation $[622]5/2 \otimes [734]9/2$, whose 2qp energy is about 2 MeV,
is the main component with an amplitude $|f|^2-|g|^2$ of 0.45.
This is why the decrease in the pairing strength for neutrons lowers the excitation energy.
When adding two protons to $^{244}$Pu, the Fermi level of protons becomes higher.
The proton 2qp excitation of $[521]3/2 \otimes [633]7/2$, whose 2qp energy decreases to 1.8 MeV,
then has an appreciable contribution with an amplitude of 0.43,
together with the $\nu[622]5/2 \otimes \nu[734]9/2$ excitation with an amplitude of 0.31.
Note that both the $\pi[521]3/2$ and $\pi[633]7/2$ orbitals are particle-like.
In $^{248}$Cf, the Fermi level of protons is located just in between the $\pi[521]3/2$ and $\pi[633]7/2$ orbitals.
Then, the 2qp excitation of $\pi[521]3/2 \otimes \pi[633]7/2$ appears low in energy at 1.3 MeV, and
the $K^\pi=2^-$ state is predominantly generated by this configuration with an amplitude of 0.72.
The amplitude of the $\nu[622]5/2 \otimes \nu[734]9/2$ excitation declines to 0.14.
Adding two more protons, the 2qp excitation of $\pi[521]3/2 \otimes \pi[633]7/2$ is a hole--hole type excitation,
and the unperturbed 2qp states are located higher in energy.
Therefore, the matrix element and contribution of this 2qp excitation decrease.
The $K^\pi=2^-$ state in $^{250}$Fm is mainly constructed by
the $\pi[521]3/2 \otimes \pi[633]7/2$ and $\nu[622]5/2 \otimes \nu[734]9/2$
excitations with weights of 0.42 and 0.41, respectively.
Then, the $\nu[622]5/2 \otimes \nu[734]9/2$ excitation dominantly
generates the $K^\pi=2^-$ state with a weight of 0.73 in $^{252}$No.
It seems that the collectivity of the $K^\pi=2^-$ state in $^{248}$Cf is weaker than in the other isotones:
a single 2qp excitation dominates the state.
The sum of the backward-going amplitudes $\sum|g|^2$ and the octupole polarizability $\chi$ can be measures of the correlation,
in addition to the energy shift and the enhancement in the transition strength due to the QRPA correlations.
The calculated $\sum|g|^2$ is 0.22, 0.20, 0.24, 0.14, and 0.12 and
the stiffness parameter $C_{\beta_{32}}=\chi^{-1}_{\beta_{32}}$ (in MeV) is 418, 402, 306, 502, and 548
in $^{244}$Pu, $^{246}$Cm, $^{248}$Cf, $^{250}$Fm, and $^{252}$No, respectively.
Based on these measures, the octupole correlation is strong in $^{248}$Cf as well as in $^{244}$Pu and $^{246}$Cm.
In Fig.~\ref{fig:QRPA2}(a), I plot the calculated results obtained by neglecting the dynamical pairing, as depicted by open diamonds.
The dynamical pairing lowers the $K^\pi=2^-$ state by 0.04--0.06 MeV.
One sees a larger effect in $^{246}$Cm.
The energy shift due to the dynamical pairing is 0.06 MeV out of 0.65 MeV due to the QRPA correlations in total.
It is noted that the pairing correlation energy in the total binding energy is about 0.7\%.
As discussed above, the collective state in $^{246}$Cm is constructed mainly by
the $\nu[622]5/2 \otimes \nu[734]9/2$ and $\pi[521]3/2 \otimes \pi[633]7/2$ excitations.
Since these are a neutron hole--hole excitation and a proton particle--particle excitation,
the residual pair interaction for both neutrons and protons is active and instrumental in forming the collective state.
Surprisingly, the SLy4 model reproduces the drop in energies at $Z=98$.
As mentioned above, and as questioned in Ref.~\cite{rob08}, Fig.~\ref{fig:shell} shows that the SLy4 model overestimates the deformed gap at $Z=98$,
requiring a high-energy particle--hole excitation from $\pi[521]3/2$ to $\pi[633]7/2$.
Here, the pairing correlations are key to understanding the characteristic isotonic dependence.
The microscopic structure of the $K^\pi=2^-$ state in the SLy4 model is essentially the same as in the SkM* model:
the $\nu[622]5/2 \otimes \nu[734]9/2$ and $\pi[521]3/2 \otimes \pi[633]7/2$ excitations play a central role.
The unperturbed energy of the $\pi[521]3/2 \otimes \pi[633]7/2$ excitation is 1.93 MeV and 1.41 MeV in $^{246}$Cm and $^{248}$Cf;
a 0.5 MeV drop is comparable to the calculation in the SkM* model, as shown in Fig.~\ref{fig:ME_oct}.
The reduction of the pairing of protons, associated with a strong shell closure, accounts for the drop in energy.
As one sees in Fig.~\ref{fig:shell}, the SkM* and SLy4 functionals give different shell structures of neutrons around $N=150$:
the single-particle orbitals with $\Omega^\pi=7/2^+$ and $9/2^-$ are inverted.
Therefore, the $\nu[622]5/2 \otimes \nu[734]9/2$ excitation is a hole--hole type excitation in the SkM* model,
and it becomes a particle--hole type at $N=148$.
One can thus expect that the collectivity of the $K^\pi=2^-$ state is strongest in $^{246}$Cf.
Indeed, the SkM* model gives a strongly collective $K^\pi=2^-$ state at 0.65 MeV, with the isoscalar strength of $7.55 \times 10^5$ fm$^6$,
which is predominantly generated by the $\nu[622]5/2 \otimes \nu[734]9/2$ excitation with a weight of 0.31 and
the $\pi[521]3/2 \otimes \pi[633]7/2$ excitation with a weight of 0.56.
The sum of backward-going amplitudes is 0.35, and the stiffness parameter is 218 MeV.
If the $K^\pi=2^-$ state is observed in a future experiment at lower energy than in $^{248}$Cf,
our understanding of the shell structure around $N=150$ will be greatly deepened.
The ground state in $^{249}$Cf is $\nu[734]9/2$, and the excited bands of the $\nu[622]5/2$ and $\nu[624]7/2$ configurations appear
at 145 keV and 380 keV, respectively~\cite{abu11}; this suggests the ordering of the single-particle levels given in the SkM* model, though
the deformed gap at $N=150$ must be larger than that at $N=148$.
\subsubsection{$K^\pi=2^+$ state}\label{gam_mode}
I show in Fig.~\ref{fig:QRPA2}(b) the intrinsic excitation energies of the $K^\pi=2^+$ state in the $N=150$ isotones.
The excitation energy calculated is higher than that of a typical $\gamma$ vibration:
experimentally, the $2^+_2$ states were observed at low excitation energies ($\sim 800$ keV) for the well-deformed dysprosium ($Z=66$) and erbium ($Z=68$)
isotopes around $N=98$~\cite{NNDC} and in the neutron-rich region~\cite{wat16,zha19}.
It is noted that the present nuclear EDF method well describes the $\gamma$ vibration in the neutron-rich Dy isotopes~\cite{yos16}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.28]{fig5.pdf}
\caption{\label{fig:ME_gam}
Similar to Fig.~\ref{fig:ME_oct} but for the $K^\pi=2^+$ state. See Table~\ref{tab:ME_gam2} for the configurations A, B, C, and D.
}
\end{center}
\end{figure}
One can see an isotonic dependence of the excitation energy similar to that of the $K^\pi=2^-$ state: a drop at around $Z=98$.
As for the $K^\pi=2^-$ state, this is governed by the underlying shell structure.
Let me then investigate the isotonic evolution of the microscopic structure.
Figure~\ref{fig:ME_gam} shows the matrix elements $M_{\alpha \beta}$ defined in Eq.~(\ref{eq:ME}) of the 2qp excitations near the Fermi levels for the $K^\pi=2^+$ state.
One sees that 2qp excitations near the Fermi levels have a coherent contribution to generate the $K^\pi=2^+$ state with the same sign as for the $K^\pi=2^-$ state.
Table~\ref{tab:ME_gam2} summarizes the amplitudes $|f|^2-|g|^2$ for the configurations possessing a dominant contribution to the $K^\pi=2^+$ state.
Both the neutron 2qp excitations A: $[620]1/2 \otimes [622]5/2$ and B: $[622]3/2 \otimes [624]7/2$
satisfy the selection rule of the $\gamma$ vibration~\cite{BM2}:
\begin{equation}
\Delta N= 0 \hspace{3pt}\mathrm{or}\hspace{3pt} 2, \Delta n_3 =0, \Delta \Lambda = \Delta \Omega= \pm 2.
\label{selection}
\end{equation}
However, the matrix element for B is small as this configuration is a particle--particle excitation.
\begin{table}[t]
\caption{\label{tab:ME_gam2}
Amplitudes $|f|^2-|g|^2$ of the 2qp excitations possessing a dominant contribution to the $K^\pi=2^+$ state.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
configuration & $^{244}$Pu & $^{246}$Cm & $^{248}$Cf & $^{250}$Fm & $^{252}$No \\
\hline
A: $\nu[620]1/2 \otimes \nu[622]5/2$ & 0.31 & 0.27 & 0.18 & 0.20 & 0.30 \\
B: $\nu[622]3/2 \otimes \nu[624]7/2$ & 0.14 & 0.11 & 0.07 & 0.08 & 0.20 \\
C: $\pi [521]1/2\otimes \pi[512]5/2$ & 0.12 & 0.18 & 0.14 & 0.09 & 0.03 \\
D: $\pi[521]1/2 \otimes \pi[521]3/2$ & 0.07 & 0.27 & 0.49 & 0.52 & 0.32
\end{tabular}
\end{ruledtabular}
\end{table}
The proton 2qp excitation D: $[521]1/2 \otimes [521]3/2$, which satisfies the selection rule as well,
governs the isotonic dependence of the $K^\pi=2^+$ state.
The $\pi[521]3/2$ orbital is located above the Fermi level of protons in $^{244}$Pu and $^{246}$Cm,
and the $\pi[521]1/2$ orbital is located below the Fermi level in $^{252}$No, as seen in Fig.~\ref{fig:shell}.
Thus, the transition matrix element of the 2qp excitation D is small in these isotones.
The proton 2qp excitation C: $[521]1/2\otimes [512]5/2$ is a particle--hole excitation in $Z=94\textendash100$,
and this has an appreciable contribution in the isotones under the present study.
However, the particle--hole energy is relatively high, and the matrix element is not significant because the excitation is in disagreement with the selection rule.
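For illustration only (my own ad hoc encoding of the rule~(\ref{selection})
for a 2qp pair with $K=2$, scanning the signed $(\Lambda,\Omega)$
combinations; the helper name is hypothetical), the assignments quoted for
the configurations of Table~\ref{tab:ME_gam2} can be reproduced as follows:
\begin{verbatim}
# Illustrative only: orbitals encoded as (N, n3, Lambda, Omega)
# from the asymptotic label [N n3 Lambda]Omega.
def gamma_allowed(orb1, orb2, K=2):
    (N1, n31, L1, O1), (N2, n32, L2, O2) = orb1, orb2
    if abs(N1 - N2) not in (0, 2) or n31 != n32:
        return False
    for s1 in (1, -1):             # relative signs giving Omega1+Omega2 = K
        for s2 in (1, -1):
            if s1 * O1 + s2 * O2 == K and abs(s1 * L1 + s2 * L2) == 2:
                return True
    return False

A = ((6, 2, 0, 0.5), (6, 2, 2, 2.5))   # nu[620]1/2 x nu[622]5/2 -> True
B = ((6, 2, 2, 1.5), (6, 2, 4, 3.5))   # nu[622]3/2 x nu[624]7/2 -> True
C = ((5, 2, 1, 0.5), (5, 1, 2, 2.5))   # pi[521]1/2 x pi[512]5/2 -> False
D = ((5, 2, 1, 0.5), (5, 2, 1, 1.5))   # pi[521]1/2 x pi[521]3/2 -> True
for name, (o1, o2) in zip("ABCD", (A, B, C, D)):
    print(name, gamma_allowed(o1, o2))
\end{verbatim}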
In this respect, $^{248}$Cf and $^{250}$Fm are the most favorable systems for the $\gamma$ vibration to occur at low energy.
The excitation energy of the $K^\pi=2^+$ state is higher than the $K^\pi=2^-$ state.
This is because the quasi-proton [521]1/2 orbital is higher in energy than the [633]7/2 orbital.
However, the SkM* model predicts that the $K^\pi=2^+$ state appears lower than the $K^\pi=2^-$ state in $^{250}$Fm;
this may be in contradiction with the measurements of the low-lying states in $^{251}$Md~\cite{bro13},
where the ground state is suggested to be $\pi[514]7/2$ and
the excited band of the $\pi[521]1/2$ configuration appears at 55 keV as the lowest state.
Finally, I investigate the roles of the pairing correlations.
Plotted in Fig.~\ref{fig:QRPA2}(b) are the results obtained by decreasing the pairing strength for neutrons (open triangles-down) and
by increasing the pairing strength for protons (open triangles-up).
One sees that the decrease of the pairing for neutrons lowers the excitation energies, and the effect is larger in the Pu, Cm, and No isotones.
The increase of the pairing for protons raises the excitation energies in the Cf and Fm isotones.
As the increase (decrease) of the pairing strength enhances (reduces) the pairing gap, the QRPA frequencies rise (decline).
The pairing effect depends on the structure of the states.
To see one aspect of the structure, let me introduce the quantity: $|M_\nu/M_\pi|=|\langle i|\hat{F}^\nu|0\rangle/\langle i|\hat{F}^\pi|0\rangle|$.
For the $K^\pi=2^+$ states calculated in the SkM* model,
this is 0.93, 0.89, 0.85, 0.86, and 0.98 in $^{244}$Pu, $^{246}$Cm, $^{248}$Cf, $^{250}$Fm, and $^{252}$No, respectively.
Here, the $|M_\nu/M_\pi|$ value is divided by $N/Z$.
The excitation of protons is stronger in the Cf and Fm isotones, which is in accordance with the microscopic structure discussed above.
The pairing of neutrons (protons) is thus relatively sensitive in the Pu, Cm, No (Cf, Fm) isotones.
The dynamical pairing enhances the collectivity: the energy shift due to the residual pair interaction is 0.06 MeV--0.12 MeV.
This is larger than the corresponding shift for the $K^\pi=2^-$ state, indicating that the coherence among particle--hole and particle--particle (hole--hole) excitations sets in
more strongly than in the $K^\pi=2^-$ state.
\section{Summary}\label{summary}
I have investigated the microscopic mechanism for the appearance of a low-energy $K^\pi=2^-$ state in the $N=150$ isotones in the actinides,
in particular, a sharp drop in energy in $^{248}$Cf.
Furthermore, I have studied the possible occurrence of the low-energy $K^\pi=2^+$ state to elucidate the mechanism
that prefers the simultaneous breaking of the reflection and axial symmetry to the breaking of the axial symmetry alone in this mass region.
To this end, I employed the nuclear energy-density functional (EDF) method: the Skyrme--Kohn--Sham--Bogoliubov and the quasiparticle
random-phase approximation were used to describe the ground state and the transition to excited states.
The Skyrme-type SkM* and SLy4 functionals reproduce the fall in the energy of the $K^\pi=2^-$ state at $Z=98$,
where the proton two-quasiparticle (2qp) excitation $[633]7/2 \otimes [521]3/2$ plays a decisive role for the peculiar isotonic dependence.
I have found interweaving roles by the pairing correlations of protons and the deformed shell closure at $Z=98$ formed by the [633]7/2 and [521]3/2 orbitals:
the SLy4 model produces a strong shell closure, and accordingly, the pairing of protons is greatly suppressed, resulting in a drop in energy.
The SkM* model predicts that the $K^\pi=2^-$ state appears lower in energy in $^{246}$Cf than in $^{248}$Cf as the Fermi level of neutrons is located
in between the $[622]5/2$ and $[734]9/2$ orbitals.
The $K^\pi=2^+$ state is predicted to appear higher in energy than the $K^\pi=2^-$ state
as the quasi-proton orbital $[521]1/2$ is located above the $[633]7/2$ orbital, except for $^{250}$Fm in the SkM* model, which is unlikely in view of the spectroscopic study of $^{251}$Md.
Compared with the available experimental data, the EDFs presently used are not perfect at quantitatively describing pair correlations and shell structures.
The present study shows a need for improvements of the EDFs, based on further comparative studies, in describing pair correlations and shell structures in heavy nuclei,
which are indispensable in predicting the shell effect and stability of superheavy nuclei.
\begin{acknowledgments}
This work was supported by the JSPS KAKENHI (Grants No. JP19K03824 and No. JP19K03872).
The numerical calculations were performed on Yukawa-21
at the Yukawa Institute for Theoretical Physics, Kyoto University.
\end{acknowledgments}
\section{Introduction}
Every year, the wise people who organize the European School of
Particle Physics feel it necessary to subject young experimentalists to
a course of lectures on `Beyond the Standard Model'. They treat this
subject as if it were a discipline of science that one could study and
master. Of course, it is no such thing. If we knew what lies beyond
the Standard Model, we could teach it with some confidence. But the
interest in this subject is precisely that we do not know what is
waiting for us there.
The confusion about `Beyond the Standard Model' goes beyond students
and summer school organizers to the senior scientists in our field. A
theorist such as myself who claims to be able to explain things about
physics beyond the Standard Model is very often met with skepticism
that such explanations are even possible. `Do we really have any
idea', one is told, `what we will find a higher energies?' `Don't we
just want the highest possible energy and luminosity?' `The Standard
Model works very well, so why must there be any new physics at all?'
And yet there are specific things that one can teach that should be
relevant to physics beyond the Standard Model. Though we do not know
what physics to expect at higher energies, the principles of
physics that we have learned in the explication of the
Standard Model should still apply there. In addition, we hope that
some of the questions not answered by the Standard Model should be
answered there. This course will concentrate its attention on these
two issues: What questions are likely to be addressed by new physics
beyond the Standard Model, and what general methods of analysis can we
use to create and analyze proposed answers to these questions?
A set of lectures on `Beyond the Standard Model' should have one
further goal as well. It is possible that the first sign of physics
beyond the Standard Model could be discovered next year at LEP, or
perhaps it is already waiting in the unanalyzed data from the Fermilab
collider. On the other hand, it is possible that this discovery will
have to wait for the great machines of the next generation.
Many people feel dismay at the fact
that the pace of discovery in high-energy physics is very slow, with
experiments operating on the time scale of a decade familiar in
planetary science rather than on the time scale of days or weeks.
Because of the cost and complexity of modern elementary particle
experiments, these long time scales are inevitable, and we have to
adjust our expectations to them. But the long time scales also require
that we set for ourselves very clear goals that we can try to realize a
decade in the future. To do this, it is useful to have a concrete
understanding of what experiments will look like at the next generation
of colliders and what physics issues they address. Even if we cannot
correctly predict what Nature will provide for us at higher energy, it
is essential to take some models as illustrative examples
and work out in complete detail how
to analyze them experimentally. With luck, we can choose models that will
have features relevant to the ultimate correct theory of the next scale
in physics. But even if we are not sufficiently lucky or insightful to
predict what will appear, such a study will leave us prepared to solve
whatever puzzles Nature has set.
This, then, is what I would like to accomplish in these lectures. I
will set out some questions which I feel are the most important ones at
the present stage of our understanding, and the ones which I feel are
most likely to be addressed by the new phenomena of the next energy
scale. I will explain some theoretical ideas that have come from our
understanding of the Standard Model that I feel will play an important
role at the next level. Building on these ideas,
I will describe illustrative models of
physics beyond the Standard Model. And, for each case, I will describe
the program of
experiments that will clarify the nature of the new physics that the model
implies.
When we design a program of future high-energy experiments, we are also
calling for the construction of new high-energy accelerators that
would be needed to carry out this program. I hope that
students of high-energy physics will take an interest in this practical
or political aspect of our field of
science. Those who think about this seriously know
that we cannot ask society to support such expensive machines unless we
can promise that these facilities will give back fundamental knowledge that
is of the utmost importance and that cannot be obtained in any other way.
I hope that they will be interested to see how central a role the
CERN Large Hadron Collider (LHC) plays in each of the experimental programs
that I will describe. Another proposed facility will also play a major
role in my discussion, a high-energy $e^+e^-$ linear collider with
center-of-mass energy about 1 TeV. I will argue in these lectures that,
with these facilities, the scientific
justification changes qualitatively from that
of the present colliders at CERN and Fermilab. Whereas at current energies,
we search for new physics and try to place limits, at the next step in energy we
must find new physics that addresses one of the major gaps in the
Standard Model.
This last issue leads us to ask another, and perhaps unfamiliar,
question about the colliders of the next generation.
Much ink has been wasted in comparing hadron and lepton colliders
on the basis of energy reach and asking which is preferable. The real
issue for these machines is a different one.
We will see that illustrative models
of new physics based on simple ideas will turn out to have rich and
complex phenomenological consequences. Thus, it is a serious question whether
we will be able to understand the model that Nature has put forward for us
from experimental observations. I will argue through my
examples that these two
types of colliders, which focus on different and complementary aspects
of the high-energy phenomena, can bring back a complete picture of the new
phenomena of a clarity that neither, working alone, could achieve.
The outline of these lectures is as follows. In Section 2, I will
introduce the question of the mechanism of electroweak symmetry
breaking and also two related questions that influence the construction
and analysis of models of new physics. In Sections 3 and 4, I will
give one illustrative set of answers to these questions through a
detailed discussion of models with supersymmetry at the
weak-interaction scale. Section 3 will develop the formalism of
supersymmetry and derive its connection to the questions I have set
out. Section 4 will discuss more detailed properties of supersymmetric
models which provide interesting experimental probes. In Section 5, I
will discuss models with new strong interactions at the TeV mass scale,
models which give very different answers to our broad questions about
physics beyond the Standard Model. In Section 6, I will summarize the
lessons of our study of these two very different types of models and
draw some general conclusions.
\section{Three Basic Questions}
To begin our study of physics beyond the Standard Model, I will
review some properties of the Standard Model and some insights that
it provides. I will also discuss some questions that the Standard
Model does not answer, but which might reasonably be answered at the
next scale in fundamental physics.
\subsection{Why not just the Standard Model?}
To introduce the study of physics beyond the Standard Model, I must
first explain what is wrong with the Standard Model. To see this, we
only have to compare the publicity for the Standard Model, what we say
about it to beginning
students and to our colleagues in other fields, with the
explicit expression for the Standard Model Lagrangian.
When we want to advertise the virtues of the Standard Model, we say
that it is a model whose foundation is symmetry. We start from the
principle of local gauge invariance, which tells us that the
interactions of vector bosons are associated with a symmetry
group. The form of these interactions is uniquely specified by the
group structure. Thus, from the knowledge of the basic symmetry group,
we can write down the Lagrangian or the equations of motion.
Specifying the group to be $U(1)$, we derive electromagnetism. To
create a complete theory of Nature, we choose the group, in accord with
observation, to be $SU(3)\times SU(2)\times U(1)$. This group is a
product, and we are free to include a different coupling constant for
each factor. But in the ideal theory, these would be the only
parameters. Specify to which representations of the gauge group the
matter particles belong, fix the three coupling constants, and we have
a complete theory of Nature.
This set of ideas is tantalizing because it is so close to being true.
The couplings of quarks and leptons to the strong, weak, and
electromagnetic interactions are indeed fixed correctly in terms of
three coupling constants. From the LEP and SLC experiments, we have
learned that the pattern of weak-interaction couplings of the quarks
and leptons follows the symmetry prediction to the accuracy of a few
percent, and also that the strong-interaction coupling is universal
among quark flavors at a similar level of accuracy.
On the other hand, the Lagrangian of the Minimal Standard Model tells a rather
different story. Let me write it here for reference:
\begin{eqnarray}
{\cal L} &=& \bar q i \not{\hbox{\kern-4pt $D$}} q + \bar\ell i \not{\hbox{\kern-4pt $D$}} \ell -
\frac{1}{4} (F^a_{\mu\nu})^2 \nonumber \\
& & + \left|D_\mu\phi\right|^2 - V(\phi) \nonumber \\
& & - \left( \lambda^{ij}_u \bar u_R^i \phi\cdot Q_L^j
+ \lambda^{ij}_d \bar d_R^i\phi^* \cdot Q_L^j +
\lambda^{ij}_\ell \bar e_R^i \phi^* \cdot L_L^j + {\mbox{\rm h.c.}} \right) \ .
\eeqa{eq:a}
The first line of \leqn{eq:a} is the pure gauge theory discussed in the
previous paragraph. This line of the Lagrangian contains only three
parameters, the three Standard Model gauge couplings $g_s$, $g$, $g'$,
and it does correctly describe the couplings of all species of quarks
and leptons to the strong, weak, and electromagnetic gauge bosons.
The second line of \leqn{eq:a} is associated with the Higgs boson field
$\phi$. The Minimal Standard Model introduces one scalar field, a
doublet of weak interaction $SU(2)$, so that its vacuum expectation
value can give a mass to the $W$ and $Z$ bosons. The potential energy
of this field $V(\phi)$ contains at least two new parameters which play
a role in determining the $W$ boson mass. At this moment, there is no
experimental evidence for the existence of the Higgs field $\phi$ and
very little evidence that constrains the form of its potential.
The third line of \leqn{eq:a} similarly gives an origin for the masses
of quarks and leptons. In the Standard Model, the left- and
right-handed quark fields belong to different representations of
$SU(2)\times U(1)$; a similar conclusion holds for the leptons. On the
other hand, a mass term for a fermion couples the left- and
right-handed components. This is impossible as long as the gauge
symmetry is exact. In the Standard Model, one can write a
trilinear term linking a left- and right-handed pair of species to the
Higgs field. When the Higgs field acquires a vacuum expectation value,
this coupling turns into a mass term. Unfortunately, a generic
fermion-fermion-boson coupling is restricted only rather weakly by
gauge symmetries. The Standard Model gauge symmetry allows three
complex $3\times 3$ matrices of couplings, the
parameters $\lambda^{ij}$ of \leqn{eq:a}. When $\phi$ acquires a
vacuum expectation value, these matrices become the mass matrices of
quarks and leptons. Thus, whereas the gauge couplings of quarks and
leptons were strongly restricted by symmetry, the mass terms for these
particles can be of general and, indeed, complex, structure.
If we consider \leqn{eq:a} to be the fundamental Lagrangian
of Nature, the
situation is even worse. The Higgs coupling matrices $\lambda^{ij}$
are renormalizable couplings in this Lagrangian. The property of
renormalizability implies that, once these couplings are specified, the
theory gives definite predictions. However, the specification of the
renormalizable couplings is part of the statement of the problem.
Except in very special field theories, these couplings cannot be
determined from the internal consistency of the theory itself. The
Standard Model Lagrangian then leaves us with the three matrices
$\lambda^{ij}$, and the parameters of the Higgs potential $V(\phi)$, as
conditions of the problem which cannot in principle be determined. In
order to understand why the masses of the quarks, the leptons, and the
$W$ and $Z$ bosons have their observed values, we must find a deeper
theory beyond the Standard Model from which the Lagrangian \leqn{eq:a},
or some replacement for it, can be derived.
Thus, it is a disappointing feature of the Minimal Standard Model that
it has a large number of parameters which are undetermined, and which
cannot be determined. This disappointment, though, has an interesting
converse. Typically in physics, when we meet a system with a large
number of parameters, what stands behind it is a system with a simple
description which is realized with some complexity in its dynamics.
The transport coefficients of fluids or the properties of electrons in
a semiconductor are described in terms of a large number of parameters,
but these parameters can be computed from an underlying atomic picture.
Through this analogy, we would conclude that the gauge couplings of
quarks and leptons are likely to reflect a fundamental structure, but
that the Higgs boson is unlikely to be simple, minimal, or elementary.
The undetermined couplings of the Minimal Standard Model
are precisely those of the Higgs boson. If we could break through and
discover the simple underlying picture behind the Higgs boson, or
behind the breaking of $SU(2)\times U(1)$ symmetry, we would then have
the correct deeper viewpoint from which to understand the undetermined
parameters of the Standard Model.
\subsection{Three models of electroweak symmetry breaking}
The argument given in the previous section leads us to the question:
What is actually the mechanism of electroweak symmetry breaking? In
this section, I would like to present three possible models for this
phenomenon and to discuss their strengths and weaknesses.
The first of these is the model of electroweak symmetry breaking
contained in the Minimal Standard Model. We introduce a Higgs field
\begin{equation}
\phi = \pmatrix{\phi^+ \cr \phi^0}
\eeq{eq:b}
with $SU(2)\times U(1)$ quantum numbers $I= \frac{1}{2}$, $Y =
\frac{1}{2}$. I will use $\tau^a = \sigma^a/2$ to denote the generators
of $SU(2)$, and I normalize the hypercharge so that the electric charge
is $Q = I^3 + Y$.
Take the Lagrangian for the field $\phi$ to be the second line of
\leqn{eq:a}, with
\begin{equation}
V(\phi) = - \mu^2 \phi^\dagger \phi + \lambda (\phi^\dagger\phi)^2 \ .
\eeq{eq:c}
This potential is minimized when $\phi^\dagger\phi = \mu^2/2\lambda$.
Thus, one particular vacuum state is given by
\begin{equation}
\VEV\phi = \pmatrix{0 \cr \frac{1}{\sqrt{2}}v\cr} \ ,
\eeq{eq:d}
where $v^2 = \mu^2/\lambda$.
The most general $\phi$ field configuration can be written in the same
notation as
\begin{equation}
\phi = e^{i\alpha(x)\cdot \tau}
\pmatrix{0 \cr \frac{1}{\sqrt{2}} (v + h(x))\cr} \ .
\eeq{eq:e}
In this expression, $\alpha^a(x)$ parametrizes an $SU(2)$ gauge
transformation. The field $h(x)$ is a gauge-invariant fluctuation away
from the vacuum state; this is the physical Higgs field. The mass of
this field is given by
\begin{equation}
m_h^2 = 2\mu^2 = 2\lambda v^2 \ .
\eeq{eq:ee}
Notice that, in this model, $h(x)$ is the only gauge-invariant degree of
freedom in $\phi(x)$, and so the symmetry-breaking sector gives rise to
only one new particle, the Higgs scalar.
If we insert \leqn{eq:d} into the kinetic term for $\phi$, we obtain
a mass term for $W$ and $Z$; this is the usual Higgs mechanism for
producing these masses. If $g$ and $g'$ are the $SU(2)\times U(1)$
coupling constants, one finds the familiar result
\begin{equation}
m_W = g \frac{v}{2} \ ,
\qquad m_Z = \sqrt{g^2 + g^{\prime 2}} \frac{v}{2} \ .
\eeq{eq:f}
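The result \leqn{eq:f} follows from a short computation which I sketch here for reference (a standard textbook step). With $D_\mu = \partial_\mu - i g A^a_\mu \tau^a - i g' Y B_\mu$, the vacuum value \leqn{eq:d} gives
\begin{displaymath}
\left| D_\mu \VEV\phi \right|^2 = \frac{v^2}{8} \left[ g^2 \left( (A^1_\mu)^2
 + (A^2_\mu)^2 \right) + \left( g A^3_\mu - g' B_\mu \right)^2 \right] \ ;
\end{displaymath}
the fields $W^\pm_\mu = (A^1_\mu \mp i A^2_\mu)/\sqrt{2}$ thus acquire the mass $gv/2$, the combination $Z_\mu = (gA^3_\mu - g'B_\mu)/\sqrt{g^2+g^{\prime 2}}$ acquires the mass $\sqrt{g^2+g^{\prime 2}}\, v/2$, and the orthogonal combination, the photon, remains massless.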
The measured values of the masses and couplings then lead to
\begin{equation}
v = 246 \ \mbox{GeV} \ .
\eeq{eq:g}
This is a very simple model of $SU(2)\times U(1)$ symmetry breaking.
Perhaps it is even too simple. If we ask the question, why is
$SU(2)\times U(1)$ broken, this model gives the answer `because
$(-\mu^2) < 0$.' This is a perfectly correct answer, but it teaches
us nothing. Normally, the grand qualitative phenomena of physics
happen as the result of definite physical mechanisms. But there is no
physically understandable mechanism operating here.
One often hears it said that if the minimal Higgs model is too simple,
one can make the model more complex by adding a second Higgs doublet.
For our next case, then, let us consider a model with two Higgs
doublets $\phi_1$, $\phi_2$, both with $I= \frac{1}{2}$, $Y =
\frac{1}{2}$. The Lagrangian of the Higgs fields is
\begin{equation}
{\cal L} = \left| D_\mu\phi_1\right|^2 +
\left| D_\mu\phi_2\right|^2 -
V(\phi_1,\phi_2) \ ,
\eeq{eq:h}
with
\begin{equation}
V = - \pmatrix{\phi_1^\dagger & \phi_2^\dagger\cr} M^2
\pmatrix{\phi_1 \cr \phi_2 \cr} + \cdots \ ,
\eeq{eq:i}
where $M^2$ is a $2\times 2$ matrix. It is not difficult to engineer a
form for $V$ such that, at the minimum, the vacuum expectation values
of $\phi_1$ and $\phi_2$ are aligned:
\begin{equation}
\VEV{\phi_1} = \pmatrix{0 \cr \frac{1}{\sqrt{2}}v_1\cr} \ , \qquad
\VEV{\phi_2} = \pmatrix{0 \cr \frac{1}{\sqrt{2}}v_2\cr} \ .
\eeq{eq:j}
The ratio of the two vacuum expectation values is conventionally
parametrized by an angle $\beta$,
\begin{equation}
\tan\beta = \frac{v_2}{v_1}\ .
\eeq{eq:k}
To reproduce the correct values of the $W$ and $Z$ mass,
\begin{equation}
v_1^2 + v_2^2 = v^2 = (246\ \mbox{GeV})^2 \ .
\eeq{eq:l}
The field content of this model is considerably richer than that of the
minimal model. An infinitesimal gauge transformation of the vacuum
configuration \leqn{eq:j} leads to a field configuration
\begin{equation}
\delta\phi_1 =\frac{1}{2} \pmatrix{v_1(\alpha_1+i\alpha_2) \cr
v_1( i\alpha_3)\cr} \ , \qquad
\delta\phi_2 = \frac{1}{2} \pmatrix{v_2(\alpha_1+i\alpha_2) \cr
v_2( i\alpha_3)\cr} \ .
\eeq{eq:m}
The fluctuations of the field configuration which are orthogonal to
this lead to new physical particles. These include the motions
\begin{equation}
\delta\phi_1 =\frac{1}{2} \pmatrix{\sin\beta\cdot (h_1+ih_2) \cr
\sin\beta\cdot ( ih_3)\cr} \ , \qquad
\delta\phi_2 = \frac{1}{2} \pmatrix{-\cos\beta\cdot (h_1 + i h_2) \cr
-\cos\beta\cdot( ih_3)\cr} \ ,
\eeq{eq:n}
as well as the fluctuations
$v_i \to v_i + H_i$ of the two vacuum expectation values. Thus we
find five new particles. The fields $h_1$ and $h_2$ combine to form
charged Higgs bosons $H^\pm$. The field $h_3$ is a CP-odd neutral
boson, usually called $A^0$. The two fields $H_i$ typically mix to form
mass eigenstates
called $h^0$ and $H^0$.
I have discussed this structure in some detail because we will later
see it appear in specific model contexts. But it does nothing as far
as answering the physical question that I posed a moment ago. Again,
if one asks what is the mechanism of weak interaction symmetry
breaking, the answer this model gives is that the matrix $(-M^2)$ has a
negative eigenvalue.
The third model I would like to discuss is a model of a very different
kind proposed in 1979 by Weinberg and Susskind \cite{Wein,Suss}.
Imagine that the fundamental interactions include a new gauge
interaction which is almost an exact copy of QCD with two quark
flavors. The new interactions differ from QCD in only two respects:
First, the quarks are massless; second, the nonperturbative scales
$\Lambda$ and $m_\rho$ are much larger in the new interactions. The two
flavors of quarks should be coupled to $SU(2)\times U(1)$ just as
$(u,d)$ are, and I will call them $(U,D)$.
In QCD, the strong interactions between quarks and antiquarks lead to
the generation of large effective masses for the $u$ and $d$. This
mass generation is associated with spontaneous symmetry breaking. The
strong interactions between very light quarks and antiquarks make it
energetically favorable for the vacuum of space to fill up with
quark-antiquark pairs. This gives vacuum expectation values to
operators built from quark and antiquark fields.
The analogue of this phenomenon should occur in our theory of new
interactions---for just the same reason---and so we should find
\begin{equation}
\VEV{\bar U U } = \VEV{\bar D D} = - \Delta \neq 0 \ .
\eeq{eq:o}
In terms of chiral components,
\begin{equation}
\bar U U = U_L^\dagger U_R + U_R^\dagger U_L \ ,
\eeq{eq:p}
and similarly for $\bar D D$. But, in the weak-interaction theory, the
left-handed quark fields transform under $SU(2)$ while the right-handed
fields do not. Thus, the vacuum expectation value in \leqn{eq:o}
signals $SU(2)$ symmetry breaking. In fact, under $SU(2)\times U(1)$,
the operator $\bar Q_L U_R$ has the same quantum numbers $I=
\frac{1}{2}$, $Y = \frac{1}{2}$ as the elementary Higgs boson that we
introduced in our earlier model. The vacuum expectation value of this
operator then has the same effect: It breaks $SU(2)\times U(1)$ to the
$U(1)$ symmetry of electromagnetism and gives mass to the three
weak-interaction bosons.
I will explain in Section 5.1 that the pion decay constant $F_\pi$ of
the new strong interaction theory plays the role of $v$ in \leqn{eq:f}
in determining the mass scale of $m_W$ and $m_Z$. If we were to set
$F_\pi$ to the value given in \leqn{eq:g}, we would need to scale up
QCD by the factor
\begin{equation}
{246\ \mbox{GeV}\over 93\ \mbox{MeV}} = 2600.
\eeq{eq:q}
Then the hadrons of these new strong interactions would be at TeV
energies.
For me, the Weinberg-Susskind model is much more appealing as a model
of electroweak symmetry breaking than the Minimal Standard Model. The
reason for this is that, in the Weinberg-Susskind model, electroweak
symmetry breaking happens naturally, for a reason, rather than being
included as the result of an arbitrary choice of parameters. I would
like to emphasize especially that the Weinberg-Susskind model is
preferable even though it is more complex. In fact, this complexity is
an essential part of its foundation. In this model, {\em something
happens}, and that physical action gives rise to a set of
consequences, of which electroweak symmetry breaking is one.
This notion that the consequences of physical theories flow from their
complexity is familiar from the theories in particle physics that we
understand well. In QCD, quark confinement, the spectrum of hadrons,
and the parton description of high-energy reactions all flow out of the
idea of a strongly-coupled non-Abelian gauge interaction. In the weak
interactions, the $V$--$A$ structure of weak couplings and all of its
consequences for decays and asymmetries follow from the underlying
gauge structure.
Now we are faced with a new phenomenon, the symmetry breaking of
$SU(2)\times U(1)$, whose explanation lies outside the realm of the
known gauge theories. Of course it is possible that this phenomenon
could be explained by the simplest, most minimal addition to the laws
of physics. But that is not how we have seen Nature work. In searching
for an explanation of electroweak symmetry breaking, we should not be
searching for a simplistic theory but rather for a simple idea from
which deep and rich consequences might flow.
\subsection{Questions for orientation}
The argument of the previous section gives focus to the study of
physics beyond the Standard Model. We have a phenomenon necessary to
the working of weak-interaction theory, the symmetry-breaking of
$SU(2)\times U(1)$, which we must understand. This symmetry-breaking
is characterized by a mass scale, $v$ in \leqn{eq:g}, which is close to
the energy scales now being probed at accelerators. At the same time,
it is a new qualitative phenomenon which cannot originate from the
known gauge interactions. Therefore, it calls for new physics, and in
an energy region where we can hope to discover it. For me, this is the
number one question of particle physics today:
\begin{description}
\item [*] {\bf What is the mechanism of electroweak symmetry breaking?}
\end{description}
Along with this question come two subsidiary ones. Both of these
are connected to the fact that electroweak symmetry breaking is
necessary for the generation of masses for the weak-interaction bosons,
the quarks, and the leptons. Perhaps there are also other particles
which cannot obtain mass until $SU(2)\times U(1)$ is broken. Then
these particles also must have masses at the scale of a few hundred GeV
or below. The heaviest of these particles must be especially strongly
coupled to the fields that are the basic cause of the
symmetry-breaking. At the very least, the top quark belongs to this
class of very heavy particles, and other members of this class might
well be found. Thus, we are also led to ask,
\begin{description}
\item [*]
{\bf What is the spectrum of elementary particles at the 1 TeV energy scale?}
\item [*]
{\bf Is the mass of the top quark generated by weak couplings or by
new strong interactions?}
\end{description}
In the remainder of this section, I will comment on these three
questions. In the following sections, when we consider explicit models
of electroweak symmetry breaking, I will develop the models theoretically
to propose
answers to these questions. At any stage in the argument,
though, you should have firmly in mind that these answers will ultimately
come from experiment, and, in particular, from direct observations
of TeV-energy phenomena. The goal of my theoretical arguments, then, will
be to suggest particular phenomena which
could be observed experimentally to shed light on these questions.
We will see in Sections 4 and 5 that models which attempt to explain
electroweak symmetry breaking typically suggest a variety of new
experimental probes, which may allow us to
uncover a whole new layer of the fundamental
interactions.
\subsection{General features of electroweak symmetry breaking}
Since the question of electroweak symmetry breaking will be our main
concern, it is important to state at the beginning what we do know
about this phenomenon. Unfortunately, our knowledge is very limited.
Basically it consists of only three items.
First, we know the general scale of electroweak symmetry breaking,
which is set by the scale of $m_W$ and $m_Z$,
\begin{equation}
v = 246 \ \mbox{GeV} \ .
\eeq{eq:r}
If there are new particles associated with the mechanism of electroweak
symmetry breaking, their masses should be at the scale $v$. Of course,
this is only an order-of-magnitude estimate. The precise relation
between $v$ and the masses of new particles depends on the specific
model of electroweak symmetry breaking. In the course of these
lectures, I will discuss examples in which the most important new
particles lie below $v$ and other examples in which they lie higher by
a large factor.
Second, we know that the electroweak boson masses follow the pattern
\leqn{eq:f}, that is,
\begin{equation}
{m_W\over m_Z} = \cos\theta_w \ , \qquad m_\gamma = 0 \ .
\eeq{eq:s}
In terms of the original $SU(2)$ and $U(1)$ gauge bosons $A^a_\mu$,
$B_\mu$, this pattern tells us that the mass matrix has the form
\begin{equation}
m^2 = {v^2\over 4}\pmatrix{ g^2 & & & \cr & g^2 & & \cr
& & g^2 & -gg' \cr && -gg' & g^{\prime 2}\cr}
\eeq{eq:t}
acting on the vector $(A^1_\mu, A^2_\mu, A^3_\mu, B_\mu)$. Notice that
the $3\times 3$ block of this matrix acting on the $SU(2)$ bosons is
diagonal. This would naturally be a consequence of an unbroken $SU(2)$
symmetry under which $(A^1_\mu, A^2_\mu, A^3_\mu)$ form a triplet
\cite{Marvin,SSVZ}. This strongly suggests that an unbroken $SU(2)$ symmetry,
called {\em custodial $SU(2)$}, should be included in any successful model
of electroweak symmetry breaking.
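As a cross-check (a one-line computation, added here), the lower $2\times 2$ block of \leqn{eq:t}, acting on $(A^3_\mu, B_\mu)$, has eigenvalues determined by
\begin{displaymath}
\det \pmatrix{ g^2 - x & -gg' \cr -gg' & g^{\prime 2} - x \cr}
 = x^2 - (g^2 + g^{\prime 2})\, x = 0 \ ,
\end{displaymath}
so that, with the overall factor $v^2/4$, this block describes one massless boson, the photon, and one boson with mass $\sqrt{g^2 + g^{\prime 2}}\, v/2$, the $Z^0$, reproducing \leqn{eq:s}.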
The Minimal Standard Model actually contains such a symmetry
accidentally. The complex doublet $\phi$ can be viewed as a set of
four real-valued fields,
\begin{equation}
\phi = \frac{1}{\sqrt{2}} \pmatrix{ \phi^1 + i \phi^2\cr
\phi^3 + i \phi^4\cr} \ .
\eeq{eq:tt}
The Higgs potential \leqn{eq:c} is invariant to $SO(4)$ rotations of
these fields. The vacuum expectation value \leqn{eq:d} gives an
expectation value to one of the four components and so breaks $SO(4)$
spontaneously to $SO(3) = SU(2)$. In the Weinberg-Susskind model,
there is also a custodial $SU(2)$ symmetry, the isospin symmetry of the
new strong interactions. In this case, the custodial symmetry is not
an accident, but rather a component of the new idea.
Third, we know that the new interactions responsible for electroweak
symmetry breaking contribute very little to precision electroweak
observables. I will discuss this constraint in somewhat more detail in
Section 5.2. For the moment, let me point out that, if we take the value
of the electromagnetic coupling $\alpha$ and the weak interaction
parameters $G_F$ and $m_Z$ as input parameters, the value of the weak
mixing angle $\sin^2\theta_w$ that governs the forward-backward and polarization
asymmetries of the $Z^0$ can be shifted by radiative corrections
involving particles associated with the symmetry breaking. In the
Minimal Standard Model, this shift is rather small,
\begin{equation}
\delta(\sin^2\theta_w) = {\alpha\over \cos^2\theta_w - \sin^2\theta_w} {1 + 9 \sin^2\theta_w\over 24 \pi}
\log{m_h\over m_Z} \ .
\eeq{eq:w}
The coefficient of the logarithm has the value $6\times 10^{-4}$.
The accuracy of the LEP and SLC experiments is such that the size of
the logarithm cannot be much larger than 1, and larger radiative
corrections from additional sources
are forbidden. In models of electroweak symmetry breaking
based on new strong interactions, this can be an important constraint.
\subsection{The evolution of couplings}
Now I would like to comment similarly on the two subsidiary questions
that I put forward in Section 2.3. I will begin with the first of
these questions: What is the spectrum of elementary particles at the 1
TeV energy scale? In the discussion above, I have already argued for
the importance of this question. Because mass generation in quantum
field theory is associated with symmetry breaking, and because one of
the major symmetries of Nature is broken at the scale $v$, we might
expect a sizeable multiplet of particles to have masses of the order of
magnitude of $v$, that is, in the range of hundreds of GeV. Well above
the scale of $v$, these particles are effectively massless species
characterized by their definite quantum numbers under $SU(2)\times
U(1)$.
It is important to note that, at energies much higher than $v$, the
basic species are chiral. For example, the right- and left-handed
components of the $u$ quark have the following quantum numbers in this
high-energy world:
\begin{equation}
u_R \ : \ I=0,\ Y=\frac{2}{3} \qquad \pmatrix{u\cr d\cr}_L\ : \
I=\frac{1}{2},\ Y=\frac{1}{6} \ .
\eeq{eq:x}
There are no relations between these two species; each half of the
low-energy $u$ quark has a completely different fundamental assignment.
And, each multiplet is prohibited from acquiring mass by $SU(2)\times
U(1)$ symmetry.
It is tempting to characterize the full set of elementary particles at 1
TeV---the particles, that is, that we have a chance of observing at
accelerators in the foreseeable future---as precisely those which are
forbidden to acquire mass until $SU(2)\times U(1)$ is broken. This
would explain why these particles are left over from the truly high-energy
dynamics of Nature, the dynamics which generates and perhaps unifies
the gauge and flavor interactions, to survive down to the much lower
energy scales accessible to our experiments.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfbox{Hmass.eps}
\end{center}
\caption{The simplest diagram which generates a Higgs boson mass term
in the Minimal Standard Model.}
\label{fig:one}
\end{figure}
Before giving in to this temptation, however, I would like to point out
that the Minimal Standard Model contains a glaring counterexample to
this point of view, the Higgs boson itself. The mass term for the
Higgs field
\begin{equation}
\Delta{\cal L} = - \mu^2 \phi^\dagger \phi
\eeq{eq:y}
respects all of the symmetries of the Standard Model whatever the value
of $\mu$. This model, then, gives no reason why $\mu$ is of order
$v$ rather than being, for example, twenty orders of magnitude larger.
Further, if we arbitrarily set $\mu^2 = 0$, the $\mu^2$ term would be
generated by radiative corrections. The first correction to the mass
is shown in Figure \ref{fig:one}. This simple diagram is
formally infinite,
but we might cut off its integral at a scale $\Lambda$ where
the Minimal Standard Model breaks down. With this prescription, the
diagram contributes to the Higgs boson mass $m^2 = -\mu^2$ in the amount
\begin{eqnarray}
-i m^2 &=& - i \lambda\int {d^4k\over (2\pi)^4} {i\over k^2} \nonumber \\
&=& - i {\lambda\over 16\pi^2} \Lambda^2 \ .
\eeqa{eq:z}
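To evaluate the integral (a standard manipulation, filled in here): rotating to Euclidean momentum, $k^0 = ik^4$, gives
\begin{displaymath}
\int^\Lambda {d^4k\over (2\pi)^4}\, {i\over k^2} =
\int_0^\Lambda {d^4k_E\over (2\pi)^4}\, {1\over k_E^2} =
{1\over 8\pi^2} \int_0^\Lambda dk_E\, k_E = {\Lambda^2\over 16\pi^2} \ .
\end{displaymath}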
Thus, the contribution of radiative corrections to the Higgs boson mass
is {\em nonzero}, {\em divergent}, and {\em positive}. The last of
these properties is actually the worst. Since electroweak symmetry
breaking requires that $m^2$ be negative, the contribution we have just
calculated must be cancelled by the Higgs boson bare mass term, and
this cancellation must be made more and more fine to achieve a negative
$m^2$ of the order of $-v^2$ in models where $\Lambda$ is very large.
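To put a number on this fine-tuning (an illustrative estimate, not part of the original argument): for $\lambda \sim 1$ and a cutoff $\Lambda \sim 10^{16}$ GeV, the contribution \leqn{eq:z} is of order $\Lambda^2/16\pi^2 \sim 10^{30}$ GeV$^2$, and the bare mass term must cancel it to a relative accuracy of order $v^2/(10^{30}\ \mbox{GeV}^2) \sim 10^{-25}$ in order to leave $m^2$ of order $-v^2$.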
This problem is often called the `gauge hierarchy problem'.
I think of it as
just a special aspect of the fact that the Minimal Standard Model
does not explain why $-\mu^2$ is negative or why electroweak symmetry is
broken. Once we have left this fundamental question to a mere choice
of a parameter, it is not surprising that the radiative corrections to
this parameter might drive it in an unwanted direction.
To continue, however, I would like to set this issue aside and think
more carefully about the properties of the massless, chiral particle
multiplets that we find at the TeV energy scale and above. If these
particles are described by a renormalizable field theory but we can
ignore any mass parameters, the interactions of these particles are
governed by the dimensionless couplings of their renormalizable
interactions. The scattering amplitudes generated by these couplings
will reflect the maximal parity violation of the field content, with
forward-backward and polarization asymmetries in scattering processes
typically of order 1.
For massless fermions, there is an ambiguity in writing the quantum
numbers in such a chiral situation because a left-handed fermion has a
right-handed antifermion, and vice versa. For reasons that will be
clearer in the next section, I will choose the convention of writing
all species of fermions in terms of their left-handed components,
viewing all right-handed particles as antiparticles. Thus, I will now
recast the right-handed $u$ quark in \leqn{eq:x} as the antiparticle
of a left-handed species $\bar u$ which belongs to the $\bar 3$
representation of color $SU(3)$. The fermions of the Standard Model
thus belong to the left-handed multiplets
\begin{eqnarray}
L \ : \ I = \frac{1}{2}, \ Y = - \frac{1}{2} & \qquad &
Q \ : \ I = \frac{1}{2}, \ Y = \frac{1}{6} \nonumber \\
\bar e \ : \ I = 0, \ Y = 1 & \qquad &
\bar u \ : \ I = 0, \ Y = -\frac{2}{3} \nonumber \\
& \qquad &
\bar d \ : \ I = 0, \ Y = \frac{1}{3} \ .
\eeqa{eq:a1}
Here $L$ is the left-handed lepton doublet and $Q$ is the left-handed
quark doublet. $Q$ is a color $3$, and $\bar u$, $\bar d$ are color
$\bar 3$'s. The right-handed electron is the antiparticle of $\bar e$,
and there is no right-handed neutrino.
This set of quantum numbers is repeated for each quark and
lepton generation.
If the dimensionless couplings of the theory at TeV energies are small,
these couplings will run according to their renormalization group
equations, but only at a logarithmic rate. Thus, above the TeV scale,
the description of elementary particles would change very slowly. In
this circumstance, it is reasonable to extrapolate many orders of
magnitude above the TeV energy scale and to derive definite physical
conclusions from that extrapolation. I will now describe two
consequences of this idea.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4in\epsfbox{Hcouple.eps}}
\end{center}
\caption{Diagrams which renormalize the Higgs coupling constant
in the Minimal Standard Model.}
\label{fig:two}
\end{figure}
The first of these concerns the coupling constant of the minimal Higgs
theory. For this analysis, it is best to write the Higgs multiplet
as four real-valued fields as in \leqn{eq:tt}. Then the Higgs
Lagrangian (ignoring the mass term) takes the form
\begin{equation}
{\cal L} = \frac{1}{2} (\partial_\mu \phi^i)^2 - \frac{1}{2} \lambda_b \left( (\phi^i)^2
\right)^2 \ ,
\eeq{eq:b1}
where $i = 1, \ldots, 4$. I have given the coupling a subscript $b$ to
remind us that this is the bare coupling. The value of the first,
tree-level, diagram shown in Figure \ref{fig:two} is
\begin{equation}
- 2i \lambda_b \left( \delta^{ij}\delta^{k\ell} +
\delta^{ik}\delta^{j\ell} + \delta^{i\ell}\delta^{jk} \right) \ .
\eeq{eq:c1}
To compute the three one-loop diagrams in Figure \ref{fig:two}, we
need to contract two of these structures together, using $\delta^{ii} =
4$ where necessary. The easiest way to do this is to isolate the terms
in each diagram which are proportional to $ \delta^{ij}\delta^{k\ell}$.
Since the set of three diagrams is symmetric under crossing, the other
two index contractions must appear also with equal coefficients. The
contributions to this term from the three loop diagrams shown in Figure
\ref{fig:two} have the form
\begin{equation}
{(-2i\lambda_b)^2\over 2} \int {d^4k\over (2\pi)^4} {i\over k^2}{i\over k^2}
\left( [8 + 2 + 2] \delta^{ij}\delta^{k\ell} + \cdots \right) \ ,
\eeq{eq:d1}
where I have ignored the external momentum, and the numbers in the
bracket give the contribution from each diagram. In a scattering
process, this expression is a good approximation when $k$ lies in the
range from the momentum transfer $Q$ up to the scale $\Lambda$ at which
the Minimal Standard Model breaks down. Then the sum of the diagrams
in Figure \ref{fig:two} is
\begin{equation}
- 2i \lambda_b \left(1 - {12 \lambda_b^2\over (4\pi)^2} \log{\Lambda^2
\over Q^2} \right) \cdot
\left[ \delta^{ij}\delta^{k\ell} + \cdots \right] \ .
\eeq{eq:e1}
The coefficient in this expression can be thought of as the effective
value of the Higgs coupling constant for scattering processes at the
momentum transfer $Q$. Often, we trade the bare coupling $\lambda_b$
for the value of the effective coupling at a low-energy scale (for
example, $v$), which we call the renormalized coupling $\lambda_r$. In
terms of $\lambda_r$, \leqn{eq:e1} takes the form
\begin{equation}
- 2i \lambda_r \left(1 + {12 \lambda_r^2\over (4\pi)^2} \log{Q^2
\over v^2} \right) \cdot
\left[ \delta^{ij}\delta^{k\ell} + \cdots \right] \ .
\eeq{eq:f1}
Whichever description we choose, the effective coupling $\lambda(Q)$
has a logarithmically slow variation with $Q$. The most convenient way
to describe this variation is by writing a differential equation,
called the {\em renormalization group equation} \cite{PS}
\begin{equation}
{d\over d \log Q} \lambda(Q) = {3\over 2\pi^2} \lambda^2(Q) \ .
\eeq{eq:g1}
If the coupling is not so weak, we should add further terms to the
right-hand side which arise from higher orders of perturbation theory.
The solution of \leqn{eq:g1} is
\begin{equation}
\lambda(Q) = {\lambda_r \over 1 - (3\lambda_r/2\pi^2)\log Q/v} \ .
\eeq{eq:h1}
It is interesting that the effective coupling is predicted to become
strong at high energy, specifically, at the scale
\begin{equation}
Q_* = v \exp\left[{2\pi^2\over 3 \lambda}\right] \ .
\eeq{eq:i1}
Either the minimal Higgs Lagrangian is a consequence of
strong-interaction behavior at the scale $Q_*$, or, at some energy
scale below $Q_*$ the simple Higgs theory must become a part of some
more complex set of interactions.
Making use of \leqn{eq:ee}, we can relate this bound on the validity of
the simple Higgs theory to the value of the Higgs mass, by rewriting
\leqn{eq:i1} as
\begin{equation}
Q_* = v \exp\left[{4\pi^2 v^2 \over 3 m_h^2}\right] \ .
\eeq{eq:i1p}
This is a remarkable formula, because the mass of the Higgs boson sits
in the denominator of an exponential. Thus, for small $m_h$ or a small
value of $\lambda$ at $v$, the energy scale $Q_*$ up to which the
minimal Higgs theory can be valid is very high. On the other hand, as
$m_h$ increases above $v$, the value of $Q_*$ decreases
catastrophically. Here is a table of the values predicted by
\leqn{eq:i1p}:
\begin{equation}
\begin{tabular}{rcr}
$m_h$ & \qquad & $Q_*$ \\ \cline{1-1} \cline{3-3} \\
150 GeV & & $6\times 10^{17}$ GeV \\
200 GeV & & $1 \times 10^{11}$ GeV \\
300 GeV & & $2\times 10^6$ GeV \\
500 GeV & & $6\times 10^3$ GeV \\
700 GeV & & $1\times 10^3$ GeV \\
\end{tabular}
\eeq{eq:ii1}
Notice that, as the mass of the Higgs boson goes above 700 GeV, the
scale $Q_*$ comes down to $m_h$. Larger values of the Higgs boson mass
in the minimal model are self-contradictory.
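The entries of \leqn{eq:ii1} are easy to reproduce from \leqn{eq:i1p}. Here is a minimal script (my own illustration, for readers who wish to check the numbers; the table rounds to one significant figure):
\begin{verbatim}
import math

v = 246.0  # GeV, the Higgs field vacuum expectation value

def q_star(m_h):
    # Scale at which the minimal Higgs coupling becomes strong
    return v * math.exp(4 * math.pi**2 * v**2 / (3 * m_h**2))

for m_h in [150, 200, 300, 500, 700]:
    print("m_h = %3d GeV  ->  Q_* = %.0e GeV" % (m_h, q_star(m_h)))
\end{verbatim}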
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfysize=4in\epsfbox{Lindner.eps}}
\end{center}
\caption{Region of validity of the minimal Higgs model in the
$(m_h, m_t)$ plane, including two-loop quantum corrections to the
Higgs potential,
from \protect\cite{Lindner}.}
\label{fig:three}
\end{figure}
A more accurate evaluation of the limit $Q_*$ in the Standard Model,
including the full field content of the model and terms in perturbation
theory beyond the leading logarithms, is shown in
Figure~\ref{fig:three} \cite{Lindner}.
Note that, in this more sophisticated calculation, the
limit $Q_*$ depends on the value of the top quark mass when $m_t$
becomes large. The calculation I have just described explains the top
boundary of the regions indicated in the figure; I will describe the
physics that leads to the right-hand boundary in Section 2.6.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=2.0in\epsfbox{Grenorm.eps}}
\end{center}
\caption{A one-loop diagram contributing to the renormalization-group
evolution of a gauge coupling constant.}
\label{fig:four}
\end{figure}
The same idea, that the basic coupling constants can evolve slowly on a
logarithmic scale in $Q$ due to loop corrections from quantum field
theory, can be applied to the $SU(3)\times SU(2)\times U(1)$ gauge
couplings. The renormalization group equation for the gauge coupling
$g_i$ which includes the effects of one-loop diagrams such as that
shown in Figure~\ref{fig:four} has the form
\begin{equation}
{d\over d \log Q} g_i(Q) = - {b_i\over (4\pi)^2} g_i^3 \ .
\eeq{eq:j1}
That is, the rate of change of $g_i^2$ with $\log Q$ is proportional to
$g_i^4$, as the diagram indicates.
The $b_i$ are constants which depend on the gauge group and on the
matter multiplets to which the gauge bosons couple. For $SU(N)$ gauge
theories with matter in the fundamental representation,
\begin{equation}
b_N = \left( \frac{11}{3} N - \frac{1}{3} n_f - \frac{1}{6} n_s \right) \ ,
\eeq{eq:k1}
where $n_f$ is the number of chiral (left-handed) fermions and $n_s$ is
the number of complex scalars which couple to the gauge bosons. For a
$U(1)$ gauge theory in which the matter particles have charges $t$, the
corresponding formula is
\begin{equation}
b_1 = - \frac{2}{3} \sum_f t_f^2 - \frac{1}{3} \sum_s t_s^2 \ .
\eeq{eq:kk1}
I will not derive these formulae here; you can find their derivation in
any textbook of quantum field theory (for example, \cite{PS}). In the
$SU(N)$ case, when $n_f$ and $n_s$ are sufficiently small, $b_N$ is
positive, leading to a decrease of the effective coupling as $Q$
increases. This is the remarkable phenomenon of {\em asymptotic freedom}.
It is especially interesting that the effect of asymptotic freedom is
stronger for $SU(3)$ than for $SU(2)$ while the $SU(3)$ gauge coupling
is larger at the energy of $Z$ boson mass. This suggests that, if we
extrapolate to very high energy, the strong- and weak-interaction
coupling constants should become equal, and perhaps the three different
interactions that make up the Standard Model may become unified \cite{GQW}.
In
the remainder of this section, I will investigate this question
quantitatively.
In order to discuss the unification of gauge couplings, there
is one small technical point that we must address first. For a non-Abelian
group, we conventionally normalize the generators $t^a$ so that, in the
fundamental representation,
\begin{equation}
{\mbox{\rm tr}} [ t^a t^b ] = \frac{1}{2} \delta^{ab} \ .
\eeq{eq:l1}
Also, for any simple non-Abelian group, ${\mbox{\rm tr}}[t^a] = 0$. For example,
the matrices $\tau^a = \sigma^a/2$ which we used to represent the
$SU(2)$ generators below \leqn{eq:b} obey these conditions. However,
for a $U(1)$ group there is no similar natural way to normalize the
charges. In principle, we could hypothesize that the $SU(2)$ and
$SU(3)$ charges are unified with a charge proportional to the
hypercharge,
\begin{equation}
t_Y = c \cdot Y
\eeq{eq:m1}
for any value of the scale factor $c$.
In building a theory of unified strong, weak, and electromagnetic
interactions, we might not want to assume that all fermion species
necessarily
belong to the fundamental representation of some $SU(N)$ group; thus,
we would not wish to impose the condition \leqn{eq:l1} on $t_Y$. But it
is not so unreasonable to insist that there is a single large
non-Abelian group for which $t_Y$ and the $SU(2)$ and $SU(3)$ charges
are all generators, and that the quarks and leptons of the Standard
Model form a representation of this group. This leads to the
normalization condition for $t_Y$,
\begin{equation}
{\mbox{\rm tr}} (t_Y)^2 = {\mbox{\rm tr}} (t)^2 \ ,
\eeq{eq:n1}
where $t$ is a generator of $SU(2)$ or $SU(3)$. Any such generator gives the
same constraint. For convenience, I will choose to implement this condition
using $t = t^3$,
the third component of weak-interaction isospin. The
trace could be taken over three Standard Model generations or over one.
Before evaluating $c$, it is interesting to sum over the
fermions with quantum numbers in the table \leqn{eq:a1}, to check that
$t_Y$ has zero trace. Indeed, including each species in \leqn{eq:a1}
with its $SU(2)$
and color multiplicity, we find
\begin{eqnarray}
{\mbox{\rm tr}}[t_Y] &=& c {\mbox{\rm tr}}[Y] \nonumber \\
&=& c \left[ -\frac{1}{2} \cdot 2 + 1\cdot 1 +
\frac{1}{6}\cdot 6 - \frac{2}{3} \cdot 3 + \frac{1}{3} \cdot 3 \right]\nonumber \\
&=& 0
\eeqa{eq:o1}
Then we can compute
\begin{equation}
{\mbox{\rm tr}} (t^3)^2 = \left(\frac{1}{2}\right)^2 \cdot 2 \cdot 4 = 2 \ ,
\eeq{eq:p1}
and
\begin{equation}
{\mbox{\rm tr}} (t_Y)^2 = c^2 \left[ \left(\frac{1}{2}\right)^2 \cdot 2 + 1\cdot 1 +
\left(\frac{1}{6}\right)^2\cdot 6 +\left( \frac{2}{3}\right)^2 \cdot 3
+\left( \frac{1}{3}\right)^2 \cdot 3
\right] = c^2 \cdot \frac {10}{3} \ .
\eeq{eq:q1}
Equating these expressions, we find $c = \sqrt{3/5}$; that is,
\begin{equation}
t_Y = \sqrt{\frac{3}{5}} Y \ ,
\eeq{eq:r1}
or, writing the $U(1)$ gauge coupling $g'Y = g_1 t_Y$,
\begin{equation}
g_1 = \sqrt{\frac{5}{3}} g' \ .
\eeq{eq:s1}
These formulae give the normalization of the $U(1)$ coupling which
unifies with $SU(2)$ and $SU(3)$ in the $SU(5)$ and $SO(10)$ grand
unified theories, and in many more complicated schemes of unification.
In the Standard Model, the $U(1)$ coupling constant $g_1$ and the
$SU(2)$ and $SU(3)$ couplings $g_2$ and $g_3$ evolve with $Q$ according
to the renormalization group equation \leqn{eq:j1} with
\begin{eqnarray}
b_3 &=& 11 - \frac{4}{3} n_g \nonumber \\
b_2 &=& \frac{22}{3} - \frac{4}{3} n_g - \frac{1}{6} n_h\nonumber \\
b_1 &=& \phantom{11} - \frac{4}{3} n_g - \frac{1}{10} n_h \ .
\eeqa{eq:t1}
In this formula, $n_g$ is the number of quark and lepton generations
and $n_h$ is the number of Higgs doublet fields. Note that a complete
generation of quarks and leptons has the same effect on all three gauge
couplings, so that (at the level of one-loop corrections), the validity
of unification is independent of the number of generations. The
solution to \leqn{eq:j1} can be written, in terms of the measured
coupling constants at $Q=m_Z$, as
\begin{equation}
g_i^2(Q) = {g_i^2(m_Z) \over 1 + (b_i/8\pi^2)\log Q/m_Z} \ .
\eeq{eq:u1}
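One can verify (a one-line check, added here) that \leqn{eq:u1} solves \leqn{eq:j1}: differentiating \leqn{eq:u1} gives $d g_i^2/d\log Q = -(b_i/8\pi^2)\, g_i^4(Q)$, which is just $2 g_i$ times the right-hand side of \leqn{eq:j1}.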
Alternatively, if we let $\alpha_i = g_i^2/4\pi$,
\begin{equation}
\alpha_i^{-1}(Q) = \alpha_i^{-1}(m_Z) + {b_i\over 2\pi}\log{Q\over m_Z} \ .
\eeq{eq:v1}
The evolution of coupling constants predicted by \leqn{eq:t1} and
\leqn{eq:v1}, with $n_h = 1$, is shown in Figure \ref{fig:five}. It is
disappointing that, although the values of the coupling constants do
converge, they do not come to a common value at any scale.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4in\epsfbox{SMevolve.eps}}
\end{center}
\caption[*]{Evolution of the $SU(3)\times SU(2)\times U(1)$ gauge
couplings to high energy scales, using the one-loop renormalization group
equations of the Standard Model. The double line for $\alpha_3$ indicates
the current experimental error in this quantity; the errors in $\alpha_1$
and $\alpha_2$ are too small to be visible.}
\label{fig:five}
\end{figure}
We can be a bit more definite about this test of the unification of
couplings as follows: I will work in the $\bar{MS}$ scheme for
defining coupling constants. The precisely known values of $\alpha$,
$m_Z$, and $G_F$ imply $\alpha^{-1}(m_Z) = 127.90\pm .09$, $\sin^2\theta_w(m_Z)
= 0.2314\pm.0003$ \cite{LEPDG}; combining this with the value of the
strong interaction coupling $\alpha_s(m_Z) = 0.118 \pm .003$
\cite{HPDG}, we find
for the $\bar{MS}$ couplings at $Q= m_Z$:
\begin{eqnarray}
\alpha_1^{-1} &=& 58.98 \pm .08 \nonumber \\
\alpha_2^{-1} &=& 29.60 \pm .04 \nonumber \\
\alpha_3^{-1} &=& 8.47 \pm .22
\eeqa{eq:w1}
On the other hand, if we assume that the three couplings come to a
common value at a scale $m_U$, we can put $Q= m_U$ into the three
equations \leqn{eq:v1}, eliminate the unknowns $\alpha^{-1}(m_U)$ and
$\log(m_U/m_Z)$, and find one relation among the measured coupling
constants at $m_Z$. This relation is
\begin{equation}
\alpha_3^{-1} = (1+ B) \alpha_2^{-1} - B \alpha_1^{-1} \ ,
\eeq{eq:x1}
where
\begin{equation}
B = {b_3-b_2\over b_2 - b_1} \ .
\eeq{eq:y1}
From the data, we find
\begin{equation}
B = 0.719 \pm .008 \pm .03 \ ,
\eeq{eq:z1}
where the second error reflects the omission of higher order
corrections, that is, finite radiative corrections at the thresholds and
two-loop corrections in the renormalization group equations.
On the other hand, the Standard Model gives
\begin{equation}
B = \frac{1}{2} + \frac{3}{110} n_h \ .
\eeq{eq:a2}
This is inconsistent with the unification hypothesis by a large margin.
But perhaps an interesting scheme for physics beyond the Standard Model
could fill this gap and allow a unification of the known gauge
couplings.
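This one-loop test is simple enough to carry out in a few lines. The following sketch (my own illustration, using the central values quoted above) implements \leqn{eq:t1}, \leqn{eq:v1}, \leqn{eq:x1}, and \leqn{eq:y1}:
\begin{verbatim}
import math

# MS-bar couplings at Q = m_Z (central values quoted in the text)
alpha_inv = {1: 58.98, 2: 29.60, 3: 8.47}

# One-loop coefficients with n_g = 3 generations, n_h = 1 Higgs doublet
n_g, n_h = 3, 1
b = {3: 11.0 - 4.0 * n_g / 3.0,
     2: 22.0 / 3.0 - 4.0 * n_g / 3.0 - n_h / 6.0,
     1: -4.0 * n_g / 3.0 - n_h / 10.0}

def alpha_inv_at(i, Q, m_Z=91.19):
    # Evolve alpha_i^{-1} from m_Z to the scale Q
    return alpha_inv[i] + b[i] / (2.0 * math.pi) * math.log(Q / m_Z)

for i in (1, 2, 3):
    print("alpha_%d^-1 (10^16 GeV) = %.1f" % (i, alpha_inv_at(i, 1.0e16)))

B_data = (alpha_inv[3] - alpha_inv[2]) / (alpha_inv[2] - alpha_inv[1])
B_SM = (b[3] - b[2]) / (b[2] - b[1])
print("B (data) = %.3f   B (Standard Model) = %.3f" % (B_data, B_SM))
\end{verbatim}
The script gives $\alpha_i^{-1}(10^{16}\ \mbox{GeV}) \approx 37.9,\ 45.9,\ 44.5$, the near-miss visible in Figure \ref{fig:five}, and prints $B = 0.719$ from the data against $0.528$ for the Standard Model.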
\subsection{The special role of the top quark}
In the previous section, we discussed the role of the quarks and
leptons in the energy region above 1 TeV. However, we ought to give
additional consideration to the role of the top quark. This quark is
sufficiently heavy that its coupling to the Higgs boson is an important
perturbative coupling at very high energies. Thus, even in the
simplest models, the top quark plays an important special role in the
renormalization group evolution of couplings. It is possible that the
top quark has an even more central role in electroweak symmetry
breaking, and, in fact, that electroweak symmetry breaking may be {\em
caused} by the strong interactions of the top quark. I will discuss
this connection of the top quark to electroweak symmetry breaking
later, in the context of specific models. In this section, I would
like to prepare for that discussion by analyzing the effects of the
large top quark-Higgs boson coupling which is already present in the
Minimal Standard Model.
In the minimal Higgs model, the masses of quarks and leptons arise from
the perturbative couplings to the Higgs boson written in the third line
of \leqn{eq:a}. These couplings are most often called the `Higgs
Yukawa couplings'. The top quark mass comes from a Yukawa coupling
\begin{equation}
\Delta{\cal L} = -\lambda_t \bar t_R \phi\cdot Q_L + {\mbox{\rm h.c.}} \ ,
\eeq{eq:b2}
where $Q_L = (t_L,b_L)$.
When the Higgs field acquires a vacuum expectation value of the form
\leqn{eq:d}, this term becomes
\begin{equation}
\Delta{\cal L} = -{\lambda_tv\over \sqrt{2}}\ \bar t t ,
\eeq{eq:b2plus}
and we can read off the relation $m_t = \lambda_t v/\sqrt{2}$. The
value of the top quark mass measured at Fermilab is $176 \pm 6$ GeV
for the on-shell mass \cite{mt}, which corresponds to
\begin{equation}
(m_t)_{\bar{MS}} = 166 \pm 6 \ \mbox{GeV} \ .
\eeq{eq:c2}
With the
value of $v$ in \leqn{eq:g}, this implies
\begin{equation}
\lambda_t = 1 \quad \mbox{or}\quad \alpha_t = {\lambda_t^2\over 4\pi}
= \left( 14.0 \pm 0.7 \right)^{-1} \ .
\eeq{eq:d2}
In this simplest model, the top quark Yukawa coupling is weak
at high energies but still is large enough to compete with QCD.
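The arithmetic behind \leqn{eq:d2}, filled in here: $\lambda_t = \sqrt{2}\, m_t/v = \sqrt{2}\,(166)/(246) \approx 0.95$, so that $\alpha_t = \lambda_t^2/4\pi \approx 1/14$, to be compared with $\alpha_s(m_Z) \approx 0.118 \approx 1/8.5$.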
The large value of $\lambda_t$ gives rise to two interesting effects.
The first of these is an essential modification of the renormalization
group equation for the Higgs boson coupling $\lambda$ given in
\leqn{eq:g1}. Let me now rewrite this equation including the one-loop
corrections due to $\lambda_t$ and also to the weak-interaction
couplings \cite{Sher}:
\begin{equation}
{d\over d \log Q} \lambda = {3\over 2\pi^2 }
\left[ \lambda^2 - {1\over 32}\lambda_t^4 + {g^2\over 512}(3 + 2s^2 + s^4)
\right]\ ,
\eeq{eq:e2}
where I have abbreviated $s^2 = \sin^2\theta_w$.
A remarkable property of the formula \leqn{eq:e2} is that the top quark
Yukawa coupling enters the renormalization group equation with a
negative sign (which essentially comes from the factor (-1) for the top
quark fermion loop). This sign implies that, if the top quark mass is
sufficiently large that the $\lambda_t^4$ term dominates, the Higgs
coupling $\lambda$ is driven negative at large $Q$. This is a
dangerous instability which would push the expectation value $v$ of the
Higgs field to arbitrarily high values. The presence of this
instability gives an upper bound on the top quark mass for fixed $m_h$,
or, equivalently, a lower bound on the Higgs mass for fixed $m_t$. If
we replace $\lambda$, $\lambda_t$, and $g$ in \leqn{eq:e2} with the
masses of $h$, $t$, and $W$, we find the condition
\begin{equation}
m_h^2 > \frac{1}{2} \left[ m_t^2 - \frac{3}{4} m_W^2 \right] \ .
\eeq{eq:f2}
I should note that finite perturbative corrections shift this bound in
a way that is important quantitatively. This effect accounts for the
right-hand boundary of the regions shown in Figure \ref{fig:three}.
The implications of Figure \ref{fig:three} for the Higgs boson mass are
quite interesting. For the correct value of the top quark mass
\leqn{eq:c2}, the Minimal Standard Model description of the Higgs boson
can be valid only if the mass of the Higgs is larger than about 60 GeV.
But for values of $m_h$ below 100 GeV or above 200 GeV, the Higgs
coupling must be sufficiently large that this coupling becomes strong
well below the Planck scale. Curiously, the fit of
current precision electroweak data to the Minimal Standard Model (for
example, to the more precise version of \leqn{eq:w}) gives the value
\cite{LEPDG}
\begin{equation}
m_h = 124^{+125}_{-71} \ \mbox{GeV}\ ,
\eeq{eq:g2}
which actually lies in the region for which the Minimal Standard Model
is good to extremely high energies. It is also important to point out
that the regions of Figure~\ref{fig:three} apply only to the Minimal
version of the Standard Model. In models with additional Higgs
doublets, with the boundaries giving limits on the lightest Higgs
boson, the upper boundary remains qualitatively correct, but the
boundary associated with the heavy top quark is usually pushed far to
the right.
The second perturbative effect of the top quark Yukawa coupling is its
influence back on its own renormalization group evolution. In the same
simple one-loop approximation as \leqn{eq:e2}, the renormalization
group equation for the top quark Yukawa coupling takes the form
\begin{equation}
{d\over d \log Q} \lambda_t = {\lambda_t\over (4\pi)^2 }
\left[ \frac{9}{2}\lambda_t^2 - 8g_3^2 - \frac{9}{4}g^2 \left(1 +
\frac{17}{24} s^2 \right) \right] \ .
\eeq{eq:h2}
The signs in this equation are not hard to understand. A theory with
$\lambda_t$ and no gauge couplings cannot be asymptotically free, and
so $\lambda_t$ must drive itself to zero at large distances or small
$Q$. On the other hand, the effect of the QCD coupling $g_3$ is to
increase quark masses and also $\lambda_t$ as $Q$ becomes small.
The two effects of the $\lambda_t$ and QCD renormalization of
$\lambda_t$ balance at the point
\begin{equation}
\lambda_t = \frac{4}{3} (4\pi \alpha_s)^{1/2} \sim 1.5\ ,
\eeq{eq:i2}
corresponding to $m_t \sim 250$ GeV. This condition was referred to by
Hill \cite{Hill} as the `quasi-infrared fixed point' for the top quark
mass. This `fixed point' is in fact a line in the $(\lambda_t,
\alpha_s)$ plane. The renormalization group evolution from large $Q$ to
small $Q$ carries a general initial condition into this line, as shown
in Figure \ref{fig:six}; then the parameters flow along the line, with
$\alpha_s$ increasing in the familiar way as $Q$ decreases, until we
reach $Q\sim m_t$. The effect of this evolution is that theories with
a wide range of values for $\lambda_t$ at a very high unification scale
all predict the physical value of $m_t$ to lie close to the fixed-point
value \leqn{eq:i2}. This convergence is shown in Figure
\ref{fig:seven}. The fixed point attracts initial conditions
corresponding to arbitrarily large values of $\lambda_t$ at high
energy. However, if the initial condition at high energy is
sufficiently small, the value of $\lambda_t$ or $m_t$ might not be able
to go up to the fixed point before $Q$ comes down to the value $m_t$.
Thus, there are two possible cases, the first in which the physical
value of $m_t$ is very close to the fixed point value, the second in
which the physical value of $m_t$ lies at an arbitrary point below the
fixed-point value.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=2.75in\epsfbox{Lambdat.eps}}
\end{center}
\caption{Renormalization-group evolution of the top quark
Yukawa coupling $\lambda_t$ and
the strong interaction coupling $\alpha_s$, from large $Q$ to small $Q$.}
\label{fig:six}
\end{figure}
\begin{figure}[htb]
\begin{center}
\leavevmode
{\epsfxsize=3in\epsfbox{Hill.eps}}
\end{center}
\caption{Convergence of predictions for the top quark mass
in the Minimal Standard Model, due to
renormalization-group evolution, from
\protect\cite{Hill}.}
\label{fig:seven}
\end{figure}
In the Minimal Standard Model, the observed top quark mass \leqn{eq:c2}
must correspond to the second possibility.
However, in models with two Higgs doublet fields, the
quantity which is constrained to a fixed point is $m_t/\cos\beta$,
where $\beta$ is the mixing angle defined in \leqn{eq:k}. The fixed
point location also depends on the full field content of the model. In
the supersymmetric models to be discussed in the next section, the
fixed-point relation is
\begin{equation}
{m_t \over \cos\beta} \sim 190\ \mbox{GeV}
\eeq{eq:j2}
for values of $\tan\beta$ that are not too large. In such theories it
is quite reasonable that the physical value of the top quark mass could
be determined by a fixed point of the renormalization group equation
for $\lambda_t$.
Now that we understand the implications of the large top quark mass in
the simplest Higgs models, we can return to the question of the
implications of the large top quark mass in more general models. We
have seen that the observed value of $m_t$ can consistently be
generated solely by perturbative interactions. We have also seen that,
in this case, the coupling $\lambda_t$ can have important effects on
the renormalization group evolution of couplings. But this observation
shows that the observed value of $m_t$ is not sufficiently large that
it must lead to nonperturbative effects or that it can by itself drive
electroweak symmetry breaking. In fact, we now see that $m_t$ or
$\lambda_t$ can be the cause of electroweak symmetry breaking only if
we combine these parameters with additional new dynamics that lies
outside the Standard Model. I will discuss some ideas which follow
this line in Section 5.
\subsection{Recapitulation}
In this section, I have introduced the major questions for physics
beyond the Standard Model by reviewing issues that arise when the
Standard Model is extrapolated to very high energy. I have highlighted the
issue of electroweak symmetry breaking, which poses an important
question for the Standard Model, one that must be solved at energies close
to those of our current accelerators. There are many possibilities, however,
for the form of this solution.
The new physics responsible for electroweak symmetry breaking
might be a new set of strong interactions which
changes the laws of particle physics fundamentally at some nearby
energy scale. But the analysis we have done tells us that the
solution might be constructed in a completely different way, in which
the new interactions are weakly coupled
for many orders of magnitude above the weak interaction scale but
undergo qualitative changes
through the renormalization
group evolution of couplings.
The questions we have asked in Section 2.4 and this dichotomy of
strong-coupling versus weak-coupling solutions to these questions
provide a framework for examining theories of physics beyond the
Standard Model. In the next sections, I will consider some explicit
examples of such models, and we can see how they illustrate the different
possible answers.
\section{Supersymmetry: Formalism}
The first class of models that I would like to discuss are
supersymmetric extensions of the Standard Model. {\em Supersymmetry} is
defined to be a symmetry of Nature that links bosons and fermions. As we
will see later in this section, the introduction of supersymmetry into
Nature requires a profound generalization of our fundamental theories,
including a revision of the theory of gravity and a rethinking of our
basic notions of space-time. For many theorists, the beauty of this new
geometrical theory is enough to make it compelling. For myself, I think
this is quite a reasonable attitude. However, I do not expect you to
share this attitude in order to appreciate my discussion.
For the skeptical experimenter, there are other reasons to study
supersymmetry. The most important is that supersymmetry is a concrete
worked example of physics beyond the Standard Model. One of the
virtues of extending the Standard Model using supersymmetry is that the
phenomena that we hope to discover at the next energy scale---the new
spectrum of particles, and the mechanism of electroweak symmetry
breaking---occur in supersymmetric models at the level of perturbation
theory, without the need for any new strong interactions.
Supersymmetry naturally
predicts a large and complex spectrum of new particles.
These particles have signatures which are interesting, and which test
the capabilities of experiments. Because the theory has weak couplings,
these signatures can be worked out
directly in a rather straightforward way.
On the other hand, supersymmetric models have a
large number of undetermined parameters, so they can exhibit an
interesting variety of physical effects. Thus, the study of
supersymmetric models can give you very specific pictures of what it
will be like to experiment on physics beyond the Standard Model and,
through this, should aid you in preparing for these experiments. For
this reason, I will devote a large segment of these lectures to a detailed
discussion of supersymmetry. However, as a necessary corrective, I will
devote Section 5 of this article to a review of a model of electroweak
symmetry breaking that runs by strong-coupling effects.
This discussion immediately raises a question: Why is supersymmetry
relevant to the major issue that we are focusing on in these lectures,
that of the mechanism of electroweak symmetry breaking? A quick answer
to this question is that supersymmetry legitimizes the introduction of
Higgs scalar fields, because it connects spin-0 and spin-$\frac{1}{2}$ fields
and thus puts the Higgs scalars and the quarks and leptons on the same
epistemological footing. A better answer to this question is that
supersymmetry naturally gives rise to a mechanism of electroweak
symmetry breaking associated with the heavy top quark,
and to many other properties that are attractive
features of the fundamental interactions. These consequences of the
theory arise from renormalization group evolution, by arguments similar
to those we used to explain
the features of the Standard Model that we derived in Sections 2.5 and 2.6.
The spectrum of new particles predicted by supersymmetry will also
be shaped strongly by renormalization-group effects.
In order to explain
these effects, I must unfortunately subject you to a certain amount of
theoretical formalism. I will therefore devote this section to describing
construction of supersymmetric Lagrangians and the analysis of their
couplings. I will conclude this discussion in Section 3.7 by explaining
the supersymmetric mechanism of electroweak symmetry breaking. This
analysis will be lengthy, but it will give us the tools we need to build
a theory of the mass spectrum of supersymmetric particles. With this
understanding, we will be ready in Section 4
to discuss the experimental issues raised
by supersymmetry, and the specific experiments that should resolve them.
\subsection{A little about fermions}
In order to write Lagrangians which are symmetric between boson and
fermion fields, we must first understand the properties of
these fields separately. Bosons are simple, one-component objects.
But for fermions, I would like to emphasize a few features which are
not part of the standard presentation of the Dirac equation.
The Lagrangian of a massive Dirac field is
\begin{equation}
{\cal L} = \bar\psi i \not{\hbox{\kern-2pt $\partial$}}\psi - m \bar\psi \psi \ ,
\eeq{eq:k2}
where $\psi$ is a 4-component complex field, the Dirac spinor. I would
like to write this equation more explicitly by introducing a particular
representation of the Dirac matrices
\begin{equation}
\gamma^\mu = \pmatrix{ 0 & \sigma^\mu\cr \bar \sigma^\mu & 0 \cr}\ ,
\eeq{eq:l2}
where the entries are $2\times 2$ matrices with
\begin{equation}
\sigma^\mu = (1, \vec \sigma)\ , \qquad \bar\sigma^\mu = (1, - \vec\sigma)\ .
\eeq{eq:ll2}
We may then write $\psi$ as a pair of 2-component complex fields
\begin{equation}
\psi = \pmatrix{ \psi_L \cr \psi_R \cr} \ .
\eeq{eq:m2}
The subscripts indicate left- and right-handed fermion components, and
this is justified because, in this representation,
\begin{equation}
\gamma^5 = \pmatrix{-1 & 0 \cr 0 & 1 \cr} \ .
\eeq{eq:n2}
This is a handy representation for calculations involving high-energy
fermions which include chiral interactions or polarization effects,
even within the Standard Model~\cite{PS}.
In the notation of \leqn{eq:l2}, \leqn{eq:m2}, the Lagrangian
\leqn{eq:k2} takes the form
\begin{equation}
{\cal L} = \psi^\dagger_L i \bar\sigma^\mu \partial_\mu \psi_L +
\psi^\dagger_R i \sigma^\mu \partial_\mu \psi_R
- m \left( \psi^\dagger_R \psi_L + \psi^\dagger_L \psi_R \right) \ .
\eeq{eq:o2}
The kinetic energy terms do not couple $\psi_L$ and $\psi_R$ but rather
treat them as distinct species. The mass term is precisely the
coupling between these components.
I pointed out above \leqn{eq:a1} that, since the antiparticle of a
massless left-handed particle is a right-handed particle, there is an
ambiguity in assigning quantum numbers to fermions. I chose to resolve
this ambiguity by considering all left-handed states as particles and
all right-handed states as antiparticles. With this philosophy, we
would like to trade $\psi_R$ for a left-handed field. To do this,
define the $2\times 2$ matrix
\begin{equation}
c = -i\sigma^2 = \pmatrix{0 & -1 \cr 1 & 0} \ ,
\eeq{eq:p2}
and let
\begin{equation}
\chi_L = c \psi^*_R \ , \qquad \chi_L^* = c\psi_R \ .
\eeq{eq:q2}
Note that $c^{-1} = c^T = -c$, $c^* = c$, so \leqn{eq:q2} implies
\begin{equation}
\psi_R = -c \chi^*_L \ , \qquad \psi_R^\dagger = \chi_L^T c \ .
\eeq{eq:r2}
Also note, by multiplying out the matrices, that
\begin{equation}
c \sigma^\mu c^{-1} = (\bar \sigma^\mu)^T \ , \qquad
c \bar \sigma^\mu c^{-1} = (\sigma^\mu)^T \ .
\eeq{eq:s2}
Using these relations, we can rewrite
\begin{eqnarray}
\psi^\dagger_R i \sigma^\mu \partial_\mu \psi_R &=&
\chi_L^T c i\sigma^\mu \partial_\mu (-c) \chi^*_L \nonumber \\
&=& \chi_L^T i(\bar\sigma^\mu)^T \partial_\mu \chi^*_L \nonumber \\
&=& -\partial_\mu \chi_L^\dagger i(\bar\sigma^\mu) \chi_L \nonumber \\
&=& \chi^\dagger_L i \bar \sigma^\mu \partial_\mu \chi_L \ .
\eeqa{eq:t2}
The minus sign in the third line came from fermion interchange; it was
eliminated in the fourth line by an integration by parts. After this
rewriting, the two pieces of the Dirac kinetic energy term have
precisely the same form, and we may consider $\psi_L$ and $\chi_L$ as
two species of the same type of particle.
If we replace $\psi_R$ by $\chi_L$, the mass term in \leqn{eq:k2}
becomes
\begin{equation}
- m \left( \psi^\dagger_R \psi_L + \psi^\dagger_L \psi_R \right)
= - m \left(\chi^T_L c \psi_L - \psi^\dagger_L c \chi^*_L\right)\ .
\eeq{eq:u2}
Note that
\begin{equation}
\chi^T_L c \psi_L = \psi^T_L c \chi_L \ ,
\eeq{eq:v2}
with one minus sign from fermion interchange and a second from taking
the transpose of $c$. Thus, this mass term is symmetric between the
two species. It is interesting to know that the most general possible
mass term for spin-$\frac{1}{2}$ fermions can be written in terms of
left-handed fields $\psi_L^a$ in the form
\begin{equation}
- \frac{1}{2} m^{ab} \psi^{aT}_L c \psi^b_L + {\mbox{\rm h.c.}} \ ,
\eeq{eq:w2}
where $m^{ab}$ is a symmetric matrix. For example, this form for the
mass term incorporates all possible different forms of the neutrino
mass matrix, both Dirac and Majorana.
From here on, through the end of Section 4,
all of the fermions that appear in these lectures will be
2-component left-handed fermion fields. For this reason, there will be
no ambiguity if I now drop the subscript $L$ in my equations.
\subsection{Supersymmetry transformations}
Now that we have a clearer understanding of fermion fields, I would
like to explore the possible symmetries that could connect fermions to
bosons. To begin, let us try to connect a free massless fermion field
to a free massless boson field. Because the scalar product
\leqn{eq:v2} of two chiral fermion fields is complex, this connection
will not work unless we take the boson field to be complex-valued.
Thus, we should look for symmetries of the Lagrangian
\begin{equation}
{\cal L} = \partial_\mu \phi^* \partial^\mu\phi +
\psi^\dagger i \bar\sigma \cdot \partial \psi
\eeq{eq:x2}
which mix $\phi$ and $\psi$.
To build this transformation, we must introduce a symmetry parameter
with spin-$\frac{1}{2}$ to combine with the spinor index of $\psi$. I will
introduce a parameter $\xi$ which also transforms as a left-handed
chiral spinor. Then a reasonable transformation law for $\phi$ is
\begin{equation}
\delta_\xi \phi = \sqrt{2} \xi^T c \psi \ .
\eeq{eq:y2}
A fermion field has the dimensions of (mass)$^{3/2}$, while a boson
field has the dimensions of (mass)$^{1}$; thus, $\xi$ must carry the
dimensions (mass)$^{-1/2}$ or (length)$^{1/2}$. This means that, in
order to form a dimensionally correct transformation law for $\psi$, we
must include a derivative. A sensible formula is
\begin{equation}
\delta_\xi \psi = \sqrt{2}i \sigma\cdot \partial \phi c \xi^* \ .
\eeq{eq:z2}
It is not difficult to show that the transformation \leqn{eq:y2},
\leqn{eq:z2} is a symmetry of \leqn{eq:x2}. Inserting these
transformations, we find
\begin{equation}
\delta_\xi {\cal L} = \partial_\mu\phi^* \partial^\mu(
\sqrt{2}\xi^T c \psi) + (\sqrt{2} i \xi^T c \sigma\cdot
\partial\phi^*) i \bar\sigma\cdot \partial \psi + (\xi^*)\ .
\eeq{eq:a3}
The term in the first set of parentheses is the right-hand side of
\leqn{eq:y2}. The term in the second set of parentheses is the
Hermitian conjugate of the right-hand side of \leqn{eq:z2}. The last
term refers to terms proportional to $\xi^*$ arising from the variation
of $\phi^*$ and $\psi$. To manipulate \leqn{eq:a3}, integrate both
terms by parts and use the identity
\begin{equation}
\sigma\cdot \partial \bar\sigma\cdot \partial = \partial^2
\eeq{eq:b3}
which can be verified directly from \leqn{eq:ll2}. This gives
\begin{equation}
\delta_\xi {\cal L} = - \phi^* \partial^2( \sqrt{2}\xi^T c \psi) -
\sqrt{2} i \xi^T c\, \phi^*\, i \partial^2 \psi
+ (\xi^*)\ .
\eeq{eq:c3}
The two terms shown now cancel, and the $\xi^*$ terms cancel similarly.
Thus, $\delta_\xi{\cal L} = 0$ and we have a symmetry.
The transformation \leqn{eq:z2} appears rather strange at first sight.
However, this formula takes on a bit more sense when we work out the
algebra of supersymmetry transformations. Consider the commutator
\begin{eqnarray}
(\delta_\eta \delta_\xi - \delta_\xi \delta_\eta) \phi
&=& \delta_{\eta} (\sqrt{2} \xi^Tc\psi) - (\eta \leftrightarrow \xi)\nonumber \\
&=& \sqrt{2}\xi^T c (\sqrt{2}i \sigma^\mu \partial_\mu \phi c \eta^*)
- (\eta \leftrightarrow \xi)\nonumber \\
&=& 2i\xi^T c \sigma^\mu c \eta^* \partial_\mu \phi
- (\eta \leftrightarrow \xi)\nonumber \\
&=& -2i\xi^T (\bar\sigma^\mu )^T \eta^* \partial_\mu \phi
- (\eta \leftrightarrow \xi)\nonumber \\
&=& 2i[\eta^\dagger \bar\sigma^\mu\xi - \xi^\dagger \bar\sigma^\mu\eta]
\, \partial_\mu\phi \ .
\eeqa{eq:d3}
To obtain the fourth line, I have used \leqn{eq:s2}; in the passage to
the next line, a minus sign appears due to fermion interchange. In
general, supersymmetry transformations have the commutation relation
\begin{equation}
(\delta_\eta \delta_\xi - \delta_\xi \delta_\eta) A
= 2i[\eta^\dagger \bar\sigma^\mu\xi - \xi^\dagger \bar\sigma^\mu\eta]
\, \partial_\mu A
\eeq{eq:e3}
on every field $A$ of the theory.
To clarify the significance of this commutation relation, let me
rewrite the transformations $\delta_\xi$ as the action of a set of
operators, the supersymmetry charges $Q$. These charges must also be
spin-$\frac{1}{2}$. To generate the supersymmetry transformation, we
contract them with the spinor parameter $\xi$; thus
\begin{equation}
\delta_\xi = \xi^T c Q - Q^\dagger c \xi^* \ .
\eeq{eq:f3}
At the same time, we may replace $(i\partial_\mu)$ in \leqn{eq:e3} by
the operator which generates spatial translations, the energy-momentum
four-vector $P^\mu$. Then \leqn{eq:e3} becomes the operator relation
\begin{equation}
\left\{ Q^\dagger_a \ , \ Q_b \right\} = (\bar\sigma^\mu)_{ab} P_\mu
\eeq{eq:g3}
which defines the {\em supersymmetry algebra}. This anticommutation
relation has a two-fold interpretation. First, it says that the square
of the supersymmetry charge $Q$ is the energy-momentum. Second, it
says that the square of a supersymmetry transformation is a spatial
translation. The idea of a square appears here in the same sense as we
use when we say that the Dirac equation is the square root of the
Klein-Gordon equation.
We started this discussion by looking for symmetries of the trivial
theory \leqn{eq:x2}, but at this stage we have encountered a
structure with deep connections.
So it is worth looking back to see whether we
were forced to arrive at this high-level structure or whether we could have
taken another route. It turns out that,
given our premises, we could not have
ended in any other place~\cite{HLS}. We set out to look for an operator
$Q$ that was a symmetry of Nature which carried spin-$\frac{1}{2}$. From
this property, the quantity on the left-hand side of \leqn{eq:g3} is a
Lorentz four-vector which commutes with the Hamiltonian. In principle,
we could have written a more general formula
\begin{equation}
\left\{ Q^\dagger_a \ , \ Q_b \right\} = (\bar\sigma^\mu)_{ab} R_\mu \ ,
\eeq{eq:h3}
where $R^\mu$ is a conserved four-vector charge different from $P^\mu$.
But energy-momentum conservation is already a very strong restriction
on particle scattering processes, since it implies that the only degree
of freedom in a two-particle reaction is the scattering angle in the
center-of-mass system. A second vector conservation law, to the extent
that it differs from energy-momentum conservation, places new
requirements that contradict these restrictions except at particular,
discrete scattering angles. Thus, it is not possible to have an
interacting
relativistic field theory with an additional conserved spin-1 charge,
or with any higher-spin charge, beyond standard momentum and angular
momentum conservation \cite{CMthm}. For this reason, \leqn{eq:g3} is
actually the most general commutation relation that can be obeyed by
supersymmetry charges.
The implications of the supersymmetry algebra \leqn{eq:g3} are indeed
profound. If the square of a supersymmetry charge is the total
energy-momentum of everything, then {\em supersymmetry must act on
every particle and field in Nature}. We can exhibit this action
explicitly by writing out the $a=1,b=1$ component of \leqn{eq:g3},
\begin{equation}
\left\{ Q^\dagger_1 \ , \ Q_1 \right\} = P^0 + P^3 = P^+ \ .
\eeq{eq:i3}
On states with $P^+ \neq 0$ (which we can arrange for any particle
state by a rotation), define
\begin{equation}
a = {Q_1 \over \sqrt{P^+}}\ , \qquad
a^\dagger = {Q_1^\dagger \over \sqrt{P^+}} \ .
\eeq{eq:j3}
These operators obey the algebra
\begin{equation}
\{ a^\dagger\ , \ a \} = 1
\eeq{eq:k3}
of fermion raising and lowering operators. They raise and
lower $J^3$ by $\frac{1}{2}$ unit. Thus, in a supersymmetric theory, every
state of nonzero energy has a partner of opposite statistics differing
in angular momentum by $\Delta J^3 = \pm \frac{1}{2}$.
On the other hand, for any operator $Q$, the quantity $\{Q^\dagger,
Q\}$ is a Hermitian matrix with eigenvalues that are either positive or
zero. This matrix has zero eigenvalues for those states that satisfy
\begin{equation}
Q \ket{0} = Q^\dagger \ket{0} = 0 \ ,
\eeq{eq:l3}
that is, for supersymmetric states. In particular, if supersymmetry is
not spontaneously broken, the vacuum state is supersymmetric and
satisfies \leqn{eq:l3}. Since the vacuum also has zero three-momentum,
we deduce
\begin{equation}
\bra{0} H \ket{0} = 0
\eeq{eq:m3}
as a consequence of supersymmetry. Typically in a quantum field
theory, the value of the vacuum energy density is given by a
complicated sum of vacuum diagrams. In a supersymmetric theory, these
diagrams must magically cancel \cite{Zvac}. This is the first of a
number of magical cancellations of radiative corrections that we will
find in supersymmetric field theories.
\subsection{Supersymmetric Lagrangians}
At this point, we have determined the general formal properties of
supersymmetric field theories. Now it is time to be much more concrete
about the form of the Lagrangians which respect supersymmetry. In this
section, I will discuss the particle content of supersymmetric theories
and present the most general renormalizable supersymmetric Lagrangians
for spin-0 and spin-$\frac{1}{2}$ fields.
We argued from \leqn{eq:i3} that all supersymmetric states of nonzero
energy are paired. In particular, this applies to single-particle
states, and it implies that supersymmetric models contain boson and
fermion fields which are paired in such a way that the particle degrees
of freedom are in one-to-one correspondence. In the simple example
\leqn{eq:x2}, I introduced a complex scalar field and a left-handed
fermion field. Each leads to two sets of single-particle states, the
particle and the antiparticle. I will refer to this set of states---a
left-handed fermion, its right-handed antiparticle, a complex boson,
and its conjugate---as a {\em chiral supermultiplet}.
Another possible pairing is a massless vector field and a left-handed
fermion, which gives a {\em vector supermultiplet}---two transversely
polarized vector boson states, plus the left-handed fermion and its
antiparticle. In conventional field theory, a vector boson obtains
mass from the Higgs mechanism by absorbing one degree of freedom from a
scalar field. In supersymmetry, the Higgs mechanism works by coupling
a vector supermultiplet to a chiral supermultiplet. This coupling
results in a massive vector particle, with three polarization states,
plus an extra scalar. At the same time, the left-handed fermions in
the two multiplets combine through a mass term of the form \leqn{eq:u2}
to give a massive Dirac fermion, with two particle and two antiparticle
states. All eight states are degenerate if supersymmetry is unbroken.
More complicated pairings are possible. One of particular importance
involves the graviton. Like every other particle in the theory, the
graviton must be paired by supersymmetry. Its natural partner is a
spin-$\frac{3}{2}$ field called the {\em gravitino}. In general relativity,
the graviton is the gauge field of local coordinate invariance. The
gravitino field can also be considered as a gauge field. Since it
carries a vector index plus the spinor index carried by $\xi$ or $Q$,
it can have the transformation law
\begin{equation}
\delta_\xi \psi_\mu = {1\over (2\pi G_N)^{1/2}}\partial_\mu \xi(x) + \cdots
\eeq{eq:n3}
which makes it the gauge field of local supersymmetry. This gives a
natural relation between supersymmetry and space-time geometry and
emphasizes the profound character of this generalization of field
theory.
I will now present the most general Lagrangian for chiral
supermultiplets. As a first step, we might ask whether we can give a
mass to the fields in \leqn{eq:x2} consistently with supersymmetry.
This is accomplished by the Lagrangian
\begin{eqnarray}
{\cal L} &=& \partial_\mu \phi^* \partial^\mu\phi +
\psi^\dagger i \bar\sigma \cdot \partial \psi + F^\dagger F \nonumber \\
& & \hskip 0.5in + m (\phi F - \frac{1}{2} \psi^T c \psi) + {\mbox{\rm h.c.}} \ .
\eeqa{eq:o3}
In this expression, I have introduced a new complex field $F$.
However, $F$ has no kinetic energy and does not lead to any new
particles. Such an object is called an {\em auxiliary field}. If we
vary the Lagrangian \leqn{eq:o3} with respect to $F$, we find the field
equations
\begin{equation}
F^\dagger = - m\phi \ , \qquad F = - m \phi^* \ .
\eeq{eq:p3}
Thus $F$ carries only the degrees of freedom that are already present
in $\phi$. We can substitute this solution back into \leqn{eq:o3} and
find the Lagrangian
\begin{equation}
{\cal L} = \partial_\mu \phi^* \partial^\mu\phi - m^2 \phi^* \phi +
\psi^\dagger i \bar\sigma \cdot \partial \psi - \frac{1}{2} m(\psi^T c \psi -
\psi^\dagger c \psi^*) \ ,
\eeq{eq:q3}
which has equal, supersymmetric masses for the bosons and fermions.
It is not difficult to show that the Lagrangian \leqn{eq:o3} is
invariant to the supersymmetry transformation
\begin{eqnarray}
\delta_\xi \phi &=& \sqrt{2} \xi^T c \psi\nonumber \\
\delta_\xi \psi &=& \sqrt{2}i \sigma\cdot \partial \phi c \xi^* + \sqrt{2}\,\xi F\nonumber \\
\delta_\xi F &=& -\sqrt{2}i \xi^\dagger \bar\sigma\cdot \partial \psi \ .
\eeqa{eq:r3}
The two lines of \leqn{eq:o3} are invariant separately. For the first
line, the proof of invariance is a straightforward generalization of
\leqn{eq:c3}. For the second line, we need
\begin{eqnarray}
\delta_\xi (\phi F - \frac{1}{2} \psi^T c \psi)
&=& (\sqrt{2}\xi^T c \psi)F + \phi
( -\sqrt{2}i \xi^\dagger \bar\sigma\cdot \partial \psi )
- \psi^T c ( \sqrt{2}i \sigma\cdot \partial \phi c \xi^* + \sqrt{2}\,\xi F) \nonumber \\
&=& -\sqrt{2}i\, \phi\, \xi^\dagger \bar\sigma\cdot \partial \psi
-\sqrt{2} i \psi^T c \sigma\cdot \partial \phi c \xi^* \nonumber \\
&=& 0 \ .
\eeqa{eq:s3}
The first and last terms in the first line cancel by the use of
\leqn{eq:v2}; the two terms in the second line combine into a total
derivative, after an integration by parts and a rearrangement similar
to that in \leqn{eq:d3} applied to the second term. Thus, \leqn{eq:r3}
is an invariance of \leqn{eq:o3}.
With some effort, one can show that this transformation obeys the
supersymmetry algebra, in the sense that the commutators of
transformations acting on $\phi$, $\psi$, and $F$ follow precisely the
relation \leqn{eq:e3}.
The introduction of the auxiliary field $F$ allows us to write a much
more general class of supersymmetric Lagrangians. Let $\phi_j$,
$\psi_j$, $F_j$ be the fields of a number of chiral supermultiplets
indexed by $j$. Assign each multiplet the supersymmetry transformation
laws \leqn{eq:r3}. Then it can be shown by a simple generalization of
the discussion just given that the supersymmetry transformation leaves
invariant Lagrangians of the general form
\begin{eqnarray}
{\cal L} &=& \partial_\mu \phi_j^* \partial^\mu\phi_j +
\psi_j^\dagger i \bar\sigma \cdot \partial \psi_j + F_j^\dagger F_j \nonumber \\
& & \hskip 0.3in + (F_j {\partial W \over \partial \phi_j}
- \frac{1}{2} \psi^T_j c \psi_k {\partial^2 W \over \partial \phi_j \partial \phi_k}) + {\mbox{\rm h.c.}}
\ ,
\eeqa{eq:t3}
where $W(\phi)$ is an analytic function of the complex fields $\phi_j$
which is called the {\em superpotential}. It is important to repeat
that $W(\phi)$ can have arbitrary dependence on the $\phi_j$, but it
must not depend on the $\phi_j^*$. The auxiliary fields $F_j$ obey
the equations
\begin{equation}
F_j^\dagger = - {\partial W \over \partial \phi_j} \ .
\eeq{eq:u3}
If $W$ is a polynomial in the $\phi_j$, the elimination of the $F_j$
by substituting \leqn{eq:u3} into \leqn{eq:t3} produces polynomial
interactions for the scalar fields.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=3in\epsfbox{Shmass.eps}}
\end{center}
\caption{(a) Yukawa and four-scalar couplings arising from the
supersymmetric Lagrangian with superpotential \protect\leqn{eq:w3}; (b)
Diagrams which give the leading radiative corrections to the scalar
field mass term.}
\label{fig:eight}
\end{figure}
The free massive Lagrangian \leqn{eq:o3} is a special case of
\leqn{eq:t3} for one supermultiplet with the superpotential
\begin{equation}
W = \frac{1}{2} m \phi^2 \ .
\eeq{eq:v3}
A more interesting model is obtained by setting
\begin{equation}
W = \frac{1}{3} \lambda \phi^3 \ .
\eeq{eq:w3}
In this case, $W$ leads directly to a Yukawa coupling proportional to
$\lambda$, while substituting for $F$ from \leqn{eq:u3}
yields a four scalar coupling
proportional to $\lambda^2$:
\begin{equation}
{\cal L} = \partial_\mu \phi^* \partial^\mu\phi +
\psi^\dagger i \bar\sigma \cdot \partial \psi - \lambda^2 |\phi^2|^2
-\lambda\bigl[ \psi^T c \psi \phi - \phi^* \psi^\dagger c \psi^*
\bigr] \ .
\eeq{eq:ww3}
These two vertices are shown in
Figure~\ref{fig:eight}(a). Their sizes are such that the two leading
diagrams which contribute to the scalar field mass renormalization,
shown in Figure~\ref{fig:eight}(b), are of the same order of magnitude.
In fact, it is not difficult to compute these diagrams for external
momentum $p = 0$. The first diagram has the value
\begin{equation}
- 4\lambda^2 i \cdot \int {d^4k\over (2\pi)^4} \, {i\over k^2} = 4 \lambda^2
\int {d^4k\over (2\pi)^4} \, {1\over k^2} \ .
\eeq{eq:x3}
To compute the second diagram, note that the standard form of the
fermion propagator is
$\VEV{\psi \psi^\dagger}$, and be careful to include all minus signs
resulting from fermion reordering. Then you will find
\begin{eqnarray}
& & \frac{1}{2} (-2i\lambda)(2i\lambda)\int {d^4k\over (2\pi)^4} \,
{\mbox{\rm tr}} \left[ {i\sigma\cdot k\over k^2} c \bigl( {-i\sigma\cdot k\over k^2}
\bigr)^T c \right] \nonumber \\
& & \hskip 0.4in = -2\lambda^2 \int {d^4k\over (2\pi)^4} \,
{ {\mbox{\rm tr}}[ \sigma\cdot k \bar \sigma \cdot k]\over k^4 } \ .
\eeqa{eq:xx3}
Using \leqn{eq:b3}, the trace gives $2k^2$, and the two diagrams
cancel precisely. Thus, the choice \leqn{eq:w3} presents us with
an interacting quantum field theory, but one with exceptional
cancellations in the scalar field mass term.
In this simple model, it is not difficult to see that the scalar field
mass corrections must vanish as a matter of principle.
The theory with superpotential
\leqn{eq:w3} is invariant under the symmetry
\begin{equation}
\phi \to e^{2i\alpha} \phi \ , \qquad \psi \to e^{-i\alpha} \psi\ .
\eeq{eq:y3}
This symmetry is inconsistent with the appearance of a fermion mass
term $m\psi^T c \psi$, as in \leqn{eq:q3}. The symmetry does not
prohibit the appearance of a scalar mass term, but if the theory is to
remain supersymmetric, the scalar cannot have a different mass from the
fermion. However, the cancellation of radiative corrections in models
of the form \leqn{eq:t3} is actually much more profound. It can be
shown that, in a general model of this type, the only nonvanishing
radiative corrections to the potential terms are field rescalings. If
a particular coupling---the mass term, a cubic interaction, or any
other---is omitted from the original superpotential, it cannot be
generated by radiative corrections \cite{SPnr,GSRk}.
For later reference, I will write the potential energy associated
with the most general system with a Lagrangian of the form
\leqn{eq:t3}. This is
\begin{equation}
V = - F_j^\dagger F_j - F_j {\partial W \over \partial \phi_j} - F^\dagger_j
\left( {\partial W \over \partial \phi_j}\right)^* \ .
\eeq{eq:z3}
Substituting for $F_j$ from \leqn{eq:u3}, we find
\begin{equation}
V = \sum_j \left| {\partial W \over \partial \phi_j} \right|^2 \ .
\eeq{eq:a4}
This simple result is called the {\em F-term} potential. It is
minimized by setting all of the $F_j$ equal to zero. If this is
possible, we obtain a vacuum state with $\vev{H} = 0$ which is also
invariant to supersymmetry, in accord with the discussion of
\leqn{eq:m3}. On the other hand, supersymmetry is spontaneously
broken if for some reason it is not possible to satisfy all of the
conditions $F_j= 0$ simultaneously. In that case, we obtain a vacuum
state with $\vev{H} >0$.
\subsection{Coupling constant unification}
At this point, we have not yet completed our discussion of the
structure of supersymmetric Lagrangians. In particular, we have not
yet written the supersymmetric Lagrangians of vector fields, beyond
simply noting that a vector field combines with a chiral fermion to
form a vector supermultiplet. Nevertheless, it is not too soon to try
to write a supersymmetric generalization of the Standard Model.
I will first list the ingredients needed for this generalization. For
each of the $SU(3)\times SU(2)\times U(1)$ gauge bosons, we need a
chiral fermion $\lambda^a$ to form a vector supermultiplet. These
new fermions are called {\em gauginos}. I will denote the specific
partners of specific gauge bosons with a tilde.
For example, the fermionic partner of the
gluon will be called $\widetilde g$, the {\em gluino}, and the fermionic
partners of the $W^+$ will be called $\widetilde w^+$, the {\em wino}.
None of
these fermions have the quantum numbers of quarks and leptons. So we
need to add a complex scalar for each chiral fermion species to put
the quarks and leptons into chiral supermultiplets. I will use the labels for
left-handed fermion multiplets in \leqn{eq:a1} also to denote the quark
and lepton supermultiplets. Hopefully, it will be clear from context
whether I am talking about the supermultiplet or the fermion. The
scalar partners of quarks and leptons are called {\em squarks} and {\em
sleptons}. I will denote these with a tilde. For example, the partner
of $e^-_L = L^-$ is the selectron $\widetilde e^-_L$ or $\widetilde L^-$.
The partner of $\bar e^* = e^-_R$ is a distinct selectron which I will call
$\widetilde e^-_R$. The Higgs fields must
also belong to chiral supermultiplets. I will denote the scalar
components as $h_i$ and the left-handed fermions as $\widetilde h_i$. We
will see in a moment that at least two different Higgs multiplets are
required.
Although we need a bit more formalism to write the supersymmetric
generalization of the Standard Model gauge couplings, it is already
completely straightforward to write the supersymmetric generalization
of the Yukawa couplings linking quarks and leptons to the Higgs sector.
The generalization of the third line of \leqn{eq:a} is given by writing
the superpotential
\begin{equation}
W = \lambda^{ij}_u \bar u^i h_2 \cdot Q^j
+ \lambda^{ij}_d \bar d^i h_1 \cdot Q^j +
\lambda^{ij}_\ell \bar e^i h_1 \cdot L^j
\eeq{eq:b4}
Note that, where in \leqn{eq:a} I wrote $\phi$ and $\phi^*$, I am
forced here to introduce two different Higgs fields $h_1$ and $h_2$.
The hypercharge assignments of $\bar u$ and $Q$ require for the first
term a Higgs field with $Y = +\frac{1}{2}$; for the next two terms, we need a
Higgs field with $Y = - \frac{1}{2}$. Since $W$ must be an analytic function
of supermultiplet fields, as I explained below \leqn{eq:t3}, replacing
$h_1$ by $(h_2)^*$ gives a Lagrangian which is not supersymmetric.
There is another, more subtle, argument for a second Higgs doublet.
Just as in the Standard Model, triangle loop diagrams involving the
chiral fermions of the theory contain terms which potentially violate
gauge invariance. These anomalous terms cancel when one sums over the
chiral fermions of each quark and lepton generation. However, the
chiral fermion $\widetilde h_2$ leads to a new anomaly term which
violates the conservation of hypercharge. This contribution is
naturally cancelled by the contribution from $\widetilde h_1$.
We still need several more ingredients to construct the full
supersymmetric generalization of the Standard Model, but we have now
made
a good start. We have introduced the minimum number of new particles
(unfortunately, this is not a small number), and we have generated new
couplings for them without yet introducing new parameters beyond those
of the Standard Model.
In addition, we already have enough information to study the
unification of forces using the formalism of Section 2.5. To begin, we
must extend the formulae \leqn{eq:k1}, \leqn{eq:kk1} to supersymmetric
models. For $SU(N)$ gauge theories, the gauginos
give a contribution $(-\frac{2}{3}N)$ to the right-hand side
of \leqn{eq:k1}. In \leqn{eq:kk1}, there is no contribution either from
the gauge bosons or from their fermionic partners. We should also
group together the contributions from matter fermions and scalars.
Then we can write the renormalization group coefficient $b_N$ for
$SU(N)$ gauge theories with $n_f$ chiral supermultiplets in the
fundamental representation as
\begin{equation}
b_N = 3 N - \frac{1}{2} n_f \ .
\eeq{eq:c4}
Similarly, the renormalization group coefficient for $U(1)$ gauge
theories is now
\begin{equation}
b_1 = - \sum_f t_f^2 \ ,
\eeq{eq:d4}
where the sum runs over chiral supermultiplets.
Evaluating these expressions for $SU(3)\times SU(2)\times U(1)$ gauge
theories with $n_g$ quark and lepton generations and $n_h$ Higgs
fields, we find
\begin{eqnarray}
b_3 &=& 9 - 2 n_g \nonumber \\
b_2 &=& 6 - 2 n_g - \frac{1}{2} n_h\nonumber \\
b_1 &=& \phantom{9} - 2 n_g - \frac{3}{10} n_h \ .
\eeqa{eq:e4}
Now insert these expressions into \leqn{eq:y1}; for $n_h =2$, we find
\begin{equation}
B = \frac{5}{7} = 0.714 \ ,
\eeq{eq:f4}
in excellent agreement with the experimental value \leqn{eq:z1}.
Apparently, supersymmetry repairs the difficulty that the
Standard Model has in linking in a simple way to grand unification.
The running coupling constants extrapolated from the experimental
values \leqn{eq:w1} using the supersymmetric renormalization group
equations are shown in Figure~\ref{fig:nine}.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4in\epsfbox{SUSYevolve.eps}}
\end{center}
\caption{Evolution of the $SU(3)\times SU(2)\times U(1)$ gauge
couplings to high energy scales, using the one-loop renormalization group
equations of the supersymmetric generalization of the Standard Model.}
\label{fig:nine}
\end{figure}
Of course it is not difficult to simply make up a model that agrees with
any previously given value of $B$.
I hope to have convinced you that the value
\leqn{eq:f4} arises naturally in grand unified theories based on
supersymmetry. By comparing this agreement to the error bars for $B$
quoted in \leqn{eq:z1}, you can decide for yourself whether this
agreement is fortuitous.
\subsection{The rest of the supersymmetric Standard Model}
I will now complete the Lagrangian of the supersymmetric
generalization of the Standard Model. First, I must write the
Lagrangian for the vector supermultiplet and then I must show how to
couple that multiplet
to matter fields. After this, I will discuss some general
properties of the resulting system.
The vector multiplet $( A_\mu^a, \lambda^a)$ containing the gauge
bosons of a Yang-Mills theory and their partners has the supersymmetric
Lagrangian
\begin{equation}
{\cal L} = -\frac{1}{4} \left( F_{\mu\nu}^a\right)^2 + \lambda^{\dagger a}
i \bar\sigma^\mu D_\mu \lambda^a + \frac{1}{2} (D^a)^2 \ ,
\eeq{eq:g4}
where $D_\mu= (\partial_\mu - i g A_\mu^a t^a)$ is the gauge-covariant
derivative, with $t^a$ the gauge group generator. In order to write the
interactions of this multiplet in the simplest form, I have introduced
a set of auxiliary real scalar fields, called $D^a$. (The name is
conventional; please do not confuse them with the covariant
derivatives.) The gauge interactions of a chiral multiplet are then
described by generalizing the first line of \leqn{eq:t3} to
\begin{eqnarray}
{\cal L}
&=& D_\mu \phi_j^* D^\mu\phi_j + \psi_j^\dagger i \bar\sigma^\mu D_\mu
\psi_j + F_j^\dagger F_j \nonumber \\
& & -\sqrt{2}i g\left( \phi_j^* \lambda^{Ta} t^a c \psi_j -
\psi_j^\dagger t^a c \lambda^{*a} \phi_j\right) + g D^a \phi_j^\dagger
t^a \phi_j \ .
\eeqa{eq:h4}
Eliminating the auxiliary fields using their field equation
\begin{equation}
D^a = - g \sum_j \phi_j^\dagger t^a \phi_j
\eeq{eq:i4}
gives a second contribution to the scalar potential, which should be
added to the F-term \leqn{eq:a4}. This is the {\em D-term}
\begin{equation}
V = \frac{g^2}{2} \sum_a \left( \sum_j \phi_j^\dagger t^a \phi_j \right)^2\ .
\eeq{eq:j4}
As with the F-term, the ground state of this potential is obtained by
setting all of the $D^a$ equal to zero, if it is possible. In that
case, one obtains a supersymmetric vacuum state with $\vev{H} = 0$.
The full supersymmetric generalization of the Standard Model can be
written in the form
\begin{equation}
{\cal L} = {\cal L}_{\rm gauge} + {\cal L}_{\rm kin} + {\cal L}_{\rm Yukawa} + {\cal L}_\mu \ .
\eeq{eq:k4}
The first term is the kinetic energy term for the gauge multiplets of
$SU(3)\times SU(2)\times U(1)$. The second term is the kinetic energy
term for quark, lepton, and Higgs chiral multiplets, including gauge
couplings of the form \leqn{eq:h4}. The third term is the Yukawa and
scalar interactions given by the second line of \leqn{eq:t3} using the
superpotential \leqn{eq:b4}. The last term is that following from an
additional gauge-invariant term that we could add to the superpotential,
\begin{equation}
\Delta W = \mu h_1 \cdot h_2 \ .
\eeq{eq:l4}
This term contributes a supersymmetric mass term to the Higgs fields
and to their fermion partners. This term is needed on
phenomenological grounds, as I will discuss in Section 4.4. The
parameter $\mu$ is the only new parameter that we have added so far to
the Standard Model.
This Lagrangian does not yet describe a realistic theory. It has exact
supersymmetry. Thus, it predicts charged scalars degenerate with the
electron and massless fermionic partners for the photon and gluons. On
the other hand, it has some very attractive properties. For the reasons
explained below \leqn{eq:y3}, there is no quadratically divergent
renormalization of the Higgs boson masses, or of any other mass in the
theory. Thus, the radiative correction \leqn{eq:z}, which was such a
problem for the Standard Model, is absent in this generalization. In
fact, the only renormalizations in the theory are renormalizations of
the $SU(3)\times SU(2)\times U(1)$ gauge couplings and rescalings of
the various quark, lepton, and Higgs fields. In the next section, I
will show that we can modify \leqn{eq:k4} to maintain this property
while making the mass spectrum of the theory more realistic.
The Lagrangian \leqn{eq:k4} conserves the discrete quantum number
\begin{equation}
R = (-1)^{L + Q + 2J} \ ,
\eeq{eq:m4}
where $L$ is the lepton number, $Q= 3B$ is the quark number, and $J$ is
the spin. This quantity is called {\em R-parity}, and it is
constructed precisely so that $R = +1$ for the conventional gauge
boson, quark, lepton, and Higgs states while $R = -1$ for their
supersymmetry partners. If $R$ is exactly conserved, supersymmetric
particles can only be produced in pairs, and the lightest
supersymmetric partner must be absolutely stable. On the other hand,
$R$-parity can be violated only by adding terms to ${\cal L}$ which violate
baryon- or lepton-number conservation.
It is in fact straightforward to write a consistent
$R$-parity-violating supersymmetric theory. The following terms which
can be added to the superpotential are invariant under $SU(3)\times
SU(2)\times U(1)$ but violate baryon or lepton number:
\begin{equation}
\Delta W =
\lambda^{ijk}_B \bar u^i \bar d^j \bar d^k
+ \lambda^{ijk}_L Q^i \cdot L^j \bar d^k +
\lambda^{ijk}_e L^i \cdot L^j \bar e^k + \mu_L^{i} L^i \cdot h_2 \ .
\eeq{eq:n4}
A different phenomenology is produced if one adds the baryon-number
violating couplings $\lambda_B$, or if one adds the other couplings
written in \leqn{eq:n4},
which violate lepton number. If one were to add both types of
couplings at once, that would be a disaster, leading to rapid proton
decay.
For a full exploration of the phenomenology of supersymmetric theories,
we should investigate both models in which $R$-parity is conserved, in
which the lightest superpartner is stable, and models in which
$R$-parity is violated, in which the lightest superpartner decays
through $B$- or $L$- violating interactions. In these lectures, since
my plan is to present illustrative examples rather than a systematic
survey, I will restrict my attention to models with conserved
$R$-parity.
\subsection{How to describe supersymmetry breaking}
Now we must address the question of how to modify the Lagrangian
\leqn{eq:k4} to obtain a model that could be realistic. Our problem is
that the supersymmetry on which the model is based is not manifest in
the spectrum of particles we see in Nature. So now we must add new
particles or interactions which cause supersymmetry to be spontaneously
broken.
It would be very attractive if there were a simple model of
supersymmetry breaking that we could connect to the supersymmetric
Standard Model. Unfortunately, models of supersymmetry breaking are
generally not simple. So most studies of supersymmetry do not invoke
the supersymmetry breaking mechanism directly but instead try to treat
its consequences phenomenologically. This can be done by adding to
\leqn{eq:k4} terms which violate supersymmetry but become unimportant
at high energy. Some time ago, Grisaru and Girardello \cite{GG} listed
the terms that one can add to a supersymmetric Lagrangian without
disturbing the cancellation of quadratic divergences in the scalar mass
terms. These terms are
\begin{equation}
{\cal L}_{\rm soft} = - M^2_j \left| \phi_j \right|^2
- m_a \lambda^{Ta} c \lambda^a
+ B \mu h_1 \cdot h_2 + A W(\phi) \ ,
\eeq{eq:o4}
where $W$ is the superpotential \leqn{eq:b4}, plus other possible
analytic terms cubic in the scalar fields $\phi_j$. These terms give
mass to the squarks and sleptons and to the gauginos, moving
the unobserved superpartners to higher energy. Note that terms of the
structure $\phi^* \phi\phi$ and the mass term $\psi^T c \psi$ do
not appear in \leqn{eq:o4} because they can
regenerate the divergences of the nonsupersymmetric theory.
All of the coefficients in \leqn{eq:o4} have the dimensions of (mass)
or (mass)$^2$. These new terms in \leqn{eq:o4} are called {\em soft
supersymmetry-breaking terms}. We can build a phenomenological model
of supersymmetry by adding to \leqn{eq:k4} the various terms in
${\cal L}_{\rm soft}$ with coefficients to be determined by experiment.
It is not difficult to understand that it is the new, rather
than the familiar, half of the spectrum of the supersymmetric model
that obtains mass from \leqn{eq:o4}. In
Section 2.5, I argued that the particles we see in high-energy
experiments are visible only because they are protected
from acquiring very large masses by some
symmetry principle. In that
discussion, I invoked only the Standard Model gauge symmetries. In
supersymmetric models, we have a more complex situation. In each
supermultiplet, one particle is protected from acquiring mass, as
before, by $SU(2) \times U(1)$. However, their superpartners---the squarks,
sleptons, and gauginos---are protected from
obtaining mass only by the supersymmetry relation to their partner.
Thus, if supersymmetry is spontaneously broken, all
that is necessary to generate masses for these partners is a coupling
of the supersymmetry-breaking expectation values
to the Standard Model supermultiplets.
This idea suggests a general structure for a realistic supersymmetric model.
All of the phenomena of the model are driven by supersymmetry breaking.
First, supersymmetry is broken spontaneously in some new sector of
particles at high energy. Then, the coupling between these particles
and the quarks, leptons, and gauge bosons leads to soft
supersymmetry-breaking terms for those supermultiplets. It is very
tempting to speculate further that those terms might then give rise to
the spontaneous breaking of $SU(2)\times U(1)$ and so to the masses for the
$W$ and $Z$ and for the quarks and leptons. I will explain in the next
section how this might happen.
The size of the mass terms in \leqn{eq:o4} depends on two factors. The
first of these is the mass scale at which supersymmetry is broken.
Saying for definiteness that supersymmetry breaking is due to the
nonzero value of an $F$ auxiliary field, we can denote this scale by
writing $\vev{F}$, which has the dimensions of (mass)$^2$. The second
factor is the mass of the bosons or fermions which couple the
high-energy sector to the particles of the Standard Model and thus
communicate the supersymmetry breaking. I will call this mass ${\cal M}$, the
{\em messenger scale}. Then the mass parameters that appear in
\leqn{eq:o4} should be of the order of
\begin{equation}
m_S = {\vev{F}\over {\cal M}} \ .
\eeq{eq:p4}
If supersymmetry indeed gives the mechanism of electroweak symmetry
breaking, then $m_S$ should be of the order of 1 TeV. A case that is
often discussed in the literature is that in which the messenger is
supergravity. In that case, ${\cal M}$ is the Planck mass $m_{\mbox{\scriptsize Pl}}$, equal
to $10^{19}$ GeV, and
$\vev{F}^{1/2} \sim 10^{11}$ GeV. Alternatively, both $\vev{F}^{1/2}$ and
${\cal M}$ could be of the order of a few TeV.
The detailed form of the soft supersymmetry-breaking terms depends on
the underlying model that has generated them. If one allows these
terms to have their most general form (including arbitrary flavor- and
CP-violating interactions), they contain about 120 new parameters.
However, any particular model of supersymmetry breaking generates a
specific set of these soft terms with some observable regularities.
One of our goals in Section 4 of these lectures will be to understand
how to determine the soft parameters experimentally and thus uncover
the patterns which govern their construction.
\subsection{Electroweak symmetry breaking from supersymmetry}
There is a subtlety in trying to determine the pattern of the soft
parameters experimentally. Like all other coupling constants in a
supersymmetric theory, these parameters run under the influence of the
renormalization group equations. Thus, the true underlying pattern
might not be seen directly at the TeV
energy scale. Rather, it might be necessary to extrapolate the
measured values of parameters to higher energy to look for regularities.
The situation here is very similar to that of the Standard Model
coupling constants. The underlying picture which leads to the values
of the $SU(3)\times SU(2)\times U(1)$ coupling constants is not very
obvious from the data \leqn{eq:w1}. Only when these data are
extrapolated to very high energy using the renormalization group do we
see evidence for their unification. Obviously, such evidence must be
indirect. On the other hand, the discovery of supersymmetric
particles, and the discovery that these particles showed other
unification relations---with the same unification mass scale---would
give powerful support to this picture.
I will discuss general systematics of the renormalization-group running
of the soft parameters in Section 4.2. But there is one set of
renormalization group equations that I would like to call your
attention to right away. These are the equations for the soft mass of
the Higgs boson and the squarks which are most strongly coupled
to it. We saw in Section 2.6 that the top quark Yukawa coupling was
sufficiently large that it could have an important effect in
renormalization group evolution. Let us consider, then, the evolution
equations for the three scalars that interact through this coupling,
the Higgs boson $h_2$, the scalar top $\widetilde Q_{t} = \widetilde t_L$,
and the
scalar top $\widetilde t_R$. The most important terms in
these equations are the following:
\begin{eqnarray}
{d\over d \log Q} M^2_{h} &=& {1\over (4\pi)^2} \left\{ 3 \lambda_t^2
(M^2_h + M^2_Q + M^2_t) + \cdots \right\}\nonumber \\
{d\over d \log Q} M^2_{Q} &=& {1\over (4\pi)^2} \left\{ 2 \lambda_t^2
(M^2_h + M^2_Q + M^2_t) - {32\over 3}g_3^2 m_3^2 + \cdots \right\}\nonumber \\
{d\over d \log Q} M^2_{t} &=& {1\over (4\pi)^2} \left\{ 3 \lambda_t^2
(M^2_h + M^2_Q + M^2_t) - {32\over 3}g_3^2 m_3^2 + \cdots \right\}
\ , \nonumber \\
\eeqa{eq:r4}
where $g_3$ is the QCD coupling, $m_3$ is the mass of the gluino,
and the omitted terms are of electroweak
strength. The last two equations exhibit the competition between the
top quark Yukawa coupling and QCD renormalizations which we saw earlier
in \leqn{eq:e2} and \leqn{eq:h2}. The supersymmetric QCD couplings
cause the masses of the $\widetilde Q_{t}$ and $\widetilde t_R$ to
increase at low energies, while the effect of $\lambda_t$ causes all
three masses to decrease.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4in\epsfbox{GKevolve.eps}}
\end{center}
\caption[*]{Example of the
evolution of the soft supersymmetry-breaking mass terms from
the grand unification scale to the weak interaction scale,
from \protect\cite{Gordycrew}. The initial conditions for the
evolution equations at the grand unification scale are taken to be
universal among species, in a simple pattern presented in Section 4.2.}
\label{fig:ten}
\end{figure}
Indeed, if the $\widetilde Q_{t}$ and $\widetilde t_R$ masses
stay large, the equations \leqn{eq:r4} predict that $M^2_h$ should go
down through zero and become negative \cite{RI}. Thus, if all scalar mass
parameters are initially positive at high energy scales, these
equations imply that the Higgs boson $h_2$ will acquire a negative
mass parameter $M^2_h$ and thus an instability to electroweak symmetry breaking. An
example of the solution to the full set of renormalization group
equations, exhibiting the instability in $M^2_h$, is shown in
Figure~\ref{fig:ten} \cite{Gordycrew}.
At first sight, it might have been any of the scalar fields in the
theory whose potential would be driven unstable by
renormalization group evolution. But the Higgs scalar $h_2$ has
the strongest instability if the top quark is heavy. In this way, the
supersymmetric extension of the Standard Model naturally contains the
essential feature that we set out to find, a physical mechanism for
electroweak symmetry breaking. As a bonus, we find that this mechanism
is closely associated with the heaviness of the top quark.
If you have been patient through all of the formalism I have presented
in this section, you now see that your patience has paid off. It was
not obvious when we started that supersymmetry would give the essential
ingredients of a theory of electroweak symmetry breaking. But it
turned out to be so. In the next section, I will present more details
of the physics of supersymmetric models and present a program for their
experimental exploration.
\section{Supersymmetry: Experiments}
In the previous section, I have presented the basic formalism of
supersymmetry. I have also explained that supersymmetric models have
several features that naturally answer questions posed by the Standard
Model. At the beginning of Section 3, I told you that supersymmetry
might be considered a worked example of physics beyond the Standard
Model. Though I doubt you are persuaded by now that physics beyond
the Standard Model must be supersymmetric, I hope you see these models
as reasonable alternatives that can be understood in very concrete
terms.
Now I would like to analyze the next step along this line of reasoning.
What if, at LEP 2 or at some higher-energy machine, the superpartners
appear? This discovery would change the course of experimental
high-energy physics and shape it along a certain direction. We should
then ask, what will be the important issues in high-energy physics, and
how will we resolve these issues experimentally? In this section, I
will give a rather detailed answer to this question.
I emphasize again that I am not asking you to become a believer in
supersymmetry. A different discovery about physics beyond the Standard
Model would change the focus of high-energy physics in a different
direction. But we will learn more by choosing a particular direction
and studying its far-reaching implications than by trying to reach
vague but general conclusions. I will strike off in a different
direction in Section 5.
On the other hand, I hope you are not put off by the complexity of the
supersymmetric Standard Model. It is true that this model has many
ingredients and a very large content of new undiscovered particles. On
the other hand, the model develops naturally from a single physical
idea. I argued in Section 2.2 that this
structure, a complex phenomenology built up around a definite
principle of physics, is seen often in Nature. It
leads to a more attractive solution to the problems of the
Standard Model than a model whose only virtue is minimality.
It is true that, in models with complex consequences, it may not be
easy to see the underlying structure in the experimental data. This
is the challenge that experimenters will face. I will now discuss how
we can meet this challenge for the particular case in which the physics
beyond the Standard Model is supersymmetric.
\subsection{More about soft supersymmetry breaking}
As we discussed in Section 3.6, a realistic supersymmetric theory has a
Lagrangian of the form
\begin{equation}
{\cal L} = {\cal L}_{\rm gauge} + {\cal L}_{\rm kin} + {\cal L}_{\rm Yukawa} + {\cal L}_\mu
+ {\cal L}_{\rm soft}\ .
\eeq{eq:s4}
Of the various terms listed here, the first three contain only
couplings that are already present in the Lagrangian of the Standard
Model. The fourth term contains one new parameter $\mu$. The last
term, however, contains a very large number of new parameters.
I have already explained that one should not be afraid of seeing a
large number of undetermined parameters here. The same proliferation
of parameters occurs in any theory with a certain level of complexity
when viewed from below. The low-energy scattering amplitudes of QCD,
for example, contain many parameters which turn out to be the masses
and decay constants of hadronic resonances. If it is possible to
measure these parameters, we will obtain a large amount of new
information.
In thinking about the values of the soft supersymmetry-breaking
parameters, there are two features that we should take into account.
The first is that the soft parameters obey renormalization group
equations. Thus, they potentially change significantly from their
underlying values at the messenger scale defined in \leqn{eq:p4} to
their physical values observable at the TeV scale. We have seen in
Section 3.7 that these changes can have important physical
consequences. In the next section, I will describe the renormalization
group evolution of the supersymmetry-breaking mass terms in more
detail, and we will use our understanding of this evolution to work out
some general predictions for the superparticle spectrum.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=2.0in\epsfbox{Gmix.eps}}
\end{center}
\caption{A potentially
dangerous contribution of supersymmetric particles to
flavor-changing neutral current processes.}
\label{fig:eleven}
\end{figure}
The second feature is that there are strong constraints on the flavor
structure of soft supersymmetry breaking terms which come from
constraints on flavor-changing neutral current processes. In
\leqn{eq:o4}, I have written independent mass terms for each of the
scalar fields. In principle, I could also have written mass terms that
mixed these fields. However, if we write the scalars in the basis in
which the quark masses are diagonalized, we must not find substantial
off-diagonal terms. A mixing
\begin{equation}
\Delta{\cal L} = \Delta M^2_d \bar s^\dagger \bar d \ ,
\eeq{eq:t4}
for example, would induce an excessive contribution to the $K_L$--$K_S$
mass difference through the diagram shown in Figure~\ref{fig:eleven}
unless
\begin{equation}
{ \Delta M^2_d \over M^2_d}< 10^{-2}\left( {M_d \over
300 \ \mbox{GeV}}\right)^2 \ .
\eeq{eq:u4}
Similar constraints arise from $D$--$\bar D$ mixing, $B$--$\bar B$
mixing, $\mu\to e \gamma$ \cite{SUSYFCNC}.
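To put the constraint \leqn{eq:u4} in concrete terms (a translation of my
own): for $M_d = 300$ GeV, it requires $\Delta M^2_d < 10^{-2} M^2_d$,
that is, $(\Delta M^2_d)^{1/2} < 30$ GeV. This is a remarkably strong
degeneracy requirement on scalars whose masses are themselves of order
300 GeV.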
The strength of the constraint \leqn{eq:u4} suggests that the physical
mechanism that generates the soft supersymmetry breaking terms
contains a natural feature that suppresses such off-diagonal terms.
One possibility is that equal soft masses are generated for all
scalars with the same $SU(2)\times U(1)$ quantum numbers. Then the
scalar mass matrix is proportional to the matrix {\bf 1} and so is
diagonal in any basis \cite{DGg,Sakai}.
Another possibility is that, by virtue of
discrete flavor symmetries, the scalar mass matrices are approximately
diagonal in the same basis in which the quark mass matrix is diagonal
\cite{NLS}. These two solutions to the potential problem of
supersymmetric flavor violation are called, respectively,
`universality' and `alignment'. A problem with the alignment scenario
is that the bases which diagonalize the $u$ and $d$ quark mass matrices
differ by the weak mixing angles, so it is not possible to completely
suppress the mixing both for the $u$ and $d$ partners. This scenario
then leads to a prediction of $D$--$\bar D$ mixing near the current
experimental bound.
\subsection{The spectrum of superparticles---concepts}
We are now ready to discuss the expectations for the mass spectrum of
supersymmetric partners. Any theory of this spectrum must have two
parts giving, first, the generation of the underlying soft parameters
at the messenger scale and, second, the modification of these
parameters through renormalization group evolution. In this section, I
will make the simplest assumptions about the underlying soft parameters
and concentrate on the question of how these parameters are modified by
the renormalization group. In the next section, we will confront the
question of how these simple assumptions can be tested.
Let us begin by considering the fermionic partners of gauge bosons,
the gauginos.
If the messenger scale lies above the scale of grand
unification, the gauginos associated with the $SU(3)\times SU(2)\times
U(1)$ gauge bosons will be organized into a single representation of
the grand unification group and thus will have a common soft mass term.
This gives a very simple initial condition for renormalization group
evolution.
The renormalization group equation for a gaugino mass $m_i$ is
\begin{equation}
{d \over d \log Q} m_i = - {1\over (4\pi)^2} \cdot 2 b_i g_i^2 \cdot m_i\ ,
\eeq{eq:v4}
where $i= 3,2,1$ for the gauginos of $SU(3)\times SU(2)\times U(1)$ and
$b_i$ is the coefficient in the equation \leqn{eq:j1} for the coupling
constant renormalization. Comparing these two equations, we find that
$m_i(Q)$ and $\alpha_i(Q)$ have the same renormalization group
evolution, and so their ratio is constant as a function of $Q$. This
relation is often written
\begin{equation}
{m_i(Q)\over \alpha_i(Q)} = {m_{1/2}\over \alpha_U}\ ,
\eeq{eq:w4}
where $\alpha_U$ is the unification value of the coupling constant
($\alpha_U^{-1} = 24$), and $m_{1/2}$ is the underlying soft mass
parameter. As the $\alpha_i$ flow from their unified value at very
large scales to their observed values at $Q= m_Z$, the gaugino masses
flow along with them. The result is that the grand unification of
gaugino masses implies the following relation among the observable
gaugino masses:
\begin{equation}
{m_1 \over \alpha_1} = {m_2 \over \alpha_2} = {m_3 \over \alpha_3}\ .
\eeq{eq:x4}
I will refer to this relation as {\em gaugino unification}. It implies
that, for the values at the weak scale,
\begin{equation}
{m_1 \over m_2} = 0.5 \ , \qquad {m_3 \over m_2} = 3.5 \ .
\eeq{eq:y4}
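As a quick numerical check of \leqn{eq:y4} (my own, using representative
weak-scale values $\alpha_1 \approx 0.017$ in the normalization
appropriate to grand unification, $\alpha_2 \approx 0.034$, $\alpha_3
\approx 0.12$):
\[
{m_1\over m_2} = {\alpha_1\over \alpha_2} \approx {0.017\over 0.034} = 0.5\ ,
\qquad
{m_3\over m_2} = {\alpha_3\over \alpha_2} \approx {0.12\over 0.034} \approx 3.5\ .
\]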
I caution you that these equations apply to a perturbative (for
example, $\overline{\rm MS}$) definition of the masses. For the gluino mass
$m_3$, the physical, on-shell, mass may be larger than the $\overline{\rm MS}$
mass by 10--20\%, due to a radiative correction which depends on the
ratio of the masses of the gluon and quark partners \cite{Damien}.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfbox{Gradm.eps}
\end{center}
\caption{A simple radiative correction giving gaugino masses
in the pattern of `gaugino unification'.}
\label{fig:elevenplus}
\end{figure}
Though gaugino unification is a consequence of the grand unification of
gaugino masses, it does not follow uniquely from this source. On the
contrary, this result can also follow from models in which gaugino
masses arise from radiative corrections at lower energy. For example,
in a model of Dine, Nelson, Nir, and Shirman \cite{DNNS}, gaugino
masses are induced by the diagram shown in Figure~\ref{fig:elevenplus},
in which a supersymmetry-breaking expectation value of $F$ couples to
some new supermultiplets of mass roughly 100 TeV, and this influence is
then transferred to the gauginos through their Standard Model gauge
couplings. As long as the mass pattern of the heavy particles is
sufficiently simple, we obtain gaugino masses $m_i$ proportional to the
corresponding $\alpha_i$, which reproduces \leqn{eq:x4}.
Now consider the masses of the squarks and sleptons, the
scalar partners of quarks and leptons.
We saw in Section 3.4 that,
since the left- and right-handed quarks belong to different
supermultiplets $Q$, $\bar u$, $\bar d$, each has its own scalar
partners. The same situation applies for the leptons. In this section,
I will assume for maximum
simplicity that the underlying values of the squark
and slepton mass parameters are completely universal, with the value
$M_0$. This is a stronger assumption than the prediction of grand
unification, and one which does not necessarily have a
fundamental justification.
Nevertheless, there are two effects that distort this
universal mass prediction into a complex particle spectrum.
The first of these effects comes from the $D$-term potential
\leqn{eq:j4}. Consider the contributions to this potential from the
Higgs fields $h_1$, $h_2$ and from a squark or slepton field
$\widetilde f$. Terms contributing to the $\widetilde f$ mass come
from the $D^a$ terms associated with the $U(1)$ and the neutral $SU(2)$
gauge bosons,
\begin{eqnarray}
V &=& {g^{\prime 2}\over 2} \left( h^*_1 (-\frac{1}{2}) h_1 +
h^*_2 (\frac{1}{2}) h_2 + \widetilde f^* Y \widetilde f\right)^2 \nonumber \\
& & + {g^{2}\over 2} \left( h^*_1 \tau^3 h_1 +
h^*_2 \tau^3 h_2 + \widetilde f^* I^3 \widetilde f\right)^2 \ .
\eeqa{eq:z4}
The factors in the first line are the hypercharges of the fields $h_1$,
$h_2$. Now replace these Higgs fields by their vacuum expectation
values
\begin{equation}
\VEV{h_1} = {1\over \sqrt{2}}\pmatrix{v \cos \beta\cr 0 \cr} \qquad
\VEV{h_2} = {1\over \sqrt{2}}\pmatrix{0 \cr v \sin \beta\cr}
\eeq{eq:a5}
and keep only the cross term in each square. This gives
\begin{eqnarray}
V &=& 2 {g^{\prime 2}\over 2}{v^2\over 2} \left(-\frac{1}{2} \cos^2\beta + \frac{1}{2}
\sin^2\beta\right) \widetilde f^* Y \widetilde f +
2 {g^{2}\over 2}{v^2\over 2} \left(+\frac{1}{2} \cos^2\beta - \frac{1}{2}
\sin^2\beta\right) \widetilde f^* I^3 \widetilde f \nonumber \\
&=& - c^2 m_Z^2 (\sin^2\beta - \cos^2\beta) \widetilde f^*
(I^3 - {s^2\over c^2} Y)\widetilde f \nonumber \\
&=& - m_Z^2 (\sin^2\beta - \cos^2\beta) \widetilde f^*
(I^3 - s^2 Q)\widetilde f \ .
\eeqa{eq:b5}
Thus, this term gives a contribution to the scalar mass
\begin{equation}
\Delta M_f^2 = - m_Z^2 {\tan^2\beta-1\over \tan^2\beta+1} (I^3 - s^2 Q)\ .
\eeq{eq:c5}
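To gauge the size of this shift, consider, as an illustration of my own,
the partner $\widetilde e_R$ of the right-handed electron, with $I^3 = 0$
and $Q = -1$. For $\tan\beta = 4$ and $s^2 \approx 0.23$,
\[
\Delta M_f^2 \approx - m_Z^2\cdot {15\over 17}\cdot (0.23)
\approx -0.2\, m_Z^2 \approx -(40\ \mbox{GeV})^2\ ,
\]
a correction that matters only if the soft masses are not much larger
than $m_Z$.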
\begin{figure}[htb]
\begin{center}
\leavevmode
\epsfbox{GMass.eps}
\end{center}
\caption{Renormalization of the soft scalar mass due to the gaugino
mass.}
\label{fig:elf}
\end{figure}
The second effect is the renormalization group running of the scalar
mass induced by the gluino mass through the diagram shown in
Figure~\ref{fig:elf}. The renormalization group equation for the scalar
mass parameter $M_f^2$ is
\begin{equation}
{d \over d \log Q} M_f^2 = - {1\over (4\pi)^2} \cdot 8\cdot
\sum_i C_2(r_i) g_i^2 m_i^2 \ ,
\eeq{eq:d5}
where
\begin{equation}
C_2(r_i) = \cases{ \frac{3}{5} Y^2 & $U(1)$ \cr
0, \ \frac{3}{4} & singlets, doublets of $SU(2)$\cr
0, \ \frac{4}{3} & singlets, triplets of $SU(3)$\cr} \ .
\eeq{eq:e5}
In writing this equation, I have ignored the Yukawa couplings of the
flavor $f$. This is a good approximation for light flavors, but we have
already seen that it is not a good approximation for the top squarks,
and it may fail also for the $b$ and $\tau$ partners if $\tan\beta$ is
large. In those cases, one must add further terms
to the renormalization group equations, such as those given
in \leqn{eq:r4}.
To integrate the equation \leqn{eq:d5}, we need to know the behavior of
the gaugino masses as a function of $Q$. Let me assume that this is
given by gaugino unification according to \leqn{eq:w4}. Then
\begin{equation}
{g_i^2 m_i^2\over 4\pi} = \alpha_i(Q)\cdot {\alpha_i^2(Q)\,
m_i^2({\cal M})\over \alpha_{i{\cal M}}^2} = \alpha_i^3(Q) \cdot
{m_2^2\over \alpha_2^2}\ ,
\eeq{eq:f5}
where $\alpha_{i{\cal M}}$ is the value of $\alpha_i$ at the messenger scale,
and the quantities at the extreme right are to be evaluated at the weak
interaction scale. Inserting this expression into \leqn{eq:d5}
and taking the evolution of $\alpha_i(Q)$ to be given by \leqn{eq:v1},
we obtain the right-hand side of \leqn{eq:d5} as an explicit function of
$Q$. To integrate the equation from messenger scale to the weak scale,
we only need to evaluate
\begin{eqnarray}
\int^{\cal M}_{m_Z} d\,\log Q \, \alpha_i^3(Q) &=&
\int^{\cal M}_{m_Z} d\,\log Q {\alpha^3_{i {\cal M}} \over (1 +
(b_i/2\pi)\alpha_{i{\cal M}}\log(Q/{\cal M}))^3}\nonumber \\
&=&
- {\pi\over b_i\alpha_{i {\cal M}}} {\alpha^3_{i {\cal M}} \over (1 +
(b_i/2\pi)\alpha_{i{\cal M}}\log(Q/{\cal M}))^2}\bigg|^{\cal M}_{m_Z} \nonumber \\
&=& {\pi\over b_i}(\alpha_i^2 - \alpha_{i {\cal M}}^2)\ .
\eeqa{eq:g5}
Then, assembling the renormalization group and $D$-term contributions,
the physical scalar mass at the weak interaction scale is given by
\begin{equation}
M_f^2 = M_0^2 + \sum_i {2\over b_i}C_2(r_i)\,
{\alpha_i^2 - \alpha_{i {\cal M}}^2\over \alpha_2^2}\, m_2^2 + \Delta M_f^2 \ .
\eeq{eq:h5}
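To see what \leqn{eq:h5} implies numerically, here is a rough estimate of
my own for a squark, keeping only the $SU(3)$ term and taking $b_3 = 3$
for the supersymmetric QCD beta function, $C_2 = \frac{4}{3}$,
$\alpha_3(m_Z) \approx 0.12$, $\alpha_{3{\cal M}} \approx \alpha_U \approx
1/24$, and $\alpha_2 \approx 0.034$:
\[
\Delta M^2 \approx {2\over 3}\cdot {4\over 3}\cdot
{(0.12)^2 - (0.042)^2 \over (0.034)^2}\, m_2^2 \approx 10\, m_2^2\ ,
\]
so the renormalization effect alone lifts the squark masses to roughly
$3 m_2$ for ${\cal M}$ near the unification scale.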
The term in \leqn{eq:h5} induced by the renormalization group effect is
not simple, but it is also not so difficult to understand. It is
amusing that it is quite similar in form to the formula one would find
for a one-loop correction from a diagram of the general structure shown
in Figure~\ref{fig:elf}. Indeed, in the model of Dine, Nelson, Nir,
and Shirman referred to above, for which the messenger scale is quite
close to the weak interaction scale, the computation of radiative
corrections gives the simple result
\begin{equation}
M_f^2 = \sum_i 2 C_2(r_i)
{ \alpha_i^2\over \alpha_2^2} m_2^2 + \Delta M_f^2 \ ,
\eeq{eq:i5}
where, in this formula, the quantity $m_2/\alpha_2$ is simply the mass
scale of the messenger particles. The formulae \leqn{eq:h5} and
\leqn{eq:i5} do differ quantitatively, as we will see in the next
section.
The equations \leqn{eq:w4} and \leqn{eq:h5} give a characteristic
evolution from the large scale ${\cal M}$ down to the weak interaction scale.
The colored particles are carried upward in mass by a large factor,
while the masses of color-singlet sleptons and gauginos change by a
smaller amount. The effects of the top Yukawa coupling discussed in
Section 3.7 add to these mass shifts, lowering the masses of the top
squarks and sending the (mass)$^{2}$ of the Higgs field $h_2$ down through
zero. These observations explain all of the basic qualitative features of
the evolution which we saw illustrated
in Figure~\ref{fig:ten}.
\subsection{The spectrum of superparticles---diagnostics}
Now that we understand the various effects that can contribute to the
superpartner masses, we can try to analyze the inverse problem: Given a
set of masses observed experimentally, how can we read the pattern of
the underlying mass parameters and determine the value of the messenger
scale? In this section, I will present some general methods for
addressing this question.
This question of the form of the underlying soft supersymmetry-breaking
parameters requires careful thought. If supersymmetric particles are
discovered at LEP 2 or LHC, this will become the most important
question in high-energy physics. It is therefore important not to
trivialize this question or to address it only in overly restrictive
contexts. In reading the literature on supersymmetry experiments
at colliders, it is important to keep in mind the broadest range of
possibilities for the spectrum of superparticles. Be especially
vigilant for code-words such as `the minimal SUGRA framework'
\cite{BSnow} or `the Monte Carlo generator described in [93]'
\cite{Atlas} which imply the restriction to the special case in which
$M_0$ is universal and ${\cal M}$ is close to the Planck mass.
Nevertheless, in this section, I will make some simplifying
assumptions. If the first supersymmetric partners are not found at LEP
2, the $D$-term contribution \leqn{eq:c5} is a small correction to the
mass formula. In any event, I will ignore it from here on. Since this
term is model-independent, it can in principle be computed and
subtracted if the value of $\tan\beta$ is known. (It is actually not so
easy to measure $\tan\beta$; a collection of methods is given in
\cite{FMor}.) In addition, I will ignore the effects of weak-scale
radiative corrections. These are sometimes important and can distort
the overall pattern unless they are subtracted correctly
\cite{DamienII}.
I will also assume, in my description of the spectrum of scalars, that
the spectrum of gauginos is given in terms of $m_2$ by gaugino
unification. As I have explained in the previous section, gaugino
unification is a feature of the simplest schemes for generating the
soft supersymmetry-breaking masses both when ${\cal M}$ is very large and
when it is relatively small. However, there are many more complicated
possibilities. The assumption of gaugino unification can be tested
experimentally, as I will explain in Section~4.5. This is an essential
part of any experimental investigation of the superparticle spectrum.
If the assumption is not valid, that also affects the interpretation of
the spectrum of scalar particles. In particular, the renormalization
effects included in the various curves shown in this section must be
recomputed using the correct mass relations among the three gauginos.
Once the gaugino masses are determined, we can ask about the relation
between the mass spectrum of gauginos and that of scalars. To analyze
this relation, it is useful to form the `Dine-Nelson plot', that is,
the plot of
\begin{equation}
{M_f\over m_2} \quad \mbox{against} \quad
C \equiv \left[\sum_i C_2(r_i){\alpha_i^2\over \alpha_2^2} \right]^{1/2} \ ,
\eeq{eq:j5}
suggested by \leqn{eq:i5}. Some sample curves on this plot are shown
in Figure~\ref{fig:twelve}. The quantity $C$ takes on only five
distinct values, given by the $SU(3)\times SU(2)\times U(1)$ quantum
numbers of $\bar e$, $L$, $\bar d$, $\bar u$, and $Q$. These are
indicated in the figure as vertical dashed lines. (The values of $C$
for $\bar d$ and $\bar u$ are almost identical.) The dot-dash line is
the prediction of \leqn{eq:i5}. The solid lines are the predictions of
the renormalization group term in \leqn{eq:h5} for ${\cal M} = 100$ TeV,
$2\times 10^{16}$ GeV (the grand unification scale), and $10^{18}$ GeV
(the superstring scale).
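For orientation, the five values of $C$ can be estimated directly from
\leqn{eq:j5}; these are rough numbers of my own, using $\alpha_1/\alpha_2
\approx 0.5$, $\alpha_3/\alpha_2 \approx 3.5$, $Y(\bar e) = 1$, and $Y(L)
= -\frac{1}{2}$:
\[
C(\bar e) = \left[\frac{3}{5}(0.5)^2\right]^{1/2} \approx 0.4\ , \qquad
C(L) = \left[\frac{3}{4} + \frac{3}{5}\cdot\frac{1}{4}(0.5)^2\right]^{1/2}
\approx 0.9\ , \qquad
C(Q) \approx \left[\frac{4}{3}(3.5)^2 + \frac{3}{4}\right]^{1/2} \approx 4\ .
\]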
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfysize=4.5in\epsfbox{DineNelson.eps}}
\end{center}
\caption[*]{The simplest predictions for the mass spectrum of
squarks and sleptons, expressed on the Dine-Nelson plot
\protect\leqn{eq:j5}. The dot-dashed curve is the
prediction of \protect\leqn{eq:i5}; the solid curves show
the effect of renormalization-group evolution with (from
bottom to top) ${\cal M} = 10^5$ GeV, $2\times 10^{16}$ GeV,
$10^{18}$ GeV.}
\label{fig:twelve}
\end{figure}
With this orientation, it is interesting to ask how a variety of models
of supersymmetry breaking appear in this presentation. In
Figure~\ref{fig:thirteen}, I show the Dine-Nelson plot for a collection
of models from the literature discussed in \cite{mysusy}. The highest
solid curve from Figure~\ref{fig:twelve} has been retained for
reference. The model in the upper left-hand corner is the `minimal
SUGRA' model with a universal $M_0$ at the Planck scale. In this case,
the dashed curve lies a constant distance in $m^2$ above the solid
curve. The model in the upper right-hand corner is that of \cite{DNNS}
with renormalization-group corrections properly included. The model in
the bottom right-hand corner gives an example of the alignment scenario
of \cite{NLS}. The plot is drawn in such a way as to suggest that the
underlying soft scalar masses tend to zero for the first generation of
quarks and leptons. This behavior could be discovered experimentally
with the analysis I have suggested here.
\begin{figure}[p]
\begin{center}
\leavevmode
{\epsfysize=6.0in\epsfbox{DineNelsonex.eps}}
\end{center}
\caption{Scalar spectrum predicted in a number of theoretical models of
supersymmetry breaking, as displayed on the Dine-Nelson plot, from
\protect\cite{mysusy}.}
\label{fig:thirteen}
\end{figure}
It is interesting that the various models collected in
Figure~\ref{fig:thirteen} look quite different to the eye in this
presentation. This fact gives me confidence that, if we could actually
measure the mass parameters needed for this analysis, those data would
provide us with incisive information on the physics of the very large
scales of unification and supersymmetry breaking.
\subsection{The superpartners of $W$ and Higgs}
Now that we have framed the problem of measuring the mass spectrum of
superparticles, we must address the question of how this can be done.
What are the signatures of the presence of supersymmetric particles,
and how can we translate from the characteristics of observable
processes to the values of the parameters which determine the
supersymmetry spectrum?
I will discuss the signatures and decay schemes for superparticles in
the next section. First, though, we must discuss a complication which
needs to be taken into account in this phenomenology.
After $SU(2)\times U(1)$ symmetry-breaking, any two particles with the
same color, charge, and spin can mix. Thus, the
spin-$\frac{1}{2}$ supersymmetric partners of the $W$ bosons and the charged
Higgs bosons can mix with one another. Similarly, the partners of the
$\gamma$, $Z^0$, $h_1^0$, and $h^0_2$ enter into a $4\times 4$ mixing
problem.
Consider first the mixing problem of the charged fermions. The mass
terms for these fermions arise from the gaugino-Higgs coupling in
\leqn{eq:h4}, the soft gaugino mass term in \leqn{eq:o4}, and the
fermion mass term arising from the superpotential \leqn{eq:l4}. The
relevant terms from the Lagrangian are
\begin{eqnarray}
\Delta{\cal L} &=& - \sqrt{2}i{g\over 2}\left(
h_2^0 \widetilde w^{-T} c \widetilde h^+_2
- \widetilde h^{-T}_1 c \widetilde w^+ h^0_1 \right) \nonumber \\
& & \hskip 0.4in
- m_2 \widetilde w^{-T} c \widetilde w^+ +\mu \widetilde h_1^{-T} c
\widetilde h_2^+\ .
\eeqa{eq:k5}
If we replace $h^0_1$ and $h^0_2$ by their vacuum expectation values
in \leqn{eq:a5}, these terms take the form
\begin{equation}
\Delta{\cal L} = - \pmatrix{ \widetilde w^- & i\widetilde h^-_1\cr}^T c\,
\mbox{\bf m}
\pmatrix{ \widetilde w^+ \cr i \widetilde h^+_2\cr}\ ,
\eeq{eq:l5}
where ${\bf m}$ is the mass matrix
\begin{equation}
\mbox{\bf m} = \pmatrix{ m_2 & \sqrt{2}m_W \sin\beta \cr
\sqrt{2}m_W \cos\beta & \mu\cr }
\eeq{eq:m5}
The physical massive fermions are the eigenstates of this mass matrix.
They are called {\em charginos}, $\widetilde \chi^\pm_{1,2}$, where 1
labels the lighter state. More precisely, the charginos
$\widetilde\chi_1^+$, $\widetilde\chi_2^+$ are the linear combinations
that diagonalize the matrix $\mbox{\bf m}^\dagger \mbox{\bf m}$, and
$\widetilde\chi_1^-$, $\widetilde\chi_2^-$ are the linear combinations
that diagonalize the matrix $\mbox{\bf m}\mbox{\bf m}^\dagger$.
\begin{figure}[htb]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{WHMixing.eps}}
\end{center}
\caption{Contours of fixed chargino mass in the plane of the
mass parameters $(\mu,m_2)$, computed for $\tan\beta = 4$.}
\label{fig:fourteen}
\end{figure}
The diagonalization of the matrix \leqn{eq:m5} is especially simple in
the limit in which the supersymmetry parameters $m_2$ and $\mu$ are
large compared to $m_W$. In the region $\mu > m_2 \gg m_W$,
$\widetilde\chi^+_1$ is approximately $\widetilde w^+$, with mass
approximately $m_2$, while $\widetilde\chi^+_2$ is approximately $\widetilde
h_2^+$, with mass approximately $\mu$. For $m_2 > \mu \gg m_W$, the
content of $\widetilde\chi^+_1$ and $\widetilde\chi^+_2$ reverses.
More generally, we refer to the region of parameters in which
$\widetilde\chi^+_1$ is mainly $\widetilde w^+$ as the {\em gaugino
region}, and that in which $\widetilde\chi^+_1$ is mainly $\widetilde
h_2^+$ as the {\em Higgsino region}. If charginos are found at LEP 2, it
is quite likely that they will be mixtures of gaugino and Higgsino;
however, the region of parameters in which the charginos are
substantially mixed decreases as the mass increases. The contours of
constant $\widetilde\chi^+_1$ mass in the $(\mu, m_2)$ plane, for
$\tan\beta =4$ are shown in Figure~\ref{fig:fourteen}.
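To get a feeling for the size of the mixing effects, consider an
illustrative parameter point of my own choosing: $m_2 = 150$ GeV, $\mu =
300$ GeV, $\tan\beta = 4$, so that $\sin\beta \approx 0.97$, $\cos\beta
\approx 0.24$, and $\sqrt{2}\, m_W \approx 114$ GeV. Numerical
diagonalization of \leqn{eq:m5} then gives
\[
m(\ch{1}) \approx 127\ \mbox{GeV}\ , \qquad
m(\ch{2}) \approx 331\ \mbox{GeV}\ ,
\]
so even for $\mu = 2 m_2$, the mixing shifts the lighter chargino mass
noticeably below $m_2$.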
An analysis similar to that leading to \leqn{eq:m5} gives the mass
matrix of the neutral fermionic partners. This is a $4\times 4$ matrix
acting on the vector $(\widetilde b, \widetilde w^3, i\widetilde
h_1^0, i\widetilde h_2^0)$, where $\widetilde b$ and $\widetilde w^3$
are the partners of the $U(1)$ and the neutral $SU(2)$ gauge boson. In
this basis, the mass matrix takes the form
\begin{equation}
\mbox{\bf m} = \pmatrix{m_1 & 0 & - m_Z s \cos\beta & m_Z s\sin\beta \cr
0 & m_2 & m_Z c \cos\beta & - m_Z c\sin\beta \cr
- m_Z s \cos\beta & m_Z c\cos\beta & 0 & -\mu\cr
m_Z s \sin\beta & - m_Z c\sin\beta & -\mu & 0 \cr} \ .
\eeq{eq:n5}
The linear combinations which diagonalize this matrix are called
{\em neutralinos}, $\widetilde\chi^0_1$ through $\widetilde\chi^0_4$ from
lowest to highest mass. The properties of these states are similar to
those of the charginos. For example, in the gaugino region,
$\widetilde \chi^0_1$ is mainly $\widetilde b$ with mass $m_1$, and
$\widetilde \chi^0_2$ is mainly $\widetilde w^3$, with mass $m_2$.
Note that, when $\mu =0$, the neutralino mass matrix \leqn{eq:n5} has
an eigenvector with zero eigenvalue $(0,0,\sin\beta,\cos\beta)$. In
addition, the vector $(0,0, \cos\beta, -\sin\beta)$ has a relatively
small mass $m_\chi \sim m_Z^2/m_2$. This situation is excluded by the
supersymmetry searches at LEP 1, for example, \cite{LEPsusy}.
Thus, we are required
on phenomenological grounds to include the superpotential \leqn{eq:l4}
with a nonzero value of $\mu$. It is also important to note that, with
the `minimal SUGRA' assumptions used in many phenomenological studies,
it is easiest to arrange electroweak symmetry breaking through the
renormalization group mechanism discussed in Section 3.7 if $\mu$
is of order
$m_3 \approx 3.5 m_2$. Thus, this set of assumptions typically leads to
the gaugino region of the chargino-neutralino physics.
\subsection{Decay schemes of superpartners}
With this information about the mass eigenstates of the superpartners,
we can work out their decay schemes and, from this, their signatures.
As I have explained at the end of Section 3.5, I restrict this
discussion to the situation in which $R$-parity, given by \leqn{eq:m4},
is conserved and so the lightest supersymmetric partner is stable. In
most of this discussion, I will assume that this stable particle is the
lightest neutralino $\widetilde\chi^0_1$. The neutralino is a massive
but weakly-interacting particle. It would not be observed directly
in a detector at a high-energy collider but rather would appear as
missing energy and unbalanced momentum.
In this context, we can discuss the decays of specific superpartners.
Clearly, the lighter superpartners will have the simplest decays, while
the heavier superpartners will decay to the lighter ones. Since heavy
squarks and sleptons often decay to charginos and neutralinos, it is
convenient to begin with these.
The decay pattern of the lighter chargino depends on its field content
and, in particular, on whether its parameters lie in the gaugino region
or the Higgsino region. In the gaugino region, the lighter chargino
is mainly $\widetilde w^+$, with mass $m_2$. The second neutralino is
almost degenerate, but the first neutralino has mass $m_1 = 0.5 m_2$,
assuming gaugino unification. If $m_2 > 2 m_W$, the decay $\ch{1}\to
W^+ \ne{1}$ typically dominates. If $m_2$ is smaller, the chargino
decays to 3-body final states through the diagrams shown in
Figure~\ref{fig:fifteen}, and through the analogous diagrams involving
quarks. The last two diagrams involve virtual sleptons. If the
slepton mass is large, the branching ratio to quarks versus leptons is
the usual color factor of 3. However, if the sleptons are light, the
branching ratio to leptons may be enhanced.
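In the heavy-slepton limit, a quick counting estimate (my own, ignoring
phase space and QCD corrections) follows from the open channels $e^+\nu$,
$\mu^+\nu$, $\tau^+\nu$, and $u \bar d$, $c\bar s$ in three colors each:
\[
{\rm BR}(\ch{1} \to \ell^+ \nu\, \ne{1}) \approx {1\over 9}
\ \mbox{per lepton flavor}\ , \qquad
{\rm BR}(\ch{1} \to q\bar q'\, \ne{1}) \approx {2\over 3}\ .
\]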
In the Higgsino region, the chargino $\ch{1}$ and the two lightest
neutralinos $\ne{1}$, $\ne{2}$ are all roughly degenerate at the mass
$\mu$. The first diagram in Figure~\ref{fig:fifteen} dominates in this
case, but leads to only a small visible energy in the $\ell^+ \nu$ or
$u\bar d$ system.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=3.75in\epsfbox{Cdecay.eps}}
\end{center}
\caption{Diagrams leading to the decay of the chargino $\ch{1}$ to the
3-body final state $\ell^+\nu \ne{1}$. The chargino can decay to $u\bar
d \ne{1}$ by similar processes.}
\label{fig:fifteen}
\end{figure}
The decay schemes of the second neutralino $\ne{2}$ are similar to
those of the chargino. Since supersymmetry models typically have a
light neutral Higgs boson $h^0$, the decay $\ne{2} \to \ne{1} h^0$ may
be important. If neither this process nor the on-shell decay to $Z^0$
is allowed, the most important decays are the 3-body processes such as
$\ne{2} \to \ne{1} q \bar q$. The process $\ne{2} \to \ne{1}
\ell^+\ell^-$ is particularly important at hadron colliders, as we will
see in Section~4.8.
Among the squarks and sleptons, we see from Figure~\ref{fig:thirteen}
that the $\widetilde e_R^-$ of each generation is typically the
lightest. This particle couples to $U(1)$ but not $SU(2)$ and so, in
the gaugino region, it decays through $\widetilde e^-_R \to e
\ne{1}$. On the other hand, the partners $\tilde L$ of the left-handed
leptons prefer to decay to $\ell \ne{2}$ or $\nu \ch{1}$ if these modes
are open.
It is a typical situation that the squarks are heavier than the gluino.
For example, the renormalization group term in \leqn{eq:h5}, with ${\cal M}$
of the order of the unification scale, already gives a contribution
equal to $3m_2$. In that case, the squarks decay to the gluino,
$\widetilde q \to q \widetilde g$. If the gluinos are heavier, then,
in the gaugino region, the superpartners of the right-handed quarks
decay dominantly to $q\ne{1}$, while the partners of the left-handed
quarks prefer to decay to $q \ne{2}$ or $q \ch{1}$.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfbox{Gluino.eps}
\end{center}
\caption{Branching fractions for gluino decay in the various classes
of final states possible for $m(\widetilde g) < m(\widetilde q)$,
from \protect\cite{BGH}.
The four graphs correspond to the gluino masses (a) 120 GeV,
(b) 300 GeV, (c) 700 GeV, (d) 1000 GeV. The branching fractions
are given as a function of $\mu$ with $m_2$ determined from the gluino
mass by the gaugino unification relation \leqn{eq:x4}.
\label{fig:sixteen}
\end{figure}
If the squarks and gluinos are much heavier than the color-singlet
superpartners, their decays can be quite complex, including cascades
through heavy charginos, neutralinos, and Higgs bosons \cite{BBTW,BGH,BTW}.
Figure~\ref{fig:sixteen} shows the branching fractions of the gluino as
a function of $\mu$, assuming gaugino unification and the condition
that the squarks are heavier than the gluino. The boundaries apparent
in the figure correspond to the transition from the gaugino region (at
large $|\mu|$) to the Higgsino region. The more complex decays
indicated in the figure can be an advantage in hadron collider
experiments, because they lead to characteristic signatures such as
multi-leptons or direct $Z^0$ production in association with missing
transverse momentum. On the other hand, as the dominant gluino decay
patterns become more complex, the observed inclusive cross sections
depend more indirectly on
the underlying supersymmetry parameters.
Up to now, I have been assuming that the lightest superpartner is the
$\ne{1}$. However, there is an alternative possibility that is quite
interesting to consider. According to Goldstone's theorem, when a
continuous symmetry is spontaneously broken, a massless particle
appears as a result. In the most familiar examples, the continuous
symmetry transforms the internal quantum numbers of fields, and the
massless particle is a Goldstone boson. If the spontaneously broken
symmetry is coupled to a gauge boson, the Goldstone boson combines with
the gauge boson to form a massive vector boson; this is the Higgs
mechanism. Goldstone's theorem also applies to the spontaneous
breaking of supersymmetry, but in this case the massless particle is a
Goldstone fermion or {\em Goldstino}. It would seem, then, that the
Goldstino should be the
lightest superpartner, into which all other superparticles decay.
To analyze this question, we need to know two results from the theory of
the Goldstino. Both have analogues in the usual theory of Goldstone
bosons. I have already pointed out in \leqn{eq:n3} that the gravitino,
the spin-$\frac{3}{2}$ supersymmetric partner of the graviton, acts as the
gauge field of local supersymmetry. This particle can participate in a
supersymmetric version of the Higgs mechanism. If supersymmetry is
spontaneously broken by the expectation value of an $F$ term, the
gravitino and the Goldstino combine to form a massive spin-$\frac{3}{2}$
particle with mass
\begin{equation}
m_\psi = {\VEV{F}\over \sqrt{3} m_{\mbox{\scriptsize Pl}}}\ ,
\eeq{eq:o5}
where $m_{\mbox{\scriptsize Pl}}$ is the Planck mass. Notice that, if the messenger scale
${\cal M}$ is of the order of $m_{\mbox{\scriptsize Pl}}$, this mass scale is of the order of the
scale $m_S$ of soft supersymmetry-breaking mass terms given in
\leqn{eq:p4}. In fact, in this case, the massive gravitino is
typically heavier than the $\ne{1}$. On the other hand, if ${\cal M}$ is of
order 100 TeV, with $\VEV{F}$ such that the superparticle masses are at
the weak interaction scale, $m_\psi$ is of order $10^{-2}$ eV and so is
much lighter than any of the superpartners we have discussed above.
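As a rough check of these numbers (my own, assuming $\VEV{F} \sim m_S
{\cal M}$ as in \leqn{eq:p4}): for $m_S \sim 1$ TeV and ${\cal M} \sim
100$ TeV, $\VEV{F} \sim 10^8$ GeV$^2$, and
\[
m_\psi \sim {10^8\ \mbox{GeV}^2 \over \sqrt{3}\cdot 10^{19}\ \mbox{GeV}}
\sim 10^{-11}\ \mbox{GeV} \sim 10^{-2}\ \mbox{eV}\ .
\]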
The second result bears on the probability for producing Goldstinos.
The methods used to analyze pion physics in QCD generalize to this case
and predict that the Goldstino $\widetilde G$ is produced through the
effective Lagrangian
\begin{equation}
\Delta{\cal L} = {1\over \VEV{F}} j^T_\mu c \partial^\mu \widetilde G \ ,
\eeq{eq:p5}
where $\VEV{F}$ is the supersymmetry-breaking vacuum expectation value
in \leqn{eq:o5} and $j_\mu$ is the conserved current associated with
supersymmetry. Integrating by parts, this gives a coupling for the vertex
$\widetilde f \to f \widetilde G$ proportional to
\begin{equation}
{\Delta m\over \VEV{F}} \ ,
\eeq{eq:q5}
where $\Delta m$ is the supersymmetry-breaking mass difference between
$f$ and $\widetilde f$. If the Goldstino becomes incorporated into a
massive spin-$\frac{3}{2}$ field, this does not affect the production
amplitude, as long as the Goldstinos are emitted at energies large
compared to their mass. I will discuss this point for the more
standard case of a Goldstone boson in Section 5.3.
This result tells us that, if the messenger scale ${\cal M}$ is of
order $m_{\mbox{\scriptsize Pl}}$ and $\VEV{F}$ is connected with ${\cal M}$ through \leqn{eq:p4},
the rate for the decay of any superpartner to the Goldstino is so slow
that it is irrelevant in accelerator experiments. On the other hand,
if ${\cal M}$ is less than 100 TeV, decays to the Goldstino can become
relevant.
For the case of the coupling of the $\widetilde b$, the superpartner
of the $U(1)$ gauge boson, to the photon and $Z^0$ fields, the
effective Lagrangian \leqn{eq:p5} takes the more explicit form
\begin{equation}
\Delta{\cal L} = {m_1\over \VEV{F}} \widetilde b^\dagger \sigma^{\mu\nu}
(c F_{\mu\nu} - s Z_{\mu\nu}) \widetilde G \ .
\eeq{eq:r5}
This interaction leads to the decay $ \widetilde b \to \gamma
\widetilde G$ with lifetime \cite{DDTR}
\begin{equation}
c\tau = (\mbox{0.1 mm})\left({\mbox{100\ GeV}\over m_1 }\right)^5
\left({\VEV{F}^{1/2}\over \mbox{100\ TeV}}\right)^4 \ .
\eeq{eq:s5}
It is difficult to estimate whether the value of $c\tau$ resulting from
\leqn{eq:s5} should be meters or microns. But this argument does
predict that, if the $\ne{1}$ is the lightest superpartner of Standard
Model particles, all decay chains should end with the decay of the
$\ne{1}$ to $\gamma \widetilde G$. If the lifetime \leqn{eq:s5} is
short, each $\ne{1}$ momentum vector, which we visualized above as
missing energy, should be realized instead as missing energy plus a
direct photon.
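To illustrate the range (numbers of my own, from \leqn{eq:s5} with $m_1 =
100$ GeV): $\VEV{F}^{1/2} = 30$ TeV gives $c\tau \approx 1\ \mu\mbox{m}$,
while $\VEV{F}^{1/2} = 1000$ TeV gives $c\tau \approx 1$ m, spanning the
full range of distance scales resolvable in a collider detector.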
It is also possible in this case of small $\VEV{F}$ that the lightest
sleptons $\widetilde e^-_R$ could be lighter than the $\ne{1}$. If
these particles are the lightest superparticles, they lead to an
unacceptable cosmological abundance of stable charged matter. This
problem disappears, however, if they can decay to the Goldstino. In
that case, all supersymmetric decay chains terminate with leptons and
missing energy, for example,
\begin{equation}
\ne{1} \to \ell^- \widetilde\ell^+_R \to \ell^- \ell^+ \widetilde G \ .
\eeq{eq:t5}
From here on, I will concentrate on the most straightforward case in
which the $\ne{1}$ is the lightest superparticle and is stable over
the time scales observable in collider experiments. However, it is
important to keep these alternative phenomenologies in mind when you
are actually looking for superparticle signatures in the data.
\subsection{The mass scale of supersymmetry}
At last, we have all the background we require to discuss the
experiments which will detect and study supersymmetric particles at
colliders. In this section, I would like to recapitulate the general
ideas that we have formulated for this study. I will also note the
implications of these ideas for the mass range of supersymmetric
particles. If the picture of supersymmetry that I have constructed here is
correct, the supersymmetric particles should be discovered at planned,
or even at the present, accelerators.
Although the mass scale of supersymmetry depends on many
parameters and is in principle adjustable over a large range, there is
a good reason to expect to find supersymmetric particles relatively
near at hand. As I have discussed in Section 3.7, supersymmetry
provides a mechanism for electroweak symmetry breaking. If we assume
that this indeed is the mechanism of supersymmetry breaking, the $W$
and $Z$ masses must be masses characteristic of the scale of soft
supersymmetry-breaking parameters. Alternatively, $m_W$ can only be
much less than $m_S$ in \leqn{eq:p4} by virtue of an unnatural
cancellation or fine-tuning of parameters. This possibility has been
studied quantitatively in a number of theoretical papers
\cite{ENT,BG,ACas}, with the conclusion that the relation between
$m_W$ and $m_S$ is natural (by the authors' definitions) only when
\begin{equation}
m_2 < 3 m_W \ .
\eeq{eq:tt5}
Of course, it is possible that the mechanism of electroweak symmetry
breaking does not involve supersymmetry. In that case, there might
still be supersymmetry at a very high scale (to satisfy aesthetic
arguments or to aid in the quantization of gravity), but in this case
supersymmetry would not be relevant to experimental high-energy physics.
The schemes for the supersymmetric mass spectrum
discussed in Sections 4.2 and 4.3 give a definite expectation for the
ordering of states. The gaugino unification relation predicts that
the gluino is the heaviest of the gauginos, with the on-shell gluino
mass satisfying
\begin{equation}
m(\widetilde g) \sim 4 m_2 \ .
\eeq{eq:u5}
Our results were much less definitive about the mass relations of the
squarks and sleptons. Roughly, though,
\begin{equation}
m(\widetilde q) \sim (2-6) \cdot m(\widetilde \ell)\ , \quad
\mbox{and} \quad
m(\widetilde\ell) \sim
m_2 \ ,
\eeq{eq:v5}
in the models discussed in Section 4.3.
The relations \leqn{eq:tt5}--\leqn{eq:v5} predict that we should
find charginos below 250 GeV in mass and gluinos below 1 TeV. This
mass region is not very far away.
The LEP 2 and Tevatron experimental programs will cover
almost half of this parameter space in the next five years. The LHC
can probe for supersymmetric particles up to masses about a factor 3
beyond the region predicted by the relations above, and an $e^+e^-$ linear
collider with up to 1.5 TeV in the center of mass would have a roughly
equivalent reach.
Search strategies for supersymmetric particles depend on the
detailed properties of the model. But in general,
assuming $R$-parity conservation and the identification
of $\ne{1}$ as the lightest superparticle, the basic signature of
supersymmetry is new particle production associated with missing
energy. In collider experiments, we would typically be looking for a
multi-jet or multi-lepton final state, together with the characteristic
missing transverse momentum or acoplanarity.
Because I would like to continue in a somewhat different direction, I
will not describe in detail the techniques and strategies for the
discovery of supersymmetry at these colliders. The search strategies
for various supersymmetric particles at LEP 2 are described in
\cite{LEPtwo}. Experimental strategies for discovering supersymmetry
at the Tevatron are reviewed in \cite{Tevatron}, together with an
estimation of the reach in the mass spectrum.
\begin{figure}[p]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{SUSYLHC.eps}}
\end{center}
\caption{Cross sections for various signatures of supersymmetric particle
production at the LHC, from \protect\cite{BTW}. The observables
studied are, from top to bottom, missing $E_T$, like-sign dileptons,
multi-leptons, and $Z +$ leptons. The top graph plots the cross sections
as a function of $m(\widetilde g)$ for $m(\widetilde q) = 2 m(\widetilde g)$,
and $\mu = -150$ GeV, and $m_2$ given by gaugino unification. The bottom
graph, plotted for $m(\widetilde g) = 750$ GeV as a function of $\mu$,
shows the model-dependence of the cross sections.}
\label{fig:seventeenxx}
\end{figure}
It is important to point out, though, that if the phenomenology of
supersymmetry follows the general lines I have laid out here, it will
be discovered, at the latest, by the LHC. The cross sections for LHC
signatures of supersymmetry involving multiple leptons and direct $Z^0$
production associated with missing transverse energy are shown in
Figure~\ref{fig:seventeenxx} \cite{BTW}. These cross sections are very
large, of order 100 fb, for example, for the like-sign dilepton signal,
at a collider that is designed to produce an event sample of 100
fb$^{-1}$ per year per detector. Supersymmetry can also be seen by
looking for events with large jet activity and missing transverse
momentum. A sample comparison of signal and background for an
observable that measures the jet activity is shown in
Figure~\ref{fig:seventeen}~\cite{Ianscrew}. The authors of this
analysis conclude that, at the LHC, the major backgrounds to
supersymmetry reactions do not come from Standard Model background
processes but rather from other
supersymmetry reactions.
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=4in\epsfysize=4.5in\epsfbox{LHCmisse.eps}}
\end{center}
\caption{Simulation of the observation of supersymmetric particle
production at the LHC, from \protect\cite{Ianscrew}, at a point in
parameter space with $m(\widetilde g) = 1$ TeV. The observable
$M_{\rm eff}$ is given by the sum of the missing $E_T$ and the sum of the
$E_T$ values for the four hardest jets. The supersymmetry signal is
shown as the open circles. Among the backgrounds, the squares are
due to QCD processes, and the other points shown are due to $W$,
$Z$, and $t$ production.}
\label{fig:seventeen}
\end{figure}
That prospect is enticing, but it is only the beginning of an experimental
research program on supersymmetry. We have seen that the theory of the
supersymmetry spectrum is complex
and subtle. The investigation of supersymmetry
should allow us to measure this spectrum. That in turn will give us access
to the soft supersymmetry-breaking parameters, which are generated at
very short distances and which therefore should hold information about
the very deep levels of fundamental physics. So it is important to
investigate to what extent these experimental measurements
are actually feasible using accelerators that we can foresee.
In discussing this question, I will assume, pessimistically, that the
scale of supersymmetry is relatively high, and so I will concentrate on
experiments for the high-energy colliders of the next generation, the LHC
and the $e^+e^-$ linear collider discussed in the introduction.
As a byproduct, this approach will illustrate the deep analytic
power that both of these machines can bring to bear on new
physical phenomena.
\subsection{Superspectroscopy at $e^+e^-$ colliders}
I will start this discussion of supersymmetry measurements from the
side of $e^+e^-$ colliders. It is intuitively clear that, if we had an
$e^+e^-$ collider operating in the energy region appropriate to
supersymmetric particle production, some precision measurements could
be made. But I have stressed that the soft supersymmetry-breaking
Lagrangian can contain a very large number of parameters which become
intertwined in the mass spectrum. Thus, it is important to ask, is
there a set of measurements which extracts and disentangles
these parameters? I will explain now how to do that.
I do not wish to imply, with this approach, that precision supersymmetry
measurements are possible only at $e^+e^-$ colliders. In fact, the next
section will be devoted to precision information that can be obtained
from hadron collider experiments. And, indeed, to justify the
construction of an $e^+e^-$ linear collider, it is necessary to show that
the $e^+e^-$ machine adds significantly to the results that will be
available from the LHC. Nevertheless, it has pedagogical virtue to begin
from
the $e^+e^-$ side, because the $e^+e^-$ experiments allow a completely
systematic approach to the issues of parameter determination. I will
return to the question of comparing $e^+e^-$ and $pp$ colliders in
Section 4.9.
To begin, let me review some of the parameters of future $e^+e^-$ colliders.
Cross sections for $e^+e^-$ annihilation decrease with
the center-of-mass energy as $1/E_{\mbox{\scriptsize CM}}^2$. Thus, to be effective, a
future collider must provide a data sample of 20--50 fb$^{-1}$/year at
a center-of-mass energy of 500 GeV, and a data sample increasing from
this value as $E_{\mbox{\scriptsize CM}}^2$ at higher energies. The necessary luminosities
are envisioned in the machine designs \cite{loew}. Though new sources
of machine-related background appear, the experimental environment is
anticipated to be similar to that of LEP \cite{nlcbook}. An important
feature of the experimental arrangement not available at LEP is an
expected 80--90\% polarization of the electron beam. We will see in a moment
that this
polarization provides a powerful physics analysis tool.
The simplest supersymmetry analyses at $e^+e^-$ colliders involve $e^+e^-$
annihilation to slepton pairs. Let $\widetilde \mu_R$ denote the
second-generation $\widetilde e^-_R$. This particle has a simple
decay $\widetilde \mu_R \to \mu \ne{1}$, so pair-production of
$\widetilde \mu_R$ results in a final state with $\mu^+\mu^-$ plus
missing energy. The production process is simple $s$-channel
annihilation through a virtual $\gamma$ and $Z^0$; thus, the cross
section and polarization asymmetry are characteristic of the Standard
Model quantum numbers of the $\widetilde\mu_R$ and are independent of the
soft supersymmetry-breaking parameters.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=2.5in\epsfbox{LCmass.eps}}
\end{center}
\caption{Schematic energy distribution in a slepton or squark decay,
allowing a precision supersymmetry mass measurement at an
$e^+e^-$ collider.}
\label{fig:eighteenvv}
\end{figure}
It is straightforward to measure the mass of the $\widetilde \mu_R$, and
the method of this analysis can be applied to many other examples.
Because the $\widetilde \mu_R$ is a scalar, it decays isotropically to
its two decay products. When we transform to the lab frame, the
distribution of $\mu$ energies is flat between the kinematic endpoints,
as indicated in Figure~\ref{fig:eighteenvv}. The endpoints occur at
\begin{equation}
E_\pm = (1 \pm \beta) \gamma {\cal E} \ ,
\eeq{eq:w5}
with $\beta = (1 - 4 m(\widetilde{\mu})^2/E_{\mbox{\scriptsize CM}}^2)^{1/2}$, $\gamma = E_{\mbox{\scriptsize CM}}/
2 m(\widetilde{\mu})$, and
\begin{equation}
{\cal E} = {m(\widetilde{\mu})^2 - m(\ne{1})^2\over 2\, m(\widetilde{\mu}) }
\ .
\eeq{eq:x5}
Given the measured values of $E_\pm$, one can solve algebraically for
the mass of the parent $\widetilde\mu_R$ and the mass of the missing
particle $\ne{1}$. Since many particles have two-body decays to the
$\ne{1}$, this mass can be determined redundantly. For heavy
supersymmetric particles, the lower endpoint may sometimes be obscured
by background from cascade decays through heavier charginos and
neutralinos. So it is also interesting to note that, once the mass of
the $\ne{1}$ is known, the mass of the parent particle can be
determined from the measurement of the higher endpoint only.
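Explicitly, the inversion is simple kinematics (a short derivation of my
own): since $E_+ E_- = \gamma^2 (1-\beta^2) {\cal E}^2 = {\cal E}^2$ and
$E_+ + E_- = 2\gamma {\cal E}$, the endpoint measurements give
\[
m(\widetilde\mu) = {E_{\mbox{\scriptsize CM}}\, \sqrt{E_+ E_-} \over E_+ + E_-}\ , \qquad
m(\ne{1})^2 = m(\widetilde\mu)^2 - 2\, m(\widetilde\mu) \sqrt{E_+ E_-}\ .
\]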
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=5in\epsfbox{Smu.eps}}
\end{center}
\caption{Simulation of the $\widetilde\mu_R$ mass measurement at an
$e^+e^-$ linear collider, from \protect\cite{Tsuk}. The left-hand
graph gives the event distribution in the decay muon energy.
The right-hand graph shows the $\chi^2$ contours as a function of the
masses of the parent $\widetilde\mu_R$ and the daughter $\ne{1}$.}
\label{fig:eighteen}
\end{figure}
A simulation of the $\widetilde \mu_R$ mass measurement done by the JLC
group \cite{Tsuk} is shown in Figure~\ref{fig:eighteen}. The
simulation assumes 95\% right-handed
electron polarization, which essentially
eliminates the dominant background $e^+e^-\to W^+W^-$, but even with 80\%
polarization the endpoint discontinuities are clearly visible. The
measurement gives the masses of $\widetilde{\mu}_R$ and $\ne{1}$ to about
1\% accuracy. As another example of this technique,
Figure~\ref{fig:eighteenxx} shows a simulation by the NLC group
\cite{nlcbook} of the mass measurement of the $\widetilde \nu$ in
$\widetilde\nu \to e^- \ch{1}$.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=2.7in\epsfbox{Snu.eps}}
\end{center}
\caption{Simulation of the $\widetilde\nu$ mass measurement at an
$e^+e^-$ linear collider, from \protect\cite{nlcbook}. The bottom
graph gives the event distribution in the decay electron energy.
The top graph shows the $\chi^2$ contours as a function of the
masses of the parent $\widetilde\nu$ and the daughter $\ch{1}$.}
\label{fig:eighteenxx}
\end{figure}
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=3.0in\epsfbox{SProd1.eps}}
\end{center}
\caption{Feynman diagrams for the process of selectron pair production.}
\label{fig:nineteen}
\end{figure}
To go beyond the simple mass determinations, we can look at processes in
which the production reactions are more complex.
Consider, for example, the pair-production of the first-generation
$\widetilde e^-_R$. The production
goes through two Feynman diagrams, which are shown in
Figure~\ref{fig:nineteen}. Because the $\ne{1}$ is typically light
compared to other superparticles, it is the second diagram that is
dominant, especially at small angles. By measuring the forward peak in
the cross section, we obtain an additional measurement of the lightest
neutralino mass, and a measurement of its coupling to the electron. We
have seen in \leqn{eq:h4} that the coupling of $\widetilde{b}$ to $e^+
\widetilde e^-_R$ is proportional to the standard model $U(1)$ coupling
$g'$. Thus, this information can be used to determine one of the
neutralino mixing angles. Alternatively, if we have other diagnostics
that indicate that the neutralino parameters are in the gaugino region,
this experiment can check the supersymmetry relation of couplings.
For a 200 GeV $\widetilde e^-_R$, with
a 100 fb$^{-1}$ data sample at 500 GeV, the ratio of
couplings can be determined to 1\% accuracy~\cite{NFT}.
Notice that the neutralino exchange diagram in
Figure~\ref{fig:nineteen} is present only for $e^-_R e^+_L \to
\widetilde e^-_R\widetilde e^+_R$, since $\widetilde e^-_R$ is the
superpartner of the right-handed electron. On the other hand,
with the initial state $e^-_L e^+_R$, we have the analogous diagram
producing the
superpartner of the left-handed electron $\widetilde L^-$. In the
gaugino region, the process $e^-_L e^+_R \to \widetilde L^- \widetilde
L^+$ has large contributions both from $\ne{1}$ ($\widetilde b$)
exchange and from $\ne{2}$ ($\widetilde w^3$) exchange. The reaction
$e^-_L e^+_L \to \widetilde L^- \widetilde e^+_R$ is also mediated by
neutralino exchange and contains additional useful information.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{Chmass.eps}}
\end{center}
\caption{Simulation of the $\ch{1}$ mass measurement at an
$e^+e^-$ linear collider, from \protect\cite{nlcbook}. The bottom
graph gives the event distribution in the energy of the $\bar q q$
pair emitted in a $\ch{1}$ hadronic decay. The hadronic system is
restricted to a bin in mass around 30 GeV.
The top graph shows the $\chi^2$ contours as a function of the
masses of the parent $\ch{1}$ and the daughter $\ne{1}$.}
\label{fig:twentyandahalf}
\end{figure}
Along with the sleptons, the chargino $\ch{1}$ is expected to be a
relatively light particle which is available for precision measurements
at an $e^+e^-$ collider. The dominant decays of the chargino are $\ch{1}
\to q\bar q \ne{1}$ and $\ch{1}\to \ell^+\nu \ne{1}$, leading to events
with quark jets, leptons, and missing energy. In mixed hadron-lepton
events, one chargino decay can be analyzed as a two-body decay into the
observed $q\bar q$ system plus the unseen neutral particle $\ne{1}$; then the
mass measurement technique of Figure~\ref{fig:eighteenvv} can be
applied. The simulation of a sample measurement, using jet pairs
restricted to an interval around 30 GeV in mass, is shown in
Figure~\ref{fig:twentyandahalf}~\cite{nlcbook}.
The full data sample (50 fb$^{-1}$ at
500 GeV) gives the $\ch{1}$ mass to an accuracy of 1\%~\cite{NLCsusy}.
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=3.0in\epsfbox{CProd.eps}}
\end{center}
\caption{Feynman diagrams for the process of chargino pair production.}
\label{fig:twenty}
\end{figure}
The diagrams for chargino pair production are shown in
Figure~\ref{fig:twenty}. The cross section depends strongly on the
initial-state polarization. If the $\widetilde \nu$ is very heavy, it
is permissible to ignore the second diagram; then the first diagram
leads to a cross section roughly ten times larger for $e^-_L$ than for
$e^-_R$. If the $\widetilde \nu$ is light, this diagram interferes
destructively to lower the cross section.
For a right-handed electron beam, the second diagram vanishes. Then
there is an interesting connection between the chargino production
amplitude and the values of the chargino mixing angles \cite{Tsuk}.
Consider first the limit of very high energy, $E_{\mbox{\scriptsize CM}}^2 \gg m_Z^2$. In
this limit, we can ignore the $Z^0$ mass and consider the virtual gauge
bosons in the first diagram to be the $U(1)$ and the neutral $SU(2)$
bosons. But the $e^-_R$ does not couple to the $SU(2)$ gauge bosons.
On the other hand, the $W^+$ and $\widetilde w^+$ have zero hypercharge
and so do not couple to the $U(1)$ boson. Thus, at high energy, the
amplitude for $e^-_R e^+ \to \ch{1} \chm{1}$ is nonzero only if the
charginos have a Higgsino component and is, in fact, proportional to
the chargino mixing angles. Even if we do not go to asymptotic
energies, this polarized cross section is large in the Higgsino region
and small in the gaugino region, as shown in
Figure~\ref{fig:twentyone}. This information can be combined with the
measurement of the forward-backward asymmetry to determine both of the
chargino mixing angles in a manner independent of the other
supersymmetry parameters \cite{Feng}.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4.0in\epsfbox{SigC.eps}}
\end{center}
\caption[*]{Contours of constant cross section, in fb,
for the reaction $e^-_R e^+ \to \ch{1}
\chm{1}$ at $E_{\mbox{\scriptsize CM}} = 500$ GeV, from \protect\cite{Feng}. The plot
shows how the value of this cross section
maps to the position in the ($\mu, m_2$)
plane. The boundaries of the indicated regions are the curves on which
the $\ch{1}$ mass equals 50 GeV and 250 GeV.}
\label{fig:twentyone}
\end{figure}
If the study with $e^-_R$ indicates that the chargino parameters are in
the gaugino region, measurement of the differential cross section for
$e^-_L e^+ \to \ch{1} \chm{1}$ can be used to determine the magnitude
of the second diagram in Figure~\ref{fig:twenty}. The value of this
diagram can be used to estimate the $\widetilde\nu$ mass or to test
another of the coupling constant relations predicted by supersymmetry.
With a 100 fb$^{-1}$ data sample, the ratio between the $\widetilde{w}^+
\widetilde{\nu} e^-_L$ coupling and the $W^+ \nu e^-_L$ coupling can be
determined to 25\% accuracy if $m(\widetilde\nu)$ must also be
determined by the fit, and to 5\% if $m(\widetilde\nu)$ is known from
another measurement.
These examples demonstrate how the $e^+e^-$ collider experiments can
determine superpartner masses and the mixing angle of the charginos and
neutralinos. The experimental program is systematic and does not
depend on assumptions about the values of other supersymmetry
parameters. It demands only that the
color-singlet superpartners are available for study at the energy at
which the collider can run. If squarks can be pair-produced at these
energies, they can also be studied in this systematic way. Not only
can their masses be measured, but polarization observables can be used
to measure the small mass differences predicted by \leqn{eq:h5} and
\leqn{eq:i5}~\cite{FFin}.
\subsection{Superspectroscopy at hadron colliders}
At the end of Section 4.6, I explained that it should be relatively
straightforward to identify the signatures of supersymmetry at the LHC.
However, it is a challenging problem there to extract precision
information about the underlying supersymmetry parameters. For a long
time, it was thought that this information would have to come from
cross sections for specific signatures whose origin is complex and
model-dependent. However, it has been realized more recently that the
LHC can, in certain situations, offer ways to determine supersymmetry
mass parameters kinematically.
Let me briefly describe the parameters of the LHC \cite{LHCrep}.
This is a $pp$ collider with 14 TeV in the center of mass. The design
luminosity corresponds to a data sample, per experiment, of
100 fb$^{-1}$ per year. A simpler experimental environment, without
multiple hadronic collisions per proton bunch crossing, is obtained
by running at a lower luminosity of 10 fb$^{-1}$ per year, and this
is probably what will be done initially. If the supersymmetric partners
of Standard Model particles indeed lie in the region defined by
our estimates \leqn{eq:tt5}--\leqn{eq:v5}, this low luminosity should
already be sufficient to begin detailed exploration of the supersymmetry
mass spectrum.
\begin{figure}[htb]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{Basa.eps}}
\end{center}
\caption{The asymmetry between the cross sections for dilepton events
with $\ell^+\ell^+$ and those with $\ell^-\ell^-$ expected
at the LHC, plotted as a function of the ratio of the
gluino to the squark mass, from \protect\cite{Atlas}. The
three curves refer to the indicated values of the lighter
of the squark and gluino masses.}
\label{fig:twentytwo}
\end{figure}
Before we discuss methods for direct mass measurement, I should point
out that the many signatures available at the LHC which do not
give explicit kinematic reconstructions
do offer a
significant amount of information. For example, the
ATLAS collaboration \cite{Atlas, Basa} has suggested comparing the
cross-sections for like-sign dilepton events with $\ell^+\ell^+$ versus
$\ell^-\ell^-$. The excess of events with two positive leptons comes
from the process in which two $u$ quarks exchange a gluino and convert
to $\widetilde u$, making use of the fact that the proton contains more
$u$ than $d$ quarks. The contribution of this process peaks when the
squarks and gluinos have roughly equal masses, as shown in
Figure~\ref{fig:twentytwo}. Thus, this measurement allows one to
estimate the ratio of the squark and gluino masses.
Presumably, if the values of $\mu$,
$m_1$, and $m_2$ were known from the $e^+e^-$ collider experiments, it
should be possible to make a precise theory of multi-lepton production
and to use the rates of these processes to determine $m(\widetilde g)$
and $m(\widetilde q)$.
\begin{figure}[htb]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{Baersleptons.eps}}
\end{center}
\caption{Distribution of the dilepton mass in the process
$p\bar p \to \ch{1} \ne{2}+X$, with the $\ne{2}$ decaying to
$\ell^+\ell^- \ne{1}$, from \protect\cite{bcpt}.}
\label{fig:twentythree}
\end{figure}
In some circumstances, however, the LHC provides direct information on
the superparticle spectrum. Consider, for example, decay chains which
end with the decay $\ne{2}\to \ell^+\ell^-\ne{1}$ discussed in
Section~4.5. The dilepton mass distribution has a discontinuity at the
kinematic endpoint where
\begin{equation}
m(\ell^+\ell^-) = m(\ne{2}) - m(\ne{1}) \ .
\eeq{eq:y5}
The sharpness of this kinematic edge is shown in
Figure~\ref{fig:twentythree}, taken from a study of the process $q\bar
q \to \ch{1}\ne{2}$ \cite{bcpt}. Under the assumptions of gaugino
unification plus the gaugino region of parameter space, the mass
difference in \leqn{eq:y5} equals $0.5m_2$. Thus, if we have some
independent evidence for these assumptions, the position of this edge
can be used to give the overall scale of superparticle masses. Also,
if the gluino mass can be measured, the ratio of that mass to the
mass difference \leqn{eq:y5} provides a test of these assumptions.
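The factor $0.5$ is easy to trace: with gaugino unification and in the
gaugino region, $m(\ne{2}) \approx m_2$ while $m(\ne{1}) \approx m_1
\approx 0.5\, m_2$, so the endpoint sits at roughly
\[
m(\ell^+\ell^-)_{\mbox{\scriptsize max}} \approx m_2 - 0.5\, m_2
 = 0.5\, m_2 \ .
\]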
At a point in parameter space studied for the ATLAS Collaboration
in \cite{Ianscrew}, it is possible to go much further. We
need not discuss why this particular point in the `minimal SUGRA'
parameter space was chosen for special study, but it turned out to have
a number of advantageous properties. The value of the gluino mass was
taken to be
300 GeV, leading to a very large gluino production cross section, equal to
1 nb,
at the LHC. The effect of Yukawa couplings discussed in Section 3.7
lowers the masses of the superpartners of $t_L$ and $b_L$, in
particular, making $\widetilde b_L$ the lightest squark. Then a major
decay chain for the $\widetilde g$ would be
\begin{equation}
\widetilde g \to \widetilde b_L \bar b \to b \bar b \ne{2}\ ,
\eeq{eq:z5}
which could be followed by the dilepton decay of the $\ne{2}$.
\begin{figure}[phtb]
\begin{center}
\leavevmode
{\epsfxsize=3in\epsfbox{Iansb.eps}}
\medskip
{\epsfxsize=3in\epsfbox{Iansg.eps}}
\end{center}
\caption[*]{Reconstruction of the mass of the $\widetilde b$ and the
$\widetilde g$ at the LHC, at a point in supersymmetry parameter space
studied in \protect\cite{Ianscrew}. In the plot on the left,
the peak near 300 GeV shows the reconstructed $\widetilde b$. The
plot on the right shows the event distribution in the variable
$m(\widetilde g) - m(\widetilde b)$. The dashed distribution shows the
values for the events lying between 230 GeV and 330 GeV in the left-hand
figure.}
\label{fig:twentyfour}
\end{figure}
Since the number of events expected at this point is
very large, we can select events in which
the $\ell^+\ell^-$ pair falls close to its kinematic endpoint. For
these events, the dilepton pair and the daughter $\ne{1}$ are both at
rest with respect to the parent $\ne{2}$. Then, if we are also given
the mass of the $\ne{1}$, the energy-momentum 4-vector of the $\ne{2}$
is determined. This mass might be obtained from the assumptions listed
below \leqn{eq:y5}, from a more general fit of the LHC supersymmetry
data to a model of the supersymmetry mass spectrum, or from a direct
measurement at an $e^+e^-$ collider. In any event, once the momentum
vector of the $\ne{2}$ is determined, there is no more missing momentum
in the decay chain. It is now possible to successively add $b$ jets to
reconstruct the $\widetilde b_L$ and then the $\widetilde g$. The
mass peaks for these states obtained from the
simulation results of \cite{Ianscrew} are
shown in Figure~\ref{fig:twentyfour}. For a fixed $m(\ne{1})$, the
masses of $\widetilde b_L$ and $\widetilde g$ are determined to 1\%
accuracy.
It may seem that this example uses many special features of the
particular point in parameter space which was chosen for the analysis.
At another point, the spectrum might be different in a way that would
compromise parts of this analysis. For example, the $\ne{2}$ might be
allowed to decay to an on-shell $Z^0$, or the gluino might lie below
the $\widetilde b_L$. On the other hand, the method just described
can be extended to any superpartner with a three-body decay involving
one unobserved neutral. In \cite{Ianscrew}, other examples are discussed
which
apply these
ideas to decay chains that end with $\widetilde q \to \ne{1} h^0 q$
and $\widetilde t \to \ne{1} W^+ b$.
To properly evaluate the capability of the LHC to perform precision
supersymmetry measurements, we must remember that Nature has chosen (at
most) one point in the supersymmetry parameter space, and that every
point in parameter space is special in its own way. It is not likely
that we will know, in advance, which particular trick will be
most effective. However, we have now only begun the study of
strategies to determine the superparticle spectrum from the kinematics
of LHC reactions. There are certainly many more tricks to be
discovered.
\subsection{Recapitulation}
If physics beyond the Standard Model is supersymmetric, I
am optimistic about the future prospects for experimental particle
physics. At the LHC, if not before, we will discover the superparticle
spectrum. This spectrum encodes information about physics at the
energy scale of supersymmetry breaking, which might be as high as the
grand unification or even the superstring scale. If we can measure
the basic parameters that determine this spectrum,
we can uncover the patterns that will let us
decode this information and see much more deeply into fundamental
physics.
It is not clear how much of this program can already be done at the LHC
and how much must be left to the experimental program of an $e^+e^-$
linear collider. For adherents of the linear collider, the worst case
would be that Nature has chosen a minimal parameter set and also some
special mass relations that allow the relevant three or four parameters
to be determined at the LHC. Even in this case, the linear collider
would have a profoundly interesting experimental program. In this
simple scenario, the LHC experimenters will be able to fit their data
to a small number of parameters, but the hadron
collider experiments cannot verify that this is the whole story. To give
one example, it is not known how, at a hadron collider, to measure the
mass of the $\ne{1}$, the particle that provides the basic quantum of
missing energy-momentum used to build up the supersymmetry mass spectrum.
The LHC experiments may give indirect determinations of $m(\ne{1})$.
The linear collider can provide a direct precision measurement of this
particle mass. If the predicted value were found, that would be an
intellectual triumph comparable to the direct discovery of the $W$ boson
in $p\bar p$ collisions.
I must also emphasize that there is an important difference between the
study of the supersymmetry spectrum and that of the spectrum of weak
vector bosons. In the latter case, the spectrum was predicted by a
coherent theoretical model, the $SU(2)\times U(1)$ gauge theory. In the
case of supersymmetry, as I have emphasized in Section 4.3, the minimal
parametrization is just a guess---and one guess among many. Thus,
it is a more likely
outcome that a simple parametrization of the supersymmetry
spectrum would omit crucial details. To discover these features, one would
need the model-independent approach to supersymmetry parameter
measurements that the $e^+e^-$ experiments can provide.
In this more general arena for the construction and testing of supersymmetry
models, the most striking feature of the comparison of colliders is how
much each facility adds to the results obtainable at the other.
From the $e^+e^-$ side, we
will obtain a precision understanding of the color-singlet portion of
the supersymmetry spectrum. We will measure parameters which determine
what decay chains the colored superparticles will follow.
From the $pp$ side, we will observe some of these decay chains
directly and obtain precise inclusive cross sections for the decay
products. This should allow us to analyze these decay chains back to their
origin and to measure the superspectrum parameters of heavy colored
superparticles. Thus,
if the problem that Nature poses for us is supersymmetry,
these two colliders together can solve that problem experimentally.
\section{Technicolor}
In the previous two sections, I have given a lengthy discussion of the
theoretical structure of models of new physics based on supersymmetry.
I have explained how supersymmetry leads to a solution to the problem
of electroweak symmetry breaking. I have explained that the
ramifications of supersymmetry are quite complex and lead to a rich
variety of phenomena that can be studied experimentally at colliders.
This discussion illustrated one of the major points that I made at the
beginning of these lectures. In seeking an explanation for electroweak
symmetry breaking, we could just write down the minimal Lagrangian
available. However, for me, it is much more attractive to look
for a theory in which electroweak symmetry breaking emerges from a
definite physical idea. If the idea is a profound one, it will
naturally lead to new phenomena that we can discover in experiments.
Supersymmetry is an idea that illustrates this picture, but it might
not be the right idea. You might worry that this example was a very
special one. Therefore, if I am to provide an overview of ideas on
physics beyond the Standard Model, I should give at least one more
example of a physical idea that leads to electroweak symmetry breaking,
one based on assumptions of a very different kind.
In this section, then, I will
discuss models of electroweak symmetry breaking based on the postulate of
new strong
interactions at the electroweak scale.
We will see that this idea leads to a different set of
physical predictions but nevertheless implies a rich and intriguing
experimental program.
\subsection{The structure of technicolor models}
The basic structure of a model of electroweak symmetry breaking by new
strong interactions is that of the Weinberg-Susskind model discussed at
the end of Section 2.2. This model was based on a strong-interaction
model that was essentially a scaled up version of QCD. From here on, I
will refer to the new strong interaction gauge symmetry as
`technicolor'. In this section, I will discuss more details of this
model, and also add features that are necessary to provide for quark
and lepton mass generation.
In Section 2.2, I pointed out that the Weinberg-Susskind model leads to
a vacuum expectation value which breaks $SU(2)\times U(1)$. To
understand this model better, we should first try to compute the $W$
and $Z$ boson mass matrix that comes from this symmetry breaking.
QCD with two massless flavors has the global symmetry $SU(2)\times
SU(2)$; independent $SU(2)$ symmetries can rotate the doublets $q_L =
(u_L,d_L)$ and $q_R = (u_R,d_R)$. When the operator $\bar q q$ obtains
vacuum expectation values as in \leqn{eq:o}, the two $SU(2)$ groups are
locked together by the pairing of quarks with antiquarks in the vacuum.
Then the overall $SU(2)$ is unbroken; this is the manifest isospin
symmetry of QCD. The second $SU(2)$ is that associated with the axial
vector currents
\begin{equation}
J^{\mu 5 a} = \bar q \gamma^\mu \gamma^5 \tau^a q \ .
\eeq{eq:a6}
This symmetry is spontaneously broken. By Goldstone's theorem, the
symmetry breaking leads to a massless boson for each spontaneously
broken symmetry, one created or annihilated by each broken symmetry
current. These three particles are identified with the pions of QCD.
The matrix element between the axial $SU(2)$ currents and the pions can
be parametrized as
\begin{equation}
\bigl\langle 0 \big| J^{\mu 5 a}\big|\pi^b(p)\bigr\rangle
= i f_\pi p^\mu \delta^{ab} \ .
\eeq{eq:b6}
By recognizing that $J^{\mu 5 a}$ is a part of the weak interaction
current, we can identify $f_\pi$ as the pion decay constant, $f_\pi =
93$ MeV. The assumption of Weinberg and Susskind is that the same story
is repeated in technicolor. However, since the technicolor quarks are
assumed to be massless,
the pions remain precisely massless at this stage of the argument.
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{PiabTC.eps}}
\end{center}
\caption{Contributions to the vacuum polarization of the $W$ boson from
technicolor states.}
\label{fig:twentyfive}
\end{figure}
If the system with spontaneously broken symmetry and massless pions is
coupled to gauge fields, the gauge boson should obtain mass through the
Higgs mechanism. To compute the mass term, consider the gauge boson
vacuum polarization diagram shown in Figure~\ref{fig:twentyfive}.
Let
us assume first that we couple only the weak interaction $SU(2)$ bosons
to the techniquarks. The coupling is
\begin{equation}
\Delta {\cal L} = g A^a_\mu J^a_{L\mu}\ .
\eeq{eq:c6}
Then the matrix element \leqn{eq:b6} allows a pion to be
annihilated and a gauge boson created, with the amplitude
\begin{equation}
ig \cdot (-\frac{1}{2}) \cdot i f_\pi p_\mu \delta^{ab} \ ;
\eeq{eq:d6}
the second factor comes from $J^a_{L\mu} = \frac{1}{2}(J^a_\mu -
J^{a5}_\mu)$. Using this amplitude, we can evaluate the amplitude for a
process in which a gauge boson converts to a Goldstone boson and then
converts back. This corresponds to the diagram contributing to the
vacuum polarization shown as the second term on the right-hand side of
Figure~\ref{fig:twentyfive}. The value of this diagram is
\begin{equation}
\left({gf_\pi p_\mu\over 2}\right) {1\over p^2}
\left(- {gf_\pi p_\nu\over 2}\right) \ .
\eeq{eq:e6}
The full vacuum polarization amplitude $i\Pi^{ab}_{\mu\nu}(p)$ consists
of this term plus more
complicated terms with massive particles or multiple particles
exchanged. These are indicated as the shaded blob
in Figure~\ref{fig:twentyfive}.
If there are no massless particles in the symmetry-breaking
sector other than the pions, \leqn{eq:e6} is the only term with a
$1/p^2$ singularity near $p=0$. Now recognize that the gauge
current $J^a_{L\mu}$ is conserved, and so the vacuum polarization must
satisfy
\begin{equation}
p^\mu\, \Pi^{ab}_{\mu\nu}(p) = 0 \ .
\eeq{eq:f6}
These two requirements are compatible only if the vacuum polarization
behaves near $p=0$ as
\begin{equation}
\Pi^{ab}_{\mu\nu} = \left({gf_\pi\over 2}\right)^2 \left(g_{\mu\nu}
- {p_\mu p_\nu\over p^2} \right) \delta^{ab} \ .
\eeq{eq:g6}
This is a mass term for the vector boson, giving
\begin{equation}
m_W = g{v\over 2}\ , \quad \mbox{with} \quad v = f_\pi\ .
\eeq{eq:h6}
This is the result that I promised above \leqn{eq:q}.
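As a quick numerical check, inserting the measured $SU(2)$ coupling $g
\approx 0.65$ and $v = 246$ GeV gives
\[
m_W = {g v\over 2} \approx {0.65 \times 246\ \mbox{GeV} \over 2}
\approx 80\ \mbox{GeV}\ ,
\]
in agreement with the observed $W$ mass.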
Now add to this structure the $U(1)$ gauge boson $B_\mu$ coupling to
hypercharge. Repeating the same arguments, we find the mass matrix
\begin{equation}
m^2 = \left({f_\pi\over 2}\right)^2\pmatrix{g^2 & & & \cr
& g^2 & & \cr
& & g^2 & -gg'\cr
& &-gg'& (g')^2 \cr} \ ,
\eeq{eq:i6}
acting on $(A^1_\mu, A^2_\mu, A^3_\mu, B_\mu)$.
This has just the form of \leqn{eq:t}. The eigenvalues of
this matrix give the vector boson masses \leqn{eq:f}, with $v = 246\
\mbox{GeV}= f_\pi$, as promised above \leqn{eq:q}.
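Explicitly, the first two diagonal entries give $m_W^2 = g^2 f_\pi^2/4$
for $A^{1,2}_\mu$, while diagonalizing the lower $2\times 2$ block gives
one massless eigenvector, the photon, and
\[
m_Z^2 = (g^2 + g^{\prime\,2})\, {f_\pi^2\over 4}\ ,
\]
so that $m_W/m_Z = \cos\theta_w$ follows automatically.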
More generally, in a model with $N_D$ technicolor doublets, we require
\begin{equation}
v^2 = N_D f_\pi^2 \ .
\eeq{eq:j6}
Thus, a larger technicolor sector has a lower characteristic energy
scale, closer to the scale of present experiments.
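For orientation,
\[
f_\pi = {v\over \sqrt{N_D}} = \cases{ 246\ \mbox{GeV} & $N_D = 1$ \cr
123\ \mbox{GeV} & $N_D = 4$ \cr}\ ,
\]
the second case being the one-generation model discussed below.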
In my discussion of \leqn{eq:t}, I pointed out that this equation
calls for the presence of an unbroken $SU(2)$ global symmetry of the
new strong interactions, called {\em custodial $SU(2)$},
in addition to the spontaneously broken weak
interaction $SU(2)$ symmetry. This global $SU(2)$ symmetry requires that
the first three diagonal entries in \leqn{eq:i6} are equal, giving the
mass relation $ m_W/ m_Z = \cos\theta_w$. Custodial $SU(2)$ symmetry also
acts on the heavier states of the new strong interaction theory and will
play an important role in our analysis of the experimental probes of this
sector.
The model I have just described gives mass to the $W$ and $Z$ bosons,
but it does not yet give mass to quarks and leptons. In order to
accomplish this, we must couple the quarks and leptons to the
techniquarks. This is done by introducing further gauge bosons called
{\em Extended Technicolor} (ETC) bosons \cite{DimS,EandL}.
If we imagine that the
ETC bosons connect light fermions to techniquarks, and that they
are very heavy, a typical coupling induced by these bosons would have
the form
\begin{eqnarray}
i \Delta{\cal L} &=& (i g_E \bar u_L\gamma^\mu U_L) {-i\over -m_E^2}
(i g_E \bar U_R \gamma_\mu u_R) \nonumber \\
&=& -i {g_E^2\over m_E^2} \bar u_L \gamma^\mu U_L \bar U_R
\gamma_\mu u_R
\eeqa{eq:k6}
Now replace $U_L\bar U_R$ by its vacuum expectation value due to
dynamical techniquark mass generation:
\begin{equation}
\VEV{ U_L \bar U_R} = -{1\over 4} \VEV{\bar U U} =
{1\over 4} \Delta \ ,
\eeq{eq:l6}
where $m_E$ and $g_E$ are the ETC mass and coupling, $\Delta$ is as in
\leqn{eq:o}, and an implicit unit matrix in the space of Dirac indices
multiplies the right-hand side.
Inserting \leqn{eq:l6} into \leqn{eq:k6}, we find a fermion mass term
\begin{equation}
m_u = {g^2_E\over m_E^2} \Delta \ .
\eeq{eq:m6}
The origin of this term is shown diagrammatically in
Figure~\ref{fig:twentysix}. In principle, masses could be generated in
this way for all of the quarks and leptons.
\begin{figure}[tb]
\begin{center}
\leavevmode
\epsfbox{TCmass.eps}
\end{center}
\caption{ETC generation of quark and lepton masses.}
\label{fig:twentysix}
\end{figure}
From \leqn{eq:m6}, we can infer the mass scale required for the ETC
interactions. Estimating with $g_E \approx 1$, and $\Delta \sim 4\pi
f_\pi^3$ (which gives $\VEV{\bar u u} = (300\ \mbox{MeV})^3$ in QCD), we find
\begin{equation}
m_E = g_E \left({4 \pi f_\pi^3\over m_f}\right)^{1/2} =
\cases{43 \ \mbox{TeV}& $f= s$ \cr
1.0 \ \mbox{TeV}& $f = t$ \cr }\ ,
\eeq{eq:n6}
using the $s$ and $t$ quark masses as reference points in the fermion
mass spectrum.
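As a check of the arithmetic, take $f_\pi = 246$ GeV, so that $\Delta
\sim 4\pi f_\pi^3 \approx 1.9\times 10^{8}$ GeV$^3$; then, with the
reference values $m_s \approx 100$ MeV and $m_t = 175$ GeV,
\[
m_E \approx \left({1.9\times 10^{8}\ \mbox{GeV}^3\over 0.1\
\mbox{GeV}}\right)^{1/2} \approx 43\ \mbox{TeV} \quad (f=s)\ , \qquad
m_E \approx \left({1.9\times 10^{8}\ \mbox{GeV}^3\over 175\
\mbox{GeV}}\right)^{1/2} \approx 1.0\ \mbox{TeV} \quad (f=t)\ .
\]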
The detailed structure of the ETC exchanges must be paired with a
suitable structure of the techniquark sector. We might call `minimal
technicolor' the theory with precisely one weak interaction $SU(2)$
doublet of techniquarks. In this case, all of the flavor structure
must appear in the ETC group. In particular, some ETC bosons must be
color triplets to give mass to the quarks through the mechanism of
Figure~\ref{fig:twentysix}. Another possibility is that the
technicolor sector could contain techniquarks with the $SU(3)\times
SU(2)\times U(1)$ quantum numbers of a generation of quarks and leptons
\cite{FS}. Then the ETC bosons could all be color singlets, though they
would still carry generation quantum numbers. In this case also,
\leqn{eq:j6} would apply with $N_D = 4$, putting $f_\pi = 123$ GeV.
More complex cases in which ETC bosons can be doublets of $SU(2)$ have
also been discussed in the literature \cite{CST}.
\subsection{Experimental constraints on technicolor}
The model that I have just described makes a number of characteristic
physical predictions that can be checked in experiments at energies
currently available. Unfortunately, none of these predictions checks
out experimentally. Many theorists view this as a repudiation of the
technicolor program. However, others point to the fact that we have
built up the technicolor model assuming that the dynamics of the
technicolor interactions exactly copies that of QCD. By modifying the
pattern or the explicit energy scale of chiral symmetry breaking, it is
possible to evade these difficulties. Nevertheless, it is important to
be aware of what the problems are. In this section, I will review the
three major experimental problems with technicolor models and then
briefly examine how they may be avoided through specific assumptions
about the strong interaction dynamics.
The first two problems are not specifically associated with technicolor
but rather with the ETC interactions that couple techniquarks to the
Standard Model quarks and leptons. If two matrices of the ETC group
link quarks with techniquarks, the commutator of these
matrices should link quarks with quarks. This implies that there should
be ETC bosons which create new four-quark interactions with
coefficients of order $g_E^2/m_E^2$. In the Standard Model, there are
no flavor-changing neutral current couplings at the tree level. Such
couplings are generated by weak interaction box diagrams and other loop
effects, but the flavor-changing part of these interactions is
suppressed to the level observed experimentally by the GIM cancellation
among intermediate flavors \cite{GIM}. This cancellation follows from
the fact that the couplings of the various flavors of quarks and
leptons to the $W$ and $Z$ depend only on their $SU(2)\times U(1)$
quantum numbers. For ETC, however, either the couplings or the boson
masses must depend strongly on flavor in order to generate the observed
pattern of quark and lepton masses. Thus, generically, one expects
large flavor-changing neutral current effects. It is possible to
suppress these couplings to a level at which they do not contribute
excessively to the $K_L$--$K_S$ mass difference, but only by raising
the ETC mass scale to $m_E \geq 1000$ TeV. In a similar way, ETC
interactions generically give excessive contributions to $K^0 \to \mu^+
e^-$ and to $\mu \to e \gamma$ unless $m_E \geq 100$ TeV
\cite{ELP,DimEll}. These estimates contradict the value of the
ETC boson masses
required in \leqn{eq:n6}. There are schemes for natural flavor
conservation in technicolor theories, but they require a very large
amount of new structure just above 1~TeV \cite{DimGR,Randall,Georgi}.
The second problem comes in the value of the top quark mass. If ETC is
weakly coupled, the value of any quark mass should be bounded by
approximately
\begin{equation}
m_f \leq {g_E^2\over 4\pi} \Delta^{1/2}\ ,
\eeq{eq:o6}
where $\Delta$ is the techniquark bilinear expectation value.
Estimating as above, this bounds the quark masses at about 70 GeV
\cite{Raby}. To see this problem from another point of view, look back
at the mass of the ETC boson associated with the top quark, as given in
\leqn{eq:n6}. This is comparable to the mass of the technicolor $\rho$
meson, which we would estimate from \leqn{eq:q} to have a value of
about 2 TeV. So apparently the top quark's ETC boson must be a
particle with technicolor strong interactions. This means that the
model described above is not self-consistent. Since this new
strongly-interacting particle generates mass for the $t$ but not the
$b$, it has the potential to give large contributions to other
relations that violate weak-interaction isospin. In particular, it can
give an unwanted large correction to the relation $m_W = m_Z
\cos\theta_w$ in \leqn{eq:s}.
The third problem relates directly to the technicolor sector itself.
This issue arises from the precision electroweak measurements. In
principle, the agreement of precision electroweak measurements with the
Standard Model is a strong constraint on any type of new physics. The
constraint turns out to be especially powerful for technicolor. To
explain this point, I would like to present some general formalism and
then specialize it to the case of technicolor.
At first sight, new physics can affect the observables of precision
electroweak physics through radiative corrections to the $SU(2)\times
U(1)$ boson propagators, to the gauge boson vertices, and to 4-fermion
box diagrams. Typically, though, the largest effects are those from
vacuum polarization diagrams. To see this, recall that almost all
precision electroweak observables involve 4-fermion reactions with
light fermions only. (An exception is the $Z\to b\bar b$ vertex, whose
discussion I will postpone to Section 5.7.) In this case, the vertex
and box diagrams involve only those new particles that couple directly
to the light generations. If the new particles are somehow connected to
the mechanism of $SU(2)\times U(1)$ breaking and fermion mass
generation, these couplings are necessarily small. The vacuum
polarization diagrams, on the other hand, can involve all new particles
which couple to $SU(2)\times U(1)$, and can even be enhanced by color
or flavor sums over these particles.
The vacuum polarization corrections can also be accounted for in a very
simple way. It is useful, first, to write the $W$ and $Z$ vacuum
polarization amplitudes in terms of current-current expectation values
for the $SU(2)$ and electromagnetic currents. Use the relation
\begin{equation}
J_Z = J_3 - s^2 J_Q \ ,
\eeq{eq:p6}
where $J_Q$ is the electromagnetic current, and $s^2 = \sin^2\theta_w$,
$c^2 = \cos^2\theta_w$. Write
the weak coupling
constants explicitly in terms of $e$, $s^2$ and $c^2$.
Then the
vacuum polarization amplitudes of $\gamma$, $W$, and $Z$ and the $\gamma Z$
mixing amplitude take the form
\begin{eqnarray}
\Pi_{\gamma\gamma} & = & e^2 \Pi_{QQ}\nonumber \\
\Pi_{WW} & = & {e^2\over s^2} \Pi_{11}\nonumber \\
\Pi_{ZZ} &=& {e^2\over c^2s^2} (\Pi_{33} - 2s^2 \Pi_{3Q}+ s^4\Pi_{QQ}) \nonumber \\
\Pi_{Z\gamma} &=& {e^2\over cs} (\Pi_{3Q} - s^2 \Pi_{QQ}) \ .
\eeqa{eq:q6}
The current-current amplitudes $\Pi_{ij}$ are functions of $(q^2/M^2)$,
where $M$ is the mass of the new particles whose loops contribute to
the vacuum polarizations.
If these new particles are too heavy to be found at the $Z^0$ or in the
early stages of LEP 2, the ratio $q^2/M^2$ is small for $q^2 = m_Z^2$.
Then
it is reasonable to expand the current-current
expectation values in a power series. In making this expansion, it is
important to take into account that any amplitude involving an
electromagnetic current will vanish at $q^2 = 0$ by the standard QED
Ward identity. Thus, to order $q^2$, we have six coefficients,
\begin{eqnarray}
\Pi_{QQ} & = & \phantom{\Pi_{11}(0) + } q^2 \Pi'_{QQ}(0) + \cdots \nonumber \\
\Pi_{11} & = & \Pi_{11}(0) + q^2 \Pi'_{11}(0) + \cdots \nonumber \\
\Pi_{3Q} & = & \phantom{\Pi_{11}(0) + } q^2 \Pi'_{3Q}(0) + \cdots \nonumber \\
\Pi_{33} & = & \Pi_{33}(0) + q^2 \Pi'_{33}(0) + \cdots
\eeqa{eq:r6}
To specify the coupling constants $g$, $g'$ and the scale $v$ of the
electroweak theory, we must measure three parameters. The most
accurate reference values come from $\alpha$, $G_F$, and $m_Z$. Three
of the coefficients in \leqn{eq:r6} are absorbed into these parameters.
This leaves three independent coefficients which can in principle be
extracted from experimental measurements. These are conventionally
defined \cite{PandT} as
\begin{eqnarray}
S &=& 16\pi \bigl[ \Pi'_{33}(0) - \Pi'_{3Q}(0) \bigr] \nonumber \\
T &=& {4\pi\over s^2 c^2 m_Z^2}\bigl[ \Pi_{11}(0) - \Pi_{33}(0) \bigr]
\nonumber \\
U &=& 16\pi \bigl[ \Pi'_{33}(0) - \Pi'_{11}(0) \bigr]
\eeqa{eq:s6}
I include in these parameters only the contributions from new physics.
From the definitions, you can see that $S$ measures the overall
magnitude of $q^2/M^2$ effects, and $T$ measures the magnitude of
effects that violate the custodial $SU(2)$ symmetry of the new particles.
The third parameter $U$ requires both $q^2$-dependence and
$SU(2)$ violation and typically is small in
explicit models.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=2.5in\epsfbox{STsample.eps}}
\end{center}
\caption{Schematic determination of $S$ and $T$ from precision electroweak
measurements. For each observable, the width of the
band corresponds to the
experimental error in its determination.}
\label{fig:twentyseven}
\end{figure}
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=5in\epsfbox{STplot.eps}}
\end{center}
\caption{Current determination of $S$ and $T$ by
a fit to the corpus of precision electroweak data, from
\protect\cite{LEPDG}. The various ellipses show fits to a
subset of the data, including the values of $\alpha$, $G_F$, and $m_Z$
plus those of one or
several additional observables.}
\label{fig:twentyeight}
\end{figure}
By inserting the new physics contributions to the intermediate boson
propagators in weak interaction diagrams, we generate shifts from the
Standard Model predictions which are linear in $S$, $T$, and $U$. For
example, the effective value of $\sin^2\theta_w$ governing the forward-backward
and polarization asymmetries at the $Z^0$ is shifted from its value in
the Minimal Standard Model, $(s^2)_{\mbox{\scriptsize SM}}$, by
\begin{equation}
(s^2)_{\mbox{\scriptsize eff}} - (s^2)_{\mbox{\scriptsize SM}} = {\alpha\over c^2-s^2}
\bigl[ {1\over 4} S - s^2 c^2 T \bigr] \ .
\eeq{eq:t6}
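To get a feeling for the sensitivity, insert $s^2 \approx 0.23$ and
$\alpha \approx 1/129$; then
\[
(s^2)_{\mbox{\scriptsize eff}} - (s^2)_{\mbox{\scriptsize SM}} \approx
3.6\times 10^{-3}\, S - 2.6\times 10^{-3}\, T\ ,
\]
so asymmetry measurements at the $10^{-4}$ level probe $S$ and $T$ at
the level of a few hundredths.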
All of the standard observables except for $m_W$ and $\Gamma_W$ are
independent of $U$, and since $U$ is in any event expected to be small,
I will ignore it from here on. In that case, any precision weak
interaction measurement restricts us to the vicinity of a line in the
$S$-$T$ plane. The constraints that come from the measurements of
$(s^2)_{\mbox{\scriptsize eff}}$, $m_W$, and $\Gamma_Z$ are sketched in
Figure~\ref{fig:twentyseven}. If these lines meet, they indicate
particular values of $S$ and $T$ which fit the deviations from the
Standard Model in the whole corpus of weak interaction data.
Figure~\ref{fig:twentyeight} shows such an $S$-$T$ fit to the data
available in the summer of 1996 \cite{LEPDG}. The various curves show
fits to $\alpha$, $G_F$, $m_Z$ plus a specific subset of the other
observables; the varying slopes of these constraints illustrate the
behavior shown in Figure~\ref{fig:twentyseven}.
There is one important subtlety in the interpretation of the final
values of $S$ and $T$. In determining the Minimal Standard Model reference
values for the fit, it is necessary to specify the value of the top
quark mass and also a value for the mass of the Minimal Standard Model
Higgs boson. Raising $m_t$ gives the same physical effect as increasing
$T$; raising $m_H$ increases $S$ while slightly decreasing $T$. Though
$m_t$ is known from direct measurements, $m_H$ is not. The analysis of
Figure~\ref{fig:twentyeight} assumed $m_t = 175$ GeV, $m_H = 300$ GeV.
In comparing $S$ and $T$ to the predictions of technicolor models, it
is most straightforward to compute the difference between the
technicolor contribution to the vacuum polarization and that of a 1 TeV
Higgs boson. Shifting to this reference value, we have the experimental
constraint
\begin{equation}
S = -0.26 \pm 0.16 \ .
\eeq{eq:u6}
The negative sign indicates that there should be a smaller contribution
to the $W$ and $Z$ vacuum polarizations than that predicted by a 1 TeV
Standard Model Higgs boson. This is in accord with the fact that a lower
value of the Higgs boson mass gives the best fit to the Minimal Standard
Model, as I have indicated in \leqn{eq:g2}.
In many models of new physics, the contributions to $S$ become small as
the mass scale $M$ increases, with the behavior $S \sim m_Z^2/M^2$.
This is the case, for example, in supersymmetry, where
charginos of mass about 60 GeV can contribute to $S$ at the level of
a few tenths of a unit, but heavier charginos have a negligible effect
on this parameter. In technicolor models, however, there is a new
strong interaction sector with resonances that can appear directly in
the $W$ and $Z$ vacuum polarizations. There is a concrete formula which
describes these effects. Consider a technicolor theory with $SU(2)$
isospin global symmetry. In such a theory, we can think about
producing hadronic resonances through $e^+e^-$ annihilation. In the
standard parametrization, the cross section for $e^+e^-$ annihilation to
hadrons through a virtual photon is given by the point cross section
for $e^+e^-\to \mu^+\mu^-$ times a factor $R(s)$, equal asymptotically to
the sum of the squares of the quark charges. Let $R_V(s)$ be the
analogous factor for a photon which couples to the isospin current
$J^{\mu 3}$ and so creates $I=1$ vector resonances only, and let
$R_A(s)$ be the factor for a photon which couples to the axial isospin
current $J^{\mu 5 3}$. Then
\begin{equation}
S = {1\over 3\pi} \int^\infty_0 {ds\over s} \left[ R_V(s) - R_A(s) -
H(s)\right] \ ,
\eeq{eq:v6}
where $H(s) \approx \frac{1}{4}\theta(s-m_h^2)$ is the
contribution of the Standard Model Higgs boson used to compute the
reference value in \leqn{eq:t6}.
In practice, this $H(s)$ gives a small correction. If one
evaluates $R_V$ and $R_A$ using the spectrum of QCD, scaled up
appropriately by the factor \leqn{eq:q}, one finds \cite{PandT}
\begin{equation}
S = + 0.3 N_D {N_{TC}\over 3} \ ,
\eeq{eq:w6}
where $N_D$ is the number of weak doublets and $N_{TC}$ is the number
of technicolors. Even for $N_D =1$ and $N_{TC} =3$, this is a
substantial positive value, one inconsistent with \leqn{eq:u6} at the 3
$\sigma$ level. Models with several technicolor weak doublets are in
much more serious conflict with the data.
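Explicitly, the minimal case is displaced from the central value in
\leqn{eq:u6} by
\[
{0.3 - (-0.26)\over 0.16} \approx 3.5
\]
standard deviations, and since \leqn{eq:w6} grows linearly with $N_D$,
each additional doublet adds roughly two more.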
These phenomenological problems of technicolor are challenging for the
theory, but they do not necessarily rule it out. Holdom \cite{Holdom}
has suggested a specific dynamical scheme which solves the first of these
three problems. In estimating the scale of ETC interactions, we assumed that
the techniquark condensate falls off rapidly at high momentum, as the
quark condensate does in QCD. If the techniquark mass term fell only
slowly at high momentum, ETC would have a larger influence at larger
values of $m_E$. Then the flavor-changing direct effect of ETC on
light quark physics would be reduced. It is possible that such a
difference between technicolor and QCD would also ameliorate the other
two problems I have discussed \cite{Appelquist}. In particular, if the
$J=1$ spectrum of technicolor models is not dominated
by the low-lying $\rho$ and $a_1$ mesons, as is the case in QCD,
there is a chance that
the vector and axial vector contributions to \leqn{eq:v6} would cancel
to a greater extent.
It is disappointing that theorists are unclear about the precise
predictions of technicolor models, but it is not surprising.
Technicolor relies on the presence of a new strongly-coupled gauge
theory. Though the properties of QCD at strong coupling now seem to be
well understood through numerical lattice gauge theory computations,
our understanding of strongly coupled field theories is quite
incomplete. There is room for quantitatively and even qualitatively
different behavior, especially in theories with a large number of
fermion flavors. What the arguments in this section show is that
technicolor cannot be simply a scaled-up version of QCD. It is a
challenge to theorists, though, to find the strong-interaction
theory whose different
dynamical behavior fixes the problems that extrapolation from
QCD would lead us to
expect.
\subsection{Direct probes of new strong interactions}
If the model-dependent constraints on technicolor have led us into a murky
theoretical situation, we should look for experiments that have a
direct, model-independent interpretation. The guiding principle
of technicolor is that $SU(2)\times U(1)$ symmetry breaking is caused
by new strong interactions. We should be able to test this idea by
directly observing elementary particle reactions involving these new
interactions. In the next few sections, I will explain how these
experiments can be done.
In order to design experiments on new strong interactions, there are
two problems that we must discuss. First, the natural energy scale
for technicolor, and also for alternative theories with new strong
interactions, is of the order of 1 TeV. Thus, to feel these
interactions, we will need to set up parton collisions with energies
of order 1 TeV in the center of mass. This energy range is well beyond
the capabilities of LEP 2 and the Tevatron, but it should be available
at the LHC and the $e^+e^-$ linear collider. Even for these facilities,
the experiments are challenging. For the LHC, we will see that it
requires the full design luminosity. For the linear collider, it
requires a center-of-mass energy of 1.5 TeV, at the top of the energy
range now under consideration.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfbox{GBET.eps}
\end{center}
\caption{The Goldstone Boson Equivalence Theorem.}
\label{fig:twentynine}
\end{figure}
Second, we need to understand which parton collisions we should study.
Among the particles that interact in high-energy collisions, do any
carry the new strong interactions? At first it seems that all of the
elementary particles of collider physics are weakly coupled. But
remember that, in the models we are discussing, the $W$ and $Z$ bosons
acquire their mass through their coupling to the new strong
interactions. As a part of the Higgs mechanism, these bosons, which
are massless and transversely polarized before symmetry breaking, pick
up longitudinal polarization states by combining with the Goldstone
bosons of the symmetry-breaking sector. It is suggestive, then, that
at very high energy, the longitudinal polarization states of the $W$
and $Z$ bosons should show their origin and interact like the pions of
the strong interaction theory. In fact, this correspondence can be
proved; it is called the Goldstone Boson Equivalence Theorem
\cite{GBET,GBETx,LQT,ChandG}. The statement of the theorem is shown in
Figure~\ref{fig:twentynine}.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=3.75in\epsfbox{GBETWard.eps}}
\end{center}
\caption{Ward identity used in the proof of the Goldstone Boson
Equivalence Theorem.}
\label{fig:thirty}
\end{figure}
It is complicated to give a completely general proof of this theorem,
but it is not difficult to demonstrate the simplest case. Consider a
process in which one $W$ boson is emitted. Since the $W$
couples to a conserved gauge current, the emission amplitude obeys a
Ward identity, shown in Figure~\ref{fig:thirty}. We can analyze this
Ward identity as we did the analogous diagrammatic identity in
Figure~\ref{fig:twentyfive}. The current which creates the $W$
destroys a state of the strong interaction theory; this is either a
massive state or a massless state consisting of one pion. Call the
vertex from which the $W$ is created directly $\Gamma_W$, and call the
vertex for the creation of a pion $i\Gamma_\pi$. Then the Ward identity
shown in Figure~\ref{fig:thirty} reads
\begin{equation}
q_\mu \Gamma_W^\mu(q) + q_\mu \left( {g f_\pi q^\mu\over 2}\right)
{i\over q^2} i \Gamma_\pi(q) = 0 \ .
\eeq{eq:x6}
Using \leqn{eq:h6}, this simplifies to
\begin{equation}
q_\mu \Gamma_W^\mu = m_W \Gamma_\pi \ .
\eeq{eq:y6}
To apply this equation, look at the explicit polarization vector
representing a vector boson of longitudinal polarization. For a $W$
boson moving in the $\hat 3$ direction, $q^\mu = (E,0,0,q)$ with
$E^2-q^2 = m_W^2$, the longitudinal polarization vector is
\begin{equation}
\epsilon^\mu = \left( {q\over m_W}, 0 , 0 , {E\over m_W}
\right) \ .
\eeq{eq:z6}
This vector satisfies $\epsilon \cdot q = 0$. At the same time, it
becomes increasingly close to $q^\mu/m_W$ as $E \to \infty$. Because
of this, the contraction of $\epsilon^\mu$ with the first term in the
vertex shown in Figure~\ref{fig:thirty} is well approximated by
$(q_\mu/m_W)\Gamma_W^\mu$ in this limit, while at the same time the
contraction of $\epsilon^\mu$ with the pion diagram gives zero. Thus,
$\Gamma_W$ is the complete amplitude for emission of a physical $W$
boson. According to \leqn{eq:y6}, it satisfies
\begin{equation}
\epsilon_\mu \Gamma_W^\mu = \Gamma_\pi
\eeq{eq:a7}
for $E \gg m_W$. This is the precise statement of Goldstone boson
equivalence.
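The rate of approach to this limit is easy to quantify:
\[
\epsilon^\mu - {q^\mu\over m_W} = {E-q\over m_W}\,(-1,0,0,1)\ , \qquad
E - q = {m_W^2\over E+q}\ ,
\]
so the corrections to \leqn{eq:a7} are of order $m_W/E$.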
The Goldstone boson equivalence theorem tells us that the longitudinal
polarization states of $W^+$, $W^-$, and $Z^0$, studied in very high
energy reactions, are precisely the pions of the new strong
interactions. In the simplest technicolor models, these particles would
have the scattering amplitudes of QCD pions. However, we can also
broaden our description to include more general models. To do this, we
simply write the most general theory of pion interactions at energies
low compared to the new strong-interaction scale, and then reinterpret
the initial and final particles as longitudinally polarized weak
bosons.
This analysis is dramatically simplified by the observation we made below
\leqn{eq:t} that the new strong interactions should contain a global
$SU(2)$ symmetry which remains exact when the weak interaction $SU(2)$
is spontaneously broken. I explained
there that this symmetry is required to obtain the relation $m_W =
m_Z\cos\theta_w$, which is a regularity of the weak boson mass
spectrum. This unbroken symmetry shows up in technicolor models as
the manifest $SU(2)$ isospin symmetry of the techniquarks.
From here on, I will treat the pions of the new strong interactions as
massless particles with an exact isospin $SU(2)$ symmetry. The
pions form a triplet with $I=1$. Then a two-pion state has isospin 0,
1, or 2. Using Bose statistics, we see that the three scattering
channels of lowest angular momentum are
\begin{eqnarray}
I = 0 &\quad& J= 0 \nonumber \\ I = 1 &\quad & J=1 \nonumber \\
I = 2 &\quad& J= 0
\eeqa{eq:b7}
From here on, I will refer to these channels by their isospin value.
Using the analogy to the conventional strong interactions, it is
conventional to call a resonance in the $I=0$ channel a $\sigma$ and a
resonance in the $I=1$ channel a $\rho$ or techni-$\rho$.
Now we can describe the pion interactions by old-fashioned pion
scattering phenomenology \cite{Kallen}. As long as we are at energies
sufficiently low that the process $\pi\pi \to 4 \pi$ is not yet
important, unitarity requires the scattering amplitude in the channel
$I$ to have the form
\begin{equation}
{\cal M}_I = 32\pi e^{i\delta_I} \sin \delta_I \cdot \cases{ 1 & $J=0$\cr
3 \cos\theta & $J=1$\cr}\ ,
\eeq{eq:c7}
where $\delta_I$ is the phase shift in the channel $I$. Since the
pions are massless, these can be expanded at low energy as
\begin{equation}
\delta_I = {s\over A_I} \left(1 + {s\over M_I^2} + \cdots \right)\ ,
\eeq{eq:d7}
where $A_I$ is the relativistic generalization of the scattering length
and $M_I$ similarly represents the effective range. The parameter $M_I$
is given this name because it estimates the position of the leading
resonance in the channel $I$. The limit $M_I\to \infty$ is called the
{\em Low Energy Theorem} (LET) model.
Because the pions are Goldstone bosons, it turns out that their
scattering lengths can be predicted in terms of the amplitude
\leqn{eq:b6} \cite{scleng}. Thus,
\begin{equation}
A_I = \cases{ 16 \pi f_\pi^2 = (1.7\ \mbox{TeV})^2 & $I=0$\cr
96 \pi f_\pi^2 = (4.3\ \mbox{TeV})^2 & $I=1$\cr
-32 \pi f_\pi^2 & $I=2$\cr}\ .
\eeq{eq:e7}
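The TeV equivalents quoted here follow directly from $f_\pi = 246$ GeV;
for example,
\[
A_0 = 16\pi\,(246\ \mbox{GeV})^2 \approx 3.0\times 10^{6}\ \mbox{GeV}^2
\approx (1.7\ \mbox{TeV})^2\ ,
\]
and, by \leqn{eq:d7}, this is the energy scale at which the $I=0$ phase
shift becomes large.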
Experiments which involve $WW$ scattering at
very high energy should give us the chance to observe these values of
$A_I$ and to measure the corresponding values of $M_I$.
The values of $A_I$ given in \leqn{eq:e7}
represent the basic assumptions about manifest and
spontaneously broken symmetry which are built into our analysis. The
values of $M_I$, on the other hand, depend on the details of the
particular set of new strong interactions that Nature has provided.
For example, in a technicolor model, the quark model of technicolor
interactions predicts that the strongest low-lying resonance should be
a $\rho$ ($I=1)$, as we see in QCD. In a model with strongly coupled
spin-0 particles, the strongest resonance would probably be a $\sigma$,
an $I=0$
scalar bound state. More generally, if we can learn
which channels have low-lying resonances and what the masses of these
resonances are, we will have a direct experimental window into the
nature of the new interactions which break $SU(2)\times U(1)$.
\subsection{New strong interactions in $WW$ scattering}
How, then, can we create collisions of longitudinal $W$ bosons at TeV
center-of-mass energies? The most straightforward method to create
high-energy $W$ bosons is to radiate them from incident colliding
particles, either quarks at the LHC or electrons and positrons at the
linear collider.
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfysize=1.5in\epsfbox{Wparton.eps}}
\end{center}
\caption{Kinematics for the radiation of a longitudinal $W$ parton.}
\label{fig:thirtyone}
\end{figure}
The flux of $W$ bosons associated with a proton or electron beam can be
computed by methods similar to those used to discuss parton evolution
in QCD \cite{PS,Dawson}. We imagine that the $W$ bosons are emitted from the
incident fermion lines and come together in a collision process with
momentum transfer $Q$. The kinematics of the emission process is shown
in Figure~\ref{fig:thirtyone}. The emitted bosons are produced with a
spectrum in longitudinal momentum, parametrized by the quantity $x$,
the {\em longitudinal fraction}. They also have a spectrum in
transverse momentum $p_\perp$. The emitted $W$ boson is off-shell, but
this can be ignored to a first approximation if $Q$ is much larger than
$(m_W^2 + p_\perp^2)^{1/2}$. In this limit, the distribution of the
emitted $W$ bosons is described by relatively simple formulae. Note
that an incident $d_L$ or $e^-_L$ can radiate a $W^-$, while an
incident $u_L$ or $e^+_R$ can radiate a $W^+$.
The distribution of transversely polarized $W^-$ bosons emitted from an
incident $d_L$ or $e^-_L$ is given by
\begin{equation}
\int dx\,
f_{e^-_L \to W^-_{tr} }(x,Q) = \int^{Q^2}_{0} {d p_\perp^2\over
p_\perp^2 + m_W^2}
\int dx \ {\alpha\over 4\pi s^2} {1 + (1-x)^2\over x} \ ,
\eeq{eq:f7}
where, as before, $s^2 = \sin^2\theta_w$. The integral over transverse momenta
gives an enhancement factor of $\log Q^2/m_W^2$, analogous to the
factor $\log s/m_e^2$ which appears in the formula for radiation of
photons in electron scattering processes. The distribution of
longitudinally polarized $W^-$ bosons has a somewhat different
structure,
\begin{eqnarray}
\int dx\,
f_{e^-_L \to W^-_{long} }(x,Q) &=& \int^{Q^2}_{0} {d p_\perp^2 m_W^2\over
(p_\perp^2 + m_W^2)^2 }
\int dx \ {\alpha\over 2\pi s^2} {1-x\over x} \nonumber \\
& = & \int dx \ {\alpha\over 2\pi s^2} {1-x\over x} \ .
\eeqa{eq:g7}
This formula does not show the logarithmic distribution in $p_\perp$
seen in \leqn{eq:f7}; instead, it produces longitudinally polarized $W$
bosons at a characteristic $p_\perp$ value of order $m_W$.
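This behavior can be checked directly, since the $p_\perp$ integral in
\leqn{eq:g7} can be done in closed form:
\[
\int^{Q^2}_{0} {d p_\perp^2\, m_W^2\over (p_\perp^2 + m_W^2)^2} =
1 - {m_W^2\over Q^2 + m_W^2}\ ,
\]
which tends to 1 for $Q \gg m_W$, with half of the integral accumulated
below $p_\perp = m_W$.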
When both beams radiate longitudinally polarized $W$ bosons, we can
study boson-boson scattering through the reactions shown in
Figure~\ref{fig:thirtytwo}. In $pp$ reactions one can in principle
study all modes of $WW$ scattering, though the most complete simulations
have been done for the especially clean $I=2$ channel, $W^+W^+\to
W^+W^+$. In $e^+e^-$ collisions, one is restricted to the channels
$W^+W^- \to W^+W^-$ and $W^+W^- \to Z^0 Z^0$.
The diagrams in which a longitudinal
$Z^0$ appears in the initial state are suppressed by the small $Z^0$
coupling to the electron
\begin{equation}
{ g^2(e^-_L \to e^-_L Z^0)\over g^2(e^-_L \to \nu W^-)} =
\left( {(\frac{1}{2} - s^2)/(cs) \over 1/(\sqrt{2}s)} \right)^2 = 0.2 \ .
\eeq{eq:h7}
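For reference, the ratio in \leqn{eq:h7} simplifies to
\[
{2\,(\frac{1}{2} - s^2)^2\over c^2} \approx {2\,(0.27)^2\over 0.77}
\approx 0.2
\]
for $s^2 \approx 0.23$.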
The $I=2$ process $W^-W^- \to W^-W^-$ could be studied in a dedicated
$e^-e^-$ collision experiment.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4in\epsfbox{WWscat1.eps}}
\end{center}
\caption{Collider processes which involve $WW$ scattering.}
\label{fig:thirtytwo}
\end{figure}
I will now briefly discuss the experimental strategies for observing
these reactions in the LHC and linear collider environments and present
some simulation results. In the $pp$ reactions, the most important
background processes come from high transverse momentum
QCD processes which, with some probability, give final states that
mimic $W$ boson pairs. For example, in the process $gg\to gg$ with a
momentum transfer of 1~TeV, each final gluon typically radiates gluons
and quarks before final hadronization, to produce a system of hadrons
with mass of order 100 GeV. When the mass of this system happens to be
close to the mass of the $W$, the process has the characteristics of
$WW$ scattering. Because of the overwhelming rate for $gg\to gg$, all
studies of $WW$ scattering at hadron colliders have restricted
themselves to detection of one or both weak bosons in leptonic decay
modes. Even with this restriction, the process $gg \to t\bar t$
provides a background of isolated lepton pairs at high transverse
momentum. This background and a similar one from $q\bar q \to W +$
jets, with jets faking leptons, are controlled by requiring some
further evidence that the initial $W$ bosons are color-singlet systems
radiated from quark lines. To achieve this, one could require a forward
jet associated with the quark from which the $W$ was radiated, or low
hadronic activity in the central rapidity region, characteristic of the
collision of color-singlet species.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4.0in
\epsfbox{LHC1.eps}}
\end{center}
\caption{Expected numbers of $W^+W^+ \to (\ell \nu)(\ell \nu)$ events due
to signal and background processes, after all cuts, for a 100 fb$^{-1}$
event sample at the LHC, from \protect\cite{Atlas}. The signal corresponds
to a Higgs boson of mass 1~TeV.}
\label{fig:thirtythree}
\end{figure}
Figure~\ref{fig:thirtythree} shows a simulation by the ATLAS
collaboration of a search for new strong interactions in $W^+W^+$
scattering \cite{Atlas}. In this study, both $W$ bosons were assumed to
be observed in their leptonic decays to $e$ or $\mu$, and a forward jet
tag was required. The signal corresponds to a model with a 1~TeV Higgs
boson, or, in our more general terminology, a 1~TeV $I=0$ resonance.
The size of the signal is a few tens of events in a year of running at
the LHC at high luminosity. Note that the experiment admits a
substantial background from various sources of transversely polarized
weak bosons. Though there is a significant excess above the Standard
Model expectation, the signal is not distinguished by a resonance peak,
and so it will be important to find experimental checks that the
backgrounds are correctly estimated. An illuminating study of the other
important reaction $pp \to Z^0 Z^0 + X$ is given in \cite{Baggerscrew}.
\begin{figure}[p]
\begin{center}
\leavevmode
{\epsfxsize=4.0in\epsfbox{HansWW.eps}}
\end{center}
\caption{Expected numbers of $W^+W^-$ and $ZZ \to 4$ jet events due
to signal and background processes, after all cuts, for a 200 fb$^{-1}$
event sample at an $e^+e^-$ linear collider at 1.5 TeV in the
center of mass, from \protect\cite{Bargerscrew}. Three different models
for the signal are compared to the Standard Model background.}
\label{fig:thirtyfour}
\end{figure}
The $WW$ scattering experiment is also difficult at an $e^+e^-$ linear
collider. A center of mass energy well above 1 TeV must be used, and
again the event rate is a few tens per year at high luminosity. The
systematic problems of the measurement are different, however, so that
the $e^+e^-$ results might provide important new evidence even if a small
effect is first seen at the LHC. In the $e^+e^-$ environment, it is
possible to identify the weak bosons in their hadronic decay modes, and
in fact this is necessary to provide sufficient rate. Since the
hadronic decay captures the full energy-momentum of the weak boson, the
total momentum vector of the boson pair can be measured. This, again,
is fortunate, because the dominant backgrounds to $WW$ scattering
through new strong interactions come from the photon-induced processes
$\gamma\gamma\to W^+ W^-$ and $\gamma e \to Z W \nu$. The first of
these backgrounds can be dramatically reduced by insisting that the
final two-boson system has a transverse momentum between 50 and 300
GeV, corresponding to the phenomenon we noted in \leqn{eq:g7} that
longitudinally polarized weak bosons are typically emitted with a
transverse momentum of order $m_W$. This cut should be accompanied by
a forward electron and positron veto to remove processes with an
initial photon which has been radiated from one of the fermion lines.
The expected signal and background after cuts, in $e^+e^- \to \nu\bar\nu
W^+ W^-$ and $e^+e^- \to \nu\bar\nu Z^0 Z^0$, at a center-of-mass energy
of 1.5 TeV, are shown in Figure~\ref{fig:thirtyfour}
\cite{Bargerscrew}. The signal is shown for a number of different
models and is compared to the Standard Model expectation for
transversely polarized boson pair production. In the most favorable
cases of 1~TeV resonances in the $I=0$ or $I=1$ channel, resonance
structure is apparent in the signal, but in models with higher
resonance masses one must again rely on observing an enhancement over
the predicted Standard Model backgrounds. At an $e^+e^-$ collider, one
has the small advantage that these backgrounds come from electroweak
processes and can therefore be precisely estimated.
Recently, it has been shown that the process $WW \to t\bar t$ can be
observed at an $e^+e^-$ linear collider at 1.5 TeV \cite{barklowtt}. This
reaction probes the involvement of the top quark in the new strong
interactions. If the $W$ and top quark masses have a common origin,
the same resonances which appear in $WW$ scattering should also appear
in this reaction. However, some models, for example, Hill's topcolor
\cite{topcolor}, attribute the top quark mass to interactions specific
to the third generation which lead to top pair condensation. The study
of $WW\to t\bar t$ can directly address this issue experimentally.
\subsection{New strong interactions in $W$ pair-production}
In addition to providing direct $WW$ scattering processes, new strong
interactions can affect collider processes by creating a resonant
enhancement of fermion pair annihilation into weak bosons. The most
important reactions for studying this effect are shown in
Figure~\ref{fig:thirtyfive}. As with the processes studied in Section
5.4, these occur both in the $pp$ and $e^+e^-$ collider environment.
\begin{figure}[tb]
\begin{center}
\leavevmode
\epsfbox{WWscat2.eps}
\end{center}
\caption{Collider processes which involve vector boson pair-production.}
\label{fig:thirtyfive}
\end{figure}
The effect is easy to understand by a comparison to the familiar strong
interactions. In the same way that the boson-boson scattering
processes described in the previous section were analogous to pion-pion
scattering, the strong interaction enhancement of $W$ pair production
is analogous to the behavior of the pion form factor. We might
parametrize the
enhancement of the amplitude for fermion pair annihilation into
longitudinally polarized $W$ bosons by a form factor $F_\pi(q^2)$. In
QCD, the pion form factor receives a large enhancement from the $\rho$
resonance. Similarly, if the new strong interactions contain a strong
$I=1$ resonance, the amplitude for longitudinally polarized $W$ pair
production should be multiplied by the factor
\begin{equation}
F_\pi(q^2) = { - M_1^2 + i M_1 \Gamma_1 \over
q^2- M_1^2 + i M_1 \Gamma_1 } \ ,
\eeq{eq:i7}
where $M_1$ and $\Gamma_1$ are the mass and width of the resonance. If
there is no strong resonance, the new strong interactions still have an
effect on this channel, but it may be subtle and difficult to detect.
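Two limits of \leqn{eq:i7} are worth noting. At $q^2=0$ the form factor
is normalized to $F_\pi(0)=1$, while on the peak
$$ F_\pi(M_1^2) = \frac{-M_1^2 + i M_1\Gamma_1}{i M_1 \Gamma_1}
 = 1 + i\, \frac{M_1}{\Gamma_1}\ , $$
so a narrow resonance enhances the rate for longitudinal pair production
by roughly $(M_1/\Gamma_1)^2$.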
A benchmark is that the phase of the new pion form factor is related to
the pion-pion scattering phase shift in the $I=1$ channel,
\begin{equation}
\arg F_\pi(s) = \delta_1(s) \ ;
\eeq{eq:j7}
this result is true for any strong-interaction model as long as $\pi\pi
\to 4 \pi$ processes are not important at the given value of $s$
\cite{BjD}.
At the LHC, an $I=1$ resonance in the new strong interactions can be
observed as an enhancement in $pp\to W Z + X$, with both $W$ and $Z$
decaying to leptons, as long as the resonance is sufficiently low in
mass that its peak occurs before the $q\bar q$ luminosity spectrum cuts
off. The ATLAS collaboration has demonstrated a sensitivity up to
masses of 1.6 TeV~\cite{wulz}. The signal for a 1 TeV resonance is quite
dramatic,
as demonstrated in Figure~\ref{fig:thirtynine}.
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=5in\epsfbox{LHC2.eps}}
\end{center}
\caption{Reconstructed masses at the LHC
for new strong interaction resonances
decaying into gauge boson pairs, from \protect\cite{Atlas}:
(a) a 1~TeV techni-$\rho$ resonance decaying into $WZ$ and observed in the
3-lepton final state; (b) a 1.46~TeV techni-$\omega$ decaying into $\gamma Z$
and observed in the $\gamma \ell^+\ell^-$ final state.}
\label{fig:thirtynine}
\end{figure}
Also shown in this figure is an estimate of a related effect that
appears in some but not all models, the production of an $I=0$, $J=1$
resonance analogous to the $\omega$ in QCD, which then decays to 3 new
pions or to $\pi \gamma$. Though the first of these modes is not
easily detected at the LHC, the latter corresponds to the final state
$Z^0 \gamma$, which can be completely reconstructed if the $Z^0$ decays
to $\ell^+\ell^-$.
At an $e^+e^-$ collider, the study of the new pion form factor can be
carried a bit farther. The process $e^+e^-\to W^+ W^-$ is the most
important single process in high-energy $e^+e^-$ annihilation, with a
cross section greater than that for all annihilation processes to quark
pairs. If one observes this reaction in the topology in which one $W$
decays hadronically and the other leptonically, the complete event can
be reconstructed including the signs of the $W$ bosons. The $W$ decay
angles contain information on the boson polarizations. So it is
possible to measure the pair production cross section to an accuracy of
a few percent, and also to extract the contribution from $W$ bosons
with longitudinal polarization. The experimental techniques for this
analysis have been reviewed in \cite{barklowM}.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=3.5in\epsfbox{Trhopeak.eps}}
\end{center}
\caption{Technirho resonance effect on the differential cross section
for $e^+e^-\to W^+W^-$ at $\cos\theta = -0.5$. The figure shows
the effect on the various $W$ polarization states.}
\label{fig:thirtyseven}
\end{figure}
Because an $I=1$ resonance appears specifically in the pair-production
of longitudinally polarized $W$ bosons, the resonance peak in the cross
section has associated with it an effect in the $W$ polarizations which
is significant even well below the peak. This effect is seen in
Figure~\ref{fig:thirtyseven}, which shows the differential cross
section for $W$ pair production at a fixed angle as a function of
center-of-mass energy, in a minimal technicolor model with the $I=1$
technirho resonance at 1.8 TeV. By measuring the amplitude for
longitudinal $W$ pair production accurately, then, it is possible to
look for $I=1$ resonances which are well above threshold. In addition,
measurement of
the interference between the transverse and longitudinal $W$ pair
production amplitudes
allows one to determine the phase of the new pion form
factor~\cite{barklowM}. This effect is present even in models with no
resonant behavior, simply by virtue of the relation \leqn{eq:j7} and
the model-independent leading term in \leqn{eq:d7}.
Figure~\ref{fig:thirtysix} shows the behavior of the new pion form
factor as an amplitude in the complex plane as a function of the
center-of-mass energy in the nonresonant and resonant cases.
\begin{figure}[tb]
\begin{center}
\leavevmode
\epsfbox{FPi.eps}
\end{center}
\caption{Dependence of $F_\pi(s)$ on energy, in models without and
with a new strong interaction resonance in the $I=J=1$ channel.}
\label{fig:thirtysix}
\end{figure}
\begin{figure}[tb]
\begin{center}
\leavevmode
\epsfbox{timstrho.eps}
\end{center}
\caption{Determination of the new pion form factor at an $e^+e^-$ linear
collider at 1.5 TeV with an unpolarized data sample of 200 fb$^{-1}$,
from \protect\cite{barklowM}. The simulation results are compared to
a model with a high-mass $I=1$ resonance and the model-independent
contribution to pion-pion scattering. The contour about the
light Higgs point (with no new strong interactions) is a 95\% confidence
contour; that about the point $M = 4$ TeV is a 68\% confidence contour.}
\label{fig:thirtyeight}
\end{figure}
The expectations for the measurement of the new pion form factor at a
1.5 TeV linear collider, from simulation results of Barklow
\cite{barklowM}, are shown in Figure~\ref{fig:thirtyeight}. The
estimated sensitivity of the measurement is compared to the
expectations from a model incorporating the physics I have just
described~\cite{mycoll}. A nonresonant model with scattering in the
$I=1$ channel given only by the scattering length term in \leqn{eq:d7}
is already distinguished from a model with no new strong interactions
at the 4.6 $\sigma$ level, mainly by the measurement of the imaginary
part of $F_\pi$. In addition, the measurement of the resonance effect
\leqn{eq:i7} in the real part of $F_\pi$ can distinguish the positions
of $I=1$ resonances more than a factor two above the collider
center-of-mass energy.
\subsection{Overview of $WW$ scattering experiments}
It is interesting to collect together and summarize the various probes
for resonances in the new strong interactions that I have described in
the previous two sections. I have described both direct studies of $WW$
scattering processes and indirect searches for resonances through their
effect on fermion annihilation to boson pairs. With the LHC and the
$e^+e^-$ linear collider, these reactions would be studied in a number of
channels spanning all of the cases listed in \leqn{eq:b7}. Of course,
with fixed energy and luminosity, we can only probe so far into each
channel. It is useful to express this reach quantitatively and to ask
whether it should give a sufficient picture of the resonance structure
that might be found.
There is a well-defined way to estimate how far one must reach to
have interesting sensitivity to new resonances. The model-independent
lowest order expressions for the $\pi\pi$ scattering amplitudes
\begin{equation}
{\cal M}_I = 32\pi e^{i\delta_I} {s\over A_I} \cdot \cases{ 1 & $J=0$\cr
3 \cos\theta & $J=1$\cr}\ ,
\eeq{eq:k7}
violate unitarity when $s$ becomes sufficiently large, and this gives
a criterion for the value of $s$ by which new resonances must appear
\cite{LQT}. The unitarity violation begins for $s = A_I/2$; with the
values of the $A_I$ given in \leqn{eq:e7}, we find the bounds
\begin{equation}
I=0\ : \quad \sqrt{s} < 1.3 \ \mbox{TeV} \ , \qquad
I=1\ : \quad \sqrt{s} < 3.0 \ \mbox{TeV} \ .
\eeq{eq:l7}
For comparison, if we scale up the QCD resonance masses by the factor
\leqn{eq:q}, we find a techni-$\rho$ mass of 2.0~TeV, well below
the $I=1$ unitarity bound given in \leqn{eq:l7}. It is
interesting to compare these goals to the reach expected for the
experiments we have described.
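As a check of this number, if the scale factor in \leqn{eq:q} is
essentially the ratio of the electroweak scale to the QCD pion decay
constant (my shorthand here; the precise factor was defined earlier), then
$$ m_{\tau\rho} \sim m_\rho \cdot \frac{246\ \mbox{GeV}}{93\ \mbox{MeV}}
 \approx 770\ \mbox{MeV} \times 2.6\times 10^3 \approx 2.0\ \mbox{TeV} \ . $$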
One of the working groups at the recent Snowmass summer study addressed
the question of estimating the sensitivity to new strong interaction
resonances in each of the boson-boson scattering channels that will be
probed by the high-energy colliders \cite{Persis}. Their results are
reproduced in Table 1. Results are given for experiments at the LHC
and at a 1.5 TeV $e^+e^-$ linear collider, with luminosity samples of 100
fb$^{-1}$ per experiment. The method of the study was to use
simulation data from the literature
to estimate the sensitivity to the parameters $M_I$ in
\leqn{eq:d7}, allowing just this one degree of freedom per channel.
Situations with multiple resonances with coherent or cancelling effects
were not considered. Nevertheless, the determination of these basic
parameters should give a
general qualitative picture of the new strong
interactions. The estimates of the sensitivity to these
parameters go well beyond the goals set in
\leqn{eq:l7}.
\begin{table*}[ht]
\begin{center}
\caption[*]{LHC and linear collider (`NLC') sensitivity to resonances
in the new strong interactions, from \protect\cite{Persis}. `Reach'
gives the value of the resonance mass corresponding to an enhancement
of the cross section for boson-boson scattering at the 95\%\ confidence
level obtained in Section VIB2. `Sample' gives a representative set of
errors for the determination of a resonance mass from this enhancement.
`Eff. ${\cal L}$ Reach' gives the estimate of the resonance mass for a 95\%\
confidence level enhancement. All of these estimates are based on
simple parametrizations in which a single resonance dominates the
scattering cross section.\bigskip}
\label{tab:Peskin}
\begin{tabular}{cccccc}
\hline
\hline
Machine & Parton Level Process & I & Reach & Sample & Eff. ${\cal L}$ Reach \\
\hline \\
LHC & $qq' \to qq'ZZ$ & 0 & 1600 & $1500^{+100}_{-70}$& 1500 \\ \\
LHC & $q \bar q \to WZ$ & 1 & 1600 & $1550^{+50}_{-50}$ & \\ \\
LHC & $qq' \to qq'W^+W^+$ & 2 & 1950 & $2000^{+250}_{-200}$& \\ \\
NLC & $e^+e^- \to \nu \bar \nu ZZ$ & 0 & 1800 & $1600^{+180}_{-120}$&
2000 \\ \\
NLC & $e^+e^- \to \nu \bar \nu t \bar t$ & 0 & 1600 & $1500^{+450}_{-160}$&
\\ \\
NLC & $e^+e^- \to W^+W^-$ & 1 & 4000 & $3000^{+180}_{-150}$ & \\ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
If new strong interactions are found, further experiments at higher
energy will be necessary to characterize them precisely. Eventually,
we will need to work out the detailed hadron spectroscopy of these new
interactions, as was done a generation ago for QCD. Some techniques
for measuring this spectrum seem straightforward if the high energy
accelerators will be available. For example, one could measure the
spectrum of $J=1$ resonances from the cross section for $e^+e^-$ or
$\mu^+\mu^-$ annihilation to multiple longitudinal $W$ and $Z$ bosons. I
presume that there are also elegant spectroscopy experiments that can
be done in high-energy $pp$ collisions, though these have not yet been
worked out. It may be interesting to
think about this question.
If the colliders of the next generation do
discover these new strong interactions, the new spectroscopy will be
a central issue of particle physics twenty years from now.
\subsection{Observable effects of extended technicolor}
Beyond these general methods for observing new strong interactions,
which apply to any model in which electroweak symmetry breaking has a
strong-coupling origin, each specific model leads to its own
model-dependent predictions. Typically, these predictions can be tested
at energies below the TeV scale, so they provide phenomena that can
be explored before the colliders of the next generation reach their
ultimate energy and luminosity. On the other hand, these predictions
are specific to their context. Excluding one such phenomenon rules out
a particular model but not necessarily the whole class of
strongly-coupled theories. We have seen an example of this already in
Section 5.2, where the strong constraints on technicolor models from
precision electroweak physics force viable models to have particular
dynamical behavior but do not exclude these models completely.
In this section, I would like to highlight three such predictions
specifically associated with technicolor theories. These three
phenomena illustrate the range of possible effects that might be found.
A systematic survey of the model-dependent predictions of models of
strongly-coupled electroweak symmetry breaking is given in
\cite{Persis}.
All three of these predictions are associated with the extended
technicolor mechanism of quark and lepton mass generation described at
the end of Section 5.1 and in Figure~\ref{fig:twentysix}. To see the first
prediction,
note from the figure that the Standard Model quantum numbers of
the external fermion must be carried either by the techniquark or by
the ETC gauge boson. The simplest possibility is to assign the
techniquarks the quantum numbers of a generation of quarks and leptons
\cite{FS}. Call these fermions $(U,D,N,E)$. The pions of the
technicolor theory, the Goldstone bosons of spontaneously broken chiral
$SU(2)$, have the quantum numbers
\begin{equation}
\pi^+ \sim \bar U \gamma^5 D + \bar N \gamma^5 E \ ,
\pi^0 \sim \bar U \gamma^5 U - \bar D \gamma^5 D + \bar N \gamma^5 N
- \bar E \gamma^5 E \ .
\eeq{eq:m7}
But the theory contains many more pseudoscalar mesons. In fact, in the
absence of the coupling to $SU(3)\times SU(2)\times U(1)$, the model
has the global symmetry $SU(8)\times SU(8)$ (counting each techniquark
as three species), which would be spontaneously broken to a vector
$SU(8)$ symmetry by dynamical techniquark mass generation. This would
produce an $SU(8)$ representation of Goldstone bosons, 63 in all. Of
these, three are the Goldstone bosons eaten by the $W^\pm$ and $Z^0$ in
the Higgs mechanism. The others comprise four color singlet bosons,
for example,
\begin{equation}
P^+ \sim \frac{1}{3} \bar U \gamma^5 D - \bar N \gamma^5 E \ ,
\eeq{eq:n7}
four color triplets, for example,
\begin{equation}
P_3 \sim \bar U \gamma^5 E \ ,
\eeq{eq:o7}
and four color octets, for example,
\begin{equation}
P_8^+ \sim \bar U \gamma^5 t^a D \ ,
\eeq{eq:oo7}
where $t^a$ is a $3\times 3$ $SU(3)$ generator. These additional
particles are known as {\em pseudo-Goldstone bosons} or, more simply,
{\em technipions}.
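For the record, the counting works out as follows, with each complex
color triplet counted together with its anti-triplet:
$$ 63 = 3 + 4\cdot 1 + 2\cdot 4\cdot 3 + 4\cdot 8 = 3 + 4 + 24 + 32 \ , $$
that is, the three eaten Goldstone bosons plus the 60 technipions
enumerated above.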
Phenomenologically, the technipions resemble Higgs bosons with the same
Standard Model quantum numbers. They are produced in $e^+e^-$
annihilation at the same rate as for pointlike charged bosons. The idea
of Higgs bosons with nontrivial color is usually dismissed in studies
of the Higgs sector because this structure is not `minimal'; however,
we see that these objects appear naturally from the idea of
technicolor. The colored objects are readily pair-produced at proton
colliders, and the neutral isosinglet color-octet state can also be
singly produced through gluon-gluon fusion \cite{LaneColl}.
The masses of the technipions arise from Standard Model radiative
corrections and from ETC interactions; these are expected to be of the
order of a few hundred GeV. Technipions decay by a process in which
the techniquarks exchange an ETC boson and convert to ordinary quarks
and leptons. This decay process favors decays to heavy flavors, for
example, $P^+_8 \to \bar t b$. In this respect, too, the technipions
resemble Higgs bosons of a highly nonminimal Higgs sector resulting
from an underlying composite structure.
\begin{figure}[t]
\begin{center}
\leavevmode
{\epsfxsize=4.5in\epsfbox{ETCpairs.eps}}
\end{center}
\caption{Cross section for the production of ETC boson pair states in
$pp$ collisions, from \protect\cite{AandW}. The $\bar E E$ states
are observed as $t \bar t Z^0$ systems of definite
invariant mass. The two
sets of curves correspond to signal and Standard Model background
(with the requirement $|p_\perp(t) | > 50$ GeV) for $pp$
center-of-mass energies of 10 and 20 TeV.}
\label{fig:forty}
\end{figure}
If ETC bosons are needed to generate mass in technicolor models, it is
interesting to ask whether these bosons can be observed directly. In
\leqn{eq:n6}, I showed that the ETC boson associated with the top quark
should have a mass of about 1 TeV, putting it within the mass range
accessible to the LHC. Arnold and Wendt considered a particular
signature of ETC boson pair production at hadron colliders
\cite{AandW}. They assumed (in contrast to the assumptions of the
previous few paragraphs) that the ETC bosons carry color; this allows
these bosons to be pair-produced in gluon-gluon collisions. Because
ETC bosons carry technicolor, they will not be produced as free
particles; rather, the ETC boson pair will form a technihadron $\bar E
E$. These hadrons will decay when the ETC boson emits a top quark and
turns into a techniquark, $E \to T \bar t$. When both ETC bosons have
decayed, we are left with a technicolor pion, which is observed as a
longitudinally polarized $Z^0$. The full reaction is
\begin{equation}
gg \to \bar E E \to \bar E T + \bar t \to Z^0 + t + \bar t \ ,
\eeq{eq:p7}
in which the $t Z^0$ system and the $Z^0 t \bar t$ systems both form
definite mass combinations corresponding to technihadrons. The cross
section for this reaction is shown in Figure~\ref{fig:forty}. Note
that the multiple peaks in the signal show
contributions from both the $J=0$ and the $J=2$ bound
states of ETC bosons.
A second manifestation of ETC dynamics is less direct, but it is
visible at lower energies. To understand this effect, go back to the
elementary ETC gauge boson coupling that produces the top quark mass,
\begin{equation}
\Delta {\cal L} = g_E E_\mu \bar Q_L \gamma^\mu T_L \ ,
\eeq{eq:q7}
where $Q_L = (t,b)_L$ and $T_L = (U,D)_L$. If we put this interaction
together with a corresponding coupling to the right-handed quarks, we
obtain the term \leqn{eq:k6} which leads to the fermion masses. On the
other hand, we could contract the vertex \leqn{eq:q7} with its own
Hermitian conjugate. This gives the vertex
\begin{equation}
i \Delta{\cal L} = (i g_E \bar Q_L\gamma^\mu T_L) {-i\over -m_E^2}
(i g_E \bar T_L \gamma_\mu Q_L) \ .
\eeq{eq:r7}
By a Fierz transformation \cite{PS}, this expression can be rearranged
into
\begin{equation}
i \Delta{\cal L} = {-ig_E^2\over m_E^2} ( \bar Q_L\gamma^\mu \tau^a Q_L)
( \bar T_L\gamma_\mu \tau^a T_L) \ ,
\eeq{eq:rr7}
where $\tau^a$ are the weak isospin matrices. The last factor gives
just the technicolor currents which couple to the weak interaction
vector bosons. Thus, we can replace this factor by
\begin{equation}
\bar T_L\gamma_\mu \tau^3 T_L \to {1\over 4}{e\over cs} f_\pi^2 Z_\mu
\eeq{eq:s7}
Then this term has the interpretation of a technicolor modification of
the $Z\to b\bar b$ and $Z\to t\bar t$ vertices \cite{ChivSS}.
It is not difficult to estimate the size of this effect. Writing the
new contribution to the $Z^0$ vertex together with the Standard Model
contributions, we have
\begin{equation}
\Delta{\cal L} = {e\over cs} Z_\mu \bar Q_L\gamma^\mu \left\{ \tau^3 - s^2 Q
- {g_E^2\over 2 m_E^2} f_\pi^2 \tau^3 \right\} Q_L \ .
\eeq{eq:t7}
For the left-handed $b$, $\tau^3 = -\frac{1}{2}$, and so the quantity in brackets is
\begin{eqnarray}
g_L^b &=& -\frac{1}{2} + \frac{1}{3}s^2 + \frac{1}{4} {g_E^2\over m_E^2} f_\pi^2
\nonumber \\
&=& \bigl( g_L^b \bigr)_{\mbox{\scriptsize SM}} \left( 1 - \frac{1}{2}
{m_t \over 4\pi f_\pi} \right) \ ,
\eeqa{eq:u7}
where in the last line I have used \leqn{eq:n6} to estimate
$g_E/m_E$. The value of the correction, when squared, is about 6\% and
would tend to decrease the branching ratio for $Z^0 \to b\bar b$.
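To make the 6\% explicit (a rough numerical estimate on my part, taking
$m_t \approx 175$ GeV and the technipion decay constant $f_\pi = 246$ GeV
in \leqn{eq:u7}),
$$ \frac{m_t}{4\pi f_\pi} \approx \frac{175}{4\pi \cdot 246} \approx 0.06 \ , $$
so the left-handed coupling is shifted by about 3\%, and the
$Z^0\to b\bar b$ rate, proportional to $(g_L^b)^2$, by about twice that.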
\begin{figure}[tb]
\begin{center}
\leavevmode
{\epsfxsize=2.25in\epsfbox{TCZbb.eps}}
\end{center}
\caption{Modification of the $Z^0 b \bar b$ and $Z^0 t \bar t$
vertices by ETC interactions.}
\label{fig:fortyone}
\end{figure}
The effect that we have estimated is that of the first diagram in
Figure~\ref{fig:fortyone}. In more complicated models of ETC
\cite{CST,KH,Guo}, effects corresponding to both of the diagrams shown in
the figure contribute, and can also have either sign. Typically, the
two types of diagrams cancel in the $Z^0 b \bar b$ coupling and add in
the $Z^0 t \bar t$ coupling \cite{Mur}. Thus, it is interesting to
study this effect experimentally in $e^+e^-$ experiments both at the $Z^0$
resonance and at the $t \bar t$ threshold.
\subsection{Recapitulation}
In this section, I have discussed the future experimental program of
particle physics for the case in which electroweak symmetry breaking
has its origin in new strong interactions. We have discussed
model-independent probes of the new strong interaction sector and
experiments which probe specific aspects of technicolor models. In this
case, as opposed to the case of supersymmetry, some of the most
important experiments can only be done at very high energies and
luminosities, corresponding to the highest values being considered for
the next generation of colliders. Nevertheless, I have argued that, if
plans now proposed can be realized, these experiments form a rich
program which provides a broad experimental view of the new
interactions.
Two sets of contrasting viewpoints appeared in our analysis. The first
was the contrast between experiments that test model-dependent as
opposed to model-independent conclusions. The search for technipions,
for corrections to the $Z t \bar t$ vertex, and for other specific
manifestations of technicolor theories can be carried out at energies
well below the 1 TeV scale. In fact, the precision electroweak
experiments and the current precision determination of the $Z^0 \to
b\bar b $ branching ratio already strongly constrain technicolor
theories. However, such constraints can be evaded by clever
model-building. If an anomaly predicted by technicolor is found, it
will be important and remarkable. But in either case, we will need to
carry out the TeV-energy experiments to see the new interactions
directly and to clearly establish their properties.
The second set of contrasts, which we saw also in our study of
supersymmetry, comes from the different viewpoints offered by $pp$ and
$e^+e^-$ colliders. In the search for anomalies, the use of both types of
experiments clearly offers a broader field for discovery. But these
two types of facilities also bring different information to the more
systematic program of study of the new strong interactions summarized
in Table 1. The table makes quantitative the powerful capabilities of
the LHC to explore the new strong interaction sector. But it also shows
that an $e^+e^-$ linear collider adds to the LHC an exceptional
sensitivity in the $I=1$ channel, reaching well past the unitarity
bound, and sensitivity to the process $W^+W^- \to t\bar t$, which tests
the connection between the new strong interactions and the top quark
mass generation. Again in this example, we see how the LHC and the
linear collider, taken together, provide the information for a broad
and coherent picture of physics beyond the standard model.
\section{Conclusions}
This concludes our grand tour of theoretical ideas about what physics
waits for us at this and the next generation of high-energy colliders.
I have structured my presentation around two specific concrete models
of new physics---supersymmetry and technicolor. These models contrast
greatly in their details and call for completely different experimental
programs. Nevertheless, they have some common features that I would
like to emphasize.
First of all, these models give examples of solutions to the problem I
have argued is the highest-priority problem in elementary particle
physics, the mechanism of electroweak symmetry breaking. Much work has
been devoted to `minimal' solutions to this problem, in which the
future experimental program should be devoted to finding a few Higgs
scalar bosons, or even just one. It is possible that Nature works in
this way. But, for myself, I do not believe it. Through these
examples, I have tried to explain a very different view of electroweak
symmetry breaking, that this phenomenon arises from a new principle of
physics, and that its essential simplicity is found not by counting the
number of particles in the model but by understanding that the model is
built around a coherent physical mechanism. New principles have deep
implications, and we have seen in our two examples that these can lead
to a broad and fascinating experimental program.
If my viewpoint is right, these new phenomena are waiting for us,
perhaps already at the LEP 2 and Tevatron experiments of the next few
years, and at the latest at the LHC and the $e^+e^-$ linear collider. If
the new physical principle that we are seeking explains the origin of
$Z$ and $W$ masses, it cannot be too far away. In each of the models
that I have discussed, I have given a quantitative estimate of the
energy reach required. At the next generation of colliders, we will be
there.
For those of you who are now students of elementary particle physics,
this conclusion comes with both discouraging and encouraging messages.
The discouragement comes from the long time scale required to construct
new accelerator facilities and to carry out the large-scale experiments
that are now required on the frontier. Some of your teachers can
remember a time when a high-energy physics experiment could be done in
one year. Today, the time scale is of order ten years, or longer if the
whole process of designing and constructing the accelerator is
considered.
The experiments that I have described put a premium not only on high
energy but also high luminosity. This means that not only the
experiments but also the accelerator designs required for these studies
will require careful thinking and brilliant new ideas. During the
school, Alain Blondel was fond of repeating, `Inverse picobarns must be
earned!' The price of inverse femtobarns is even higher. Thus, I
strongly encourage you to become involved in the problems of
accelerator design and the interaction of accelerators with
experiments, to search for solutions to the challenging problems that
must be solved to carry out experiments at 1 TeV and above.
The other side of the message is filled with promise. If we can have
the patience to wait over the long time intervals that our experiments
require, and to solve the technical problems that they pose, we will
eventually arrive at the physics responsible for electroweak symmetry
breaking. If the conception that I have argued for in these lectures is
correct, this will be genuinely a new fundamental scale in physics,
with new interactions and a rich phenomenological structure.
Though the experimental discovery and clarification
of this structure will be complex, the accelerators planned for the
next generation---the LHC and the $e^+e^-$ linear collider---will provide
the powerful tools and analysis methods that we will require. This is
the next frontier in elementary particle physics, and it is waiting for
you. Enjoy it when it arrives!
\section*{Acknowledgments}
I am grateful to Belen Gavela, Matthias Neubert, and Nick Ellis for
inviting me to speak at the European School, to Alain Blondel,
Susannah Tracy, and Egil Lillestol for providing the very pleasant
arrangements for the school, to the students at the school for their
enthusiasm and for their criticisms,
and to Erez Etzion, Morris Swartz,
and Ian Hinchliffe for comments on the manuscript.
This work was supported by the Department
of Energy under contract DE--AC03--76SF00515.
\bigskip
\newpage
\section{Introduction}
Magnetic monopoles found after gauge-fixing into the maximum Abelian gauge
have been successful in explaining the fundamental string tension at
$T=0$ in $SU(2)$ lattice gauge theory \cite{jssnrw}. However, there has
been an apparent serious problem with the spacial string tension at finite
temperature \cite{js_lat94}.
This problem has recently been resolved, and the monopole results are now
in good agreement with the full $SU(2)$ answers. This is discussed in more
detail in a recent paper \cite{jssnrw_msum}, so here I will only mention
that the difficulty was with the method of calculation rather than monopoles
themselves. A contribution from Dirac sheets, derived first by Smit
and van der Sijs \cite{smit}, had been omitted. Evidently this is justified
at $T=0$, but not for temperatures above
the deconfining temperature, which corresponds to $N_{t}=8$ for the present
calculations at $\beta=2.5115$ \cite{fingberg}.
In the remainder of this report, I will discuss the contributions of monopoles
to correlators of Polyakov loops, and the closely related question of
how the magnetic current screens itself.
\section{Polyakov Loops}
We denote by $C(R)$ the correlation function of Polyakov loops,
$\langle P(R)\,P(0)\rangle$, where the arguments refer to the spacial locations
of the two Polyakov loops. For temperature $T < T_{c}$, or $N_{t} > 8$,
$C(R)$ can be used to measure a T-dependent physical string tension,
which is expected to approach 0 smoothly for $SU(2)$ lattice gauge theory
as $T \rightarrow T_{c}$.
For $T > T_{c}$, or $N_{t} < 8$, we are in the
deconfined phase and $C(R)$ determines electric screening masses.
The above remarks are for $C(R)$ in full $SU(2)$. Here we are concerned with
how much of this physics can be obtained from monopoles. The monopole
contribution to $C(R)$
is readily calculated using the same method as for Wilson loops.
The naive method \cite{jssnrw} can be used, since numerically the
Dirac sheet contributions are negligible for Polyakov loop correlators.
In the following we denote the correlator of Polyakov loops calculated from
monopoles by $C_{mon}(R)$.
In Fig.~(1) we show $- \ln(C_{mon})/N_{t}$ vs. $R$
for the four temperatures
$T/T_{c} = 0.66$, 1.00, 1.33, and 2.00, corresponding to $N_{t} = 12$, 8, 6, and 4 respectively.
The data points were generated by averaging over 500 widely spaced configurations
at each $N_{t}$. The algorithm used and details of the gathering of configurations are given in
\cite{jssnrw}.
\begin{figure}
\psfig{file=post_poly,width=8cm}
\caption{Log of the monopole Polyakov loop correlator vs. R}
\end{figure}
First taking the case of $N_{t}=12$, or $T/T_{c}=0.66$, the data points
are well fit by a simple linear function. The slope of the straight line
determines the physical string tension, $\sigma_{p}(T)$. The fit gives
$\sigma_{p}(2/3T_{c})= 0.029(2)$. We may translate this into a zero
temperature value using the formula
$$ \sigma_{p}(0)=\sigma_{p}(T)+\frac{\pi}{3N_{t}^{2}}$$
which gives $\sigma_{p}(0)=0.036(2)$, within errors of high precision
numbers for the zero temperature string tension at $\beta=2.5115$.
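(Numerically: for $N_{t}=12$ the shift is $\pi/3N_{t}^{2} = \pi/432 \approx 0.007$,
so $\sigma_{p}(0) \approx 0.029+0.007 = 0.036$.)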
So for finite temperatures $T<T_{c}$, monopoles appear to explain the physical string tension
just as at zero temperature. We have no other runs at finite temperature below $T_{c}$,
but the data for $T=T_{c}$ shows curvature at all values of $R$ implying that
the physical string tension has vanished.
Turning to $T/T_{c}=1.33,2.00$, or $N_{t}=6,4$, Fig.(1) shows a very
different behavior for $-\ln(C_{mon})/N_{t}$. The expected behavior for $-\ln(C)/N_{t}$
in full $SU(2)$ is of the general form $A-B/R^{\gamma}\exp(-\mu(T) R)$, where
$\mu(T)$ is an electric screening mass. At very high temperature, in the perturbative
regime, $\gamma=2$, and $\mu(T)$ can be calculated analytically, starting at
one loop. At lower temperatures, in the non-perturbative regime, $\gamma=1$,
and $\mu(T)$ is determined by the lightest glueball mass in three-dimensional
$SU(2)$ gauge theory. Our data is not extensive enough to determine $\gamma$ and
$\mu(T)$ independently in fits to $-\ln(C_{mon})/N_{t}$. Instead we explored fits
where a value of $\gamma$ was chosen, and then a minimum $\chi^{2}$ was
searched for by varying $\mu(T)$. For the choice $\gamma=2$, we obtain $\mu(T)=
0.0 \pm 0.1$, whereas for $\gamma=1$, the range of allowed screening masses
is slightly larger, $\mu(T) = 0.0 \pm 0.2$. In short, although qualitatively
$-\ln(C_{mon})/N_{t}$ is of the expected general shape, quantitatively we have negligible evidence
for an electric screening mass arising from monopoles at temperatures $T > T_{c}$.
At ultra-high temperatures, this is not surprising, since the
electric screening mass is calculable from gluons in perturbation theory, and
monopoles, although present, are not needed to explain it.
At lower temperatures still satisfying $T > T_{c}$, the electric screening
mass can no longer be calculated perturbatively. Nevertheless, our results suggest
that also here, its origin is not to be explained by monopoles.
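To make the one-parameter scan concrete, here is a minimal Python sketch
of the fit described above (my illustration only, with invented numbers;
it is not the analysis code used for this work):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Screened-Coulomb form for -ln(C)/N_t, with gamma fixed to 1.
def screened(R, A, B, mu):
    return A - B / R * np.exp(-mu * R)

# Hypothetical data; the real input would be the measured monopole
# Polyakov-loop correlators and their statistical errors.
R   = np.arange(2.0, 12.0)
y   = screened(R, 0.50, 0.30, 0.05) + np.random.normal(0.0, 0.01, R.size)
err = np.full(R.size, 0.01)

popt, pcov = curve_fit(screened, R, y, p0=[0.5, 0.3, 0.1],
                       sigma=err, absolute_sigma=True)
print("A, B, mu(T) =", popt, "+/-", np.sqrt(np.diag(pcov)))
\end{verbatim}
Repeating the scan with $\gamma=2$ (replace B/R by B/R**2) gives the
second case quoted above.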
It is important to emphasize
that Polyakov correlators get contributions only from
magnetic currents in purely spacial directions. Meanwhile a spacial Wilson loop, which
determines the spacial string tension, receives contributions from magnetic currents
in both space and time directions. (It is the directions dual to the plane of the
generalized Wilson loop being computed that count.)
In the next section, we will show that the time and space components of the magnetic current
behave very differently.
\section{Magnetic Screening}
We begin by reviewing the reasons for believing the magnetic current screens itself. One
comes from the classic work of Polyakov for d=3 $U(1)$ theory with monopoles \cite{polyakov}. This is
relevant here, since at high temperature a $d=4$ theory effectively looks
three dimensional.
Polyakov's analysis ties an area law for Wilson loops to plasma-like
behavior for the monopole gas, with strong Debye screening. Another argument for
screening comes from the Abelian Higgs model when it is equivalent to
a type II superconductor \cite{wyld}. Taking the ``dual'', the tube of
electric flux connecting a pair of opposite sign external charges is accompanied by strong
screening of the (magnetic) supercurrent in directions perpendicular to the
flux tube. In both cases, confinement is accompanied by strong screening
of the magnetic charges (or currents) which cause confinement.
Returning to our finite temperature calculations, for a Wilson loop in xy, xz, or yz planes, the
magnetic current which contributes is in
tz, ty, or tx planes.
For such purely spacial loops we do see an area law (spacial string tension).
The only component of current common in all these cases is the
time component. This suggests strong screening of the time component of the magnetic
current in spacial directions.
For a Polyakov correlator, the generalized Wilson loop is in
tz, tx, or ty planes and the corresponding magnetic current which contributes is in
xy,yz, or xz planes.
For Polyakov loop correlators we do not have
an area law. Further, the time component of the magnetic current
never contributes. We may then have a consistent situation if strong
screening of the time component of magnetic current is accompanied by weak, partial
screening of the spacial components.
To test these ideas, we study screening
using a formalism explained in more detail
in \cite{jsrw}. An infinitesimal external ``static'' monopole-anti-monopole pair
is inserted
into the system of magnetic current, then the potential between this
pair in momentum space is calculated. The magnitude of the external charges is $\kappa g$,
where $\kappa <<1$, and $g = 2\pi/e$ is the unit of magnetic charge. For the screening of
the time-component of the current, we obtain for the screened potential $V_{\kappa}(k)$,
$$
V_{\kappa}({ k}) =
-g^2V({ k})(1-g^2m_{44}({ k})V({ k})),
$$
where $V({ k})$ is the $d=3$ Fourier Transform (FT) of
the lattice Coulomb potential, and $m_{44}({ k})$ is the
FT of the static magnetic charge
correlation function.
The effect of magnetic vacuum polarization on $V_{\kappa}({ k})$
is contained in
$f({ k})=m_{44}({ k})V({ k})$.
\begin{figure}
\psfig{file=post_fttk,width=8cm}
\caption{Timelike current screening in momentum space}
\end{figure}
In Fig.(2), we show $f(k)$ vs $k^2$ for all four temperatures.
The screening is stronger ($f(k)$ larger) for $T=2T_{c}$, and rather similar for
the other temperatures, consistent with the much larger spacial string tension
for $T=2T_{c}$ \cite{jssnrw_msum}.
More quantitatively, if $V_{\kappa}$ is exponentially screened in
position space, then
$1/f(k)$ should be linear in $k^{2}$ for small $k^{2}$. In Fig.(3) we
show $1/f(k)$ vs. $k^{2}$ for the time-like current.
\begin{figure}
\psfig{file=post_oneoverfbig,width=8cm}
\caption{$1/f(k)$ vs. $k^2$ for the time-like magnetic current}
\end{figure}
The slope defines the square of a
magnetic screening mass $M_{mag}^{sc}$. Fits to the data give
$M_{mag}^{sc}=0.6(1)/a$ for $T/T_{c}=2$, and $0.5(1)/a$ for
$T/T_{c}=1.33,\ 1.0,\ 0.66$.
An intriguing question for the future is the relation between this
screening mass found from the time-like magnetic current and the
magnetic mass found from the gluon propagator \cite{karsch}.
If $V_{\kappa}$ has no long range part, $1/f(0)$ must equal $g^2$.
Using straight line fits to $1/f(k)$, the fitted value of
$1/f(0)=24.3 \pm 1.2$. This compares well with
$$g^{2}=\frac{4 \pi^{2}}{e^2}=\beta \pi^{2} =24.789$$
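The logic behind these two checks can be made explicit with a simple
ansatz (an illustration, not a statement about the data): if the
screening were of pure Yukawa type, $g^{2}f(k)=(1+k^{2}/M^{2})^{-1}$ with
$M \equiv M_{mag}^{sc}$, then
$$ \frac{1}{f(k)} = g^{2}\left(1+\frac{k^{2}}{M^{2}}\right) \ , $$
which is linear in $k^{2}$ with intercept $1/f(0)=g^{2}$, and the screened
potential $V_{\kappa}(k) = -g^{2}V(k)(1-g^{2}f(k))$ then stays finite as
$k \rightarrow 0$, i.e. it has no long range part.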
Turning now to the screening of the spacial current, we change our viewpoint
and regard a spacial direction as the ``static'' one. The resulting screened potential
$V_{\kappa}$ is given by a formula identical to the one above, except that now
the screening function is
$f({ k})=m_{xx}({ k})V({ k})$ if the static direction is chosen as the x-direction, and similarly for y and z.
\begin{figure}
\psfig{file=post_fkkk,width=8cm}
\caption{Spacelike current screening in momentum space}
\end{figure}
In Fig.(4), we show the results averaged over the three choices x,y,z.
Comparing Figs.(2) and (4), we see that the screening of the spacial current,
in contrast to the timelike,
{\it decreases} with temperature. To take the most extreme case, for $T=2T_{c}$,
for the time component of the magnetic current, the screening is strongest ($f(k)$
largest), while for the spacial component of the current, the screening is
weakest ($f(k)$ smallest).
To summarize, we have a consistent picture of monopole
dynamics emerging from study of spacial Wilson loops, Polyakov loop correlators,
and magnetic current screening. The spacial string tension reflects the
strong Debye-like screening of the time component of the magnetic current.
The
Coulombic behavior (zero screening mass) of Polyakov loop correlators reflects the weak screening
of the spacial magnetic current.
This work was supported by the National Science Foundation.
The calculations were carried out on the Cray
C90 system at the San Diego Supercomputer Center (SDSC),
supported in part by the National Science
Foundation.
\section{Introduction}
In the QCD ground state confinement and chiral symmetry breaking are intertwined
as lattice simulations have now established~\cite{LATTICE}. The loss of confinement
with increasing temperature as described by a jump in the Polyakov line is followed
by a rapid cross-over in the chiral condensate for $2+1$ flavors. When the quarks are
in the adjoint representation, the cross over occurs much later than the deconfinement
transition. There is increasing lattice evidence that the topological nature of the underlying
gauge configurations may be key in understanding some aspects of these results~\cite{CALO-LATTICE}.
This work is a continuation of our earlier studies~\cite{LIU1,LIU2,LIU3,LIU5} of the gauge
topology using the instanton-dyon liquid model. The starting points of the model are the KvBLL
instantons threaded by finite holonomies and their splitting into instanton-dyon constituents
\cite{KVLL}, with strong semi-classical interactions~\cite{DP,DPX,LARSEN}. At low temperature,
the phase preserves center symmetry but breaks
spontaneously chiral symmetry. At sufficiently high temperature, the phase restores both
symmetries as the constituent instanton-dyons regroup into topologically neutral
instanton-anti-instanton molecules. The importance of fractional topological constituents for
confinement was initially suggested through instanton-quarks in~\cite{ARIEL}, and more recently
using bions in~\cite{UNSAL}.
The instanton-dyons carry fractional topological charge $1/N_c$ and are able to localize chiral
quarks into zero modes. For quarks in the fundamental representation, as the KvBLL instanton fractionates,
the zero-mode migrates to the heavier instanton-dyon constituent~\cite{KRAAN}. The random
hopping of these zero modes in the instanton-dyon liquid is at the origin of the spontaneous
breaking of chiral symmetry as has been shown both numerically~\cite{SHURYAK1,SHURYAK2}
and using mean-field methods~\cite{LIU2}. In supersymmetric QCD some arguments were
presented in~\cite{TIN}.
At finite temperature the light quarks are subject to anti-periodic boundary conditions on $S^1$
to develop the correct occupation statistics in bulk. General twisted fermionic boundary conditions
on $S^1$ amount to thermal QCD with Bohm-Aharonov phases that fundamentally alter the nature
of the light quarks~\cite{RW,RMT}. A particularly interesting proposal consists of a class of $Z_{N_c}$
twisted QCD boundary conditions with $N_c=N_f$ resulting in a manifestly $Z_{N_c}$ symmetric
QCD dubbed $Z_{N_c}$-QCD~\cite{JAP}. The confined phase is both center and chiral symmetric
eventhough the boundary conditions are flavor breaking. The deconfined phase is center and chiral
symmetry broken~\cite{JAP,TAKUMI}.
The purpose of this paper is to address some aspects of twisted fermionic boundary conditions
in the context of the instanton-dyon liquid model. Since the localization of the
zero-modes on a given instanton species is very sensitive to the nature of the twist on $S^1$,
this deformation offers an insightful tool for the possible understanding of the fundamental aspects
of the spontaneous breaking of chiral symmetry through the underlying topological constituents.
Similar issues were addressed using PNJL models~\cite{JAP} and more recently using monopole-dyons
without anti-monopole-dyons at small $S^1$~\cite{THOMAS}. A numerical analysis
in the instanton-dyon liquid model with $N_f=N_c=2$ was recently presented in~\cite{LARS3}.
In section 2 we briefly review the model and discuss the general case of $N_c=N_f$ twisted boundary conditions.
The special cases of $N_c=N_f=2,3$ are given and the corresponding normalizable zero-modes around the center
symmetric point constructed. We derive explicitly the pertinent hopping matrices between the instanton-dyons and
the instanton-anti-dyons for the case of $N_c=N_f=2,3$ which are central to the quantitative study of the
spontaneous breaking of chiral symmetry.
In section 3 we use a series of fermionization and bosonization transformations to map the instanton-dyon partition function
on a 3-dimensional effective theory. For $N_f>2$,
additional discrete symmetries combining charge conjugation and exchange between conjugate flavor pairs
are identified, with the same chiral condensates at high temperature. In section 4 we derive the effective
potential for the ground state of the 3-dimensional effective theory. We explicitly show that it supports a center symmetric state with spontaneously broken chiral symmetry. The center asymmetric phase at high temperature
supports unequal chiral condensates. Our conclusions are in section 5.
\section{ Effective action with twisted fermions}
\subsection{General setting}
For simplicity we detail here the general setting for $N_c=2$.
The pertinent changes for any $N_c$ will be quoted when
appropriate. For a fixed holonomy with $A_4(\infty)/2\omega_0=\nu \tau^3/2$ and $\omega_0=\pi T$, the
SU(2) KvBLL instanton~\cite{KVLL} is composed of a pair of instanton-dyons labeled by L, M
(instanton-anti-dyons by $\overline {\rm L},\overline {\rm M}$). In general, there are are $N_c-1$ BPS
instanton-dyons and only one twisted instanton-dyon. As a result the global gauge
symmetry is reduced through $SU(N_c)\rightarrow U(1)^{N_c-1}$.
For example, the grand-partition function for dissociated $N_c=2$ KvBLL instantons and anti-instantons
and $N_f$ massless flavors is
\be
{\cal Z}_{1}[T]&&\equiv \sum_{[K]}\prod_{i_L=1}^{K_L} \prod_{i_M=1}^{K_M} \prod_{i_{\bar L}=1}^{K_{\bar L}} \prod_{i_{\bar M}=1}^{K_{\bar M}}\nonumber\\
&&\times \int\,\frac{f_Ld^3x_{Li_L}}{K_L!}\frac{f_Md^3x_{Mi_M}}{K_M!}
\frac{f_Ld^3y_{{\bar L}i_{\bar L}}}{K_{\bar L}!}\frac{f_Md^3y_{{\bar M}i_{\bar M}}}{K_{\bar M}!}\nonumber\\
&&\times {\rm det}(G[x])\,{\rm det}(G[y])\,\left|{\rm det}\,\tilde{\bf T}(x,y)\right|^{N_f}e^{-V_{D\overline D}(x-y)}\nonumber\\
\label{SU2}
\ee
Here $x_{mi}$ and $y_{nj}$ are the 3-dimensional coordinates of the i-th dyon of m-kind
and the j-th anti-dyon of n-kind. Here
$G[x]$ is a $(K_L+K_M)^2$ matrix and $G[y]$ is a $(K_{\bar L}+K_{\bar M})^2$ matrix, whose explicit forms are given in~\cite{DP,DPX}.
$V_{D\bar D}$ is the streamline interaction between ${\rm D=L,M}$ dyons and ${\rm \bar D=\bar L, \bar M}$ antidyons as numerically discussed in~\cite{LARSEN}. For the SU(2) case it is Coulombic asymptotically with a core at short distances~\cite{LIU1}.
We will follow our original discussion with light quarks in~\cite{LIU2}, with the determinantal interactions in (\ref{SU2})
providing for an effective core repulsion on average.
We omit the explicit repulsion between the cores as in~\cite{LIU5}, for simplicity.
The fugacities $f_{i}$ are related to the overall instanton-dyon density, and can be estimated using lattice simulations~\cite{CALO-LATTICE}. Here they are external parameters, with a dimensionless density
\begin{eqnarray}
\label{NTT}
{\bf n}=\frac{4\pi \sqrt{f_Lf_M}}{\omega_0^2}\approx {\bf C}e^{-\frac{S(T)}2}
\end{eqnarray}
For definiteness, the KvBLL instanton action to one-loop is
\begin{eqnarray}
{S(T)}\equiv \frac {2\pi}{\alpha_s(T)}=\left(11\frac{N_c}3-2\frac{N_f}3\right){\rm ln}\left(\frac T{0.36T_D}\right)
\end{eqnarray}
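For orientation (a rough evaluation of these formulas on my part, not a fit):
for $N_c=N_f=2$ at $T=T_D$ one finds $S \approx 6\,{\rm ln}(1/0.36) \approx 6.1$,
so that ${\bf n} \sim {\bf C}\,e^{-3.1} \approx 0.05\,{\bf C}$, an exponentially
dilute ensemble for fixed ${\bf C}$.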
The fermionic determinant ${\rm det}\,\tilde{\bf T}(x,y)$ with twisted quarks will be detailed below.
In many ways (\ref{SU2}) resembles the partition function for the instanton-anti-instanton ensemble~\cite{ALL}.
\subsection{Twisted boundary conditions and normalizable zero modes}
Consider $N_f=N_c$ QCD on $S^1\times R^3$ with the following anti-periodic boundary conditions
modulo a flavor twist in the center of $SU(N_c)$
\begin{eqnarray}
\label{BOUND1}
\psi_f(\beta, \vec x) =-z^{f-1}\psi_f(0, \vec x)
\end{eqnarray}
with $z=e^{i 2\pi/N_c}$ and $f=1,2,3, ...=u,d,s, ...$ respectively. Under a $Z_{N_c}$ twisted gauge transformation of the type
\begin{eqnarray}
\Omega(\beta, \vec x)=z^k\Omega(0, \vec x)
\end{eqnarray}
(\ref{BOUND1}) is $Z_{N_c+N_f}$ symmetric following the flavor relabeling $f+k\rightarrow f$.
As a result the theory is usually referred to as $Z_{N_c}$-QCD~\cite{JAP}.
In contrast, (\ref{BOUND1}) breaks explicitly chiral flavor symmetry through
\begin{eqnarray}
\label{SYMNF}
U_L(N_f)\times U_R(N_f)\rightarrow U_L^{N_f}(1)\times U_R^{N_f}(1)
\end{eqnarray}
To construct explicitly the fermionic zero modes in a BPS or KK dyon with the twisted boundary
conditions (\ref{BOUND1}), we consider the generic boundary condition
\begin{eqnarray}
\label{BOUND2}
\psi(x_4+\beta,\vec x)=-e^{i\phi}\psi(x_4,\vec x)
\end{eqnarray}
and redefine the quark field through $\psi=e^{iT\phi x_4}\tilde \psi$. The latter satisfies
a modified Dirac equation with an imaginary chemical potential $-\phi\,T$~\cite{RW},
\begin{eqnarray}
\label{DIRAC1}
(i\gamma \cdot D-\gamma_4T\phi)\tilde \psi=0
\end{eqnarray}
In a BPS dyon with periodic boundary conditions, the solution to (\ref{DIRAC1}) asymptote
\begin{eqnarray}
\label{DIRAC2}
\tilde\psi\rightarrow e^{-\pi T \nu r\pm \phi T r}
\end{eqnarray}
which is normalizable for $|\phi|<\pi \nu$. For anti-periodic boundary condition, the requirement
for the existence of a normalizable zero mode in a BPS dyon is $|\phi-\pi|<\pi \nu$.
\subsection{Case: $N_c=N_f=3$}
For $N_c=N_f=3$, the flavor twisted boundary condition (\ref{BOUND1}) takes the explicit form
\be
\label{BNC}
&&u(\beta)=-u(0)\nonumber\\
&&d(\beta)=e^{-i\pi/3}d(0)\nonumber\\
&&s(\beta)=e^{+i\pi/3}s(0)
\ee
The d,s boundary conditions in (\ref{BNC})
admit a discrete symmetry under the combined charge conjugation and the flavor
exchange $d\leftrightarrow s$.
The normalizability condition for the quark zero modes following from the flavor twisted boundary conditions
in (\ref{DIRAC1}-\ref{DIRAC2}) shows that $f=1=u$ always supports a normalizable KK zero mode,
while $f=2,3=d,s$ support BPS zero modes that are at the edge of the normalizability domain in
the symmetric phase with $\nu=1/3$. The BPS modes carry a time dependence of the form $e^{\pm \frac{i\omega_0}{3}x_4}$
as $\nu \rightarrow 1/3$, while the KK mode carries a time dependence of the form $e^{i\omega_0 x_4}$. In both cases,
we are restricting the modes to the lowest frequencies in Euclidean $x_4$-time, for simplicity. This means moderately large temperatures, ranging from the center-symmetric to the asymmetric phase.
The explicit form of the twisted zero modes in a BPS dyon and satisfying the twisted boundary condition (\ref{BOUND2}) can be
obtained in closed form in the hedgehog gauge,
\be
\label{ZERO1}
\tilde \psi_{\mp, A\alpha}(r)=(\alpha_1(r)\epsilon+\alpha_2(r) \sigma \cdot \hat r\epsilon)_{A\alpha}
\ee
in color-spin, with $\epsilon_{A\alpha}=-\epsilon_{\alpha A}$ and
\be
\alpha_{1,2}(r)=&&\frac{\chi_{1,2}(r)}{\sqrt{2\pi \nu T r \sinh(2\pi \nu T r)}}\nonumber\\
\chi_1(r)=&&-\frac{\tilde\phi }{\pi \nu}\sinh(\tilde\phi T r)+\tanh(\pi T \nu r)\cosh(\tilde\phi T r)\nonumber\\
\chi_2(r)=&&\mp\left(\frac{\tilde\phi }{\pi \nu}\cosh(\tilde\phi T r)-\coth(\pi T \nu r)\sinh(\tilde\phi T r)\right)\nonumber\\
\ee
Here $\tilde\phi\equiv \phi-\pi$ and $\mp$ refers to $M,\bar M$ respectively.
Asymptotically, the BPS zero modes take the compact form in the hedgehog gauge
\be
\label{ZERO2}
&&(\tilde\psi_{M} \epsilon )(r)\rightarrow \frac{1+{\rm sgn}(\tilde\phi) \sigma \cdot \hat r}{\sqrt{2\pi T \nu r\sinh(2\pi T \nu r)}}
e^{|\tilde\phi|Tr}\nonumber\\
&&(\tilde\psi_{\bar M} \epsilon )(r)\rightarrow \frac{1-{\rm sgn}(\tilde\phi) \sigma \cdot \hat r}{\sqrt{2\pi T \nu r\sinh(2\pi T \nu r)}}
e^{|\tilde\phi|Tr}
\ee
For the KK instanton-dyon, we recall the additional time-dependent gauge transformation from the BPS
instanton-dyon. The explicit form of the zero modes are also similar (\ref{ZERO1}-\ref{ZERO2}) with now
$\tilde \phi=\phi$. We note that for the flavor twisted boundary condition (\ref{BOUND1}), $f=d,s$ corresponds to $\tilde\phi=\mp \pi/3$
(mod $2\pi$) in (\ref{ZERO2}) which are not normalizable BPS zero modes at exactly $\nu=1/3$.
Following our analysis in~\cite{LIU5}, we choose to regulate the
zero modes by approaching the holonomies in the center symmetric phase as follows
($\epsilon_{1,2}\rightarrow +0$)
\be
\label{CENTER3}
\nu_{M1}=&&\frac{1}{3}+\epsilon_1\nonumber\\
\nu_{M2}=&&\frac{1}{3}-\epsilon_2\nonumber\\
\nu_L=&&\frac{1}{3}+\epsilon_2-\epsilon_1
\ee
As a result, the M1-instanton-dyon carries 2 zero modes (d,s), the M2-instanton-dyon carries none, and the
L-dyon carries 1 zero mode (u). This regularization enforces the Nye-Singer index theorem for fundamental quarks~\cite{NS}
and the discrete symmetry noted earlier.
\subsection{Case: $N_c=N_f=2$}
For the case of $N_f=N_c=2$, a more general set of twisted boundary conditions will be analyzed with
\be
\label{BOUND4}
&&u(\beta)=e^{i\theta}(-u(0))\nonumber\\
&&d(\beta)=e^{i\theta}(-e^{i\pi}d(0))
\ee
which is (\ref{BOUND1}) for $\theta=0$. (\ref{BOUND4}) is seen to have the additional discrete
symmetry when $\theta \rightarrow \pi-\theta$ and $u\leftrightarrow d$ at $\nu=1/2$. Thus, only
the range $\theta<\pi/2$ will be considered. In this case, the M-instanton-dyon
carries 1 zero-mode (d), while the L-instanton-dyon carries 1 zero-mode (u).
For (\ref{BOUND4}) the normalizable zero modes are asymptotically of the form (\ref{ZERO2})
with $\phi=\theta$.
For completeness we note the Roberge-Weiss boundary condition~\cite{RW}
\be
\label{BOUND5}
&&u(\beta)=e^{i\theta}u(0)\nonumber\\
&&d(\beta)=e^{i\theta}d(0)
\ee
In the range $0<\theta <\pi/2$, the M-instanton-dyon carries 2 zero modes with none on the L-instanton-dyon.
In the range $\frac{\pi}{2}<\theta<\frac{3\pi}{2}$, the 2 zero modes jump onto the L-instanton-dyon.
In the range $\frac{3\pi}{2}<\theta<2\pi$ they jump back onto the M-instanton-dyon.
We note that for $\theta=\theta_0+\pi/2$ with $0<\theta_0<\pi/2$,
the M-zero mode becomes an L-zero mode with the asymptotic
\begin{eqnarray}
\label{BXX1}
\frac{(1-\sigma \cdot \hat r)}{\sqrt{r\sinh(\pi T r)}}e^{(\pi/2-\theta_0)Tr}e^{i(\theta_0-\pi/2)T x_4}e^{i\pi Tx_4}
\end{eqnarray}
This is to be compared to the case with $\theta=\frac{\pi}{2}-\theta_0$ with the asymptotic
\begin{eqnarray}
\label{BXX2}
\frac{(1+\sigma \cdot \hat r)}{\sqrt{r\sinh(\pi T r)}}e^{(\pi/2-\theta_0)Tr}e^{i(\frac{\pi}{2}-\theta_0)Tx_4}
\end{eqnarray}
\subsection{Twisted fermionic determinant}
The fermionic determinant
can be viewed as a sum of closed fermionic loops connecting all instanton-dyons and instanton-antidyons. Each link
-- or hopping -- between an instanton-dyon and an ${\rm \bar{L}}$-anti-instanton-dyon is described by the hopping chiral
matrix
\begin{eqnarray}
\label{T12}
\tilde {\bf T}(x,y)\equiv \left(\begin{array}{cc}
0&i{\bf T}_{ij}\\
i{\bf T}_{ji}&0
\end{array}\right)
\end{eqnarray}
Each of the entries in ${\bf T}_{ij}$ is a ``hopping amplitude" of a fermionic
zero-mode $\varphi_D$ from an instanton-dyon to a zero-mode
$\varphi_{\bar D}$ (of opposite chirality) of an instanton-anti-dyon
\begin{eqnarray}
{\bf T}_{LR}(x_{LR})=\int d^4x \varphi_{L}^{\dagger}(x-x_L)i(\partial_{4}-i\sigma\cdot\nabla )\varphi_R(x-x_R)\nonumber\\
{\bf T}_{RL}(x_{LR})=\int d^4x \varphi_{R}^{\dagger}(x-x_L)i(\partial_{4}+i\sigma\cdot\nabla )\varphi_L(x-x_R)\nonumber\\
\end{eqnarray}
with $x_{LR}\equiv x_L-x_R$,
and similarly for the other components. In the hedgehog gauge, these matrix elements can be made
explicit in momentum space. Their Fourier transform is
\begin{eqnarray}
\label{TPX}
T_{LR} (p)={\rm Tr}\left(\varphi_L^{\dagger}(p)(-\Phi T-i\sigma\cdot p)\varphi_R(p)\right)
\end{eqnarray}
with $\Phi T$ the contribution from the lowest
Matsubara mode retained. We recall that the use of the zero-modes in the string gauge
to assess the hopping matrix elements, introduces only minor changes in the overall estimates
as we discussed in~\cite{LIU2} (see Appendix A).
\subsubsection{Case $N_c=N_f=3$}
For general $\nu$, we use the Fourier transform of the zero modes
(\ref{ZERO1}) in (\ref{TPX}) to obtain
\be
\label{TPXX}
T_{i}(p)=\Phi_i T(F_{2i}^2(p)-F_{1i}^2(p))+{\rm sgn}(\tilde\phi_i)2pF_{1i}(p)F_{2i}(p)\nonumber\\
\ee
The key physics in the Fourier transforms $F_{1,2}(p)$ is captured by retaining only the flux-induced
mass-like in the otherwise massless asymptotics, i.e.
\begin{eqnarray}
F_{1i}(p)\approx\frac 13 F_{2i}(p)\approx \frac{\omega_0}{(p^2+((\nu-|\tilde\phi_i| /\pi)\omega_0)^2)^{\frac 54}}
\end{eqnarray}
The i-assignments are respectively given by
\begin{eqnarray}
\label{ASSIGN}
i\equiv (\bar L L, \bar M_1M_1, \bar M_2M_2) \qquad
\biggl\{^{\tilde\phi_i=\left(0, -\frac {\pi}3, +\frac {\pi}3\right)}_{\Phi_i=(\pi,-\frac {\pi}3, +\frac {\pi}3)}\biggr.
\end{eqnarray}
In the center symmetric phase with $\nu=1/3$, (\ref{TPXX}) are long-ranged for the M-instanton-dyons,
\begin{eqnarray}
\label{HOP3}
T_3(p)=-T_2(p)\approx \Phi T\frac{8C^2}{p^5}+{\rm sgn}(\tilde\phi)\frac{6C^2}{p^4}
\end{eqnarray}
Here $C$ is a normalization constant fixed by the regularization detailed in (\ref{CENTER3}).
\subsubsection{Case $N_c=N_f=2$}
For $N_c=N_f=2$, the
Fourier transform of the lowest Matsubara zero-mode for both boundaries (\ref{BOUND4}-\ref{BOUND5}) is
\begin{eqnarray}
\psi_M(p)=f_1(p)-i{\rm sgn} (\theta) f_2(p)\sigma\cdot \hat p
\end{eqnarray}
The corresponding hopping matrix is ($0\leq \theta<\pi/2$)
\begin{eqnarray}
\label{HOPSU2}
T_{LR}(p)=\tilde \theta T(f_2^2(p)-f^2_1(p))+{\rm sgn}(\theta) 2pf_1(p)f_2(p)
\end{eqnarray}
with the assignments
\begin{eqnarray}
\tilde\theta=\biggl\{^{\theta-\pi\,\,: u}_{\theta\,\,\,\,\,\,\,\,\,\,: d}\biggr.
\end{eqnarray}
and
\be
\label{F1F2}
&&f_1(p)\approx \frac 13 f_2(p)\approx \frac{\omega_0}{(p^2+((\nu_i-\theta /\pi)\omega_0)^2)^{\frac 54}}
\ee
It follows that
\begin{eqnarray}
\label{ASSIGNX}
T_{LR}(p)\approx f_1(p)^2(8\tilde \theta T+6\, {\rm sgn}(\theta)\, p)
\end{eqnarray}
Using (\ref{BXX1}-\ref{BXX2}) we note that the hopping matrix
element (\ref{ASSIGNX}) satisfies the anti-periodicity condition
\begin{eqnarray}
\label{ANTIX}
T_{LR}(p,\theta_0+\pi/2)=-T_{LR}(p,\theta_0-\pi/2)
\end{eqnarray}
with the $\theta$-argument exhibited for clarity.
\section{SU($N_c$) ensemble}
Following~\cite{DP,LIU1,LIU2} the moduli
determinants in (\ref{SU2}) can be fermionized using $2N_c$ pairs of ghost fields $\chi_{m},\chi^{\dagger}_{m}$ for the
instanton-dyons
and $2N_c$ for the instanton-anti-dyons. The ensuing Coulomb factors from the determinants are then bosonized using $2N_c$ boson fields $v_m,w_m$ for the instanton-dyons and similarly for
the instanton-anti-dyons. The result is
\be
&&S_{1F}[\chi,v,w]=-\frac {T}{4\pi}\int d^3x\nonumber\\
&&\sum_{m=1}^{N_c}\left(| \nabla \chi_m|^2+\nabla v_m \cdot \nabla w_m\right)+\nonumber\\
&&\sum_{\bar m=1}^{N_c}\left(| \nabla \chi_{\bar m}|^2+\nabla v_{\bar m} \cdot \nabla w_{\bar m}\right)
\label{FREE1}
\ee
For the streamline interaction part $V_{D\bar D}$, we note that as a pair
interaction in (\ref{SU2}) between the instanton-dyons and instanton-anti-dyons, it can be bosonized using
standard methods~\cite{POLYAKOV,KACIR} in terms of $\vec \sigma$ and $\vec b$ fields. As a result, each dyon species acquires additional fugacity factors of the form
\begin{eqnarray}
M:e^{-\vec\alpha_i \cdot \vec b+i\vec \alpha_{i}\cdot \vec \sigma} \qquad \bar M:e^{-\vec\alpha_i \cdot \vec b-i\vec \alpha_{i}\cdot \vec \sigma}
\end{eqnarray}
with $\vec\alpha_i$ for $i=1,2,\ldots, N_c-1$ the $i$th simple root of the $SU(N_c)$ Lie algebra, and $i=N_c$ labeling its affine root
due to compactness. Therefore, there is an additional contribution to the free part (\ref{FREE1})
\begin{eqnarray}
S_{2F}[\sigma, b]=\frac T{8} \int d^3x\, \left(\nabla \vec b\cdot\nabla \vec b+ \nabla \vec \sigma\cdot\nabla\vec \sigma\right)
\label{FREE2}
\end{eqnarray}
where for simplicity we approximated the streamline by a Coulomb interaction, and the interaction part is now
\be
&&S_I[v,w,b,\sigma,\chi]=-\int d^3x \nonumber\\
&&\left(\sum_{i=1}^{N_c}e^{-\vec\alpha_i \cdot \vec b+i\vec \alpha_{i}\cdot \vec \sigma}f_i\right.\nonumber\\
&&\times\left. \left(4\pi v_i+|\chi_i -\chi_{i+1}|^2+v_i-v_{i+1}\right)e^{w_i-w_{i+1}}\right.\nonumber\\
&&\left.+\sum_{\bar i=1}^{N_c}e^{-\vec\alpha_{\bar i} \cdot \vec b-i\vec \alpha_{\bar i}\cdot \vec \sigma}f_{\bar i}\right.\nonumber\\
&&\left.\times \left(4\pi v_{\bar i}+|\chi_{\bar i} -\chi_{\bar i+1}|^2+v_{\bar i}-v_{\bar i+1}\right)e^{w_{\bar i}-w_{\bar i+1}}\right)\nonumber\\
\label{FREE3}
\ee
without the fermions. We now show the minimal modifications to (\ref{FREE3}) when the fermionic determinantal
interaction is included.
\subsection{Fermionic fields}
To fermionize the determinant in (\ref{SU2})
and for simplicity, consider first the case of $N_f=1$ fermionic zero-modes attached to the $k$th instanton-dyon, and
define the additional Grassmannians $\chi=(\chi^i_1,\chi^j_2)^T$ with $i,j=1,\ldots, K_{k,\bar k}$ so that
\begin{eqnarray}
\left|{\rm det}\,\tilde{\bf T}\right| =\int D[\chi]\,\, e^{\,\chi^\dagger \tilde {\bf T} \, \chi}
\label{TDET}
\end{eqnarray}
We can re-arrange the exponent in (\ref{TDET}) by defining a Grassmannian source $J(x)=(J_1(x),J_2(x))^T$ with
\begin{eqnarray}
J_1(x)=\sum^{K_k}_{i=1}\chi^i_1\delta^3(x-x_{ki})\nonumber\\
J_2(x)=\sum^{K_{\bar k}}_{j=1}\chi^j_2\delta^3(x-y_{\bar k j})
\label{JJ}
\end{eqnarray}
and by introducing 2 additional fermionic fields $ \psi_k(x)=(\psi_{k1}(x),\psi_{k2}(x))^T$. Thus
\begin{eqnarray}
e^{\,\chi^\dagger \tilde {\bf T}\,\chi}=\frac{\int D[\psi]\,{\rm exp}\,(-\int\psi_k^\dagger \tilde {\bf G}\, \psi_k +
\int J^\dagger \psi_k + \int\psi_k^\dagger J)}{\int
D[\psi]\, {\rm exp}\,(-\int \psi_k^\dagger \tilde {\bf G} \,\psi_k) }
\label{REFERMIONIZE}
\end{eqnarray}
with $\tilde{\bf G}$ a $2\times 2$ chiral block matrix
\begin{eqnarray}
\tilde {\bf G}= \left(\begin{array}{cc}
0&-i{\bf G}(x,y)\\
-i{\bf G}(x,y)&0
\end{array}\right)
\label{GG}
\end{eqnarray}
with entries ${\bf TG}={\bf 1}$. The Grassmannian source contributions in (\ref{REFERMIONIZE}) generate a string
of independent exponents for the k-instanton-dyons and $\bar{\rm k}$-instanton-anti-dyons
\begin{eqnarray}
\prod^{K_k}_{i=1}e^{\chi_1^{i\dagger} \psi_{k1}(x_{ki})+\psi_{k1}^\dagger(x_{ki})\chi_1^i}\nonumber \\ \times
\prod^{K_{\bar k}}_{j=1}e^{\chi_2^{j\dagger} \psi_{k2}(y_{\bar k j})+\psi_{k2}^\dagger(y_{\bar k j})\chi_2^j}
\label{FACTOR}
\end{eqnarray}
The Grassmannian integration over the $\chi_i$ in each factor in (\ref{FACTOR}) is now readily done to yield
\begin{eqnarray}
\prod_{i}[-\psi_{k1}^\dagger\psi_{k1}(x_{ki})]\prod_j[-\psi_{k2}^\dagger\psi_{k2}(y_{\bar k j})]
\label{PLPR}
\end{eqnarray}
for the k-instanton-dyon and $\bar {\rm k}$-instanton-anti-dyon.
The net effect of the additional fermionic determinant in (\ref{SU2}) is to shift the k-instanton-dyon
and $\bar{\rm k}$-instanton-anti-dyon fugacities in (\ref{FREE3}) as follows
\be
f_k\rightarrow -f_k\psi_{k1}^\dagger\psi_{k1}\equiv -f_k\psi_k^\dagger\gamma_+\psi_k\nonumber\\
f_{\bar k}\rightarrow -f_{\bar k}\psi_{k2}^\dagger\psi_{k2}\equiv -f_{\bar k}\psi_k^\dagger\gamma_-\psi_k
\label{SUB}
\ee
where we have now identified the chiralities with $\gamma_\pm=(1\pm \gamma_5)/2$. Note that
for the instanton-dyons and instanton-anti-dyons with no zero-mode attached, the fugacities remain unchanged.
\subsection{Resolving the constraints}
In terms of (\ref{FREE1}-\ref{FREE3}) and the substitution
(\ref{SUB}), the instanton-dyon partition function (\ref{SU2})
for finite $N_f$ can be exactly re-written as an interacting
effective field theory in 3-dimensions,
\be
{\cal Z}_{1}[T]\equiv &&\int D[\psi]\,D[\chi]\,D[v]\,D[w]\,D[\sigma]\,D[b]\,\nonumber\\&&\times
e^{-S_{1F}-S_{2F}-S_{I}-S_\psi}
\label{ZDDEFF}
\ee
with the additional chiral fermionic contribution $S_\psi=\psi^\dagger\tilde{\bf G}\,\psi$.
Since the effective action in (\ref{ZDDEFF}) is linear in the $v_{M,L,\bar M,\bar L}$, the latter
integrate to give the following constraints
\be
\label{DELTAX}
&&-\frac{T}{4\pi}\nabla^2w_k+f_ke^{
\vec\alpha_{k}\cdot (-\vec b+i
\vec \sigma)}\prod_f \psi_{kf}^\dagger\gamma_+\psi_{kf} e^{w_k-w_{k+1}}\nonumber\\&&
-f_{k-1}e^{\vec\alpha_{k-1}\cdot (-\vec b+i
\vec\sigma)}\prod_f \psi_{k-1f}^\dagger\gamma_+\psi_{k-1f}\,{\mbox{e}}^{w_{k-1}-w_k}=0\nonumber\\
\ee
and similarly for the anti-dyons.
To proceed further, the formal classical solutions $w[\sigma, b]$ of the constraint equations
should be inserted back into the 3-dimensional effective action. The result is
\be
{\cal Z}_{1}[T]=\int D[\psi]\,D[\sigma]\,D[b]\,e^{-S}
\label{ZDDEFF1}
\ee
with the 3-dimensional effective action
\be
&&S=S_F[\sigma, b]+\int d^3x\,\sum_f \psi_f^\dagger \tilde{\bf G}_f \psi_f\nonumber\\
&& +\sum_{k=1}^{N_c}4\pi f_kv_k\int d^3x\,\prod_{f} \psi_{kf}^\dagger\gamma_+\psi_{kf}\,e^{w_k-w_{k+1}+\vec\alpha_{k}\cdot (-\vec b+i\vec \sigma)}\nonumber\\
&&+\sum_{\bar k=1}^{N_c}4\pi f_{\bar k}v_{\bar k}\int d^3x\,\prod_{f} \psi_{\bar kf}^\dagger\gamma_-\psi_{\bar kf}
\,e^{w_{\bar k}-w_{\bar k+1}+\vec\alpha_{\bar k}\cdot (-\vec b+i\vec \sigma)}\nonumber\\
\label{NEWS}
\ee
Here $S_F$ is $S_{2F}$ in (\ref{FREE2}) plus additional contributions resulting from inserting the solutions $w(\sigma, b)$
of the constraint equations (\ref{DELTAX}) back into the action. This procedure, in the linearized approximation of the constraint,
was discussed in~\cite{LIU1,LIU2}.
For the general case with
\begin{eqnarray}
\tilde{\bf G}_1\neq \tilde{\bf G}_2\neq ...\neq \tilde{\bf G}_{N_f}
\end{eqnarray}
these contributions in (\ref{NEWS}) are only
$U_L^{N_f}(1)\times U_R^{N_f}(1)$ symmetric, which is commensurate with (\ref{SYMNF}). The determinantal interactions preserve
the individual $U_{L+R}(1_k)$ vector flavor symmetries, but upset the individual $U_{L-R}(1_k)$
axial flavor symmetries. However, the latter induce the shifts
\begin{eqnarray}
\label{BACK0}
\psi_{kf}^\dagger\gamma_\pm \psi_{kf}\rightarrow e^{2\xi_k}\psi_{kf}^\dagger\gamma_\pm \psi_{kf}
\end{eqnarray}
which can be re-absorbed by shifting back the constant magnetic contributions
\begin{eqnarray}
\label{BACK}
\vec\alpha_{\bar k}\cdot (-\vec b+i\vec \sigma)\rightarrow \vec\alpha_{\bar k}\cdot (-\vec b+i\vec \sigma)-2\xi_k
\end{eqnarray}
thanks to the free form in (\ref{FREE2}). This observation is unaffected by the screening of the
magnetic-like field, since a constant shift $\vec b\rightarrow \vec b+2\xi_k$ can always be reset
by a field redefinition. This hidden symmetry was noted recently in~\cite{THOMAS}.
We note that this observation holds for the general form of the
streamline interaction used in~\cite{LIU2} as well, due to its vanishing form in momentum space. From
(\ref{BACK}) it follows that $\sum_k\xi_k=0$, so that only the axial flavor singlet $U_{L-R}(1)$ is explicitly
broken by the determinantal contributions in (\ref{NEWS}) as expected in the instanton-dyon-anti-dyon
ensemble. As a result, (\ref{NEWS}) is explicitly $U_L^{N_f}(1)\times U_R^{N_f}(1)/U_{L-R}(1)$ symmetric.
\subsection{Special cases: $N_c=N_f=2,3$}
For the case $N_c=N_f=3$ with the twisted boundary condition (\ref{BNC}), the fermionic
terms in the effective action (\ref{NEWS}) are explicitly
\be
\label{SF3X}
&&\psi^{\dagger}_u \tilde G_1\psi_u+\psi_d^{\dagger}\tilde G_2\psi_d+\psi_s^{\dagger}\tilde G_3\psi_s\nonumber \\
&&+4\pi f_1\nu_1\psi^{\dagger}_u\gamma_{+}\psi_ue^{w_1-w_2}\nonumber\\
&&+4\pi f_2\nu_2\psi_d^{\dagger}\gamma_{+}\psi_d\psi_s^{\dagger }\gamma_{+}\psi_se^{w_2-w_3}
+4\pi f_3\nu_3e^{w_3-w_1}\nonumber\\
&&+4\pi f_{\bar 1}\bar\nu_{1}\psi^{\dagger}_u\gamma_{-}\psi_ue^{\bar w_1-\bar w_2}\nonumber\\
&&+4\pi f_{\bar 2}\bar\nu_{2}\psi_d^{\dagger}\gamma_{-}\psi_d\psi_s^{\dagger }\gamma_{-}\psi_se^{\bar w_2-\bar w_3}
+4\pi f_{\bar 3}\bar\nu_{3}e^{\bar w_3-\bar w_1}\nonumber\\
\ee
following the regularization (\ref{CENTER3}) around the center symmetric point. As noted earlier,
(\ref{SF3X}) is explicitly symmetric under the combined
charge conjugation and the flavor exchange $d\leftrightarrow s$ since $\tilde G_2=-\tilde G_3\neq \tilde G_1$.
With this in mind, (\ref{SF3X}) is symmetric under
$(U^3_L(1)\times U^3_R(1))/U_{L-R}(1)$.
For the case $N_c=N_f=2$ with the twisted boundary condition (\ref{BOUND4}), the fermionic
terms in the effective action (\ref{NEWS}) are now
\be
&&f_Mv_M\psi_d^{\dagger}\gamma_{+}\psi_de^{w_M-w_L}+f_Lv_L\psi_u^{\dagger}\gamma_{+}\psi_ue^{w_L-w_M} \nonumber\\
&&+f_{\bar M}v_{\bar M}\psi_d^{\dagger}\gamma_{-}\psi_de^{w_{\bar M}-w_{\bar L}}
+f_{\bar L}v_{\bar L}\psi_u^{\dagger}\gamma_{-}\psi_ue^{ w_{\bar L}-w_{\bar M}} \nonumber\\
\ee
while for the Roberge-Weiss boundary condition (\ref{BOUND5}) they are
\be
&&f_Mv_M\psi_u^{\dagger}\gamma_{+}\psi_u\psi_d^{\dagger}\gamma_{+}\psi_de^{w_M-w_L}+f_Lv_Le^{w_L-w_M} \nonumber\\
&&+f_ {\bar M}v_{\bar M}\psi_u^{\dagger}\gamma_{-}\psi_u\psi_d^{\dagger}\gamma_{-}\psi_de^{w_{\bar M}-w_{\bar L}}
+f_{\bar L}v_{\bar L}e^{w_{\bar L}-w_{\bar M}} \nonumber\\
\ee
\section{Equilibrium state}
To analyze the ground state and the fermionic fluctuations we bosonize the fermions
in (\ref{ZDDEFF1}-\ref{NEWS}) by introducing the identities
\be
\label{deltax}
&&\int D[\Sigma_k]\,\delta\left(\psi^\dagger_k(x)\psi_k(x)+2\Sigma_k(x)\right)={\bf 1}
\ee
and by re-exponentiating them to obtain
\be
{\cal Z}_{1}[T]=\int D[\psi]\,D[\sigma]\,D[b]\,D[\vec\Sigma]\,D[\vec\Lambda]\,
e^{-S-S_C}\nonumber\\
\label{ZDDEFF2}
\ee
with
\be
\label{SCx}
&&-S_C=\int d^3x \,i\Lambda_k(x)(\psi_k^{\dagger}(x)\psi_k(x)+2\Sigma_k(x))
\ee
The ground state is parity even so that $f_{L,M}=f_{\bar L, \bar M}$.
By translational invariance, the ground state corresponds to constant $\sigma, b, \vec\Sigma, \vec\Lambda$.
We will seek the extrema of (\ref{ZDDEFF2}) with finite condensates in the mean-field approximation, i.e.
\be
\label{DEFCC}
&&\left<\psi^\dagger_k(x)\psi_l(x)\right>=-2\delta_{kl}\Sigma_k
\ee
With this in mind, the classical solutions to the constraint equations (\ref{DELTAX}) are also constant
\be
\label{SU2SOL}
&&f_k \left< \prod _f\psi^{\dagger}_{kf}\gamma_{+}\psi_{kf} \right>e^{w_k-w_{k+1}}
\nonumber \\
&&=f_{k+1}\left<\prod_f \psi_{k+1f}^\dagger\gamma_+\psi_{k+1f}\right>\,{\mbox{e}}^{w_{k+1}-w_{k+2}}
\ee
with
\be
\label{SU2SOLx}
\left<\prod_f \psi_{kf}^\dagger\gamma_+\psi_{kf}\right>=\prod_{f}\Sigma_{kf}
\ee
and similarly for the anti-dyons. The expectation values in (\ref{SU2SOL}-\ref{SU2SOLx})
are carried out in (\ref{ZDDEFF2}) in the mean-field approximation through Wick contractions.
\subsection{$N_c=N_f=3$ in symmetric phase}
In the center-symmetric phase, with all holonomies being equal $\nu_{1,2,3}=1/3$, the pressure simplifies to
\be
\label{P33}
{\cal P}_{uds}-{\cal P}_{per}=&&8\pi (f_1f_2f_3)^{\frac 13}(\Sigma_{u}\Sigma_{d}\Sigma_{s})-2\vec \Lambda\cdot \vec \Sigma\nonumber \\
&&+\sum_{i=1}^{3}\int \frac{d^3p}{(2\pi)^3}\ln(1+\Lambda_i^2|T_{i}|^2(p))\nonumber\\
\ee
with the individual fermionic terms being
\be
{\cal P}_i\equiv &&\int \frac{d^3p}{(2\pi)^3}\ln(1+\Lambda_i^2|T_{i}|^2(p))\nonumber\\
\equiv &&\omega_0^3\int \frac{d^3 \tilde p}{(2\pi)^3}
\ln \left(1+\frac{\tilde \Lambda_i^2}{\tilde p^8}\left(1+\frac{4|\tilde\phi_i|}{3\pi\tilde p}\right)^2\right)
\ee
Here $\tilde p=p/\omega_0$ and $\tilde\Lambda_i=\Lambda_i/\omega^2_0$ are dimensionless. From (\ref{ASSIGN}),
we recall the assignment of quark phases $(\tilde\phi_1,\tilde\phi_2,\tilde\phi_3)=(\pi,-\pi/3, +\pi/3)$, for $(u,d,s)$ respectively.
The center symmetric phase spontaneously breaks chiral symmetry, as the gap equations have nonzero
solutions. Each of the flavor chiral
condensates is found to be
\begin{eqnarray}
\label{QQ3}
\frac{\left<\bar q q\right>_{\tilde\phi_i}}{T^3}=2\pi^2\tilde \Lambda_i
\int \frac{d^3\tilde p}{(2\pi)^3}\frac{\frac{5}{3\tilde p^5}}{1+\frac{\tilde \Lambda_i^2}{\tilde p^8}\left(1+\frac{4|\tilde\phi_i|}{3\pi\tilde p}\right)^2}
\end{eqnarray}
We now note that at asymptotically low temperatures, the $1/p^4$ contribution in the hopping matrix element (\ref{HOP3}) is dominant.
\subsection{$N_c$=$N_f=3$ in general asymmetric phase}
In the general asymmetric phase the holonomies take values away from the center
\be
\label{H123}
\nu_1=&&\frac{1}{3}+\epsilon_1\nonumber\\
\nu_2=&&\frac{1}{3}-\epsilon_2\nonumber\\
\nu_3=&&1-\nu_1-\nu_2
\ee
Note that in general, the parameters $\epsilon_{1,2}$ are not small.
With these choices for the holonomies (\ref{H123}), the u-flavor rides the L-instanton-dyon,
and the d-, s-flavors ride the $M_{1,2}$-instanton-dyons. For the d-, s-flavors, the hopping matrix elements between the
instanton-dyon and anti-instanton-dyon are given by
\be
&&T_d(p)=-T_s(p)=\nonumber\\
&&\frac{\pi T}{3}(F_2^2(p)-F_1^2(p))+2ipF_1(p)F_2(p)
\ee
with
\begin{eqnarray}
\label{XF1X}
F_1(p)\approx \frac 1{3}{F_2}(p)\approx \frac{\omega_0}{(p^2+((\nu_1-1/3)\omega_0)^2)^{\frac{5}{4}}}
\end{eqnarray}
while for the u-quarks it is
\begin{eqnarray}
T_u(p)=\pi T(f_2^2(p)-f_1^2(p))+2ipf_2(p)f_1(p)
\end{eqnarray}
with
\begin{eqnarray}
f_1(p)\approx \frac 1{3} {f_2}(p)\approx \frac{\omega_0}{(p^2+(\nu_3\omega_0)^2)^{\frac{5}{4}}}
\end{eqnarray}
In the mean-field approximation, the modification of the effective pressure is
\be
\label{Puds}
{\cal P}_{uds}-{\cal P}_{per}&&=+24\pi(f_1f_2f_3\nu_1\nu_2\nu_3\Sigma_d^2\Sigma_u)^{\frac 13}\nonumber\\
&&-4\Sigma_d\Lambda_d-2\Sigma_u\Lambda_u\nonumber\\
&&+\int \frac{d^3p}{(2\pi)^3}\ln\left((1+\Lambda_d^2|T_d|^2)^2(1+\Lambda_u^2|T_u|^2)\right)\nonumber\\
\ee
where ${\cal P}_{\rm per}$ is the perturbative contribution with twisted quark boundary conditions~\cite{RW}.
For $\nu_1\rightarrow 1/3$ the holonomy induced mass-like contribution in (\ref{XF1X}) becomes arbitrarily small.
As we noted earlier, we use it to regulate the infrared sensitivity of the ds-contributions in (\ref{Puds}) through a suitable
redefinition of the fugacities $f_{2,3}$ as in~\cite{LIU5}. With this in mind,
the extrema of (\ref{Puds}) with respect to $\Sigma, \Lambda$ yield the
respective gap equations
\be
\label{SADX}
\Lambda_d=&&4\pi f(\nu_1\nu_2\nu_3)^{\frac 13}\left(\frac{\Sigma_u}{\Sigma_d}\right)^{\frac 13}\nonumber\\
\Lambda_u=&&4\pi f(\nu_1\nu_2\nu_3)^{\frac 13}\left(\frac{\Sigma_d}{\Sigma_u}\right)^{\frac 23}\nonumber\\
\Sigma_i=&&\int \frac{d^3p}{(2\pi)^3}\frac{\Lambda_i|T_i(p)|^2}{1+\Lambda_i^2|T_i(p)|^2}
\ee
Using (\ref{SADX}) in (\ref{Puds}) results in the shifted pressure at the saddle point
\be
&&{\cal P}_{uds}-{\cal P}_{per}=\nonumber\\
&&\int \frac{d^3p}{(2\pi)^3}{\rm ln}
\left[(1+\Lambda_d^2|T_d|^2)^2 (1+(\frac{\tilde \Lambda_0^3}{\Lambda_d^2})^2|T_u|^2)\right]\nonumber\\
\ee
with $\tilde \Lambda_0=4\pi f(\nu_1\nu_2\nu_3)^{\frac 13}$.
We note that the gap equation follows from ${d{\cal P}}/{d\Lambda_d}=0$.
The chiral condensates follow from standard arguments as
\be
\left<\bar d d\right>=\left<\bar s s\right>=&&2\Lambda_d T
\int \frac{d^3p}{(2\pi)^3}\frac{F_1^2(p)+F^2_2(p)}{1+\Lambda_d^2|T_{d}(p)|^2}\nonumber\\
\left<\bar u u\right>=&&2\Lambda_u T
\int \frac{d^3p}{(2\pi)^3}\frac{f_1^2(p)+f^2_2(p)}{1+\Lambda_u^2|T_{u}(p)|^2}
\ee
In contrast, at asymptotically high temperatures, the $1/p^5$ contribution in the hopping matrix element (\ref{HOP3}) is dominant.
Therefore the u-hopping differs from the d- and s-hoppings, with $T_{1}(p)\approx 3T_{2}(p)$. The extrema
of the pressure in $\Lambda_{1,2,3}$ are now found to be
\begin{eqnarray}
3\Lambda_1=\Lambda_2=\Lambda_3=\frac{4\pi T} 3 (3\nu_1\nu_2\nu_3f_1f_2f_3)^{\frac 13}
\end{eqnarray}
with distinct chiral condensates
\begin{eqnarray}
\label{QQRATIO}
3\left<\bar u u\right>\approx \left<\bar d d\right>\approx \left<\bar s s\right>\approx 0.78\,T^3(\tilde \Lambda_2)^{\frac 35}
\end{eqnarray}
The high temperature phase breaks flavor symmetry but preserves the discrete combined charge conjugation symmetry
and the exchange $d\leftrightarrow s$.
As a check on these observations, we note that for $\tilde \Lambda \approx 1$, the chiral condensates
in (\ref{QQ3}) are numerically close
\be
\left<\bar q q\right>_{\tilde\phi=\pi} \approx && 0.61\,T^3\nonumber\\
\left<\bar q q\right>_{\tilde\phi=\frac \pi 3}\approx && 0.76T^3
\ee
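These estimates follow from a one-dimensional radial reduction of (\ref{QQ3}). A minimal numerical sketch (our own illustration, with all momenta in units of $\omega_0$ and assuming the asymptotic forms above) evaluates the condensate for given $\tilde\Lambda$ and $\tilde\phi$:
\begin{verbatim}
# Radial reduction of (QQ3): <qq>/T^3 = L * int_0^inf dp (5/(3 p^3)) /
# (1 + (L^2/p^8) (1 + 4|phi|/(3 pi p))^2), with L = tilde-Lambda.
import numpy as np
from scipy.integrate import quad

def condensate(lam, phi):
    def integrand(p):
        hop = (lam**2 / p**8) * (1.0 + 4.0 * abs(phi) / (3.0 * np.pi * p))**2
        return (5.0 / (3.0 * p**3)) / (1.0 + hop)
    val, _ = quad(integrand, 1e-6, np.inf, limit=200)
    return lam * val

for phi in (np.pi, np.pi / 3.0):
    print(phi, condensate(1.0, phi))
\end{verbatim}
Running it at $\tilde\Lambda=1$ for $\tilde\phi=\pi$ and $\tilde\phi=\pi/3$ provides a direct numerical check of the quoted estimates.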
The remaining task is to solve the gap equations for the
four remaining parameters $\Lambda_d,\Lambda_u,\epsilon_1,\epsilon_2$.
The numerical analysis of those equations will be presented elsewhere.
\subsection{$N_c=N_f=2$ in symmetric phase}
The analysis of the $N_f=N_c=2$ case follows similar arguments, using the
twisted boundary conditions (\ref{BOUND4}) for $\pi\nu>\theta$. In this case the
u-flavor rides the L-dyon, and the d-flavor rides the M-dyon with the hopping
matrices
\be
\label{TUPXX}
T_u(p)=&&(\pi -\theta )T({\tilde f}_2^2(p)-{\tilde f}_1^2(p))+2ip{\tilde f}_1(p){\tilde f}_2(p)\nonumber\\
T_d(p)=&&\theta T (f_2(p)^2-f_1^2(p))+2ipf_1(p)f_2(p)
\ee
with
\begin{eqnarray}
\label{f12X}
f_1(p)\approx \frac 13 f_2(p)\approx \frac{\omega_0}{(p^2+((\nu-\theta /\pi)\omega_0)^2)^{\frac 54}}
\end{eqnarray}
${\tilde f}_{1,2}$ follow from $f_{1,2}$ using the substitution $\theta\rightarrow -\pi+\theta$.
We note that for $\theta=0$, the first contribution in $T_d$ vanishes, since the d-boundary is
periodic with zero Matsubara frequency. In $T_u$ it is proportional to the Matsubara frequency,
since the u-boundary is anti-periodic. This difference comes in addition to the different mass-like contributions
induced by the holonomy (d: $\nu\omega_0$ and u: $\tilde\nu\omega_0$),
which regulate the small-momenta (large-distance) behavior of the hopping amplitudes
and cause the flavor condensates to be relatively different.
In the mean-field limit, the non-perturbative pressure is
\be
\label{P22}
{\cal P}_{ud}-{\cal P}_{\rm per}=&& 16\pi f(\nu_1\nu_2\Sigma_1\Sigma_2)^{\frac 12}-2\Lambda_1\Sigma_1-2\Lambda_2\Sigma_2\nonumber \\
&&+\sum_{i=1,2}\int \frac{d^3p}{(2\pi)^3}\ln(1+\Lambda_i^2|T_{i}(p)|^2)
\ee
while the perturbative one (with our twisted boundary conditions) is given by
\be
&&{\cal P}_{\rm per}=-\frac{4\pi^2T^3}3\left(\nu_1\nu_2\right)^2\nonumber\\
&&-\frac{4T^3}{\pi^2}\sum_f\sum_{n=1}^{\infty}\frac{(-1)^ne^{i\theta_fn}}{n^4}{\rm Tr}_{f}L^n\nonumber\\
\ee
The first contribution comes from the gluons, while the second contribution comes from
the twisted quarks. The Polyakov line $L$ is in the fundamental representation,
with the flavor twist explicitly factored out.
The dominant contribution in the sum stems from the $n=1$ term. Note that
for $\theta_1=0$ and $\theta_2=\pi$, the fermionic contribution almost cancels.
The gap equations related to the parameters $\Lambda_i,\Sigma_i$ are
\be
\label{GAP2F}
&&\Lambda_1=4\pi f(\nu_1\nu_2)^{\frac 12}\left(\frac{\Sigma_2}{\Sigma_1}\right)^{\frac{1}{2}}\nonumber\\
&&\Lambda_2=4\pi f(\nu_1\nu_2)^{\frac 12}\left(\frac{\Sigma_1}{\Sigma_2}\right)^{\frac{1}{2}}\nonumber\\
&&\Sigma_i=\int \frac{d^3 p}{(2\pi)^3}\frac{\Lambda_i|T_i|^2}{1+\Lambda_i^2|T_i(p)|^2}
\ee
The chiral condensates are readily obtained as
\begin{eqnarray}
\left<\bar q_i q_i\right>=2\Lambda_iT\int \frac{d^3p}{(2\pi)^3}\frac{f_1^2(p)+f^2_2(p)}{1+\Lambda_i^2|T_{i}(p)|^2}
\end{eqnarray}
We note that for large $\Lambda$ or asymptotically small temperatures,
the second term in (\ref{ASSIGNX}) proportional to $p$ is dominant. In this case, the
hopping matrix elements for $M,L$ are equal. It follows that the extrema of
the pressure (\ref{P22}) are also equal,
\begin{eqnarray}
\Lambda\equiv \Lambda_{1}=\Lambda_{2}=2\pi (f_Lf_M)^{\frac 12}
\end{eqnarray}
In this limit, the chiral condensates are also the same
\begin{eqnarray}
\label{ASUD}
\left<\bar u u\right>\approx\left<\bar d d\right>\approx 2\Lambda T
\int \frac{d^3p}{(2\pi)^3}\frac{f_1^2(p)+f^2_2(p)}{1+\Lambda^2|T_{1,2}(p)|^2}
\end{eqnarray}
with $f_{1,2}(p)$ given in (\ref{F1F2}).
Before we discuss the general asymmetric case, let us make the following
comments on the so-called Roberge-Weiss symmetry~\cite{RW}.
Since the hopping matrix elements satisfy the anti-periodicity
condition (\ref{ANTIX}), the pressure (\ref{P22}) satisfies the so-called
$half$-periodicity condition
\begin{eqnarray}
{\cal P}(\theta+\pi/2)={\cal P}(\theta-\pi/2)
\end{eqnarray}
in the center symmetric phase. Using the explicit form (\ref{ASSIGNX}), we find that
\begin{eqnarray}
\label{CUSPFREE}
\left(\frac{d{\cal P}}{d\theta}\right)_{\theta\rightarrow \pi/2}=0
\end{eqnarray}
which is cusp-free despite the switching of the zero-mode from the M- to the L-instanton-dyon.
These observations are in agreement with those put forth
by Roberge and Weiss~\cite{RW} at low temperatures. At high temperature,
the derivative in (\ref{CUSPFREE}) develops a cusp in the center asymmetric phase~\cite{RW}.
We have checked that these properties hold also for the twisted boundary condition
(\ref{BOUND4}).
\subsection{$N_c=N_f=2$: general asymmetric case}
To proceed, we first note that the gap equations (\ref{GAP2F}) can be simplified by noting that
$\Lambda_1\Lambda_2=n^2$ and that $\Lambda_2\Sigma_2=\Lambda_1\Sigma_1$.
We have set $n=4\pi f(\nu_1\nu_2)^{\frac 12}$ with $\nu_1=\nu$ and $\nu_2=1-\nu$.
With this in mind, (\ref{GAP2F}) reduces to a single gap equation,
\begin{eqnarray}
\label{GAPXXX0}
\int d^3\tilde p\,\frac{|\tilde T_1|^2}{1/\tilde\Lambda_1^2+|\tilde T_1|^2}=
\int d^3\tilde p\,\frac{|\tilde T_2|^2}{\tilde\Lambda_1^2/\tilde n^4+|\tilde T_2|^2}
\end{eqnarray}
After rescaling all variables, $\tilde p=p/\omega_0$, $\tilde\Lambda_{1,2}=\Lambda_{1,2}/\omega_0^2$
and $\tilde n=n/\omega_0^2$ with $\omega_0=\pi T$, the hopping matrices (\ref{TUPXX}) simplify to
\be
\label{GAPXXX}
|\tilde T_1|^2\approx &&\frac{(6\tilde p)^2}{(\tilde p^2+\nu_1^2)^5}\nonumber\\
|\tilde T_2|^2\approx &&\frac{64+(6\tilde p)^2}{(\tilde p^2+\nu_2^2)^5}
\ee
After using the gap equations (\ref{GAP2F}) and the rescaling, the pressure (\ref{P22}) becomes
\be
\label{PREY}
\frac{{\cal P}_{ud}}{\omega_0^3}=&&\int \frac{d^3\tilde p}{(2\pi)^3}
{\rm ln}\left[(1+\tilde\Lambda_1^2|\tilde T_1|^2)(1+(\frac{\tilde n^2}{\tilde\Lambda_1})^2|\tilde T_2|^2)\right]\nonumber\\
&&-\frac{4\pi^2}3\frac {T^3}{\omega_0^3}(\nu_1\nu_2)^2
\ee
Its extremum in $\Lambda$ is the gap equation $\partial{\cal P}_{ud}/\partial\tilde\Lambda_1=0$, which is (\ref{GAPXXX0}).
Similarly, there is the gap equation for the holonomy $\nu$. The task is to solve them together.
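To illustrate this step concretely, the single gap equation (\ref{GAPXXX0}) can be solved by bisection, since its left-hand side grows and its right-hand side decreases monotonically with $\tilde\Lambda_1$. A minimal sketch (ours, assuming only the rescaled kernels (\ref{GAPXXX}) with $\nu_1=\nu$ and $\nu_2=1-\nu$) is:
\begin{verbatim}
# Bisection solver for the gap equation (GAPXXX0) with the kernels (GAPXXX).
import numpy as np
from scipy.integrate import quad

def T1sq(p, nu1):
    return (6.0 * p)**2 / (p**2 + nu1**2)**5

def T2sq(p, nu2):
    return (64.0 + (6.0 * p)**2) / (p**2 + nu2**2)**5

def mismatch(lam1, n, nu):
    lhs, _ = quad(lambda p: 4*np.pi*p**2 * T1sq(p, nu)
                  / (1.0/lam1**2 + T1sq(p, nu)), 0, np.inf, limit=200)
    rhs, _ = quad(lambda p: 4*np.pi*p**2 * T2sq(p, 1.0 - nu)
                  / (lam1**2/n**4 + T2sq(p, 1.0 - nu)), 0, np.inf, limit=200)
    return lhs - rhs          # monotonically increasing in lam1

def solve_gap(n, nu, lo=1e-4, hi=1e4, tol=1e-8):
    while hi - lo > tol * hi:
        mid = np.sqrt(lo * hi)            # bisect on a log scale
        lo, hi = (lo, mid) if mismatch(mid, n, nu) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

print(solve_gap(n=1.0, nu=0.5))
\end{verbatim}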
We found that (\ref{PREY}) leads to the
momentum-dependent constituent masses
for the d-, u-quarks
\be
\label{MASSud}
&&\frac{M_d(p)}{\omega_0}\equiv (1+\tilde p^2)^{\frac 12}\tilde\Lambda_1|\tilde T_1(p)|\nonumber\\
&&\frac{M_u(p)}{\omega_0}\equiv (1+\tilde p^2)^{\frac 12}\frac{\tilde n^2}{\tilde\Lambda_1}|\tilde T_2(p)|
\ee
The u-quark is substantially heavier than the d-quark at low momentum because of its
anti-periodic boundary condition, with the d-quark turning
massless at zero momentum owing to its periodic boundary condition.
The results for the numerical solution of the gap equations are shown in Fig.~\ref{fig_ll}
and Fig.~\ref{fig_lud}. In Fig.~\ref{fig_ll}, we show the dependence of the Polyakov
line $L=\cos(\pi \nu[{\bf n}])$ on the input parameter ${\bf n}=4\pi f/\omega_0^2$ (square-blue, lower line). For
comparison we also show the behavior of the same Polyakov line (circle-red) in the upper line,
for the untwisted (QCD) theory with both u-, d-quarks being anti-periodic fermions.
The input parameter ${\bf n}$ is a monotonically decreasing function of the temperature, as defined in (\ref{NTT}).
The rightmost part of the plot corresponds to the dense low-$T$ case, in which we find a confining or $L\rightarrow 0$ behavior.
The main conclusion from this plot is that confinement (or restoration of center symmetry) occurs at a lower
density ${\bf n}$ for the twisted theory, as compared to the QCD-like one.
In Fig.~\ref{fig_lud} we show the behavior of the flavor condensates $|\left<\bar d d\right>|/T^3$ (upper-diamond-blue),
$|\left<\bar uu\right>|/T^3$ (lower-square-green) for the twisted u-, d-quarks versus ${\bf n}=4\pi f/\omega_0^2$.
For comparison, we also show the value of $|\left<\bar uu \right>|/T^3=|\left<\bar d d\right>|/T^3$ (middle-triangle-magenta)
for the untwisted (anti-periodic) boundary conditions. It follows closely the line for the anti-periodic d-quark in the twisted case.
The value of the Polyakov line for the twisted quarks is shown
also (circle-red), to indicate the transition region. At high densities or low temperatures, center symmetry is restored but the quark condensates are still distinct for
the twisted boundary condition. The induced effective masses in (\ref{MASSud}) show that the d-quark is much lighter than
the u-quark, resulting in a much larger chiral condensate. Only at vanishingly small temperatures is the relation (\ref{ASUD}) recovered, as both hoppings become identical; the nature of the boundary condition becomes irrelevant at zero temperature.
At low densities or high temperatures, center symmetry is broken and the chiral condensate
$|\left<\bar d d\right>|$ is still substantially larger than $|\left<\bar uu\right>|$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm]{LL}
\caption{ Polyakov line versus the dimensionless
density ${\bf n}=4\pi f/\omega_0^2$ for $N_f=N_c=2$. The lower (square-blue) line is for the $Z_2$ twisted
quarks, while the upper (circle-red) line is for the usual anti-periodic quarks.}
\label{fig_ll}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm]{LUD}
\caption{Dimensionless condensates $|\left<\bar d d\right>|/T^3$ (diamond-blue),
$|\left<\bar uu \right>|/T^3$ (square green) for twisted boundary conditions,
with increasing dimensionless density or lower temperatures $4\pi f/\omega_0^2$.
For comparison we show $|\left<\bar uu\right>|/T^3$ (triangle-magenta) for the anti-periodic quarks. The Polyakov line (square-red)
shows a rapid crossing from a center broken to a center symmetric phase for the twisted quarks. }
\label{fig_lud}
\end{center}
\end{figure}
\subsection{Mesonic spectrum}
The excitation spectrum with twisted boundary conditions can be calculated
following the analysis in~\cite{LIU2}. For the $N_c=N_f=2$ case, this follows by
substituting
\be
\label{MES1}
\Lambda(\psi^{\dagger}\gamma_{\pm}\psi+2\Sigma^{\pm})\rightarrow \sum_{fg}\Lambda^{\pm}_{fg}(\psi^{\dagger}_f\gamma_{\pm}\psi_g+2\Sigma^{\pm}_{fg})
\ee
in (\ref{SCx}) with
\begin{eqnarray}
\Lambda_{\pm}\equiv \Lambda_0\pm i\pi_{ps}+\pi_s
={\rm diag}(\Lambda_1,\Lambda_2)\pm i\pi_{ps}+\pi_s
\end{eqnarray}
Here $\pi_{s,ps}$ refer to the scalar and pseudo-scalar $U(2)$-valued mesonic fields.
For the chargeless chiral partners $\sigma^3, \pi^0$, the effective actions to quadratic order
are respectively given by
\be
\label{MES2}
S(\pi_{ps}^3)=&&\frac{1}{2f_{\pi}^2}\int \frac{d^3p}{(2\pi)^3}\pi_{ps}^3(p)\Delta_{-}^3(p)\pi_{ps}^3(-p)\nonumber\\
S(\pi_{s}^3)=&&\frac{1}{2f_{\pi}^2}\int \frac{d^3p}{(2\pi)^3}\pi_{s}^3(p)\Delta_{+}^3(p)\pi_{s}^3(-p)
\ee
with the corresponding propagators ($p_\pm =q\pm p/2$)
\be
\label{MES3}
\Delta_{\pm}^3(p)=\frac{1}{2}\int \frac{d^3q}{(2\pi)^3}\frac{(T_1(p_+)\pm T_1(p_-))^2}{(1+\Lambda_1^2|T_1(p_+)|^2)(1+\Lambda_1^2|T_1(p_-)|^2)}\nonumber \\+\frac{1}{2}\int \frac{d^3q}{(2\pi)^3}\frac{(T_2(p_+)\pm T_2(p_-))^2}{(1+\Lambda_2^2|T_2(p_+)|^2)(1+\Lambda_2^2|T_2(p_-)|^2)}\nonumber\\
\ee
with the hopping matrices $T_{1,2}$ labeled as $1\equiv d$ and $2\equiv u$.
In deriving (\ref{MES2}-\ref{MES3}) we made explicit use of the gap equations (\ref{GAP2F}). We note that $\Delta^3_-(0)=0$,
since at $p=0$ one has $p_\pm=q$ so that the numerators $(T_i(q)-T_i(q))^2$ in (\ref{MES3}) vanish identically. This
translates to a massless $\pi^0=\pi^3_{ps}$, while $\Delta^3_+(0)\neq 0$ translates to a massive $\sigma$,
for both the center symmetric and broken phases. The masslessness of $\pi^0$ is ensured by the hidden symmetry displayed in
(\ref{BACK0}-\ref{BACK}), and reflects the remaining spontaneously broken symmetry for $N_f=2$.
The charged mesons $\pi_{s}^{\pm},\pi_{ps}^{\pm}$, follow a similar analysis with now the propagators
for the quadratic contributions given by
\begin{eqnarray}
\Delta_{\pm}^{1,2}(p)=\frac{(\Sigma_1\Sigma_2)^{\frac 12}}{\pi f}-2\int \frac{d^3q}{(2\pi)^3}\,{\mathbb F}_\mp (p,q)
\end{eqnarray}
Here $\Delta_-^{1,2}$ refer to the charged scalars $\pi_s^\pm$, while $\Delta_+^{1,2}$ refer to their
charged chiral partners $\pi_{ps}^\pm$, with
\begin{eqnarray}
{\mathbb F}_\pm (p,q)=\frac{T_{1}(p_+)T_2(p_-)(\Lambda_1\Lambda_2T_{1}(p_+)T_2(p_-)\pm 1)}{(1+\Lambda_1^2|T_1(p_+)|^2)(1+\Lambda_2^2|T_2(p_-)|^2)}
\end{eqnarray}
In the exactly center symmetric phase, with $\Lambda_1=\Lambda_2$, the charged pions $\pi_{ps}^\pm$
are also massless. But in general, in the asymmetric phase $\Lambda_1\neq \Lambda_2$, and
both $\pi^\pm$ are massive (but degenerate).
The singlet mesons $\sigma=\pi_{s0},\eta=\pi_{ps,0}$ propagators follow similarly
\be
&&2\Delta_{\sigma}(p)=\frac{n_D}{2}+\Delta_{+}^3(p)\nonumber\\
&&2\Delta_{\eta}(p)=\frac{n_D}{2}+\Delta_{-}^3(p)
\ee
with $n_D$ the mean instanton-dyon density defined through the gap equation
\begin{eqnarray}
\frac{n_D}{4}=\frac{1}{2}\sum_{i=1}^2\int \frac{d^3p}{(2\pi)^3}\frac{\Lambda_i^2|T_i|^2}{1+\Lambda_i^2|T_i|^2}
\end{eqnarray}
\section{Conclusions}
We have constructed the partition function for the instanton-dyon liquid model with twisted flavor
boundary conditions, and derived and solved the resulting gap equations in the mean-field approximation.
In addition to the manifest $(U_L^{N_f}(1)\times U_R^{N_f}(1))/U_{L-R}(1)$
flavor symmetries, for $Z_{N_c}$-QCD some discrete charge conjugation plus
flavor exchange symmetries were identified.
The central constructs are the so called hopping matrix elements between
instanton-dyon and anti-instanton-dyon zero modes. One technical point is
to note that some of these hoppings may become singular at large distances
(small momenta) when the contribution from the $Z_{N_c}$-twists and the
holonomies cancel the exponentially decreasing asymptotics.
These singularities are readily regulated through a suitable redefinition
of the pertinent fugacities~\cite{LIU5}.
The low temperature phase is center symmetric with zero Polyakov line. It also breaks chiral symmetry, with
still sizably different chiral condensates in our mean-field analysis; the latter become about equal at very small
temperatures. The high temperature phase is center asymmetric with always
unequal chiral condensates. Our results are qualitatively consistent with the
lattice results reported recently in~\cite{TAKUMI}, although with a more pronounced difference between the
flavor chiral condensates across the transition region caused mostly by the differences in the leading (twisted) Matsubara modes
in the center symmetric phase. In the symmetric ground state we observe the emergence of one massless pion $\pi^0$
(2-flavor case).
The instanton-dyon model offers a very concise framework for
discussing the interplay of twisted boundary
conditions (also known as flavor holonomies) with
center symmetry and chiral symmetry in QCD-like models.
A further comparison between the mean field results derived in this paper, with the direct
simulations \cite{LARS3} of the instanton-dyon model and lattice results \cite{TAKUMI},
is obviously of great interest.
\section{Acknowledgements}
We thank Takumi Iritani for an early discussion.
This work was supported by the U.S. Department of Energy under Contract No.
DE-FG-88ER40388.
\section{Methodology}
\input{sections/method}
\section{Results}
\input{sections/results}
\section{Discussion and Conclusions}
\input{sections/discussion}
\section*{Acknowledgements}
This material is based on work funded by the United States Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Defense Nuclear Nonproliferation Research and Development (DNN R\&D) Next-Generation AI research portfolio and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S. Department of Energy under contract DE-AC05-76RLO1830. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Government or any agency thereof. We would like to thank Joonseok Kim and Jasmine Eshun for their assistance preparing data.
\bibliographystyle{acl_natbib}
\subsection{Treatments and Outcomes}
We consider the inclusion of our consolidated research entities to be treatments in our analyses --- \textit{e.g.,~} does the inclusion of \textit{biLSTM} architectures in a publication have a causal relationship with future research outcomes? --- and the basis of several research outcomes related to the \textit{adoption}, \textit{retirement}, and \textit{maintenance} of CL methodologies, tasks and approaches.
That is, the association of the identified research entities with authors' publications allows us to identify when authors {adopt} newly emerging technologies (\textit{e.g.,~} the first use of \textit{transformers}), {retire} previously used methods or research applications (\textit{e.g.,~} if authors stop publishing on \textit{LSTM} architectures after \textit{biLSTM} architectures are introduced), or continue to use -- \ie {maintain} publications in -- methods (\textit{e.g.,~} when authors continue to publish on \textit{NER}). We treat these behaviors as future outcomes for each author's publications in previous years.
For each year in which an author published in an ACL venue, we calculate adoption and retirement outcomes for each consolidated research element in the following year, and maintenance outcomes for each research element over the following two years.
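A minimal sketch of this bookkeeping (a toy example of ours; the table layout is hypothetical and the adoption rule is simplified to first use relative to the current year) is:
\begin{verbatim}
# Derive adoption/retirement/maintenance outcomes from a table with
# one row per (author, year, research entity); toy data for illustration.
import pandas as pd

rows = [("a1", 2015, "LSTM"), ("a1", 2016, "biLSTM"),
        ("a1", 2017, "biLSTM"), ("a2", 2016, "NER")]
df = pd.DataFrame(rows, columns=["author", "year", "entity"])
used = {k: set(g["entity"]) for k, g in df.groupby(["author", "year"])}

def outcomes(author, year, entity):
    now = used.get((author, year), set())
    nxt = used.get((author, year + 1), set())
    nxt2 = nxt | used.get((author, year + 2), set())
    return {"adopt": entity not in now and entity in nxt,
            "retire": entity in now and entity not in nxt,
            "maintain": entity in now and entity in nxt2}

print(outcomes("a1", 2015, "LSTM"))    # retired in 2016
print(outcomes("a1", 2016, "biLSTM"))  # maintained through 2017
\end{verbatim}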
Alongside these fine-grained research outcomes, we also examine coarse-grained, or general outcomes for authors:
\begin{itemize}[noitemsep,nolistsep]
\item overall pauses in publishing within ACL venues (no publications in any ACL community for two years),
\item persistent publication records (continuing to publish in consecutive years),
\item publication volume increases (the increase or decrease in number of publications in ACL venues relative to the previous year).
\end{itemize}
In our analyses, we focus on the most recent six years (2014-2019), for which we have complete treatment and outcome annotations, and consider each year independently. We leverage two types of publication record granularities -- publication records and yearly research portfolios -- to analyze the temporal dynamics of the causal system underpinning CL publication venues at multiple resolutions. Note that we present a detailed description of the treatments, covariates and outcomes we used
in Appendix~\ref{sec:appendix}.
\subsection{Causal Structure Learning}
Structural causal models are a way of describing relevant features of the world and how they interact with each other. Essentially, causal models represent the mechanisms by which data is generated. The causal model formally consists of two sets of variables $U$ (exogenous variables that are external to the model) and $V$ (endogenous variables that are descendants of exogenous variables), and a set of functions $f$ that assign each variable in $V$ a value based on the values of the other variables in the model. To expand this definition: a variable $X$ is a direct cause of a variable $Y$ if $X$ appears in the function that assigns $Y$ its value. Graphical models or Directed Acyclic Graphs (DAGs) have been widely used as causal model representations.
The causal effect rule is defined as follows: given a causal graph $G$ in which a set of variables $PA(X)$ are designated as the parents of $X$, the causal effect of $X$ on $Y$ is given by:
\begin{equation}
\begin{array}{l}
P(Y=y | do (X=x)) =\\
\sum_{z} P (Y = y | X = x, PA = z) P(PA = z),
\end{array}
\end{equation}
where $z$ ranges over all the combinations of values that the variables in PA can take.
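As a toy illustration of this rule (entirely synthetic; the variable names and effect sizes are ours), one can generate data from a known model with a single parent $Z$ of the treatment and compare the adjusted estimate with the naive difference in means:
\begin{verbatim}
# Backdoor adjustment: P(Y=1|do(X=x)) = sum_z P(Y=1|X=x,Z=z) P(Z=z).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.4, n)                   # parent of X (confounder)
x = rng.binomial(1, 0.2 + 0.6 * z)            # treatment depends on z
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)  # outcome; true effect 0.3
df = pd.DataFrame({"x": x, "y": y, "z": z})

def p_do(df, x_val):
    total = 0.0
    for z_val, g in df.groupby("z"):
        total += g.loc[g.x == x_val, "y"].mean() * len(g) / len(df)
    return total

naive = df.loc[df.x == 1, "y"].mean() - df.loc[df.x == 0, "y"].mean()
print("adjusted:", p_do(df, 1) - p_do(df, 0), "naive:", naive)
\end{verbatim}
The adjusted estimate recovers the true effect of $0.3$, while the naive contrast is biased upward by the confounder.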
The first approach for our causal analysis aims to examine the causal relationships that are identified using an ensemble of causal discovery algorithms~\cite{saldanhaevaluation}. Our ensemble considers the relationships identified by CCDR~\cite{aragam2015concave}, MMPC (Max-Min Parents-Children)~\cite{tsamardinos2003time}, GES (Greedy Equivalence Search)~\cite{chickering2002optimal}, and PC (Peter-Clark)~\cite{colombo2014order}. We use the implementations provided by the pcalg R package~\cite{Hauser2012MarkovEquiv,Kalisch2012pcalg} and the causal discovery toolbox (CDT)~\cite{Kalainathan2019Causal}\footnote{\url{https://fentechsolutions.github.io/CausalDiscoveryToolbox/html/index.html}}. The outcome of our ensemble approach to causal discovery is a causal graph reflecting the relationships within the causal system, weighting edges by the agreement among the individual algorithms on whether the causal relationship exists.
After applying this causal discovery approach to each year individually, we are able to construct a dynamic causal graph and investigate trends in causal relationships -- \textit{e.g.,~} as they are introduced, persist over time, or are eliminated.
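Schematically, the voting step can be sketched as follows; the \texttt{run\_*} calls are placeholders for the pcalg/CDT implementations above, each assumed to return a binary adjacency matrix with $A[i,j]=1$ iff an edge $i \rightarrow j$ was discovered:
\begin{verbatim}
# Ensemble voting over causal discovery outputs (placeholder run_* calls).
import numpy as np

def ensemble_vote(adjacencies):
    """Weight each directed edge by the fraction of algorithms agreeing."""
    return np.stack(adjacencies).astype(float).mean(axis=0)

# adjacencies = [run_pc(df), run_ges(df), run_mmpc(df), run_ccdr(df)]
adjacencies = [np.array([[0, 1], [0, 0]]),   # stand-in outputs, two variables
               np.array([[0, 1], [0, 0]]),
               np.array([[0, 0], [0, 0]]),
               np.array([[0, 1], [0, 0]])]
print(ensemble_vote(adjacencies))  # edge 0 -> 1 carries weight 0.75
\end{verbatim}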
\subsection{Treatment Effect Estimation}
We further investigate the magnitude and direction of causal effects using average treatment effect (ATE) estimates. We compare pair-wise estimates using several causal inference models: {\it Causal Forest}~\cite{tibshirani2018package} and {\it Propensity Score Matching}~\cite{ho2007matching} using the ``MatchIt'' R package\footnote{\url{https://cran.r-project.org/web/packages/MatchIt/vignettes/MatchIt.html}}, and a cluster-based conditional treatment effect estimation tool -- {\it Visualization and Artificial Intelligence for Natural Experiments (VAINE)}\footnote{\url{https://github.com/pnnl/vaine-widget}}~\cite{guo2021vaine}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.67\textwidth]{figs/tee_retire_same_keyword2.png}
\caption{Treatment effect estimates obtained using three causal inference methods -- Causal Forest, Propensity Score Matching and VAINE, for publish on $x$ $\rightarrow$ retire $x$ over time, across TEE methods.}
\label{fig:publish_x_retire_x_tee}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.49\textwidth]{figs/publish_x_retire_x.png}
\includegraphics[width=0.49\textwidth]{figs/publish_x_maintain_x.png}
\caption{Summary of causal structure learning using our ensemble model discovered from Publish on $x$ to Retire $x$ (above) or Maintain $x$ in the next 2 years (below), by year. Shaded cells indicate that an edge was discovered, white cells indicate that no edge was discovered for that year. At right is a summary of the number of years for which an edge was discovered.}
\label{fig:publish_x_future_x}
\vspace{-0.25cm}
\end{figure*}
VAINE is designed to discover natural experiments and estimate causal effects using observational data, and to address challenges that traditional approaches have with continuous treatments and high-dimensional feature spaces. First, VAINE allows users to automatically detect sets of observations controlling for various covariates in the latent space. Then, using linear modeling, VAINE estimates the treatment effect within each group and averages these local treatment effects to estimate the overall average effect between a given treatment and outcome variable. VAINE's novel approach to causal effect estimation allows it to handle continuous treatment variables without arbitrary discretization, and produces results that are intuitive, interpretable, and verifiable by a human. VAINE is an interactive capability that allows the user to explore different parameter settings, such as the number of groups, the alpha threshold to identify significant effects, etc.
Below we define what we mean by learning a causal effect from observational data. Given $n$ instances $[(x_1, t_1, y_1),\dots,(x_n, t_n, y_n)]$, learning causal effects quantifies how the outcome $y$ is expected to change if we modify the treatment from $c$ to $t$, which can be defined as $\mathbb{E}(y \mid t) - \mathbb{E}(y \mid c)$, where $t$ and $c$ denote a treatment and a control, respectively.
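A minimal sketch of one such estimate (a generic 1:1 propensity score matching recipe on synthetic data, not the exact MatchIt configuration used in our experiments) is:
\begin{verbatim}
# 1:1 propensity score matching estimate of the treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_att(X, t, y):
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
    # For each treated unit, match the control with the closest score.
    matched = control[np.abs(ps[control][None, :]
                             - ps[treated][:, None]).argmin(axis=1)]
    return (y[treated] - y[matched]).mean()

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))      # confounded treatment
y = 0.5 * t + X[:, 0] + rng.normal(scale=0.1, size=n)    # true effect 0.5
print(psm_att(X, t, y))
\end{verbatim}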
Similarly to our causal discovery based analyses, we examine the growth and decay of causal influence for a series of treatments (research focus represented by materials, methodology, or application-based keywords) on our outcomes of interest.
\subsection{Evaluation}
Evaluating causal analysis methods is challenging~\cite{saldanhaevaluation,weld2020adjusting,gentzel2019case,shimoni2018benchmarking,mooij2016distinguishing,dorie2019automated,singh2018comparative,raghu2018evaluation}. Broadly, evaluation techniques include structural, observational, interventional and qualitative techniques, e.g., visual inspection. Observational evaluations are by nature non-causal and lack the ability to measure errors under interventions. Structural measures are limited by the requirement of a known structure, are oblivious to the magnitude and type of dependence as well as to treatments and outcomes, and constrain research directions. Unlike structural and observational measures, interventional measures allow one to evaluate model estimates of interventional effects, e.g., ``what-if'' counterfactual evaluation.
In this work we rely on both qualitative and quantitative evaluation. The methods that we use for causal inference were independently validated using structural and observational measures on synthetic datasets -- causal forest~\cite{wager2018estimation}, propensity score matching~\cite{causaleval}, the causal ensemble~\cite{saldanhaevaluation}, and VAINE~\cite{guo2021vaine}. Since we rely on four complementary causal inference techniques, we draw our conclusions based on their agreement. In addition, to perform qualitative evaluation with a human in the loop, we rely on recently released visual analytics tools to evaluate causal discovery and inference~\cite{causaleval}.
\subsection{Continuing Existing Avenues of Research}
One of the first trends we noticed, in both the causal structures and treatment effects, was a causal relationship between publishing on a given research entity (\textit{e.g.,~} robustness, LSTMs, transformers, NER, etc.) and whether an author would \textit{continue} to publish on the same topic, task, or methodology in the following year(s). Does publishing once influence whether you will publish again? In short, no. We see a consistent trend in \textit{positive} treatment effects, as illustrated in Figure~\ref{fig:publish_x_retire_x_tee}, from publishing in the current year to not
publishing (pausing or retiring research entities) in the future -- publishing on $x$ leads to \textit{not publishing} on $x$ in the future.
In Figure~\ref{fig:publish_x_future_x}, we summarize the temporal dynamics of causal relationships from publishing on $x$ in a current year's publication to retiring $x$ (no publications associated with research entity $x$) in the next 2 years (above) or maintaining $x$ (at least one publication associated) in the next 2 years (below) indicated by our causal structural learning analyses which aligns with our TEE results. We show the consistency in which research entities are included in these trends using Figure~\ref{fig:publish_x_future_x_venn}. We see that many of the elements where causal relationships were identified in all 6 years are present in both the retirement and maintenance relationships. Of all the elements, research on \textit{Transparency} is the only case where there is only a retirement relationship. All elements with identified maintenance relationships in at least one year were also present in the set of retirement relationships.
\subsection{Emerging Research Foci, and the Impacts on Retirement of Old Research Foci}
\label{sec:churn}
The introduction or popularization of new model architectures (especially in deep learning) has an initial strong impact on retirement of previous SOTA architectures, but this is often focused on the initial adoption. We investigate several examples of such phenomena.
Table~\ref{tab:lstm_bilstm} illustrates the decaying causal influence that using bidirectional LSTM-based architectures in current publications has on the retirement of (no longer using) LSTM in future publications. At first, there is a strong causal effect (approx. 0.8), where the use of biLSTM layers leads to no longer using LSTM layers. However, this reduces over time, with CF estimating close to no effect past 2015. We see a complementary trend on the relative publication volume increase outcome (\textit{Increase \# publications next year}), where there is an initial strong effect (0.76) that decays until it shifts to a negative effect (in 2018) and then neutral (in 2019), as shown in Table~\ref{tab:lstm_bilstm_pubs}.
\begin{table}[t]
\centering
\small
\setlength\tabcolsep{5 pt}
\begin{tabular}{l|ccccccc}
\hline
Method & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\
\hline
CF & 0 & 0.71 \cellcolor{posColor!71} & -0.01 \cellcolor{negColor!1} & 0.07 \cellcolor{posColor!7} & 0.09 \cellcolor{posColor!9} &-0.03 \cellcolor{negColor!3}\\
VAINE & 0 & 0.88 \cellcolor{posColor!88}& 0.46 \cellcolor{posColor!46} & 0.68 \cellcolor{posColor!68} & 0.68 \cellcolor{posColor!68} &0.77 \cellcolor{posColor!77}\\
\hline
\textit{Mean} & 0 & 0.80 \cellcolor{posColor!80}& 0.22 \cellcolor{posColor!22}& 0.32 \cellcolor{posColor!32}& 0.39 \cellcolor{posColor!39}&0.37 \cellcolor{posColor!37}\\
\hline
\end{tabular}
\caption{Treatment effect estimates for the treatment \textit{Publish on bidirectional LSTM} on outcome \textit{Retire LSTM} by year, illustrating a decaying influence.}
\label{tab:lstm_bilstm}
\end{table}
\begin{table}[t]
\centering
\small
\setlength\tabcolsep{5 pt}
\begin{tabular}{l|ccccccc}
\hline
Method & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\
\hline
CF &
0 \cellcolor{posColor!0} &
0.76 \cellcolor{posColor!76} &
-0.03 \cellcolor{negColor!3} &
0.36 \cellcolor{posColor!36} &
-0.2 \cellcolor{negColor!20} &
0.00
\\
VAINE &
0 &
0 &
0 &
0.39 \cellcolor{posColor!39} &
-0.23 \cellcolor{negColor!23} &
0
\\
\hline
\textit{Mean} &
0 &
0.38 \cellcolor{posColor!38} &
-0.02 \cellcolor{negColor!2} &
0.38 \cellcolor{posColor!38} &
-0.22 \cellcolor{negColor!22} &
0.00
\\
\hline
\end{tabular}
\caption{Treatment effect estimates for the treatment \textit{Publish on bidirectional LSTM} on outcome \textit{Increase Publications next year}, illustrating a strong initial influence that shifts to negative (2018) and then neutral (2019).
}
\label{tab:lstm_bilstm_pubs}
\vspace{-0.2cm}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth, trim={0 0 0 0.75cm},clip]{figs/maintain_nonenglsh_records.png}
\caption{Recurrent causal relationships (identified for at least two years) that influence {\bf continued publication patterns} related to non-English languages in the CL community, \textit{e.g.,~} scientist co-authorship PageRank effects maintaining non-English publication focus in 2014 and 2016. Black markers identify the effect, with line segments extending to the cause nodes, and distinct relationships are represented by varying colors.}
\label{fig:cd_maintain_nonenglish}
\vspace{-0.3cm}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth, trim={0 0 0 0.75cm},clip]{figs/china_nonenglish.png}
\caption{Recurrent causal relationships
that influence {\bf new publication patterns} related to non-English languages.
Black markers identify the effect, with line segments extending to the cause nodes, and distinct relationships are represented by varying colors. Empty time windows indicate no recurrent relationships were discovered.
}
\label{fig:cd_adopt_nonenglish}
\end{figure*}
In addition, we see a consistent divergence from the trend described above (publishing on \textit{x} influences not publishing on \textit{x} in the next two years) for research related to non-English languages in the 2016-2018 time frame (see Figure~\ref{fig:cd_maintain_nonenglish}). This might be explained by the impact of large-scale research programs and funding (we plan to empirically confirm or refute this hypothesis in future work). For example, we find that these outcomes (whether researchers continue to publish research related to non-English languages in 2017-2020) align with the last few years (and program-wide evaluation events\footnote{\url{https://www.nist.gov/itl/iad/mig/lorehlt-evaluations}}) of the LORELEI (Low Resource Languages for Emergent Incidents) DARPA program\footnote{\url{https://www.darpa.mil/program/low-resource-languages-for-emergent-incidents}}.
The goal of the LORELEI program was ``to dramatically advance the state of computational linguistics and human language technology to enable rapid, low-cost development of capabilities for low-resource languages'', and resulted in several publications on such languages from performers \textit{e.g.,~} \cite{strassel2016lorelei}. ``What is funded is published'' may be an intuitive influence, but here we see qualitative evidence that funding could influence the causal mechanisms of the publication ecosystem --- these signals are strong enough to be reflected in causal systems discovered using causal discovery algorithms in observational data. For \textit{adopting} non-English as a research focus, we also see influence from the authors' country associations -- \ie institution affiliations in China influence adopting non-English research (Fig.~\ref{fig:cd_adopt_nonenglish}).
\begin{table}[t!]
\centering
\small
\setlength\tabcolsep{5 pt}
Publish on Arabic $\rightarrow$ Continue Publishing on non-English
\begin{tabular}{l|ccccccc}
\hline
Method & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\
\hline
CF & 0.04 \cellcolor{posColor!4} & 0.03 \cellcolor{posColor!3} & 0.28 \cellcolor{posColor!28} & 0.56 \cellcolor{posColor!56} & 0.40 \cellcolor{posColor!40} &-0.55\cellcolor{negColor!55}
\\
VAINE & 0 & 0.20 \cellcolor{posColor!20} & 0.43 \cellcolor{posColor!43} & 0.51 \cellcolor{posColor!51} & 0.13 \cellcolor{posColor!13} &0.06 \cellcolor{posColor!6}
\\
\hline
\textit{Mean} & 0.02 \cellcolor{posColor!2} & 0.12 \cellcolor{posColor!12} & 0.36 \cellcolor{posColor!36} & 0.54 \cellcolor{posColor!54} & 0.27 \cellcolor{posColor!27} &0.00 \cellcolor{posColor!0} \\ \hline
\multicolumn{6}{l}{}\\
\end{tabular}
\vspace{-0.25cm}
Publish on Arabic $\rightarrow$ Stop Publishing on non-English
\begin{tabular}{l|cccccc}
\hline
Method & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\ \hline
CF & 0.44 \cellcolor{posColor!44} & 0.80 \cellcolor{posColor!80} & 0.40 \cellcolor{posColor!40} & 0.13 \cellcolor{posColor!13} & 0.50 \cellcolor{posColor!50} &0.91 \cellcolor{posColor!91}
\\
VAINE & 0.81 \cellcolor{posColor!81} & 0.71 \cellcolor{posColor!71} & 0.45 \cellcolor{posColor!45} & 0.20 \cellcolor{posColor!20} & 0.82 \cellcolor{posColor!82} &0.91 \cellcolor{posColor!91}
\\
\textit{Mean} & 0.62 \cellcolor{posColor!62} & 0.75 \cellcolor{posColor!75} & 0.42 \cellcolor{posColor!42} & 0.16 \cellcolor{posColor!16} & 0.66 \cellcolor{posColor!66} &0.91 \cellcolor{posColor!91}
\\
\hline
\multicolumn{6}{l}{}\\
\end{tabular}
\vspace{-0.25cm}
Publish on Arabic $\rightarrow$ Increase Publications next year
\begin{tabular}{l|cccccc}
\hline
Method & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\ \hline
CF ~~~~~& -0.03 \cellcolor{negColor!3} & -0.03 \cellcolor{negColor!3} & 0.31 \cellcolor{posColor!31} & 1.8 \cellcolor{posColor!100} & 0.23 \cellcolor{posColor!23} & -0.09 \cellcolor{negColor!9}
\\
\hline
\end{tabular}
\caption{Treatment effect estimates for the treatment \textit{Publish on Arabic} for the outcomes \textit{Maintain non-English in the next two years} (top), \textit{Stop publishing on non-English in the next year} (middle), and \textit{Increase publications next year} (bottom) by year, illustrating a peak in 2017 in the influence on continuing to publish.}
\label{tab:arabiic_maintain_nonenglish}
\vspace{-0.3cm}
\end{table}
\begin{figure}[t]
\centering
\small
\begin{tikzpicture}
\begin{axis} [ybar, bar width=20pt,
height=1.2in,
width=3in,
xtick = {2014,2015,2016,2017,2018,2019},
xticklabels = {2014,2015,2016,2017,2018,2019},
xmin=2013.5, xmax=2019.5,
ymin=0,ymax=18,
ylabel=\% authors,
ylabel near ticks,
axis y line*=left,
axis x line*=bottom,
title=\small Persistence of Authorship in non-English Research,
title style={at={(0.5,0.8)}}
]
\addplot[no marks, fill=blue!30!white, blue!30!white] coordinates {
(2014, 2.290076)
(2015, 10.619469)
(2016, 8.800000)
(2017, 15.189873)
(2018, 4.672897)
(2019, 7.619048)
};
\end{axis}
\end{tikzpicture}
\vspace{-0.2cm}
\caption{
The percentage of authors who published on non-English languages in a given year, who also published on non-English languages in the following year.}
\label{fig:pct_nonenglish}
\vspace{-0.3cm}
\end{figure}
Table~\ref{tab:arabiic_maintain_nonenglish} illustrates a peak in the positive influence of publishing in a particular non-English language (\ie Arabic, which was one of the languages of interest for the LORELEI program) on continuing to publish on non-English languages. We see that the divergence of the causal relationships, and the persistence of authorship in non-English language research illustrated in Figure~\ref{fig:pct_nonenglish}, center around or peak in 2017 and begin to flip (causal forest estimates a negative effect of -0.55) in 2019.
\section{Related Work}
There are two complementary causal inference frameworks -- structural causal models~\cite{pearl2009causal} and treatment effect estimation approaches~\cite{rosenbaum1983central}.
Existing approaches to learn the causal structure (aka causal discovery) broadly fall into two categories: constraint-based~\citep{spirtes2000causation,yu2016review} and score-based~\citep{chickering2002optimal}.
Recently, there has been an increased interest in causal inference on observational data~\cite{guo2020survey}, including text data, in the computational linguistics and computational social science communities~\cite{lazer2009social}. For example, recent work by~\citet{roberts2020adjusting} estimated the effect of perceived author gender on the number of citations of scientific articles, and~\citet{veitch2020adapting} measured the effect that the presence of a theorem in a paper had on the rate of the paper's acceptance.
Additional examples in the computational social science domain include: measuring the effect of alcohol mentions on Twitter on college success~\cite{kiciman2018using}; estimating the effect of the ``positivity'' of product reviews and recommendations on Amazon sales~\cite{pryzant2020causal,sharma2015estimating}; understanding factors affecting user performance on StackExchange, Khan Academy, and Duolingo~\cite{alipourfard2018using,fennell2019predicting};
estimating the effect of censorship on subsequent censorship and posting rate on Weibo~\cite{roberts2020adjusting} and of word use in the mental health community on users' transition to posting in the suicide community on Reddit~\cite{de2016discovering,de2017language}; or the effect of exercise on shifts of topical interest on Twitter~\cite{falavarjani2017estimating}. Moreover,~\citet{keith2020text} presented an overview of causal approaches for computational social science problems focusing on the use of text to remove confounders from causal estimates, which was also used by~\citet{weld2020adjusting}. Earlier work utilized matching methods to learn causal associations between word features and class labels in document classification~\cite{paul2017feature,wood2018challenges}, and used text as treatments~\cite{fong2016discovery,dorie2019automated} or as covariates, e.g., causal embeddings~\cite{veitch2020adapting,scholkopf2021toward}.
Several studies have leveraged the ACL Anthology dataset to analyze diversity in computer science research~\cite{vogel2012he}, and to perform exploratory data analysis such as knowledge extraction and mining~\cite{singh2018cl,radev2012rediscovering,gabor2016semantic}. Unlike prior work, our approach leverages complementary methods for causal inference -- structural causal models and treatment effect estimation -- to discover and measure the effect of scientists' research focus on their productivity and publication behavior, specifically the emergence, retirement, and persistence of computational linguistics methodologies, approaches, and topics.
\section{Data Preprocessing}
Our causal analysis relies on the publication records of the Association for Computational Linguistics (ACL) research community from 1986 through 2020. We collect the ACL Anthology dataset\footnote{\url{https://aclanthology.org/}}~\cite{gildea2018acl} with the provided BibTeX entries and accompanying abstracts. Excluding all records that do not contain authors (\textit{e.g.,~} BibTeX entries for workshop proceedings), we convert the BibTeX records into a tabular representation
where each row represents one paper-author combination (\ie for a paper (paperX) with three authors, there are three rows: paperX-author1, paperX-author2, and paperX-author3).
Then, we extract features that encode {\bf paper} properties: the year it was published, whether the paper was published in a conference or journal, the number of authors, the number of pages, and the word count of the paper. We also compute the Gunning fog index~\cite{gunning1952technique}, a readability measure driven by the number of words, sentences, and complex words.
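For reference, a minimal sketch of this index is given below (in Python; the tokenisation and the vowel-group syllable heuristic are illustrative simplifications, not necessarily the exact implementation we used):
\begin{verbatim}
import re

def syllables(word):
    # rough heuristic: count contiguous vowel groups
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))

print(gunning_fog("We compute readability. It depends on complicated words."))
\end{verbatim}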
We then annotate each row with properties related to the {\bf author} during the year the paper was published. As a proxy of the length of the author's research career in the computational linguistics community, we calculate the number of years since the author's first publication in the anthology. Each author's location is represented as the location (country) of the institution the author is associated with in the metadata or full text.
To measure productivity at varying granularities, we calculate the number of one's papers published in total, in the last year, and in the last five years.
We then construct a dynamic network representation of the anthology using author-to-paper relationships for each calendar year, as encoded in the metadata. After projecting those relationships into the dynamic co-authorship network that reflects author to co-author connections by year, we calculate centrality and PageRank network statistics over time to measure the influence of the author. These {\bf collaboration behavior} features complement the previously described author properties.
We also add three features to encode the diversity in co-authorship, sketched below. The first is the number of distinct co-authors who have published papers with the author. The second is the average number of papers co-authored per co-author, computed as the total number of paper-co-author pairs divided by the number of co-authors. The third is the likelihood that a given co-author appears on one of the author's papers, i.e. the second feature divided by the total number of the author's papers. These features measure the diversity, or lack thereof, of each author's collaborative relationships, and encode how collaboration behavior evolves over time.
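The sketch below illustrates how the yearly co-authorship graph and the three diversity features can be derived from the paper-author table (in Python; the data-frame layout and the function names are illustrative assumptions, not our exact pipeline):
\begin{verbatim}
import itertools

import networkx as nx
import pandas as pd

def collaboration_features(df, year):
    """df has one row per paper-author pair: paper_id, author, year."""
    rows = df[df.year == year]
    G = nx.Graph()                      # co-authorship graph for this year
    for _, grp in rows.groupby("paper_id"):
        for a, b in itertools.combinations(sorted(set(grp.author)), 2):
            G.add_edge(a, b)
    pr = nx.pagerank(G) if len(G) else {}
    feats = {}
    for author, grp in rows.groupby("author"):
        papers = set(grp.paper_id)
        co = rows[rows.paper_id.isin(papers) & (rows.author != author)]
        n_co = co.author.nunique()                 # number of co-authors
        per_co = len(co) / n_co if n_co else 0.0   # papers per co-author
        feats[author] = {"n_coauthors": n_co,
                         "papers_per_coauthor": per_co,
                         "coauthor_likelihood": per_co / len(papers),
                         "pagerank": pr.get(author, 0.0)}
    return feats

toy = pd.DataFrame({"paper_id": ["p1", "p1", "p1", "p2", "p2"],
                    "author":   ["A", "B", "C", "A", "B"],
                    "year":     [2017] * 5})
print(collaboration_features(toy, 2017)["A"])
\end{verbatim}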
\subsection{Encoding Research Focus}
\label{sec:research_entities}
After extracting the full text of each paper from the PDF using GROBID~\cite{GROBID}, we use the SpERT model trained to extract key research entities from scientific publications. The SpERT model~\cite{luan2018multi} was trained to extract scientific entities of different types, such as tasks, methods, and materials, and the relationships between them, such as ``Feature-Of'' and ``Used-for'', using the SciERC dataset\footnote{\url{http://nlp.cs.washington.edu/sciIE/}}. After applying the model to the ACL data, we manually consolidate noisy references of research entities into representative clusters, resulting in 50 entities that encode research tasks, methods, and materials\footnote{Research entities trending in the CL community used for our causal analyses: ``artificial intelligence'',
``adversarial'',
``annotation'',
``arabic'',
``attention'',
``baselines'',
``bidirectional lstm'',
``causal'',
``chinese'',
``classification'',
``coreference'',
``crowdsourcing'',
``deep learning'',
``dialog'',
``embeddings'',
``ethics'',
``explanation'',
``fairness'',
``french'',
``generative'',
``german'',
``grammars'',
``graph models'',
``heuristics'',
``interpretability'',
``language models'',
``lstm'',
``machine learning'',
``monolingual'',
``multilingual'',
``multiple languages'',
``NER'',
``node2vec'',
``non-English language'',
``pos/dependency/parsing'',
``QA'',
``reinforcement learning'',
``robustness'',
``russian'',
``sentiment'',
``statistical/probabilistic models'',
``summarization'',
``topic model'',
``transfer learning'',
``transformers'',
``translation'',
``transparency'',
``unsupervised methods'',
``word2vec'',
``benchmark''.}.
These consolidated entities are representative of the top 300 entities extracted from all ACL anthology publications for which we were able to extract the full text (121,134 out of 127,041, i.e. 95.3\% of all records with an ACL Anthology BibTeX entry), after removing trivial or general terms such as ``system'', ``approach'', ``it'', ``task'', and ``method''. We present the coverage across papers (\% of papers with at least one associated entity) over time in Figure~\ref{fig:keyword_coverage}, illustrating that the coverage approximates the overall coverage (around 41\%) for the bulk of the dataset (1980--2019), and trends upwards over time.
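The coverage statistic itself is a simple aggregate; a sketch of its computation (with illustrative toy inputs in place of the real paper table and entity assignments) is:
\begin{verbatim}
import pandas as pd

papers = pd.DataFrame({"paper_id": ["p1", "p2", "p3"],
                       "year": [2016, 2016, 2017]})
paper_entities = {"p1": {"attention"}, "p3": {"translation", "lstm"}}

has_entity = papers.paper_id.map(lambda p: bool(paper_entities.get(p)))
coverage = (papers.assign(has_entity=has_entity)
            .groupby("year").has_entity.mean() * 100)
print(coverage)   # % of papers with >= 1 entity, by year
\end{verbatim}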
\begin{figure}
\centering
\small
\begin{tikzpicture}
\begin{axis}[
height=1.25in,
width=3in,
xtick={1960,1970,1980,1990,2000,2010,2020},
xticklabels={1960,1970,1980,1990,2000,2010,2020},
xmin=1960,
xmax=2020,
ylabel=\% Papers,
ylabel near ticks
]
\addplot[no marks] coordinates {
(1965, 21.428571428571427)
(1967, 50.0)
(1969, 11.11111111111111)
(1973, 0.0)
(1974, 50.0)
(1975, 16.666666666666664)
(1976, 57.14285714285714)
(1977, 9.090909090909092)
(1978, 13.157894736842104)
(1979, 37.5)
(1980, 28.688524590163933)
(1981, 31.428571428571427)
(1982, 28.187919463087248)
(1983, 31.292517006802722)
(1984, 24.705882352941178)
(1985, 25.146198830409354)
(1986, 24.269005847953213)
(1987, 29.11392405063291)
(1988, 26.180257510729614)
(1989, 34.62732919254658)
(1990, 30.668414154652684)
(1991, 30.25)
(1992, 27.056277056277057)
(1993, 32.241153342070774)
(1994, 31.852551984877124)
(1995, 35.80705009276438)
(1996, 27.653061224489793)
(1997, 33.56890459363957)
(1998, 31.5018315018315)
(1999, 34.35374149659864)
(2000, 39.712606139777925)
(2001, 36.22291021671827)
(2002, 37.65541740674956)
(2003, 35.54347826086956)
(2004, 36.07907742998353)
(2005, 41.518275538894095)
(2006, 38.26611622737377)
(2007, 39.38782374705684)
(2008, 40.30816640986132)
(2009, 40.278729723554946)
(2010, 38.31394162073893)
(2011, 37.28967712596635)
(2012, 37.6173285198556)
(2013, 37.87696019300362)
(2014, 37.278106508875744)
(2015, 40.542159652538565)
(2016, 43.40630564575259)
(2017, 46.94185753838409)
(2018, 46.022632717590284)
(2019, 47.5609756097561)
};
\addplot[no marks,dashed] coordinates {
(2014, 0)
(2014, 58)
};
\end{axis}
\end{tikzpicture}
\caption{Relative coverage of consolidated research entity representations in the ACL data. Percentage of papers with at least one entity associated by publication year. Dashed line indicates the start of our causal analysis period (2014).}
\label{fig:keyword_coverage}
\end{figure}
|
1,941,325,220,061 | arxiv | \section{Introduction}
The physics of ordered (crystals) and disordered (glasses) solid states
and their interrelation has been subject to investigations in various
model systems \cite{Pusey1986,Ivlev2012}. Such model systems capture
typical features of more complex matter and often allow for the variation
of the interparticle interactions to explore physical regimes otherwise
not accessible. A qualitatively strong variation concerns the
distinction between hard and soft repulsion as in the Yukawa potential
which describes the range from excluded-volume to charge-based
interactions.
Yukawa potentials are realized in both colloidal suspensions
\cite{Bitzer1994,Beck1999,Heinen2011} and complex plasmas
\cite{Ivlev2012}, and since in complex plasmas the damping can be tuned,
this offers a way for the comparison of Brownian and Newtonian dynamics
with the same particle-particle interaction in experimental systems
\cite{Gleim1998}. While in sterically stabilized colloidal suspensions,
the interaction can typically be well-approximated by the hard-sphere
interaction \cite{Pusey1986}, for charged particles in suspensions,
hard-sphere plus Yukawa interaction is more appropriate. In complex
plasmas, the average interparticle distance compared to the particles'
diameters is typically large enough to allow for an approximation of
point-like particles and hence a screened Coulomb potential for point
particles is appropriate. In addition to the screening length, in complex
plasmas a second repulsive length scale also arises from the
non-equilibrium ionization-recombination balance
\cite{Khrapak2010,Wysocki2010} which gives rise to a double Yukawa
potential at interparticle distances $r$ as
\begin{equation}\label{eq:pot}
\frac{U(r)}{k_\text{B}T} = \frac{\Gamma}{r} \left[ \exp(-\kappa r)
+ \epsilon \exp(-\alpha\kappa r) \right]\,.
\end{equation}
Distance $r$ is given in units of the mean interparticle distance
$1/\sqrt[3]{\rho}$ with the density $\rho = N/V$ for $N$ particles in a
volume $V$. The coupling parameter is $\Gamma = Q^2
\sqrt[3]{\rho}/(k_\text{B}T)$, with the charge $Q$, and $\kappa =
1/(\lambda\sqrt[3]{\rho})$ is the inverse of the screening length
$\lambda$. The second (longer-ranged) Yukawa potential is specified by a
relative strength $\epsilon$, and a relative inverse screening length
$\alpha < 1$. In the limit of vanishing screening, one recovers the
one-component plasma (OCP), the simplest model that exhibits
characteristics of charged systems \cite{Hansen1986}. Motivated by the
success of mode-coupling theory for ideal glass transitions (MCT) for the
hard-sphere system (HSS), cf. \cite{Sperl2005}, in the following the
glass transition shall be calculated within MCT \cite{Goetze2009}. Since
for time-reversible evolution operators, i.e., Newtonian and Brownian
dynamics, the glassy dynamics within MCT are identical \cite{Szamel1991},
the calculations are applicable to both complex plasma and charged
colloids.
\section{Methods}
We consider a system of $N$ point-like particles in a volume $V$ of
density $\rho = N/V$ interacting via the pairwise repulsive potential in
Eq.~(\ref{eq:pot}). We investigate the glass transitions in two cases: the
single Yukawa ($\epsilon=0$) and the double Yukawa potential ($\epsilon >
0$). Within MCT, the glass transition is defined as a singularity of the
form factor $f_q = \lim_{t \rightarrow \infty} \phi_q(t)$ that is the
long-time limit of the density autocorrelation function. In the liquid
state, $f_q$ is zero, while in the glass state, $f_q > 0$. At the
transition, the form factors adopt their critical values $f_q^c \geq 0$.
$f_q$ is the solution of \cite{Bengtzelius1984}
\begin{equation}\label{eq:fq}
\frac{f_q}{1-f_q} = {\cal F}_q[f_k]\,,
\end{equation}
which is the long time limit of the full MCT equations of motion. $f_q$ is
distinguished from other solutions of Eq.~(\ref{eq:fq}) by its maximum
property, thus it can be calculated using the iteration
$f^{(n+1)}_q/(1-f^{(n+1)}_q) = {\cal F}_q[f^{(n)}_k]$ \cite{Franosch1997}
with $f^{(0)}_k=1$ and the memory kernel given by
\begin {equation}\label{eq:Fq}
{\cal F}_q[f_k]= \frac{1}{16 \pi^3} \int d^3k \frac{S_q S_k S_p}{q^4}
[\mathbf{q}\cdot\mathbf{k} c_k+\mathbf{q}\cdot\mathbf{p}c_p]^2 f_k f_p\,,
\end {equation}
where $\mathbf{p}=\mathbf{q}-\mathbf{k}$; all wave vectors are expressed
in normalized units. Note that the number density does not appear
explicitly in the kernel ${\cal F}_q$, since we express the length scales
in units of $1/\sqrt[3]{\rho}$.
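The structure of this fixed-point iteration can be illustrated with the schematic $F_2$ model, in which the wave-vector-dependent kernel of Eq.~(\ref{eq:Fq}) is replaced -- purely for illustration -- by the single-correlator ansatz ${\cal F}[f]=vf^2$; the same update rule then locates a discontinuous transition at $v^c=4$ with $f^c=1/2$:
\begin{verbatim}
# Long-time-limit iteration f^(n+1) = F/(1+F), here with the schematic
# kernel F[f] = v*f**2 instead of Eq. (3); glass transition at v_c = 4.
def long_time_limit(v, tol=1e-12, nmax=10**6):
    f = 1.0                        # f0 = 1 selects the maximum solution
    for _ in range(nmax):
        F = v * f * f
        f_new = F / (1.0 + F)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f_new

for v in (3.9, 4.0, 4.1):
    print(v, long_time_limit(v))   # f jumps from 0 to 1/2 at v_c = 4
\end{verbatim}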
The only inputs to Eq.~(\ref{eq:Fq}) are the static structure factors
$S_q$. The Fourier transformed direct correlation functions $c_q$ are
related to structure factors through the Ornstein-Zernike (OZ) relation
\begin{equation}\label{eq:oz}
\gamma_q=\frac{c_q^2}{1-c_q},
\end{equation}
where $\gamma_q$ is the spatial Fourier transform of $\gamma(r) = h(r) -
c(r)$, with $h(r)$ the total correlation function, which is related to the
structure factor through $S_q=1+h_q$. We close the equations by the
hypernetted-chain (HNC) approximation,
\begin{equation}\label{eq:HNC}
c(r)=\exp{[-U(r)/(k_BT)+\gamma(r)]}-\gamma(r)-1\,,
\end{equation}
where $U(r)$ is the interaction potential. It was found earlier that HNC
captures well various structural features for repulsive potentials,
especially also for the OCP \cite{Ng1974}. For the HSS, the quality of HNC
is known to be inferior to the Percus-Yevick (PY) approximation in certain
thermodynamic aspects \cite{Hansen1986}, so we expect HNC to vary in
performance for different parameter regions of the Yukawa potentials in
Eq.~(\ref{eq:pot}).
We solve Eq.~(\ref{eq:oz}) and Eq.~(\ref{eq:HNC}) by iteration and use the
usual mixing method in order to ensure convergence \cite{Hansen1986}. We
iterate $n$ times from an initial guess, $c^{(0)}(r)$, until a
self-consistent result is achieved, i.e.,
\begin{equation}
\left[\int_0^R|c^{(n+1)}(r)-c^{(n)}(r)|^2\,\mathrm{d}r\right]^{1/2} <
\delta,
\end{equation}
with $\delta = 10^{-5}$,
where $R$ is the cut-off length of $c(r)$. We employ $R=47.1239$ and a
mesh of size $M=2396$ points. Consequently, the resolution in real and
Fourier space is $\Delta r=R/M=0.0197$ and $\Delta q=\pi/R=0.0667$,
respectively. We use an orthogonality-preserving algorithm for the
numerical calculation of Fourier transforms \cite{Lado1971}. For a
particular $\kappa$ we begin the computation of $c(r)$ at a small coupling
parameter $\Gamma$, successively increase $\Gamma$, and use the outcome as
an initial guess for the subsequent calculation.
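For reference, this scheme can be sketched as follows (in Python, for the single Yukawa case; the mixing parameter and the iteration cap are illustrative choices, and this sketch is not the production code used for the results below):
\begin{verbatim}
import numpy as np

M, R = 2396, 47.1239
dr, dq = R / M, np.pi / R
r = dr * np.arange(1, M + 1)
q = dq * np.arange(1, M + 1)
S = np.sin(np.outer(q, r))          # shared sine-transform kernel

def ft3d(f):                        # radial 3D transform, f(r) -> f(q)
    return (4.0 * np.pi * dr / q) * (S @ (r * f))

def ift3d(f):                       # inverse transform, f(q) -> f(r)
    return (dq / (2.0 * np.pi**2 * r)) * (S.T @ (q * f))

def solve_hnc(Gamma, kappa, mix=0.1, delta=1e-5, nmax=100000):
    beta_u = Gamma * np.exp(-kappa * r) / r    # U(r)/kT, single Yukawa
    c = -beta_u                                # weak-coupling initial guess
    for _ in range(nmax):
        cq = ft3d(c)
        g = ift3d(cq * cq / (1.0 - cq))        # Ornstein-Zernike relation
        c_new = np.exp(-beta_u + g) - g - 1.0  # HNC closure
        if np.sqrt(dr * np.sum((c_new - c)**2)) < delta:
            return 1.0 / (1.0 - ft3d(c_new))   # S_q = 1/(1 - c_q)
        c = (1.0 - mix) * c + mix * c_new      # mixing step
    raise RuntimeError("HNC iteration did not converge")

Sq = solve_hnc(Gamma=10.0, kappa=1.0)          # moderate-coupling example
\end{verbatim}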
\section{Single-Yukawa Potential}
\subsection{Glass-Transition Diagram}
\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{PD1Y.eps}
\caption{\label{fig:PD1Y}Glass-transition diagram for the single Yukawa
potential (filled circles). Transition points are shown together with the
full curve exhibiting Eq.~(\ref{eq:scale}). For comparison, a similar
curve is shown for the melting of the crystal.}
\end{figure}
The MCT results for the single Yukawa case are shown in
Fig.~\ref{fig:PD1Y}. The filled circles for different $\Gamma$ and
$\kappa$ indicate the glass transition points calculated by
Eq.~(\ref{eq:fq}). For $\kappa\rightarrow 0$, the glass transition for the
OCP limit is found at $\Gamma^c_\text{OCP}=366$. When screening is
introduced for $\kappa > 0$, the glass-transition line moves to higher
critical coupling strengths $\Gamma^c(\kappa)$. Figure~\ref{fig:PD1Y}
shows for reference the melting curve for weakly screened Yukawa systems,
described by $\Gamma(\kappa) = 106\,e^{\kappa}/\left(1 + \kappa +
\kappa^2/2\right)$ \cite{Vaulina2000,Vaulina2002}. This expression has
been suggested originally on the basis of the Lindemann-type arguments,
cf. \cite{Lindemann1910}. The Lindemann criterion states that the
liquid-crystal phase transition occurs when in the crystal the
root-mean-square displacement $\langle \delta r^2\rangle$ of particles
from their equilibrium positions reaches a certain fraction of the mean
interparticle distance. Within the simplest one-dimensional harmonic
approximation this yields the scaling $U''(r=1)\langle \delta
r^2\rangle/T\simeq {\rm const.}$, where the double prime denotes the
second derivative with respect to distance. Applied to the Yukawa interaction
this leads to the melting curve above, where the value of the constant is
determined from the condition $\Gamma\simeq 106$ at melting of the OCP
system ($\kappa=0$) \footnote{Note that $\Gamma\simeq 172$ if the
Wigner-Seitz radius $a = \sqrt[3]{3/4\pi\rho}$ is used as a unit length
instead of $1/\sqrt[3]{\rho}$.}. This expression for the melting curve is
widely used due to its particular simplicity and reasonable accuracy:
Deviations from numerical simulation data of Ref.~\cite{Hamaguchi1997} do
not exceed several percent, as long as $\kappa\lesssim 8$. Moreover,
similar arguments can be used to reasonably describe freezing of other
simple systems, e.g. Lennard-Jones-type fluids~\cite{Khrapak2011}.
Remarkably, when comparing the predicted glass-transition with the melting
curve, one observes that both transition lines run in parallel. The
glass-transition line is described by the function
\begin{equation}\label{eq:scale}
\Gamma^c(\kappa) = \Gamma^c_\text{OCP}\,e^{\kappa}
\left(1+\kappa+\kappa^2/2\right)^{-1}\,,
\end{equation}
which is shown as the solid line in Fig.~\ref{fig:PD1Y}, i.e., the glass
transition is found at 3.45 times the coupling strength of the melting curve.
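A quick numerical check of this parallelism (a few-line sketch; the prefactors $366$ and $106$ are the OCP values quoted above):
\begin{verbatim}
import numpy as np

def line(kappa, prefactor):        # Eq. (7) resp. the melting curve
    return prefactor * np.exp(kappa) / (1.0 + kappa + 0.5 * kappa**2)

for kappa in (0.0, 1.0, 5.0, 10.0):
    print(kappa, line(kappa, 366.0) / line(kappa, 106.0))  # = 3.45
\end{verbatim}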
The fit quality given by Eq.~(\ref{eq:scale}) is remarkable for two
distinct reasons: First, the potential changes quite drastically along the
line from a long-ranged interaction at low $\kappa$ to the paradigmatic
hard-sphere system at very large $\kappa$ to be detailed below. Such
simplicity along control-parameter dependent glass-transition lines is not
to be expected and not observed for other potentials, cf. the square-well
system \cite{Dawson2001,Sperl2004}. Second, the non-trivial changes along
the transition lines are apparently quite similar for the transition into
ordered and disordered solids alike, and Eq.~(\ref{eq:scale}) applies to
both. For the mentioned square-well system, ordered and disordered solids
have no such correlation \cite{Sperl2004}.
Since both MCT and the structural input involve approximations, typically
the glass transitions are found for higher couplings than predicted, the
deviation is around 10\% in the densities for the HSS \cite{Goetze2009}.
While one can expect that absolute values for transition points need to be
shifted to match experimental values \cite{Sperl2005}, the qualitative
evolution of glass-transition lines with control parameters is usually
quite accurate and even counterintuitive phenomena like melting by cooling
have been predicted successfully \cite{Dawson2001}. Hence, we assume the
description of the liquid-glass transitions in the single Yukawa system to
be qualitatively correct.
\subsection{Glass-Form Factors}
\begin{figure}[hbt]
\includegraphics[width=\columnwidth]{fq1Ylarge.eps}
\caption{\label{fig:fq1Y}Critical glass-form factors $f_q$ for the glass
transition in the single Yukawa system. For increasing screening
parameter $\kappa$, the inset shows the location of the respective
transition points on the MCT-transition line, cf. Fig.~\ref{fig:PD1Y},
with the same symbols as in the main panel. The full curve shows the
solution for the HSS within the HNC approximation. The result
for HSS within the PY approximation \cite{Franosch1997} is shown dashed.}
\end{figure}
The different points on the glass-transition lines shall be discussed in
detail in the following. For the well-known case of the glass transition
in the HSS, the critical form factors are shown by a full curve in
Fig.~\ref{fig:fq1Y}. Different from earlier results calculated for $S_q$
within the PY approximation \cite{Franosch1997}, here we also show the HSS
within the HNC approximation to be consistent with the Yukawa results. The
control parameter for the HSS is the packing fraction $\varphi = \rho
(\pi/6)\,d^3$ with the hard-core diameter $d$ as the unit of length. For
HNC, the transition point is found at a packing fraction of
$\varphi^c_\text{HSS} = 0.525$. This value as well as the behavior of
$f_q$ in Fig.~\ref{fig:fq1Y} is very close in HNC and the PY approximation
where $\varphi^c_\text{HSS} = 0.516$ \cite{Franosch1997}. It is seen that
the distribution of $f_q$ is dominated by a peak at interparticle
distances, which indicates the cage effect \cite{Goetze2009,Franosch1997};
oscillations for higher wave vectors follow this length scale in a way
similar to the static structure factor. For both PY and HNC, the peak
positions for $f_q$ coincide, for the principal peak even the peak heights
are almost identical. For HNC, the $f_q$ are typically above the PY
solutions resulting in a 10\% larger half-width of the distribution of the
$f_q$ for the HNC. The predicted deviations between HNC and PY are mostly
indistinguishable when comparing to experiments except for the small-$q$
limit where experimental results favor the PY-MCT calculation, cf.
\cite{Megen1995,Goetze2009}.
For the Yukawa potential, overall the critical form factors exhibit
similar features as for the HSS. Different from the HSS, in the OCP limit
the form factors vanish for the limit $q\rightarrow 0$. This anomaly for
charged systems corresponds to the small wave-vector behavior in the
static structure $S_q\propto q^2$ for $q\rightarrow 0$ \cite{Hansen1986}.
Since in the OCP, mass and charge fluctuations are proportional to each
other, the conservation of momentum implies the conservation of the
microscopic electric current, and hence no damping of charge fluctuations
in the long wave-length limit. Considering Eq.~(\ref{eq:Fq}) we shall
demonstrate that $f_q \propto q^2$ for small wave vectors.
Denoting $\theta$ as the angle between $\mathbf{q}$ and $\mathbf{k}$ we
can expand the direct correlation functions as:
\begin{equation}\label{eq:expanc}
c_{|\mathbf{q}-\mathbf{k}|}= c_k - c'_k q \cos\theta
+ \frac{1}{2} q^2 \cos^2\!\theta\, c''_k
- \frac{1}{6} q^3 \cos^3\!\theta\, c'''_k\,,
\end{equation}
where the primes represent the respective first, second and third
derivatives of $c_k$ with respect to $k$. Substituting
Eq.~(\ref{eq:expanc}) into Eq.~(\ref{eq:Fq}) leads to
\begin{subequations}\label{eq:Fqexp}
\begin {equation}\label{eq:Fqlimitmore}
{\cal F}_q[f_k] \rightarrow S_q \alpha +q^2 S_q \beta\,,
\end {equation}
where \cite{Bayer2007}
\begin{equation}
\alpha=\frac{1}{4\pi^2} \int_0^\infty dk k^2 S_k^2 [c_k^2 +
\frac{2}{3} k c_k c'_k+\frac{1}{5} k^2 {c'_k}^2] f_k^2 \,,
\end{equation}
and
\begin{equation}\label{eq:beta}
\begin{split}
\beta=&\frac{1}{4\pi^2} \int_0^\infty dk k^2
S_k^2 [\frac{1}{3}
{c'_k}^2+\frac{1}{28} k^2 {c''_k}^2+\frac{2}{5} k c'_k c''_k
\\&+\frac{1}{3}
c'_k c''_k+\frac{1}{15} k c_k c'''_k+\frac{1}{21} k^2 c'_k c''_k]
f_k^2\,.\end{split}
\end{equation}
The term linear in $q$ in Eq.~(\ref{eq:Fqlimitmore}) vanishes.
\end{subequations}
Similarly,
the small-$q$ expansion of the static structure factor in the OCP reads
\cite{Baus1980}
\begin{equation}\label{eq:sqlimit}
S(q)= \frac{q^2}{k_D^2}+\frac{q^4}{k_D^4}[c^R(0)-1]+{\cal O}(q^6)
\end{equation}
where $k_D^2= 4 \pi \Gamma$ is the square of the inverse Debye length, and
$c^R(q)=c(q)-c^S(q)$ is the regular term of the direct correlation
function, assuming that at large distances particles can only be weakly
coupled, which creates the singular term $c^S(q)=-U(q)/k_\text{B}T$. From
Eq.~(\ref{eq:Fqlimitmore}) and Eq.~(\ref{eq:sqlimit}) we get
\begin {equation}\label{eq:Fqlimitmoreocp}
{\cal F}_q= q^2 \frac{\alpha}{k_D^2} + q^4[\frac{\beta}{k_D^2}
+\frac{\alpha}{k_D^4}(c^R(0)-1)]+{\cal O}(q^6)\,.
\end{equation}
From Eq.~(\ref{eq:fq}) one can conclude that $f_q$ has the same small-$q$
behavior as ${\cal F}_q$, hence we have shown that $f_q\propto q^2$ for vanishing $q$.
For non-vanishing screening, $\kappa > 0$, the small-$q$ behavior of the
form factors is characterized by finite intercepts at $q=0$. This regular
behavior is ensured by the $q\rightarrow0$ limit of $c^S_q =
-4\pi\Gamma/(q^2 + \kappa^2)$. For larger wave vectors, the $f_q$ first
decrease in comparison to OCP -- cf. $\kappa = 5.7$ ($\times$) and 14.0
($\blacktriangledown$) in Fig.~\ref{fig:fq1Y} -- before increasing beyond
the OCP result for $\kappa \gtrsim 30$. For very large screening, the form
factors of the Yukawa potential apparently approach the HSS case.
\subsection{HSS Limit}
By setting $U(d_\text{eff})/k_\text{B}T\sim 1$ for $\epsilon = 0$ in
Eq.~(\ref{eq:pot}) one can define an effective diameter that becomes a
well-defined hard-core diameter for $\kappa\rightarrow\infty$. Along the
glass-transition line $\Gamma^c(\kappa)$ the effective packing fraction
and diameter are given (with logarithmic accuracy) by
\begin{equation}\label{eq:HSSlimit}
\varphi^c_\text{eff} =
\frac{\pi}{6}\left(\frac{\ln\Gamma^c}{\kappa}\right)^3\,,\quad
d^c_\text{eff} = \ln\Gamma^c/\kappa\,,
\end{equation}
where only the definition of the packing fraction has been used.
Figure~\ref{fig:phieff} displays the effective packing fractions along the
single-Yukawa transition line up to $\kappa\approx 100$. For small
$\kappa$, the large effective diameter yields considerable
overlaps among the particles and hence a packing fraction beyond unity.
The effective hard-sphere diameter can be seen in the inset of
Fig.~\ref{fig:phieff}. For $\kappa\gtrsim 40$ the Yukawa potentials'
effective diameter $d^c_\text{eff}$ reaches its asymptotic value.
Together with the findings on the $f_q$ this establishes the crossover of
the glass-transition properties of the Yukawa system to the hard-sphere
limit. The relation in Eq.~(\ref{eq:scale}) fits effective diameters and
densities well for smaller $\kappa\lesssim 10$ and underestimates the
calculated values for larger $\kappa$, as expected.
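These asymptotes follow from a one-line evaluation of Eqs.~(\ref{eq:scale}) and (\ref{eq:HSSlimit}); a short sketch:
\begin{verbatim}
import numpy as np

def eff_hs(kappa, G_ocp=366.0):    # Eqs. (7) and (12), small-kappa asymptote
    Gc = G_ocp * np.exp(kappa) / (1.0 + kappa + 0.5 * kappa**2)
    d = np.log(Gc) / kappa
    return d, np.pi / 6.0 * d**3

for kappa in (5.0, 20.0, 40.0, 100.0):
    d, phi = eff_hs(kappa)         # full MCT values level off near 0.525
    print(kappa, d, phi)
\end{verbatim}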
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{phi_eff.eps}
\caption{\label{fig:phieff}Effective packing fraction
$\varphi^c_\text{eff}$ for Yukawa potentials along the transition line in
Fig.~\ref{fig:PD1Y}. The horizontal dashed line shows the HSS-HNC limit of
$\varphi^c_\text{HSS} = 0.525$. The inset shows the effective hard-sphere
diameter, $d^c_\text{eff} = \ln\Gamma^c/\kappa$ equivalent to the
effective densities. In both plots, the dotted curves display the
small-$\kappa$ asymptotes derived from Eq.~(\ref{eq:scale}). }
\end{figure}
\section{Double-Yukawa Potential}
\subsection{Glass-Transition Diagrams}
\begin{figure}[bht]
\includegraphics[width=\columnwidth]{PD2Yscaled.eps}
\caption{\label{fig:PD2Ys}Glass transition diagram for double Yukawa
potentials with $\alpha = 0.125$, $\epsilon = 0.2$ (diamonds) and 0.01
(squares). The single Yukawa data (filled circles) is shown together with
the analytical description by Eq.~(\ref{eq:scale}) (solid curve labeled
$\epsilon = 0$). The single Yukawa points are scaled according to
Eq.~(\ref{eq:scale2Y}) for $\epsilon = 0.2$, and shown by open circles.
Dotted and dashed curves represent scaled versions of
Eq.~(\ref{eq:scale}) for $\epsilon = 0.01$ and $\epsilon = 0.2$,
respectively. The solid curves labeled $\epsilon = 0.01$ and $\epsilon =
0.2$, respectively, show the solution of Eq.~(\ref{eq:scaleall}).
}
\end{figure}
Progressing towards the double Yukawa potentials, we show in
Fig.~\ref{fig:PD2Ys} the results of MCT calculations for the same relative
screening $\alpha = 0.125$ and a weak ($\epsilon = 0.01$) as well as a
strong ($\epsilon = 0.2$) second repulsion. In both cases, for small
$\kappa$ the transition lines start at OCP and follow the single-Yukawa
line. After a crossover regime, for $\kappa \gtrsim 15$ for $\epsilon =
0.01$ and $\kappa \gtrsim 10$ for $\epsilon = 0.2$, the transitions are
described well by rescaling the original single-Yukawa results according
to
\begin{equation}\label{eq:scale2Y}
\Gamma' = \Gamma/\epsilon,\quad\kappa' = \kappa/\alpha\,.
\end{equation}
In Fig.~\ref{fig:PD2Ys}, scaling by Eq.~(\ref{eq:scale2Y}) is demonstrated
by transforming the MCT results for $\epsilon = 0$ (full circles) into
a rescaled version (open circles) for $\epsilon = 0.2$ which compares well
to the full MCT calculation for the double Yukawa potential (diamonds).
Similarly, formula~(\ref{eq:scale}) can be used to describe all double
Yukawa results for small screening lengths, and the results for large
screening lengths by scaling Eq.~(\ref{eq:scale}) with
Eq.~(\ref{eq:scale2Y}). The dotted and dashed curves in
Fig.~\ref{fig:PD2Ys} exhibit the scaled curves for $\epsilon = 0.01$ and
0.2, respectively. The linear combination of the analytical descriptions
for both length scales reads
\begin{equation}\label{eq:scaleall}
\begin{array}{l}
\Gamma^c(\kappa)/\Gamma^c_\text{OCP} =
\left[e^{-\kappa}(1+\kappa+\kappa^2/2)\right.\\\left.
\qquad\qquad\qquad
+\epsilon\, e^{-\kappa\alpha}(1+\kappa\alpha+\kappa^2\alpha^2/2)
\right]^{-1}\,,\end{array}
\end{equation}
and is demonstrated by the solid lines for $\epsilon = 0.01$ and $0.2$ in
Fig.~\ref{fig:PD2Ys}. It is seen that Eq.~(\ref{eq:scaleall}) describes
the MCT results for double Yukawa potentials for the entire range of
control parameters including crossover regions. In conclusion, the
MCT predictions for both single and double Yukawa potentials can be
rationalized by a single analytical formula (\ref{eq:scale}) which traces
the melting curve, captures the interplay between large and small
repulsive length scales, and extends for all parameters from OCP to HSS.
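For completeness, Eq.~(\ref{eq:scaleall}) is straightforward to evaluate; the sketch below reproduces both limits (it reduces to Eq.~(\ref{eq:scale}) for $\epsilon = 0$, and to the rescaled form of Eq.~(\ref{eq:scale2Y}) for large $\kappa$):
\begin{verbatim}
import numpy as np

def gamma_c_2Y(kappa, eps, alpha, G_ocp=366.0):   # Eq. (14)
    f = lambda x: np.exp(-x) * (1.0 + x + 0.5 * x**2)
    return G_ocp / (f(kappa) + eps * f(alpha * kappa))

for kappa in (1.0, 10.0, 40.0):
    print(kappa, gamma_c_2Y(kappa, eps=0.2, alpha=0.125))
\end{verbatim}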
\subsection{Localization Lengths}
Another length scale resulting from the dynamical MCT calculations is
given by the localization length. It is defined from the long-time limit
of the mean-squared displacement $\delta r^2(t) = \langle|r(t) - r(0)|^2
\rangle$ as ${r_{s}}^c=\sqrt{\lim_{t\rightarrow\infty}\delta r^2(t)/6}$.
For the glass transition in the HSS, MCT predicts a localization length
within HNC of $r_{s}^c/d = 0.0634$. This scale is quite close to the
classical result of a Lindemann length \cite{Lindemann1910}.
For the single and double Yukawa potential, the evolution of the
localization length with $\kappa$ is demonstrated in Fig.~\ref{fig:loc}.
From a value of $r_{s}^c = 0.070$ for OCP, the localization lengths
increase for the single Yukawa potential, reach a maximum around $\kappa
\approx 10$ and decrease to the values for HSS for large $\kappa$. The
maximum can be interpreted as follows: The widths of the distributions in
$f_q$ seen in Fig.~\ref{fig:fq1Y} correspond to an inverse length scale
equivalent to ${r_{s}}^c$, and a smaller width of the $f_q$ means an
increase of ${r_{s}}^c$. For large $\kappa$, the localization length needs
to approach the HSS value, hence the ${r_{s}}^c$ decrease again. Both
trends together yield a maximum.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{rsc_12Y.eps}
\caption{\label{fig:loc} Localization length for single Yukawa (full
circles) and double Yukawa (diamonds) potential with $\alpha = 0.125$ and
$\epsilon = 0.2$. The open circles show the single-Yukawa data scaled
according to Eq.~(\ref{eq:scale2Y}). The horizontal dashed line shows the
HSS limit for $r_{s}^c$.
}
\end{figure}
The localization length for the double Yukawa system follows the
single-Yukawa results for small $\kappa\lesssim 5$ as observed in
Fig.~\ref{fig:PD2Ys} and hence increases; for $\kappa\gtrsim 5$, the
double Yukawa system approaches the scaled single-Yukawa results shown by
the circles. For larger $\kappa$, the evolution follows the scaled
single-Yukawa results and, while deviating from them for $\kappa\gtrsim 50$,
reaches a scaled maximum around $\kappa\approx 80$.
Altogether, the variation of the localization lengths is around 10\% which
is small compared to other glass-transition diagrams \cite{Sperl2004}.
Hence we conclude that for both single- and double-Yukawa potentials the
MCT results for the localization length are always close to the values
usually assumed for the Lindemann criterion.
\section{Conclusion}
In summary, we have demonstrated above the full glass-transition diagram
for the single and double Yukawa systems. While some parallel running
lines for limited parameter ranges have been shown earlier for logarithmic
core potential plus Yukawa tail \cite{Foffi2003star,Sciortino2004}, here
we describe the transition diagrams by analytical formulae. In particular
it could be shown how the HSS limit continuously evolves into the OCP
limit. We have shown that the glass-transition lines resulting from the
combination of HNC and MCT -- two rather complex nonlinear functionals --
can be described analytically over their entire range from the OCP limit
for small $\kappa$ to the HSS limit for large $\kappa$. Qualitatively, the
behavior of the transition line can be estimated by the Lindemann
criterion for melting \cite{Lindemann1910}, while quantitatively, glass
transition and crystal melting are following remarkably similar trends for
stronger coupling.
It is important to note that the present calculations were performed for
point particles with various degrees of charging and screening. The limit
of the HSS emerges from these calculations without actual excluded volume
in the potentials. With the important difference of a finite hard-sphere
radius being present, the possibility that in addition to a Coulomb
crystal a dilute system of charges may also form a Coulomb glass was
explored in the restricted primitive model for a mixture of charged hard
spheres \cite{Bosse1998} and the hard-sphere jellium model
\cite{Wilke1999} as well as for a system of charged hard spheres to
describe charge-stabilized colloidal suspensions \cite{Lai1995}. In
conclusion, the present calculations offer exhaustive analytical
descriptions for glass transitions over a wide range of quite different
interaction potentials. The predictions should motivate data collapse from
computer simulation and different experimental model systems in order to
confirm or challenge the unified picture presented above.
Financial support within the ERC (Advanced grant INTERCOCOS, project
number 267499) is gratefully acknowledged.
\bibliographystyle{apsrev}
\end{document}
|
1,941,325,220,062 | arxiv |
\section{Introduction}
The minimal model conjecture predicts that
an arbitrary algebraic variety is
birational to either a minimal model or a Mori fibre space $\pi \colon V \rightarrow B$.
A distinguished property of Mori fibre spaces in characteristic zero is that
any relative numerically trivial line bundle is automatically trivial (cf. \cite[Lemma 3.2.5]{KMM87}).
In \cite[Theorem 1.4]{Tan}, the second author constructs counterexamples
to the same statement in positive characteristic.
More specifically, if the characteristic is two or three,
then there exists a Mori fibre space $\pi \colon V \rightarrow B$ and a line bundle $L$ on $V$ such that
$\dim V=3, \dim B=1, L \equiv_{\pi} 0,$ and $L \not\sim_{\pi} 0$.
Then it is tempting to ask how bad the torsion indices can be.
One of the main results of this paper is to give an explicit upper bound
of torsion indices for three-dimensional del Pezzo fibrations.
\begin{thm}[Theorem \ref{t-triv-lb-mfs-curve}] \label{i-triv-lb-mfs-curve}
Let $k$ be an algebraically closed field of characteristic $p>0$.
Let $\pi \colon V \rightarrow B$ be a projective $k$-morphism such that $\pi_* \mathcal{O}_V = \mathcal{O}_B$, where $V$ is a three-dimensional $\mathbb{Q}$-factorial
normal quasi-projective variety over $k$ and $B$ is a smooth curve over $k$.
Assume there exists an effective $\mathbb{Q}$-divisor $\Delta$ such that $(V, \Delta)$ is klt and $\pi \colon V \rightarrow B$ is a $(K_V+\Delta)$-Mori fibre space.
Let $L$ be a $\pi$-numerically trivial Cartier divisor on $V$.
Then the following hold.
\begin{enumerate}
\item If $p \geq 7$, then $L \sim_{\pi} 0$.
\item If $p \in \left\{ 3, 5 \right\}$, then $p^2L \sim_{\pi} 0$.
\item If $p =2$, then $16 L \sim_{\pi} 0$.
\end{enumerate}
\end{thm}
We also prove a theorem of Graber--Harris--Starr type for del Pezzo fibrations in positive characteristic.
\begin{thm}[Theorem \ref{t-rc-3fold}] \label{intro-rc-3fold}
Let $k$ be an algebraically closed field of characteristic $p>0$.
Let $\pi:V \to B$ be a projective $k$-morphism such that $\pi_*\MO_V=\MO_B$,
$V$ is a normal three-dimensional variety over $k$, and $B$ is a smooth curve over $k$.
Assume that there exists an effective $\Q$-divisor $\Delta$ such that
$(V, \Delta)$ is klt and $-(K_V+\Delta)$ is $\pi$-nef and $\pi$-big.
Then the following hold.
\begin{enumerate}
\item There exists a curve $C$ on $V$ such that $C \to B$ is surjective and
the following properties hold.
\begin{enumerate}
\item If $p\geq 7$, then $C \to B$ is an isomorphism.
\item If $p \in \{3, 5\}$, then $K(C)/K(B)$ is a purely inseparable extension of degree
$\leq p$.
\item If $p=2$, then $K(C)/K(B)$ is a purely inseparable extension of degree
$\leq 4$.
\end{enumerate}
\item If $B$ is a rational curve, then $V$ is rationally chain connected.
\end{enumerate}
\end{thm}
Theorem \ref{intro-rc-3fold} can be considered as a generalisation
of classical Tsen's theorem, i.e. the existence of sections on ruled surfaces.
Tsen's theorem was used to establish the log minimal model program in characteristic $p>5$ \cite[Section 3.4]{BW17}.
Also, Tsen's theorem was used to show that
$H^i(X, W\MO_{X, \Q})=0$ for
threefolds $X$ of Fano type in characteristic $p>5$ when $i>0$ (cf. \cite[Theorem 1.3]{GNT}).
The proofs of Theorem \ref{i-triv-lb-mfs-curve} and Theorem \ref{intro-rc-3fold}
are carried out by studying the generic fibre $X:=V \times_B \Spec\,K(B)$ of $\pi$, which is a surface of del Pezzo type defined over an imperfect field.
Roughly speaking, Theorem \ref{i-triv-lb-mfs-curve} and Theorem \ref{intro-rc-3fold}
hold by the following two theorems.
\begin{thm}[Theorem \ref{t-klt-bdd-torsion}]\label{i-klt-bdd-torsion}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a $k$-surface of del Pezzo type.
Let $L$ be a numerically trivial Cartier divisor on $X$.
Then the following hold.
\begin{enumerate}
\item If $p \geq 7$, then $L \sim 0$.
\item If $p \in \{3, 5\}$, then $pL \sim 0$.
\item If $p=2$, then $4L \sim 0$.
\end{enumerate}
\end{thm}
\begin{thm}[Theorem \ref{t-ex-rat-points-dP}] \label{i-ex-rat-points-dP}
Let $k$ be a $C_1$-field of characteristic $p>0$. Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \mathcal{O}_X)$.
Then
\begin{enumerate}
\item If $p \geq 7$, then $X(k) \neq \emptyset$;
\item If $p \in \left\{ 3,5 \right\}$, then $X(k^{1/p}) \neq \emptyset$;
\item If $p =2$ , then $X(k^{1/4}) \neq \emptyset$.
\end{enumerate}
\end{thm}
\subsection{Sketch of the proof of Theorem \ref{i-klt-bdd-torsion}}\label{ss-intro1}
Let us overview some of the ideas used in the proof of Theorem \ref{i-klt-bdd-torsion}.
By considering the minimal resolution and running a minimal model program,
the problem is reduced to the case when $X$ is a regular surface of del Pezzo type
which has a $K_X$-Mori fibre space structure $X \to B$.
In particular, it holds that $\dim B=0$ or $\dim B=1$.
\subsubsection{The case when $\dim B=0$} \label{sss2-intro0}
Assume that $\dim B=0$.
In this case, $X$ is a regular del Pezzo surface.
We first classify $Y:=(X \times_k \overline k)_{\red}^N$
(Theorem \ref{i-classify-bc}).
We then compare $X \times_k \overline k$ with $Y=(X \times_k \overline k)_{\red}^N$
(Theorem \ref{i-p2-bound}).
\begin{thm}[Theorem \ref{t-classify-bc}]\label{i-classify-bc}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a projective normal surface over $k$ with canonical singularities
such that $k=H^0(X, \MO_X)$ and $-K_X$ is ample.
Then the normalisation $Y$ of $(X \times_k \overline{k})_{\red}$ satisfies one of the following properties.
\begin{enumerate}
\item $X \times_k \overline k$ is normal.
Moreover, $X \times_k \overline k$ has at worst canonical singularities.
In particular, $Y \simeq X \times_k \overline k$ and $-K_Y$ is ample.
\item $Y$ is isomorphic to a Hirzebruch surface, i.e. a $\mathbb P^1$-bundle over $\mathbb P^1$.
\item $Y$ is isomorphic to a weighted projective surface $\mathbb P(1, 1, m)$
for some positive integer $m$.
\end{enumerate}
\end{thm}
\begin{thm}[cf. Theorem \ref{t-p2-bound}]\label{i-p2-bound}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a projective normal surface over $k$ with canonical singularities such that $k=H^0(X, \MO_X)$ and $-K_X$ is ample.
Let $Y$ be the normalisation of $(X \times_k \overline k)_{\red}$ and let
\[
\mu: Y \to X \times_k \overline k
\]
be the induced morphism.
\begin{enumerate}
\item If $p \geq 5$, then $\mu$ is an isomorphism and $Y$ has at worst canonical singularities.
\item If $p=3$, then the absolute Frobenius morphism $F_{X \times_k \overline k}$
of $X \times_k \overline k$ factors through $\mu$:
\[
F_{X \times_k \overline k}:X \times_k \overline k\to Y \xrightarrow{\mu} X \times_k \overline k.
\]
\item
If $p=2$, then the second iterated absolute Frobenius morphism $F^2_{X \times_k \overline k}$
of $X \times_k \overline k$ factors through $\mu$:
\[
F^2_{X \times_k \overline k}:X \times_k \overline k\to Y \xrightarrow{\mu} X \times_k \overline k.
\]
\end{enumerate}
\end{thm}
Note that Theorem \ref{i-classify-bc} shows that $Y=(X \times_k \overline k)_{\red}^N$
is a rational surface.
In particular, any numerically trivial line bundle on $Y$ is trivial.
By Theorem \ref{i-p2-bound},
if $L'$ denotes the pullback of $L$ to $X \times_k \overline k$,
then it holds that $L'^4 \simeq \MO_{X \times_k \overline k}$ in the case (3).
Then the flat base change theorem implies that also $L^4$ is trivial.
We now discuss the proofs of Theorem \ref{i-classify-bc}
and Theorem \ref{i-p2-bound}.
Roughly speaking, we apply Reid's idea (\cite[cf. the proof of Theorem 1.1]{Rei94})
to prove Theorem \ref{i-classify-bc}
by combining with a rationality criterion (Lemma \ref{l-rationality}).
As for Theorem \ref{i-p2-bound},
we use the notion of Frobenius length of geometric normality $\ell_F(X/k)$
introduced in \cite{Tan19} (cf. Definition \ref{d-lF}, Remark \ref{r-lF}).
Roughly speaking, if $p=2$, then we can prove that $\ell_F(X/k) \leq 2$
by computing certain intersection numbers (cf. the proof of Proposition \ref{p-p2-bound}).
Then a general result on $\ell_F(X/k)$ (Remark \ref{r-lF}) implies (3) of Theorem \ref{i-p2-bound}.
\subsubsection{The case when $\dim B=1$}\label{sss2-intro1}
Assume that $\dim B=1$, i.e. $\pi:X \to B$ is a $K_X$-Mori fibre space to a curve $B$.
Since $X$ is of del Pezzo type, we have that the extremal ray $R$ of $\overline{\text{NE}}(X)$
that does not correspond to $\pi:X \to B$ is spanned by an integral curve $\Gamma$, i.e. $R=\R_{\geq 0}[\Gamma]$.
In particular, $\Gamma \to B$ is a finite surjective morphism of curves.
If $K_X \cdot \Gamma<0$, then the problem is reduced
to the above case (\ref{sss2-intro0}) by contracting $\Gamma$.
If $K_X \cdot \Gamma=0$, we may still contract $\Gamma$ and
apply the same strategy.
Hence, it is enough to treat the case when $K_X \cdot \Gamma >0$.
Note that the numerically trivial Cartier divisor $L$ on $X$ descends to $B$,
i.e. we have $L \sim \pi^*L_B$ for some Cartier divisor $L_B$ on $B$.
Then, a key observation is that
the extension degree $[K(\Gamma):K(B)]$ is at most five (Proposition \ref{p-cov-deg-bound}).
For example, if $p>5$, then $\Gamma \to B$ is separable.
Then the Hurwitz formula implies that $-K_B$ is ample, hence $L_B \sim 0$.
If $K(\Gamma)/K(B)$ is purely inseparable of degree $p^e$,
then it holds that $p^e L_B \sim 0$, since $-K_{\Gamma^N}$ is ample.
For the remaining case, i.e. $p=2$, $[K(\Gamma):K(B)]=4$, and $K(\Gamma)/K(B)$ is inseparable but not purely inseparable,
we prove that $H^0(B, L_B^4) \neq 0$ by applying Galois descent for the separable closure of $K(\Gamma)/K(B)$ (cf. the proof of Proposition \ref{p-ess-klt-bdd-torsion}).
\subsection{Sketch of the proof of Theorem \ref{i-ex-rat-points-dP}}\label{ss-intro2}
Let us overview some of the ideas used in the proof of Theorem \ref{i-ex-rat-points-dP}.
The first step is the same as Subsection \ref{ss-intro1},
i.e. considering the minimal resolution and running a minimal model program,
we reduce the problem to the case when $X$ is a regular surface of del Pezzo type
which has a $K_X$-Mori fibre space structure $X \to B$.
\subsubsection{The case when $\dim B=0$}
Assume that $\dim B=0$.
In this case, $X$ is a regular del Pezzo surface with $\rho(X)=1$.
Since the $p$-degree of a $C_1$-field is at most one (Lemma \ref{l-Cr-pdeg}),
it follows from \cite[Theorem 14.1]{FS18} that $X$ is geometrically normal.
Then Theorem \ref{i-classify-bc} implies that
the base change $X \times_k \overline k$ is a canonical del Pezzo surface,
i.e. $X \times_k \overline k$ has at worst canonical singularities
and $-K_{X \times_k \overline k}$ is ample.
In particular, we have that $1 \leq K_X^2 \leq 9$.
Note that if $X$ is smooth, then it is known that $X$ has a $k$-rational point
(cf. \cite[Theorem IV.6.8]{Kol96}).
Following the same strategy as in \cite[Theorem IV.6.8]{Kol96},
we can show that $X(k) \neq \emptyset$ if $K_X^2 \leq 4$
(Lemma \ref{l-rat-pts-low-deg}).
For the remaining cases $5 \leq K_X^2 \leq 9$,
we use results established in \cite{Sch08},
which restrict the possibilities for the type of singularities on $X \times_k \overline k$.
For instance, if $p \geq 11$, then \cite[Theorem 6.1]{Sch08} shows that
the singularities on $X \times_k \overline k$ are of type $A_{p^e-1}$.
However, such singularities cannot appear, because the minimal resolution
$V$ of $X \times_k \overline k$ satisfies $\rho(V) \leq 9$.
Hence, $X$ is actually smooth if $p \geq 11$ (Proposition \ref{p-dP-large-p1}).
For the remaining cases $p \leq 7$, we study the possibilities one by one,
so that we are able to deduce what we desire.
For more details, see Subsection \ref{ss1-pi-pts}.
\subsubsection{The case when $\dim B=1$}
Assume that $\dim B=1$, i.e. $\pi:X \to B$
is a $K_X$-Mori fibre space to a curve $B$.
Then the outline is similar to the one in (\ref{sss2-intro1}).
Let us use the same notation as in (\ref{sss2-intro1}).
The typical case is that $-K_B$ is ample.
In this case, $B$ has a rational point.
Then also the fibre of $\pi$ over a rational point, which is a conic curve, has a rational point.
Although we need to overcome some technical difficulties,
we may apply this strategy up to suitable purely inseparable covers
for almost all the cases
(cf. the proof of Proposition \ref{p-rat-point-mfs}).
There is one case where we cannot apply this strategy:
$p=2$, $K_X \cdot \Gamma>0$, and $K(\Gamma)/K(B)$
is inseparable and not purely inseparable.
In this case, we can prove that $-K_B$ is actually ample (Proposition \ref{p-weird}).
\subsection{Large characteristic}
Using the techniques developed in this paper, we also prove the following theorem, which shows that some a priori possible pathologies of log del Pezzo surfaces over imperfect fields can appear only in small characteristic.
\begin{thm}[cf. Corollary \ref{c-geom-red-7} and Theorem \ref{t-h1-vanish}]\label{intro-vanishing}
Let $k$ be a field of characteristic $p \geq 7$.
Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \mathcal{O}_X)$.
Then $X$ is geometrically integral over $k$ and $H^i(X, \MO_X)=0$ for any $i>0$.
\end{thm}
As a consequence, we deduce the following result on del Pezzo fibrations in large characteristic:
\begin{cor} \label{c-genfb-largep}
Let $k$ be an algebraically closed field of characteristic $p \geq 7$.
Let $\pi \colon V \to B$ be a projective $k$-morphism of normal $k$-varieties
such that $\pi_* \MO_V= \MO_B$ and $\dim\,V-\dim\,B=2$.
Assume that there exists an effective $\Q$-divisor $\Delta$ on $V$
such that $(V,\Delta)$ is klt and $-(K_V+\Delta)$ is $\pi$-nef and $\pi$-big.
Then general fibres of $\pi$ are integral schemes and
there is a non-empty open subset $B'$ of $B$
such that the equation
$(R^i\pi_* \MO_V)|_{B'}=0$ holds for any $i>0$.
\end{cor}
The authors do not know whether surfaces of del Pezzo type are geometrically normal if the characteristic is sufficiently large.
On the other hand, even if $p$ is sufficiently large,
regular surfaces of del Pezzo type can be non-smooth.
More specifically, for an arbitrary imperfect field $k$ of characteristic $p>0$,
we construct a regular surface of del Pezzo type which is not smooth
(Proposition \ref{p-count}).
\subsection{Related results}
In this subsection, we summarise known results on log del Pezzo surfaces mainly over imperfect fields.
\subsubsection{Vanishing theorems}
We first summarise results over algebraically closed fields of characteristic $p>0$.
It is well known that smooth rational surfaces satisfy the Kodaira vanishing theorem (cf. \cite[Proposition 3.2]{Muk13}).
However, the Kawamata--Viehweg vanishing theorem fails even
for smooth rational surfaces (cf. \cite[Theorem 3.1]{CT18}).
Moreover, the surface used in \cite[Theorem 3.1]{CT18}
is a weak del Pezzo surface
if the base field is of characteristic two (\cite[Lemma 2.4]{CT18}).
Also in characteristic three,
there exists a surface of del Pezzo type which violates the Kawamata--Viehweg
vanishing (\cite[Theorem 1.1]{Ber}).
On the other hand, if the characteristic is sufficiently large,
it is known that surfaces of del Pezzo type satisfy the Kawamata--Viehweg vanishing by \cite[Theorem 1.2]{CTW17}.
We now overview known results over imperfect fields.
If the characteristic is two or three,
there exists a surface $X$ of del Pezzo type such that $H^1(X, \MO_X) \neq 0$
(cf. Subsection \ref{ss1-patho}).
On the other hand, regular del Pezzo surfaces of characteristic $p \geq 5$
satisfy the Kawamata--Viehweg vanishing theorem as shown in \cite[Theorem 1.1]{Das}.
\subsubsection{Geometric properties}
In characteristic two and three,
there exist regular del Pezzo surfaces
which are not geometrically reduced (cf. Subsection \ref{ss1-patho}).
On the other hand,
Patakfalvi and Waldron prove
that regular del Pezzo surfaces are geometrically normal if the base field is of characteristic $p\geq 5$ (cf. \cite[Theorem 1.5]{PW}).
Furthermore,
Fanelli and Schr\"{o}er show that
a regular del Pezzo surface $X$ is geometrically normal in every characteristic $p$
if $[k:k^p] \leq p$ and $\rho(X)=1$ (cf. \cite[Theorem 14.1]{FS18}).
\medskip
\indent \textbf{Acknowledgements:}
We would like to thank P. Cascini, S. Ejiri, A. Fanelli, S. Schr\"{o}er, and J. Waldron for many useful discussions.
We also would like to thank the referee for the constructive suggestions and for reading the manuscript carefully.
The first author was supported by the Engineering and Physical Sciences Research Council [EP/L015234/1].
The second author was funded by
the Grant-in-Aid for Scientific Research (KAKENHI No. 18K13386).
\section{Preliminaries}
\subsection{Notation}\label{ss-notation}
In this subsection, we summarise notation we will use in this paper.
\begin{enumerate}
\item We will freely use the notation and terminology in \cite{Har77}
and \cite{Kol13}.
\item
We say that a noetherian scheme $X$ is {\em excellent} (resp. {\em regular})
if
the local ring $\MO_{X, x}$ at any point $x \in X$ is excellent (resp. regular).
For the definition of excellent local rings,
we refer to \cite[\S 32]{Mat89}.
\item
For a scheme $X$, its {\em reduced structure} $X_{\red}$
is the reduced closed subscheme of $X$ such that the induced morphism
$X_{\red} \to X$ is surjective.
\item For an integral scheme $X$,
we define the {\em function field} $K(X)$ of $X$
to be $\MO_{X, \xi}$ for the generic point $\xi$ of $X$.
\item
For a field $k$,
we say that $X$ is a {\em variety over} $k$ or a $k$-{\em variety} if
$X$ is an integral scheme that is separated and of finite type over $k$.
We say that $X$ is a {\em curve} over $k$ or a $k$-{\em curve}
(resp. a {\em surface} over $k$ or a $k$-{\em surface},
resp. a {\em threefold} over $k$)
if $X$ is a $k$-variety of dimension one (resp. two, resp. three).
\item For a field $k$, we denote $\overline k$ (resp. $k^{\text{sep}}$) an algebraic closure (resp. a separable closure) of $k$.
If $k$ is of characteristic $p>0$,
then we set $k^{1/p^{\infty}}:=\bigcup_{e=0}^{\infty} k^{1/p^e}
=\bigcup_{e=0}^{\infty} \{x \in \overline k\,|\, x^{p^e} \in k\}$.
\item For an $\mathbb{F}_p$-scheme $X$ we denote by $F_X \colon X \to X$ the {\em absolute Frobenius morphism}. For a positive integer $e$ we denote by $F^e_X \colon X \to X$
the $e$-th iterated absolute Frobenius morphism.
\item
If $k$ is a field of characteristic $p>0$ such that $[k:k^p]<\infty$, we define its $p$-{\em degree} $\pdeg(k)$ as the non-negative integer $n$ such that $[k:k^p]=p^n$.
The $p$-degree $\pdeg(k)$ is also called the degree of imperfection in some literature.
\item If $k \subset k'$ is a field extension and $X$ is a $k$-scheme, we denote
$X \times_{\Spec\,k} \Spec\,k'$ by $X \times_k k'$ or $X_{k'}$.
\item
Let $k$ be a field, let $X$ be a scheme over $k$ and
let $k \subset k'$ be a field extension.
We denote by $X(k')$ the set of the $k$-morphisms $\Hom_k(\Spec\,k', X)$.
Note that if $X$ is a scheme of finite type over $k$ and $k \subset k'$ is
a purely inseparable extension, then
the induced map $\theta:X(k') \to X$ is injective and its image $\theta(X(k'))$ consists of closed points of $X$.
\item Let $L$ be a Cartier divisor on a variety $X$ over $k$.
We define the {\em base locus} $\Bs(L)$ of $L$
by
\[
\Bs(L):=\bigcap_{s \in H^0(X,L)} \left\{ x \in X \mid s(x)=0 \right\}.
\]
In particular, $\Bs(L)$ is a closed subset of $X$.
\item
Let $k$ be an algebraically closed field.
For a normal surface $X$ over $k$ and a canonical singularity $x \in X$ (i.e. a rational double point), we refer to the table at \cite[pages 15-17]{Art77} for the list of equations of type $A_n$, $D_n^m$ and $E_n^m$.
For example,
we say that $x$ is a canonical singularity of type $A_n$
if the henselisation of $\MO_{X, x}$ is isomorphic to $k\{x, y, z\}/(z^{n+1}+xy)$,
where $k\{x, y, z\}$ denotes the henselisation of the local ring of $k[x, y, z]$ at
the maximal ideal $(x, y, z)$.
\end{enumerate}
\subsection{Geometrically klt singularities}
The purpose of this subsection is to introduce the notion of geometrically klt singularities and its variants.
\begin{dfn}
Let $(X, \Delta)$ be a log pair over a field $k$ such that $k$ is algebraically closed in $K(X)$.
We say that $(X, \Delta)$ is \emph{geometrically klt} (resp. terminal, canonical, lc) if
$(X \times_k {\overline{k}}, \Delta\times_k{\overline{k}})$ is klt (resp. terminal, canonical, lc).
\end{dfn}
\begin{lem}\label{l-gred-open}
Let $k$ be a field.
Let $X$ and $Y$ be varieties over $k$ which are birational to each other.
Then $X$ is geometrically reduced over $k$ if and only if $Y$ is geometrically reduced over $k$.
\end{lem}
\begin{proof}
Recall that for a $k$-scheme,
being geometrically reduced is equivalent to being $S_1$ and geometrically $R_0$.
Since both $X$ and $Y$ are $S_1$,
the assertion follows from the fact that being geometrically $R_0$ is a condition on the generic point.
\end{proof}
We prove a descent result for such singularities.
\begin{prop}\label{p-klt-descent}
Let $(X, \Delta)$ be a geometrically klt (resp. terminal, canonical, lc) pair
such that $k$ is algebraically closed in $K(X)$.
Then $(X, \Delta)$ is klt (resp. terminal, canonical, lc).
\end{prop}
\begin{proof}
We only treat the klt case, as the others are analogous.
Let $\pi \colon Y \rightarrow X$ be a birational $k$-morphism, where $Y$ is a normal variety and we write $K_Y+ \Delta_Y =\pi^*(K_X+\Delta)$.
It suffices to prove that $\lfloor \Delta_Y \rfloor \leq 0$.
Thanks to Lemma \ref{l-gred-open}, $Y$ is geometrically integral.
Let $\nu \colon W \to Y \times_k \overline {k}$ be the normalisation morphism and let us consider the following commutative diagram:
\[
\begin{CD}
W \\
@V \nu VV \\
Y \times_k \overline{k} @> g >> Y \\
@V \pi_{\overline{k}} VV @V \pi VV\\
X \times_k \overline{k} @> f >> X.
\end{CD}
\]
Denote by $\psi:= \pi_{\overline{k}} \circ \nu$ and $h:= g \circ \nu$ the composite morphisms.
We have
\[K_W+\Delta_W := \psi^*(K_{X_{\overline{k}}}+\Delta_{\overline{k}})=h^* \pi^*(K_X+\Delta)=h^* (K_Y + \Delta_Y). \]
By \cite[Theorem 4.2]{Tan18b}, there exists an effective $\mathbb{Z}$-divisor $D$ on $W$ such that
\[h^*(K_Y+\Delta_Y)= K_W+D+h^*\Delta_Y, \]
and thus $\Delta_W=D+h^*\Delta_Y \geq h^*\Delta_Y$.
Since $(X_{\overline{k}}, \Delta_{\overline{k}})$ is klt,
any coefficient of $\Delta_W$ is $<1$.
Then any coefficient of $\Delta_Y$ is $<1$, thus $(X, \Delta)$ is klt.
\end{proof}
\begin{rem}
If $k$ is a perfect field, being klt is equivalent to being geometrically klt by
\cite[Proposition 2.15]{Kol13}.
However, over imperfect fields, being geometrically klt is a strictly stronger condition.
As an example, let $k$ be an imperfect field of characteristic $p>0$ and consider the log pair $(\mathbb{A}^1_k, \frac{2}{3}P)$, where $P$ is a closed point whose residue field $\kappa(P)$ is a purely inseparable extension of $k$ of degree $p$.
This pair is klt over $k$, but it is not geometrically lc.
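Indeed, if for instance $\kappa(P)=k(a^{1/p})$ for some $a \in k \setminus k^p$, then $P$ is cut out by $x^p-a$, which factors as $(x-a^{1/p})^p$ over $\overline{k}$; hence $P$ pulls back to $p\,Q$ for a closed point $Q$, and
\[
\left(\mathbb{A}^1_k, \tfrac{2}{3}P\right) \times_k \overline{k}
=\left(\mathbb{A}^1_{\overline{k}}, \tfrac{2p}{3}Q\right),
\qquad
\tfrac{2p}{3} \geq \tfrac{4}{3}>1,
\]
so the boundary coefficient exceeds one after base change.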
\end{rem}
\subsection{Surfaces of del Pezzo type}
In this subsection, we summarise some basic properties of surfaces of del Pezzo type over arbitrary fields.
For later use, we introduce some terminology.
Note that del Pezzo surfaces in our notation allow singularities.
\begin{dfn}\label{d-dP-wdP}
Let $k$ be a field.
A $k$-surface $X$ is {\em del Pezzo} if
$X$ is a projective normal surface such that $-K_X$ is an ample $\Q$-Cartier divisor.
A $k$-surface $X$ is {\em weak del Pezzo} if
$X$ is a projective normal surface such that $-K_X$ is a nef and big $\Q$-Cartier divisor.
\end{dfn}
\begin{dfn}
Let $k$ be a field.
A $k$-surface $X$ is \emph{of del Pezzo type}
if $X$ is a projective normal surface over $k$ and
there exists an effective $\mathbb{Q}$-divisor $\Delta \geq 0$ such that $(X, \Delta)$ is klt and $-(K_X+\Delta)$ is ample.
In this case, we say that $(X, \Delta)$ is a log del Pezzo pair.
\end{dfn}
We study how the property of being of del Pezzo type behaves under birational transformations.
\begin{lem}\label{l-dP-min-res}
Let $k$ be a field.
Let $X$ be a $k$-surface of del Pezzo type.
Let $f : Y \to X$ be the minimal resolution of $X$.
Then $Y$ is a $k$-surface of del Pezzo type.
\end{lem}
\begin{proof}
Let $\Delta$ be an effective $\Q$-divisor such that $(X, \Delta)$ is a log del Pezzo pair.
We define a $\Q$-divisor $\Delta_Y$ by $K_Y+ \Delta_Y =f^*(K_X+\Delta)$.
Since $f:Y \to X$ is the minimal resolution of $X$,
we have that $\Delta_Y$ is an effective $\Q$-divisor.
The pair $(Y, \Delta_Y)$ is klt and $-(K_Y+ \Delta_Y)$ is nef and big.
By perturbing the coefficients of $\Delta_Y$,
we can find an effective $\Q$-divisor $\Gamma$ such that $(Y, \Gamma)$ is klt and $-(K_Y+\Gamma)$ is ample. Hence $Y$ is of del Pezzo type.
\end{proof}
\begin{lem} \label{l-pert-ample}
Let $k$ be a field.
Let $(X, \Delta)$ be a two-dimensional projective klt pair over $k$.
Let $H$ be a nef and big $\Q$-Cartier $\Q$-divisor.
Then there exists an effective $\Q$-Cartier $\Q$-divisor $A$ such that $A \sim_{\mathbb{Q}} H$ and
$(X, \Delta + A)$ is klt.
\end{lem}
\begin{proof}
Thanks to the existence of log resolutions for excellent surfaces \cite{Lip78},
the same proof as that of \cite[Lemma 2.8]{GNT} works in our setting.
\end{proof}
\begin{lem}\label{l-dP-under-bir-mor}
Let $k$ be a field.
Let $X$ be a $k$-surface of del Pezzo type.
Let $f : X \rightarrow Y$ be a birational $k$-morphism to a projective normal $k$-surface $Y$.
Then $Y$ is a $k$-surface of del Pezzo type.
\end{lem}
\begin{proof}
Let $\Delta$ be an effective $\Q$-divisor such that $(X, \Delta)$ is a log del Pezzo pair.
Set $H:=-(K_X+\Delta)$, which is an ample $\Q$-Cartier $\mathbb{Q}$-divisor on $X$.
By Lemma \ref{l-pert-ample}, there exists
an effective $\Q$-Cartier $\Q$-divisor $A$
such that $A \sim_{\mathbb{Q}} H$ and $(X, \Delta +A)$ is klt.
Then the pair $(Y, f_*\Delta+f_*A)$ is klt and
$K_X+\Delta+A \sim_{\mathbb{Q}} f^*(K_Y+f_*\Delta+f_*A) \sim_{\mathbb{Q}} 0$.
It follows from \cite[Corollary 4.11]{Tan18a} that $Y$ is $\Q$-factorial.
By Nakai's criterion, the $\Q$-divisor $f_*A$ is ample.
Since $-(K_Y+f_*\Delta) \sim_{\mathbb{Q}} f_*A$, we conclude that $(Y, f_*\Delta)$ is a log del Pezzo pair.
\end{proof}
\subsection{Geometrically canonical del Pezzo surfaces}
In this subsection we collect results on the anti-canonical systems of geometrically canonical del Pezzo surfaces that we will need later.
\subsubsection{Canonical del Pezzo surfaces over algebraically closed fields}
We verify that the results in \cite[Chapter III, Section 3]{Kol96} hold for del Pezzo surfaces with canonical singularities over algebraically closed fields.
Recall that we say that $X$ is a canonical (weak) del Pezzo surface over a field $k$ if
$X$ is a surface over $k$, $X$ is (weak) del Pezzo in the sense of Definition \ref{d-dP-wdP},
and $(X, 0)$ is canonical in the sense of \cite[Definition 2.8]{Kol13}.
\begin{prop} \label{p-cohomology-can-dP}
Let $X$ be a canonical weak del Pezzo surface over an algebraically closed field $k$.
Then the following hold.
\begin{enumerate}
\item $H^2(X, \MO_X(-mK_X))=0$ for any non-negative integer $m$.
\item $H^i(X, \mathcal{O}_X) =0$ for any $i >0$.
\item $H^0(X, \mathcal{O}_X(-K_X)) \neq 0$.
\item $H^1(X ,\mathcal{O}_X(mK_X))=0$ for any integer $m$.
\item $h^0(X, \mathcal{O}_X(-mK_X)) = 1+\frac{m(m+1)}{2} K_X^2$
for any non-negative integer $m$.
\end{enumerate}
\end{prop}
\begin{proof}
The assertion (1) follows from Serre duality: $H^2(X, \MO_X(-mK_X))$ is dual to $H^0(X, \MO_X((m+1)K_X))$, which vanishes because $-(m+1)K_X$ is nef and big.
We now show (2).
It follows from \cite[Theorem 5.4 and Remark 5.5]{Tan14} that
$X$ has at worst rational singularities.
Then the assertion (2) follows from the fact that $X$ is a rational surface
\cite[Theorem 3.5]{Tan15}.
We now show (3).
By $H^2(X, \MO_X(-K_X))=0$ and the Riemann--Roch theorem,
we have $h^0(X, \MO_X(-K_X)) \geq 1 + K_X^2 >0$.
Thus (3) holds.
We now show (4).
By (3), there exists an effective Cartier divisor $D$ such that $D \sim -K_X$.
In particular, $D$ is effective, nef, and big.
It follows from \cite[Proposition 3.3]{CT} that
\[
H^1(X, \MO_X(-nD))= H^1(X, \MO_X(K_X+nD))=0
\]
for any $n \in \Z_{>0}$. Since $D \sim -K_X$, we have $-nD \sim nK_X$ and $K_X+nD \sim (1-n)K_X$, so these vanishings give (4) both for $m=n>0$ and for $m=1-n \leq 0$, i.e. for every integer $m$.
Thanks to (1) and (4), assertion (5) follows from the Riemann--Roch theorem.
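Explicitly, by (1) and (4) we have $h^0(X, \MO_X(-mK_X))=\chi(X, \MO_X(-mK_X))$ for $m \geq 0$. Computing $\chi$ on the minimal resolution (which is crepant, as the singularities are canonical, and preserves $\chi$, as they are rational), the Riemann--Roch theorem gives
\[
\chi(X, \MO_X(-mK_X))
=\chi(X, \MO_X)+\frac{1}{2}(-mK_X)\cdot (-mK_X-K_X)
=1+\frac{m(m+1)}{2}K_X^2.
\]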
\end{proof}
\begin{lem}\label{l-comp-antican-sys}
Let $Y$ be a canonical weak del Pezzo surface over an algebraically closed field $k$.
If a divisor $\sum_{i=1}^r a_i C_i \in |-K_Y|$ is not irreducible or not reduced, then every $C_i$ is a smooth rational curve.
\end{lem}
\begin{proof}
Taking the minimal resolution of $Y$,
we may assume that $Y$ is smooth.
Fix an index $1 \leq i_0 \leq r$.
By adjunction, we have
\begin{equation}\label{e-comp-antican-sys}
2p_a(C_{i_0})-2 = -C_{i_0} \cdot \left(\sum_{i \neq i_0} \frac{a_i}{a_{i_0}} C_i \right) - \frac{a_{i_0}-1}{a_{i_0}} C_{i_0} \cdot (-K_Y).
\end{equation}
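For the reader's convenience: the identity (\ref{e-comp-antican-sys}) follows from adjunction, $2p_a(C_{i_0})-2=(K_Y+C_{i_0}) \cdot C_{i_0}$, by substituting $C_{i_0}=\frac{1}{a_{i_0}}\left(-K_Y-\sum_{i \neq i_0} a_i C_i\right)$ as $\Q$-divisor classes:
\[
(K_Y+C_{i_0}) \cdot C_{i_0}
=\frac{a_{i_0}-1}{a_{i_0}}\,K_Y \cdot C_{i_0}
- C_{i_0} \cdot \left(\sum_{i \neq i_0} \frac{a_i}{a_{i_0}} C_i\right).
\]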
Note that both the terms on the right hand side are non-positive.
Since $Y$ is smooth and $\sum_i a_i C_i$ is nef and big,
it follows from \cite[Theorem 2.6]{Tan15} that $H^1(Y, -n\sum_i a_i C_i)=0$ for $n \gg 0$.
Hence, $\sum_i a_i C_i$ is connected.
Therefore, if $\sum_i a_i C_i$ is reducible,
the first term in the right hand side of (\ref{e-comp-antican-sys})
is strictly negative, hence $p_a(C_{i_0})<1$, i.e. $p_a(C_{i_0})=0$ and $C_{i_0}$ is a smooth rational curve.
If $a_{i_0} \geq 2$ and $C_{i_0} \cdot K_Y<0$, then
the second term in the right hand side of (\ref{e-comp-antican-sys})
is strictly negative, and we conclude as before.
If $C_{i_0} \cdot K_Y=0$, then
$C_{i_0}$ is a smooth rational curve with $C_{i_0}^2=-2$.
\end{proof}
\begin{prop} \label{p-gen-memb-antican}
Let $Y$ be a canonical weak del Pezzo surface over an algebraically closed field $k$.
Let $\Bs(-K_Y)$ be the base locus of $-K_Y$, which is a closed subset of $Y$.
Then the following hold.
\begin{enumerate}
\item $\Bs(-K_Y)$ is empty or $\dim (\Bs(-K_Y)) =0$.
\item A general member of the linear system $|-K_Y|$ is irreducible and reduced.
\end{enumerate}
\end{prop}
\begin{proof}
Taking the minimal resolution of $Y$, we may assume that $Y$ is smooth.
Using Proposition \ref{p-cohomology-can-dP}, the same proof as that of \cite[Theorem 8.3.2.i]{Dol12} works in our setting, so that (1) holds and general members of $|-K_Y|$ are irreducible.
It is enough to show that a general member of $|-K_Y|$ is reduced.
Suppose it is not.
Then there exists an integer $a>1$ such that a general member is of the form $aC \in |-K_Y|$ for some curve $C$.
In particular, $C$ is a smooth rational curve by Lemma \ref{l-comp-antican-sys}.
Recall that we have the short exact sequence
\[ 0 \rightarrow H^0(Y, \mathcal{O}_Y) \rightarrow H^0(Y, \mathcal{O}_Y(C)) \rightarrow H^0(C, \mathcal{O}_C(C)) \rightarrow 0. \]
Since $H^1(Y, \MO_Y)=0$ (Proposition \ref{p-cohomology-can-dP}),
we have that $h^0(Y, \mathcal{O}_Y(C)) = 1+h^0(C, \mathcal{O}_C(C))$. As $C$ is a smooth rational curve, we conclude by the Riemann--Roch theorem that $h^0(Y, \mathcal{O}_Y(C)) = 2 + C^2$.
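Explicitly, since $aC \sim -K_Y$ is nef we have $C^2 \geq 0$, and the Riemann--Roch theorem on $C \simeq \mathbb{P}^1$ gives
\[
h^0(C, \MO_C(C)) = \deg_C \MO_C(C)+1 = C^2+1,
\]
which combined with the displayed sequence yields $h^0(Y, \MO_Y(C)) = 1+(C^2+1) = 2+C^2$.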
We now consider the induced map
\begin{eqnarray*}
\theta:H^0(Y, \MO_Y(C) ) &\to& H^0(Y, \MO_Y(aC) ) \simeq H^0(Y, \MO_Y(-K_Y) )\\
\varphi & \mapsto & \varphi^a.
\end{eqnarray*}
Since a general member of $|-K_Y|$ is of the form $aD$ for some $D \geq 0$,
$\theta$ is a dominant morphism if we consider $\theta$ as a morphism of affine spaces.
Therefore, it holds that
\[
h^0(Y, \mathcal{O}_Y(-K_Y))
\leq h^0(Y, \mathcal{O}_Y(C)) =2+C^2 =-K_Y\cdot C \leq K_Y^2, \]
which contradicts the equality $h^0(Y, \MO_Y(-K_Y))=1+K_Y^2$ of Proposition \ref{p-cohomology-can-dP}(5).
\end{proof}
\subsubsection{Anti-canonical systems on geometrically canonical del Pezzo surfaces}
In this subsubsection, we study anti-canonical systems on geometrically canonical del Pezzo surfaces over an arbitrary field $k$, and we describe their anti-canonical models when the anti-canonical degree is small.
We need the following results on geometrically integral curves of genus one.
\begin{lem} \label{l-gen-genus-one}
Let $k$ be a field.
Let $C$ be a geometrically integral Gorenstein projective curve over $k$ of arithmetic genus one with $k=H^0(C, \mathcal{O}_C)$.
Let $L$ be a Cartier divisor on $C$ and let $R(C,L):= \bigoplus_{m \geq 0} H^0(C, mL)$ be the graded $k$-algebra. Then the following hold.
\begin{enumerate}
\item [(i)] If $\deg_k (L) =1$, then $\Bs(L)=\{P\}$ for some $k$-rational point $P$ and $R(C, L)$ is generated
by $\bigoplus_{1 \leq j \leq 3} H^0( C, jL )$ as a $k$-algebra.
\item [(ii)] If $\deg_k(L) \geq 2$, then $L$ is globally generated and $R(C, L)$ is generated
by $H^0( C, L ) \oplus H^0(C, 2L)$ as a $k$-algebra.
\item [(iii)] If $\deg_k L \geq 3$, then $L$ is very ample and $R(C, L)$ is generated by $H^0( C, L )$ as a $k$-algebra.
\end{enumerate}
\end{lem}
\begin{proof}
See \cite[Lemma 11.10 and Proposition 11.11]{Tan19}.
\end{proof}
\begin{prop}\label{p-antican-ring-dP}
Let $k$ be a field.
Let $X$ be a geometrically canonical weak del Pezzo surface over $k$
such that $k=H^0(X, \mathcal{O}_X)$.
Let $R(X, -K_X) = \bigoplus_{m \geq 0} H^0(X, \mathcal{O}_X(-mK_X))$ be the graded $k$-algebra. Then the following hold.
\begin{enumerate}
\item If $m$ is a positive integer such that $mK_X^2 \geq 2$,
then $|-mK_X|$ is base point free.
\item If $K_X^2=1$, then $\Bs(-K_X)=\{P\}$ for some $k$-rational point $P$.
\item If $K_X^2=1$, then $R(X, -K_X)$ is generated by $\bigoplus_{1 \leq j \leq 3}
H^0(X, -jK_X)$ as a $k$-algebra.
\item If $K_X^2 =2$, then $R(X, -K_X)$ is generated by $H^0(X, -K_X) \oplus H^0(X, -2K_X)$
as a $k$-algebra.
\item If $K_X^2 \geq 3$, then $R(X, -K_X)$ is generated by $H^0(X, -K_X)$
as a $k$-algebra.
\end{enumerate}
In particular, if $-K_X$ is ample, then $|-6K_X|$ is very ample.
\end{prop}
\begin{proof}
Consider the following condition.
\begin{enumerate}
\item[(2)'] If $K_X^2=1$, then $\Bs(-K_X)$ is not empty and of dimension zero.
\end{enumerate}
Since $K_X^2=1$, (2) and (2)' are equivalent: two general members of $|-K_X|$ intersect in a zero-dimensional scheme of length $K_X^2=1$, so a non-empty zero-dimensional base locus consists of a single $k$-rational point.
Note that to show (1), (2)' and (3)--(5),
we may assume that $k$ is algebraically closed.
From now on, let us prove (1)--(5) under the condition that $k$ is algebraically closed.
It follows from Proposition \ref{p-gen-memb-antican}
that a general member $C$ of $|-K_X|$ is a prime divisor.
Since $C$ is a Cartier divisor and $X$ is Gorenstein,
$C$ is a Gorenstein curve, and by adjunction its arithmetic genus is $p_a(C)=1$.
By Proposition \ref{p-cohomology-can-dP}, we have the following exact sequence for every integer $m$:
\[ 0 \to H^0(X, -(m-1)K_X) \to H^0(X,-mK_X) \to H^0(C, -mK_X|_C) \to 0. \]
By the above exact sequence,
the assertions (1) and (2) follow
from (ii) and (i) of Lemma \ref{l-gen-genus-one}, respectively.
We prove the assertions (3), (4) and (5).
By the above short exact sequence, it is sufficient to prove the same statement for the $k$-algebra $R(C, \mathcal{O}_C(-K_X))$, which is the content of Lemma \ref{l-gen-genus-one}.
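As for the last assertion: if $-K_X$ is ample, then $X = \Proj\,R(X, -K_X)$, and by (3)--(5) the ring $R(X, -K_X)$ is generated in degrees at most three. One checks that every monomial of degree divisible by six in generators of degrees $1$, $2$ and $3$ splits as a product of elements of degree exactly six, so the Veronese subring $\bigoplus_{m \geq 0} H^0(X, -6mK_X)$ is generated in degree one; hence $|-6K_X|$ is very ample.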
\end{proof}
\begin{thm}\label{t-dP-small-degree}
Let $k$ be a field.
Let $X$ be a geometrically canonical del Pezzo surface over $k$ such that $H^0(X, \MO_X)=k$.
Then the following hold.
\begin{enumerate}
\item If $K_X^2=1$, then $X$ is isomorphic to a weighted hypersurface in $\mathbb{P}_k(1,1,2,3)$
of degree six.
\item If $K_X^2=2$, then
$X$ is isomorphic to a weighted hypersurface in $\mathbb{P}_k(1,1,1,2)$
of degree four.
\item If $K_X^2=3$, then $X$ is isomorphic to a hypersurface in $\mathbb{P}_k^3$
of degree three.
\item If $K_X^2=4$, then $X$ is isomorphic to a complete intersection of two quadric hypersurfaces in $\mathbb{P}_k^4$.
\end{enumerate}
\end{thm}
\begin{proof}
Using Proposition \ref{p-antican-ring-dP}, the proof is the same as in \cite[Theorem III.3.5]{Kol96}.
\end{proof}
\subsection{Mori fibre spaces to curves}
In this subsection, we summarise properties of regular curves with anti-ample canonical divisor and of Mori fibre spaces of dimension two over arbitrary fields.
\begin{lem}\label{l-conic}
Let $k$ be a field.
Let $C$ be a projective Gorenstein integral curve over $k$.
Then the following are equivalent.
\begin{enumerate}
\item $\omega_C^{-1}$ is ample.
\item $H^1(C, \MO_C)=0$.
\item $C$ is a conic curve in $\mathbb P^2_K$, where $K:=H^0(C, \MO_C)$.
\item $\deg_k \omega_C = -2 \dim_k (H^0(C, \MO_C))$.
\end{enumerate}
\end{lem}
\begin{proof}
It follows from \cite[Corollary 2.8]{Tan18a}
that (1), (2), and (4) are equivalent.
Clearly, (3) implies (1).
By \cite[Lemma 10.6]{Kol13}, (1) implies (3).
\end{proof}
\begin{lem}\label{p-Fano-curve}
Let $k$ be a field and let $C$ be a projective Gorenstein integral curve over $k$
such that $k=H^0(C, \MO_C)$ and $\omega_C^{-1}$ is ample.
Then the following hold.
\begin{enumerate}
\item If $C$ is geometrically integral over $k$, then $C$ is smooth over $k$.
\item If the characteristic of $k$ is not two, then $C$ is geometrically reduced over $k$.
\item If the characteristic of $k$ is not two and $C$ is regular, then $C$ is smooth over $k$.
\end{enumerate}
\end{lem}
\begin{proof}
By Lemma \ref{l-conic}, $C$ is a conic curve in $\mathbb{P}^2_k$.
Thus, the assertion (1) follows from the fact that an integral conic curve over an algebraically closed field is smooth.
Let us show (2) and (3).
Since the characteristic of $k$ is not two and $C$ is a conic curve in $\mathbb P^2_k$,
we can write
\[
C=\Proj\,k[x, y, z]/(ax^2+by^2+cz^2)
\]
for some $a, b, c \in k$.
Since $C$ is an integral scheme, at least two of $a, b, c$ are nonzero.
Hence, $C$ is reduced.
Thus (2) holds.
If $C$ is regular, then each of $a, b, c$ is nonzero, hence $C$ is smooth over $k$.
\end{proof}
\begin{prop}\label{p-MFS-basic}
Let $k$ be a field.
Let $\pi:X \to B$ be a $K_X$-Mori fibre space
from a projective regular $k$-surface $X$ to a projective regular $k$-curve with $k=H^0(B, \MO_B)$.
Let $b$ be a (not necessarily closed) point of $B$.
Then the following hold.
\begin{enumerate}
\item The fibre $X_b$ is irreducible.
\item The equation $\kappa(b)=H^0(X_b, \MO_{X_b})$ holds.
\item The fibre $X_b$ is reduced.
\item The fibre $X_b$ is a conic in $\mathbb P^2_{\kappa(b)}$.
\item If ${\rm char}\,k \neq 2$, then any fibre of $\pi$ is geometrically reduced.
\item If ${\rm char}\,k \neq 2$ and $k$ is separably closed,
then $\pi$ is a smooth morphism.
\end{enumerate}
\end{prop}
\begin{proof}
If $X_b$ were not irreducible, this would contradict the hypothesis $\rho(X/B)=1$.
Thus (1) holds.
Let us show (2).
Since $\pi$ is flat, the integer
\[
\chi:= \dim_{\kappa(b)}H^0(X_b, \MO_{X_b}) - \dim_{\kappa(b)}H^1(X_b, \MO_{X_b}) \in \Z
\]
is independent of $b\in B$.
Since $H^1(X_b, \MO_{X_b})=0$ for any $b \in B$, it suffices to show that
$\dim_{\kappa(b)}H^0(X_b, \MO_{X_b})=1$ for some $b \in B$.
This holds for the case when $b$ is the generic point of $B$.
Hence, (2) holds.
Let us prove (3).
It is clear that the generic fibre is reduced.
We may assume that $b \in B$ is a closed point.
Assume that $X_b$ is not reduced.
By (1), we have $X_b=mC$ for some prime divisor $C$ and $m \in \Z_{\geq 2}$.
Since $-K_X \cdot_{\kappa(b)} X_b=2$, we have that $m=2$.
Then we obtain an exact sequence:
\[
0 \to \MO_X(-C)|_C \to \MO_{X_b} \to \MO_C \to 0.
\]
Since $C^2=0$ and $\omega_{C}^{-1}$ is ample, we have that $\MO_X({-C})|_C \simeq \MO_C$.
Since $H^1(C, \MO_C)=0$, we get an exact sequence:
\[
0 \to H^0(C, \MO_C) \to H^0(X_b, \MO_{X_b}) \to H^0(C, \MO_C) \to 0.
\]
Then we obtain $\dim_{\kappa(b)} H^0(X_b, \MO_{X_b}) \geq 2$,
which contradicts (2).
Hence (3) holds.
We now show (4). By \cite[Corollary 2.9]{Tan18a},
$\deg_{\kappa(b)} \omega_{X_b}=(K_X+X_b) \cdot_{\kappa(b)} X_b<0$. Hence (4) follows from (2) and Lemma \ref{l-conic}.
The assertions (5) and (6) follow from Lemma \ref{p-Fano-curve}.
\end{proof}
\subsection{Twisted forms of canonical singularities}
The aim of this subsection is to prove Proposition \ref{p-insep-bdd-rat-pts}.
The main idea is to bound the purely inseparable degree of regular non-smooth points on geometrically normal surfaces according to the type of singularities.
For this, the notion of Jacobian number plays a crucial role.
\begin{dfn} \label{d-number}
Let $k$ be a field of characteristic $p>0$.
Let $R$ be an equi-dimensional $k$-algebra essentially of finite type over $k$.
Let $J_{R/k}$ be its Jacobian ideal of $R$ over $k$ (cf. \cite[Definition 4.4.1 and Proposition 4.4.4]{HS06}).
We define the {\em Jacobian number} of $R/k$ as $\nu(R):=\nu(R/k) := \dim_k (R/J_{R/k})$.
Note that $\nu(R/k) <\infty$ if $R/J_{R/k}$ is an artinian ring
and its residue fields are finite extensions of $k$.
\end{dfn}
\begin{rem}\label{r-base-ch}
Let $k \subset k'$ be a field extension of characteristic $p>0$
and let $R$ be an equi-dimensional $k$-algebra essentially of finite type over $k$.
Then the following hold.
\begin{enumerate}
\item
By \cite[Definition 4.4.1]{HS06}, we get
\[
J_{R/k} \cdot (R \otimes_k k') = J_{R \otimes_k k'/k'}.
\]
In particular, if $R/J_{R/k}$ is an artinian ring and its residue fields are finite extensions of $k$,
then we have $\nu(R/k)=\nu(R \otimes_k k'/k')$.
\item
Assume that $k$ is a perfect field.
By \cite[Definition 4.4.9]{HS06},
$\Spec\,(R/J_{R/k})$ set-theoretically coincides with
the non-regular locus of $\Spec\,R$.
\item
Assume that $R$ is of finite type over $k$.
Then (1) and (2) imply that $\Spec\,(R/J_{R/k})$ set-theoretically coincides with
the non-smooth locus of $\Spec\,R \to \Spec\,k$.
\end{enumerate}
\end{rem}
\begin{rem}\label{r-nu-2dim-gn}
In our application,
$R$ will be assumed to be a local ring $\MO_{X, x}$ at a closed point $x$
of a geometrically normal surface $X$ over $k$.
In this case, (3) of Remark \ref{r-base-ch} implies that
$R/J_{R/k}$ is an artinian local ring
whose residue field is a finite extension of $k$.
Hence, $\nu(R/k) = \dim_k (R/J_{R/k})$ is well-defined as in Definition \ref{d-number}.
\end{rem}
To treat local situations,
let us recall the notion of essentially \'etale ring homomorphisms.
For its fundamental properties, we refer to \cite[Subsection 2.8]{Fu15}.
\begin{dfn}\label{d-ess-et}
Let $f\colon R \to S$ be a local homomorphism of local rings.
We say that $f$ is {\em essentially \'etale}
if there exists an \'etale $R$-algebra $\overline S$ and a prime ideal $\mathfrak p$
of $\overline S$ such that
$\mathfrak p$ lies over the maximal ideal of $R$ and $S$ is $R$-isomorphic to $\overline S_{\mathfrak p}$.
\end{dfn}
\begin{lem}\label{l-nu-et}
Let $k$ be a field.
Let $f \colon R \to S$ be an essentially \'etale local $k$-algebra homomorphism
of local rings which are essentially of finite type over $k$.
Let $\m_R$ and $\m_S$ be the maximal ideals of $R$ and $S$, respectively.
Set $\kappa(R):=R/\m_R$ and $\kappa(S):=S/\m_S$.
Then the following hold.
\begin{enumerate}
\item
If $M$ is an $R$-module of finite length whose support is contained in the maximal ideal $\m_R$,
then the equation
\[
\dim_k (M \otimes_R S) = [\kappa(S):\kappa(R)] \dim_k M
\]
holds.
\item
Suppose that $R$ is an integral domain,
$R/J_{R/k}$ is an artinian ring, and $\kappa(R)$ is a finite extension of $k$.
Then the equation
\[
\nu(S/k)=[\kappa(S):\kappa(R)]\nu(R/k)
\]
holds.
\end{enumerate}
\end{lem}
\begin{proof}
Let us show (1).
Since $M$ is a finitely generated $R$-module,
there exists a sequence of $R$-submodules $M=:M_0 \supset M_1 \supset \cdots \supset M_n =0$
such that $M_i/M_{i+1} \simeq R/ \p$ for some prime ideal $\p$ (cf. \cite[Theorem 6.4]{Mat89}).
Since the support of $M$ is $\m_R$, we have $\p=\m_R$.
As $R \to S$ is flat, the problem is reduced to the case when $M=R/\m_R=\kappa(R)$.
In this case, we have
\[
\kappa(R) \otimes_R S = (R/\m_R) \otimes_R S \simeq S/\m_R S =S/\m_S = \kappa(S),
\]
where the equality $S/\m_R S = S/\m_S$ follows from the assumption that $f$ is a localisation of
an unramified homomorphism.
Hence, (1) holds.
Let us show (2).
Set $n:=\dim R$.
We use the description of the Jacobian of $R$ via Fitting ideals
(cf. \cite[Discussion 4.4.7]{HS06}):
$J_{R/k}= \text{Fit}_n (\Omega_{R/k}^1)$
and
$J_{S/k}= \text{Fit}_n (\Omega_{S/k}^1)$.
We have
\[
J_{S/k}= \text{Fit}_n (\Omega_{S/k}^1)= \text{Fit}_n (\Omega^1_{R/k} \otimes_R S)=\text{Fit}_n (\Omega^1_{R/k})S
=J_{R/k}S,
\]
where the third equality follows from (3) of \cite[\href{https://stacks.math.columbia.edu/tag/07ZA}{Tag 07ZA}]{StackProject}.
As $f: R \to S$ is flat, we obtain $S/J_{S/k} \simeq (R/J_{R/k}) \otimes_R S$.
By (1) and Definition \ref{d-number}, the assertion (2) holds.
\end{proof}
\begin{ex}\label{ex-A_p^n}
Let $k$ be a field of characteristic $p>0$. Let $X=\Spec\,R$ be a surface over $k$ such that
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $X \times_k {\overline{k}}=\Spec\,(R\otimes_k \overline k)$ is a normal surface,
\item $X \times_k {\overline{k}}$ has a unique singular point $x$,
and $x$ is a canonical singularity of type $A_{p^n-1}$.
\end{enumerate}
We prove that $\nu(R/k) = p^n$.
By Remark \ref{r-base-ch}, we have $\nu(R/k)=\nu(R \otimes_k \overline{k}/\overline{k}).$
In order to compute $\nu(R \otimes_k \overline{k}/\overline{k})$, it is sufficient to localise at the singular point by \cite[Corollary 4.4.5]{HS06}.
Thus we can suppose that $k$ is algebraically closed and $R$ is a local $k$-algebra.
By \cite[pages 16-17]{Art77} (cf. (12) of Subsection \ref{ss-notation}),
the henselisation $R^h$ of $R$ is isomorphic to
\[ k\{x,y,z\}/(z^{p^n}+xy). \]
In particular there exist essentially \'etale local $k$-algebra homomorphisms
$R \to S$ and $k[x,y,z]/(z^{p^n}+xy) \to S.$
A direct computation shows $\nu(k[x,y,z]/(z^{p^n}+xy))=p^n$.
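For the reader's convenience, here is the computation: set $A:=k[x,y,z]/(f)$ with $f:=z^{p^n}+xy$. The Jacobian ideal $J_{A/k}$ is generated by the partial derivatives of $f$,
\[
\frac{\partial f}{\partial x}=y,
\qquad
\frac{\partial f}{\partial y}=x,
\qquad
\frac{\partial f}{\partial z}=p^n z^{p^n-1}=0,
\]
so $J_{A/k}=(x,y)A$ and $A/J_{A/k} \simeq k[z]/(z^{p^n})$, which has dimension $p^n$ over $k$.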
Thus by Lemma \ref{l-nu-et}, we have
\[ \nu(R) = \nu (S) = \nu(k[x,y,z]/(z^{p^n}+xy))=p^n. \]
\end{ex}
The following is a generalisation of \cite[Lemma 14.2]{FS18}.
\begin{lem}\label{l-bound-points}
Let $k$ be a field of characteristic $p>0$.
Let $X=\Spec\, R$, where $R$ is an equi-dimensional local $k$-algebra essentially of finite type over $k$.
Let $x$ be the closed point of $X$.
Suppose that
$R/J_{R/k}$ is a local artinian ring and
its residue field $\kappa(x)$ is a finite extension of $k$.
Then $[ \kappa(x) : k]$ is a divisor of $\nu(R/k)$.
\end{lem}
\begin{proof}
Let $R/J_{R/k}=:M_0 \supset M_1 \supset \cdots \supset M_n =0$ be a composition series of
$R/J_{R/k}$-submodules (cf. \cite[Theorem 6.4]{Mat89}).
Since $R/J_{R/k}$ is an artinian local ring, it holds that $M_i/M_{i+1} \simeq \kappa(x)$ for any $i$.
We have
{\small
\[
\nu(R/k)=
\dim_k (R/J_{R/k}) = \sum_{i=0}^{n-1} \dim_k (M_i/M_{i+1})
=n \dim_k \kappa(x) = n[\kappa(x):k].
\]
}
We thus conclude that $[\kappa(x):k]$ is a divisor of $\nu(R/k)$.
\end{proof}
\begin{lem} \label{l-deg-ext-sing}
Let $X$ be a regular variety over a separably closed field $k$.
Suppose that $X_{\overline{k}} = X \times_k \overline k$ is a normal variety with a unique singular point $y$.
Let $x$ be the image of $y$ under the induced morphism $X_{\overline k} \to X$.
Then the following hold.
\begin{enumerate}
\item $[\kappa(x) : k]$ is a divisor of $\nu(\MO_{X,x})$.
\item $X \times_k \kappa(x)$ is not regular.
\end{enumerate}
\end{lem}
\begin{proof}
Since $k$ is separably closed,
the induced morphism $X_{\overline{k}} \to X$ is a universal homeomorphism.
Note that the local ring $\mathcal{O}_{X,x}$ is not geometrically regular over $k$.
Applying Lemma \ref{l-bound-points} to the local ring $\mathcal{O}_{X,x}$,
we deduce that
$[\kappa(x):k]$ is a divisor of $\nu(\MO_{X,x})$.
Thus (1) holds.
Consider the base change $\pi \colon X \times_k \kappa(x) \rightarrow X$.
Let $x'$ be the point on $X \times_k \kappa(x)$ lying over $x$.
Note that $x'$ is a $\kappa(x)$-rational point of $X \times_k \kappa(x)$ whose base change by $(-) \times_{\kappa(x)} \overline k$ is not regular.
By \cite[Corollary 2.6]{FS18},
we conclude that $X \times_k \kappa(x)$ is not regular at $x'$.
\end{proof}
We now explain how the previous results can be used to construct closed points with purely inseparable residue field on a regular surface. This will be used in Section \ref{s-pi-pts} to find purely inseparable points on regular del Pezzo surfaces.
\begin{prop} \label{p-insep-bdd-rat-pts}
Let $k$ be a field of characteristic $p>0$ and let $X$ be a regular surface over $k$.
Suppose that $X_{\overline k}=X \times_k \overline k$ is a normal surface over $\overline k$
with a unique singular point $y$.
Assume that $y$ is a canonical singularity of type $A_{p^n-1}$.
Let $z$ be the image of $y$ under the induced morphism $X_{\overline k} \to X_{k^{1/p^n}} =X \times_k k^{1/p^n}$.
Then $z$ is a $k^{1/p^n}$-rational point on $X_{k^{1/p^n}}$.
\end{prop}
\begin{proof}
Set $R:=\MO_{X, x}$, where $x$ is the unique closed point at which $X$ is not smooth over $k$.
Let $k^{\text{sep}}$ be the separable closure of $k$.
For $R_{k^{\text{sep}}}:=R \otimes_k k^{\text{sep}}$,
it follows from Example \ref{ex-A_p^n} that $\nu(R_{k^{\text{sep}}})=p^n$.
Let $x_s$ be the image of $y$ under the induced morphism $X_{\overline k} \to X \times_k k^{\text{sep}}$.
Lemma \ref{l-deg-ext-sing} implies
that $k^{\sep} \subset \kappa(x_s)$ is purely inseparable and that $[\kappa(x_s):k^{\sep}]$ is a divisor of $p^n$.
In particular, $\kappa(x_s) \subset (k^{\text{sep}})^{1/p^n}$.
Consider the Galois extension $k^{1/p^n} \subset (k^{\text{sep}})^{1/p^n}$ and denote by $G$ its Galois group.
For $X_{(k^{\text{sep}})^{1/p^n}}:=X \times_k (k^{\text{sep}})^{1/p^n}$,
$G$ acts on the set $X_{(k^{\text{sep}})^{1/p^n}}((k^{\text{sep}})^{1/p^n})$.
The unique singular $(k^{\text{sep}})^{1/p^n}$-rational point on $X_{(k^{\text{sep}})^{1/p^n}}$
is fixed under the $G$-action.
Thus it descends to a $k^{1/p^n}$-rational point on $X_{k^{1/p^n}}$, which is necessarily the point $z$.
\end{proof}
\section{Behaviour of del Pezzo surfaces under base changes}
In this section, we study the behaviour of canonical del Pezzo surfaces over an imperfect field $k$ under the base changes to the algebraic closure $\overline{k}$.
\subsection{Classification of base changes of del Pezzo surfaces}\label{s-classify}
In this subsection, we give a classification of the base changes of
del Pezzo surfaces with canonical singularities over imperfect fields
(Theorem \ref{t-classify-bc}).
To this end, we need two auxiliary lemmas:
Lemma \ref{l-Reid} and Lemma \ref{l-rationality}.
The former classifies $\Q$-factorial surfaces over algebraically closed fields
whose anti-canonical bundles are sufficiently positive.
Its proof is based on a simple but clever idea of Reid
(cf. the proof of \cite[Theorem 1.1]{Rei94}).
The latter, i.e. Lemma~\ref{l-rationality},
gives a rationality criterion
for the base changes of log del Pezzo surfaces.
\begin{lem}\label{l-Reid}
Let $k$ be an algebraically closed field.
Let $Y$ be a projective normal $\Q$-factorial surface over $k$
such that $-K_Y \equiv A+D$
for an ample Cartier divisor $A$ and a pseudo-effective $\Q$-divisor $D$.
Let $\mu:Z \to Y$ be the minimal resolution of $Y$.
Then one of the following assertions holds.
\begin{enumerate}
\item $D\equiv 0$ and $Y$ has at worst canonical singularities.
\item $Z$ is
isomorphic to a $\mathbb P^1$-bundle over a smooth projective curve.
\item $Z \simeq \mathbb P^2$.
\end{enumerate}
\end{lem}
\begin{proof}
Assuming that (1) does not hold, let us prove that either (2) or (3) holds.
We have
\[
K_Z+E=\mu^*K_Y
\]
for some effective $\mu$-exceptional $\Q$-divisor $E$ on $Z$.
In particular, it holds that
\[
K_Z+E+\mu^*(D)=\mu^*(K_Y+D) \equiv -\mu^*A.
\]
Since (1) does not hold, we have that $D \not\equiv 0$ or $E \neq 0$.
Then we get
\[
K_Z+\mu^*A \equiv -E-\mu^*(D) \not\equiv 0,
\]
hence $K_Z+\mu^*A$ is not nef.
By the cone theorem for a smooth projective surface \cite[Theorem 1.24]{KM98},
there is a curve $C$ that spans
a $(K_Z+\mu^*A)$-negative extremal ray $R$ of $\overline{{\rm NE}}(Z)$.
Note that $C$ is not a $(-1)$-curve.
Indeed, if $C$ were a $(-1)$-curve, then, since $\mu$ is the minimal resolution, $C$ would not be $\mu$-exceptional; hence $\mu(C)$ is a curve and $\mu^*A \cdot C = A \cdot \mu_*C \geq 1$,
which induces a contradiction:
\[
(K_Z+\mu^*A) \cdot C\geq -1+1=0.
\]
It follows from the classification of the $K_Z$-negative extremal rays
\cite[Theorem 1.28]{KM98} that
either $Z \simeq \mathbb P^2$ or
$Z$ is a $\mathbb P^1$-bundle over a smooth projective curve.
In any case, one of (2) and (3) holds.
\end{proof}
\begin{lem}\label{l-rationality}
Let $(X, \Delta)$ be a projective two-dimensional klt pair over a field $k$ of characteristic $p>0$ such that $-(K_X+\Delta)$ is nef and big.
Assume that $k=H^0(X, \MO_X)$.
Then $(X \times_k \overline{k})_{\red}$ is a rational surface.
\end{lem}
\begin{proof}
See \cite[Proposition 2.20]{NT}.
\end{proof}
We now give a classification of the base changes of del Pezzo surfaces with canonical singularities.
\begin{thm}\label{t-classify-bc}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a canonical del Pezzo surface over $k$ with $k=H^0(X, \MO_X)$.
Then the normalisation $Y$ of $(X \times_k \overline{k})_{\red}$ satisfies one of the following properties.
\begin{enumerate}
\item $X$ is geometrically canonical over $k$.
In particular, $Y \simeq X \times_k \overline k$ and $-K_Y$ is ample.
\item $X$ is not geometrically normal over $k$ and
$Y$ is isomorphic to a Hirzebruch surface, i.e. a $\mathbb P^1$-bundle over $\mathbb P^1$.
\item $X$ is not geometrically normal over $k$ and $Y$ is isomorphic to a weighted projective surface $\mathbb P(1, 1, m)$
for some positive integer $m$.
\end{enumerate}
\end{thm}
\begin{proof}
Replacing $k$ by its separable closure,
we may assume that $k$ is separably closed.
Let $f:Y \to X$ be the induced morphism and let $\mu:Z \to Y$ be the minimal resolution of $Y$.
By \cite[Theorem 4.2]{Tan18b},
there is an effective $\Z$-divisor $D$ on $Y$ such that
\begin{itemize}
\item $K_Y+D=f^*K_X$, and
\item if $X \times_k \overline k$ is not normal, then $D \neq 0$.
\end{itemize}
Since $-K_X$ is an ample Cartier divisor, so is $-f^*K_X$.
Moreover,
it follows from \cite[Lemma 2.2 and Lemma 2.5]{Tan18b}
that $Y$ is $\Q$-factorial.
Hence, we may apply Lemma~\ref{l-Reid} to $-K_Y= -f^*K_X+D$.
By Lemma~\ref{l-rationality}, $Y$ is a rational surface.
Thus, if (2) or (3) of Lemma~\ref{l-Reid} holds,
then one of (1)--(3) of Theorem \ref{t-classify-bc} holds, as desired.
Therefore, let us treat the case when (1) of Lemma~\ref{l-Reid} holds.
Then it holds that $D=0$ and $Y$ has at worst canonical singularities.
In this case, we have that $Y= X \times_k \overline k$ and $X$ is geometrically canonical.
Hence, (1) of Theorem \ref{t-classify-bc} holds, as desired.
\end{proof}
\subsection{Bounds on Frobenius length of geometric non-normality}
In this subsection, we give an upper bound for
the Frobenius length of geometric non-normality for canonical del Pezzo surfaces
(Proposition \ref{p-p2-bound}).
We start by recalling its definition (Definition \ref{d-lF}) and
fundamental properties (Remark \ref{r-lF}).
\begin{dfn}\label{d-lF}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a proper normal variety over $k$ such that $k=H^0(X, \MO_X)$.
The {\em Frobenius length of geometric non-normality} $\ell_F(X/k)$ of $X/k$
is defined by
{\small
\[
\ell_F(X/k):=\min\{\ell \in \Z_{\geq 0}\,|\,
(X \times_k k^{1/p^{\ell}})_{\red}^N \text{ is geometrically normal over }k^{1/p^{\ell}}\}.
\]
}
\end{dfn}
\begin{rem}\label{r-lF}
Let $k$ and $X$ be as in Definition \ref{d-lF}.
Set $\ell:=\ell_F(X/k)$. Let $(k', Y)$ be one of $(k^{1/p^{\infty}}, (X \times_k k^{1/p^{\infty}})_{\red}^N)$
and $(\overline k, (X \times_k \overline k)_{\red}^N)$.
We summarise some results from \cite[Section 5]{Tan19}.
\begin{enumerate}
\item
The minimum on the right hand side of Definition \ref{d-lF}
exists by \cite[Remark 5.2]{Tan19}.
\item
If $X$ is not geometrically normal,
then $\ell$ is a positive integer \cite[Remark 5.3]{Tan19} and
there exist nonzero effective Weil divisors $D_1, ..., D_{\ell}$
such that
\[
K_Y+(p-1)\sum_{i=1}^{\ell} D_i \sim f^*K_X,
\]
where $f:Y \to X$ denotes the induced morphism \cite[Proposition 5.11]{Tan19}.
\item
The $\ell$-th iterated absolute Frobenius morphism $F^{\ell}_{X\times_k k'}$ factors through
the induced morphism $Y \to X \times_k k'$ \cite[Proposition 5.4 and Theorem 5.9]{Tan19}:
\[
F^{\ell}_{X\times_k k'}:X \times_k k' \to Y \to X \times_k k'.
\]
\end{enumerate}
\end{rem}
\begin{prop}\label{p-p2-bound}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a canonical del Pezzo surface over $k$ with $k=H^0(X, \MO_X)$.
Let $Y$ be the normalisation of $(X \times_k \overline k)_{\red}$ and let $f:Y \to X$ be the induced morphism.
Assume that the linear equivalence
\[
K_Y+\sum_{i=1}^r C_i \sim f^*K_X
\]
holds for some prime divisors $C_1, ..., C_r$ (not necessarily $C_i \neq C_j$ for $i \neq j$).
Then it holds that $r \leq 2$.
\end{prop}
\begin{proof}
Set $C:=\sum_{i=1}^r C_i$. We have $K_Y+C \sim f^*K_X$.
If $C=0$, then there is nothing to show.
Hence, we may assume that $C \neq 0$.
In particular, $X$ is not geometrically normal.
In this case, it follows from Theorem \ref{t-classify-bc}
that $Y$ is isomorphic to either a Hirzebruch surface or $\mathbb P(1, 1, m)$ for some $m>0$.
We first treat the case when $Y \simeq \mathbb P(1, 1, m)$.
If $m=1$, then the assertion is obvious.
Hence, we may assume that $m \geq 2$.
In this case, for the minimal resolution $g:Z \to Y$,
we have that
\[
K_Z+\frac{m-2}{m}\Gamma=g^*K_Y
\]
where $\Gamma$ is the negative section of the fibration $ Z \rightarrow \mathbb{P}^1$
such that $\Gamma^{2}=-m$. Note that $m$ is the $\Q$-factorial index of $Y$, i.e. $m D$ is Cartier for any $\Z$-divisor $D$ on $Y$.
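For the reader's convenience, the coefficient $\frac{m-2}{m}$ can be checked directly: writing $K_Z+c\Gamma=g^*K_Y$ and intersecting with $\Gamma$, we get $K_Z \cdot \Gamma + c\,\Gamma^2=0$ because $\Gamma$ is $g$-exceptional, while adjunction gives $K_Z \cdot \Gamma = -2-\Gamma^2 = m-2$; hence
\[
c=\frac{K_Z \cdot \Gamma}{-\Gamma^{2}}=\frac{m-2}{m}.
\]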
We have that
\[
-K_Z=\frac{m-2}{m}\Gamma-g^*K_Y\equiv \frac{m-2}{m}\Gamma+g^*C-g^*f^*K_X.
\]
Consider the intersection number with
a fibre $F_Z$ of $Z \to \mathbb P^1$:
\[
2=\left(\frac{m-2}{m}\Gamma+g^*C-g^*f^*K_X\right) \cdot F_Z
\geq \frac{m-2}{m}+C \cdot g_*(F_Z) +1,
\]
where we used the projection formula, the equality $\Gamma \cdot F_Z=1$, and the fact that $-f^*K_X$ is an ample Cartier divisor.
Thus we obtain
\[
2 \geq C \cdot (mg_*(F_Z)) \geq r,
\]
where the last inequality holds since $mg_*(F_Z)$ is an ample Cartier divisor.
Therefore, we obtain $r \leq 2$, as desired.
It remains to treat the case when $Y$ is a Hirzebruch surface.
For a fibre $F$ of $\pi: Y \to \mathbb P^1$, we have that
\[
-2+C \cdot F= (K_Y+C) \cdot F = f^*K_X \cdot F \leq -1,
\]
hence $C \cdot F \leq 1$.
There are two possibilities: $C \cdot F=1$ or $C \cdot F=0$.
Assume that $C \cdot F=1$.
Then there is a section $\Gamma$ of $\pi$ and a $\pi$-vertical $\Z$-divisor $C'$ such that $C=\Gamma+C'$.
Consider the intersection number with $\Gamma$:
\[
-2+\Gamma \cdot C'=(K_Y+\Gamma+C') \cdot \Gamma =(K_Y+C) \cdot \Gamma = f^*K_X \cdot \Gamma \leq -1.
\]
Therefore, we have $\Gamma \cdot C' \leq 1$.
This implies that either $C'=0$ or $C'$ is a prime divisor.
In any case, we get $r \leq 2$, as desired.
We may assume that $C \cdot F=0$, i.e. $C$ is a $\pi$-vertical divisor.
Let $\Gamma$ be a section of $\pi$ such that $\Gamma^2 \leq 0$.
We have that
\[
-2 + C \cdot \Gamma=(K_Y+\Gamma+C) \cdot \Gamma \leq (K_Y+C) \cdot \Gamma =f^*K_X \cdot \Gamma \leq -1.
\]
Hence, we obtain $C \cdot \Gamma \leq 1$, which implies $r \leq 1$.
\end{proof}
\begin{thm}\label{t-p2-bound}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a canonical del Pezzo surface over $k$
such that $k=H^0(X, \MO_X)$.
Let $Y$ be the normalisation of $(X \times_k \overline k)_{\red}$ and let
\[
\mu: Y \to X \times_k \overline k
\]
be the induced morphism.
\begin{enumerate}
\item If $p \geq 5$, then $X$ is geometrically canonical,
i.e. $\mu$ is an isomorphism and $Y$ has at worst canonical singularities.
\item If $p=3$, then $\ell_{F}(X/k) \leq 1$ and the absolute Frobenius morphism $F_{X \times_k \overline k}$
of $X \times_k \overline k$ factors through $\mu$:
\[
F_{X \times_k \overline k}:X \times_k \overline k\to Y \xrightarrow{\mu} X \times_k \overline k.
\]
\item
If $p=2$, then $\ell_{F}(X/k) \leq 2$ and the second iterated absolute Frobenius morphism $F^2_{X \times_k \overline k}$
of $X \times_k \overline k$ factors through $\mu$:
\[
F^2_{X \times_k \overline k}:X \times_k \overline k\to Y \xrightarrow{\mu} X \times_k \overline k.
\]
\end{enumerate}
\end{thm}
\begin{proof}
The assertion follows from Remark \ref{r-lF} and Proposition \ref{p-p2-bound}.
\end{proof}
\section{Numerically trivial line bundles on log del Pezzo surfaces}\label{s-nume-triv}
The purpose of this section is to give an explicit upper bound on the torsion index of numerically trivial line bundles on log del Pezzo surfaces over imperfect fields (Theorem \ref{t-klt-bdd-torsion}).
To achieve this result, we use the minimal model program to reduce the problem
to the case when our log del Pezzo surface admits a Mori fibre space structure $\pi:X \to B$.
The cases $\dim B=0$ and $\dim B=1$
will be settled in Theorem \ref{t-cano-bdd-torsion} and
Proposition \ref{p-ess-klt-bdd-torsion}, respectively.
\subsection{Canonical case}
In this subsection, we study numerically trivial Cartier divisors on del Pezzo surfaces with canonical singularities.
\begin{thm}\label{t-cano-bdd-torsion}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a canonical weak del Pezzo surface over $k$ such that $k=H^0(X, \MO_X)$.
Let $L$ be a numerically trivial Cartier divisor on $X$.
Then the following hold.
\begin{enumerate}
\item If $p \geq 5$, then $L \sim 0$.
\item If $p=3$, then $3L \sim 0$.
\item If $p=2$, then $4L \sim 0$.
\end{enumerate}
\end{thm}
\begin{proof}
We first reduce the problem to the case when $-K_X$ is ample.
It follows from \cite[Theorem 4.2]{Tan18a} that
$-K_X$ is semi-ample.
As $-K_X$ is also big,
$|-mK_X|$ induces, for sufficiently divisible $m>0$, a birational morphism
$f:X \to Y$ to a projective normal surface $Y$.
Then it holds that $K_Y$ is $\Q$-Cartier and $K_X=f^*K_Y$.
In particular, $Y$ has at worst canonical singularities.
Then \cite[Theorem 4.4]{Tan18a} enables us to find a numerically trivial Cartier divisor $L_Y$ on $Y$ such that $f^*L_Y \sim L$.
Hence the problem is reduced to the case when $-K_X$ is ample.
We only treat the case when $p=2$, as the other cases are easier.
By Theorem \ref{t-p2-bound}, the second iterated absolute Frobenius morphism
\[
F^2_{X \times_k \overline k}:X \times_k \overline k \to X \times_k \overline k
\]
factors through the normalisation $(X \times_k \overline k)_{\red}^N$ of $(X \times_k \overline k)_{\red}$:
\[
F^2_{X \times_k \overline k}:
X \times_k \overline k \to (X \times_k \overline k)_{\red}^N \xrightarrow{\mu}
X \times_k \overline k,
\]
where $\mu$ denotes the induced morphism.
Set $\mathcal L:=\MO_X(L)$ and
let $\mathcal L_{\overline k}$ be the pullback of $\mathcal L$ to $X \times_k \overline k$.
Since $(X \times_k \overline k)_{\red}^N$ is a normal rational surface by Lemma \ref{l-rationality},
any numerically trivial invertible sheaf is trivial:
$\mu^*\mathcal L_{\overline k} \simeq \MO_{(X \times_k \overline k)_{\red}^N}$.
As $F^2_{X \times_k \overline k}$ factors through $\mu$, we have that
\[
\mathcal L_{\overline k}^4 =
(F^2_{X \times_k \overline k})^*\mathcal L_{\overline k} \simeq \MO_{X \times_k \overline k}.
\]
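Here we used that the iterated absolute Frobenius pulls back an invertible sheaf to a tensor power:
\[
(F^e_W)^*\mathcal M \simeq \mathcal M^{\otimes p^e}
\quad \text{for any invertible sheaf } \mathcal M \text{ on an } \mathbb F_p\text{-scheme } W;
\]
with $p=2$ and $e=2$ this gives the first identification above, while the factorisation through $\mu$ and the triviality of $\mu^*\mathcal L_{\overline k}$ give the second.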
Then it holds that
\[
H^0(X, \mathcal L^4) \otimes_k \overline k \simeq
H^0(X \times_k \overline k, \mathcal L^4_{\overline k}) \simeq
H^0(X \times_k \overline k, \MO_{X \times_k \overline k}) \neq 0.
\]
Hence we obtain $H^0(X, \mathcal L^4) \neq 0$, i.e. $4L \sim 0$.
\end{proof}
\subsection{Essential step for the log case}
In this subsection,
we study the torsion index of numerically trivial line bundles on log del Pezzo surfaces admitting the following special Mori fibre space structure onto a curve.
\begin{nota}\label{n-ess-klt-case}
We use the following notation.
\begin{enumerate}
\item $k$ is a field of characteristic $p>0$.
\item $X$ is a regular $k$-surface of del Pezzo type such that
$k=H^0(X, \MO_X)$ and $\rho(X)=2$.
\item $B$ is a regular projective curve over $k$ such that $k=H^0(B, \MO_B)$.
\item $\pi:X \to B$ is a $K_X$-Mori fibre space.
\item
Let $R = \R_{\geq 0}[\Gamma]$ be the extremal ray which does not correspond to $\pi$,
where $\Gamma$ denotes a curve on $X$.
Note that $\pi(\Gamma)=B$.
Set $d_{\Gamma}:=\dim_k H^0(\Gamma, \MO_{\Gamma}) \in \Z_{>0}$ and $m_{\Gamma}:=[K(\Gamma):K(B)] \in \Z_{>0}$.
We denote by $\pi_{\Gamma}:\Gamma \to B$ the induced morphism.
\item Assume that $K_X \cdot \Gamma >0$.
\end{enumerate}
\end{nota}
\begin{lem}\label{l-ess-klt-case}
We use Notation \ref{n-ess-klt-case}.
Then the following hold.
\begin{enumerate}
\item[(7)] $\Gamma^2 \leq 0$.
\item[(8)] There exists a rational number $\alpha$ such that $0 \leq \alpha <1$
and $(X, \alpha \Gamma)$ is a log del Pezzo pair.
\end{enumerate}
\end{lem}
\begin{proof}
The assertion (7) follows from Lemma \ref{l-ext-ray} below.
Let us prove (8).
By Notation \ref{n-ess-klt-case}(2), there is an effective $\Q$-divisor $\Delta$ such that $(X, \Delta)$ is a log del Pezzo pair.
We write $\Delta= \alpha \Gamma + \Delta'$
for some rational number $0 \leq \alpha<1$ and an effective $\Q$-divisor $\Delta'$
with $\Gamma \not\subset \text{Supp}(\Delta')$.
Since $\overline{\NE}(X)$ is generated by $\Gamma$ and a fibre $F$ of the morphism $\pi:X \to B$,
we conclude that any prime divisor $C$ such that $C \neq \Gamma$ is nef.
In particular, $\Delta'$ is nef.
Hence, $(X, \alpha \Gamma)$ is a log del Pezzo pair.
Thus, (8) holds.
\end{proof}
\begin{lem}\label{l-ext-ray}
Let $k$ be a field.
Let $X$ be a projective $\Q$-factorial normal surface over $k$.
Let $R=\R_{\geq 0}[\Gamma]$ be an
extremal ray of $\overline{\NE}(X)$,
where $\Gamma$ is a curve on $X$.
If $\Gamma^2 > 0$, then $\rho(X)=1$.
\end{lem}
\begin{proof}
We may apply the same argument as in
\cite[Theorem 3.21, Proof of the case where $C^2>0$ in page 20]{Tan14}.
\end{proof}
The first step is to prove that $m_{\Gamma} \leq 5$ (Proposition \ref{p-cov-deg-bound}).
To this end, we find an upper bound and a lower bound for $\alpha$
(Lemma \ref{l-alpha-upper}, Lemma \ref{l-alpha-lower}).
\begin{lem}\label{l-alpha-upper}
We use Notation \ref{n-ess-klt-case}.
Take a closed point $b$ of $B$ and set $F_b:=\pi^*(b)$.
Let $\kappa(b)$ be the residue field at $b$ and set $d(b):=[\kappa(b):k]$.
Then the following hold.
\begin{enumerate}
\item
$K_X \cdot_k F_b = -2d(b)$.
\item
$\Gamma \cdot_k F_b = m_{\Gamma}d(b)$.
\item
If $\alpha$ is a rational number such that $-(K_X+\alpha \Gamma)$ is ample,
then $\alpha m_{\Gamma}<2$.
\end{enumerate}
\end{lem}
\begin{proof}
Let us show (1).
We have that
\[
\deg_k \omega_{F_b} = (K_X+F_b)\cdot_k F_b= K_X \cdot_k F_b<0.
\]
Hence, Lemma \ref{l-conic} implies that
\[
K_X \cdot_k F_b=\deg_k \omega_{F_b}=-2d(b).
\]
Thus (1) holds.
The assertion (2) follows from the projection formula applied to $\pi_{\Gamma}:\Gamma \to B$.
Let us show (3). Since $-(K_X+\alpha \Gamma)$ is ample, (1) and (2) imply that
\[
0> (K_X+\alpha \Gamma) \cdot_k F_b=-2d(b)+\alpha m_{\Gamma}d(b).
\]
Thus (3) holds.
\end{proof}
\begin{lem}\label{l-alpha-lower}
We use Notation \ref{n-ess-klt-case}.
Then the following hold.
\begin{enumerate}
\item
$(K_X+\Gamma) \cdot_k \Gamma = -2d_{\Gamma}<0$.
\item
For a rational number $\beta$ with $0 \leq \beta \leq 1$, it holds that
\[
(K_X+\beta \Gamma) \cdot_k \Gamma \geq d_{\Gamma}(1-3\beta).
\]
\item
If $\alpha$ is a rational number such that $0 \leq \alpha <1$ and $-(K_X+\alpha \Gamma)$ is ample,
then it holds that $1/3 < \alpha$.
\end{enumerate}
\end{lem}
\begin{proof}
We fix a rational number $\alpha$
such that $0 \leq \alpha <1$ and $-(K_X+\alpha \Gamma)$ is ample,
whose existence is guaranteed by Lemma \ref{l-ess-klt-case}.
Let us show (1).
It holds that
\[
(K_X+\Gamma) \cdot_k \Gamma \leq (K_X+\alpha \Gamma) \cdot_k \Gamma <0,
\]
where the first inequality follows from $\Gamma^2\leq 0$ and $0\leq \alpha <1$,
whilst the second one holds since $-(K_X+\alpha \Gamma)$ is ample.
Therefore, by adjunction and Lemma \ref{l-conic}, we deduce
$(K_X+\Gamma) \cdot_k \Gamma =\deg_k \omega_{\Gamma} = -2d_{\Gamma}$.
Thus (1) holds.
Let us show (2).
For $k_{\Gamma}:=H^0(\Gamma, \MO_{\Gamma})$,
the equation $d_{\Gamma} = [k_{\Gamma}:k]$ (Notation \ref{n-ess-klt-case}(5))
implies that
\[
K_X \cdot_k \Gamma = \deg_k (\omega_X|_{\Gamma}) = d_{\Gamma} \cdot \deg_{k_{\Gamma}} (\omega_X|_{\Gamma}) \in d_{\Gamma} \Z.
\]
Combining with $K_X \cdot_k \Gamma>0$ (Notation \ref{n-ess-klt-case}(6)),
we obtain $K_X \cdot_k \Gamma \geq d_{\Gamma}$.
Hence, it holds that
\[
(K_X+\beta \Gamma) \cdot_k \Gamma
= (1-\beta) K_X \cdot_k \Gamma+\beta (K_X+\Gamma) \cdot_k \Gamma
\]
\[
=(1-\beta) K_X \cdot_k \Gamma+\beta (-2d_{\Gamma})
\geq (1-\beta) d_{\Gamma} + \beta (-2d_{\Gamma})=d_{\Gamma}(1-3\beta).
\]
Thus (2) holds.
The assertion (3) follows from (2).
\end{proof}
\begin{prop}\label{p-cov-deg-bound}
We use Notation \ref{n-ess-klt-case}.
It holds that $m_{\Gamma} \leq 5.$
\end{prop}
\begin{proof}
We fix a rational number $\alpha$
such that $0 \leq \alpha <1$ and $-(K_X+\alpha \Gamma)$ is ample,
whose existence is guaranteed by Lemma \ref{l-ess-klt-case}.
Then the inequality $m_{\Gamma}<6$ holds by
\[
\frac{2}{m_{\Gamma}} > \alpha > \frac{1}{3},
\]
where the first and second inequalities follow from Lemma \ref{l-alpha-upper}
and Lemma \ref{l-alpha-lower}, respectively.
\end{proof}
To prove the main result of this subsection (Proposition \ref{p-ess-klt-bdd-torsion}),
we first treat the case when $K(\Gamma)/K(B)$ is separable or purely inseparable.
\begin{lem}\label{l-sep-or-p-insep}
We use Notation \ref{n-ess-klt-case}.
Let $L_B$ be a numerically trivial Cartier divisor on $B$.
Then the following hold.
\begin{enumerate}
\item If $K(\Gamma)/K(B)$ is a separable extension, then $\omega_B^{-1}$ is ample
and $L_B \sim 0$.
\item
If $K(\Gamma)/K(B)$ is a purely inseparable extension of degree $p^e$
for some $e \in \Z_{>0}$,
then $p^e L_B \sim 0$.
\end{enumerate}
\end{lem}
\begin{proof}
We first prove (1).
Assume that $K(\Gamma)/K(B)$ is a separable extension.
Let $\Gamma^N \to \Gamma$ be the normalisation of $\Gamma$.
Set $\pi_{\Gamma^N}:\Gamma^N \to B$ to be the induced morphism.
Since $\omega_{\Gamma}^{-1}$ is ample, so is $\omega_{\Gamma^N}^{-1}$.
Hence we obtain $H^1(\Gamma^N, \MO_{\Gamma^N})=0$ (Lemma \ref{l-conic}).
Thanks to the Hurwitz formula (cf. \cite[Theorem 4.16 in Section 7]{Liu02}),
we have that $H^1(B, \MO_B)=0$, thus $\omega_B^{-1}$ is ample (Lemma \ref{l-conic}).
In particular, the numerically trivial Cartier divisor $L_B$ is trivial, i.e. $L_B \sim 0$.
Thus (1) holds.
We now show (2).
Since $K(\Gamma)/K(B)$ is a purely inseparable extension of degree $p^e$,
the $e$-th iterated absolute Frobenius morphism $F^e_B:B \to B$ factors through
the induced morphism $\pi_{\Gamma^N}:\Gamma^N \to B$:
\[
F^e_B:B \to \Gamma^N \xrightarrow{\pi_{\Gamma^N}} B.
\]
It holds that $\pi_{\Gamma^N}^*L_B \sim 0$, hence
$p^e L_B = (F^e_B)^* L_B \sim 0.$ Thus (2) holds.
\end{proof}
\begin{prop}\label{p-ess-klt-bdd-torsion}
We use Notation \ref{n-ess-klt-case}.
Let $L$ be a numerically trivial Cartier divisor on $X$.
Then the following hold.
\begin{enumerate}
\item
If $p \geq 7$, then $L \sim 0$.
\item
If $p \in \{3, 5\}$, then $pL \sim 0$.
\item
If $p=2$, then $4L \sim 0.$
\end{enumerate}
\end{prop}
\begin{proof}
By \cite[Theorem 4.4]{Tan18a},
there exists a numerically trivial Cartier divisor $L_B$ on $B$ such that
$\pi^* L_B \sim L$.
If $K(\Gamma)/K(B)$ is separable, then Lemma \ref{l-sep-or-p-insep}(1) implies that $L \sim 0$.
Therefore, we may assume that $K(\Gamma)/K(B)$ is not a separable extension.
Thanks to Proposition \ref{p-cov-deg-bound}, we have
\[
[K(\Gamma):K(B)]=m_{\Gamma} \leq 5.
\]
Let us show (1).
Assume $p \geq 7$.
In this case,
there does not exist an inseparable extension $K(\Gamma)/K(B)$ with $[K(\Gamma):K(B)]\leq 5$.
Thus (1) holds.
Let us show (2).
Assume $p \in \{3, 5\}$.
Since $K(\Gamma)/K(B)$ is not a separable extension and $[K(\Gamma):K(B)]\leq 5$,
it holds that $K(\Gamma)/K(B)$ is a purely inseparable extension of degree $p$.
Hence, Lemma \ref{l-sep-or-p-insep}(2) implies that $pL \sim 0$.
Thus (2) holds.
Let us show (3).
Assume $p=2$.
Since $K(\Gamma)/K(B)$ is not a separable extension and $[K(\Gamma):K(B)]\leq 5$,
there are the following three possibilities (i)--(iii).
\begin{enumerate}
\item[(i)] $K(\Gamma)/K(B)$ is a purely inseparable extension of degree $2$.
\item[(ii)] $K(\Gamma)/K(B)$ is a purely inseparable extension of degree $4$.
\item[(iii)] $K(\Gamma)/K(B)$ is an inseparable extension of degree $4$ which is not purely inseparable.
\end{enumerate}
If (i) or (ii) holds, then Lemma \ref{l-sep-or-p-insep}(2) implies that $4L \sim 0$.
Hence we may assume that (iii) holds.
Let $\Gamma^N \to \Gamma$ be the normalisation of $\Gamma$.
Corresponding to the separable closure of $K(B)$ in $K(\Gamma)=K(\Gamma^N)$,
we obtain the following factorisation
\[
\Gamma^N \to B_1 \to B
\]
where $K(\Gamma^N)/K(B_1)$ is a purely inseparable extension of degree two and
$K(B_1)/K(B)$ is a separable extension of degree two.
In particular, $K(B_1)/K(B)$ is a Galois extension.
Set $G:={\rm Gal} (K(B_1)/K(B))=\{{\rm id}, \sigma\}$.
Since $L_B|_{\Gamma^N} \sim L|_{\Gamma^N} \sim 0$ and
the absolute Frobenius morphism $F_{B_1}:B_1 \to B_1$ factors through $\Gamma^N \to B_1$,
it holds that $2L_B|_{B_1} \sim 0$.
In particular, we have that $H^0(B_1, 2L_B|_{B_1}) \neq 0$.
Fix $0 \neq s \in H^0(B_1, 2L_B|_{B_1})$. We obtain
\[
0 \neq s \sigma(s) \in H^0(B_1, 4L_B|_{B_1})^{G}.
\]
As $s \sigma(s)$ is $G$-invariant,
$s \sigma(s)$ descends to $B$, i.e. there is an element
\[
t \in H^0(B, 4L_B)
\]
such that $t|_{B_1}=s \sigma(s)$.
In particular, we obtain $t \neq 0$, hence $4L_B \sim 0$.
Therefore, we have $4L \sim 0$.
\end{proof}
\subsection{General case}
We are ready to prove the main theorem of this section.
\begin{thm}\label{t-klt-bdd-torsion}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a $k$-surface of del Pezzo type.
Let $L$ be a numerically trivial Cartier divisor on $X$.
Then the following hold.
\begin{enumerate}
\item If $p \geq 7$, then $L \sim 0$.
\item If $p \in \{3, 5\}$, then $pL \sim 0$.
\item If $p=2$, then $4L \sim 0$.
\end{enumerate}
\end{thm}
\begin{proof}
Replacing $k$ by $H^0(X, \MO_X)$, we may assume that $k=H^0(X, \MO_X)$.
Furthermore, replacing $X$ by its minimal resolution, we may assume that $X$ is regular by Lemma \ref{l-dP-min-res}.
We run a $K_X$-MMP:
\[
\varphi:X=:X_0 \to X_1 \to \cdots \to X_n.
\]
Since $-K_X$ is big, the end result $X_n$ is a $K_{X_n}$-Mori fibre space.
It follows from \cite[Theorem 4.4(3)]{Tan18a} that
there exists a Cartier divisor $L_n$ with $\varphi^*L_n \sim L$.
Since also $X_n$ is of del Pezzo type by Lemma \ref{l-dP-under-bir-mor}, we may replace $X$ by $X_n$.
Let $\pi:X \to B$ be the induced $K_X$-Mori fibre space.
If $\dim B=0$, then we conclude by Theorem \ref{t-cano-bdd-torsion}.
Hence we may assume that $\dim B=1$.
Since $X$ is a surface of del Pezzo type,
there is an effective $\Q$-divisor $\Delta$ such that $(X, \Delta)$ is klt and $-(K_X+\Delta)$ is ample.
Hence any extremal ray of $\overline{\NE}(X)$ is spanned by a curve.
Note that $\rho(X)=2$ and a fibre of $\pi:X \to B$ spans an extremal ray of $\overline{\NE}(X)$.
Let $R=\R_{\geq 0}[\Gamma]$ be the other extremal ray, where $\Gamma$ is a curve on $X$.
To summarise, (1)--(5) of Notation \ref{n-ess-klt-case} hold.
There are the following three possibilities:
\begin{enumerate}
\item[(i)] $\Gamma^2 \geq 0$.
\item[(ii)] $\Gamma^2<0$ and $K_X \cdot \Gamma \leq 0$.
\item[(iii)] $\Gamma^2<0$ and $K_X \cdot \Gamma>0$.
\end{enumerate}
Assume (i).
In this case, any curve $C$ on $X$ is nef.
Since $-(K_X+\Delta)$ is ample, also $-K_X$ is ample.
Therefore, we conclude by Theorem \ref{t-cano-bdd-torsion}.
Assume (ii).
In this case, $-K_X$ is nef and big.
Again, Theorem \ref{t-cano-bdd-torsion} implies the assertion of Theorem \ref{t-klt-bdd-torsion}.
Assume (iii).
In this case, all the conditions (1)--(6) of Notation \ref{n-ess-klt-case} hold.
Hence the assertion of Theorem \ref{t-klt-bdd-torsion} follows from Proposition \ref{p-ess-klt-bdd-torsion}.
\end{proof}
\section{Results in large characteristic}
In this section, we prove the existence of geometrically normal birational models of log del Pezzo surfaces over imperfect fields of characteristic at least seven (Theorem \ref{t-dP-large-p}).
As consequences, we prove geometric integrality (Corollary \ref{c-geom-red-7}) and vanishing of irregularity for such surfaces (Theorem \ref{t-h1-vanish}).
\subsection{Analysis up to birational modification}
The purpose of this subsection is to prove Theorem \ref{t-dP-large-p}.
To this end, we establish auxiliary results on Mori fibre spaces
(Proposition \ref{p-dP-large-p1} and Proposition \ref{p-dP-large-p2}).
We start by recalling the following well-known relation between the Picard rank and the anti-canonical volume of del Pezzo surfaces.
\begin{lem} \label{l-degree-picardrank}
Let $Y$ be a smooth weak del Pezzo surface over an algebraically closed field $k$.
Then $\rho(Y) = 10 - K_Y^2.$
In particular, it holds that $\rho(Y) \leq 9$.
\end{lem}
\begin{proof}
Let $Y=:Y_1 \rightarrow Y_2 \rightarrow \cdots \rightarrow Y_n=Z$ be a $K_Y$-MMP, where $Z$ is a weak del Pezzo surface endowed with a $K_Z$-Mori fibre space $Z \rightarrow B$.
It is sufficient to prove the relation $\rho(Z) = 10 - K_Z^2$,
which is well known (cf. \cite[Theorem 1.28]{KM98}); indeed, each step of the MMP contracts a $(-1)$-curve, so $\rho$ drops by one while $K^2$ increases by one, leaving $\rho+K^2$ unchanged.
\end{proof}
\begin{prop}\label{p-dP-large-p1}
Let $k$ be a field of characteristic $p\geq 11$.
Let $X$ be a regular del Pezzo $k$-surface such that
$k=H^0(X, \MO_X)$.
Then $X$ is smooth over $k$.
\end{prop}
\begin{proof}
By Theorem \ref{t-p2-bound}, $X \times_k \overline{k}$ has at most canonical singularities.
By \cite[Theorem 6.1]{Sch08} such singularities are of type $A_{p^e-1}$.
Since $X \times_k \overline{k}$ is a canonical del Pezzo surface,
its minimal resolution $\pi \colon Y \rightarrow X \times_k \overline{k}$ is a smooth weak del Pezzo surface and we have
\[
9 \geq \rho(Y) \geq \rho(X \times_k \overline{k})+ \sum_{x \in \text{Sing}(X \times_k \overline{k})} (p-1) \geq \sum_{x \in \text{Sing}(X \times_k \overline{k})} 10,
\]
where the first inequality follows from Lemma \ref{l-degree-picardrank} and the last inequality holds by $p \geq 11$.
Thus, we obtain $\text{Sing}(X \times_k \overline{k})=\emptyset$, as desired.
\end{proof}
\begin{prop}\label{p-dP-large-p2}
Let $k$ be a field of characteristic $p>0$.
Let $X$ be a regular $k$-surface of del Pezzo type such that $k=H^0(X, \MO_X)$.
Assume that there is a $K_X$-Mori fibre space $\pi:X \to B$ to a
projective regular $k$-curve $B$.
Let $\Gamma$ be a curve which spans the extremal ray of $\overline{\NE}(X)$
not corresponding to $\pi$.
Then the following hold.
\begin{enumerate}
\item If $K_X \cdot \Gamma < 0 $ (resp. $\leq 0$), then $-K_X$ is ample (resp. nef and big). If $p \geq 5$, then $\omega_B^{-1}$ is ample and $B$ is smooth over $k$.
\item If $K_X \cdot \Gamma >0$ and $p \geq 7$,
then $\omega_B^{-1}$ is ample and $B$ is smooth over $k$.
\item
If $K_X \cdot \Gamma >0$, $p \geq 7$, and $k$ is separably closed,
then $\Gamma$ is a section of $\pi$ and $\pi$ is smooth. In particular, $X$ is smooth over $k$.
\end{enumerate}
\end{prop}
\begin{proof}
The first part of assertion (1) follows immediately from Kleiman's criterion for ampleness (resp. \cite[Theorem 2.2.16]{Laz04a}).
Assume $p \geq 5$.
The anti-canonical model $Z$ of $X$ is geometrically normal by Theorem \ref{t-p2-bound} and thus $H^1(Z, \mathcal{O}_Z)=0$.
This implies that $H^1(X, \mathcal{O}_X)=0$ and $H^1(B, \mathcal{O}_B)=0$.
Hence, the assertion (1) holds by Lemma \ref{l-conic} and Lemma \ref{p-Fano-curve}.
Let us show (2).
The field extension $K(\Gamma)/K(B)$ corresponding to
the induced morphism $\pi_{\Gamma} : \Gamma \to B$ is separable (Proposition \ref{p-cov-deg-bound}).
Thus $B$ is a curve such that $\omega_B^{-1}$ is ample (Lemma \ref{l-sep-or-p-insep}).
Since $p>2$, $B$ is a $k$-smooth curve by Lemma \ref{p-Fano-curve}.
Thus (2) holds.
Let us show (3).
It follows from Proposition \ref{p-MFS-basic}(6) that $\pi$ is a smooth morphism.
Hence it suffices to show that $\pi_{\Gamma} : \Gamma \to B$ is a section of $\pi$.
Since $K(\Gamma)$ is separable over $K(B)$ and $B$ is smooth over $k$,
$K(\Gamma)$ is separable over $k$, i.e. $K(\Gamma)$ is geometrically reduced over $k$.
Hence also $\Gamma$ is geometrically reduced over $k$.
Since $X_{\overline k}$ is a smooth projective rational surface with $\rho(X_{\overline k})=2$,
$X_{\overline k}$ is a Hirzebruch surface and
$\pi_{\overline k}:X_{\overline k} \to B_{\overline k}$ is a projection.
Since the pullback $\Gamma_{\overline k}$ of $\Gamma$ is a curve with $\Gamma_{\overline k}^2<0$ by Lemma \ref{l-ext-ray},
$\Gamma_{\overline k}$ is a section of $\pi_{\overline k}:X_{\overline k} \to B_{\overline k}$.
The base change $\Gamma_{\overline k} \to B_{\overline k}$ is an isomorphism,
hence so is the original one $\pi_{\Gamma}:\Gamma \to B$.
Thus (3) holds.
\end{proof}
\begin{thm}\label{t-dP-large-p}
Let $k$ be a separably closed field of characteristic $p \geq 7$.
Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \MO_X)$.
Then there exists a birational map $X \dashrightarrow Y$
to a projective normal $k$-surface $Y$ such that
one of the following properties holds.
\begin{enumerate}
\item $Y$ is a regular del Pezzo surface such that $k=H^0(Y, \MO_Y)$ and $\rho(Y)=1$.
In particular, $Y$ is geometrically canonical over $k$.
Moreover, if $p\geq 11$, then $Y$ is smooth over $k$.
\item There is a smooth projective morphism $\pi:Y \to B$
such that $B \simeq \mathbb P^1_k$ and the fibre $\pi^{-1}(b)$ is isomorphic to $\mathbb P^1_{k(b)}$
for any closed point $b$ of $B$,
where $k(b)$ denotes the residue field of $b$.
In particular, $Y$ is smooth over $k$ and $Y \times_k \overline k$ is a Hirzebruch surface.
\end{enumerate}
\end{thm}
\begin{proof}
Let $f:Z \to X$ be the minimal resolution of $X$.
By Lemma \ref{l-dP-min-res}, $Z$ is a $k$-surface of del Pezzo type.
We run a $K_Z$-MMP:
\[
Z=:Z_0 \to Z_1 \to \cdots \to Z_n=:Y.
\]
By Lemma \ref{l-dP-under-bir-mor}, the surfaces $Z_i$ are of del Pezzo type.
The end result $Y$ admits a $K_Y$-Mori fibre space structure $\pi:Y \to B$.
If $\dim B=0$, then $Y$ is a regular del Pezzo surface, hence (1) holds by Theorem \ref{t-p2-bound} and Proposition \ref{p-dP-large-p1}.
If $\dim B=1$, then Proposition \ref{p-dP-large-p2} implies that (2) holds.
\end{proof}
\begin{cor}\label{c-geom-red-7}
Let $k$ be a field of characteristic $p \geq 7$.
Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \MO_X)$.
Then $X$ is geometrically integral over $k$.
\end{cor}
\begin{proof}
We may assume $k$ is separably closed.
It is enough to show that $X$ is geometrically reduced \cite[Lemma 2.2]{Tan18b}.
By Lemma \ref{l-gred-open}, we may replace $X$ by a surface birational to $X$.
Then the assertion follows from Theorem \ref{t-dP-large-p}.
\end{proof}
\subsection{Vanishing of $H^1(X, \MO_X)$}
In this subsection, we prove that surfaces of del Pezzo type over an imperfect field of characteristic $p \geq 7$ have vanishing irregularity.
\begin{lem}\label{l-h1-vanish}
Let $k$ be a field of characteristic $p > 0$.
Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \MO_X)$.
If $X$ is geometrically normal over $k$,
then it holds that $H^i(X, \MO_X)=0$ for $i>0$.
\end{lem}
\begin{proof}
The assertion immediately follows from Lemma \ref{l-rationality}.
\end{proof}
\begin{thm}\label{t-h1-vanish}
Let $k$ be a field of characteristic $p\geq 7$.
Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \mathcal{O}_X)$.
Then $H^i(X, \MO_X)=0$ for $i>0$.
\end{thm}
\begin{proof}
We may assume that $k$ is separably closed.
Let $X \dashrightarrow Y$ be the birational map as in the statement
of Theorem \ref{t-dP-large-p}.
Lemma \ref{l-h1-vanish} implies that $H^i(Y, \MO_Y)=0$ for $i>0$.
Let $\varphi:W \to X$ and $\psi:W \to Y$ be birational morphisms
from a regular projective surface $W$.
Since both $Y$ and $W$ are regular, we have that $H^i(W, \MO_W)=0$ for $i>0$.
Then the Leray spectral sequence implies that $H^1(X, \MO_X)=0$.
It is clear that $H^j(X, \MO_X)=0$ for $j \geq 2$.
\end{proof}
\begin{rem}
We now give an alternative proof of Theorem \ref{t-h1-vanish}.
We use the same notation as in \cite[Chapter 9]{FGAex}.
Assume that $H^1(X, \MO_X) \neq 0$ and let us derive a contradiction.
We may assume that $k$ is separably closed.
Since $X$ is geometrically integral over $k$ (Corollary \ref{c-geom-red-7}),
$X$ has a $k$-rational point, i.e. $X(k) \neq \emptyset$.
By \cite[Theorem 9.2.5 and Corollary 9.4.18.3]{FGAex},
there exists a scheme $\mathbf{Pic}_{X/k}$ that represents
any of the functors $\text{Pic}_{X/k}$, $\text{Pic}_{X/k, (\text{\'et})}$, and
$\text{Pic}_{X/k, (\text{fppf})}$.
Then $\mathbf{Pic}_{X/k}$ is a group $k$-scheme which is locally of finite type over $k$ \cite[Proposition 9.4.17]{FGAex} and its connected component $\mathbf{Pic}_{X/k}^0$ containing the identity is an open and closed group subscheme of finite type over $k$ \cite[Proposition 9.5.3]{FGAex}.
By $H^1(X, \MO_X) \neq 0$ and $H^2(X, \MO_X)=0$,
$\mathbf{Pic}_{X/k}^0$ is smooth and $\dim \mathbf{Pic}_{X/k}^0>0$
\cite[Remark 9.5.15 and Theorem 9.5.11]{FGAex}.
Since $k$ is separably closed, $\mathbf{Pic}_{X/k}^0(k)$ is an infinite set.
In particular, there exists a numerically trivial Cartier divisor $L$ on $X$
with $L \not\sim 0$.
This contradicts Theorem \ref{t-klt-bdd-torsion}.
\end{rem}
In characteristic zero, it is known that the image of a variety of Fano type under a surjective morphism remains of Fano type (cf. \cite[Theorem 5.12]{FG12}).
The same result is false over imperfect fields of low characteristic as shown in \cite[Theorem 1.4]{Tan}.
We now prove that this phenomenon can appear exclusively in low characteristic.
\begin{cor}\label{c-h1-vanish}
Let $k$ be a field of characteristic $p \geq 7$.
Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \mathcal{O}_X)$ and
let $\pi \colon X \rightarrow Y$ be a projective $k$-morphism such that $\pi_* \mathcal{O}_X= \mathcal{O}_Y$.
Then $Y$ is a $k$-variety of Fano type.
Furthermore, if $\dim Y=1$, then $Y$ is smooth over $k$.
\end{cor}
\begin{proof}
We distinguish two cases according to $\dim Y$.
If $\dim Y=2$, then $\pi$ is birational and we conclude by Lemma \ref{l-dP-under-bir-mor}.
If $\dim Y=1$, then thanks to the Leray spectral sequence, we have an injection:
\[
H^1(Y, \MO_Y) \hookrightarrow H^1(X, \MO_X),
\]
where $H^1(X, \MO_X)=0$ by Theorem \ref{t-h1-vanish}. Therefore $\omega_Y^{-1}$ is ample by Lemma \ref{l-conic} and $Y$ is smooth over $k$ by Lemma \ref{p-Fano-curve}.
\end{proof}
\section{Purely inseparable points on log del Pezzo surfaces}\label{s-pi-pts}
The aim of this section is to construct purely inseparable points
of bounded degree on log del Pezzo surfaces $X$ over $C_1$-fields of positive characteristic
(Theorem \ref{t-ex-rat-points-dP}).
Since we may take birational model changes,
the problem is reduced to the case when $X$ has a Mori fibre space structure $X \to B$.
The case when $\dim B=0$ and $\dim B=1$ are treated in
Subsection \ref{ss1-pi-pts} and Subsection \ref{ss2-pi-pts}, respectively.
In Subsection \ref{ss3-pi-pts}, we prove the main result
of this section (Theorem \ref{t-ex-rat-points-dP}).
\subsection{Purely inseparable points on regular del Pezzo surfaces}\label{ss1-pi-pts}
In this subsection we prove the existence of purely inseparable points with bounded degree on geometrically normal regular del Pezzo surfaces over $C_1$-fields.
If $K_X^2 \leq 4$, then we apply the strategy as in \cite[Theorem IV.6.8]{Kol96}
(Lemma \ref{l-rat-pts-low-deg}).
We analyse the remaining cases by using a classification result given by \cite[Section 6]{Sch08} and Proposition \ref{p-insep-bdd-rat-pts}.
We first relate the $C_r$-condition
(for definition of $C_r$-field, see \cite[Definition IV.6.4.1]{Kol96})
for a field of positive characteristic to its $p$-degree.
\begin{lem}\label{l-Cr-pdeg}
Let $k$ be a field of characteristic $p>0$.
If $r$ is a positive integer and $k$ is a $C_r$-field, then $\pdeg (k) \leq r$,
where $\pdeg (k):=\log_p [k:k^p]$.
In particular, if $k$ is a $C_1$-field, then $\pdeg(k) \leq 1$.
\end{lem}
\begin{proof}
Suppose by contradiction that $[k:k^p] \geq p^{r+1}$.
Let $s_1, ..., s_{p^r+1}$ be elements of $k$ which are linearly independent over $k^p$.
Let us consider the following homogeneous polynomial of degree $p$:
\[ P:=\sum_{i=1}^{p^r+1} s_i x_i^p =s_1x_1^p + \cdots+ s_{p^r+1}x_{p^r+1}^p \in k[x_1, \dots, x_{p^r+1}]. \]
Since $s_1, ..., s_{p^r+1}$ are linearly independent over $k^p$, the polynomial $P$ has only the trivial solution in $k$.
In particular $k$ is not a $C_r$-field.
\end{proof}
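\begin{rem}
To illustrate Lemma \ref{l-Cr-pdeg} with a standard example (recorded here only for concreteness): take $p=2$, $r=1$, and $k=\mathbb F_2(t_1, t_2)$, so that $[k:k^2]=4=p^{r+1}$.
The elements $s_1=1$, $s_2=t_1$, $s_3=t_2$ are linearly independent over $k^2$, and
\[
P=x_1^2+t_1x_2^2+t_2x_3^2
\]
is a homogeneous polynomial of degree $2$ in $3>2^1$ variables whose only zero in $k$ is the trivial one.
Hence $k$ is not a $C_1$-field, in accordance with $\pdeg(k)=2>1$.
\end{rem}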
We then study rational points on geometrically normal del Pezzo surfaces of degree $\leq 4$ (compare with \cite[Exercise IV.6.8.3]{Kol96}).
We need the following result.
\begin{lem}[cf. Exercise IV.6.8.3.2 of \cite{Kol96}]\label{l-pt-dP2}
Let $k$ be a $C_1$-field. Let $S$ be a weighted hypersurface of degree 4 in $\mathbb{P}_k(1,1,1,2)$.
Then $S(k) \neq \emptyset$.
\end{lem}
\begin{proof}
Let us recall the definition of normic forms (\cite[Definition IV.6.4.2]{Kol96}).
A homogeneous polynomial $h \in k[y_1, ..., y_m]$ of degree $m$
is called a normic form if $h=0$ has only the trivial solution in $k$.
If $k$ has a normic form of degree two, then the same argument as in the proof of \cite[Theorem IV.6.7]{Kol96} works.
Suppose now that $k$ does not have a normic form of degree two.
We can write $\mathbb{P}_k(1,1,1,2) = \Proj\,k[x_0, x_1, x_2, x_3]$,
where $\deg x_0 = \deg x_1 =\deg x_2=1$ and $\deg x_3=2$.
Let
\[
F(x_0, x_1, x_2, x_3):= c x_3^2+f(x_0, x_1, x_2)x_3+g(x_0, x_1, x_2) \in k[x_0, x_1, x_2, x_3]
\]
be the defining polynomial of $S$,
where $c \in k$ and $f(x_0, x_1, x_2), g(x_0, x_1, x_2) \in k[x_0, x_1, x_2]$.
If $c=0$, then $F(0, 0, 0, 1)=0$.
Thus, we may assume that $c \neq 0$.
Fix $(a_0, a_1, a_2) \in k^3 \setminus \{(0, 0, 0)\}$.
Set $\alpha:=f(a_0, a_1, a_2) \in k$ and $\beta:=g(a_0, a_1, a_2) \in k$.
Since $h(X, Y):=cX^2 +\alpha XY +\beta Y^2$ is not a normic form,
there is $(u, v) \in k^2 \setminus \{(0, 0)\}$ such that $h(u, v) = cu^2+\alpha uv + \beta v^2 =0$.
Since $c \neq 0$, we obtain $v \neq 0$.
Therefore, it holds that $F(a_0, a_1, a_2, u/v)=c(u/v)^2+\alpha (u/v)+\beta=0$, as desired.
\end{proof}
\begin{lem}\label{l-rat-pts-low-deg}
Let $X$ be a geometrically normal regular del Pezzo surface over a $C_1$-field $k$ of characteristic $p>0$ such that $k=H^0(X, \mathcal{O}_X)$. If $K_X^2 \leq 4$, then $X(k) \neq \emptyset$.
\end{lem}
\begin{proof}
Since $X$ is geometrically normal, it is geometrically canonical by Theorem \ref{t-classify-bc}.
Thus we can apply Theorem \ref{t-dP-small-degree} and
we distinguish the cases according to the degree of $K_X$.
If $K_X^2=1$, then $X$ has a $k$-rational point by Proposition \ref{p-antican-ring-dP}(2).
If $K_X^2=2$, then $X$ can be embedded as a weighted hypersurface of degree 4 in $\mathbb{P}_k (1,1,1,2)$ and we apply Lemma \ref{l-pt-dP2} to conclude it has a $k$-rational point.
If $K_X ^2=3$, then $X$ is a cubic hypersurface in $\mathbb{P}^3_k$ and thus it has a $k$-rational point by definition of $C_1$-field.
If $K_X^2=4$, then $X$ is a complete intersection of two quadrics in $\mathbb{P}^4_k$ and thus it has a $k$-rational point by \cite[Corollary on page 376]{Lan52}.
\end{proof}
We now discuss the existence of purely inseparable points on geometrically normal regular del Pezzo surfaces over $C_1$-fields.
\begin{prop}\label{p-rat-point-p-geq-7}
Let $X$ be a regular del Pezzo surface over a $C_1$-field $k$ of characteristic $p \geq 7$ such that $k=H^0(X, \mathcal{O}_X)$. Then $X(k) \neq \emptyset$.
\end{prop}
\begin{proof}
If $X$ is a smooth del Pezzo surface, we conclude that there exists a $k$-rational point by \cite[Theorem IV.6.8]{Kol96}.
If $p \geq 11$, then $X$ is smooth by Proposition \ref{p-dP-large-p1} and we conclude.
It suffices to treat the case when $p=7$ and $X$ is not smooth.
By Theorem \ref{t-p2-bound}(2), $X$ is geometrically canonical.
By \cite[Theorem 6.1]{Sch08}, any singular point of
the base change $X_{\overline k} = X \times_k \overline k$
is of type $A_{p^n-1}$.
It follows from Lemma \ref{l-degree-picardrank}
that $X_{\overline{k}}$ has a unique $A_6$ singular point.
Thus by Lemma \ref{l-degree-picardrank} we have $K_X^2 \leq 3$,
hence Lemma \ref{l-rat-pts-low-deg} implies $X(k) \neq \emptyset$.
\end{proof}
\begin{prop} \label{p-ins-rat-point-3,5}
Let $X$ be a regular del Pezzo surface over a $C_1$-field $k$ of characteristic $p \in \left\{3,5 \right\}$ such that $k=H^0(X, \mathcal{O}_X)$.
If $X$ is geometrically normal over $k$, then $X(k^{1/p}) \neq \emptyset$.
\end{prop}
\begin{proof}
It is sufficient to consider the case when $X$ is not smooth by \cite[Theorem IV.6.8]{Kol96}.
By Theorem \ref{t-classify-bc}, $X_{\overline{k}}$ has canonical singularities.
If $p=5$ and $X$ is not smooth, then the singularities of $X_{\overline{k}}$ must be of type $A_4$ or $E^0_8$ according to \cite[Theorem 6.1 and Theorem 6.4]{Sch08}.
If $X_{\overline{k}}$ has one singular point of type $E_8^0$ or two singular points of type $A_4$, then $K_X^2=1$ by Lemma \ref{l-degree-picardrank}. Thus we conclude that $X$ has a $k$-rational point by Lemma \ref{l-rat-pts-low-deg}.
If $X_{\overline{k}}$ has a unique singular point of type $A_4$, it follows from Proposition \ref{p-insep-bdd-rat-pts} that $X(k^{1/p}) \neq \emptyset$.
If $p=3$ and $X$ is not smooth, then the singularities of $X_{\overline{k}}$ must be of type $A_2$, $A_8$, $E_6^0$ or $E_8^0$ according to \cite[Theorem 6.1 and Theorem 6.4]{Sch08}.
If one of the singular points is of type $A_8$, $E_6^0$, or $E_8^0$, then $K_X^2 \leq 3$ by Lemma \ref{l-degree-picardrank} and we conclude $X(k) \neq \emptyset$ by Lemma \ref{l-rat-pts-low-deg}.
Thus, we may assume that all the singularities of $X_{\overline k}$ are of type $A_2$.
If there is a unique singularity of type $A_2$ on $X_{\overline k}$,
then it follows from Proposition \ref{p-insep-bdd-rat-pts} that $X(k^{1/3}) \neq \emptyset$.
Therefore, we may assume that
there are at least two singularities of type $A_2$ on $X_{\overline k}$.
Then it holds that $K_X^2 \leq 5$.
By \cite[Table 8.5 on page 431]{Dol12}, we have that $K_X^2 \neq 5$,
hence $K_X^2 \leq 4$. Thus Lemma \ref{l-rat-pts-low-deg} implies $X(k) \neq \emptyset$.
\end{proof}
\begin{prop} \label{p-ins-rat-point-2}
Let $X$ be a regular del Pezzo surface over a $C_1$-field $k$ of characteristic $p=2$ such that $k=H^0(X, \mathcal{O}_X)$.
If $X$ is geometrically normal, then $X(k^{1/4}) \neq \emptyset$.
\end{prop}
\begin{proof}
It is sufficient to consider the case when $X$ is not smooth by \cite[Theorem IV.6.8]{Kol96}.
The singularities of $X_{\overline{k}}$ are canonical by Theorem \ref{t-classify-bc}.
Hence, by \cite[Theorem on page 57]{Sch08}, they must be of type $A_1$, $A_3$, $A_7$, $D_n^0$ with $4 \leq n \leq 8$, or $E_n^0$ for $n=6,7,8$.
We distinguish five cases for the singularities appearing on $X_{\overline{k}}$.
\begin{enumerate}
\item
There exists at least one singular point of type $A_7$, $D_n^0$ with $n \geq 5$, or $E_n^0$ for $n=6,7,8$.
\item There are at least two singular points with one being of type $A_3$.
\item There exists at least one singular point of type $D_4^0$.
\item There is a unique singular point of type $A_3$.
\item All the singular points are of type $A_1$.
\end{enumerate}
In case (1), it holds that $K_X^2 \leq 4$.
Hence, we obtain $X(k) \neq \emptyset$ by Lemma \ref{l-rat-pts-low-deg}.
In case (2),
if $K_X^2 \leq 4$, then Lemma \ref{l-rat-pts-low-deg} again implies $X(k) \neq \emptyset$.
Hence, we may assume that $K_X^2=5$.
Then there exist exactly two singular points $P$ and $Q$ on $X_{\overline k}$
such that $P$ is of type $A_3$ and $Q$ is of type $A_1$.
However, this cannot occur by \cite[Table 8.5 on page 431]{Dol12}.
In case (3) we have that $K_X^2 \leq 5$. However, a $D_4^0$ singularity cannot appear on a del Pezzo surface of degree five according to \cite[Table 8.5 on page 431]{Dol12}.
Thus $K_X^2 \leq 4$ and Lemma \ref{l-rat-pts-low-deg} implies $X(k) \neq \emptyset$.
In case (4), we apply Proposition \ref{p-insep-bdd-rat-pts} to conclude that $X(k^{1/4}) \neq \emptyset$.
In case (5), consider $X_{(k^{\sep})^{1/2}}$.
By Proposition \ref{p-insep-bdd-rat-pts}, on $X_{(k^{\sep})^{1/2}}$ there are singular points $\left\{P_i\right\}_{i=1}^m$ of type $A_1$ such that $\kappa(P_i)=(k^{\sep})^{1/2}$ and their union $\coprod_i P_i$ is the non-smooth locus of $X_{(k^{\sep})^{1/2}}$.
Let $Y= \text{Bl}_{\coprod_i P_i} X_{(k^{\sep})^{1/2}}$
be the blowup of $X_{(k^{\sep})^{1/2}}$ along $\coprod_i P_i$.
Since each $P_i$ is a $(k^{\sep})^{1/2}$-rational point
whose base change to the algebraic closure is a canonical singularity of type $A_1$, the surface $Y$ is smooth.
Since the closed subscheme $\coprod_i P_i$ is invariant under the action of the Galois group $\text{Gal}((k^{\sep})^{1/2}/k ^{1/2})$,
the birational $(k^{\sep})^{1/2}$-morphism $Y \rightarrow X_{(k^{\sep})^{1/2}}$ descends
to a birational $k^{1/2}$-morphism $Z \rightarrow X_{k^{1/2}}$,
where $Z$ is a smooth projective surface over $k^{1/2}$
whose base change to the algebraic closure is a rational surface.
It holds that $Z(k^{1/2}) \neq \emptyset$ by \cite[Theorem IV.6.8]{Kol96}, which implies $X(k^{1/2}) \neq \emptyset$.
\end{proof}
\subsection{Purely inseparable points on Mori fibre spaces}\label{ss2-pi-pts}
In this subsection, we discuss the existence of purely inseparable points on log del Pezzo surfaces over $C_1$-fields admitting Mori fibre space structures onto curves.
We start by recalling auxiliary results.
\begin{lem}\label{l-conic-rat-pt}
Let $k$ be a $C_1$-field and let $C$ be a regular projective curve such that $k=H^0(C, \mathcal{O}_C)$ and $-K_C$ is ample.
Then it holds that $C \simeq \mathbb P^1_k$.
In particular, $C(k) \neq \emptyset$.
\end{lem}
\begin{proof}
Since $C$ is a geometrically integral conic curve in $\mathbb P^2_k$ (Lemma \ref{l-conic}),
the assertion follows from definition of $C_1$-field.
\end{proof}
\begin{lem} \label{l-suff-rat-pts-base}
Let $X$ be a regular projective surface over a $C_1$-field $k$ of characteristic $p>0$ such that $k=H^0(X, \mathcal{O}_X)$.
Let $\pi \colon X \rightarrow B$ be a $K_X$-Mori fibre space to a regular projective curve $B$.
Then the following hold.
\begin{enumerate}
\item
Let $k \subset k'$ be an algebraic field extension.
If $B(k') \neq \emptyset$, then $X(k') \neq \emptyset$.
\item
If $-K_B$ is ample, then $X(k) \neq \emptyset$.
\end{enumerate}
\end{lem}
\begin{proof}
Let us show (1).
Let $b$ be a closed point in $B$ such that $k \subset \kappa(b) \subset k'$.
By Proposition \ref{p-MFS-basic}, the fibre $X_b$ is a conic in $\mathbb{P}^2_{\kappa(b)}$.
By \cite[Corollary on page 377]{Lan52}, $\kappa(b)$ is a $C_1$-field,
hence we deduce $X_{\kappa(b)}(\kappa(b)) \neq \emptyset$.
Thus, (1) holds.
The assertion (2) follows from Lemma \ref{l-conic-rat-pt} and (1) for the case when $k'=k$.
\end{proof}
To discuss the case when $p=2$,
we first handle a complicated case in characteristic two.
\begin{prop}\label{p-weird}
Let $k$ be a field of characteristic two such that $[k:k^2] \leq 2$.
Let $X$ be a regular $k$-surface of del Pezzo type and
let $\pi:X \to B$ be a $K_X$-Mori fibre space to a curve $B$.
Let $\Gamma$ be a curve which spans the $K_X$-negative extremal ray
not corresponding to $\pi$.
Assume that
\begin{enumerate}
\item $K_X \cdot \Gamma>0$, and
\item $K(\Gamma)/K(B)$ is an inseparable extension of degree four
which is not purely inseparable.
\end{enumerate}
Then $-K_B$ is ample.
\end{prop}
\begin{proof}
We divide the proof into several steps.
\setcounter{step}{0}
\begin{step}\label{s1-weird}
In order to show the assertion of Proposition \ref{p-weird},
we may assume that
\begin{enumerate}
\setcounter{enumi}{2}
\item $B$ is not smooth over $k$,
\item $\pdeg(k)=1$, i.e. $[k:k^2]=2$, and
\item the generic fibre of $\pi$ is not geometrically reduced.
\end{enumerate}
\end{step}
\begin{proof}
If (3) does not hold, then $B$ is a smooth curve over $k$.
Since $(X_{\overline k})_{\red}$ is a rational surface by Lemma \ref{l-rationality},
$B_{\overline k}$ is a smooth rational curve.
Then $-K_B$ is ample, as desired.
Thus, from now on, we may assume that (3) holds.
If (4) does not hold, then $k$ is a perfect field.
In this case, $B$ is smooth over $k$, which contradicts (3).
Thus, we may assume (4).
Let us prove the assertion of Proposition \ref{p-weird} if (5) does not hold.
In this case, the generic fibre $X_{K(B)}$ of $\pi:X \to B$ is a geometrically integral regular conic over $K(B)$.
Thus it is smooth over $K(B)$ by Lemma \ref{p-Fano-curve}.
We use notation as in Notation \ref{n-ess-klt-case}.
Lemma \ref{l-ess-klt-case}(8) enables us to find
a rational number $\alpha$ such that $ 0\leq \alpha <1$ and $(X, \alpha \Gamma)$
is a log del Pezzo pair.
Then Lemma \ref{l-alpha-upper}(3) implies that $\alpha m_{\Gamma} <2$.
Since our assumption (2) implies $m_{\Gamma}=[K(\Gamma):K(B)]=4$, we have that $\alpha <1/2$.
By the assumption (2) and $\alpha<1/2$,
the induced pair $(X_{\overline{K(B)}}, \alpha \Gamma|_{X_{\overline{K(B)}}})$
on the geometric generic fibre is $F$-pure.
It follows from \cite[Corollary 4.10]{Eji} that $-K_B$ is ample.
Hence, we may assume that (5) holds.
This completes the proof of Step \ref{s1-weird}.
\end{proof}
From now on, we assume that (3)--(5) of Step \ref{s1-weird} hold.
\begin{step}\label{s2-weird}
$X$ and $B$ are geometrically integral over $k$.
$X$ is not geometrically normal over $k$.
\end{step}
\begin{proof}
Since $[k:k^2]=2$,
it follows from \cite[Theorem 2.3]{Sch10}
that $X$ and $B$ are geometrically integral over $k$
(note that $\log_2 [k:k^2]$ is called the degree of imperfection for $k$
in \cite[Theorem 2.3]{Sch10}).
If $X$ is geometrically normal over $k$,
then also $B$ is geometrically normal over $k$, i.e. $B$ is smooth over $k$.
This contradicts (3) of Step \ref{s1-weird}.
This completes the proof of Step \ref{s2-weird}.
\end{proof}
We now introduce some notation.
Set $k_1:=k^{1/2}$.
By Step \ref{s2-weird}, $X \times_k k_1$ is integral and non-normal
(cf. \cite[Proposition 2.10(3)]{Tan19}).
Let $\nu:X_1:=(X \times_k k_1)^N \to X \times_k k_1$ be its normalisation.
Let $X_1 \to B_1$ be the Stein factorisation of the induced morphism $X_1 \to X \to B$.
To summarise, we have a commutative diagram
\[
\begin{CD}
X_1 @>\nu >> X \times_k k_1 @>>> X\\
@VVV @VVV @VVV\\
B_1 @>>> B \times_k k_1 @>>> B.
\end{CD}
\]
Let $C \subset X \times_k k_1$ and $D \subset X_1$ be the closed subschemes
defined by the conductors for $\nu$.
For $K:=K(B)$, we apply the base change $(-) \times_B \Spec\,K$ to the above diagram:
\[
\begin{CD}
V_1 @>>> V \times_K L @>>> V\\
@VVV @VVV @VVV\\
\Spec\,K_1 @>>> \Spec\,L @>>> \Spec\,K,
\end{CD}
\]
where $V:=X \times_B K$, $L:=K(B \times_k k_1) = K(B) \otimes_k k_1$, and $K_1=K(B_1)$.
Since taking Stein factorisations commutes with flat base change,
the morphism $V_1 \to \Spec\,K_1$ coincides with the Stein factorisation of the induced morphism $V_1 \to \Spec\,K$.
\begin{step}\label{s3-weird}
$C$ dominates $B$.
\end{step}
\begin{proof}
Assuming that $C$ does not dominate $B$,
let us derive a contradiction.
Since $B$ is geometrically integral over $k$ (Step \ref{s2-weird}),
we can find a non-empty open subset $B'$ of $B$ such that $B'$ is smooth over $k$ and
the image of $C$ on $B$ is disjoint from $B'$.
Let $B'_1, X',$ and $X_1'$ be the inverse images of $B'$ to $B_1, X,$ and $X_1$, respectively.
Then the resulting diagram is as follows
\[
\begin{CD}
X'_1 @>\simeq >> X' \times_k k_1 @>>> X'\\
@VVV @VVV @VV\pi' V\\
B'_1 @>\simeq >> B' \times_k k_1 @>>> B'.
\end{CD}
\]
Since $X'_1 \simeq X' \times_k k_1 = X' \times_k k^{1/2}$ is normal,
it holds that $X'$ is geometrically normal over $k$.
Let $\pi'_{\overline k}:X'_{\overline k} \to B'_{\overline k}$ be the base change
of $\pi'$ to the algebraic closure $\overline k$.
Since $X'$ is geometrically normal over $k$,
$X'_{\overline k}$ is a normal surface.
Note that $B'_{\overline k}$ is a smooth curve.
Since general fibres of $\pi'_{\overline k}:X'_{\overline k} \to B'_{\overline k}$
are $K_{X'_{\overline k}}$-negative and $(\pi'_{\overline k})_*\MO_{X'_{\overline k}} = \MO_{B'_{\overline k}}$,
general fibres of $\pi'_{\overline k}$ are isomorphic to $\mathbb P^1_{\overline k}$.
Then the generic fibre of $\pi'_{\overline k}:X'_{\overline k} \to B'_{\overline k}$ is smooth,
hence so is the generic fibre of $\pi:X \to B$.
This contradicts (5) of Step \ref{s1-weird}.
This completes the proof of Step \ref{s3-weird}.
\end{proof}
\begin{step}\label{s4-weird}
The following hold.
\begin{enumerate}
\item[(i)] $L/K$ is a purely inseparable extension of degree two.
\item[(ii)] $V$ is a regular conic curve on $\mathbb P^2_K$ which is not geometrically reduced over $K$.
\item[(iii)] $V_1 \to V \times_K L$ is the normalisation of $V \times_K L$.
\item[(iv)] $V \times_K L$ is an integral scheme which is not regular.
\item[(v)] The restriction $D|_{V_1}$ of the conductor $D$ to $V_1$
satisfies $D|_{V_1} =Q$, where $Q$ is a $K_1$-rational point.
\item[(vi)] $V_1$ is isomorphic to $\mathbb P^1_{K_1}$.
\item[(vii)] $[K_1:L]$ is a purely inseparable extension of degree two, and $K_1=K^{1/2}$.
\end{enumerate}
\end{step}
\begin{proof}
The assertions (i)--(iii) follow from the construction.
Step \ref{s3-weird} implies (iv).
Let us show (v).
For the induced morphism $\varphi:V_1 \to V$, we have that
\[
K_{V_1}+ D|_{V_1} \sim \varphi^*K_V.
\]
Since $-K_V$ is ample, it holds that
\[
0> \deg_{K_1}(K_{V_1}+D|_{V_1}) \geq -2+\deg_{K_1}(D|_{V_1}),
\]
which implies $\deg_{K_1} (D|_{V_1}) \leq 1$.
Step \ref{s3-weird} implies that $D|_{V_1} \neq 0$,
hence $D|_{V_1}$ consists of a single rational point.
Thus, (v) holds.
Let us show (vi).
Since $V_1$ has a $K_1$-rational point around which $V_1$ is regular,
$V_1$ is smooth around this point.
In particular, Lemma \ref{l-gred-open} implies that $V_1$ is geometrically reduced.
Then $V_1$ is a geometrically integral conic curve in $\mathbb P^2_{K_1}$.
Therefore, $V_1$ is smooth over $K_1$.
Since $V_1$ has a $K_1$-rational point,
$V_1$ is isomorphic to $\mathbb P^1_{K_1}$.
Thus, (vi) holds.
Let us show (vii).
The inclusion $K_1 \subset K^{1/2}$, which is equivalent to $K_1^2 \subset K$,
follows from the fact that
$K$ is algebraically closed in $K(V)$ and the following:
\[
K_1^2 \subset K(V_1)^2 = K(V \times_K L)^2 = (K(V) \otimes_K L)^2 \subset K(V).
\]
It follows from \cite[Theorem 3]{BM40} that
the $p$-degree $\pdeg(K)$ is two, i.e. $[K^{1/2}:K]=4$
(note that the $p$-degree is called the degree of imperfection in \cite{BM40}).
Hence, it is enough to show that $K_1 \neq L$.
Assume that $K_1 = L$.
Then $V_1$ is smooth over $L$ by (vi).
Hence, $V \times_K L$ is geometrically integral over $L$.
Therefore, $V$ is geometrically integral over $K$, which contradicts (5) of Step \ref{s1-weird}.
This completes the proof of Step \ref{s4-weird}.
\end{proof}
\begin{step}\label{s5-weird}
Set-theoretically, $C$ does not contain $\Gamma \times_k k_1$.
\end{step}
\begin{proof}
Assuming that $C$ contains $\Gamma \times_k k_1$,
let us derive a contradiction.
In this case, the set-theoretic inclusion
\[
f^{-1}(\Gamma) \subset \nu^{-1}(C) = D
\]
holds, where $f:X_1 \to X$ is the induced morphism.
Since $B_1 \to B$ is a universal homeomorphism and
the geometric generic fibre
$\Gamma \times_B \Spec\,\overline{K}$ of $\Gamma \to B$ consists of two points,
the geometric generic fibre of $D \to B_1$ contains two distinct points.
In particular, it holds that $\deg_{K_1}(D|_{V_1}) \geq 2$.
However, this contradicts (v) of Step \ref{s4-weird}.
This completes the proof of Step \ref{s5-weird}.
\end{proof}
\begin{step}\label{s6-weird}
$-K_{B_1}$ is ample.
\end{step}
\begin{proof}
It follows from Lemma \ref{l-ess-klt-case}(8) that
there is a rational number $\alpha$ such that $0\leq \alpha<1$ and $(X, \alpha \Gamma)$
is a log del Pezzo pair.
Consider the pullback:
\[
K_{X_1}+D +\alpha f^*\Gamma = f^*(K_X+ \alpha \Gamma).
\]
Take the geometric generic fibre $W$ of $\pi_1:X_1 \to B_1$,
i.e. $W=V_1 \times_{K_1} \Spec\,\overline{K}_1 \simeq \mathbb P^1_{\overline K_1}$ (Step \ref{s4-weird}(vi)).
It is clear that $-(K_W+(D +\alpha f^*\Gamma)|_W)$ is ample.
Since $D|_{V_1}=Q$ is a rational point (Step \ref{s4-weird}(v)),
its pullback $D|_W=:Q_W$ to $W$ is a closed point on $W$.
As $-(K_W+(D+ \alpha f^*\Gamma)|_W)$ is ample,
all the coefficients of $(\alpha f^*\Gamma)|_W$ must be less than one.
Therefore, Step \ref{s5-weird} implies that $(W, (D+ \alpha f^*\Gamma)|_W)$ is $F$-pure.
It follows from \cite[Corollary 4.10]{Eji} that
$-K_{B_1}$ is ample.
This completes the proof of Step \ref{s6-weird}.
\end{proof}
\begin{step}\label{s7-weird}
$-K_B$ is ample.
\end{step}
\begin{proof}
As $-K_{B_1}$ is ample (Step \ref{s6-weird}),
Lemma \ref{l-conic} implies that $H^1(B_1, \MO_{B_1})=0$.
Since $K(B_1)=K(B)^{1/2}$ (Step \ref{s4-weird}(vii)),
the morphism $B_1 \to B$ coincides with the absolute Frobenius morphism
of $B$.
Hence, $B_1$ and $B$ are isomorphic as schemes.
Thus, the vanishing $H^1(B_1, \MO_{B_1})=0$ implies $H^1(B, \MO_B)=0$.
Then $-K_B$ is ample by Lemma \ref{l-conic}.
This completes the proof of Step \ref{s7-weird}.
\end{proof}
Step \ref{s7-weird} completes the proof of Proposition \ref{p-weird}.
\end{proof}
\begin{prop}\label{p-rat-point-mfs}
Let $X$ be a regular $k$-surface of del Pezzo type over a $C_1$-field $k$ of characteristic $p>0$ such that $k=H^0(X, \mathcal{O}_X)$.
Let $\pi \colon X \rightarrow B$ be a $K_X$-Mori fibre space
to a regular projective curve.
Then the following hold.
\begin{enumerate}
\item If $p \geq 7$, then $X(k) \neq \emptyset$.
\item If $p \in \left\{ 3, 5 \right\}$, then $X(k^{1/p}) \neq \emptyset$.
\item If $p = 2$, then $X(k^{1/4}) \neq \emptyset$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $R=\R_{\geq 0}[\Gamma]$ be the extremal ray of $\overline{\NE}(X)$ not corresponding to $\pi:X \to B$.
In particular, we have $\pi(\Gamma)=B$.
We distinguish two cases:
\begin{enumerate}
\item[(I)] $K_X \cdot \Gamma \leq 0$;
\item[(II)] $K_X \cdot \Gamma >0$.
\end{enumerate}
Suppose that (I) holds.
In this case, $-K_X$ is nef and big.
If $p>2$, then the generic fibre $X_{K(B)}$ is a smooth conic.
In particular, the base change $X_{\overline{K(B)}}$ is strongly $F$-regular.
By \cite[Corollary 4.10]{Eji}, $-K_B$ is ample.
Hence, Lemma \ref{l-suff-rat-pts-base}(2) implies $X(k) \neq \emptyset$.
We now treat the case when (I) holds and $p=2$.
Then $-K_X$ is semi-ample and big.
Let $Z$ be its anti-canonical model.
In particular, $Z$ is a canonical del Pezzo surface.
By Theorem \ref{t-p2-bound}, we have $\ell_F(Z/k) \leq 2$.
Therefore, for $k_W:=k^{1/4}$ and $W:=(Z \times_k k_W)^N_{\red}$, $W$ is geometrically normal over $k_W$. In particular, $H^0(W, \mathcal{O}_W) = k_W=k^{1/4}$.
We have the following commutative diagram
\[
\begin{CD}
Y @> \nu >> X \times_k k^{1/4} @>>> X \\
@V f VV @VVV @VVV\\
W @> \mu >> Z \times_{k} k^{1/4} @>>> Z \\
@VVV @VVV @VVV\\
\Spec\,k^{1/4} @= \Spec\,k^{1/4} @>>> \Spec\,k,
\end{CD}
\]
where $\mu$ and $\nu$ are the normalisations.
It follows from Theorem \ref{t-classify-bc}
that $W$ is geometrically klt and $H^1(W, \mathcal{O}_W)=0$.
Since the morphism $Y \rightarrow W$ is birational and $W$ is klt by Proposition \ref{p-klt-descent}, it holds that $H^1(Y, \mathcal{O}_Y)=0$.
Consider the Stein factorisation $\pi_1 \colon Y \rightarrow B_1$ of the induced morphism
$Y \rightarrow X \xrightarrow{\pi} B$.
Since $H^1(Y, \mathcal{O}_Y)=0$, we conclude that $H^1(B_1, \mathcal{O}_{B_1})=0$.
In particular, since $k_W$ is a $C_1$-field,
it holds that $B_1 \simeq \mathbb{P}^1_{k_W}$ (Lemma \ref{l-conic-rat-pt}).
Thanks to \cite[Theorem 4.2]{Tan18b},
we can find an effective divisor $D$ on $Y$ such that $K_Y+D$ coincides with the pullback of $K_X$ under the induced morphism $Y \to X$.
Since $-K_X$ is big, also $-K_Y$ is big.
Fix a general $k_W$-rational point $c \in B_1$ and let $F_c$ be its $\pi_1$-fibre.
Since we take $c$ to be general, $F_c$ avoids the non-regular points of $Y$.
By adjunction, $\omega_{F_c}^{-1}$ is ample.
This implies that $F_c$ is a conic in $\mathbb P^2_{k_W}$.
Since $k_W$ is a $C_1$-field, $F_c$ has a $k_W$-rational point, and hence $Y(k^{1/4})= Y(k_W) \neq \emptyset$.
Therefore, we deduce $X(k^{1/4}) \neq \emptyset$.
We suppose (II) holds.
We have $[K(\Gamma):K(B)] \leq 5$ by Proposition \ref{p-cov-deg-bound}.
If $K(\Gamma)/K(B)$ is separable, then $-K_B$ is ample (Lemma \ref{l-sep-or-p-insep}).
Then Lemma \ref{l-suff-rat-pts-base}(2) implies $X(k) \neq \emptyset$.
Hence, we may assume that $K(\Gamma)/K(B)$ is inseparable.
If $K(\Gamma)/K(B)$ is not purely inseparable,
then $-K_B$ is ample by Proposition \ref{p-weird}.
Again, Lemma \ref{l-suff-rat-pts-base}(2) implies $X(k) \neq \emptyset$.
Hence, it is enough to treat the case when
$K(\Gamma)/K(B)$ is purely inseparable.
Since $[K(\Gamma):K(B)] \leq 5$, it suffices to prove that $X(k^{1/p^e}) \neq \emptyset$
for the positive integer $e$ defined by $[K(\Gamma):K(B)]=p^e$.
Set $C:=\Gamma^N$.
Since $\omega_{\Gamma}^{-1}$ is ample, also $-K_C$ is ample.
Hence Lemma \ref{l-conic-rat-pt} implies $C(k') \neq \emptyset$,
where $k':=H^0(C, \MO_C)$; note that $k'$ is a finite extension of $k$ and hence again a $C_1$-field.
Since
\[
k'^{p^e} \subset K(\Gamma)^{p^e} \subset K(B),
\]
it holds that $k'^{p^e} \subset k$.
Therefore, we obtain $X(k^{1/p^e}) \neq \emptyset$, as desired.
\end{proof}
\subsection{General case}\label{ss3-pi-pts}
In this subsection,
using the results proven above,
we prove the main result in this section (Theorem \ref{t-ex-rat-points-dP}).
We present a generalisation of the Lang--Nishimura theorem on rational points.
Although the argument is similar to the one in \cite[Proposition A.6]{RY00},
we include the proof for the sake of completeness.
\begin{lem}[Lang-Nishimura]\label{l-inv-rat-pts-reg}
Let $k$ be a field.
Let $f : X \dashrightarrow Y$ be a rational map
between $k$-varieties.
Suppose that $X$ is regular and $Y$ is proper over $k$.
Fix a closed point $P$ on $X$.
Then there exists a closed point $Q$ on $Y$
such that $k \subset \kappa(Q) \subset \kappa(P)$, where $\kappa(P)$ and $\kappa(Q)$ denote the residue fields.
\end{lem}
\begin{proof}
The proof is by induction on $n:=\dim X$.
If $n=0$, then there is nothing to show.
Suppose $n>0$.
Consider the blowup $\pi:\text{Bl}_P X \to X$ at the closed point $P$.
Since $X$ is regular,
the $\pi$-exceptional divisor $E$ is isomorphic to $\mathbb{P}^{n-1}_{\kappa(P)}$ by \cite[Section 8, Theorem 1.19]{Liu02}.
Consider now the induced map $f:\text{Bl}_{P} X \dashrightarrow Y$.
By the valuative criterion of properness, the map $f$ induces a rational map $E=\mathbb{P}^{n-1}_{\kappa(P)} \dashrightarrow Y$ from the $\pi$-exceptional divisor $E$.
Then by the induction hypothesis $Y$ has a closed point $Q$ whose residue field is contained in $\kappa(P)$.
\end{proof}
\begin{thm} \label{t-ex-rat-points-dP}
Let $k$ be a $C_1$-field of characteristic $p>0$. Let $X$ be a $k$-surface of del Pezzo type such that $k=H^0(X, \mathcal{O}_X)$.
Then the following hold.
\begin{enumerate}
\item If $p \geq 7$, then $X(k) \neq \emptyset$;
\item If $p \in \left\{ 3,5 \right\}$, then $X(k^{1/p}) \neq \emptyset$;
\item If $p =2$ , then $X(k^{1/4}) \neq \emptyset$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $Y \to X$ be the minimal resolution of $X$.
We run a $K_Y$-MMP
$Y=:Y_0 \to Y_1 \to \cdots \to Y_n =: Z$.
Note that the end result is a Mori fibre space.
Thanks to Lemma \ref{l-inv-rat-pts-reg},
we may replace $X$ by $Z$.
Hence it is enough to treat the following two cases.
\begin{enumerate}
\item[(i)] $X$ is a regular del Pezzo surface with $\rho(X)=1$.
\item[(ii)] There exists a Mori fibre space structure $ \pi \colon X \rightarrow B$ to a curve $B$.
\end{enumerate}
Assume (i).
By Lemma \ref{l-Cr-pdeg}, we have $\pdeg (k) \leq 1$.
Therefore $X$ is geometrically normal by \cite[Theorem 14.1]{FS18}.
Thus we conclude by Propositions \ref{p-rat-point-p-geq-7},
\ref{p-ins-rat-point-3,5}, and \ref{p-ins-rat-point-2}.
If (ii) holds, then
the assertion follows from Proposition \ref{p-rat-point-mfs}.
\end{proof}
\section{Pathological examples}\label{s-patho}
In this section, we collect pathological features appearing on surfaces of del Pezzo type over imperfect fields.
\subsection{Summary of known results}\label{ss1-patho}
We first summarise previously known examples of pathologies appearing on del Pezzo surfaces over imperfect fields.
\subsubsection{Geometric properties}
We have shown that if $p \geq 7$ and $X$ is a surface of del Pezzo type,
then $X$ is geometrically integral (Corollary \ref{c-geom-red-7}).
We have established a partial result on geometric normality (Theorem \ref{t-dP-large-p}).
Let us summarise known examples in small characteristic related to these properties.
\begin{enumerate}
\item
Let $\mathbb F$ be a perfect field of characteristic $p>0$ and
let $k:=\mathbb F(t_1, t_2, t_3)$.
Then
\[
X:=\Proj\,k[x_0, x_1, x_2, x_3]/(x_0^p+t_1x_1^p+t_2x_2^p+t_3x_3^p)
\]
is a regular projective surface which is not geometrically reduced over $k$.
It is easy to show that $H^0(X, \MO_X)=k$.
If the characteristic of $k$ is two or three, then $-K_X$ is ample,
hence $X$ is a regular del Pezzo surface.
\item There exist a field of characteristic $p=2$ and a regular del Pezzo surface $X$ over $k$
such that $H^0(X, \MO_X)=k$, $X$ is geometrically reduced over $k$, and $X$ is not geometrically normal over $k$ (see \cite[Main Theorem]{Mad16}).
\item If $k$ is an imperfect field of characteristic $p=2,3$ there exists a geometrically normal regular del Pezzo surface $X$ of Picard rank one which is not smooth (see \cite[Section 14, Equation 27]{FS18}). In \cite[Theorem 14.8]{FS18}, an example of a regular geometrically integral but geometrically non-normal del Pezzo surface of Picard rank two is constructed when $p=2$.
\item If $k$ is an imperfect field of characteristic $p \in \{2, 3\}$,
then there exists a $k$-surface $X$ of del Pezzo type
such that $H^0(X, \MO_X)=k$, $X$ is geometrically reduced over $k$, and $X$ is not geometrically normal over $k$ (\cite{Tan}).
\end{enumerate}
\subsubsection{Vanishing of $H^1(X, \MO_X)$}\label{ss}
We have shown that if $X$ is a surface of del Pezzo type over a field of characteristic $p \geq 7$,
then $H^i(X, \MO_X)=0$ for $i>0$.
Let us summarise known examples in small characteristic which violate
the vanishing of $H^1(X, \MO_X)$.
\begin{enumerate}
\item
If $k$ is an imperfect field of characteristic $p=2$,
then there exists a regular weak del Pezzo surface $X$ such that $H^1(X, \MO_X) \neq 0$
(see \cite{Sch07}).
\item
There exist an imperfect field of characteristic $p=2$ and
a regular del Pezzo surface $X$ such that $H^1(X, \MO_X) \neq 0$
(see \cite[Main theorem]{Mad16}).
\item
If $k$ is an imperfect field of characteristic $p \in \{2, 3\}$,
then there exists a surface $X$ of del Pezzo type such that $H^1(X, \MO_X) \neq 0$
(see \cite{Tan}).
\end{enumerate}
\begin{rem}
Since $h^1(X, \mathcal{O}_X)$ is a birational invariant for surfaces with klt singularities,
the previous examples do not admit regular $k$-birational models which are geometrically normal.
This shows that Theorem \ref{t-dP-large-p} cannot be extended to characteristic two and three.
\end{rem}
\subsection{Non-smooth regular log del Pezzo surfaces}
In this subsection, we construct examples of regular $k$-surfaces of del Pezzo type which are not smooth (cf. Theorem \ref{t-dP-large-p}).
\begin{prop}\label{p-count}
Let $k$ be an imperfect field of characteristic $p>0$. Then there exists a regular $k$-surface $X$ of del Pezzo type which is not smooth over $k$.
\end{prop}
\begin{proof}
Fix a $k$-line $L$ on $\mathbb P^2_k$.
Let $Q \in L$ be a closed point such that
$k(Q)/k$ is a purely inseparable extension of degree $p$
whose existence is guaranteed by the assumption that $k$ is imperfect.
Consider the blow-up $\pi : X \to \mathbb{P}^2_k$ at the point $Q$.
We have
\[
K_X = \pi^* K_{\mathbb{P}^2_k} + E\quad\text{and}\quad \widetilde{L}+E = \pi^*L,
\]
where $E$ denotes the $\pi$-exceptional divisor and $\widetilde L$ is the proper transform of $L$. Since $\widetilde L \cup E$ is simple normal crossing and the $\Q$-divisor
\[
-(K_X+\widetilde L+\epsilon E) = -\pi^*(K_{\mathbb{P}^2_k}+L) - \epsilon E
\]
is ample for any $0 <\epsilon \ll 1$,
the pair $(X, (1-\delta) \widetilde L+\epsilon E)$ is log del Pezzo for $0< \delta \ll 1$.
Hence, $X$ is of del Pezzo type.
It is enough to show that $X$ is not smooth.
There exists an affine open subset $\Spec\,k[x, y] = \mathbb A^2_k$ of $\mathbb P^2_k$
such that $Q \in \Spec\,k[x, y]$ and the maximal ideal corresponding to $Q$
can be written as $(x^p-\alpha,y)$ for some $\alpha \in k \setminus k^p$.
Let $X'$ be the inverse image of $\Spec\,k[x, y]$ by $\pi$.
Since blowups commute with flat base changes,
the base change $X'_{\overline k}$ is isomorphic to
the blowup of $\Spec\,\overline{k}[x, y]$
along the non-reduced ideal $((x-\beta)^p, y)$,
where $\beta \in \overline k$ with $\beta^p=\alpha$.
After choosing appropriate coordinates,
$X'_{\overline k}$ is isomorphic to the blowup of
$\mathbb{A}^2_{\overline{k}}=\Spec\,\overline{k}[x', y']$ along $(x'^p, y')$.
We can directly check that $X'_{\overline k}$ contains
an affine open subset of the form $\Spec\,\overline{k}[s, t, u]/(st-u^p)$, which is not smooth.
\end{proof}
\begin{rem}
The surface $X$ constructed in Proposition \ref{p-count} is del Pezzo (resp. weak del Pezzo) if and only if $p=2$ (resp. $p \leq 3$).
Indeed, $-E^2=[k(Q):k]=p$ implies
$K_X \cdot_k E = (K_X+E) \cdot_k E -E^2 = -2p+ p=-p$.
Thus the desired conclusion follows from
\[
K_X \cdot_k \widetilde{L} = K_X \cdot_k \pi^*L - K_X \cdot_k E=-3+p:
\]
since $K_X \cdot_k E=-p<0$ in any characteristic, $-K_X$ is ample (resp.\ nef) if and only if $-3+p<0$ (resp.\ $-3+p\leq 0$).
\end{rem}
\section{Applications to del Pezzo fibrations}
In this section, we give applications of
Theorem \ref{t-klt-bdd-torsion} and Theorem \ref{t-ex-rat-points-dP} on log del Pezzo surfaces over imperfect fields to the birational geometry of threefold fibrations.
The first application is to rational chain connectedness.
\begin{thm} \label{t-rc-3fold}
Let $k$ be an algebraically closed field of characteristic $p>0$.
Let $\pi:V \to B$ be a projective $k$-morphism such that $\pi_*\MO_V=\MO_B$,
$V$ is a normal threefold over $k$, and $B$ is a smooth curve over $k$.
Assume that there exists an effective $\Q$-divisor $\Delta$ such that
$(V, \Delta)$ is klt and $-(K_V+\Delta)$ is $\pi$-nef and $\pi$-big.
Then the following hold.
\begin{enumerate}
\item There exists a curve $C$ on $V$ such that $C \to B$ is surjective and
the following properties hold.
\begin{enumerate}
\item If $p\geq 7$, then $C \to B$ is an isomorphism.
\item If $p \in \{3, 5\}$, then $K(C)/K(B)$ is a purely inseparable extension of degree
$\leq p$.
\item If $p=2$, then $K(C)/K(B)$ is a purely inseparable extension of degree
$\leq 4$.
\end{enumerate}
\item If $B$ is a rational curve, then $V$ is rationally chain connected.
\end{enumerate}
\end{thm}
\begin{proof}
Let us show (1).
Thanks to \cite[Ch. IV, Theorem 6.5]{Kol96}, $K(B)$ is a $C_1$-field. Then Theorem \ref{t-ex-rat-points-dP} implies the assertion (1).
The assertion (2) follows from (1) and the fact that general fibres are
rationally connected (see Lemma \ref{l-rationality}).
\end{proof}
The second application is to Cartier divisors on Mori fibre spaces which are numerically trivial over the bases.
\begin{thm} \label{t-triv-lb-mfs-curve}
Let $k$ be an algebraically closed field of characteristic $p>0$.
Let $\pi \colon V \rightarrow B$ be a projective $k$-morphism such that $\pi_* \mathcal{O}_V = \mathcal{O}_B$, where $V$ is a $\mathbb{Q}$-factorial normal quasi-projective threefold and $B$ is a smooth curve.
Assume there exists an effective $\mathbb{Q}$-divisor $\Delta$ such that $(V, \Delta)$ is klt and $\pi \colon V \rightarrow B$ is a $(K_V+\Delta)$-Mori fibre space.
Let $L$ be a $\pi$-numerically trivial Cartier divisor on $V$.
Then the following hold.
\begin{enumerate}
\item If $p \geq 7$, then $L \sim_{\pi} 0$.
\item If $p \in \left\{ 3, 5 \right\}$, then $p^2L \sim_{\pi} 0$.
\item If $p =2$, then $16 L \sim_{\pi} 0$.
\end{enumerate}
\end{thm}
\begin{proof}
We only prove the theorem in the case when $p=2$, since the other cases are similar and easier.
Since the generic fibre $V_{K(B)}$ is a $K(B)$-surface of del Pezzo type, we have by Theorem \ref{t-klt-bdd-torsion} that $4L|_{V_{K(B)}} \sim 0$.
Therefore, $4L$ is linearly equivalent to a vertical divisor,
i.e. we have
\[
4L\sim \sum_{i=1}^r \ell_i D_i,
\]
where $\ell_i \in \Z$ and $D_i$ is a prime divisor such that $\pi(D_i)$ is a closed point $b_i$.
Since $\rho(V/B)=1$ and $V$ is $\Q$-factorial,
all the fibres of $\pi$ are irreducible.
Hence, we can write $\pi^*(b_i)=n_i D_i$ for some $n_i \in \Z_{>0}$.
Let $m_i$ be the Cartier index of $D_i$,
i.e. the minimum positive integer $m$ such that $mD_i$ is Cartier.
Since the divisor $\pi^*(b_i)=n_i D_i$ is Cartier, there exists $r_i \in \Z_{>0}$ such that $n_i = r_i m_i$.
We now prove that $r_i$ is a divisor of $4$.
Since $K(B)$ is a $C_1$-field and the generic fibre is a surface of del Pezzo type,
we conclude by Theorem \ref{t-ex-rat-points-dP} that there exists a curve $\Gamma$ on $V$ such that the degree $d$ of the morphism $\Gamma \rightarrow B$ is a divisor of 4.
By the equation
\[
r_i \cdot (m_iD_i) \cdot \Gamma = n_iD_i \cdot \Gamma = \pi^*(b_i) \cdot \Gamma = d,
\]
$r_i$ is a divisor of $4$.
Therefore, it holds that $4m_iD_i \sim_{\pi} 0$.
On the other hand, the divisor $4L=\sum_{i=1}^r \ell_i D_i$ is Cartier,
hence we have that $\ell_i = s_im_i$ for some $s_i \in \Z$.
Therefore it holds that
\[
16L\sim \sum_{i=1}^r 4\ell_i D_i = \sum_{i=1}^r s_i (4 m_i D_i) \sim_{\pi} 0,
\]
as desired.
\end{proof}
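\begin{rem}
The divisibility bookkeeping in the proof above may be illustrated by a hypothetical numerical scenario (not taken from a concrete geometric example): suppose $p=2$, $\pi^*(b_i)=4D_i$ (so $n_i=4$), and $m_i=2$, so that $r_i=2$ divides $d=4$ and $4m_iD_i=8D_i\sim_{\pi}0$.
If moreover $4L\sim 6D_i$, then $\ell_i=6=3m_i$, whence $16L\sim 24D_i = 3\cdot(8D_i)\sim_{\pi}0$.
\end{rem}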
\section{Introduction and Preliminaries}
For complex numbers $a$, $b$, $z$ and $c\neq 0,-1,-2,-3,\ldots$,
the {\em hypergeometric series} is defined by:
$$
1+ \sum_{n=1}^\infty
\frac{(a)_n(b)_n}{(c)_n(1)_n} z^n.
$$
Here $(a)_n$ denotes the shifted factorial notation defined, in terms of the gamma function,
by:
$$(a)_n=\frac{\Gamma(a+n)}{\Gamma(a)}=\left\{
\begin{array}{ll}a(a+1) \cdots (a+n-1) & \mbox{if $n\ge 1$};\\
1 & \mbox{if $n=0$, $a\neq 0$.}\end{array}\right.
$$
Note that the hypergeometric series defines an analytic function,
denoted by the symbol $_2F_1[a,b;c;z]$, in $|z|<1$.
As quoted in the historical remarks in \cite[1.55, p.~24]{AVV97}, the concept of
hypergeometric series was first introduced by J. Wallis in 1656 to refer to a generalization of the geometric series.
Less than a century later, Euler extensively studied the analytic properties of
the hypergeometric function and found, for instance, its integral representation (see \cite[Theorem 1.19(2)]{AVV97}).
Gauss made his first contribution to the subject in 1812. Due to the outstanding contribution made by Gauss to the field,
the hypergeometric function is also sometimes known as
the {\em Gauss hypergeometric function}. Most elementary functions which are solutions to
certain differential equations, can be written in terms of
the Gauss hypergeometric functions.
One can easily verify by using the Frobenius method that the function $_2F_1[a,b;c;z]$
is one of the solutions of the {\em hypergeometric differential equation} \cite{AAR99,BW10,RV60}
$$z(1-z)w''+(c-(a+b+1)z)w'-abw=0.
$$
We refer to \cite{RV43,RV60} for Kummer's 24 solutions to the hypergeometric
differential equation, and to \cite{BW10} for related applications.
The asymptotic behavior of $_2F_1[a,b;c;z]$ near $z=1$ reveals that:
\begin{equation}\label{eq2}
_2 F_1[a,b;c;1]= \frac{\Gamma(c-a-b) \Gamma(c)}{\Gamma(c-a) \Gamma(c-b)}<\infty,
\quad \mbox{valid for ${\rm Re}\,(c-a-b)>0$}.
\end{equation}
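As a quick numerical sanity check (a sketch added for illustration; it assumes the SciPy library, whose routine \texttt{scipy.special.hyp2f1} implements the Gauss hypergeometric function and is assumed here to return the limiting value at $z=1$ when $c-a-b>0$), one may compare both sides of \eqref{eq2}:
\begin{verbatim}
# Minimal sketch: compare hyp2f1 at z = 1 with the Gauss value (eq2).
from math import gamma
from scipy.special import hyp2f1

for a, b, c in [(1, 2, 6), (0.5, 0.5, 2), (0.3, 0.4, 1.5)]:
    gauss = gamma(c - a - b) * gamma(c) / (gamma(c - a) * gamma(c - b))
    print(a, b, c, hyp2f1(a, b, c, 1.0), gauss)  # the two values agree
\end{verbatim}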
Interpolating polynomials for elementary real functions such as trigonometric functions, logarithmic
function, exponential function, etc. have already been derived in undergraduate texts
in Numerical Analysis; see for instance \cite{A89}. These elementary functions are in fact
hypergeometric functions with specific parameters $a,b,c$ (see for instance \cite{AAR99,RV60}).
Most such polynomial approximations
are computed when the functional values at the given boundary points are available.
Hence the asymptotic behaviour (\ref{eq2}) of the hypergeometric function near $z=1$
motivates us to construct interpolating polynomials for real hypergeometric functions
$_2F_1[a,b;c;x]$, $a,b,c\in \mathbb{R}$, $c\not\in\{0,-1,-2,-3,\ldots\}$,
of a real variable $x$ using several numerical techniques in the interval $[0,1]$; however,
the interval may be extended to $[-1,1]$, since the hypergeometric series
in $x$ converges for $|x|<1$ and has a certain asymptotic behaviour near
$-1$ as well for suitable choices of the parameters $a,b,c$;
see for instance \cite[Theorem~26]{RV60}. More precisely, when we compute
an interpolating polynomial $p_n(x)$ of a hypergeometric function ${}_2F_1[a,b;c;x]$ on $[0,1]$, we take
the value ${}_2F_1[a,b;c;1]$, in the sense that the hypergeometric function is defined at
$x=1$ by means of its asymptotic behavior there (see \eqref{eq2}).
Several hypergeometric functional identities also play a crucial role in determining
functional values at the interpolating points.
The following lemmas are useful in describing the error analysis for the interpolating polynomials
that we obtained in this paper. Our subsequent paper(s) in this series will cover the study of
interpolating polynomials using other techniques.
\begin{lemma}\cite[Lemma~1.33(1), p.~13 (see also Lemma~1.35(2))]{AVV97}\label{lem-AVV}
If $a,b,c\in(0,\infty)$, then $_2F_1[a,b;c;x]$ is strictly increasing on $[0,1)$. In particular,
if $c > a + b$ then for $x \in [0, 1]$ we have
$$
{}_2F_1[a,b;c;x]\le \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}.
$$
\end{lemma}
\begin{lemma}\cite[Lemma~2.16(2), p.~36]{AVV97}\label{lem-AVV-p36}
The gamma function $\Gamma(x)$ is a log-convex function on $(0,\infty)$. In other words,
the logarithmic derivative, $\Gamma'(x)/{\Gamma(x)}$, of the gamma function is increasing
on $(0,\infty).$
\end{lemma}
Note that in all the plots in this paper, blue color graphs are meant for the original functions
and red color graphs are for interpolating polynomials.
\section{Linear Interpolation on ${}_2F_1[a,b;c;x]$}\label{sec2}
For performing linear interpolation of the function ${}_2F_1[a,b;c;x]=f(x)$, we consider the
end points $x_0=0$ and $x_1=1$ of the interval $[0,1]$.
The functional values at these points are respectively $f(0)=1$ and
$f(1)$ described in \eqref{eq2}.
Hence, the equation of the segment of the straight line joining $0$ and $1$ is
$$P_l(x)= f(x_0)+\frac{x-x_0}{x_1-x_0}(f(x_1)-f(x_0))
=\frac{\Gamma(c) \Gamma(c-a-b) -\Gamma(c-a) \Gamma(c-b)}{\Gamma(c-a) \Gamma(c-b)}x + 1,
$$
when $c-a-b>0$ and $c\neq 0,-1,-2,-3,\ldots$. The polynomial $P_l(x)$ represents
the linear interpolation of ${}_2F_1[a,b;c;x]$ interpolating at $0$ and $1$.
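The following short Python sketch (illustrative only; it assumes SciPy's \texttt{hyp2f1} for the reference values) constructs $P_l$ from the two endpoint values and tabulates it against ${}_2F_1[a,b;c;x]$:
\begin{verbatim}
# Sketch: linear interpolant of 2F1[a,b;c;x] on [0,1] through (0,1) and (1,f(1)).
from math import gamma
from scipy.special import hyp2f1

def P_l(x, a, b, c):
    # f(1) from the Gauss summation formula (eq2), valid for c - a - b > 0
    f1 = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
    return 1.0 + (f1 - 1.0) * x

a, b, c = 1, 2, 6
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(x, hyp2f1(a, b, c, x), P_l(x, a, b, c))
\end{verbatim}
For $a=1$, $b=2$, $c=6$ this reproduces the values collected in Table~\ref{Tl} below.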
Using Lemma~\ref{lem-AVV},
we obtain the following error estimate:
\begin{lemma}\label{2lem}
Let $a,b,c\in(-2,\infty)$ with $c-a-b>2$.
The deviation of the given function $f(x)={}_2F_1[a,b;c;x]$ from the approximating
function $P_l(x)$ for all values of $x\in[0,1]$ is estimated by
$$|E_l(f,x)|=|f(x)-P_l(x)|\le \frac{|a(a+1)b(b+1)|}{8}
\frac{\Gamma(c)\Gamma(c-a-b-2)}{\Gamma(c-a)\Gamma(c-b)}.
$$
\end{lemma}
\begin{proof}
By the standard error formula for linear interpolation, for each $x\in[0,1]$ there exists
$\xi_x\in(0,1)$ such that
$$|E_l(f,x)|=\frac{x(1-x)}{2}|f''(\xi_x)|,
$$
so, since $x(1-x)\le 1/4$ on $[0,1]$, it suffices to bound
$$\frac{(1-0)^2}{8}\max_{0\le x\le 1}|f''(x)|,
$$
where $f(x)={}_2F_1[a,b;c;x]$. The following well-known derivative formula is useful:
\begin{equation}\label{der-for}
\frac{d}{dx}\,{}_2F_1[a,b;c;x]=\frac{ab}{c}\,{}_2F_1[a+1,b+1;c+1;x].
\end{equation}
The proof follows from (\ref{eq2}), Lemma~\ref{lem-AVV}, (\ref{der-for}), and the fact that
$\Gamma(x+1)=x\Gamma(x)$.
\end{proof}
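To check the bound of Lemma~\ref{2lem} numerically (again a sketch assuming SciPy; the parameters are sample choices satisfying the hypotheses), one may maximise $|f-P_l|$ over a grid:
\begin{verbatim}
# Sketch: grid check of the error bound for a = 1, b = 2, c = 6.
from math import gamma
from scipy.special import hyp2f1

a, b, c = 1, 2, 6          # a, b, c > -2 and c - a - b > 2
bound = abs(a*(a+1)*b*(b+1)) / 8 \
        * gamma(c) * gamma(c - a - b - 2) / (gamma(c - a) * gamma(c - b))
f1 = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
err = max(abs(hyp2f1(a, b, c, i/1000) - (1 + (f1 - 1)*i/1000))
          for i in range(1001))
print(err, "<=", bound)    # roughly 0.12 <= 1.25
\end{verbatim}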
\begin{remark}
It follows from Lemma~\ref{2lem} that there is no error for any of the choices
$a=0$, $a=-1$, $b=0$, $b=-1$. Indeed, for each of these choices the series terminates and
$f$ is a polynomial of degree at most one, so $E_l(f,x)$ vanishes identically.
\end{remark}
Figure~\ref{Pl} shows linear interpolation of the hypergeometric function at $0$ and $1$, whereas
Table~\ref{Tl} compares the values of the hypergeometric function
up to four decimal places with its interpolating polynomial values in the interval $[0,1]$
for the choice of parameters $a=1$, $b=2$ and $c=6$.
Figure~\ref{Pl} and Table~\ref{Tl} also indicate errors at various points
within the unit interval except at the end points.
\begin{figure}[H]
\includegraphics[width=8cm]{Pl.eps}
\caption{Linear interpolation of ${}_2F_1[1,2;6;x]$ at $0$ and $1$.}\label{Pl}
\end{figure}
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Nodes $x_i$ & $0$ & $0.25$ & $0.5$ & $0.75$ & $1$ \\
\hline
Actual values $_2F_1[1,2;6;x_i]$ & $1$ & $1.0936$ & $1.2149$ & $1.3843$ & $1.6667$ \\
\hline
Polynomial approximations & $1$ & $1.1667$ & $1.3333$ & $1.5000$ & $1.6667$\\
by $P_l(x_i)$ &&&&&\\
\hline
Validity of error bounds & $0$ & $0.0731<1.25$ & $0.1184<1.25$ & $0.1157<1.25$ & $0$\\
by $E_l(f,x_i)$ &&&&&\\
\hline
\end{tabular}
\caption{Comparison of the functional and linear polynomial values}\label{Tl}
\end{table}
\section{Quadratic Interpolation on ${}_2F_1[a,b;c;x]$}
Let the three points in consideration for quadratic interpolation be $x_0=0$, $x_1=0.5$ and $x_2=1$.
The functional values at $x_0=0$ and $x_2=1$ can be found easily in terms of the
parameters but the functional value at $x_1=0.5$ can be obtained through
different identities involving hypergeometric functions $_2F_1[a,b;c;x]$
dealing with various constraints on the parameters $a,b,c$.
This section consists of two subsections and in each subsection
the method to obtain the functional value of $_2F_1[a,b;c;x]$ at $x=0.5$ uses three different identities.
Finally, we compare the resultant interpolations. In fact we observe that the interpolating polynomial
remains unchanged in two cases although the
approaches are different (see Section~\ref{sec3.2} for more details).
\subsection{Quadratic Interpolation on ${}_2F_1[a,1-a;c;x]$}\label{sec3.1}
This section deals with the value ${}_2F_1[a,b;c;1/2]$ where $a+b=1$ due to the following identity
of Bailey (see \cite[p.~11]{Bai35} and also \cite[p.~69]{RV60}):
\begin{equation}\label{3.1eq1}
{}_2F_1[a,1-a;c;{1}/{2}]=\frac{2^{1-c}\,\Gamma(c)\Gamma(\frac{1}{2})}{\Gamma(\frac{1}{2}(c+a))\,\Gamma(\frac{1}{2}(1+c-a))}
=\frac{\Gamma(\frac{1}{2}c)\,\Gamma(\frac{1}{2}(1+c))}{\Gamma(\frac{1}{2}(c+a))\,\Gamma(\frac{1}{2}(1+c-a))},
\end{equation}
where $c$ is neither zero nor negative integers.
It follows from \eqref{3.1eq1} that
\begin{equation}\label{3.1eq2}
\Gamma\Big(\frac{1}{2}c\Big)\,\Gamma\Big(\frac{1}{2}(1+c)\Big)=2^{1-c}\,\sqrt{\pi}\,\Gamma(c),
\end{equation}
since $\Gamma(1/2)=\sqrt{\pi}$.
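Identity \eqref{3.1eq2} is the classical Legendre duplication formula; a short numerical check (a sketch using only the Python standard library, with sample values of $c$) is:
\begin{verbatim}
# Sketch: numerical check of the duplication identity (3.1eq2).
from math import gamma, pi, sqrt

for c in [0.7, 1.5, 3.2]:
    print(c, gamma(c/2) * gamma((1 + c)/2),
          2**(1 - c) * sqrt(pi) * gamma(c))  # the two values agree
\end{verbatim}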
In this case, we obtain
$$
f(x_0)=f(0)={}_2F_1[a,1-a;c;0]=1;
$$
$$
f(x_1)=f(0.5)={}_2F_1[a,1-a;c;1/2]=\frac{\Gamma(\frac{1}{2}c)\,\Gamma(\frac{1}{2}(1+c))}{\Gamma(\frac{1}{2}(c+a))\,\Gamma(\frac{1}{2}(1+c-a))};
$$
and
$$
f(x_2)=f(1)={}_2F_1[a,1-a;c;1]=\frac{\Gamma(c)\Gamma(c-1)}{\Gamma(c-a)\Gamma(c+a-1)}\quad (c>1).
$$
Consider the well-known Lagrange fundamental polynomials
$$L_0(x)=\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)},~~
L_1(x)=\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)},~~
L_2(x)=\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}.
$$
Thus, the quadratic interpolation of $f(x)={}_2F_1[a,1-a;c;x]$ becomes
\begin{align*}
P_{q_1}(x)
& = f(x_0)L_0(x)+f(x_1)L_1(x)+f(x_2)L_2(x)\\
& = (2x^2-3x+1)+(-4x^2+4x)\frac{\Gamma(\frac{1}{2}c)\,\Gamma(\frac{1}{2}(1+c))}{\Gamma(\frac{1}{2}(c+a))\,\Gamma(\frac{1}{2}(1+c-a))}\\
& \hspace*{8cm} +(2x^2-x)\frac{\Gamma(c)\Gamma(c-1)}{\Gamma(c-a)\Gamma(c+a-1)}.
\end{align*}
This leads to the following result.
\begin{theorem}\label{3.1thm1}
Let $a,c\in\mathbb{R}$ be such that $c>1$.
Then
\begin{align*}
P_{q_1}(x) &= \left(2-\frac{4\,\Gamma(\frac{1}{2}c)\,\Gamma(\frac{1}{2}(1+c))}{\Gamma(\frac{1}{2}(c+a))\,\Gamma(\frac{1}{2}(1+c-a))}
+\frac{2\,\Gamma(c)\Gamma(c-1)}{\Gamma(c-a)\Gamma(c+a-1)}\right)x^2\\
& \hspace*{2cm}+\left(\frac{4\,\Gamma(\frac{1}{2}c)\,\Gamma(\frac{1}{2}(1+c))}{\Gamma(\frac{1}{2}(c+a))\,\Gamma(\frac{1}{2}(1+c-a))}
-\frac{\Gamma(c)\Gamma(c-1)}{\Gamma(c-a)\Gamma(c+a-1)}-3\right)x+1
\end{align*}
is a quadratic interpolation of ${}_2F_1[a,1-a;c;x]$ in $[0,1]$.
\end{theorem}
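A sketch in Python (illustrative, assuming SciPy for the reference curve) that evaluates $P_{q_1}$ against ${}_2F_1[a,1-a;c;x]$ for the parameters of Figure~\ref{Pq1} reads:
\begin{verbatim}
# Sketch: quadratic interpolant P_q1 versus 2F1[a,1-a;c;x] for a = 0.9, c = 1.5.
from math import gamma
from scipy.special import hyp2f1

a, c = 0.9, 1.5
f_half = gamma(c/2) * gamma((1 + c)/2) \
         / (gamma((c + a)/2) * gamma((1 + c - a)/2))
f_one = gamma(c) * gamma(c - 1) / (gamma(c - a) * gamma(c + a - 1))

def P_q1(x):
    # Lagrange form through (0, 1), (1/2, f_half), (1, f_one)
    return (2*x*x - 3*x + 1) + (-4*x*x + 4*x)*f_half + (2*x*x - x)*f_one

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(x, hyp2f1(a, 1 - a, c, x), P_q1(x))
\end{verbatim}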
\begin{remark}
It is evident that for $a=0$ or $a=1$, we have $P_{q_1}(x)={}_2F_1[a,1-a;c;x]=1$ for all $x\in [0,1]$
and for all $c>1$. Moreover, for all $c>1$, we have the following three natural observations
\begin{enumerate}
\item[(i)] if $-1<a<0$, then $P_{q_1}(x)$ and ${}_2F_1[a,1-a;c;x]$ both decrease together in $[0,1]$;
\item[(ii)] if $0<a<1$, then $P_{q_1}(x)$ and ${}_2F_1[a,1-a;c;x]$ both increase together in $[0,1]$; and
\item[(iii)] if $1<a<2$, then $P_{q_1}(x)$ and ${}_2F_1[a,1-a;c;x]$ both decrease together in $[0,1]$.
\end{enumerate}
Indeed, all of them follow from the derivative test. More observations are stated later while
estimating the error (see Remark~\ref{3.2rem1}).
\end{remark}
An interpolating polynomial $P_{q_1}(x)$ of ${}_2F_1[a,1-a;c;x]$ for certain choices of
parameters $a$ and $c$ is as shown in Figure~\ref{Pq1}.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{Pq1.eps}
\caption{The quadratic interpolation of ${}_2F_1[0.9,0.1;1.5;x]$ at $0,0.5$, and $1$.} \label{Pq1}
\end{figure}
\begin{remark}
Note that in Theorem~\ref{3.1thm1}, the parameter $c$ cannot be chosen such that
$c\le (a+b+1)/2$, since the choice $b=1-a$ would give $c\le 1$, contradicting the
assumption that $c>1$. In particular, $c\neq (a+b+1)/2$ in Theorem~\ref{3.1thm1},
the negation of a constraint that will be considered in the next subsection.
\end{remark}
\subsection{Quadratic Interpolation on ${}_2F_1[a,b;(a+b+1)/2;x]$}\label{sec3.2}
In this section, $f(x)={}_2F_1[a,b;c;x]$, $c=(a+b+1)/2$, is first interpolated using
the following quadratic transformation obtained from \cite[(3.1.3)]{AAR99}
(see also \cite[Theorem~2.5]{RV60}).
\begin{lemma}\label{qt1}
If $(a+b+1)/2$ is neither zero nor a negative integer, and if $|x|<1$ and
$|4x(1-x)|<1$, then
\begin{equation}\label{eq4}
_2F_1\left[a,b;\frac{a+b+1}{2};x\right]
= {_2F}_1\left[\frac{a}{2},\frac{b}{2};\frac{a+b+1}{2};4x(1-x)\right].
\end{equation}
\end{lemma}
If we choose $x=0.5$, then the argument on the right hand side of \eqref{eq4} equals
$1$, so the right hand side is governed by the behavior of the
hypergeometric function at $1$. Hence the functional value at
$x=0.5$ of the function $f(x)={}_2F_1[a,b;(a+b+1)/2;x]$ can be obtained with the help of
\eqref{eq2}. Due to Lemma~\ref{qt1} and \eqref{eq2}, in this case, the constraints on the
parameters are:
\begin{itemize}
\item $a+b<1$;
\item $a+b\neq -(2n+1)$ for $n \in \mathbb{N}\cup\{0\}$.
\end{itemize}
One can easily obtain that
\begin{align*}
f(x_0) &={}_2F_1\left[a,b;\frac{a+b+1}{2};0\right]=1;\\
f(x_1) &= {}_2F_1\left[a,b;\frac{a+b+1}{2};\frac{1}{2}\right]=\frac{\sqrt{\pi}\,\Gamma\Big(\displaystyle\frac{a+b+1}{2}\Big)}{\Gamma\Big(\displaystyle\frac{a+1}{2}\Big)\Gamma\Big(\displaystyle\frac{b+1}{2}\Big)}
;\\
f(x_2) &= {}_2F_1\left[a,b;\frac{a+b+1}{2};1\right]=\frac{\Gamma\Big(\displaystyle\frac{1-a-b}{2}\Big)\Gamma\Big(\displaystyle\frac{a+b+1}{2}\Big)}{\Gamma\Big(\displaystyle\frac{a+1-b}{2}\Big)\Gamma\Big(\displaystyle\frac{b+1-a}{2}\Big)}=\frac{\cos\pi\displaystyle\frac{(b-a)}{2}}{\cos\pi\displaystyle\frac{(b+a)}{2}},
\end{align*}
where
$f(x_2)$ is obtained by the well-known Euler reflection formula
(valid for non-integer $x$)
$$\Gamma(x)\Gamma(1-x)=\frac{\pi}{\sin(\pi x)}.
$$
This leads to the
following additional constraints on the parameters (these constraints
may be relaxed when one does not use the reflection formula):
\begin{equation}\label{3.2eq2}
\left\{
\begin{array}{ll}
& a+b\neq 1\pm 2n \quad \mbox{and} \quad a-b\neq -1\pm 2n, ~n\in \mathbb{Z}; \mbox{ or}\\
& a+b\neq -1\pm 2n \quad \mbox{and} \quad a-b\neq 1\pm 2n, ~n\in \mathbb{Z}.
\end{array}
\right.
\end{equation}
Thus, the first quadratic interpolation of $f(x)={}_2F_1[a,b;(a+b+1)/2;x]$ becomes
\begin{align*}
P_{q_2}(x)
& = f(x_0)L_0(x)+f(x_1)L_1(x)+f(x_2)L_2(x)\\
& = (2x^2-3x+1)+(-4x^2+4x)\frac{\sqrt{\pi}\,\Gamma\left(\displaystyle\frac{a+b+1}{2}\right)}
{\Gamma\left(\displaystyle\frac{a+1}{2}\right)\Gamma\left(\displaystyle\frac{b+1}{2}\right)}
+(2x^2-x)\frac{\cos\pi\displaystyle\frac{(b-a)}{2}}{\cos\pi\displaystyle\frac{(b+a)}{2}}.
\end{align*}
This leads to the following result.
\begin{theorem}\label{3.2thm1}
Let $a,b\in\mathbb{R}$ be such that $a+b<1$ and $a+b\neq -(2n+1)$ for every $n \in \mathbb{N}\cup\{0\}$.
If, for every $n\in\mathbb{Z}$, either $a+b\neq 1\pm 2n$ and $a-b\neq -1\pm 2n$, or
$a+b\neq -1\pm 2n$ and $a-b\neq 1\pm 2n$ hold,
then
\begin{align*}
P_{q_2}(x) &= \left(2-\frac{4\sqrt{\pi}\,\Gamma\left(\displaystyle\frac{a+b+1}{2}\right)}
{\Gamma\left(\displaystyle\frac{a+1}{2}\right)\Gamma\left(\displaystyle\frac{b+1}{2}\right)}
+\frac{2\cos\pi\displaystyle\frac{(b-a)}{2}}{\cos\pi\displaystyle\frac{(b+a)}{2}}\right)x^2\\
& \hspace*{3cm}+\left(\frac{4\sqrt{\pi}\,\Gamma\left(\displaystyle\frac{a+b+1}{2}\right)}
{\Gamma\left(\displaystyle\frac{a+1}{2}\right)\Gamma\left(\displaystyle\frac{b+1}{2}\right)}
-\frac{\cos\pi\displaystyle\frac{(b-a)}{2}}{\cos\pi\displaystyle\frac{(b+a)}{2}}-3\right)x+1
\end{align*}
is a quadratic interpolation of ${}_2F_1[a,b;(a+b+1)/2;x]$ in $[0,1]$.
\end{theorem}
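Analogously to the previous subsection, Theorem~\ref{3.2thm1} can be checked
numerically; the following Python sketch (again assuming SciPy, with the parameters
$a=0.1$, $b=0.3$ of Figure~\ref{Pq2}) evaluates $P_{q_2}$ against the function itself.
\begin{verbatim}
# Sketch: compare P_{q2}(x) with 2F1[a, b; (a+b+1)/2; x]; assumes
# a + b < 1 and parameters avoiding the excluded values above.
from math import gamma, sqrt, pi, cos
from scipy.special import hyp2f1

def p_q2(x, a, b):
    f_half = sqrt(pi) * gamma((a + b + 1) / 2) / (
        gamma((a + 1) / 2) * gamma((b + 1) / 2))
    f_one = cos(pi * (b - a) / 2) / cos(pi * (b + a) / 2)
    return ((2 - 4 * f_half + 2 * f_one) * x ** 2
            + (4 * f_half - f_one - 3) * x + 1)

a, b = 0.1, 0.3
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, hyp2f1(a, b, (a + b + 1) / 2, x), p_q2(x, a, b))
\end{verbatim}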
Secondly, we also discuss a quadratic interpolation of the same function
${}_2F_1[a,b;c;x]$, $c=(a+b+1)/2$, in $[0,1]$, but using a different hypergeometric
identity. We then observe that both interpolations are the same,
except for a minor difference in one of the constraints.
Recall the transformation formula (see \cite[Theorem~20, p.~60]{RV60}):
\begin{lemma}\label{3.3lem1}
If $|x|<1$ and $|x/(1-x)|<1$, then we have
$$
{}_2F_1[a,b;c;x]={(1-x)^{-a}} {_2F_1[a,c-b;c;\frac{-x}{1-x}]}.
$$
\end{lemma}
Note that $-x/(1-x)=-1$ for $x=0.5$. To find the value $f(0.5)=2^a\,{}_2F_1[a,c-b;c;-1]$, we
use the following identity (see \cite[Theorem~26, p.~68]{RV60}; see also \cite{BW10}).
\begin{lemma}\label{3.3lem2}
Let $a',b'\in \mathbb{R}$.
If $1+a'-b'\notin \{0,-1,-2,-3,\ldots\}$ and $b'<1$, then we have
$$
{}_2F_1[a',b';a'-b'+1;-1]=\frac{\Gamma(a'-b'+1)\Gamma\Big(\displaystyle\frac{a'}{2}+1\Big)}
{\Gamma(a'+1)\Gamma\Big(\displaystyle\frac{a'}{2}-b'+1\Big)}.
$$
\end{lemma}
Comparison of the parameters $a'=a$, $b'=c-b$ and $a'-b'+1=c$ leads to
\begin{equation}\label{3.3eq1}
{}_2F_1[a,c-b;c;-1]=\frac{\Gamma(a-c+b+1)\Gamma\Big(\displaystyle\frac{a}{2}+1\Big)}
{\Gamma(a+1)\Gamma\Big(\displaystyle\frac{a}{2}-c+b+1\Big)}
\end{equation}
with the constraints
\begin{itemize}
\item $2c=a+b+1$;
\item $c\notin \{0,-1,-2,-3,\ldots\} \iff a+b\neq -(2n+1),~n\in\mathbb{N}\cup\{0\}$;
\item $c-b<1 \iff a-b<1$.
\end{itemize}
Under these conditions, \eqref{3.3eq1} leads to
\begin{align*}
f(x_1)=f(0.5)
& ={}_2F_1\Big[a,b;\frac{a+b+1}{2};\frac{1}{2}\Big] =2^a\frac{\Gamma\Big(\displaystyle\frac{a+b+1}{2}\Big)\Gamma\Big(\displaystyle\frac{a}{2}+1\Big)}
{\Gamma(a+1)\Gamma\Big(\displaystyle\frac{b+1}{2}\Big)}\\
& =\frac{2^{a-1}\,\Gamma\Big(\cfrac{a+b+1}{2}\Big)\,\Gamma\Big(\cfrac{a}{2}\Big)}{\Gamma(a)\,\Gamma\Big(\cfrac{b+1}{2}\Big)}
=\frac{\sqrt{\pi}\,\Gamma\Big(\cfrac{a+b+1}{2}\Big)}{\Gamma\Big(\cfrac{a+1}{2}\Big)
\Gamma\Big(\cfrac{b+1}{2}\Big)},
\end{align*}
where the last equality holds by \eqref{3.1eq2}.
Also, as discussed earlier in this subsection, we have
$$f(x_0)=f(0)={}_2F_1\Big[a,b;\frac{a+b+1}{2};0\Big]=1,
$$
and
$$f(x_2)=f(1)={}_2F_1\Big[a,b;\frac{a+b+1}{2};1\Big]
=\frac{\cos\pi\displaystyle\frac{(b-a)}{2}}{\cos\pi\displaystyle\frac{(b+a)}{2}},
\quad a+b<1
$$
with the additional constraints obtained in \eqref{3.2eq2} (here also \eqref{3.2eq2} may be relaxed).
Thus, the second quadratic interpolation of $f(x)={}_2F_1[a,b;(a+b+1)/2;x]$ coincides with
the first quadratic interpolation obtained in Theorem~\ref{3.2thm1}, but requires the additional constraint
$a-b<1$. This shows that the quadratic interpolation obtained in Theorem~\ref{3.2thm1} is stronger
than the one derived from Lemma~\ref{3.3lem1} and Lemma~\ref{3.3lem2}.
A quadratic interpolation of ${}_2F_1[a,b;(a+b+1)/2;x]$ is shown in Figure~\ref{Pq2}.
\begin{figure}[H]
\includegraphics[width=8cm]{Pq2.eps}
\caption{The quadratic interpolation of ${}_2F_1[0.1,0.3;0.7;x]$ at $0$, $0.5$, and $1$.}\label{Pq2}
\end{figure}
\subsection{Error Estimates}
The error estimate in quadratic interpolation of ${}_2F_1[a,b;c;x]$
interpolating at $0,0.5,1$ in $[0,1]$ is formulated as below:
\begin{lemma}\label{3.2lem4}
Let $P_q(x)$ be a quadratic interpolation of $f(x)={}_2F_1[a,b;c;x]$
interpolating at $0,0.5,1$ in $[0,1]$. If $a,b,c\in(-3,\infty)$ with $c-a-b>3$,
then the deviation of $f(x)$ from $P_q(x)$ is estimated by
\begin{align*}
|E_q(f,x)|
& =|f(x)-P_q(x)|\\
& \le \frac{M}{6}\,|a(a+1)(a+2)b(b+1)(b+2)|\,
\frac{\Gamma(c)\Gamma(c-a-b-3)}{\Gamma(c-a)\Gamma(c-b)}
\end{align*}
for all values of $x\in[0,1]$, where $M$ is defined by
\begin{align}\label{M}
M & :=\left\{\begin{array}{ll}
\cfrac{1}{12}(3-\sqrt{3})(-1+\cfrac{1}{6}(3-\sqrt{3}))(-1+\cfrac{1}{3}(3-\sqrt{3})), & \mbox{ $x<1/2$,}\\[4mm]
-\cfrac{1}{12}(3+\sqrt{3})(-1+\cfrac{1}{6}(3+\sqrt{3}))(-1+\cfrac{1}{3}(3+\sqrt{3})), & \mbox{ $x>1/2$.}
\end{array}\right.
\end{align}
\end{lemma}
\begin{proof}
It suffices to estimate
$$\max_{0\le x\le 1} \frac{|x(x-0.5)(x-1)|}{6}\max_{0\le x\le 1}|f'''(x)|,
$$
where $f(x)={}_2F_1[a,b;c;x]$. Note that
$$
\max_{0\le x\le 1} |x(x-0.5)(x-1)|=M ~(\approx 0.0481125\cdots),
$$
where $M$ is given by \eqref{M}; both branches in \eqref{M} attain the same value.
We apply the well-known derivative formula \eqref{der-for} to maximize $|f'''(x)|$, $0\le x\le 1$.
The proof follows from (\ref{eq2}), Lemma~\ref{lem-AVV}, (\ref{der-for}), and the fact that
$\Gamma(x+1)=x\Gamma(x)$.
\end{proof}
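As a quick illustration, the following Python sketch evaluates $|x(x-0.5)(x-1)|$ at the
two interior critical points; both branches of \eqref{M} give the same value
$M\approx 0.0481125$.
\begin{verbatim}
# Sketch: verify M = max_{0<=x<=1} |x(x-0.5)(x-1)| at the critical
# points x = (3 -+ sqrt(3))/6 of w(x) = x(x-0.5)(x-1).
from math import sqrt

def w(x):
    return x * (x - 0.5) * (x - 1.0)

x_minus = (3 - sqrt(3)) / 6   # critical point in (0, 1/2)
x_plus = (3 + sqrt(3)) / 6    # critical point in (1/2, 1)
print(abs(w(x_minus)), abs(w(x_plus)))  # both ~ 0.0481125
\end{verbatim}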
The following result is an immediate consequence of Lemma~\ref{3.2lem4} which estimates the difference
$E_{q_1}(f,x)={}_2F_1[a,1-a;c;x]-P_{q_1}(x)$ in $[0,1]$.
\begin{corollary}\label{3.2cor1}
Let $a,c\in\mathbb{R}$ be such that $-3<a<4$ and $c>4$.
Then the deviation of ${}_2F_1[a,1-a;c;x]$ from $P_{q_1}(x)$ is estimated by
\begin{align*}
|E_{q_1}(f,x)|
& =|f(x)-P_{q_1}(x)|\\
& \le \frac{M}{6}\,|a(a+1)(a+2)(1-a)(2-a)(3-a)|\,
\frac{\Gamma(c)\Gamma(c-4)}{\Gamma(c-a)\Gamma(c+a-1)}
\end{align*}
for all values of $x\in[0,1]$, where $M$ is obtained by \eqref{M}.
\end{corollary}
\begin{remark}\label{3.2rem1}
It follows from Corollary~\ref{3.2cor1} that there is no error for any of the choices
$a=-2,-1,0,1,2,3$; that is, $E_{q_1}(f,x)$ vanishes identically for these values.
\end{remark}
Similarly, as a consequence of Lemma~\ref{3.2lem4}, we obtain
\begin{corollary}\label{3.2cor2}
Let $a,b\in\mathbb{R}$ be such that $-7<a+b<-5$.
Then the deviation of ${}_2F_1[a,b;(a+b+1)/2;x]$ from $P_{q_2}(x)$ is estimated by
\begin{align*}
|E_{q_2}(f,x)|
& =|f(x)-P_{q_2}(x)|\\
& \le \frac{M}{6}\,|a(a+1)(a+2)b(b+1)(b+2)|\,
\frac{\Gamma\Big(\cfrac{a+b+1}{2}\Big)\Gamma\Big(\cfrac{-a-b-5}{2}\Big)}
{\Gamma\Big(\cfrac{b-a+1}{2}\Big)\Gamma\Big(\cfrac{a-b+1}{2}\Big)}
\end{align*}
for all values of $x\in[0,1]$, where $M$ is obtained by \eqref{M}.
\end{corollary}
\begin{remark}\label{3.2rem2}
It follows from Corollary~\ref{3.2cor2} that the error bound $E_{q_2}(f,x)$ vanishes
whenever $a\in\{-2,-1,0\}$ or $b\in\{-2,-1,0\}$; hence
there is no error for these choices of the parameters $a$ and $b$.
\end{remark}
We now give a deeper analysis of the error bound obtained in Corollary~\ref{3.2cor1} through the following lemma,
which is a consequence of Lemma~\ref{lem-AVV-p36}. A similar analysis applies to Corollary~\ref{3.2cor2}.
\begin{lemma}\label{3.2lem1}
Let $a,c\in\mathbb{R}$ be such that $c>4$. If either $1<a<4$ or $-3<a<0$ holds, then the quotient
$$
\frac{\Gamma(c)\Gamma(c-4)}{\Gamma(c-a)\Gamma(c+a-1)}
$$
decreases when $c$ increases.
\end{lemma}
\begin{proof}
We use Lemma~\ref{lem-AVV-p36}. Since $c-a>c-4>0$, on the one hand we have
$$
\frac{\Gamma'(c-4)}{\Gamma(c-4)}-\frac{\Gamma'(c-a)}{\Gamma(c-a)}<0.
$$
On the other hand, since $c<c+a-1$, we have
$$
\frac{\Gamma'(c)}{\Gamma(c)}-\frac{\Gamma'(c+a-1)}{\Gamma(c+a-1)}<0.
$$
Thus, if
$$
g(c)=\frac{\Gamma(c)\Gamma(c-4)}{\Gamma(c-a)\Gamma(c+a-1)},
$$
it follows that
\begin{align*}
\frac{g'(c)}{g(c)}
& =\frac{\Gamma'(c)}{\Gamma(c)}+\frac{\Gamma'(c-4)}{\Gamma(c-4)}
-\frac{\Gamma'(c-a)}{\Gamma(c-a)}-\frac{\Gamma'(c+a-1)}{\Gamma(c+a-1)}\\
& = \left(\frac{\Gamma'(c-4)}{\Gamma(c-4)}
-\frac{\Gamma'(c-a)}{\Gamma(c-a)}\right)+
\left(\frac{\Gamma'(c)}{\Gamma(c)}-\frac{\Gamma'(c+a-1)}{\Gamma(c+a-1)}\right)\\
& < 0.
\end{align*}
Since $\Gamma(x)>0$ for $x>0$, we have $g(c)>0$ and hence $g'(c)<0$.
Thus, $g(c)$ decreases in $c$ for $1<a<4$ and $c>4$.
For $c>4$, if $-3<a<0$ holds then we consider the rearrangement
$$
\frac{g'(c)}{g(c)}=\left(\frac{\Gamma'(c)}{\Gamma(c)}
-\frac{\Gamma'(c-a)}{\Gamma(c-a)}\right)+\left(\frac{\Gamma'(c-4)}{\Gamma(c-4)}-\frac{\Gamma'(c+a-1)}{\Gamma(c+a-1)}\right)
$$
and show that ${g'(c)}/{g(c)}<0$.
\end{proof}
Using Mathematica or other similar tools, one can observe that Lemma~\ref{3.2lem1} appears to hold
for the remaining range $0\le a\le 1$ as well. This suggests the following conjecture.
\begin{conjecture}\label{3.2conj}
Let $a,c\in\mathbb{R}$ be such that $0\le a\le 1$ and $c>4$. Then the quotient
$$
\frac{\Gamma(c)\Gamma(c-4)}{\Gamma(c-a)\Gamma(c+a-1)}
$$
decreases when $c$ increases.
\end{conjecture}
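A minimal numerical probe of Conjecture~\ref{3.2conj} is sketched below (an
illustration only; \texttt{lgamma} is used instead of \texttt{gamma} to avoid
overflow for larger $c$):
\begin{verbatim}
# Sketch: probe Gamma(c)Gamma(c-4)/(Gamma(c-a)Gamma(c+a-1)) for
# 0 <= a <= 1 and increasing c > 4; the values appear to decrease.
from math import lgamma, exp

def g(c, a):
    return exp(lgamma(c) + lgamma(c - 4)
               - lgamma(c - a) - lgamma(c + a - 1))

a = 0.5
values = [g(c, a) for c in (4.5, 5.0, 6.0, 8.0, 16.0)]
print(values)
print(all(x > y for x, y in zip(values, values[1:])))  # True here
\end{verbatim}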
Thus, we observe that as $c>4$ increases, the error bound on $E_{q_1}(f,x)$ estimated in
Corollary~\ref{3.2cor1} decreases (see also Figure~\ref{Eq1fx-1} and Figure~\ref{Eq1fx-2}).
\begin{figure}[H]
\begin{center}
\includegraphics[width=7cm]{Eq1fx1.eps}
\includegraphics[width=7cm]{Eq1fx2.eps}
\end{center}
\caption{The error estimate $E_{q_1}(f,x)$ when $a=3.9$ and $c$ increases from $4.5$ to $6.5$.}\label{Eq1fx-1}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=7cm]{Eq1fx3.eps}
\includegraphics[width=7cm]{Eq1fx4.eps}
\end{center}
\caption{The error estimate $E_{q_1}(f,x)$ when $a=0.9$ and $c$ increases from $4.1$ to $6.1$.}\label{Eq1fx-2}
\end{figure}
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Nodes $x_i$ & $0$ & $0.25$ & $0.5$ & $0.75$ & $1$ \\
\hline
Actual values ${}_2F_1[3.9,-2.9;5;x_i]$ & $1$ & $0.5372$ & $0.2516$ & $0.0998$ & $0.0367$ \\
\hline
Polynomial approximations & $1$ & $0.5591$ & $0.2516$ & $0.0775$ & $0.0367$\\
by $P_{q_1}(x_i)$ &&&&&\\
\hline
Validity of error bounds & $0$ & $0.0219<0.0274$ & $0$ & $0.0223<0.0274$ & $0$\\
by $E_{q_1}(f,x_i)$ &&&&&\\
\hline
&&&&&\\[-4.5mm]
\hline
Actual values ${}_2F_1[3.9,-2.9;6;x_i]$ & $1$ & $0.6027$ & $0.3358$ & $0.1724$ & $0.0845$ \\
\hline
Polynomial approximations & $1$ & $0.6163$ & $0.3358$ & $0.1585$ & $0.0845$\\
by $P_{q_1}(x_i)$ &&&&&\\
\hline
Validity of error bounds & $0$ & $0.0136<0.0158$ & $0$ & $0.0139<0.0158$ & $0$\\
by $E_{q_1}(f,x_i)$ &&&&&\\
\hline
\end{tabular}
\caption{Comparison of the functional and quadratic polynomial values}\label{T2}
\end{table}
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Nodes $x_i$ & $0$ & $0.25$ & $0.5$ & $0.75$ & $1$ \\
\hline
Actual values ${}_2F_1[0.9,0.1;5;x_i]$ & $1$ & $1.0047$ & $1.0099$ & $1.0158$ & $1.0227$ \\
\hline
Polynomial approximations & $1$ & $1.0046$ & $1.0099$ & $1.0160$ & $1.0227$\\
by $P_{q_1}(x_i)$ &&&&&\\
\hline
Validity of error bounds & $0$ & $0.0001<0.0016$ & $0$ & $0.0002<0.0016$ & $0$\\
by $E_{q_1}(f,x_i)$ &&&&&\\
\hline
&&&&&\\[-4.5mm]
\hline
Actual values ${}_2F_1[0.9,0.1;6;x_i]$ & $1$ & $1.0039$ & $1.0082$ & $1.0128$ & $1.0182$ \\
\hline
Polynomial approximations & $1$ & $1.0038$ & $1.0082$ & $1.0129$ & $1.0182$\\
by $P_{q_1}(x_i)$ &&&&&\\
\hline
Validity of error bounds & $0$ & $0.0001<0.0004$ & $0$ & $0.0001<0.0004$ & $0$\\
by $E_{q_1}(f,x_i)$ &&&&&\\
\hline
\end{tabular}
\caption{Comparison of the functional and quadratic polynomial values}\label{T3}
\end{table}
Figure~\ref{Eq1fx-1} and Figure~\ref{Eq1fx-2} illustrate the quadratic interpolation of the hypergeometric functions
${}_2F_1[a,1-a;c;x]$ at $0$, $0.5$ and $1$, whereas
Table~\ref{T2} and Table~\ref{T3} respectively compare the values of the hypergeometric functions,
up to four decimal places, with the interpolating polynomial values in the interval $[0,1]$
for the parameter choices $a=3.9$ and $a=0.9$, with $c=5$ and $c=6$.
Figures~\ref{Eq1fx-1}--\ref{Eq1fx-2} and Tables~\ref{T2}--\ref{T3} also indicate the errors at various points
within the unit interval, except at the interpolation points $x=0,0.5,1$.
The error estimate $|E_{q_2}(f,x)|$ for the function ${}_2F_1[a,b;(a+b+1)/2;x]$ can be analyzed
in a similar way, and hence we omit the details.
\section{An Application}
In this section, we brief on interpolation of a continued fraction
that converges to a quotient of two hypergeometric functions.
Gauss used the contiguous relations to give several ways to write a quotient
of two hypergeometric functions as a continued fraction. For instance, it is well-known that
\begin{equation}\label{cf}
\frac{{}_2F_1[a+1,b;c+1;x]}{{}_2F_1[a,b;c;x]}
= \cfrac{1}{1 + \cfrac{\cfrac{(a-c)b}{c(c+1)} x}{1 + \cfrac{\cfrac{(b-c-1)(a+1)}{(c+1)(c+2)} x}{1
+ \cfrac{\cfrac{(a-c-1)(b+1)}{(c+2)(c+3)} x}
{1 + \cfrac{\cfrac{(b-c-2)(a+2)}{(c+3)(c+4)} x}{1 + {}\ddots}}}}}, \quad |x|<1.
\end{equation}
On the one hand, if we apply the basic linear interpolation method discussed in Section~\ref{sec2}
(that is, linear interpolation applied directly) to the function
$$
g(x)=\frac{{}_2F_1[a+1,b;c+1;x]}{{}_2F_1[a,b;c;x]}
$$
at $x_0=0$ and $x_1=1$, we obtain the linear interpolation
of the above continued fraction in the following form:
$$
R_l(x)=g(x_0)+\frac{x-x_0}{x_1-x_0}(g(x_1)-g(x_0))=1+\Big(\frac{b}{c-b}\Big)x, \quad c-b>a,
$$
since $g(x_0)=1$ and $g(x_1)=c/(c-b)$. For the choice $a=1,b=2,c=6$, this approximation is also shown in Figure~\ref{Rl}.
\begin{figure}[H]
\includegraphics[width=8cm]{Rl.eps}
\caption{Approximation of ${{}_2F_1[a+1,b;c+1;x]}/{{}_2F_1[a,b;c;x]}$ through $R_l(x)$.}\label{Rl}
\end{figure}
On the other hand, an application of linear interpolation of ${}_2F_1[a,b;c;x]$ obtained in Section~\ref{sec2}
leads to the following approximation of the above continued fraction
in terms of a ratio of polynomial approximations (we call this {\em rational interpolation}):
\begin{align*}
R_r(x)
&=\frac{1}{P_l(x)}\left(\frac{\Gamma(c+1) \Gamma(c-a-b) -\Gamma(c-a) \Gamma(c-b+1)}
{\Gamma(c-a) \Gamma(c-b+1)}x+1 \right)\\
&=\frac{\Big[\cfrac{c\Gamma(c)\Gamma(c-a-b)}{c-b}-\Gamma(c-a)\Gamma(c-b)\Big]x+\Gamma(c-a)\Gamma(c-b)}
{[\Gamma(c)\Gamma(c-a-b)-\Gamma(c-a)\Gamma(c-b)]x+\Gamma(c-a)\Gamma(c-b)}\\
& =1+\frac{b}{c-b} \left[\frac{\Gamma(c-a-b) \Gamma(c) \,x}{[\Gamma(c) \Gamma(c-a-b)
-\Gamma(c-a) \Gamma(c-b)] \,x+ \Gamma(c-a) \Gamma(c-b)}\right],
\end{align*}
where $c-a-b>0$. For the choice $a=1,b=2,c=6$, this approximation is also shown in Figure~\ref{Rr}.
\begin{figure}[H]
\includegraphics[width=8cm]{Rr.eps}
\caption{Approximation of ${{}_2F_1[a+1,b;c+1;x]}/{{}_2F_1[a,b;c;x]}$ through $R_r(x)$.}\label{Rr}
\end{figure}
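For a concrete comparison, the following Python sketch (assuming SciPy's
\texttt{hyp2f1}) evaluates the quotient together with $R_l(x)$ and $R_r(x)$ for the
sample parameters $a=1$, $b=2$, $c=6$ used in the figures:
\begin{verbatim}
# Sketch: compare R_l and R_r with 2F1[a+1,b;c+1;x]/2F1[a,b;c;x].
from math import gamma
from scipy.special import hyp2f1

a, b, c = 1.0, 2.0, 6.0
G = gamma(c) * gamma(c - a - b)      # requires c - a - b > 0
H = gamma(c - a) * gamma(c - b)

def quotient(x):
    return hyp2f1(a + 1, b, c + 1, x) / hyp2f1(a, b, c, x)

def r_l(x):
    return 1 + b / (c - b) * x

def r_r(x):
    return 1 + b / (c - b) * G * x / ((G - H) * x + H)

for x in (0.25, 0.5, 0.75):
    print(x, quotient(x), r_l(x), r_r(x))
\end{verbatim}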
Observe that
$$R_r(x_0)=1=R_l(x_0)~~\mbox{ and }~~R_r(x_1)=\frac{c}{c-b}=R_l(x_1)
$$
and hence $R_r$ also interpolates the continued fraction under consideration at $0$ and $1$.
Further we observe that both the approximations $R_l(x)$ and $R_r(x)$ of the continued fraction
are easy to obtain and the first approximation
(i.e., $R_l(x)$) is in a simpler form than $R_r(x)$ as expected.
Now, it would be interesting to know which one gives the better approximation to the continued
fraction under consideration. With the special choice $a=1,b=2,c=6$, we see from
Figure~\ref{Rl} and Figure~\ref{Rr} that, among the two, $R_l(x)$ is a better approximation than
$R_r(x)$. One may ask: does this happen for arbitrary parameters $a,b,c$?
Since $R_l(x)=R_r(x)$ if and only if $\Gamma(c)\Gamma(c-a-b)=\Gamma(c-a)\Gamma(c-b)$,
the answer is affirmative except when $\Gamma(c)\Gamma(c-a-b)=\Gamma(c-a)\Gamma(c-b)$,
in which case the two interpolations coincide.
This leads to the following result:
\begin{theorem}
Let $R_l(x)$ and $R_r(x)$ be respectively the linear interpolation and the rational interpolation
of the quotient ${{}_2F_1[a+1,b;c+1;x]}/{{}_2F_1[a,b;c;x]}$ (equivalently, of the continued fraction
\eqref{cf}). Then $R_l(x)$ and $R_r(x)$ coincide if and only if
$\Gamma(c)\Gamma(c-a-b)=\Gamma(c-a)\Gamma(c-b)$ holds, where $c-a-b>0$.
\end{theorem}
\section{Concluding Remarks and Future Scope}
Recall that, in this paper, we use some standard interpolation techniques to approximate the hypergeometric function
$$
{}_2F_1[a,b;c;x]=1+\frac{ab}{c}x+\frac{a(a+1)b(b+1)}{c(c+1)}\frac{x^2}{2!}+\cdots
$$
for a range of parameter triples $(a,b,c)$ on the interval $0<x<1$.
Some of the familiar hypergeometric functional identities and asymptotic behavior of the hypergeometric function
at $x=1$ played crucial roles in deriving the formula for such approximations.
One can expect similar formulae using other well-known interpolations and obtain better
approximations for the hypergeometric function; we will discuss such results in upcoming manuscripts.
Different numerical methods for the computation of the confluent and Gauss hypergeometric functions
have recently been studied in \cite{POP17}. Such investigations may be extended to the $q$-analog of
the hypergeometric functions, namely, Heine's basic hypergeometric functions; see, for instance, \cite{CF11}
for similar discussions.
We also focused on the error analysis of the numerical approximations, leading to monotonicity properties of
quotients of gamma functions in the parameter triples $(a,b,c)$. Monotonicity properties of the gamma function and its quotients
in different forms
are of recent interest to many researchers; see, for instance, \cite{Alz93,AQ97,BI86,CZ14,GL01,Gau59,LWZ17,MD17}.
In this paper, we also studied and
stated a conjecture (see Conjecture~\ref{3.2conj}) related to monotonicity properties of quotients of
gamma functions, in order to analyse the error estimates of the numerical approximations under consideration.
Finally, an application to continued fractions of Gauss is also discussed. Approximations of continued fractions
in different forms have also attracted many researchers; see \cite{LLQ17,LSM16} and the references therein
for some of the recent works.
\bigskip
\noindent
{\bf Acknowledgement.} This work was carried out when the first author was an intern
at IIT Indore during the summer of 2014.
The authors would like to thank the referee and the editor
for their valuable remarks on this paper.
\section{Introduction}
Consider the one dimensional Rosenau equation:
\begin{equation}
u_t + u_{xxxxt} = f(u)_x, \quad (x,t) \in (a,b) \times (0,T] \label{Eq1.1}
\end{equation}
with initial condition
\begin{equation}
u(x,0) = u_0(x), \quad x \in (a, b), \label{Eq1.2}\\
\end{equation}
and the boundary conditions
\begin{eqnarray}
u(a, t) &=& u(b, t) = 0,\nonumber \\
u_x(a, t) &=& u_x(b, t) = 0, \label{Eq1.3}
\end{eqnarray}
where $f(u)$ is a nonlinear function of $u$ of the type $f(u) = \displaystyle
\sum_{i=1}^{n} \frac{c_i u^{p_i+1}}{p_i+1}$, where the $c_i$ are real
constants and the $p_i$ are positive integers. \\\\
The Rosenau equation is an example of a nonlinear partial differential
equation, which governs
the dynamics of dense discrete systems and models wave propagation in
nonlinear dispersive media. \\\\
Recently, several numerical techniques,
such as conforming finite element methods, mixed finite element methods,
and orthogonal cubic spline collocation methods, have been proposed to
approximate the solution of the Rosenau equation.
Conforming finite element techniques for the Rosenau equation require
$C^1$-interelement continuity, while mixed finite element formulations
require $C^0$-continuity. In this article, discontinuous
Galerkin finite element methods are used to approximate the solution. \\\\
The well-posedness of (\ref{Eq1.1})-(\ref{Eq1.3}) was
proved by Park \cite{park} and Atouani {\it et al.} in \cite{atou}.
Earlier, several numerical methods were proposed to solve the Rosenau
equation (\ref{Eq1.1})-(\ref{Eq1.3}): finite difference methods
by Chung \cite{chung3}, and conservative difference schemes by Hu and Zheng
\cite{Hu2010} and Atouni and Omrani \cite{omrani}.
A finite element Galerkin method was used in
\cite{atou,ha}, a second order splitting combined with an orthogonal
cubic spline collocation method was used by Manickam {\it et al.} \cite{manickam}, and Chung
and Pani \cite{chung} constructed
a $C^1$-conforming finite element method for the Rosenau equation
(\ref{Eq1.1})-(\ref{Eq1.3}) in
two space dimensions. \\\\
In recent years, there has been a growing interest in discontinuous
Galerkin finite element methods because of their flexibility in
approximating globally rough solutions and their potential for error
control and mesh adaptation.\\\\
Recently, a cGdG method was proposed by Choo {\it et al.}
in \cite{dgros}. A subdomain finite element method using sextic
B-splines was proposed by Battal and Turgut in \cite{subd}. However, constructing
$C^1$ finite elements for fourth order problems is expensive,
and hence discontinuous Galerkin finite element methods are an attractive alternative for
fourth order problems \cite{gudi}.\\\\
In this paper, we introduce discontinuous Galerkin finite element
methods (DGFEM) in space to solve the one dimensional Rosenau
equation (\ref{Eq1.1})-(\ref{Eq1.3}). Compared to existing methods,
our proposed method requires less regularity. \\\\
The outline of the paper is as follows. In Section 2, we derive the
discontinuous weak formulation of the Rosenau equation. In Section 3,
we discuss the \textit{a priori} bounds and optimal error estimates
for the semidiscrete problem. In Section 4, we discretize the
semidiscrete problem in the temporal direction using a backward Euler
method and discuss the \textit{a priori} bounds and optimal error
estimates. Finally, in Section 5, we present some numerical results to
validate the theoretical results. \\\\
Throughout this paper, $C$ denotes a generic positive constant which
is independent of the discretization parameter $h$ which may have
different values at different places.
\section{Weak Formulation}
In this section, we derive the weak formulation for the problem
(\ref{Eq1.1})-(\ref{Eq1.3}). \\\\
We discretize the domain $(a, b)$ into $N$ subintervals as
$$
a = x_0 < x_1 < x_2 < \dots <x_N = b,
$$
and $I_n = \left(x_n, x_{n + 1} \right)$ for $n = 0, 1, 2, \ldots,
N-1$.
We denote by $\mathcal{E}_h$ the partition consisting of the sub-intervals
$I_n, \, n = 0, 1, 2, \ldots, N-1$. Below, we define
the broken Sobolev space and corresponding norm
\begin{equation*}
H^s(\mathcal{E}_h) = \left\{ v \in L^2(a, b) \;: \; v |_{I_n}
\in H^s(I_n), \; \forall I_n\in\mathcal{E}_h \right\}
\end{equation*}
and
\begin{equation*}
|||v|||_{H^s(\mathcal{E}_h)} = \left(\sum_{K\in\mathcal{E}_h} \lVert
v \rVert_{H^s(K)}^2\right)^\frac{1}{2}.
\end{equation*}
\noindent
Now we define the jump and
average of $v$ across the nodes $\{x_n\}_{n=1}^{N-1}$ as follows.
The jump of a function value $v(x_n)$ across
the inter-element node $x_n$, shared by $I_{n-1}$ and $I_n$, is denoted
by $[v(x_n)]$ and defined by
\begin{equation*}
[v(x_n)] = v(x_n^-) - v(x_n^+).
\end{equation*}
At the boundary $x_0$ and $x_N$, we set
\begin{equation*}
[v(x_0)] = -v(x_0), \;\;\;\; \mbox{and} \;\;\;\;
[v(x_N)] = v(x_N).
\end{equation*}
\noindent
The average of a function value $v(x_n)$ across
the inter-element node $x_n$, shared by $I_{n-1}$ and $I_n$, is denoted
by $\left\{ v(x_n) \right\}$ and defined by
\begin{equation*}
\{v(x_n)\} = \frac{1}{2} \left(v(x_n^-) + v(x_n^+)\right).
\end{equation*}
At the boundary $x_0$ and $x_N$, we set
$$
\{v(x_0)\} = v(x_0), \;\;\;\; \mbox{and} \;\;\;\;
\{v(x_N)\} = v(x_N).
$$
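These conventions are straightforward to encode; the following Python sketch
(illustrative only) returns the jump and average at an interior node from the
left and right traces:
\begin{verbatim}
# Sketch: jump and average at an interior node, with the convention
# [v] = v(x^-) - v(x^+) and {v} = (v(x^-) + v(x^+)) / 2.
def jump(v_minus, v_plus):
    return v_minus - v_plus

def average(v_minus, v_plus):
    return 0.5 * (v_minus + v_plus)

# At the boundary nodes only one trace exists:
# [v(x_0)] = -v(x_0), {v(x_0)} = v(x_0), and similarly at x_N.
\end{verbatim}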
\noindent
We multiply \eqref{Eq1.1} by $v\in H^s(\mathcal{E}_h)$ and
integrate over $I_n = (x_n,x_{n+1})$ to obtain
\begin{eqnarray}
\int_{x_n}^{x_{n+1}} (u_t + u_{xxxxt}) v \; dx =
\int_{x_n}^{x_{n+1}} f(u)_x v\; dx. \label{Eq2.1}
\end{eqnarray}
Now, using integration by parts twice in (\ref{Eq2.1}), we arrive at
\begin{eqnarray*}
\int_{x_n}^{x_{n+1}} u_t v \; dx + \int_{x_n}^{x_{n+1}} u_{xxt} v_{xx} \; dx
+ u_{xxxt}(x_{n+1})v(x_{n+1}^-) - u_{xxxt}(x_{n})v(x_{n}^+) \\ - u_{xxt}(x_{n+1})v_x(x_{n+1}^-)
+ u_{xxt}(x_{n})v_x(x_{n}^+) = \int_{x_n}^{x_{n+1} } f(u)_x v \; dx.
\end{eqnarray*}
Summing over all the elements $n = 0,1,\dots,N-1$ and using
$$
ps-qr = \frac{1}{2}(p+q)(s-r) + \frac{1}{2}(r+s)(p-q), \;\;\; p, q,
r \; \mbox{and} \; s \in \mathbb{R},
$$
we obtain
\begin{eqnarray}
\sum_{n=0}^{N-1} \int_{x_n}^{x_{n+1}} u_t v \; dx
+ \sum_{n=0}^{N-1} \int_{x_n}^{x_{n+1}} u_{xxt} v_{xx} \; dx
&+& \sum_{n=0}^{N} \big\{u_{xxxt}(x_n)\big\}\big[v(x_n)\big] -
\sum_{n=0}^{N} \big\{u_{xxt}(x_n)\big\}\big[v_x(x_n)\big]
\nonumber \\
&=& \sum_{n=0}^{N-1} \int_{x_n}^{x_{n+1}} f(u)_x v \; dx. \label{Eq2.2}
\end{eqnarray}
Since $u(x,t)$ is assumed to be sufficiently smooth, we have
$\big[u_t(x_n)\big] = \big[u_{xt}(x_n)\big] = 0$. Using this, we observe that
\begin{multline}
\sum_{n=0}^{N}\big\{v_{xxx}(x_n) \big\}\big[u_t(x_n)\big] -
\sum_{n=0}^N\big\{v_{xx}(x_n)\big\}\big[u_{xt}(x_n)\big] +
\sum_{n=0}^N\frac{\sigma_0}{h^\beta}\big[v(x_n)\big]\big[u_t(x_n)\big] +
\sum_{n=0}^N\frac{\sigma_1}{h}\big[v_x(x_n)\big]\big[u_{xt}(x_n)\big] \\
= v_{xxx}(x_0)u_t(x_0) - v_{xxx}(x_N)u_t(x_N) -
v_{xx}(x_0)u_{xt}(x_0) + v_{xx}(x_N)u_{xt}(x_N) -\\
\frac{\sigma_0}{h^\beta}v(x_0)u_t(x_0) -
\frac{\sigma_0}{h^\beta}v(x_N)u_{t}(x_N) -
\frac{\sigma_1}{h}v_{x}(x_0)u_{xt}(x_0) -
\frac{\sigma_1}{h}v_{x}(x_N)u_{xt}(x_N) = 0. \label{Eq2.3}
\end{multline}
The right hand side of \eqref{Eq2.3} vanishes due to the
boundary conditions \eqref{Eq1.3}. Adding \eqref{Eq2.3} to \eqref{Eq2.2}, we obtain
\begin{eqnarray}
\sum_{n=0}^{N-1} \int_{x_n}^{x_{n+1}} u_t v \; dx
&+& \sum_{n=0}^{N-1} \int_{x_n}^{x_{n+1}} u_{xxt} v_{xx} \; dx
+ \sum_{n=0}^{N} \big\{u_{xxxt}(x_n)\big\}\big[v(x_n)\big] +
\sum_{n=0}^{N}
\big\{v_{xxx}(x_n)\big\}\big[u_t(x_n)\big]\nonumber \\
&-& \sum_{n=0}^{N} \big\{u_{xxt}(x_n)\big\}\big[v_x(x_n)\big]
- \sum_{n=0}^{N}
\big\{v_{xx}(x_n)\big\}\big[u_{xt}(x_n)\big]
+ \frac{\sigma_0}{h^\beta} \sum_{n=0}^{N}
\big[v(x_n)\big]\big[u_t(x_n)\big] \nonumber \\
&+& \frac{\sigma_1}{h} \sum_{n=0}^{N} \big[v_x(x_n)\big]\big[u_{xt}(x_n)\big]
= \sum_{n=0}^{N} \int_{x_n}^{x_{n+1}} f(u)_x v \; dx \label{Eq2.4}.
\end{eqnarray}
We define the bilinear form as
\begin{eqnarray}
B(u,v) &=& A(u,v) + J^{\sigma_0}(u,v) + J^{\sigma_1}(u,v),\label{Eq:n2.4}
\end{eqnarray}
where
\begin{eqnarray*}
A(u,v) &=& \sum_{n=0}^{N-1}\int_{x_n}^{x_{n+1}} u_{xx}v_{xx} \,dx +
\sum_{n=0}^N\left(\big\{ u_{xxx}(x_n)\big\}\big[v(x_n)\big] + \big\{
v_{xxx}(x_n)\big\}\big[u(x_n)\big]\right)\\
&-&\sum_{n=0}^N\left(\big\{ u_{xx}(x_n)\big\}\big[v_x(x_n)\big] + \big\{
v_{xx}(x_n)\big\}\big[u_x(x_n)\big]\right),
\end{eqnarray*}
and
\begin{eqnarray*}
J^{\sigma_0}(u,v) =
\sum_{n=0}^N\frac{\sigma_0}{h^\beta}\big[u(x_n)\big]\big[v(x_n)\big],
\qquad \qquad J^{\sigma_1}(u,v) =
\sum_{n=0}^N\frac{\sigma_1}{h}\big[u_x(x_n)\big]\big[v_x(x_n)\big].
\end{eqnarray*}
In \eqref{Eq:n2.4}, $J^{\sigma_0}$ and $J^{\sigma_1}$ are the penalty terms
and $\sigma_0, \sigma_1 > 0$. The value of $\beta$ will be defined
later. \\\\
The weak formulation of
\eqref{Eq1.1}-\eqref{Eq1.3} as follows:
Find $u(t) \in H^s(\mathcal{E}_h), \; s > 7/2$, such that
\begin{eqnarray}
(u_t,v) + B(u_t,v) &=& \left( f(u)_x,v\right), \; \forall v \in
H^s(\mathcal{E}_h), \;\;t>0 \label{Eq2.7} \\
u(x,0) &=& u_0(x). \label{Eq2.8}
\end{eqnarray}
\noindent
Below, we state and prove the consistency result of the weak formulation
(\ref{Eq2.7})-(\ref{Eq2.8}).
\begin{theorem}
Let $u(x,t)\in C^4(a,b)$ be a solution of the continuous
problem \eqref{Eq1.1}-\eqref{Eq1.3}. Then $u(x,t)$ satisfies the
weak formulation \eqref{Eq2.7}-\eqref{Eq2.8}. Conversely, if
$u(x,t)\in H^2(a, b) \cap H^s(\mathcal{E}_h)$ for $s>7/2$ is a
solution of \eqref{Eq2.7}-\eqref{Eq2.8}, then $u(x, t)$ satisfies
\eqref{Eq1.1}-\eqref{Eq1.3}.
\begin{proof}
Let $u(x,t)\in C^4(a,b)$ and $v\in
H^s(\mathcal{E}_h)$. Multiply \eqref{Eq1.1} by $v$ and integrate
from $x_n$ to $x_{n+1}$. Sum over all
$n=0,1,\dots,N-1$ and using \eqref{Eq2.2}, \eqref{Eq2.3} and
\eqref{Eq2.4}, we obtain the weak formulation \eqref{Eq2.7}.\\\\
Conversely, let $u\in H^2(a,b) \cap H^s(\mathcal{E}_h), \; s>7/2$ and $v
\in \mathcal{D}(I_n)$, the space of infinitely differentiable
functions with compact support in $I_n$. Then, \eqref{Eq2.7}
becomes
\begin{equation}
\int_{x_n}^{x_{n+1}} u_t v \; dx + \int_{x_n}^{x_{n+1}} u_{xxt}
v_{xx} \; dx = \int_{x_n}^{x_{n+1}} f(u)_x v\; dx. \label{eq:Neq21}
\end{equation}
Applying integration by parts twice on the second term on the left hand
side of \eqref{eq:Neq21} to obtain,
\begin{equation*}
\int_{x_n}^{x_{n+1}} u_{xxt} v_{xx} \; dx = \int_{x_n}^{x_{n+1}}
u_{xxxxt} v \; dx,
\end{equation*}
as $v$ is compactly supported on $I_n$. This immediately yields
\begin{equation}
u_t + u_{xxxxt} = f(u)_x, \quad \text{a.e.\ in} \;\;I_n.\label{eq:3-17}
\end{equation}
Consider the node $x_k$ shared between $I_{k-1}$ and $I_k$. Choose
$v \in H^2_0(I_{k-1}\cup I_{k})$, multiply \eqref{eq:3-17} by $v$
and integrate over $(a,b)$ to obtain
\begin{equation}
\int_{I_{k-1} \cup I_k} u_t v \; dx + \int_{I_{k-1} \cup I_k}
u_{xxxxt} v \; dx = \int_{I_{k-1} \cup I_k} f(u)_x v\;
dx.\label{eq:Eq22}
\end{equation}
Applying integration by parts twice on the second term of
\eqref{eq:Eq22} and using $v\in H^2_0(I_{k-1}\cup I_{k})$, we obtain
\begin{eqnarray}
\int_{I_{k-1} \cup I_k} u_t v \; dx
&+& \int_{I_{k-1} \cup I_k}
u_{xxt} v_{xx} \; dx + \left[u_{xxxt}(x_k)\right]v(x_k)
-
\left[u_{xxt}(x_k)\right]v_x(x_k)
\nonumber \\
&=& \int_{I_{k-1} \cup I_k} f(u)_x v \, dx. \label{eq:3-18}
\end{eqnarray}
On the other hand, we have from \eqref{Eq2.7} for the choice of $u$ and $v$,
\begin{eqnarray}
\int_{I_{k-1} \cup I_k} u_t v \; dx + \int_{I_{k-1} \cup I_k}
u_{xxt} v_{xx} \; dx = \int_{I_{k-1} \cup I_k} f(u)_x v\, dx. \label{eq:3-19}
\end{eqnarray}
Comparing \eqref{eq:3-18} and \eqref{eq:3-19} and using the fact
that $v$ is arbitrary, we obtain
\[\left[u_{xxxt}(x_k)\right] = 0 \quad \text{and} \quad \left[u_{xxt}(x_k)\right] = 0.\]
Thus $u_{xxxxt} \in L^2(a,b)$ and hence, from \eqref{eq:Eq22},
we obtain
\begin{equation}
u_t + u_{xxxxt} = f(u)_x \qquad \text{a.e.\ in } (a,b).
\end{equation}
This completes the proof.
\end{proof}
\end{theorem}
\section{Semidiscrete DGFEM}
\setcounter{equation}{0}
In this section, we discuss the {\it a priori} bounds and optimal
error estimates for the semidiscrete Galerkin method. \\\\
We define a finite dimensional subspace $D_k(\mathcal{E}_h)$ of
$H^s(\mathcal{E}_h), \; s > 7/2$ as
\begin{equation*}
D_k(\mathcal{E}_h) = \left\{ v\in L^2(a, b): v |_{I_n}
\in \mathbb{P}_k(I_n), \;\;I_n \in \mathcal{E}_h\right\}.
\end{equation*}
The weak formulation for the semidiscrete Galerkin method is to find
$u^h(t) \in D_k(\mathcal{E}_h)$ such that
\begin{eqnarray}
(u^h_t,\chi) + B(u^h_t,\chi) &=& \left( f(u^h)_x,\chi\right), \; \; \mbox{for
all} \;\; \chi \in D_k(\mathcal{E}_h), \label{Neq2.1}\\
u^h(0) &=& u^h_0, \label{Neq2.2}
\end{eqnarray}
where $u_0^h$ is an appropriate approximation of $u_0$ which will be
defined later.
\subsection{\textit{A priori} Bounds}
In this sub-section, we derive the \textit{a priori} bounds. \\
\noindent
Define the energy norm
$$
||u||_{\mathcal{E}}^2 = \sum_{n=0}^{N-1}\int_{x_n}^{x_{n+1}} u_{xx}^2 \, dx + \sum_{n=0}^N
\frac{\sigma_0}{h^\beta} |[u(x_n)]|^2 + \sum_{n=0}^N \frac{\sigma_1}{h}
|[u_x(x_n)]|^2.
$$
We note from \cite{gudi} that $B(.,.)$ is coercive with respect to the energy
norm, i.e.,
$$
B(v,v) \ge C||v||_{\mathcal{E}}^2, \quad v\in D^k(\mathcal{E}_h),
$$
for sufficiently large values of $\sigma_0$ and $\sigma_1$.\\\\
Observe that \eqref{Neq2.1} yields a system of nonlinear ordinary
differential equations, and the existence and uniqueness of the solution can be
guaranteed locally using Picard's theorem. To obtain global existence and
uniqueness, we use continuation arguments, and hence we need the
following \textit{a priori} bounds.
\begin{theorem}
Let $u^h(t)$ be a solution to \eqref{Neq2.1} and assume that $f'$ is
bounded. Then there exists a positive constant $C$ such that
\begin{equation}
\|u^h(t)\| + \|u^h(t)\|_{\mathcal{E}} \le C(\|u_0^h\|_{\mathcal{E}}).
\end{equation}
\begin{proof}
On setting $\chi = u^h$ in \eqref{Neq2.1}, we obtain
\begin{equation}
(u^h_t,u^h) + B(u^h_t,u^h) = (f(u^h)_x,u^h).\label{Neq2.9}
\end{equation}
We rewrite the equation \eqref{Neq2.9} as
$$
\frac{1}{2}\frac{d}{dt}\|u^h(t)\|^2 + \frac{1}{2}\frac{d}{dt}
B(u^h(t),u^h(t)) = (f'(u^h)u^h_x,u^h).
$$
Integrating from $0$ to $t$, we obtain
\begin{equation}
\|u^h(t)\|^2 + B(u^h(t),u^h(t)) = \|u^h(0)\|^2 + B(u^h(0),u^h(0)) +
\int_{0}^{t}(f'(u^h)u^h_x,u^h) \,ds \label{Eq:2}.
\end{equation}
On using the coercivity of $B(u^h(t),u^h(t))$ and the
boundedness of $f'$, we arrive at
\begin{equation}
\|u^h(t)\|^2 + C\|u^h(t)\|_{\mathcal{E}}^2 \le \|u^h(0)\|^2 + B(u^h(0),u^h(0)) +
C \int_{0}^{t}(u^h_x,u^h) \,ds. \label{Eq:4}
\end{equation}
Using the Cauchy Schwarz inequality and the
Poincar\'e inequality on the right hand side of \eqref{Eq:4}, we
obtain
\begin{equation}
\|u^h(t)\|^2 + C\|u^h(t)\|_{\mathcal{E}}^2 \le C(\|u^h_0\|_{\mathcal{E}}) +
\int_{0}^{t}\|u^h(s)\|_{\mathcal{E}}^2 \,ds \label{Eq:5}.
\end{equation}
An application of Gronwall's inequality yields the desired \textit{a
priori} bound for $u^h(t)$.
\end{proof}
\end{theorem}
\subsection{Error Estimates in the energy and $L^2$-norm}
In this subsection, we derive the optimal error estimates in energy
and $L^2$-norm.\\\\
Often a direct comparison between $u$ and $u^h$ does
not yield an optimal rate of convergence. Therefore, there is a need to introduce an
appropriate auxiliary or intermediate function $\tilde{u}$ so that the
optimal estimate of $u-\tilde{u}$ is easy to obtain and the
comparison between $u^h$ and $\tilde{u}$ yields a sharper estimate,
which leads to an optimal rate of convergence for $u-u^h$. In the literature, Wheeler \cite{wheeler}
first introduced this technique in the context of
parabolic problems. Following Wheeler \cite{wheeler}, we let $\tilde{u}$ be an
auxiliary projection of $u$ defined by
\begin{equation}
B(u-\tilde{u},\chi) = 0, \;\; \mbox{for all} \; \chi \in
D^k(\mathcal{E}_h).\label{Eq:10}
\end{equation}
Now set the error $e = u-u^h$ and split as follows:
$e = u - \tilde{u} - \left(u^h -\tilde{u} \right) = \eta - \theta$,
where $\eta = u - \tilde{u}$ and $\theta = u^h-\tilde{u}$.
Below, we state some error estimates for $\eta = u-\tilde{u}$ and its temporal
derivative.
\begin{lemma}\label{Lemma3}
For $t\in(0,T]$ and $s > 7/2$, there exists a positive constant $C$
independent of $h$ such that the following error estimates for $\eta$ hold:
\begin{eqnarray*}
\left\|\frac{\partial^l \eta}{\partial t^l}\right\|_{\mathcal{E}}
&\le&
Ch^{\min(k+1,s)-2}\left(\sum_{m=0}^{l} |||\frac{\partial ^m
u}{\partial t^m}|||_{H^s(\mathcal{E}_h)}\right),\\
\left\|\frac{\partial^l \eta}{\partial t^l}\right\|
&\le&
Ch^{\min(k+1,s)}\left(\sum_{m=0}^{l} |||\frac{\partial ^m
u}{\partial t^m}|||_{H^s(\mathcal{E}_h)} \right), \quad l = 0,1.
\end{eqnarray*}
\begin{proof}
We split $\eta$ as follows:
$$
\eta = u - \tilde{u} = \left(u - \bar{u} \right) - \left(
\tilde{u} - \bar{u} \right) = \rho - \xi,
$$
where $\rho = u - \bar{u}$, $\xi = \tilde{u} - \bar{u}$ and
$\bar{u}$ is an interpolant of $u$ satisfying good approximation
properties. Now from \eqref{Eq:10}, we have
\begin{equation}
B(\xi,\chi) = B(\rho,\chi)\label{eq:10}.
\end{equation}
We note that $\rho$ satisfies the following approximation
property \cite{rvg}:
$$
\|\rho\|_{H^q(I_n)} \le Ch^{\min(k+1,s)-q} \|u\|_{H^s(I_n)}.
$$
Set $\chi = \xi$ in
\eqref{eq:10} to obtain
$$
B(\xi,\xi) = B(\rho,\xi).
$$
Using the coercivity of $B(\cdot,\cdot)$ and the assumption that $\bar{u}$ is a
sufficiently smooth interpolant of $u$, we obtain
\begin{equation}
C \|\xi\|^2_{\mathcal{E}} \le \sum_{n=0}^{N-1}
\int_{x_n}^{x_{n+1}} \rho_{xx}\xi_{xx} \,dx +
\sum_{n=0}^{N}\{\rho_{xxx}(x_n)\}[\xi(x_n)] -
\sum_{n=0}^{N}\{\rho_{xx}(x_n)\}[\xi_x(x_n)].\label{Eq:n6}
\end{equation}
Now we estimate the first term as follows:
\begin{eqnarray}
\sum_{n=0}^{N-1} \int_{x_n}^{x_{n+1}} \rho_{xx}\xi_{xx}
\,dx
&\le& |||\rho_{xx}|||\,
|||\xi_{xx}||| \le \frac{1}{6C}\|\rho\|_{H^2}^2 +
\frac{C}{6} \|\xi\|_{\mathcal{E}}^2,\nonumber\\
&\le& Ch^{2\min(k+1,s)-4}|||u|||_{H^s(\mathcal{E}_h)}^2 + \frac{C}{6}
\|\xi\|_{\mathcal{E}}^2\label{Eq:6}.
\end{eqnarray}
Estimating the second term using H\"{o}lder's inequality, the trace
inequality and Young's inequality, we obtain
\begin{eqnarray}
\sum_{n=0}^{N}\{\rho_{xxx}(x_n)\}[\xi(x_n)]
\le Ch^{2\min(k+1,s)-6+\beta-1}|||u|||_{H^s(\mathcal{E}_h)}^2 + \frac{C}{6}
\|\xi\|_{\mathcal{E}}^2\label{Eq:7}.
\end{eqnarray}
Similarly the last term can be estimated as
\begin{eqnarray}
\sum_{n=0}^{N}\{\rho_{xx}(x_n)\}[\xi_x(x_n)]
&\le& Ch^{2\min(k+1,s)-6+\beta-1}|||u|||_{H^s(\mathcal{E}_h)}^2 + \frac{C}{6}
\|\xi\|_{\mathcal{E}}^2\label{Eq:8}.
\end{eqnarray}
Combining \eqref{Eq:6}-\eqref{Eq:8}, we obtain
the following bound for $\xi$ when $\beta \ge 3$
\begin{equation}
\|\xi\|_{\mathcal{E}}\le Ch^{\min(k+1,s)-2} |||u|||_{H^s(\mathcal{E}_h)}.
\end{equation}
Now using $\|\eta\|_{\mathcal{E}} \le \|\xi\|_{\mathcal{E}} +
\|\rho\|_{\mathcal{E}}$, we obtain the energy norm estimate
for $\eta$. For the $L^2$-estimate of $\eta$, we use the
Aubin--Nitsche duality argument. Consider the dual problem
\begin{eqnarray*}
&&\frac{d^4\phi}{dx^4} = \eta, \;\; x\in (a,b),\\
&&\phi(a) = \phi(b) = 0,\\
&& \phi'(a) = \phi'(b) = 0. \qquad
\end{eqnarray*}
We note that $\phi$ satisfies the regularity condition $
\|\phi\|_{H^4} \le C\|\eta\|$. Consider
\begin{equation}
(\eta,\eta) = (\eta,\phi_{xxxx}) = \sum_{n=0}^{N-1}
\int_{x_n}^{x_{n+1}} \phi_{xx}\eta_{xx} \,dx +
\sum_{n=0}^{N}\{\phi_{xxx}(x_n)\}[\eta(x_n)] -
\sum_{n=0}^{N}\{\phi_{xx}(x_n)\}[\eta_x(x_n)].\nonumber
\end{equation}
Since $B(\eta,\chi) = 0 \;\; \forall \chi\in D^k(\mathcal{E}_h)$,
we
can write
\begin{eqnarray}
\|\eta\|^2 = (\eta,\eta) - B(\eta,\tilde{\phi})
&=& \sum_{n=0}^{N-1}
\int_{x_n}^{x_{n+1}} (\phi-\tilde{\phi})_{xx}\eta_{xx} \,dx +
\sum_{n=0}^{N}\{(\phi-\tilde{\phi})_{xxx}(x_n)\}[\eta(x_n)]\nonumber\\
&-&
\sum_{n=0}^{N}\{(\phi-\tilde{\phi})_{xx}(x_n)\}[\eta_x(x_n)],\label{Eq:9}
\end{eqnarray}
where $\tilde{\phi}$ is a continuous interpolant of $\phi$ and
satisfies the approximation property:
\begin{equation}
\|\phi-\tilde{\phi}\|_{H^q} \le
Ch^{s-q}\|\phi\|_{H^s}.\label{eq:n7}
\end{equation}
We use the approximation property \eqref{eq:n7},
the energy norm estimate for $\eta$ and the regularity result to
bound each term on the right hand side of \eqref{Eq:9} and obtain
the estimate
for $\|\eta\|$ as:
$$
\|\eta\| \le Ch^{\min(k+1,s)}|||u|||_{H^s(\mathcal{E}_h)}.
$$
For the estimates of the temporal derivative of $\eta$, we
differentiate \eqref{Eq:10} with respect to $t$ and repeat the
arguments. Hence, it completes the rest of the proof.
\end{proof}
\end{lemma}
\noindent
The following Lemma is useful to prove the error estimates:
\begin{lemma}\label{lemma1}
Let $v\in \mathbb{P}_k(I_n)$ where $I_n = (x_n, x_{n+1}) \in
\mathcal{E}_h$. Then there exists a positive constant $C$ independent of $h$
such that,
\begin{equation*}
Ch_n^{-4}\|v\|_{L^2(I_n)}^2 \le \|v_{xx}\|^2_{L^2(I_n)},
\end{equation*}
where $h_n = \left(x_{n+1}-x_n\right)$.
\begin{proof}
We define the reference element $\hat{I_n}$ as
\begin{equation*}
\hat{I_n} = \left\{\left(\frac{1}{h_n}\right)x, \quad x
\in I_n\right\}.
\end{equation*}
Since $v\in \mathbb{P}_k(I_n)$, we have the following relation
(refer \cite{brenner}) for the norms in the reference element and the
interval $I_n$
\begin{equation*}
|\hat{v}|_{H^r(\hat{I_n})} = h_n^{r - \frac{d}{2}}|v|_{H^r(I_n)}.
\end{equation*}
In one space dimension, i.e., $d=1$, we have
\begin{eqnarray}
\begin{aligned}
\|\hat{v}\|_{L^2(\hat{I_n})} &=&
h_n^{-\frac{1}{2}}\|v\|_{L^2(I_n)}, \;\; \mbox{and} \\
|\hat{v}|_{H^2(\hat{I_n})} &=& h_n^{\frac{3}{2}}|v|_{H^2(I_n)}.
\end{aligned}\label{neq:3.17}
\end{eqnarray}
By the equivalence of norms on finite-dimensional spaces (see \cite{brenner}), we have
\begin{equation}
C_0\|\hat{v}\|_{L^2(\hat{I_n})} \le \|\hat{v}_{xx}\|_{L^2(\hat{I_n})}
\le C_1\|\hat{v}\|_{L^2(\hat{I_n})}.\label{neq:3.18}
\end{equation}
Now from \eqref{neq:3.17} and \eqref{neq:3.18}, we obtain
\begin{equation*}
C_0h_n^{-\frac{1}{2}}\|v\|_{L^2(I_n)} \le
h_n^{\frac{3}{2}}\|v_{xx}\|_{L^2(I_n)}.
\end{equation*}
Rearranging the terms and squaring on both sides, we obtain the
desired estimate.
\end{proof}
\end{lemma}
\noindent
To obtain the error estimates, we subtract
\eqref{Neq2.1} from \eqref{Eq2.7} and use the auxiliary projection
\eqref{Eq:10} to obtain the following error equation:
\begin{equation}
(\theta_t,\chi) + B(\theta_t,\chi) = (\eta_t,\chi) +
(f(u^h)_x-f(u)_x,\chi).\label{Eq:12}
\end{equation}
Now we state and prove the following theorem.
\begin{theorem}
Let $u^h(t)$ and $u(t)$ be the solutions of \eqref{Neq2.1} and
\eqref{Eq2.7}, respectively. Let $u^h_0$ be the
elliptic projection of $u_0$, i.e., $u^h_0 = \tilde{u}(0)$. Then for
$s > 7/2$,
there exists a positive constant $C$ independent of $h$ such that
\begin{eqnarray*}
\|u(t)-u^h(t)\|_{\mathcal{E}} &\le& Ch^{\min(k+1,s)-2}
\|u\|_{H^1(0,T;H^s(\mathcal{E}_h))},\\
\|u(t)-u^h(t)\| &\le& Ch^{\min(k+1,s)}
\|u\|_{H^1(0,T;H^s(\mathcal{E}_h))}.
\end{eqnarray*}
\begin{proof}
Setting $\chi = \theta(t)$ in \eqref{Eq:12}, we obtain
\begin{equation}
(\theta_t,\theta) + B(\theta_t,\theta) = (\eta_t,\theta) +
(f(u^h)_x-f(u)_x,\theta).\label{Eqno:1}
\end{equation}
Now, we write equation \eqref{Eqno:1} as
$$
\frac{1}{2}\frac{d}{dt} \left( \|\theta(t)\|^2 +
B(\theta(t),\theta(t)) \right) = (\eta_t,\theta) +
(f(u^h)_x-f(u)_x,\theta).
$$
Integrating with respect to $t$ and noting that
$\theta(0)=0$, we obtain
\begin{equation}
\|\theta(t)\|^2 + B(\theta(t),\theta(t)) = \int_{0}^{t} (\eta_t,\theta)\,ds
+ \int_0^t (f(u^h)_x-f(u)_x,\theta)\, ds.\label{Neq:8}
\end{equation}
We use integration by parts on the nonlinear term to obtain,
\begin{equation}
\left( (f(u^h) - f(u))_x,\theta\right) = \sum_{n=0}^N
\left(\{f(u^h)-f(u)\}[\theta(x_n)] + [f(u^h)-f(u)]\{\theta(x_n)\}\right) +
\left( f(u) - f(u^h),\theta_x\right).\label{Neq:5}
\end{equation}
Using the Cauchy Schwarz's and Young's inequality, we bound the last term of
\eqref{Neq:5} as
$$
\left( f(u)-f(u^h),\theta_x \right) \le C(\|\eta\|^2 +
\|\theta\|^2 + \|\theta\|_{\mathcal{E}}^2).
$$
Now for the first term in \eqref{Neq:5}, we use H\"{o}lder's
inequality to write
\begin{equation}
\sum_{n=0}^N \{f(u^h)-f(u)\}[\theta(x_n)] \le
\left(\sum_{n=0}^N|\{f(u^h)-f(u)\}|^2
\right)^{1/2}\left(\sum_{n=0}^N
|[\theta(x_n)]|^2\right)^{1/2}.\label{Neq:6}
\end{equation}
As earlier in \eqref{Eq:7}, we use the penalty term to write
\eqref{Neq:6} as
\begin{equation}
\sum_{n=0}^N \{f(u^h)-f(u)\}[\theta(x_n)] \le
Ch^{\frac{\beta-1}{2}}\|e\| \|\theta \|_{\mathcal{E}} \le
C\|e\|\|\theta\|_{\mathcal{E}}, \quad \text{since} \;\; \beta
\ge 3. \nonumber
\end{equation}
A similar bound for the second term can be obtained as
follows. Using the H\"{o}lder's inequality, we write
\begin{equation}
\sum_{n=0}^N[f(u^h) - f(u)]\{\theta(x_n)\} \le
\left(\sum_{n=0}^N h^{\beta}|[f(u^h)-f(u)]|^2
\right)^{1/2}\left(\sum_{n=0}^N
h^{-\beta}|\{\theta(x_n)\}|^2\right)^{1/2}.\label{Neq:7}
\end{equation}
Using the trace inequality, we obtain
\begin{eqnarray*}
\sum_{n=0}^N[f(u^h) - f(u)]\{\theta(x_n)\}
&\le& \left(Ch^{\frac{\beta-1}{2}}\|e\|\right)
\left(\sum_{n=0}^N
h^{-\beta-1}\|\theta\|_{L^2(I_n)}^2\right)^{\frac{1}{2}}, \\
&\le& Ch^{\frac{\beta-1}{2}}\left(\sum_{n=0}^N
h^{-4 +
(3-\beta)}\|\theta\|_{L^2(I_n)}^2\right)^{\frac{1}{2}}\|e\|,
\\
&\le& Ch \left(\sum_{n=0}^N
h^{-4}\|\theta\|_{L^2(I_n)}^2\right)^{\frac{1}{2}}\|e\|
\;\;\le\;\; C \|e\| \|\theta\|_{\mathcal{E}},
\end{eqnarray*}
where the last step is obtained by using \textsc{Lemma}
\textbf{\ref{lemma1}}. Using the triangle
inequality together with Young's inequality we obtain the bound
\begin{eqnarray*}
\left( (f(u^h) - f(u))_x,\theta\right)
&\le& C(\|\theta\|^2 + \|\theta\|_{\mathcal{E}}^2) +
Ch^{2\min(k+1,s)}|||u|||_{H^s(\mathcal{E}_h)}^2.
\end{eqnarray*}
Now, using the coercivity of $B(\cdot,\cdot)$ and the estimate of the
nonlinear term in \eqref{Neq:8}, we arrive at
\begin{eqnarray}
\|\theta\|^2 + \|\theta\|_{\mathcal{E}}^2 \le
Ch^{2\min(k+1,s)}\int_0^{T}|||u|||_{H^s(\mathcal{E}_h)}^2\, ds
&+& Ch^{2\min(k+1,s)}\int_0^T|||u_t|||_{H^s(\mathcal{E}_h)}^2\, ds\nonumber\\
&+& C\int_{0}^{t}(\|\theta\|^2 + \|\theta\|_{\mathcal{E}}^2) \,ds.
\end{eqnarray}
An application of Gronwall's inequality yields an estimate for
$\theta$. We then use the triangle inequality to obtain the
estimates for $e = u - u^h$. The estimates are optimal in
$L^2$-norm if $\beta \ge 3$.
\end{proof}
\end{theorem}
\section{Fully Discrete DGFEM}
\setcounter{equation}{0}
In this section, we derive a fully discrete DGFEM and establish
\textit{a priori} bounds
along with optimal error estimates. \\\\
{\bf Backward Euler discretization}:
Let $\Delta t$ denote the time step size. Divide $[0,T]$
by \[ 0 = t_0 < t_1 < t_2 < \dots < t_{M-1} < t_M = T, \] where
$t_{i+1} = t_i + \Delta t, \quad i = 0,1,\dots,M-1$ and $\Delta t =
\frac{T}{M}$. Let $u^n = u(t_n)$ and approximate $\frac{\partial
u}{\partial t}$ by the backward Euler difference formula
\[\partial_t u^{n} = \frac{u^{n}-u^{n-1}}{\Delta t}.\] Now,
the fully discrete discontinuous Galerkin finite element method is
given as follows:
\begin{equation}
\left(\partial_t U^{n+1}, \chi\right) + B(\partial_t U^{n+1},
\chi) = \left(f(U^{n+1})_x, \chi\right), \;\; \mbox{for all} \;\; \chi \in
D^k(\mathcal{E}_h), \label{Eqn3.1}
\end{equation}
\[U^0 = U_0, \] where $U^n$ is the fully discrete approximation of
$u(x,t_n)$.
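In matrix form, the scheme reads $(M+S)\,(U^{n+1}-U^n)/\Delta t = F(U^{n+1})$, where $M$ is
the mass matrix, $S$ is the matrix of $B(\cdot,\cdot)$ on $D^k(\mathcal{E}_h)$, and $F$
is the nonlinear load vector. The following Python sketch (illustrative only; the
assembly of $M$, $S$ and $F$ is assumed to be done elsewhere) resolves the
nonlinearity by a simple fixed-point iteration at each time step:
\begin{verbatim}
# Sketch of the backward Euler loop for (M+S)(U^{n+1}-U^n)/dt
# = F(U^{n+1}); M, S (2d arrays) and F (callable) are assumed given.
import numpy as np

def backward_euler(M, S, F, U0, dt, n_steps, tol=1e-10, max_iter=50):
    A = M + S                  # multiplies (U^{n+1} - U^n) / dt
    U = U0.copy()
    for _ in range(n_steps):
        V = U.copy()           # fixed-point iterate for U^{n+1}
        for _ in range(max_iter):
            V_new = U + dt * np.linalg.solve(A, F(V))
            if np.linalg.norm(V_new - V) < tol:
                V = V_new
                break
            V = V_new
        U = V
    return U
\end{verbatim}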
\subsection{\textit{A priori} bounds}
In this sub-section, we prove an \textit{a priori} bound for the fully
discrete DGFEM.
\begin{theorem}
Let $U^n$ be a solution to \eqref{Eqn3.1} and assume that $f'$ is
bounded. Then there exists a positive constant $C$ such that
\begin{equation}
\|U^n\| + \|U^n\|_{\mathcal{E}} \le C(\|U_0\|).
\end{equation}
\begin{proof}
Set $\chi = U^{n+1}$ in \eqref{Eqn3.1} to obtain
\[\left(\frac{U^{n+1}-U^n}{\Delta t}, U^{n+1} \right) +
B\left(\frac{U^{n+1}-U^{n}}{\Delta t}, U^{n+1}\right) = \left(
f(U^{n+1})_x, U^{n+1} \right).\] Multiplying by $\Delta
t$ throughout, we arrive at
\begin{equation}
\left(U^{n+1}-U^n, U^{n+1} \right) +
B\left(U^{n+1}-U^{n}, U^{n+1}\right) = \Delta t \left(
f(U^{n+1})_x, U^{n+1} \right). \label{Eqn3.2}
\end{equation}
Using the fact that, for any two real numbers $x$ and $y$,
\[
\frac{1}{2}(x^2-y^2) \le \frac{1}{2} (x^2-y^2+(x-y)^2) = (x-y)x, \]
we rewrite the equation (\ref{Eqn3.2}) to obtain
\begin{equation}
\frac{1}{2}\lVert U^{n+1}\rVert^2 - \frac{1}{2}\lVert
U^{n} \rVert^2 +
\frac{1}{2}\left(B\left(U^{n+1}, U^{n+1}\right) - B(U^n,U^n)\right) \le \Delta t \left(
f(U^{n+1})_x, U^{n+1} \right).\label{eq:3-40}
\end{equation}
\noindent
Using the Cauchy--Schwarz inequality, the Poincar\'e inequality and Young's
inequality, together with the bound on $f'(u)$, we obtain the
following inequality from \eqref{eq:3-40}:
\begin{equation*}
\frac{1}{2}\lVert U^{n+1}\rVert^2 - \frac{1}{2}\lVert
U^{n} \rVert^2 + \frac{1}{2}\left( B(U^{n+1},U^{n+1}) - B(U^n,U^n)\right) \le C \Delta t \left(
\frac{\lVert U^{n+1}\rVert_{\mathcal{E}}^2}{2} +\frac{\lVert
U^{n+1} \rVert^2}{2} \right).
\end{equation*}
Summing over $n = 0,1,2,\dots,J-1$ and using the coercivity of $B(\cdot,\cdot)$, we obtain
\begin{equation*}
\frac{1}{2}\lVert U^{J}\rVert^2 + \frac{1}{2}\lVert
U^{J} \rVert_{\mathcal{E}}^2 \le \frac{1}{2}\lVert U^{0}\rVert^2 + \frac{1}{2}B(U^0,U^0) + C \Delta t \sum_{n=0}^{J-1}\left(
\lVert U^{n+1}\rVert_{\mathcal{E}}^2 +\lVert U^{n+1} \rVert^2 \right).
\end{equation*}
Rearranging the terms and applying the discrete Gronwall Inequality,
we obtain the desired \textit{a priori} bound on $U^J$.
\end{proof}
\end{theorem}
\subsection{Error Estimates}
In this sub-section, we prove the optimal error estimates for the fully discrete
discontinuous Galerkin method. \\\\
Subtracting equation \eqref{Eqn3.1} from \eqref{Eq2.7} and using the
auxiliary projection \eqref{Eq:10}, we obtain the error equation as
\begin{eqnarray}
\left(\partial_t \theta^{n+1}, \chi\right) + B(\partial_t
\theta^{n+1}, \chi)
&=& \left(\partial_t \eta^{n+1}, \chi\right) +
\left((f(U^{n+1})_x - f(u^{n+1})_x),\chi\right)\nonumber\\
&+&
\left(\sigma^{n+1},\chi\right) + B(\sigma^{n+1},\chi). \label{eq:3-45}
\end{eqnarray}
where $\sigma^{n+1} = u_t^{n+1}-\partial_t u^{n+1}$. Before we derive
the error estimate, we state and prove the following lemma.
\begin{lemma}\label{Lemma2}
Let $\sigma^{n} = u_t^{n} - \partial_t u^{n}$. Then the following
holds
\[ \lVert \sigma^n \rVert^2 \le \Delta t \int_{t_{n-1}}^{t_n} \lVert
u_{tt}(s)\rVert^2 ds.\]
\begin{proof}
Consider $I = \int_{t_{n-1}}^{t_{n}} (s-t_{n-1}) u_{tt}(s) \:
ds$. Using integration by parts, we see that $I = \Delta t
\; \sigma^{n}$. An application of H\"{o}lder's inequality then gives
$\lVert \sigma^n \rVert^2 \le \Delta t \int_{t_{n-1}}^{t_n} \lVert
u_{tt}(s)\rVert^2 ds $.
The same inequality can be proved in the energy norm as well.
\end{proof}
\end{lemma}
\begin{theorem}
Let $U^0 = \tilde{u}(0)$ so that $\theta^0 = 0$. Then there exists a
positive constant $C$ independent of $h$ and $\Delta t$ such that,
\begin{eqnarray}
\lVert u(t_n) - U^n \rVert_{\mathcal{E}}
&\le& C\left(h^{\min(k+1,s)-2}\,\|u\|_{H^1(0,T,H^s(\mathcal{E}_h))}+
\Delta t\,
|||u_{tt}|||_{L^2(0,T;H^s(\mathcal{E}_h))}\right),\nonumber\\
\lVert u(t_n) - U^n \rVert
&\le& C\left(h^{\min(k+1,s)}\,\|u\|_{H^1(0,T,H^s(\mathcal{E}_h))} +
\Delta t\, |||u_{tt}|||_{L^2(0,T;H^s(\mathcal{E}_h))}\right).
\end{eqnarray}
\begin{proof}
Setting $\chi = \theta^{n+1}$ in \eqref{eq:3-45} and multiplying through by $\Delta t$, we obtain
\begin{eqnarray}
\left(\theta^{n+1}-\theta^n, \theta^{n+1}\right) + B(\theta^{n+1}-\theta^n,
\theta^{n+1})
&=& \left(\eta^{n+1}-\eta^n, \theta^{n+1}\right) +
\Delta t \left(f(U^{n+1})_x - f(u^{n+1})_x,\theta^{n+1}\right)\nonumber\\
&+&
\Delta t \left(\sigma^{n+1},\theta^{n+1}\right) + \Delta t
B(\sigma^{n+1},\theta^{n+1}).\label{eq:3-46}
\end{eqnarray}
Using the Cauchy--Schwarz inequality and
constructing upper bounds, similar to the semidiscrete case, on the
right hand side of \eqref{eq:3-46}, we obtain the inequality
\begin{eqnarray}
\frac{1}{2}\lVert \theta^{n+1} \rVert^2
-
\frac{1}{2}\lVert \theta^{n} \rVert^2 + \frac{1}{2}\left(
B(\theta^{n+1},\theta^{n+1}) - B(\theta^n,\theta^n) \right)
\le Ch^{2\min(k+1,s)}\int_{t_n}^{t_{n+1}}\|u_t\|^2\, ds \nonumber\\
+\, Ch^{2\min(k+1,s)+\beta-3}|||u|||_{H^s(\mathcal{E}_h)}^2
+ C\Delta t \left( \lVert \theta^{n+1}
\rVert^2 + \lVert \theta^{n+1}
\rVert_{\mathcal{E}}^2 +
\lVert \sigma^{n+1}
\rVert^2 + \lVert \sigma^{n+1}
\rVert_{\mathcal{E}}^2\right).\label{eqNo:11}
\end{eqnarray}
Sum over $n=0,1,2,\dots,M-1$ on both sides of \eqref{eqNo:11} and
using \textsc{Lemma} \textbf{\ref{Lemma2}}, we obtain
\begin{eqnarray}
\lVert \theta^{M} \rVert^2 + \lVert \theta^{M}
\rVert_{\mathcal{E}}^2
&\le& Ch^{2\min(k+1,s)}|||u|||_{H^s(\mathcal{E}_h)}^2 +
Ch^{2\min(k+1,s)}\int_{0}^{T}\|u_t\|^2\, ds
\nonumber \\
&+& C\Delta t
\sum_{n=0}^{M-1}\left( \lVert
\theta^{n+1}
\rVert^2 + \lVert \theta^{n+1}
\rVert_{\mathcal{E}}^2\right) + C \Delta
t^2\|u_{tt}\|^2_{L^2(0,T;H^s(\mathcal{E}_h))}.
\end{eqnarray}
Using the discrete Gronwall's inequality, we obtain the estimate
for $\theta^M$ as
\[ \lVert \theta^M \rVert^2 + \lVert \theta^{M}
\rVert_{\mathcal{E}}^2 \le
C\left(h^{2\min(k+1,s)}\|u\|_{H^1(0,T,H^s(\mathcal{E}_h))}^2 + \Delta
t^2\|u_{tt}\|^2_{L^2(0,T;H^s(\mathcal{E}_h))}\right),\]
provided $\beta \ge 3$.
Using the triangle inequality, we can prove the required estimate for
$\lVert e \rVert_{\mathcal{E}}$ and $\lVert e \rVert$.
\end{proof}
\end{theorem}
\section{Numerical Results}
\setcounter{equation}{0}
In this section, we perform some numerical experiments to validate the
theoretical results. \\\\
We consider the following Rosenau equation
\begin{equation}
u_t + \frac{1}{2}u_{xxxxt} = f(u)_x \label{Eq5.1}
\end{equation}
with the boundary conditions (\ref{Eq1.3}) and the nonlinear function $f(u) =
10u^3-12u^5-\frac{3}{2}u$. The exact solution of the
equation \eqref{Eq5.1} is $u(x,t) = \text{sech}(x-t)$. Since
equation \eqref{Eq5.1} has been used as a benchmark example
by several authors (for instance
\cite{manickam,dgros}), we take the same example to compare with
existing results.\\\\
We choose the computational domain $\Omega = (-10,10)$ and the final
time $T = 1$. The equation is solved numerically with corresponding initial and
boundary conditions.\\\\
The order of convergence for the numerical method was computed by
using the formula
\begin{equation*}
p \approx \frac{\log\left(\frac{\lVert E_i \rVert}{\lVert E_{i + 1}
\rVert}\right)}{\log\left(\frac{h_i}{h_{i + 1}}\right)}, \; i
= 1, 2, 3, 4.
\end{equation*}
In Table \ref{tab:1}, we show the order of convergences for piecewise
quadratic and piecewise cubic basis functions.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{Quadratic Elements ($k=2$)} &
\multicolumn{3}{|c|}{Cubic Elements ($k=3$)}\\
\hline
$h$ & Error $\lVert e \rVert$ & Order & $h$ &Error $\lVert e \rVert$ & Order \\
\hline
$0.20000$ & $1.47828\times 10^{-2}$ &- & $0.40000$& $5.17824\times 10^{-2}$ & -\\
\hline
$0.18182$ & $1.12327\times 10^{-2}$ & $2.8815$ & $0.33331$ &$2.58529\times10^{-2}$ &$3.8099$\\
\hline
$0.16667$ & $8.68469\times 10^{-3}$ & $2.9567$ & $0.28571$ &$1.41359\times10^{-2}$ &$3.9164$\\
\hline
$0.15384$ & $6.87521\times 10^{-3}$ & $2.9189$ & $0.25000$ &$8.35109\times10^{-3}$ &$3.9416$\\
\hline
$0.14257$ & $5.46407\times 10^{-3}$ & $3.0999$ & $0.22222$ & $5.23371\times10^{-3}$ & $3.9672$\\
\hline
\end{tabular}
\end{center}
\caption{Order of convergence for $P_2$ and $P_3$ elements with
$\sigma_0 = \sigma_1 = 2000$.}
\label{tab:1}
\end{table}
\noindent
Figures \ref{fig:1}--\ref{fig:3} compare the exact solution profile with
that of the approximate solution obtained from the DGFEM. Figure
\ref{fig4} shows the solution profile at different time levels.
\begin{figure}[H]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=3.25cm,width=7.6cm]{QuadN50M1e4}
\caption{$N=50$}
\label{fig:1}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=3.25cm,width=7.6cm]{QuadN100M1e4}
\caption{$N=100$}
\label{fig:2}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=3.25cm,width=7.6cm]{QuadN200M1e4}
\caption{$N=200$}
\label{fig:3}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=3.25cm,width=7.6cm]{solution}
\caption{Solution profiles from $t=0$ to $t=1$.}
\label{fig4}
\end{subfigure}
\caption{Approximate solution using $P_2$ elements with exact solution $u(x) = \text{sech}(x-1)$}
\end{figure}
\noindent
We compare our numerical results with those of Choo {\it et al.}
\cite{dgros}. We observe that our solution profiles match very
accurately, and we have achieved third-order convergence for quadratic
elements and fourth-order convergence for cubic elements, which is
optimal. The proposed method extends easily to higher-degree polynomials
and to higher dimensions, whereas the cGdG method considered in
\cite{dgros} is difficult to apply in higher dimensions because it
requires $C^1$-elements.
\subsection{Decay Estimates}
In this subsection, we validate the decay estimates derived by
Park in \cite{park2}. As in \cite{manickam}, we consider the following equation
\begin{eqnarray}
u_t + u_{xxxxt} + u_x = f(u)_x, \quad (x,t) \in (0,1) \times
(0,T]\label{eq:rosenau}
\end{eqnarray}
with the initial condition
\begin{eqnarray}
u(x,0) = \phi_0(x), \label{eq:init}
\end{eqnarray}
and boundary conditions (\ref{Eq1.3}). Here $f(u) = \sum_{i=1}^n \frac{c_iu^{p_i+1}}{p_i+1}, \; c_i \in
\mathbb{R}, p_i>0.$ \\\\
It has been proven that the solution to the Rosenau equation with small
initial data decays like $\frac{1}{(1+t)^{1/5}}$ in the
$L^\infty$-norm, provided $\underset{1\le i\le n}{\min} p_i > 6$. As in \cite{manickam}, we take
$c_1 = -1, p_1 = 7$, $c_2 = \frac{4}{7}, p_2 = 8$, $c_3 = -\frac{4}{3}, p_3 = 9$ and
$\phi_0(x) = 0.001e^{-x^2}$ in \eqref{eq:init}. The solution curves for
\eqref{eq:rosenau} and \eqref{eq:init} for $t=0$ to $t=10$ are shown in
Figure \ref{fig:4}. The height of the initial pulse decreases with
time, indicating a decaying behavior of the solution. The decay in the
$L^\infty$-norm of the approximate solution with time is shown in
Figure \ref{fig:5}.
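For reference, here is a minimal Python sketch (helper names ours) of the
nonlinearity and the initial datum used in this experiment; note that
$\min_i p_i = 7 > 6$, so the decay estimate applies.
\begin{verbatim}
import numpy as np

# f(u) = sum_i c_i u^(p_i+1)/(p_i+1) with the parameters chosen above
C = [-1.0, 4.0 / 7.0, -4.0 / 3.0]
P = [7, 8, 9]

def f(u):
    return sum(c * u ** (p + 1) / (p + 1) for c, p in zip(C, P))

def phi0(x):
    return 0.001 * np.exp(-x ** 2)  # small initial pulse

# sanity check: the data are tiny, so f(u) is negligible next to u,
# i.e. we are in the small-data regime required by the decay estimate
x = np.linspace(0.0, 1.0, 5)
print(phi0(x), f(phi0(x)))
\end{verbatim}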
\begin{figure}[H]
\centering
\includegraphics[height=8cm,width=14cm]{sol}
\caption{Curves illustrating the decaying nature of the
solution}
\label{fig:4}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=8cm,width=14cm]{decay}
\caption{The variation of the $L^\infty$-norm of the
solution with respect to time ($t=0-100$).}
\label{fig:5}
\end{figure}
\section{Conclusion}
In this paper, we derived \textit{a priori} bounds and optimal error
estimates for the semidiscrete problem. Next, we discretized the
semidiscrete problem in the temporal direction using a backward Euler
method, and derived \textit{a priori} bounds and optimal error
estimates. We have validated the theoretical results by performing
some numerical experiments. Compared to the existing results, our
method requires less regularity of the original problem.
\section{Acknowledgements}
The authors would like to thank the Department of Science and Technology
(DST-FIST Level-1 Program, Grant No.\ SR/FST/MSI-092/2013) for providing
computational facilities.
\section{Introduction}
Our understanding of primordial fluctuations in the early
universe was revolutionized first with inflation and then
by the actual detection of temperature fluctuations in the
cosmic microwave background
(CMB). Inflation gave the spectrum of
fluctuations, but not the normalization. The COBE-DMR
experiment measured the amplitude of temperature
fluctuations on an angular scale that
was
acausal at last
scattering, and hence directly probed inflationary
fluctuations, and in particular the fluctuation strength.
More than twenty subsequent experiments have plugged the
causal gap, measuring fluctuations over angular scales less
than or of order that of the acoustic peak at
$\ell \simeq 220\,\Omega^{-1/2}$, which corresponds to the maximum
sound horizon in the early universe.
The shape of the fluctuation spectrum is now being probed.
The CMB fluctuations measure density fluctuations at $z \sim
1000$ on scales of hundreds of Mpc. At large redshifts one
degree subtends a comoving scale of 100 Mpc. A complementary
measure arises from galaxy redshift surveys. These measure
variations in the luminous matter density out to $\sim 300$
Mpc at the present epoch. One can combine, after choosing a
model, the CMB and
large-scale structure
(LSS)
measures of the fluctuation spectrum.
Here we will describe the current status of our
understanding of the shape of the primordial fluctuation
spectrum.
It is customary to use a two-parameter fit to the LSS power:
$\sigma_8$, the normalization at $8\,h^{-1}$ Mpc, and
$\Gamma$, a measure of the shape relative to CDM, nearly equal to
$\Omega\,h$ for CDM. Given the several data sets, each with a number of
independent data points, this may be an unnecessarily
restrictive approach. Of course any data set is imperfect,
with possible systematic errors, and the data sets have
different selection biases. However,
provided the bias is
scale-independent, one can renormalize the data sets and
examine detailed shape constraints. One has to decide
whether to compare a nonlinear power spectrum with the data
or whether to correct the data for nonlinearity and compare
the data with linear theory. We will employ the latter
approach here.
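To make the $(\sigma_8,\Gamma)$ parametrization concrete, the following
Python sketch (an illustration, not the fitting code used below) evaluates
a linear CDM-like spectrum with the standard BBKS transfer-function fit;
the amplitude $A$ is left arbitrary, since fixing it to $\sigma_8$ would
require the usual top-hat window integral.
\begin{verbatim}
import numpy as np

def bbks_transfer(k, Gamma):
    # BBKS fit to the CDM transfer function; k in h/Mpc
    q = k / Gamma
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2
               + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** -0.25)

def power(k, Gamma, n=1.0, A=1.0):
    # linear power spectrum P(k) = A k^n T(k)^2
    return A * k ** n * bbks_transfer(k, Gamma) ** 2

k = np.logspace(-3, 1, 400)        # h/Mpc
for Gamma in (0.5, 0.24):          # sCDM-like vs. low-density shape
    print(Gamma, k[np.argmax(power(k, Gamma))])
    # the turnover moves to larger scales (smaller k) as Gamma decreases
\end{verbatim}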
\section{CMB: Status of the Theory}
Inflation-generated
curvature fluctuations provide the
paradigm for interpreting the cosmic microwave background
temperature anisotropies. There are three components to
$\delta T/T$, schematically summarized as
$$\frac{\delta T}{T} = \vert \delta \phi + \delta
\rho_{\gamma} + \delta v \vert\,.$$
These are the gravitational potential, intrinsic and Doppler
contributions from the last scattering surface. The
combined effect of the first two terms results in the
Sachs-Wolfe effect $\delta T/T=\frac{1}{3}\vert\delta\phi\vert$
which represents the only superhorizon contributions to
$\delta T/T$. Inflationary initial conditions then require
that the Fourier component
$\vert\delta T/T\vert_k \propto
\vert \cos (kv_s t_{ls})\vert$
at the last scattering epoch
$t_{ls}$, where $k$ is the wavenumber and $v_s$ is the sound
speed, must be constant as $k\rightarrow 0$. Inflation of course
specifies the phases of density fluctuations that
decompose to sound waves of wavelength less than that of the
maximum sound horizon, $v_s t_{ls}\,$. Longer wavelengths correspond
to power-law
growing modes at horizon crossing.
The wave just
entering the horizon at last
scattering has a peak at wavenumber $n\pi/v_s t_{ls}$,
$n=1$, and a succession of waves crest at $n=2,\ 3, \dots$
before damping sets in as the photon mean free path increases
relative to the wavelength. The first acoustic peak
projects to $\delta T/T$ on angular scales
$\sim\,\Omega^{1/2}$ degree, and is a robust measure of the
curvature of the universe. Doppler peaks are 90\ifmmode^\circ\;\else$^\circ\;$\fi out of
phase
and of lower amplitude, so they fill in the troughs of the acoustic oscillations as
measured by the radiation power spectrum. Peak heights are determined in large part by choice of
$\Omega_B$ and $\Omega_\Lambda$. An
increase in $\Omega_B$ enhances the wave compression and reduces
the rarefaction phases. An
increase in $\Omega_\Lambda$
enhances the ratio of radiation to matter in a flat
model, and thereby boosts the peak potential decay and the low $\ell$
power via the early
integrated Sachs-Wolfe effect. Of course increasing the
spectral index $n$ also raises the peak height. Peak
heights are lowered by reionization and secondary
scattering. Not all of these degeneracies are removed by
examining the higher peaks. For example, combination
of $\Omega_B$ and $\Omega_\Lambda$ at specified $\Omega$ is
nearly
degenerate
in peak height and peak location
since the angular size-redshift relation depends only on $\Omega.$
Lensing by nonlinear and
quasilinear foreground structure redistributes the peak
power towards very high $\ell$ in a way that breaks the
degeneracies in the CMB. One can search for this effect with interferometer experiments \cite{met} at $\ell\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1000$ or else by correlating the CMB fluctuations with large-scale power from either large redshift surveys such as
the SDSS \cite{wanss} or via weak lensing distortions of the CMB
\cite{selz} or LSS \cite{hut}.
\section{Reconstruction of the Primordial Power Spectrum
from the CMB}
The inflationary, approximately scale-free, power spectrum $P(k)=Ak^n$,
$n\approx 1$, is modified by the transition from radiation to matter domination,
since subhorizon fluctuations do not grow during the radiation era. This
modifies $P(k)$ to $P(k)\propto k^{n-4} \approx k^{-3}$ on smaller scales, with the
turnover at the horizon scale at matter-radiation equality, namely
$12(\Omega h^2)^{-1}$ Mpc. The Boltzmann equation can be solved for temperature
fluctuations, mode by mode, and the solutions for $\delta T/T$ are scaled
to agree
with the quoted errors for each CMB experiment. For each specified cosmological
model, we then infer the power spectrum amplitude over scales that correspond to
the deprojection on the sky of the experimental window function
\cite{gw}. We confirm
that
standard CDM ($\Omega_{CDM} =1$, $h=0.5$) fits the CMB data rather poorly,
with best fit renormalization that corresponds to a high value of $\sigma_8$.
The $\Lambda$CDM model ($\Omega=1$, $\Omega_m=0.4$, $ h=0.6$) gives a
reasonably good fit with an acceptable value of
$\sigma_8.$ Much stronger constraints however
come when these fits are combined with LSS data.
\section{Reconstruction of $P(k)$ from Large-Scale Structure Data}
There are several large-scale structure data sets that one may use to
reconstruct $P(k)$. Galaxy redshift surveys include the Las Campanas Survey of
25,000 galaxies
\cite{lcrs}, the PSC$z$ survey of 15,000 galaxies \cite{pscz}, and the SSRS2/CfA2 survey
of 7,000 galaxies \cite{ssrs2}. There is also the APM cluster survey which probes to $\sim
300\,h^{-1}$ Mpc \cite{clusters}
and the real space inversion of the 2D APM galaxy survey
\cite{apm}. The
local mass function of clusters
\cite{vial}
has been used to measure
$\sigma_8\Omega^{0.6}$,
and the high redshift cluster abundance \cite{bah}
has been used to
break the degeneracy between $\sigma_8$ and $\Omega$.
Peculiar velocities and
large scale bulk flows also yield $\sigma_8\Omega^{0.6}$ in a completely
bias-independent approach, although systematic uncertainties remain large
\cite{kold}.
The redshift space surveys can be corrected in a straightforward way for
peculiar velocities on small scales and bulk flows on large scales, to derive
the real-space $P(k)$ under an assumed cosmological model \cite{pead}.
Correction of data in
the nonlinear regime
is best done by numerical simulation, but
can be performed using
an empirical formulation
calibrated to numerical simulations based on
a smooth interpolation from spherical collapse by a factor of 2
in radius on cluster scales \cite{peadd}. Renormalization of the various measures of $P(k)$
is effected by assuming that all measurements are subject to a scale-independent
bias, allowed to be independent for each probe of $P(k)$.
\section{Confrontation of $P(k)$ with CMB and LSS}
Model fitting to LSS alone results in the following conclusions.
Of course
the standard COBE-normalized
CDM model fails completely. Without a large scale-dependent bias factor on
10 -- 100 Mpc scales, peculiar velocities and the galaxy cluster
abundance are greatly overpredicted. Low density models circumvent these problems. The cluster abundance, evolution and baryon fraction are all
in satisfactory agreement with observations \cite{bah}.
However the combined CMB/LSS fits to $P(k)$ lead \cite{gw} to a surprising conclusion.
The surprise is that almost all
models, while occasionally faring better than sCDM, still provide
unacceptable fits to all of the data. Consider for example $\Lambda$CDM,
currently favored by the SN Ia Hubble diagram. The reduced $\chi^2$ is
2.1 for
70 degrees of freedom. Data set by data set, one still has a problem. For
example the values of $\chi^2$(d.o.f.)
are APM clusters: 25(8); LCRS 17(5); APM
44(9); IRAS 16(9). The acceptable data sets are CfA for 2 d.o.f., cluster
abundances/peculiar velocities for 3 d.o.f. and CMB for 34 d.o.f.
\section{ Neutrinos and LSS}
The only mildly
acceptable model (reduced $\chi^2 = 1.2$) is CHDM, hot and cold dark matter with
$\Omega=1$. This model overpredicts current cluster
abundances and underpredicts the small number of high redshift, luminous
x-ray clusters (2 at $z=0.5$, 1 at $z=0.8$).
However the cluster evolution constraint is disputed \cite{bla},
and
the local normalization
is not
necessarily robust.
The cluster baryon fraction provides an independent and
powerful constraint that favors $\Omega_m\approx 0.3$.
Of course this rests on the
reasonably
plausible assumption that clusters provide a fair sample of the baryon fraction of the universe. This need not necessarily be true if gas has had a complex history prior to cluster formation:
{\it e.g.} the gas may have been preheated
as is suggested by recent considerations of the entropy of intracluster
gas \cite{bal}. This would reduce the baryon fraction, but one can equally well imagine scenarios for cluster formation in dense sheets or filaments where the baryon fraction was already enhanced.
Consider the model preferred by the combination of
SNIa and cluster constraints, namely $\Lambda$CDM.
Figure 1
shows the $\Lambda$CDM power spectrum compared
with observations of Large-Scale Structure and CMB anisotropy.
One can pose the following question: does adding a hot component improve the marginally acceptable LSS fit? We find that as an admixture of
HDM is added to the dominant CDM, the combined fit to CMB and LSS deteriorates
(Figures 2,3).
The reason is that low
density
CDM models have a
$P(k)$ peak that is longward of the apparent peak in the APM data.
Adding HDM only
exacerbates
the mismatch.
Neutrino masses imprint a distinct signature on $P(k).$ This will eventually be a measurable probe of the neutrino mass, from LSS as well as from CMB. Indeed the LSS probe may potentially
be more powerful \cite{huet}.
There is more dynamical
range available in probing $P(k)$ with LSS on the neutrino free streaming scale, where the primary signature should be present. Even present data is sensitive to a neutrino mass of around an eV: for example we find that the fit changes significantly between 0.1 and 1 eV.
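As a rough back-of-envelope version of this sensitivity (not the actual
fitting procedure), one can combine the standard relation
$\Omega_\nu h^2 \simeq m_\nu/(94\,\mbox{eV})$ with the small-$f_\nu$ result
that free streaming suppresses small-scale power by
$\Delta P/P \approx -8\,\Omega_\nu/\Omega_m$ \cite{huet}:
\begin{verbatim}
def f_nu(m_nu_eV, Omega_m=0.4, h=0.6):
    # neutrino fraction f_nu = Omega_nu / Omega_m, one massive species
    Omega_nu = m_nu_eV / (94.0 * h ** 2)
    return Omega_nu / Omega_m

for m in (0.1, 1.0, 2.0):          # eV
    f = f_nu(m)
    # -8 f_nu is the leading-order suppression, valid only for small f_nu
    print(m, round(f, 3), round(-8.0 * f, 2))
\end{verbatim}
Already at a few tenths of an eV the suppression reaches the several-percent
level, which is why the fits respond between 0.1 and 1 eV.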
If $\Lambda$CDM is in fact the right model, our analysis yields an indirect
upper limit on the mass of the most massive neutrino species
of $m_\nu \leq 2$ eV. While there are
considerable systematic uncertainties in this approach, it is promising as a
complement to the direct evidence for mass difference between neutrino
species from SuperKamiokande \cite{fukuda} and the solar neutrino
problem \cite{jbah}, and is already beginning to conflict with results from
LSND \cite{ath} that require a large mass difference.
One class of exotic models is the following.
Take a model that fits all constraints except for the shape.
The best contender for such a model is $\Lambda$CDM.
Inspection of the LSS constraints reveals that
there is a deficiency of large-scale power near 100 Mpc.
One can add an ad hoc feature on this scale by considering
inflationary models with multiple scalar fields
(see Figure 4).
This could be generated, for example, by incomplete coagulation of bubbles
of new phase in a universe that has already been homogenized by a previous
episode of inflation \cite{amendola}.
One can tune the bubble size distribution to be sharply peaked at any
preferred
scale. This results in nongaussian features and excess power where needed.
The non-gaussianity provides a distinguishing characteristic.
Other suggestions that fit both CMB and LSS data appeal to an inflationary relic of excess power from broken scale invariance, arising from double inflation in a $\Lambda$CDM model,
which results in a gaussian feature that is essentially a step in $P(k)$ at the desired wavenumber \cite{les}. This
improves the fit in much the same way as adding a hot component to CDM
improves the empirical fit.
While such ad hoc fits may seem unattractive, one could argue that
other aspects of cosmological model building are equally ad hoc,
such as postulating a universe in which
$\Lambda$ is only becoming dynamically important at the present epoch.
Clearly one has to accommodate such arguments in order to fit the data,
if the data is indeed accepted at face value.
Moreover there are positive side effects that arise from the tuned void
approach.
The bubble-driven shells provide a source of overdensities on large scales. Rare shell interactions could produce nongaussian massive galaxies or clusters at low or even high redshift: above a critical surface density threshold
gas cooling would help concentrate gas and aid collapse.
If massive
galaxies were discovered at say $z>5$ or a massive galaxy cluster at $z>2$ this
would be another indication that the current library of cosmological models is
inadequate. New data sets such as SDSS and 2DF are
urgently needed to verify whether the shape discrepancies in $P(k)$ will
persist.
\section{Summary}
If the data are accepted as mostly being free of systematics
and ad hoc additions to the primordial power spectrum are avoided,
there is no acceptable model for
large-scale structure. No one LSS data set can be blamed. Perhaps it is best
to wait for improved data. The Sloan and 2DF surveys are already acquiring
galaxy redshifts. However another philosophy is to search for more exotic
models. Consider for example the primordial isocurvature mode. This has the
advantage of forming primordial black holes of stellar mass, since normalization
to large-scale structure and the present spectrum over 10 -- 50 Mpc requires
a spectral index that generates nonlinear fluctuations at roughly the
epoch of the quark-hadron phase transition, when the horizon contained
approximately one solar mass \cite{sugi}
(hence the primordial black holes may be the possibly observed MACHOs).
The goodness of fit of this model to
the combined CMB/LSS data is similar to that of the $\Lambda$CDM model. One cannot distinguish with current data between an exotic isocurvature model and
$\Lambda$CDM, although neither model is satisfactory. To
improve on this, clearly something even more exotic is required. It may
be that independent observations will force us in this direction.
\section*{Acknowledgements}
We gratefully acknowledge support by NSF and NASA.
\section*{References}
\section{Introduction}
In this paper we consider extremal Betti numbers of Vietoris-Rips complexes. Given a finite set of points $S$ in Euclidean space $\mathbb{R}^d$, we define the \textit{Vietoris-Rips complex} $R^{\epsilon}(S)$, or \textit{Rips complex}, as the simplicial complex whose faces are given by all subsets of $S$ with diameter at most $\epsilon$. Take $R(S) := R^1(S)$. Our main goal in this paper is to determine the largest topological Betti numbers of $R(S)$ in terms of $|S|$ and $d$.
Rips complexes have a wide range of applications. Vietoris \cite{Vietoris} used Rips complexes to calculate the homology groups of metric spaces. Other applications include geometric group theory \cite{GeoGroup}, simplicial approximation of point-cloud data \cite{PointModel1}, \cite{PointModel2}, \cite{PointModel3}, \cite{PointModel4}, and modeling communication between nodes in sensor networks \cite{Sensor1}, \cite{Sensor2}, \cite{Sensor3}. In the specific case of the Euclidean plane, the topology of Rips complexes is studied in \cite{Planar}. Rips complexes are used in manifold reconstruction in \cite{ChazOud}.
One of the main uses of the Rips complex is to approximate the topology of a point cloud. The point cloud might be a random sample of points from a manifold or some other topological space. Several papers, such as \cite{ChazOud}, give conditions on the point sample under which the Rips complex can be used to determine the homology and homotopy groups of the underlying space. It is generally assumed that the Rips complex $R^{\epsilon}(S)$ is chosen in such a way that the points of $S$ are dense in the underlying space, relative to $\epsilon$.
For a fixed base field ${\bf k}$, we denote the homology groups of a simplicial complex $\Gamma$ by $\tilde{H}_p(\Gamma;{\bf k})$. The topological Betti numbers are given by $\tilde{\beta}_p(\Gamma;{\bf k}) := \dim_{{\bf k}}(\tilde{H}_p(\Gamma;{\bf k}))$. All of our results are independent of ${\bf k}$, and so from now on we suppress the base field from our notation. We define $$M_{p,d}(n) := \max\{\tilde{\beta}_p(R(S)): S \subset \mathbb{R}^d, |S| \leq n\}.$$
The \v{C}ech complex is another simplicial complex that captures the topology of a point cloud. Given $S \subset \mathbb{R}^d$, the \v{C}ech complex $C(S)$ has vertex set $S$ and faces given by all subsets of $S$ that are contained in a ball of radius $1/2$. By the Nerve Lemma \cite{Bjorner}, $\tilde{\beta}_k(C(S)) = 0$ for $k \geq d$. By contrast, if $d \geq 2$, then $\tilde{\beta}_k(R(S))$ can be nonzero for arbitrarily large $k$.
[To be added: discussion of Matt Kahle's work]
In the interest of understanding the topology of Rips complexes, we consider the largest possible topological Betti numbers. We find that nontrivial upper bounds are possible, but also that the Betti numbers can be quite large under specialized constructions.
The structure of this paper is as follows. We review some facts on simplicial complexes in Section \ref{prelim}. In Section \ref{Hom1}, we prove that $M_{1,d}(n)$ grows linearly in $n$ for each fixed $d$. In Section \ref{Hom2}, we prove that $M_{2,2}(n)$ grows linearly in $n$, and in general, for each fixed $\delta$ and $d$, $M_{2,d}(n) < \delta n^2$ for sufficiently large $n$. We also give a construction to prove that $M_{2,5}(n) > Cn^{3/2}$ for some constant $C$ and sufficiently large $n$. In Section \ref{Hom3}, we extend the results of the previous sections by showing that for each fixed $\delta, p, d$, $M_{p,d}(n) < \delta n^p$ for sufficiently large $n$, and also that $M_{p,5}(n) > C_pn^{p/2+1/2}$, for a value $C_p$ that depends only on $p$ and sufficiently large $n$. In Section \ref{QuasiRips} we consider similar bounds on the Betti numbers of related objects known as quasi-Rips complexes. Our proofs make frequent use of the Mayer-Vietoris sequence and a careful analysis of the structure of the first homology group of a Rips complex.
\section{Definitions and preliminaries}
\label{prelim}
An abstract simplicial complex $\Gamma$ on a finite set $S$, called the vertex set, is a collection of subsets, called \textit{faces}, of $S$ that is closed under inclusion and contains all singleton subsets. A face with two elements is called an \textit{edge}. For convenience, we generally suppress commas and braces when expressing faces of a simplicial complex. We also refer to the vertex set of $\Gamma$ by $V(\Gamma)$.
If $F$ is a face of $\Gamma$, then we define the link $\mbox{\upshape lk}\,_\Gamma(F)$, or $\mbox{\upshape lk}\,(F)$ when $\Gamma$ is implicit, as $\{G \in \Gamma: G \cup F \in \Gamma, G \cap F = \emptyset\}$. The star $\mbox{\upshape st}\,_\Gamma(F) = \mbox{\upshape st}\,(F)$ is $\{G \in \Gamma: G \cup F \in \Gamma\}$. If $\Gamma$ is a Rips complex $R(S)$, then the stars and links are also Rips complexes. For an arbitrary subset $F \subset S$, define $N(F) := \{v \in S-F: \mbox{\upshape dist}\,(u,v) \leq 1$ for all $u \in F\}$. Then for $F \in R(S)$, $\mbox{\upshape lk}\,(F) = R(N(F))$ and $\mbox{\upshape st}\,(F) = R(N(F) \cup F)$. The \textit{induced subcomplex} $\Gamma[W]$ for $W \subset V(\Gamma)$ is defined as $\{F: F \in \Gamma, F \subset W\}$. For a Rips complex $R(S)$, $R(S)[W] = R(W)$.
Every Rips complex is also a flag complex. A \textit{flag} complex, also called a \textit{clique} complex, is a simplicial complex $\Gamma$ such that $F \in \Gamma$ whenever all $2$-subsets of $F$ are edges in $\Gamma$. Thus a flag complex is determined by its edges. For a graph $G$, we define $X(G)$ to be the unique flag simplicial complex with the same edges as $G$.
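Since a Rips complex is the flag complex of its unit-distance graph, it is
determined by pairwise distances alone and is straightforward to construct.
A minimal Python sketch (function names ours):
\begin{verbatim}
from itertools import combinations
import math

def rips_complex(S, eps=1.0, max_dim=3):
    # faces of R^eps(S): all subsets of S with diameter <= eps;
    # points are tuples, faces are returned as frozensets of indices
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    n = len(S)
    edges = {frozenset(e) for e in combinations(range(n), 2)
             if dist(S[e[0]], S[e[1]]) <= eps}
    faces = [frozenset([v]) for v in range(n)] + list(edges)
    # flag property: F is a face iff every 2-subset of F is an edge
    for d in range(3, max_dim + 2):
        faces += [frozenset(F) for F in combinations(range(n), d)
                  if all(frozenset(e) in edges for e in combinations(F, 2))]
    return faces

# four corners of a unit square: the diagonals exceed 1, so R(S) is a
# hollow 4-cycle with beta_1 = 1
S = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(sum(1 for F in rips_complex(S) if len(F) == 2))  # 4 edges
\end{verbatim}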
Let $\Gamma$ be a simplicial complex with a subcomplex $\Gamma'$. Let $\phi: \tilde{H}_p(\Gamma') \rightarrow \tilde{H}_p(\Gamma)$ be the map on homology induced by inclusion. We define $\Omega_{p}(\Gamma,\Gamma')$ to be the image of $\phi$.
Our proofs give special attention to the structure of the first homology group. Given a simplicial complex $\Gamma$ with $\{v_1, \ldots, v_r\} \subset V(\Gamma)$ and edges $v_1v_2, \ldots, v_{r-1}v_r, v_rv_1$, the notation $C = (v_1, \ldots, v_r)$ refers to the graph theoretic cycle in $\Gamma$. Taking subscripts mod $r$, we equivalently think of $C$ as the simplicial $1$-chain $\sum_{i=1}^r \pm v_iv_{i+1}$, with signs chosen so that $\partial C = 0$. We denote by $[C]_{\Gamma}$, or $[C]$ when $\Gamma$ is clear from context, the equivalence class of $C$ in $\tilde{H}_1(\Gamma)$.
\begin{lemma}
\label{CycleBasis}
There is a basis for $\tilde{H}_1(\Gamma)$ such that every element of the basis is the equivalence class of a simple, chord-free cycle.
\end{lemma}
\begin{proof}
It is a standard fact in algebraic topology that $\tilde{H}_1(\Gamma)$ has a basis of equivalence classes of cycles. Let $B$ be such a basis. If $[C] \in B$ is the equivalence class of a non-simple cycle of the form $C = (v_1, \ldots, v_r, v_1, v'_2, \ldots, v'_{r'})$, then replace $[C]$ in $B$ by $[C_1] = [(v_1, \ldots, v_r)]$ and $[C_2] = [v_1, v'_2, \ldots, v'_{r'}]$. Also, if $C = (v_1, \ldots, v_r)$ and $C$ has a chord $v_iv_j$, then replace $[C]$ by $[C_1] = [(v_1, \ldots, v_i,v_j, \ldots, v_r)]$ and $[C_2] = [(v_i, \ldots, v_j)]$. Then delete elements from $B$ until $B$ is again a basis for $\tilde{H}_1(\Gamma)$. Repeat this operation until all elements of $B$ are equivalence classes of simple, chord-free cycles.
\end{proof}
\section{Results on $M_{1,d}(n)$ and lemmas}
\label{Hom1}
In this section we prove a linear upper bound on $M_{1,d}(n)$, and we also give some lemmas on the structure of $\tilde{H}_1(R(S))$. Those lemmas are needed to prove results on higher homology. Before the main theorem of this section, we need a general fact on the homology of simplicial complexes.
\begin{lemma}
\label{LinkLemma}
Consider $v \in S$. Then for all $p$, $\tilde{\beta}_p(R(S)) \leq \tilde{\beta}_p(R(S-v)) + \tilde{\beta}_{p-1}(\mbox{\upshape lk}\,(v))$.
\end{lemma}
\begin{proof}
Consider $\Delta := R(S-v)$ and $\Delta' = \mbox{\upshape st}\,_{R(S)}(v)$. Then $\Delta \cup \Delta' = R(S)$ and $\Delta \cap \Delta' = \mbox{\upshape lk}\,(v)$. Since $\Delta'$ is a cone, that is, $v$ is contained in all maximal faces of $\Delta'$, all of its homology groups vanish. The lemma then follows from the Mayer-Vietoris sequence.
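Explicitly, the relevant segment of the sequence is
$$\tilde{H}_p(\Delta) \oplus \tilde{H}_p(\Delta') \rightarrow \tilde{H}_p(R(S)) \rightarrow \tilde{H}_{p-1}(\Delta \cap \Delta') = \tilde{H}_{p-1}(\mbox{\upshape lk}\,(v)),$$
and since $\tilde{H}_p(\Delta') = 0$, exactness yields $\tilde{\beta}_p(R(S)) \leq \tilde{\beta}_p(\Delta) + \tilde{\beta}_{p-1}(\mbox{\upshape lk}\,(v))$.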
\end{proof}
\begin{theorem}
For every $d$, there exists a constant $C_d$ such that $M_{1,d}(n) \leq C_dn$.
\end{theorem}
\begin{proof}
Let $B^d$ be a closed ball of radius $1$ in $\mathbb{R}^d$, and let $$C_d := \max\{|T|: T \subset B^d, \mbox{\upshape dist}\,(u,v) > 1 \mbox{ for all } u,v \in T\}-1.$$ Choose $v \in S$. Then $\mbox{\upshape lk}\,(v) = R(N(v))$ is a Rips complex on a point set contained in a ball of radius $1$. Suppose that $R(N(v))$ has $k$ connected components with representative vertices $v_1, \ldots, v_k$. Then for all $1 \leq i < j \leq k$, $\mbox{\upshape dist}\,(v_i,v_j) > 1$. Thus $k \leq C_d+1$, and $\tilde{\beta}_0(R(N(v))) \leq C_d$.
We prove the theorem by induction on $n$, with the base case $n=0$ evident. By the inductive hypothesis, $\tilde{\beta}_1(R(S-v)) \leq C_d(n-1)$. Also, $\tilde{\beta}_0(R(N(v))) \leq C_d$, as shown above. The result follows from Lemma \ref{LinkLemma}.
\end{proof}
Our next lemma relates the first Betti number of the clique complex of a certain kind of graph to the zeroth Betti number of a related graph. In the following, we may think of $X(G)$ as $R(U \sqcup V)$, where $U$ and $V$ are both clusters of points of diameter at most $1$.
\begin{lemma}
\label{BipartiteLemma}
Let $G$ be a graph with vertex set $U \sqcup V$ such that all edges $uu'$ and $vv'$ are in $G$ for $u,u' \in U$, $v,v' \in V$. Let $G'$ be the bipartite graph on $U \sqcup V$ obtained from $G$ by deleting all $uu'$ and $vv'$ for $u,u' \in U$ and $v,v' \in V$, and then deleting any isolated vertices. Then $\tilde{\beta}_1(X(G)) = \tilde{\beta}_0(G')$. Let $u_iv_i$, $u_i \in U, v_i \in V$, $1 \leq i \leq q$ be a set of representative edges of the $q$ components of $G'$. Then the cycles $[(u_1,u_i,v_i,v_1)]$ for $2 \leq i \leq q$ can be taken as a basis for $\tilde{H}_1(X(G))$.
\end{lemma}
\begin{proof}
Suppose that $G'$ has $q$ connected components. We show that $\tilde{\beta}_1(X(G)) = \tilde{\beta}_0(G') = \max\{0,q-1\}$ by induction on $q$. In the case that $q=0$, $X(G)$ is the disjoint union of simplices on $U$ and $V$, and so $\tilde{\beta}_1(X(G)) = 0$.
Next we show that $\tilde{\beta}_1(X(G)) = 0$ if $q=1$. Enumerate the edges of $G'$ by $e_1, \ldots, e_z$ in such a way that for all $i > 1$, $e_i$ shares an endpoint with some previous edge. For all $i$, construct $G_i$ from $G$ by removing $e_{i+1},\ldots,e_z$ from $G$. Note that $G_z = G$. Since $X(G_1)$ consists of two disjoint simplices connected by a single edge, $\tilde{\beta}_1(X(G_1))=0$. We show by induction on $i$ that $\tilde{\beta}_1(X(G_i))=0$ for all $i$, and in particular that $\tilde{\beta}_1(X(G)) = 0$.
Let $C$ be a graph theoretic cycle in $G_i$ and consider $[C]_{X(G_i)}$ for $i > 1$. Let $e_i = uv$ for $u \in U, v \in V$, and suppose without loss of generality (perhaps by switching the roles of $U$ and $V$) that $G_i$ contains an edge $uv'$ for some $v \neq v' \in V$. This assumption is valid because $e_i$ shares an endpoint with $e_j$ for some $j < i$. If $C$ contains $uv$, let $C'$ be the cycle obtained by replacing $uv$ in $C$ by the two edges $uv', v'v$. Otherwise, set $C' := C$. Since $C'$ avoids $e_i$, $C'$ is a cycle in $G_{i-1}$. Then $[C']_{X(G_{i-1})} = 0$ by the inductive hypothesis, and hence $[C']_{X(G_i)} = 0$ since $X(G_{i-1}) \subset X(G_i)$. We have $uvv' \in X(G_i)$ by the flag property, and so $[C]_{X(G_i)}=[C']_{X(G_i)} = 0$. This proves that $\tilde{\beta}_1(X(G_i)) = 0$.
Now suppose that $q \geq 2$, and let $W$ be the vertex set of a component of $G'$. Let $\tilde{G}$ be obtained from $G$ by removing the edges of $G$ with one endpoint in $U \cap W$ and the other in $V \cap W$. Set $\Delta := X(\tilde{G})$. Then $\Delta$ is connected and satisfies $\tilde{\beta}_1(\Delta) = q-2$ by the inductive hypothesis. Set $\Delta' := X(G)[W]$. By the $q=1$ case, $\tilde{\beta}_1(\Delta') = 0$, and also $\Delta'$ is connected. We also have that $\Delta \cup \Delta' = X(G)$ and $\Delta \cap \Delta'$ is the disjoint union of simplices on $W \cap U$ and $W \cap V$. Hence $\tilde{\beta}_0(\Delta \cap \Delta') = 1$ and all other Betti numbers of $\Delta \cap \Delta'$ vanish. Apply the portion of the Mayer-Vietoris sequence with components $\Delta$ and $\Delta'$ $$0 \rightarrow \tilde{H}_1(\Delta) \stackrel{\phi}{\rightarrow} \tilde{H}_1(X(G)) \stackrel{\partial}{\rightarrow} \tilde{H}_0(\Delta \cap \Delta') \rightarrow 0$$ to conclude that $\tilde{\beta}_1(X(G)) = q-1$.
Now we prove that the cycles $[(u_1,u_i,v_i,v_1)]$ for $2 \leq i \leq q$ can be taken as a basis for $\tilde{H}_1(X(G))$ by induction on $q$, with the cases $q=0$ and $q=1$ trivial. Assume that $u_q,v_q \in W$, with $W$ as above. Note that the homology groups in the above Mayer-Vietoris sequence are vector spaces, and hence the sequence splits. Since the inclusion-induced map $\phi$ is injective, the set of cycles $\{[(u_1,u_i,v_i,v_1)]\}$ for $2 \leq i \leq q-1$ is a basis for $\Omega_1(X(G),\Delta)$. Also, by the structure of the connecting homomorphism, $\partial([(u_1,u_q,v_q,v_1)]) = \pm [v_q - u_q]$ is a nonzero element of $\tilde{H}_0(\Delta \cap \Delta')$. This proves the result.
\end{proof}
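The lemma reduces a homology computation to counting connected components,
which is easy to do directly. A small Python sketch (names ours) for the
situation of the lemma:
\begin{verbatim}
from collections import defaultdict

def beta_1(cross_edges):
    # beta_1 of X(G) for two cliques U, V joined by cross_edges,
    # via the lemma: beta_1 = max(0, #components of G' - 1)
    adj = defaultdict(set)
    for u, v in cross_edges:
        adj[('U', u)].add(('V', v))
        adj[('V', v)].add(('U', u))
    seen, q = set(), 0
    for node in list(adj):
        if node not in seen:
            q += 1
            stack = [node]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x])
    return max(0, q - 1)

# two cross edges in different components of G' span one cycle class
print(beta_1([(1, 1), (2, 2)]))  # 1
\end{verbatim}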
\begin{corollary}
\label{BipartiteGen2}
Let all quantities be as in Lemma \ref{BipartiteLemma}, and suppose that $X(G)$ is an induced subcomplex of some larger complex $\Gamma$. Then there exists an edge set $\{u_iv_i: u_i \in U, v_i \in V, 1 \leq i \leq q'\}$ for some $q' \leq q$, such that each edge is in a different component of $G'$ and the set of cycles $\{[(u_1,u_i,v_i,v_1)]\}$ for $2 \leq i \leq q'$ is a basis for $\Omega_1(\Gamma,X(G))$.
\end{corollary}
\begin{proof}
Take the set of cycles from Lemma \ref{BipartiteLemma} and reduce it to a linearly independent set in $\Omega_1(\Gamma,X(G))$ with the same span.
\end{proof}
Now we begin constructing our regular form of a basis for $\tilde{H}_1(R(S))$. For a given $\epsilon > 0$, we partition $\mathbb{R}^d$ into $\epsilon$-cubes. We say that $K \subset \mathbb{R}^d$ is an $\epsilon$-\textit{cube} if there exist integers $m_1, \ldots, m_d$ such that $K$ is the product of half-open intervals $[m_1 \epsilon, (m_1+1)\epsilon) \times \ldots \times [m_d \epsilon, (m_d+1)\epsilon)$. If $\epsilon \leq d^{-1/2}$ and $S$ is a finite subset of some $\epsilon$-cube $K$, then $R(S)$ is a simplex.
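Assigning points to their $\epsilon$-cubes amounts to taking integer parts
of the scaled coordinates; a short Python sketch (names ours):
\begin{verbatim}
import math
from collections import defaultdict

def cube_index(p, eps):
    # the eps-cube [m_1 eps, (m_1+1) eps) x ... containing p
    return tuple(math.floor(c / eps) for c in p)

def group_by_cube(S, eps):
    cubes = defaultdict(list)
    for p in S:
        cubes[cube_index(p, eps)].append(p)
    return cubes

# with eps = d**-0.5 each cube has diameter eps * sqrt(d) = 1, so the
# Rips complex on the points of a single cube is a simplex, as noted above
print(cube_index((0.3, -0.2), 2 ** -0.5))  # (0, -1)
\end{verbatim}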
The next lemma gives our first form for a basis of $\tilde{H}_1(R(S))$. Call a basis of the prescribed form $C_{d,r,\epsilon}$-\textit{regular}.
\begin{lemma}
\label{RegularGenerators}
Let $S$ be a finite subset of $\mathbb{R}^d$ contained in a ball $D$ of radius $r$, and fix $\epsilon \leq d^{-1/2}$. Then there exists a constant $C_{d,r,\epsilon}$, which depends only on $d$, $r$, and $\epsilon$, such that the following holds. There exists a basis of $\tilde{H}_1(R(S))$ such that all but at most $C_{d,r,\epsilon}$ of the basis elements are of the form $[(u,u',v',v)]$, where $u$ and $u'$ are in the same $\epsilon$-cube, and $v$ and $v'$ are in the same $\epsilon$-cube.
\end{lemma}
If a cycle $C = (u,u',v',v)$ satisfies the condition that $u$ and $u'$ are in the same $\epsilon$-cube, and $v$ and $v'$ are in the same $\epsilon$-cube, then we say that $C$ is $\epsilon$-\textit{simple}.
\begin{proof} There is a set $\mathcal{K} = \{K_1, \ldots, K_\kappa\}$ of $\kappa := (\lceil 2r/\epsilon \rceil +1)^d$ $\epsilon$-cubes that cover $S$. Choose a basis $B$ of $\tilde{H}_1(R(S))$ so that each basis element is the equivalence class of a simple, chord-free graph theoretic cycle in $R(S)$, as allowed by Lemma \ref{CycleBasis}. Given three points $u,v,w \in S \cap K_i$ for some $i$, $R(u,v,w)$ is a simplex. Hence, given a cycle $[C] \in B$, $C$ contains at most $2$ vertices in $K_i$, which implies that $C$ contains at most $2 \kappa$ vertices in total. We say that two cycles $C$ and $C'$ are \textit{near} each other if, by labeling vertices appropriately, $C=(v_1,\ldots,v_k)$, $C'=(v_1',\ldots,v_k')$, and for all $i$, $v_i$ and $v_i'$ are in the same $\epsilon$-cube. Nearness is an equivalence relation. There are at most $C_{d,r,\epsilon} := \sum_{i=1}^{2\kappa}\kappa^i$ nearness equivalence classes for simple, chord-free cycles of length at most $2 \kappa$ in $D$.
Suppose that $[C]=[(v_1,\ldots,v_k)] \in B$ and $[C']=[(v_1',\ldots,v_k')] \in B$ are near each other and are not $\epsilon$-simple. The following subscripts are understood mod $k$. Then $[C'] = [C] + \sum_{i=1}^k [(v_i',v_{i+1}',v_{i+1},v_i)]$. Remove $[C']$ from $B$ and add each of the $[(v_i',v_{i+1}',v_{i+1},v_i)]$ to $B$. Then reduce $B$ to a basis for $\tilde{H}_1(R(S))$ by removing elements that are linear combinations of other elements. After this reduction, all elements of $B$ are equivalence classes of simple, chord-free cycles; the reason is that if $(v_i',v_{i+1}',v_{i+1},v_i)$ has a chord, then $[(v_i',v_{i+1}',v_{i+1},v_i)] = 0$ by the fact that $R(S)$ is flag. This operation strictly decreases the number of non-$\epsilon$-simple elements of $B$ while maintaining the span of $B$. Repeat this operation as many times as possible; then $B$ contains at most $C_{d,r,\epsilon}$ non-$\epsilon$-simple generators.
\end{proof}
We further refine our basis for $\tilde{H}_1(R(S))$. Let $K$ be a distinguished $\epsilon$-cube and $W := K \cap S$. We say that a basis $B$ for $\tilde{H}_1(R(S))$ is $W$-\textit{regular} if all but $C_{d,r,\epsilon}+{\kappa \choose 2}$ elements $[C] \in B$ are of one of the following two forms. \newline
1) $C = (w,w',v',v)$ with $w,w' \in W$ and $v,v'$ in the same $\epsilon$-cube. \newline
2) $C = (u,u',v',v)$ with $u,u'$ in the same $\epsilon$-cube and $v,v'$ in the same $\epsilon$-cube, and furthermore there is no face $wuv$ or $wu'v'$ for any $w \in W$.
\begin{lemma}
\label{RegGen2}
Let $S$ be as in Lemma \ref{RegularGenerators}, and let $W$ be the intersection of a fixed $\epsilon$-cube with $S$. Then $\tilde{H}_1(R(S))$ has a $W$-regular basis.
\end{lemma}
\begin{proof}
Let $\mathcal{K} = \{K_1, \ldots, K_\kappa\}$ be a set of $\epsilon$-cubes that cover the points of $S$, with $K=K_1$ if $W \neq \emptyset$. By Lemma \ref{RegularGenerators}, the equivalence classes of $\epsilon$-simple cycles in $R(S)$ span a subspace $\Omega$ of $\tilde{H}_1(R(S))$ with $\dim(\Omega) \geq \tilde{\beta}_1(R(S)) - C_{d,r,\epsilon}$. It is clear that $$\Omega \subseteq \sum_{i,j}\Omega_1(R(S),R(S \cap (K_i \cup K_j))),$$ and since each $\Omega_1(R(S),R(S \cap (K_i \cup K_j)))$ is spanned by $\epsilon$-simple cycles by Corollary \ref{BipartiteGen2}, $$\Omega \supseteq \sum_{i,j}\Omega_1(R(S),R(S \cap (K_i \cup K_j))).$$ We first construct a basis $B$ for $\Omega$ as follows. By Corollary \ref{BipartiteGen2}, for all $1 \leq i < j \leq \kappa$ we may choose integers $p_{i,j}$ and a basis $B_{i,j}$ for $\Omega_1(R(S),R(S \cap (K_i \cup K_j)))$ given by $\{[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_k,u^{i,j}_k)], 2 \leq k \leq p_{i,j}\}$ with the properties prescribed in Corollary \ref{BipartiteGen2}. Then let $B$ be a linearly independent subset of $\cup_{1 \leq i < j \leq \kappa}B_{i,j}$ with the same span. If $W = \emptyset$, then an extension of $B$ to a basis for $\tilde{H}_1(R(S))$ is $W$-regular, as every element of $B$ satisfies the second condition in the definition of a $W$-regular basis. So now suppose that $W \neq \emptyset$.
Suppose that there exist distinct $w, w' \in W$ so that for some $1 \leq k < k' \leq p_{i,j}$, there exist faces $wu^{i,j}_kv^{i,j}_k$ and $w'u^{i,j}_{k'}v^{i,j}_{k'}$. Then $$[(u^{i,j}_k,v^{i,j}_k,v^{i,j}_{k'},u^{i,j}_{k'})] = [(u^{i,j}_k,w,v^{i,j}_k,v^{i,j}_{k'},w',u^{i,j}_{k'})].$$ By the existence of the edge $ww'$, this is $[(u^{i,j}_{k'},u^{i,j}_{k},w,w')] + [(v^{i,j}_{k},v^{i,j}_{k'},w',w)]$. Then replace $[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k'},u^{i,j}_{k'})]$ with $[(u^{i,j}_{k'},u^{i,j}_{k},w,w')]$ and $[(v^{i,j}_{k},v^{i,j}_{k'},w',w)]$ in $B$, and then remove elements from $B$ until the set is linearly independent with the same span. This operation does not decrease $|B|$, and it strictly decreases the number of elements $[C] \in B$ such that $C$ does not have a vertex in $W$. Redefine variables so that $B_{i,j}$ is again of the form $\{(u^{i,j}_1,v^{i,j}_1,v^{i,j}_k,u^{i,j}_k)$, $2 \leq k \leq p_{i,j}\}$ for a new value of $p_{i,j}$. Repeat this operation as many times as possible.
Also, there cannot exist $w \in W$ such that there are faces $wu^{i,j}_kv^{i,j}_k$ and $wu^{i,j}_{k'}v^{i,j}_{k'}$ for $k \neq k'$, since in that case faces $wu^{i,j}_ku^{i,j}_{k'}$ and $wv^{i,j}_kv^{i,j}_{k'}$ also exist and $[(u^{i,j}_k,v^{i,j}_k,v^{i,j}_{k'},u^{i,j}_{k'})]=0$. If $k=1$, this violates the basis assumption. If $k>1$,
\begin{eqnarray*}
& & [(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k},u^{i,j}_{k})] = \\
& & [(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k},u^{i,j}_{k})] + [(u^{i,j}_k,v^{i,j}_k,v^{i,j}_{k'},u^{i,j}_{k'})] = \\
& & [(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k},v^{i,j}_{k'},u^{i,j}_{k'},u^{i,j}_{k})].
\end{eqnarray*}
By the existence of faces $u_1^{i,j}u_k^{i,j}u_{k'}^{i,j}$ and $v_1^{i,j}v_k^{i,j}v_{k'}^{i,j}$, this implies that $[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k},u^{i,j}_{k})] = [(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k'},u^{i,j}_{k'})]$, also a contradiction to the basis assumption.
We conclude that for each fixed pair $(i,j)$, there exists at most one value of $k$ such that there exists $w \in W$ and a face $wu^{i,j}_kv^{i,j}_k$. If such a face exists and $k \neq 1$, then remove $[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k},u^{i,j}_{k})]$ from $B$. If $k=1$, note that $$[(u^{i,j}_2,v^{i,j}_2,v^{i,j}_{k'},u^{i,j}_{k'})] = -[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{2},u^{i,j}_{2})] + [(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k'},u^{i,j}_{k'})]$$ by the existence of faces $u^{i,j}_1u^{i,j}_2u^{i,j}_{k'}$ and $v^{i,j}_1v^{i,j}_2v^{i,j}_{k'}$. Then replacing $[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{k'},u^{i,j}_{k'})]$ by $[(u^{i,j}_2,v^{i,j}_2,v^{i,j}_{k'},u^{i,j}_{k'})]$ for all $k' > 2$ and removing $[(u^{i,j}_1,v^{i,j}_1,v^{i,j}_{2},u^{i,j}_{2})]$ decreases $|B|$ by $1$ and preserves linear independence of $B$. Doing this for all $1 \leq i < j \leq r$, $|B|$ decreases by at most ${\kappa \choose 2}$. Then extend $B$ to a basis for $\tilde{H}_1(R(S))$. This proves the result.
\end{proof}
We need yet another refinement of our basis. We say that $B$ is a $W$-\textit{strongly regular} basis if the following holds. For every pair of $\epsilon$-cubes $K_i$ and $K_j$ such that $R(S)$ has an edge with one endpoint in $K_i$ and another in $K_j$, choose a distinguished edge $u^{i,j}v^{i,j}$ with $u^{i,j} \in K_i, v^{i,j} \in K_j$. Then all but $C_{d,r,\epsilon}+{\kappa \choose 2}$ elements of $B$ satisfy one of the two conditions in the definition of a $W$-regular basis and are also of the form $[(u^{i,j},v^{i,j},v',u')]$ for some $u' \in K_i$ and $v' \in K_j$. Next we verify that $\tilde{H}_1(R(S))$ has a $W$-strongly regular basis.
\begin{lemma}
\label{RegGen3}
Let $S$ be as in Lemma \ref{RegularGenerators}, and let $W$ be the intersection of fixed $\epsilon$-cube with $S$. Then $\tilde{H}_1(R(S))$ has a $W$-strongly regular basis.
\end{lemma}
\begin{proof}
First construct a $W$-regular basis $B'$, as guaranteed by Lemma \ref{RegGen2}, which we then modify into a $W$-strongly regular basis. Let all quantities be as in the proof of Lemma \ref{RegGen2}. For $2 \leq i < j \leq \kappa$, or for $1 \leq i < j \leq \kappa$ in the case that $W = \emptyset$, we may take $u^{i,j} := u^{i,j}_1$ and $v^{i,j} := v^{i,j}_1$, and all elements of $B$ with endpoints in $K_i$ and $K_j$ are of the form $(u^{i,j},v^{i,j},v',u')$ by construction. This completes the proof in the case that $W = \emptyset$, and so now we assume that $W \neq \emptyset$ and $K = K_1$.
Now consider $1 = i < j \leq \kappa$. Let $[C_1], \ldots, [C_t]$ be the elements of $B$ with vertices in $K_i$ and $K_j$, and define $C_k := (u_k,v_k,v_k',u_k')$ with $u_k,u_k' \in K_i$ and $v_k,v_k' \in K_j$ for $1 \leq k \leq t$. For $2 \leq k \leq t$, add the cycles $[C_k'] := [(u_1,v_1,v_k,u_k)]$ and $[C_k''] := [(u_1,v_1,v_k',u_k')]$ to $B$, and remove $[C_k]$. Observe that $C_k'$ and $C_k''$ satisfy Condition 1 in the definition of a $W$-regular basis. Then remove any element from $B$ that can be written as a linear combination of other elements in $B'$, and repeat this operation as many times as possible. By the existence of faces $u_1u_ku_k'$ and $v_1v_kv_k'$, $[C_k] = [C_k''] - [C_k']$, and therefore this operation preserves the property that $B'$ is a basis and hence $|B'|$ is preserved. Since the operation also preserves $|B'-B|$, $|B|$ is preserved as well. The lemma follows by taking $u^{i,j} := u_1$ and $v^{i,j} := v_1$.
\end{proof}
In order to obtain a more useful combinatorial picture of our $W$-strongly regular basis, we associate with the basis a set of edges with specific properties. This set of edges will be instrumental in the proofs of later theorems.
\begin{corollary}
\label{EdgeSet}
Let all quantities be as in the statement and proof of Lemma \ref{RegGen3}. There exists a set of edges $E=E(S)$ of $R(S)$, $|E| \geq \tilde{\beta}_1(R(S)) - C_{d,r,\epsilon} - {\kappa \choose 2}$, which can be partitioned into sets $\{E_{i,j}\}$ for all pairs $1 \leq i < j \leq \kappa$, with the following properties. \newline
1) All the edges in $E_{i,j}$ are of the form $uv$ with $u \in S \cap K_i$ and $v \in S \cap K_j$. \newline
2) If $i \neq 1$, then there is no face $wuv$ for any $w \in S \cap K_1$ and $uv \in E_{i,j}$. \newline
3) Let $G^{i,j}$ be the bipartite graph that is the graph of $R(S \cap (K_i \cup K_j))$ with all edges in $R(S \cap K_i)$ and in $R(S \cap K_j)$ removed and then all isolated vertices removed. Then $E_{i,j}$ does not contain two edges from the same component in $G^{i,j}$. \newline
4) Let $e_1, e_2$ be two edges in $E_{i,j}$, and let $G^{i,j}_1$ and $G^{i,j}_2$ be the components of $G^{i,j}$ that contain $e_1$ and $e_2$ respectively. Then there is no vertex $w \in S \cap K_1$ such that $\mbox{\upshape lk}\,(w)$ contains edges both in $G^{i,j}_1$ and $G^{i,j}_2$.
\end{corollary}
\begin{proof}
Let $B'$ be a $W$-strongly regular basis for $\tilde{H}_1(R(S))$, and let $B$ be as in the proof of Lemma \ref{RegGen3}. For fixed $i<j$, let $\{[(u_1,v_1,v_k,u_k)] : k \geq 2\}$ be the set of elements of $B$ with $u_1,u_k \in K_i$ and $v_1,v_k \in K_j$. Set $E_{i,j} := \{u_2v_2, \ldots, u_kv_k\}$. By construction, $E$ satisfies Conditions 1 and 2.
To verify Condition 3, note that if $u,u' \in K_i$, $v,v',v'' \in K_j$, and $uv, u'v', u'v''$ are all edges in $R(S)$, then $[(u,v,v',u')] = [(u,v,v'',u')]$ by the existence of faces $vv'v''$ and $u'v'v''$ in $R(S)$. By repeated applications of this fact, perhaps switching the roles of $K_i$ and $K_j$, we have that if $k' > k > 1$, then $[(u_1,v_1,v_k,u_k)] = [(u_1,v_1,v_{k'},u_{k'})]$ if $u_kv_k$ and $u_{k'}v_{k'}$ are in the same component in $G^{i,j}$. This contradicts the linear independence of $B$, and so we have that all the edges in $E_{i,j}$ are in different components of $G^{i,j}$.
Now we verify Condition 4. Let all quantities be as in the previous paragraph. Suppose that $\mbox{\upshape lk}\,(w)$ contains edges $u_k'v_k'$ and $u_{k'}'v_{k'}'$ in the same components of $G^{i,j}$ as $u_kv_k$ and $u_{k'}v_{k'}$ respectively. By the argument of the previous paragraph and existences of faces $wu_k'v_k', wu_{k'}'v_{k'}', wu_k'u_{k'}', wv_k'v_{k'}'$, we have that $[(u_k,v_k,v_{k'},u_{k'})] = [(u_k',v_k',v_{k'}',u_{k'}')] = 0$, which by the existence of faces $u_1u_ku_{k'}$ and $v_1v_kv_{k'}$ implies that $[(u_1,v_1,v_k,u_k)] = [(u_1,v_1,v_{k'},u_{k'})]$, also a contradiction to the linear independence of $B$. This proves the corollary.
\end{proof}
\section{Results on second homology}
\label{Hom2}
In this section, we prove upper bounds on $M_{2,2}(n)$ and $M_{2,d}(n)$ and a lower bound on $M_{2,5}(n)$. For our first major result, we consider point configurations in $\mathbb{R}^2$. If $p \in \mathbb{R}^2$, $x(p)$ denotes the $x$-coordinate of $p$.
\begin{theorem}
\label{M22}
There exists a constant $D$ so that $M_{2,2}(n) \leq Dn$.
\end{theorem}
We need two lemmas before we prove Theorem \ref{M22}. Both the statement and the proof of our first lemma are found as \cite[Proposition 2.1]{Planar}. The second lemma is a claim about arrangements of points that are close together.
\begin{lemma}
\label{CrossingLemma}
Let $S = \{u_1,u_2,v_1,v_2\} \subset \mathbb{R}^2$ so that $R(S)$ contains edges $u_1v_1$ and $u_2v_2$, and suppose that the line segments joining $u_1,v_1$ and $u_2,v_2$ intersect in $\mathbb{R}^2$. Then $R(S)$ is a cone.
\end{lemma}
\begin{proof}
Let $p$ be the point of intersection between $u_1v_1$ and $u_2v_2$. Suppose without loss of generality that the segment $pu_1$ is not longer than any of $pu_2, pv_1$, or $pv_2$. Since $||pu_2||+||pv_2|| \leq 1$, then $||pu_1||+||pu_2|| \leq 1$ and $||pu_1||+||pv_2|| \leq 1$. It follows from the triangle inequality that $u_1u_2$ and $u_1v_2$ are edges in $R(S)$ and hence $R(S)$ is a cone.
\end{proof}
\begin{lemma}
\label{PerpLemma}
Let $U$ and $V$ be finite sets of points in $\mathbb{R}^2$ such that all points of $U$ and $V$ are within distance $\epsilon$ of points $p_U$ and $p_V$ with $\mbox{\upshape dist}\,(p_U,p_V) = 1$. Choose $v_1 \neq v_2 \in V$. Consider the vectors $w_1 := p_V-p_U$ and $w_2 := \frac{v_2-v_1}{\mbox{\upshape dist}\,(v_1,v_2)}$, with $w_1 \cdot w_2$ denoting the standard scalar product. Then one of the following is true. \newline
1) Either $\mbox{\upshape dist}\,(v_1,u) \leq \mbox{\upshape dist}\,(v_2,u)$ for all $u \in U$ or $\mbox{\upshape dist}\,(v_1,u) \geq \mbox{\upshape dist}\,(v_2,u)$ for all $u \in U$. \newline
2) There exists $\alpha = \alpha(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$ such that $|w_1 \cdot w_2| < \alpha$.
\end{lemma}
Roughly speaking, the second condition asserts that $w_1$ and $w_2$ are almost perpendicular.
\begin{proof}
By applying an isometry, we may assume without loss of generality that $p_U = (0,1)$ and $p_V = (0,0)$. By applying a translation and replacing $\epsilon$ with $2 \epsilon$, we may also assume that $v_1 = (0,0)$. Let $v_2 = (x,y)$. Suppose that the first statement is false; that is, there exist $(x',1+y'), (x'',1+y'') \in U$ such that $\mbox{\upshape dist}\,(v_1,(x',1+y')) > \mbox{\upshape dist}\,(v_2,(x',1+y'))$ and $\mbox{\upshape dist}\,(v_1,(x'',1+y'')) < \mbox{\upshape dist}\,(v_2,(x'',1+y''))$. Note that $|x|,|y|,|x'|,|y'|,|x''|,|y''| \leq \epsilon$. We show that the second condition holds.
By considering squares of distances and simplifying, we have that $0 > x^2-2xx'-2y+y^2-2yy'$ and $0 < x^2-2xx''-2y+y^2-2yy''$. This is impossible if $x=0$, which we see by dividing each side by $y$ and considering the fact that $y,y',y''$ are all close to $0$. Then let $y=mx$. Then we have that the quantities $x-2x'-2m+m^2x-2my' = x-2x'+m(-2+y-2y')$ and $x-2x''+m(-2+y-2y'')$ have opposite signs, which implies that $|m| < \alpha$ for some $\alpha \rightarrow 0$ as $\epsilon \rightarrow 0$. Then $w_1$ is a vertical vector, $w_2$ is a nearly horizontal vector, and the result follows.
\end{proof}
\proofof{Theorem \ref{M22}}
Let $S$ be a point configuration in $\mathbb{R}^2$ with $|S| \leq n$. Consider $0 < \epsilon < 2^{-1/2}$, and let $K$ be an $\epsilon$-cube such that $|K \cap S|$ is maximal. Set $W := K \cap S$. Since $\epsilon < 2^{-1/2}$, if $v \in V(\mbox{\upshape lk}\,(w))$ for some $w \in W$, then $v$ is of distance no more than $3/2$ from the center of $K$. There exists a value $\kappa$, which depends only on $\epsilon$, and $\epsilon$-cubes $\mathcal{K} = \{K=K_1, \ldots, K_\kappa\}$ such that for every $w \in W$, $\mbox{\upshape lk}\,(w)$ contains only vertices in $S \cap (\cup K_i)$. For each $w \in W$, let $$E_w = E(\mbox{\upshape lk}\,(w)) = \cup_{1 \leq i < j \leq \kappa} E_{i,j,w}$$ be a set of edges as guaranteed by Corollary \ref{EdgeSet} with corresponding graphs $G^{i,j}_w$. We take $r = 3/2$ in the corollary.
We claim that there exists an absolute constant $D'$ such that, for all $1 \leq i < j \leq \kappa$, $\sum_{w \in W} |E_{i,j,w}| \leq D'|W|$. Assuming this claim, it then follows that $$\sum_{w \in W}|E_w| \leq {\kappa \choose 2}D'|W|,$$ and that there exists some $w \in W$ such that $|E_w| \leq {\kappa \choose 2}D'$. By construction of $E_w$, there exists a constant $D$ such that $\tilde{\beta}_1(\mbox{\upshape lk}\,(w)) \leq D$. The theorem follows by Lemma \ref{LinkLemma} and induction on $|S|$. We prove the claim in two cases: the $i=1$ case and the $i>1$ case.
\noindent
\textbf{Case 1: $\mathbf{i=1}$:}
First suppose that $i=1$. Let $U := S \cap K_j$. By choosing $\epsilon$ sufficiently small and translating the coordinate system, we may assume that all points of $W$ are within distance $0.01$ of $(0,0)$. If $\mbox{\upshape dist}\,(u,w) > 1$ for all $w \in W, u \in U$, then $|E_{1,j,w}|=0$ for all $w$. If $\mbox{\upshape dist}\,(u,w) \leq 1$ for all $w \in W, u \in U$, then $|E_{1,j,w}| \leq 1$ for all $w$ by Condition 3 of Corollary \ref{EdgeSet} and the observation that $G^{1,j}_w$ is a complete bipartite graph. Hence $\mbox{\upshape dist}\,(u,w) > 1$ for some $u \in U, w \in W$ and $\mbox{\upshape dist}\,(u',w') \leq 1$ for some $u' \in U, w' \in W$. By rotating the coordinate system about the origin, we may assume that all points of $U$ are within distance $0.1$ of $(0,1)$.
Let $U_w$ be the set of endpoints of edges in $E_{1,j,w}$ that are in $U$. If $w, w' \in W$ and $\mbox{\upshape dist}\,(u,w') \leq \mbox{\upshape dist}\,(u,w)$ for all $u \in U$, then there is an edge joining $w'$ to all $u \in U_w$ in $G^{i,j}_w$, which implies that $G^{i,j}_w$ is connected, and by Condition 3 of Corollary \ref{EdgeSet}, $|U_{w}| \leq 1$. Construct $\tilde{W}$, starting from $W$, in the following way: whenever there is a pair $w \neq w' \in W$ such that $\mbox{\upshape dist}\,(u,w') \leq \mbox{\upshape dist}\,(u,w)$ for all $u \in U$, delete $w$, and continue until no more points can be deleted in this manner. If $\epsilon$ is sufficiently small, then for all $w,w' \in \tilde{W}$, the slope $m$ of the line joining $w$ and $w'$ satisfies $-1 < m < 1$; otherwise either $w$ or $w'$ would have been deleted by Lemma \ref{PerpLemma}. It suffices to show that $\sum_{w \in \tilde{W}}|U_w| \leq D'|W|$ for some constant $D'$ by $$\sum_{w \in W}|E_{1,j,w}| = \sum_{w \in W}|U_w| \leq \sum_{w \in \tilde{W}}|U_w|+|W|.$$
Choose $u, u' \in U$. If $\mbox{\upshape dist}\,(u,w) \leq \mbox{\upshape dist}\,(u',w)$ for all $w \in W$, then whenever $w'u' \in E_{1,j,w}$ for some $w$, $w'u$ is an edge in $G^{i,j}_w$ in the same component as $w'u'$. Hence we may replace $w'u'$ with $w'u$ and still satisfy the conditions of Corollary \ref{EdgeSet}. Construct $\tilde{U}$, starting from $U$, by deleting $u'$ for every pair of vertices $u \neq u' \in \tilde{U}$ such that $\mbox{\upshape dist}\,(u,w) \leq \mbox{\upshape dist}\,(u',w)$ for all $w \in W$, until no more vertices can be deleted in this manner. We may choose $E_{1,j,w}$ so that every endpoint of an edge in $E_{1,j,w}$ in $U$ is actually in $\tilde{U}$. Label the vertices of $\tilde{W}$ as $\{w_1, \ldots, w_{|\tilde{W}|}\}$ in order of ascending $x$-coordinates, and likewise label the vertices of $\tilde{U}$ as $\{u_1,\ldots,u_{|\tilde{U}|}\}$ in order of ascending $x$-coordinates. As above, we may choose $\epsilon$ so that for all $u \neq u' \in \tilde{U}$, the slope $m$ of the line that joins $u$ and $u'$ satisfies $-1 < m < 1$.
Choose $i_1 < i_2$ and suppose that there exist $j_1 < j_2 < j_3 < j_4 < j_5 < j_6$ such that $u_{j_1}, u_{j_2},u_{j_3} \in U_{w_{i_2}}$ and $u_{j_4}, u_{j_5}, u_{j_6} \in U_{w_{i_1}}$. Suppose that there exist $u_{j_1}w_a, u_{j_2}w_b, u_{j_3}w_c \in E_{i,j,w_{i_1}}$, and we derive a contradiction. At most one of $w_a,w_b,w_c$ is equal to $w_{i_1}$. Then there exists $k \in \{j_1,j_2,j_3\}$ such that $w_{i_1}u_k$ is not an edge; otherwise, $\mbox{\upshape lk}\,(w_{i_1})$ contains two edges of $E_{i,j,w_{i_2}}$, a contradiction to Condition 4 of Corollary \ref{EdgeSet}. Likewise, there exists $k' \in \{j_4,j_5,j_6\}$ such that $w_{i_2}u_{k'}$ is not an edge. In particular, this shows that $|U_{w_{i_1}} \cap U_{w_{i_2}}| \leq 2$ for all $i_1 < i_2$. The points $w_{i_1}$ and $u_{k'}$ are on opposite sides of the line joining $w_{i_2}$ and $u_k$ by consideration of the slopes of the lines joining the points, and similarly $w_{i_2}$ and $u_k$ are on opposite sides of the line joining $w_{i_1}$ and $u_{k'}$. Hence the segments $w_{i_1}u_{k'}$ and $w_{i_2}u_k$ intersect in $\mathbb{R}^2$, and the set $\{w_{i_1},w_{i_2},u_k,u_{k'}\}$ violates Lemma \ref{CrossingLemma}. Thus there cannot exist such $j_1 < \ldots < j_6$.
Let $W' = \{w \in \tilde{W}: |U_w| > 5\}$. It suffices to show that $\sum_{w \in W'}|U_w| \leq D'|W|$ for some constant $D'$ by $$\sum_{w \in \tilde{W}}|U_w| \leq \sum_{w \in W'}|U_w|+5|W|.$$ For $w \in W'$, let $r(w)$ and $r'(w)$ be the indices of the points of $U_w$ with third smallest and second largest $x$-coordinates respectively. By the above, if $w, w' \in W$ and $x(w') > x(w)$, then $$r(w') \geq r'(w) \geq r(w)+|U_w|-5.$$ If $w^-$ and $w^+$ are the points in $W'$ with smallest and largest $x$-coordinates respectively, then $$r(w^-)+\sum_{w^+ \neq w \in W'}(|U_w|-5) \leq r(w^+) \leq |\tilde{U}|-|U_{w^+}|+3.$$ Then $$\sum_{w \in W'}|U_w| \leq |\tilde{U}|+5(|W'|-1)+3 \leq 6|W|,$$ using that $|\tilde{U}| \leq |W|$ by the maximality of $|K \cap S|$. This proves the result in the case that $i=1$.
\noindent
\textbf{Case 2: $\mathbf{i>1}$:}
Now fix $i$ and $j$ with $j > i > 1$. Set $U := S \cap K_i$ and $V := S \cap K_j$, and for all $w \in W$, define $U_w := \{u_{w,1}, \ldots, u_{w,r_w}\}$ and $V_w := \{v_{w,1}, \ldots, v_{w,r_w}\}$ so that $E_{i,j,w}=\{u_{w,1}v_{w,1},\ldots,u_{w,r_w}v_{w,r_w}\}$.
If $\mbox{\upshape dist}\,(u,v) > 1$ for all $u \in U, v \in V$, or if $\mbox{\upshape dist}\,(w,u) > 1$ for all $w \in W, u \in U$, or if $\mbox{\upshape dist}\,(w,v) > 1$ for all $w \in W, v \in V$, then $|E_{i,j,w}| = 0$ for all $w \in W$ and the result is proven. If $\mbox{\upshape dist}\,(u,v) \leq 1$ for all $u \in U, v \in V$, then $G^{i,j}_w$ is a complete bipartite graph and hence $|E_{i,j,w}| \leq 1$ for all $w \in W$ by Condition 3 of Corollary \ref{EdgeSet} and the claim is proven. If $\mbox{\upshape dist}\,(w,u) \leq 1$ for all $w \in W, u \in U$, then consider $v \in V_w \cap V_{w'}$ for $w \neq w'$ so that $uv \in E_{i,j,w}$ for some $u \in U_w$. Then $u$ and $v$ are both vertices in $\mbox{\upshape lk}\,(w')$, which contradicts Condition 2 of Corollary \ref{EdgeSet} for $E_{i,j,w}$. Hence $V_w \cap V_{w'} = \emptyset$, which implies that $\sum_{w \in W}|E_{i,j,w}| \leq |V| \leq |W|$, proving the claim. Likewise, if $\mbox{\upshape dist}\,(w,v) \leq 1$ for all $w \in W, v \in V$, then the claim is proven. All pairs of points in $U \cup V \cup W$ that are not both in the same $\epsilon$-cube have distance between $1-4\epsilon$ and $1+4\epsilon$. By choosing $\epsilon$ sufficiently small and making a suitable isometric change of coordinates, we may assume that all vertices of $U,V,W$ are within distance $0.01$ of $(0,0)$, $(0,1)$, and $(\sqrt{3}/2, 1/2)$ respectively.
For $u \in U$ and $v \in V$, let $W_u = \{w \in W: u \in U_w\}$ and $W_v = \{w \in W: v \in V_w\}$. For $w \in W_u$, define the vertex $v(u,w)$ so that the edge $\{u,v(u,w)\} \in E_{i,j,w}$. If $w, w' \in W_u$, then by Lemma \ref{PerpLemma}, either the line that joins $w$ and $w'$ has slope $m$ satisfying $\sqrt{3}-0.1 < m < \sqrt{3}+0.1$; or $\mbox{\upshape dist}\,(w,v) \leq \mbox{\upshape dist}\,(w',v)$ for all $v \in V$, or $\mbox{\upshape dist}\,(w',v) \leq \mbox{\upshape dist}\,(w,v)$ for all $v \in V$. In the latter two cases, assume without loss of generality that $\mbox{\upshape dist}\,(w,v) \leq \mbox{\upshape dist}\,(w',v)$ for all $v \in V$. Then $\{u,v(u,w')\}$ is an edge in $\mbox{\upshape lk}\,(w)$ and in $\mbox{\upshape lk}\,(w')$, a contradiction to Condition 2 of Corollary \ref{EdgeSet}; hence the slope condition must hold. The vertices in $W_u$ can then be arranged $w_{u,1}, \ldots, w_{u,s_u}$ in order of increasing distance from $u$. By the same argument, the vertices of $W_v$ can be similarly arranged $w_{v,1}, \ldots, w_{v,t_v}$ in order of increasing distance from $v$.
For all $u \in U$ with $W_u \neq \emptyset$, there exists a vertex $v(u) \in V$ and $w \in W_u$ such that $\{u,v(u)\} \in E_{i,j,w}$ and $\mbox{\upshape dist}\,(w,u) \geq \mbox{\upshape dist}\,(w',u)$ for all $w' \in W_u$. Likewise, for all $v \in V$ with $W_v \neq \emptyset$, there exists a vertex $u(v) \in U$ and $w \in W_v$ such that $\{u(v),v\} \in E_{i,j,w}$ and $\mbox{\upshape dist}\,(w,v) \geq \mbox{\upshape dist}\,(w',v)$ for all $w' \in W_v$. There are at most $|U|$ (or $|V|$) edges $uv$ in $\cup_{w \in W}E_{i,j,w}$ such that $v=v(u)$ (or $u=u(v)$). Also, the $E_{i,j,w}$ are disjoint by Condition 2 of Corollary \ref{EdgeSet}. Hence if $\sum_{w \in W}|E_{i,j,w}| > 2|W| \geq |U|+|V|$, there exist $w \in W, u \in U, v \in V$ such that $uv \in E_{i,j,w}$, $u \neq u(v)$, and $v \neq v(u)$. In this case, choose $w_u \in W_u, w_v \in W_v$ such that $\mbox{\upshape dist}\,(u,w_u) > \mbox{\upshape dist}\,(u,w)$ and $\mbox{\upshape dist}\,(v,w_v) > \mbox{\upshape dist}\,(v,w)$. By consideration of the slopes between the points $u,v,w,w_u,w_v$, the points $v$ and $w_v$ are on opposite sides of the line joining $u$ and $w_u$, and $u$ and $w_u$ are on opposite sides of the line joining $v$ and $w_v$, and so the segments $uw_u$ and $vw_v$ intersect. By Lemma \ref{CrossingLemma}, either $uw_v$ or $vw_u$ is an edge, yielding either the face $uvw_v$ or $uvw_u$. This contradicts Condition 2 of Corollary \ref{EdgeSet}. We conclude that $\sum_{w \in W}|E_{i,j,w}| \leq 2|W|$ as desired.
\hfill$\square$\medskip
\begin{theorem}
\label{M2D}
For all fixed $\delta > 0$ and $n$ sufficiently large, $M_{2,d}(n) < \delta n^2$.
\end{theorem}
Before we give the proof, we need two additional lemmas. The first concerns bipartite graphs that avoid certain kinds of subgraphs.
\begin{lemma}
\label{MooreBound}
Let $G$ be a bipartite graph on vertices $U \sqcup V$ with $|U| \leq n$ and $|V| \leq n$. Suppose that no two vertices of $U$ share three common neighbors. Then there exists a constant $C$ such that $G$ has at most $Cn^{3/2}$ edges.
\end{lemma}
\begin{proof}
Equivalent to the condition that no two vertices of $U$ share three common neighbors is the condition that no three vertices of $V$ share two common neighbors. For each $v \in V$, let $N(v)$ be the set of neighbors of $v$. Let ${N(v) \choose 2}$ be the set of pairs of neighbors of $v$, so that $|{N(v) \choose 2}| = {|N(v)| \choose 2}$. Also, let $d$ be the average degree of the vertices in $V$. Then $\sum_{v \in V} |{N(v) \choose 2}| \geq |V|{d \choose 2}$ by convexity. Since no three vertices in $V$ share two common neighbors, each pair of vertices of $U$ lies in ${N(v) \choose 2}$ for at most two $v \in V$, and so $|V|{d \choose 2} \leq 2{|U| \choose 2} \leq 2{n \choose 2}$. Hence there exists a constant $C$ such that $d \leq C n|V|^{-1/2}$, and so $G$ has at most $C n|V|^{1/2}$ edges. This proves the result, since $|V| \leq n$.
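Explicitly, when the average degree satisfies $d \geq 2$ (if $d < 2$ then $G$ has fewer than $2n$ edges and there is nothing to prove), we have ${d \choose 2} \geq d^2/4$, so the chain of inequalities above reads
\[
\frac{|V|d^2}{4} \leq |V|{d \choose 2} \leq \sum_{v \in V} {|N(v)| \choose 2} \leq 2{n \choose 2} \leq n^2,
\]
which gives $d \leq 2n|V|^{-1/2}$, and hence at most $|V|d \leq 2n|V|^{1/2} \leq 2n^{3/2}$ edges.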
\end{proof}
The second lemma concerns induced matchings. Let $G$ be a bipartite graph with vertex sets $U$ and $V$. Then a \textit{matching} $M$ is a set of edges in $G$ such that no two edges have a common endpoint. We say that $M$ is an \textit{induced matching} if whenever $uv, u'v' \in M$ for $u,u' \in U, v,v' \in V$, $G$ does not contain edges $uv'$ or $u'v$. The following is an immediate consequence of \cite[Proposition 10.45]{TaoVu}.
\begin{lemma}
\label{InducedMatchings}
Let $G$ be a bipartite graph with vertex sets $U$ and $V$, $|U| \leq n$ and $|V| \leq n$. Let $M_1, \ldots, M_t, t \leq n$ be disjoint sets of edges that are each an induced matching in $G$. Let $\delta > 0$ be fixed. Then $\sum_{i=1}^t|M_i| < \delta n^2$ if $n$ is sufficiently large.
\end{lemma}
\proofof{Theorem \ref{M2D}}
We use some of the same methods as in the proof of Theorem \ref{M22}. Let $S$ be a point configuration in $\mathbb{R}^d$ with $|S| \leq n$. Fix $\epsilon = d^{-1/2}$, and let $K$ be an $\epsilon$-cube such that $W := K \cap S$ has maximal cardinality. There is a value $\kappa$, which depends only on $d$, and a set of $\epsilon$-cubes $\mathcal{K} = \{K=K_1, \ldots, K_\kappa\}$ such that every vertex in the link of each $w \in W$ is contained in $S \cap (\cup K_i)$. For each $w \in W$, let $$E_w = E(\mbox{\upshape lk}\,(w)) = \cup_{1 \leq i < j \leq \kappa} E_{i,j,w}$$ be a set of edges as guaranteed by Corollary \ref{EdgeSet} with $r=3/2$.
We show that for any given $\delta' > 0$ and $n$ sufficiently large, for all $1 \leq i < j \leq \kappa$, $\sum_{w \in W} |E_{i,j,w}| < \delta'|W|n$. It then follows that $\sum_{w \in W}|E_w| < {\kappa \choose 2}\delta'|W|n$, and that there exists some $w \in W$ such that $|E_w| < {\kappa \choose 2}\delta'n$. By construction of $E_w$, $\tilde{\beta}_1(\mbox{\upshape lk}\,(w)) < \delta n$ for a $\delta > 0$ that can be chosen arbitrarily small by choosing $\delta'$ sufficiently small. Then by Lemma \ref{LinkLemma}, $\tilde{\beta}_2(R(S)) < \tilde{\beta}_2(R(S-\{w\}))+ \delta n$. By induction on $|S|$ (we keep $n$ fixed and decrease $|S|$ in the inductive step), $\tilde{\beta}_2(R(S)) < \delta n^2$ as desired.
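Schematically, the induction unrolls as
\[
\tilde{\beta}_2(R(S)) < \tilde{\beta}_2(R(S-\{w\})) + \delta n < \tilde{\beta}_2(R(S-\{w,w'\})) + 2\delta n < \cdots \leq |S|\, \delta n \leq \delta n^2,
\]
where at each step the deleted vertex is taken, as above, from a maximal $\epsilon$-cube of the current (smaller) configuration, while $n$ stays fixed.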
First consider the case that $i=1$. Let $U := S \cap K_j$. For each $w \in W$, let $U_w$ be the set of endpoints of edges in $E_{1,j,w}$ that are in $U$, as in the proof of Theorem \ref{M22}. By the same argument as in the proof of Theorem \ref{M22}, $|U_w \cap U_{w'}| \leq 2$ for all $w \neq w'$. Construct a bipartite graph $G$ with vertex set $W \sqcup U$ and an edge $wu, w \in W, u \in U$ whenever $u \in U_w$. Label the edge set of $G$ by $EG$. Since $|U_w \cap U_{w'}| \leq 2$ for all $w \neq w'$, no two vertices in $W$ have three common neighbors in $G$. It follows from Lemma \ref{MooreBound} and the fact that $|U| \leq |W|$ that $$\sum_{w \in W} |E_{1,j,w}| = |EG| \leq C|W|^{3/2} < \delta'|W|n$$ for some constant $C$. The last inequality follows by taking $n$ sufficiently large.
Now suppose that $i > 1$. Set $U := S \cap K_i$ and $V := S \cap K_j$, and for all $w \in W$, define $U_w = \{u_{w,1}, \ldots, u_{w,r_w}\}$ and $V_w = \{v_{w,1}, \ldots, v_{w,r_w}\}$ so that $E_{i,j,w}=\{u_{w,1}v_{w,1},\ldots,u_{w,r_w}v_{w,r_w}\}$. Let $G'$ be the bipartite graph on vertices $U \sqcup V$ with an edge $uv, u \in U, v \in V$ whenever $uv$ is an edge in $R(S)$. Conditions 1 and 3 of Corollary \ref{EdgeSet} imply that $E_{i,j,w}$ is a matching in $G'$ for all $w \in W$. If $E_{i,j,w}$ contains edges $uv$ and $u'v'$, and there is an edge $uv'$ or $u'v$ in $G'$, then $uv$ and $u'v'$ are in the same component in $G' \cap \mbox{\upshape lk}\,(w)$. Hence Condition 3 of Corollary \ref{EdgeSet} implies that $E_{i,j,w}$ is in fact an induced matching. It must be that $E_{i,j,w} \cap E_{i,j,w'} = \emptyset$ for all $w \neq w'$; otherwise $\mbox{\upshape lk}\,(w)$ contains an edge of $E_{i,j,w'}$, which violates Condition 2 of Corollary \ref{EdgeSet}. If $|W| < \delta'n$, then $$\sum_{w \in W} |E_{i,j,w}| \leq |W|^2 < \delta'|W|n.$$ Otherwise, it follows from Lemma \ref{InducedMatchings} that $$\sum_{w \in W} |E_{i,j,w}| < \delta'|W|^2 \leq \delta'|W|n$$ for sufficiently large $n$.
\hfill$\square$\medskip
\begin{theorem}
\label{n32}
There exists a constant $C$ such that for sufficiently large $n$, $M_{2,5}(n) > C n^{3/2}$.
\end{theorem}
\begin{proof}
We establish the result by producing a point configuration $S \subset \mathbb{R}^5$ with at most $n$ vertices and with $\tilde{\beta}_2(R(S)) > Cn^{3/2}$. Let $k$ be the largest integer such that $3k^2 \leq n$. Choose a value $\delta$ small relative to $n$ and a value $\epsilon$ small relative to $\delta$; we may take $\delta = n^{-1}$ and $\epsilon = n^{-3}$. Let $U = \{u_{i,j}, 1 \leq i,j \leq k\}$, $V = \{v_{i,j}, 1 \leq i,j \leq k\}$, $W = \{w_{i,j}, 1 \leq i,j \leq k\}$ with $$u_{i,j} = \left(\frac{\sqrt{2}}{2}\cos (i\delta),\frac{\sqrt{2}}{2}\sin (i\delta),0,0,j \epsilon \right),$$ $$v_{i,j} = \left(0,0,\frac{\sqrt{2}}{2}\cos (i\delta),\frac{\sqrt{2}}{2}\sin (i\delta),j \epsilon \right),$$ $$w_{i,j} = \left(\frac{\sqrt{2}}{4}\cos (i\delta),\frac{\sqrt{2}}{4}\sin (i\delta),\frac{\sqrt{2}}{4}\cos (j\delta),\frac{\sqrt{2}}{4}\sin (j\delta),\frac{\sqrt{3}}{2} \right).$$
\noindent
The edge set of $R(S)$ is exactly the following:
\begin{enumerate}
\item $u_{i,j}u_{i',j'}$ for all $1 \leq i,j,i',j' \leq k$,
\item $v_{i,j}v_{i',j'}$ for all $1 \leq i,j,i',j' \leq k$,
\item $w_{i,j}w_{i',j'}$ for all $1 \leq i,j,i',j' \leq k$,
\item $u_{i,j}v_{i',j}$ for all $1 \leq i,i',j \leq k$,
\item $u_{i,j}w_{i,j'}$ for all $1 \leq i,j,j' \leq k$,
\item $v_{i,j}w_{i',i}$ for all $1 \leq i,i',j \leq k$.
\end{enumerate}
The non-existence of edges $u_{i,j}w_{i',j'}$ for $i' \neq i$ and $v_{i,j}w_{i',i''}$ for $i'' \neq i$ is guaranteed by a sufficiently small choice of $\epsilon$. For all $w \in W$, the set of edges of $\mbox{\upshape lk}\,(w)$ with one endpoint in $U$ and the other in $V$ constitutes an induced matching. Furthermore, these matchings are disjoint over all $w$.
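The edge rules can also be checked numerically. The following minimal Python sketch (illustrative only; the small parameters $n = 27$, $k = 3$, $\delta = 1/n$, $\epsilon = n^{-3}$ are hypothetical choices consistent with the construction) verifies rules 4--6 for all index pairs; rules 1--3 hold since each of $U$, $V$, $W$ has diameter much smaller than $1$.
\begin{verbatim}
# Numerical check (illustrative only) of edge rules 4-6 above,
# with the hypothetical parameters n = 27, k = 3, delta = 1/n, eps = n**-3.
import itertools, math
import numpy as np

n = 27; k = 3; delta = 1.0 / n; eps = float(n) ** -3
r2, r4 = math.sqrt(2) / 2, math.sqrt(2) / 4

def u(i, j):
    return np.array([r2*math.cos(i*delta), r2*math.sin(i*delta), 0.0, 0.0, j*eps])

def v(i, j):
    return np.array([0.0, 0.0, r2*math.cos(i*delta), r2*math.sin(i*delta), j*eps])

def w(i, j):
    return np.array([r4*math.cos(i*delta), r4*math.sin(i*delta),
                     r4*math.cos(j*delta), r4*math.sin(j*delta), math.sqrt(3)/2])

idx = list(itertools.product(range(1, k + 1), repeat=2))
for (i, j), (ip, jp) in itertools.product(idx, repeat=2):
    # rule 4: u_{i,j} v_{i',j'} is an edge iff j = j'
    assert (np.linalg.norm(u(i, j) - v(ip, jp)) <= 1 + 1e-12) == (j == jp)
    # rule 5: u_{i,j} w_{i',j'} is an edge iff i = i'
    assert (np.linalg.norm(u(i, j) - w(ip, jp)) <= 1 + 1e-12) == (i == ip)
    # rule 6: v_{i,j} w_{i',i''} is an edge iff i = i''
    assert (np.linalg.norm(v(i, j) - w(ip, jp)) <= 1 + 1e-12) == (i == jp)
print("edge rules 4-6 verified")
\end{verbatim}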
It can be verified that $\beta_2(R(S)) \geq b$, with $b \approx 3^{-3/2}n^{3/2}$. We defer the details of this calculation to the more general setting of Section \ref{QuasiRips}.
\end{proof}
Label the above construction with $\delta = 1/n$ and $\epsilon = n^{-3}$ as $S^2(n)$.
\section{Results on higher homology}
The results of the previous section can be extended to higher Betti numbers. In this section we prove two such extensions.
\label{Hom3}
\begin{theorem}
Let $p \geq 2$, $d$, and $\delta > 0$ be fixed. If $n$ is sufficiently large, then $M_{p,d}(n) < \delta n^p$. Also, there exists a value $D_p$ which depends only on $p$ such that $M_{p,2}(n) \leq D_p n^{p-1}$.
\end{theorem}
\begin{proof}
We prove the first statement by induction on $p$. The case that $p=2$ follows from Theorem \ref{M2D}. Let $S \subset \mathbb{R}^d$ with $|S| \leq n$. For $p > 2$, assume that $n$ is large enough so that $M_{p-1,d}(n) < \delta n^{p-1}$. Choose $v \in S$. Then by the inductive hypothesis, $\tilde{\beta}_{p-1}(\mbox{\upshape lk}\,(v)) < \delta n^{p-1}$. We calculate that $\tilde{\beta}_p(R(S)) < \delta|S|n^{p-1} \leq \delta n^p$ by induction on $|S|$. Indeed, it follows from Lemma \ref{LinkLemma} that $\tilde{\beta}_p(R(S)) \leq \tilde{\beta}_p(R(S-v)) + \tilde{\beta}_{p-1}(\mbox{\upshape lk}\,(v)) < \delta|S-v|n^{p-1} + \delta n^{p-1}$ as desired.
The second statement follows from Theorem \ref{M22} in the same way.
\end{proof}
Let $\Gamma$ and $\Delta$ be two simplicial complexes. We define their \textit{simplicial join} $\Gamma \ast \Delta$ by $V(\Gamma \ast \Delta) := V(\Gamma) \sqcup V(\Delta)$ and faces $\{F \cup G: F \in \Gamma, G \in \Delta\}$. Let $S,S' \subset \mathbb{R}^d$ such that for all $s \in S, s' \in S'$, $\mbox{\upshape dist}\,(s,s') \leq 1$. Then $R(S \cup S') = R(S) \ast R(S')$. By the K\"{u}nneth Formula, $\tilde{\beta}_p(R(S \cup S')) = \sum_{i+j=p-1}\tilde{\beta}_i(R(S))\tilde{\beta}_j(R(S'))$.
\begin{lemma}
For every $k > 0$ and $d \geq 2$, there exists a constant $C_k$ such that $M_{2k-1,d}(n) \geq C_k n^k$ for sufficiently large $n$.
\end{lemma}
\begin{proof}
We prove the result by giving a point configuration $S \subset \mathbb{R}^2$ with $|S| \leq n$ and $\tilde{\beta}_{2k-1}(R(S)) \geq C_kn^k$. Let $r := \lfloor n/(2k) \rfloor$, $\theta := 1/n$, and $\epsilon := n^{-4}$. For $1 \leq i \leq r$, define $s_i^+ := (1/2,i \epsilon)$ and $s_i^- := (-1/2,i \epsilon)$. Let $S_1 := \{s_1^+, \ldots, s_r^+, s_1^-, \ldots, s_r^-\}$. For $2 \leq j \leq k$, construct $S_j$ by rotating $S_1$ counterclockwise about the origin by an angle of $\theta j$ and let $S := S_1 \cup \ldots \cup S_k$.
For each $i$, $\tilde{\beta}_1(R(S_i)) = r-1$ by Lemma \ref{BipartiteLemma}. For all $i \neq j$ and $s \in S_i, s' \in S_j$, $\mbox{\upshape dist}\,(s,s') < 1$ by the small choice of $\epsilon$. It follows by the K\"{u}nneth Formula that $\tilde{\beta}_{2k-1}(R(S)) \geq (r-1)^k$.
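Explicitly, $R(S) = R(S_1) \ast \cdots \ast R(S_k)$, and in the iterated K\"{u}nneth formula the summand in which every factor contributes in homological degree $1$ sits in total degree $k \cdot 1 + (k-1) = 2k-1$, so that
\[
\tilde{\beta}_{2k-1}(R(S)) \geq \prod_{i=1}^{k} \tilde{\beta}_1(R(S_i)) = (r-1)^k \geq C_k n^k
\]
for a suitable constant $C_k$, since $r = \lfloor n/(2k) \rfloor$.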
\end{proof}
Label the above construction as $S^{2k-1}(n)$ with $S^{-1}(n) = \emptyset$.
\begin{theorem}
For every $p > 0$ and $d \geq 5$, there exists a constant $C_p$ such that $M_{p,d}(n) \geq C_p n^{p/2+1/2}$ for sufficiently large $n$.
\end{theorem}
\begin{proof}
The result follows for odd $p$ by the existence of $S^{2k-1}(n)$, so consider even $p$. Let $S = S^2(\lfloor n/2 \rfloor)$ and $\tilde{S} = S^{p-3}(\lceil n/2 \rceil)$. Let $S'$ be the image of $\tilde{S}$ under the isometry that sends $(x,y)$ to $(\sqrt{2}/4,x,\sqrt{2}/4,y,\sqrt{3}/6)$. There exists $\alpha \rightarrow 0$ as $n \rightarrow \infty$ such that every point in $S$ is within distance $\alpha$ from either $(0,0,\sqrt{2}/2,0,0)$, $(\sqrt{2}/2,0,0,0,0)$, or $(\sqrt{2}/4,0,\sqrt{2}/4,0,\sqrt{3}/2)$, and every point of $S'$ is within distance $\alpha$ of either $(\sqrt{2}/4,1/2,\sqrt{2}/4,0,\sqrt{3}/6)$ or $(\sqrt{2}/4,-1/2,\sqrt{2}/4,0,\sqrt{3}/6)$. Hence for all $s \in S, s' \in S'$, $\mbox{\upshape dist}\,(s,s') < \sqrt{7/12}+\alpha'$, where $\alpha' \rightarrow 0$ as $n \rightarrow \infty$. This proves the result by the K\"{u}nneth Formula.
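A sketch of the exponent count under these choices: the K\"{u}nneth summand with indices $2$ and $p-3$ gives
\[
\tilde{\beta}_p(R(S \cup S')) \geq \tilde{\beta}_2(R(S))\, \tilde{\beta}_{p-3}(R(S')) \geq C\Big(\frac{n}{2}\Big)^{3/2} \cdot C'\Big(\frac{n}{2}\Big)^{(p-2)/2} \geq C_p\, n^{p/2+1/2},
\]
using Theorem \ref{n32} for $S = S^2(\lfloor n/2 \rfloor)$ and the preceding lemma (with $2k-1 = p-3$, so that $k = (p-2)/2$) for $S'$.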
\end{proof}
\section{Quasi-Rips complexes}
\label{QuasiRips}
Quasi-Rips complexes, discussed in \cite{Planar}, are relaxations of Rips complexes. Given a finite set $S \subset \mathbb{R}^d$ and fixed $0 < \alpha < 1$, a \textit{quasi-Rips complex with parameter} $\alpha$ on $S$ is a flag complex with vertex set $S$, an edge $uv$ whenever $\mbox{\upshape dist}\,(u,v) \leq \alpha$, and no edge $uv$ when $\mbox{\upshape dist}\,(u,v) > 1$. If $\alpha < \mbox{\upshape dist}\,(u,v) \leq 1$, then the edge $uv$ may be included or excluded arbitrarily. All Rips complexes are quasi-Rips complexes with parameter $\alpha$ for any $0 < \alpha < 1$.
There is much greater freedom in the kinds of graphs that arise as the graphs of quasi-Rips complexes. Let $G$ be a graph with three vertex subsets $U_1,U_2,U_3$ such that for $1 \leq i \leq 3$, there is an edge $uu'$ for all $u,u' \in U_i$. The other edges of $G$ may be chosen arbitrarily. For any $0 < \alpha < 1$, $G$ can arise as a quasi-Rips complex of a point configuration in $\mathbb{R}^2$ if the points of $U_1, U_2, U_3$ are all near $(0,0), (0,1), (\sqrt{3}/2,1/2)$ respectively and inside the triangle with these three vertices. If $\alpha < 1/2$, $G$ can even be the graph of a quasi-Rips complex of a point configuration in $\mathbb{R}^1$ by concentrating all points of $U_1, U_2, U_3$ near $0, 1/2, 1$ respectively and inside the interval $[0,1]$.
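As an illustration (not needed in the sequel), the following minimal Python sketch realizes this freedom numerically; the cluster sizes and the value $\alpha = 1/2$ below are arbitrary choices for the check.
\begin{verbatim}
# Minimal numerical sketch (illustrative only). Points are placed on the
# segments from each corner of the unit triangle toward its center, within
# 0.01 of the corner, so each U_i is forced to be a clique while all
# cross-cluster edges are optional for a quasi-Rips complex with alpha = 1/2.
import itertools, math
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5
corners = np.array([[0.0, 0.0], [0.0, 1.0], [math.sqrt(3)/2, 0.5]])
center = corners.mean(axis=0)
clusters = [c + rng.uniform(0, 0.01, size=(8, 1)) * (center - c)
            for c in corners]

for a, b in itertools.combinations(range(3), 2):
    for p, q in itertools.product(clusters[a], clusters[b]):
        d = np.linalg.norm(p - q)
        assert alpha < d <= 1.0   # cross edges may be included or excluded
for cl in clusters:
    for p, q in itertools.combinations(cl, 2):
        assert np.linalg.norm(p - q) <= alpha   # forced clique
print("the stated freedom is realized")
\end{verbatim}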
Despite this freedom, the Betti numbers of quasi-Rips complexes obey nontrivial upper bounds. Given $S \subset \mathbb{R}^d$, let $Q^{\alpha}(S)$ be the set of quasi-Rips complexes on $S$ with parameter $\alpha$. Let $Q^{\alpha}_d(n)$ be the union of all $Q^{\alpha}(S)$ as $S$ ranges over subsets of $\mathbb{R}^d$ of size at most $n$, and define $$M^{\alpha}_{p,d}(n) := \max\{\tilde{\beta}_p(\Gamma): \Gamma \in Q^{\alpha}_d(n)\}.$$
We focus specifically on $M^{\alpha}_{2,d}(n)$. To do so, we again consider induced matchings. Let $I(n)$ be the maximum value of $\sum_{i=1}^t |M_i|$, where $t \leq n$ and the $M_i$'s are disjoint, induced matchings on a bipartite graph $G$ with $V(G) = U \sqcup V$ and $|U|, |V| \leq n$.
\begin{theorem}
\label{qr}
For each $d \geq 2$ and $0 < \alpha < 1$, there exist values $C_{d,\alpha}$ and $D_{d,\alpha}$ such that $$C_{d,\alpha}I(n) < M^{\alpha}_{2,d}(n) < D_{d,\alpha}I(n)$$ for sufficiently large $n$.
\end{theorem}
\begin{proof}
The proof of the upper bound is very similar to that of Theorem \ref{M2D}. The changes necessary are to use $\epsilon = \alpha d^{-1/2}$ instead of $\epsilon = d^{-1/2}$, and to make the observation that for some absolute constant $C$ and all $n>n' \gg 0$, $I(n) \geq CI(n') n/n'$.
For the lower bound, consider a bipartite graph $G$ with vertex set $U \sqcup V$, $|U|,|V| \leq n$, and disjoint, induced matchings $M_1, \ldots, M_t, t \leq n$ such that $\sum_{i=1}^t |M_i| = I(n)$. Let $\mathcal{N} := \{N_1, N_2, \ldots, N_{t'}\}, t' = \min\{t,\lfloor n/3 \rfloor\}$ be a subset of $t'$ largest matchings of the set $\{M_1, \ldots, M_t\}$. Let $U'$ be a set of $\min\{|U|,\lfloor n/3 \rfloor\}$ vertices of $U$ that are endpoints for the largest number of matchings in $\mathcal{N}$, and restrict each element of $\mathcal{N}$ to edges with endpoints in $U' \sqcup V$. Finally, let $V'$ be a set of $\min\{|V|,\lfloor n/3 \rfloor\}$ vertices of $V$ that are endpoints for the largest number of matchings in $\mathcal{N}$, and restrict each element of $\mathcal{N}$ to edges with endpoints in $U' \sqcup V'$. Then for large $n$, $\sum_{i=1}^{t'} |N_i| \geq \frac{1}{27.1}I(n)$.
Let $G'$ be a graph with vertex set $U' \sqcup V' \sqcup \mathcal{N}$, with edges defined as follows. For $u \in U', v \in V'$, $uv$ is an edge in $G'$ if it is an edge in $G$. For all $u,u' \in U', v,v' \in V'$, $uu'$ and $vv'$ are edges in $G'$. The neighbors of $N_i$ are all $N_j$ for $j \neq i$, and all vertices $u \in U', v \in V'$ such that $uv \in N_i$. Then, by the discussion preceding Theorem \ref{qr}, there exists a simplicial complex $\Gamma$ such that $\Gamma \in Q^{\alpha}_d(n)$ and $\Gamma = X(G')$. Next we calculate $\tilde{\beta}_2(\Gamma)$.
Consider $\Gamma'$, which is obtained from $\Gamma$ by removing all faces of the form $Nuv$ for $N \in \mathcal{N}, u \in U', v \in V'$. By construction, every face removed in this manner is a maximal face in $\Gamma$. Note that $\Gamma'$ is neither a Rips complex nor a flag complex. By Lemma \ref{BipartiteLemma}, $\tilde{\beta}_1(\Gamma'[U',V']) = \tilde{\beta}_1(\Gamma[U',V']) \leq \lfloor n/3 \rfloor$. To calculate $\tilde{\beta}_1(\Gamma')$, consider $\Gamma'_i := \Gamma'[U',V',N_1, \ldots, N_i]$. Then $\mbox{\upshape lk}\,_{\Gamma'_i}(N_i)$ consists of at most three components, as $\mbox{\upshape lk}\,_{\Gamma'_i}(N_i) \cap \Gamma[U'], \mbox{\upshape lk}\,_{\Gamma'_i}(N_i) \cap \Gamma[V'], \mbox{\upshape lk}\,_{\Gamma'_i}(N_i) \cap \Gamma[\mathcal{N}]$ are all connected, and so $\tilde{\beta}_0(\mbox{\upshape lk}\,_{\Gamma'_i}(N_i)) \leq 2$. By induction on $i$, $\tilde{\beta}_1(\Gamma') \leq \lfloor n/3 \rfloor + 2\lfloor n/3 \rfloor \leq n$. By the Euler--Poincar\'{e} formula, $\tilde{\beta}_2(\Gamma) > \frac{1}{27.1}I(n) - n \geq C_{d,\alpha}I(n)$ for some constant $C_{d,\alpha}$, which proves the lower bound.
\end{proof}
The example of Theorem \ref{n32} satisfies the description of Theorem \ref{qr}, with each vertex in $W$ corresponding to a matching of $k$ pairs of vertices on $U$ and $V$. Hence $\tilde{\beta}_2(R(S)) \approx 3^{-3/2}n^{3/2}.$
Determining the value of $I(n)$, even to within a multiplicative constant, is a very challenging problem. It is shown in \cite{Elkin} that there exists a constant $C$ so that, for large $n$, there exists $A \subset \mathbb{Z}/n\mathbb{Z}$ such that $A$ contains no arithmetic progressions of length $3$ and $|A| \geq C n 2^{-2\sqrt{2}\sqrt{\log_2(n)}} \log^{1/4}(n)$. Such an $A$ can be adapted into a tripartite graph with $n$ vertices in each part, at least $C n^2 2^{-2\sqrt{2}\sqrt{\log_2(n)}} \log^{1/4}(n)$ triangles, and the property that no two triangles share an edge. This graph can then be further adapted into $n$ disjoint, induced matchings on a bipartite graph with $n$ vertices on each side. The collective size of the matchings is $C n^2 2^{-2\sqrt{2}\sqrt{\log_2(n)}} \log^{1/4}(n)$. An upper bound on $I(n)$, as given in the proof of \cite[Proposition 10.45]{TaoVu}, is $C'\frac{n^2}{(\log_*(n))^{1/5}}$ for some constant $C'$, where $\log_*(n)$ is the number of natural logarithms one needs to apply to $n$ to obtain a nonpositive value. The $\log_*(n)$ term comes from the usage of the Szemer\'edi Regularity Lemma in the proof.
Theorem \ref{qr} can be extended to higher Betti numbers using similar techniques as in Section \ref{Hom3}.
|
1,941,325,220,067 | arxiv | \section{Introduction}
In joint work with Kyu-Hwan Lee and Phil Lombardo \cite{LLS-A}, the author reinterpreted the Casselman-Shalika formula expression due to Brubaker-Bump-Friedberg \cite{BBF:11-ann,BBF:11-book} and Bump-Nakasuji \cite{BN:10}, turning it from a sum over the crystal graph (based on work of Tokuyama \cite{Tok:88}) into a sum over tableaux. The expression given in the work of Bump and Nakasuji involves taking paths in the graph of a highest weight crystal from a given vertex to the highest weight vector and decorating the path. These decorations, called boxing and circling, prescribe contributions at a vertex in the form of Gauss sums (coming from the theory of Weyl group multiple Dirichlet series and Whittaker functions). The resulting function, formed by summing the contributions over the crystal together with their respective weights, has been coined a {\it Tokuyama function}.
The benefit of a tableaux description of the Tokuyama function is that, in practice, one no longer needs to compute the entire path to the highest weight vertex in the crystal graph, which may be very large. Instead, one can extract the essential data from the content of the tableaux at the vertex and obtain the same function. This tableaux description was explained in \cite{LLS-A} by using reparametrizations $\aa(T)$ and ${\bm b}(T)$ of the string parametrization obtained directly from data in a tableau $T$, but again required the calculation of a sequence (in our case, two sequences). It is the goal of this work to interpret these two sequences as statistics on tableaux.
Such a statistic was created in the context of the Gindikin-Karpelevich formula, again based on the work of Brubaker-Bump-Friedberg \cite{BBF:11-ann,BBF:11-book} and Bump-Nakasuji \cite{BN:10}. This formula, from the context of crystals, may be viewed as the Verma module analogue of the highest weight calculation used in the Casselman-Shalika formula. In \cite{LS-ABCDG,LS-A}, we were able to recover the path to the highest weight vector using the marginally large tableaux of J.\ Hong and H.\ Lee \cite{HL:08}, which are a certain enlargement of semistandard Young tableaux, and interpret the decorations. The corresponding statistic was called the \emph{segment} statistic and may be easily read off from a marginally large tableau. Outside of type $A_r$, the proof in \cite{LS-ABCDG} relied on the interpretation of the Gindikin-Karpelevich formula as a sum over Lusztig's canonical basis given by H.\ H.\ Kim and K.-H.\ Lee in \cite{KL:11,KL:12} and did not require any decorated paths to the highest weight in the crystal graph.
From the decorated path point of view, the Gindikin-Karpelevich formula only requires the circling rule, and the segment statistic on marginally large tableaux completely encodes the circling data. Moreover, there exists an embedding from semistandard Young tableaux to marginally large tableaux which preserves the path to the highest weight. More precisely, if $\mathcal{B}(\lambda+\rho)$ is the crystal of highest weight $\lambda+\rho$ parametrized by semistandard Young tableaux of shape $\lambda + \rho$, then there is an embedding into the crystal of marginally large tableaux $\mathcal{B}(\infty)$ such that the circling rule is preserved. Understanding this embedding leads one to a definition of segments on ordinary semistandard Young tableaux of fixed shape, so that one is only left to understand the boxing rule. It is the latter problem where the results from \cite{LLS-A} become crucial, as the definition of the sequences defined there lead to a descriptive picture of what the boxing rule means on the tableaux level.
The way to understand the boxing rule again involves the idea of segments in a tableau, but it is how these segments are arranged in the tableau which will encode the boxing rule. In this note, we define the \emph{flush} statistic on semistandard Young tableaux $T$ in $\mathcal{B}(\lambda+\rho)$, which, loosely speaking, is the number of segments in $T$ whose left-most box is in the same column as the left-most box of the subsequent segment in the row beneath it (in English notation for tableaux). In other words, it is the number of segments that are flush left with their neighbor below. The notions of ``subsequent segment'' and ``neighbor below'' are made precise in Definition \ref{def}(\ref{defflush}) below. It turns out that $\operatorname{flush}(T)$ is exactly the number of boxed entries of $\aa(T)$ and is equal to the number of boxed entries in the decorated path from $T$ to the highest weight vector of $\mathcal{B}(\lambda+\rho)$.
It is the hope that the statistics developed here will help shed some new light on the Casselman-Shalika formula outside of type $A_r$. Currently, there are boxing and circling rules defined and verified in types $B_r$ \cite{FZ:13} and $C_r$ \cite{BBF:11-C,BBF:S}, but only conjectural formulas in types $D_r$ \cite{CG:S} and type $G_2$ \cite{FGG:S}.
\section{Crystals and Tableaux}\label{sec:crystals}
We start by recalling the setup from \cite{LLS-A}. Let $r\geq 1$ and suppose $\mathfrak{g} = \mathfrak{sl}_{r+1}$ with simple roots $\{ \alpha_1,\dots,\alpha_r\}$, and let $I = \{1,\dots,r\}$. Let $P$ and $P^+$ denote the weight lattice and the set of dominant integral weights, respectively. Let $\{h_1,\dots,h_r\}$ be the set of coroots and define a pairing $\langle \ ,\ \rangle \colon P^\vee\times P \longrightarrow \mathbf{Z}$ by $\langle h,\lambda \rangle = \lambda(h)$, where $P^\vee$ is the dual weight lattice. Let $\mathfrak{h} = \mathbf{C} \otimes_\mathbf{Z} P^\vee$ be the Cartan subalgebra, and let $\mathfrak{h}^*$ be its dual. Denote the {\it Weyl vector} by $\rho$; this is the element $\rho\in \mathfrak{h}^*$ defined as
$
\rho = \frac12\sum_{\alpha >0} \alpha = \sum_{i=1}^r \omega_i,
$
where $\omega_i$ is the $i$th fundamental weight. The set of roots for $\mathfrak{g}$ will be denoted by $\Delta$, while $\Delta^+$ will denote the set of positive roots and $N = \#\Delta^+$.
A {\it $\mathfrak{g}$-crystal} is a set $\mathcal{B}$ together with maps
$
\widetilde e_i, \widetilde f_i\colon \mathcal{B} \longrightarrow \mathcal{B}\sqcup\{0\}$,
$\varepsilon_i,\varphi_i\colon \mathcal{B} \longrightarrow \mathbf{Z}\sqcup\{-\infty\}$, and
${\rm wt}\colon \mathcal{B} \longrightarrow P,
$
such that, for all $b,b'\in \mathcal{B}$ and $i\in I$, we have $\widetilde f_ib = b'$ if and only if $\widetilde e_ib' = b$, ${\rm wt}(\widetilde f_ib) = {\rm wt}(b) - \alpha_i$, and $\langle h_i,{\rm wt}(b)\rangle = \varphi_i(b) - \varepsilon_i(b)$. The maps $\widetilde e_i$ (resp.\ $\widetilde f_i$) for $i\in I$ are called the {\it Kashiwara raising operators} (resp.\ {\it Kashiwara lowering operators}).
(For more details, see, for example, \cite{HK:02,Kash:95}.)
To each highest weight representation $V(\lambda)$ of $\mathfrak{g}$, there is an associated highest weight crystal $\mathcal{B}(\lambda)$ which serves as a combinatorial model of the representation $V(\lambda)$. The only fact we will use in this note is that $\mathcal{B}(\lambda)$ as a set may be realized as the set of semistandard tableaux of shape $\lambda$ over the alphabet $\{1,\dots,r+1\}$ with the usual ordering, where $\lambda = a_1\omega_1 + \cdots + a_r\omega_r$ is identified with the partition having $a_i$ columns of height $i$, for each $1\le i \le r$.
\section{Using the tableaux model}\label{sec:tableaux}
We now recall the definitions and result from \cite{LLS-A}.
\begin{dfn}[\cite{LLS-A}]
Let $\lambda \in P^+$ and $T\in \mathcal{B}(\lambda+\rho)$ be a tableau.
\begin{enumerate}
\item Define $\aa_{i,j}$ to be the number of $(j+1)$-colored boxes in rows $1$ through $i$ for $1 \leq i \leq j \leq r$, and define the vector $\aa(T) \in \mathbf{Z}_{\ge0}^N$ by
\[
\aa(T)=(\aa_{1,1}, \aa_{1,2}, \ldots, \aa_{1,r}; \aa_{2,2}, \ldots , \aa_{2,r}; \ldots ; \aa_{r,r} ).
\]
\item The number ${\bm b}_{i,j}$ is defined to be the number of boxes in the $i$th row which have color greater than or equal to $j+1$ for $1 \leq i \leq j \leq r$. Set
\[
{\bm b}(T) = ({\bm b}_{1,1},\dots,{\bm b}_{1,r};{\bm b}_{2,2},\dots,{\bm b}_{2,r};\cdots;{\bm b}_{r,r}).
\]
\item Write $\lambda+\rho$ as
$
\lambda+\rho = (\ell_1 > \ell_2 > \cdots > \ell_r > \ell_{r+1}= 0),
$
and define $\theta_i = \ell_i- \ell_{i+1}$ for $i=1,\dots,r$. Let $\theta = (\theta_1,\dots,\theta_r)$.
\end{enumerate}
\end{dfn}
In \cite{LLS-A}, we give a definition of boxing and circling on the entries of $\aa(T)=(\aa_{i,j})$ for $T \in \mathcal{B}(\lambda + \rho)$ based on the boxing and circling decorations on BZL paths in \cite{BBF:11-ann,BBF:11-book}.
\begin{align}
&\text{Box $\aa_{i,j}$ if ${\bm b}_{i,j} \geq \theta_i + {\bm b}_{i+1,j+1}$.}\tag{B-II}\label{ourbox}\\
&\text{Circle $\aa_{i,j}$ if $\aa_{i,j} = \aa_{i-1,j}$.}\tag{C-II}\label{ourcirc}
\end{align}
Set $\operatorname{non}(T)$ to be the number of entries in $\aa(T)$ which are neither circled nor boxed, and define $\operatorname{box}(T)$ to be the number of entries in $\aa(T)$ which are boxed. Additionally, borrowing the vernacular of the Gelfand-Tsetlin pattern setting of Tokuyama \cite{Tok:88}, we say that $T$ is \emph{strict} if $\aa(T)$ has no entry which is both boxed and circled. Now define a function $C_{\lambda+\rho}(\,\cdot\,;q)$ on $\mathcal{B}(\lambda+\rho)$ with values in $\mathbf{Z}[q^{-1}]$ by
\[
C_{\lambda+\rho}(T;q) =
\left\{\begin{array}{cl}
(-q^{-1})^{\operatorname{box}(T)}(1-q^{-1})^{\operatorname{non}(T)} & \text{if $T$ is strict},\\
0 & \text{otherwise}.
\end{array}\right.
\]
\begin{thm}[\cite{LLS-A}]\label{thm:CS-A}
We have
\begin{equation}\label{eq:CS-A}
{\bm z}^\rho\chi_\lambda({\bm z})\prod_{\alpha>0} (1-q^{-1}{\bm z}^{-\alpha})
= \sum_{T\in \mathcal{B}(\lambda+\rho)} C_{\lambda+\rho}(T;q) {\bm z}^{{\rm wt}(T)}.
\end{equation}
\end{thm}
The main result of this note is a new statistic on $T\in \mathcal{B}(\lambda+\rho)$ to compute $C_{\lambda+\rho}(T;q)$ without the need to construct the sequence $\aa(T)$.
\begin{dfn}\label{def}
Let $T\in \mathcal{B}(\lambda+\rho)$ be a tableau.
\begin{enumerate}
\item\label{defseg} We define a {\it $k$-segment} \cite{LS-ABCDG,LS-A} of $T$ (in the $i$th row) to be a maximal consecutive sequence of $k$-boxes in the $i$th row, for any $i+1 \le k \le r+1$. Denote the total number of $k$-segments in $T$ by $\operatorname{seg}(T)$.
\item\label{defflush} Let $1 \le i < k \le r+1$ and suppose $\ell$ is the smallest integer greater than $k$ such that there exists an $\ell$-segment in the $(i+1)$st row of $T$. A $k$-segment in the $i$th row of $T$ is called {\it flush} if the leftmost box in the $k$-segment and the leftmost box of the $\ell$-segment are in the same column of $T$. If, however, no such $\ell$ exists, then this $k$-segment is said to be {\it flush} if the number of boxes in the $k$-segment is equal to $\theta_i$. Denote the number of flush $k$-segments in $T$ by $\operatorname{flush}(T)$.
\end{enumerate}
\end{dfn}
\begin{ex}
Let $r=4$, $\lambda = 2\omega_3$, and
\[
T = \young(112335,23344,3555,5).
\]
It is easy to see that $\operatorname{seg}(T) = 7$ because there is a $2$-segment in the first row, a $3$-segment in both the first and second rows, a $4$-segment in the second row, and a $5$-segment in each of the first, third, and fourth rows. Moreover, $\operatorname{flush}(T) = 5$ because each $3$-segment and $5$-segment is flush. In other words, the $2$-segment in the first row and the $4$-segment in the second row are not flush.
\end{ex}
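The computation in the example can be automated. The following minimal Python sketch (ours, not taken from \cite{LLS-A}) computes $\operatorname{seg}(T)$ and $\operatorname{flush}(T)$ directly from the rows of a tableau, following Definition \ref{def}; on the example above, where $\theta = (1,1,3,1)$ for the shape $\lambda+\rho$, it returns $(7,5)$.
\begin{verbatim}
# Compute seg(T) and flush(T) from the rows of a tableau (a sketch).
def segments(row, i):
    """Maximal runs of k-boxes in row i (1-indexed) with k >= i+1.
    Returns a list of (k, start_column, length), columns 1-indexed."""
    segs, c = [], 0
    while c < len(row):
        k, start = row[c], c
        while c < len(row) and row[c] == k:
            c += 1
        if k >= i + 1:
            segs.append((k, start + 1, c - start))
    return segs

def seg_and_flush(T, theta):
    rows = [segments(row, i + 1) for i, row in enumerate(T)] + [[]]
    seg = sum(len(r) for r in rows)
    flush = 0
    for i, segs in enumerate(rows[:-1]):
        for (k, start, length) in segs:
            below = [s for s in rows[i + 1] if s[0] > k]
            if below:                        # an ell-segment exists below
                if min(below)[1] == start:   # leftmost boxes share a column
                    flush += 1
            elif length == theta[i]:         # no such ell: compare to theta_i
                flush += 1
    return seg, flush

T = [[1,1,2,3,3,5], [2,3,3,4,4], [3,5,5,5], [5]]   # the example above
print(seg_and_flush(T, theta=(1, 1, 3, 1)))        # -> (7, 5)
\end{verbatim}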
The motivating picture for the definition of flush is given in Figure \ref{fig:flush}, and will be useful to keep in mind for the proof of Theorem \ref{algorithm}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[xscale=1.5,yscale=.65]
\node (i) at (0,1.5) {$i$th row};
\node (i+1) at (0,0.5) {$(i+1)$st row};
\draw[color=white,fill=gray!30] (1,2) -- (3,2) -- (3,1) -- (2,1) -- (2,0) -- (1,0) -- cycle;
\draw[-] (1,2) -- (6,2) -- (6,1) -- (4,1) -- (4,0) -- (1,0) -- cycle;
\draw[-] (4,1) -- (3,1) -- (3,2);
\draw[-] (3,1) -- (2,1) -- (2,0);
\draw[-] (3.1,1.5) -- (5.9,1.5);
\node[fill=white,inner sep=1.7] at (4.5,1.45) {${\bm b}_{i,k-1}$};
\draw[-] (2.1,0.5) -- (3.9,0.5);
\node[fill=white,inner sep=1.7] at (3,0.45) {${\bm b}_{i+1,k}$};
\draw [decorate,decoration={brace,amplitude=7pt,mirror},xshift=0.4pt,yshift=-0.4pt](4.,.95) -- (6,.95) node[black,midway,yshift=-0.45cm] {\scriptsize $\theta_i$};
\end{tikzpicture}
\caption{A diagram motivating the definition of flush.}
\label{fig:flush}
\end{figure}
\begin{thm}\label{algorithm}
Let $T\in \mathcal{B}(\lambda+\rho)$ be a tableau.
\begin{enumerate}
\item\label{circbox} Let $1 \le i < k \le r$. Suppose the following two conditions hold.
\begin{enumerate}
\item\label{nocircle} There is no $k$-segment in the $i$th row of $T$.
\item\label{nobox} Let $\ell$ be the smallest integer greater than $k$ such that there exists an $\ell$-segment in the $i$th row. There is no $p$-segment in the $(i+1)$st row, for $k+1\le p \le \ell$, and the $\ell$-segment is flush.\footnote{By convention, if no such $\ell$ exists, then condition (\ref{nobox}) is not satisfied.}
\end{enumerate}
Then $C_{\lambda+\rho}(T;q) = 0$.
\item\label{segflush} If conditions {\upshape(\ref{nocircle})} and {\upshape(\ref{nobox})} are not satisfied, then
\[
C_{\lambda+\rho}(T;q) = (-q^{-1})^{\operatorname{flush}(T)}(1-q^{-1})^{\operatorname{seg}(T)-\operatorname{flush}(T)}.
\]
\end{enumerate}
\end{thm}
\begin{proof}
First note that ${\bm b}_{i,k-1} \ge {\bm b}_{i+1,k} + \theta_i$ implies ${\bm b}_{i,k-1} = {\bm b}_{i+1,k}+\theta_i$ because $T$ is semistandard.
We claim that conditions (\ref{nocircle}) and (\ref{nobox}) are equivalent to $\aa_{i,k-1}$ in $\aa(T)$ being both boxed and circled. First, there is no $k$-segment in the $i$th row if and only if $\aa_{i,k-1} = \aa_{i-1,k-1}$, which justifies condition (\ref{nocircle}). It now remains to show that (\ref{nobox}) is equivalent to $\aa_{i,k-1}$ being boxed. If condition (\ref{nobox}) holds, then, by the definition of $\ell$ and being flush, we have
$
{\bm b}_{i,k-1} = {\bm b}_{i,\ell-1} = {\bm b}_{i+1,\ell} + \theta_i = {\bm b}_{i+1,k} + \theta_i,
$
so $\aa_{i,k-1}$ is boxed. On the other hand, if $\aa_{i,k-1}$ is boxed and there is no $k$-segment in the $i$th row, then ${\bm b}_{i,k-1} = {\bm b}_{i,\ell-1} = {\bm b}_{i+1,k} + \theta_i$, where $\ell$ is as in condition (\ref{nobox}). The only way ${\bm b}_{i,\ell-1} = {\bm b}_{i+1,k}+\theta_i$ is if the leftmost box of the $\ell$-segment in the $i$th row and the leftmost box of the $m$-segment in the $(i+1)$st row are in the same column, where $m$ is the smallest integer greater than $k$ such that there exists an $m$-segment in the $(i+1)$st row. By the semistandardness of $T$, this implies condition (\ref{nobox}) must be satisfied.
To see condition (\ref{segflush}), it follows from Lemma 2.5 and Proposition 2.7 of \cite{LLS-A} that $\operatorname{seg}(T)$ is exactly the number of entries in $\aa(T)$ which are not circled. Additionally, it follows immediately from the definition that a $k$-segment in the $i$th row is flush if and only if ${\bm b}_{i,k-1} = {\bm b}_{i+1,k}+\theta_i$. Hence $\operatorname{box}(T) = \operatorname{flush}(T)$ and $\operatorname{non}(T) = \operatorname{seg}(T)-\operatorname{flush}(T)$, as required.
\end{proof}
\begin{dfn}
If $T \in \mathcal{B}(\lambda+\rho)$ is a tableau and if conditions (\ref{nocircle}) and (\ref{nobox}) in Theorem \ref{algorithm} are satisfied for some $1\le i < k \le r$, then we say $T$ \emph{has gaps}. If no such pair $(i,k)$ satisfies conditions (\ref{nocircle}) and (\ref{nobox}) in Theorem \ref{algorithm}, then we say $T$ is \emph{gapless}.
\end{dfn}
Using the idea of gaps, we may rewrite
\[
C_{\lambda+\rho}(T;q) =
\left\{\begin{array}{cl}
(-q^{-1})^{\operatorname{flush}(T)}(1-q^{-1})^{\operatorname{seg}(T)-\operatorname{flush}(T)} & \text{if $T$ is gapless},\\
0 & \text{if $T$ has gaps},
\end{array}\right.
\]
to get
\begin{equation}\label{CSflush}
{\bm z}^\rho\chi_\lambda({\bm z})\prod_{\alpha>0} (1-q^{-1}{\bm z}^{-\alpha})
= \sum_{\substack{T\in \mathcal{B}(\lambda+\rho) \\ T \,\mathrm{gapless}}} (-q^{-1})^{\operatorname{flush}(T)}(1-q^{-1})^{\operatorname{seg}(T)-\operatorname{flush}(T)} {\bm z}^{{\rm wt}(T)}.
\end{equation}
\begin{ex}
Let
\[
T = \young(11134,2224,34)\ .
\]
There is no $2$-segment in the first row nor is there a $3$-segment in the second row. However, the $3$-segment in the first row is flush, so $T$ has a gap and $C_{\lambda+\rho}(T;q) =0$. As a check, we have $\aa(T) = (0,1,1;1,2;3)$ and ${\bm b}(T) = (2,2,1;1,1;1)$. By \eqref{ourcirc}, $\aa_{1,1} =0$ is circled because it equals the (non-existent) entry $\aa_{0,1} =0$. Moreover, by \eqref{ourbox}, $\aa_{1,1}$ is boxed because ${\bm b}_{1,1} = 2$ and ${\bm b}_{2,2} + \theta_1 = 1+1$.
\end{ex}
\begin{ex}
Let
\[
T = \young(11222334,223334,444)\ .
\]
All possible $k$-segments are included in $T$, so condition (\ref{nocircle}) of Theorem \ref{algorithm} is not satisfied, so $T$ is gapless and $C_{\lambda+\rho}(T;q) \neq 0$. There is a $2$-segment in the first row, a $3$-segment in both the first and second rows, and a $4$-segment in all three rows. Thus $\operatorname{seg}(T) = 6$. Next, the $2$-segment in the first row, the $3$-segment in the first row, and $4$-segment in the last row are each flush, so $\operatorname{flush}(T) = 3$. Hence $C_{\lambda+\rho}(T;q) = (-q^{-1})^3(1-q^{-1})^3$. As a check, $\aa(T) = (3,2,1;5,2;5)$, where no entry is circled and $\aa_{1,1}$, $\aa_{1,2}$, and $\aa_{3,3}$ are all boxed, as required.
\end{ex}
\begin{remark}
While the proof above made use of the circling and boxing rules of \cite{BBF:11-book}, the statistics $\operatorname{seg}(T)$ and $\operatorname{flush}(T)$ are intrinsic to the tableaux. Thus, generalizing these statistics to other Lie algebras may yield an appropriate Casselman-Shalika formula expansion over a crystal graph without the need for circling and boxing rules. At this moment, such an expansion is important as there are no proven circling and boxing rules in types $D_r$ and $G_2$. (See \cite{CG:S} for more on the conjecture in type $D_r$ and \cite{FGG:S} for the conjecture in type $G_2$.)
\end{remark}
\section*{Acknowledgements}
The author revisited this work whilst discussing related topics at the ICERM Semester Program on ``Automorphic Forms, Combinatorial Representation Theory and Multiple Dirichlet Series'' during Spring 2013, and then again while preparing a talk at the BIRS workshop entitled ``Whittaker Functions: Number Theory, Geometry, and Physics'' (13w5154) in October 2013, so he would like to thank the various organizers for these opportunities. The author also benefited from informative discussions with Kyu-Hwan Lee and Phil Lombardo.
|
1,941,325,220,068 | arxiv | \section*{Appendix \thesection\protect\indent \parbox[t]{11.15cm} {#1} } \addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1} }
\def\mapa#1{\smash{\mathop{\longrightarrow }\limits_{#1} }}
\def |
1,941,325,220,069 | arxiv | \section{Introduction}
\label{intro}
Finite time system identification---the problem of estimating the parameters of an unknown dynamical system given a finite time series of its output---is an important problem in the context of time-series analysis, control theory, economics and reinforcement learning. In this work we will focus on obtaining sharp non--asymptotic bounds for \textit{linear} dynamical system identification using the ordinary least squares (OLS) method. Such a system is described by $X_{t+1} = AX_t + \eta_{t+1}$ where $X_t \in \mathbb{R}^d$ is the state of the system and $\eta_t$ is the unobserved process noise. The goal is to learn $A$ by observing only $X_t$'s. Our techniques can easily be extended to the more general case when there is a control input $U_t$, \textit{i.e.}, $X_{t+1} = AX_{t} + BU_t + \eta_{t+1}$. In this case $(A, B)$ are unknown, and we can choose $U_t$.
Linear systems are ubiquitous in control theory. For example, proportional-integral-derivative (PID) controller is a popular linear feedback control system found in a variety of devices, from planetary soft landing systems for rockets (see e.g.~\cite{accikmecse2013lossless}) to coffee machines. Further, linear approximations to many non--linear systems have been known to work well in practice. Linear systems also appear as auto--regressive (AR) models in time series analysis and econometrics. Despite its importance, sharp non--asymptotic characterization of identification error in such models was relatively unknown until recently.
In the statistics literature, correlated data is often dealt with using mixing--time arguments (see e.g. \cite{yu1994rates}).
However, a fundamental limitation of the mixing-time method is that bounds deteriorate when the underlying process mixes slowly. For discrete linear systems, this happens when $\rho(A)$---the spectral radius of $A$---approaches $1$. As a result these methods cannot extend to the case when $\rho(A) \geq 1$. More recently there has been renewed effort in obtaining sharp non--asymptotic error bounds for linear system identification~\cite{faradonbeh2017finite,simchowitz2018learning}. Specifically,~\cite{faradonbeh2017finite} analyzed the case when the system is either stable ($\rho(A) < 1$) or purely explosive ($\rho(A) > 1$). For the case when $\rho(A) < 1$ the techniques in \cite{faradonbeh2017finite} are similar to the standard mixing time arguments and, as a result, suffer from the same limitations. When the system is purely explosive, the authors of \cite{faradonbeh2017finite} show that finite time identification is only possible if the system is regular, \textit{i.e.}, if the geometric multiplicity of eigenvalues greater than unity is one. However, as discussed in~\cite{simchowitz2018learning}, the bounds obtained in~\cite{faradonbeh2017finite} are suboptimal due to a decoupled analysis of the sample covariance, $\sum_{t=1}^T X_tX_t^{\prime}$, and the martingale difference term $\sum_{t=1}^T X_t \eta_{t+1}'$. A second approach, based on Mendelson's small--ball method, was studied in~\cite{simchowitz2018learning}. Such a technique eschewed the need for mixing-time arguments and sharper error bounds for $1 - C/T \leq \rho(A) \leq 1 + C/T$ could be obtained. The authors in~\cite{simchowitz2018learning} argue that a larger signal-to-noise ratio, measured by $\lambda_{\min}(\sum_{t=0}^{T-1} A^{t}A^{t \prime})$, makes it easier to estimate $A$. Although this intuition is consistent for the case when $\rho(A) \leq 1$, it does not extend to the case when eigenvalues are far outside the unit circle. Since $X_T = \sum_{t=1}^T A^{T-t} \eta_{t}$, the behavior of $X_T$ is dominated by $\{\eta_1, \eta_2, \ldots \}$, \textit{i.e.}, the past, due to exponential scaling by $\{A^{T-1}, A^{T-2},\ldots\}$. As a result, $X_1$ depends strongly on $\{X_2, \ldots, X_T\}$ and standard techniques of creating ``independent'' blocks of covariates fail.
The problem of system identification has received a lot of attention. Asymptotic results on identification of AR models can be found in~\cite{lai1983asymptotic}. Some of the earlier work on finite time identification in systems theory include~\cite{campi2002finite,vidyasagar2006learning}. A more general setting of the problem considered here is when $X_t$ is observed indirectly via its filtered version, \textit{i.e.}, $Y_t = CX_t$ where $C$ is unknown. The single input single output (SISO) version of this problem, \textit{i.e.}, when $Y_t, U_t$ are numbers, has been studied in~\cite{hardt2016gradient} under the assumption that system is stable. Provable guarantees for system identification in general linear systems was also studied in~\cite{oymak2018non}. However, the analysis there requires that $||A|| < 1$. Generalization bounds for time series forecasting of non--stationary and non--mixing processes have been developed in~\cite{forecasting_mohri}.
\section{Notation and Definitions}
A linear time invariant system (LTI) is parametrized by a matrix, $A$, where the observed variable, $X_t$, indexed by $t$ evolves as
\begin{equation}
X_{t+1} = AX_t + \eta_{t+1}. \label{lti}
\end{equation}
Here $\eta_t$ is the noise process.
Denote by $\rho_i(A)$ the absolute value of the $i^{th}$ eigenvalue of the $d \times d$ matrix $A$. Then
\[
\rho_{\max}(A) = \rho_1(A) \geq \rho_2(A) \geq \hdots \geq \rho_{d}(A) = \rho_{\min}(A).
\]
Similarly the singular values of $A$ are denoted by $\sigma_i(A)$. For any matrix $M$, $||M||_{\text{op}} = ||M||_2$.
\begin{definition}
\label{stable}
A stable LTI system is that where $\rho_{\max}(A) < 1$. An explosive LTI system is that where $\rho_{\min}(A) > 1$.
\end{definition}
For simplicity of exposition, we assume that $X_0 = 0$ with probability $1$. All the results can be obtained by assuming $X_0$ to be some bounded vector.
\begin{definition}
\label{isotropic}
A random vector $X \in \mathbb{R}^{d}$ is called isotropic if for all $x \in \mathbb{R}^d$ we have
\[
\mathbb{E} \langle X, x \rangle^2 = ||x||^2_2
\]
\end{definition}
\begin{assumption}
\label{subgaussian_noise}
$\{\eta_t\}_{t=1}^{\infty}$ are i.i.d.\ isotropic subGaussian and the coordinates of $\eta_t$ are i.i.d. Further, if $f(x)$ is the pdf of each noise coordinate, then the essential supremum of $f(\cdot)$ is bounded above by $C < \infty$.
\end{assumption}
We will deal with only regular systems, \textit{i.e.}, LTI systems where eigenvalues of $A$ with absolute value greater than unity have geometric multiplicity one. We will show that when $A$ is not regular, OLS is statistically inconsistent.
Define the data matrix $\textbf{X}$ and the noise matrix $E$ as
\[
\textbf{X} =\begin{bmatrix}
X_0^{\prime} \\ X^{\prime}_1 \\ \vdots \\X_{T}^{\prime}
\end{bmatrix},
~~~E =\begin{bmatrix}
\eta_1^{\prime} \\ \eta^{\prime}_2 \\ \vdots \\ \eta_{T+1}^{\prime}
\end{bmatrix}
\]
where the superscript $a^{\prime}$ denotes the transpose.
Then $\textbf{X}$, $E$ are $(T+1) \times d$ matrices. Consider the OLS solution
\begin{equation*}
\hat{A} = \argmin_{B} \sum_{t=0}^{T}||X_{t+1} - BX_{t}||^2_2.
\end{equation*}
One can show that
\begin{equation}
\label{error_lse}
\hat{A} - A = ((\textbf{X}^{\prime} \textbf{X})^{+} \textbf{X}^{\prime} E)^{\prime}
\end{equation}
where $M^{+}$ is the pseudo inverse of $M$. We define
\begin{equation*}
Y_T = \textbf{X}^{\prime} \textbf{X} = \sum_{t=0}^{T} X_t X_t^{\prime},~~~~ S_T = \textbf{X}^{\prime} E = \sum_{t=0}^{T} X_t \eta_{t+1}^{\prime}.
\end{equation*}
To analyze the error in estimating $A$, we will aim to bound the norm of $(\textbf{X}^{\prime} \textbf{X})^{+} \textbf{X}^{\prime}$.
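In more detail, writing (in this sketch) $\textbf{X}_{+}$ for the $(T+1) \times d$ matrix with rows $X_1^{\prime}, \ldots, X_{T+1}^{\prime}$, the dynamics \eqref{lti} give $\textbf{X}_{+} = \textbf{X}A^{\prime} + E$, and when $\textbf{X}$ has full column rank,
\[
\hat{A}^{\prime} = (\textbf{X}^{\prime}\textbf{X})^{+}\textbf{X}^{\prime}\textbf{X}_{+} = A^{\prime} + (\textbf{X}^{\prime}\textbf{X})^{+}\textbf{X}^{\prime}E,
\]
which is \eqref{error_lse}.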
\begin{table*}
\begin{center}
\begin{tabular}{|l|}
\hline
$T_{\eta}(\delta) = C\Big(\log{\frac{2}{\delta}} + d \log{5}\Big)$\\
$T_{s}(\delta) = C\Big({ d \log{( \text{tr}(\Gamma_T(A))+1)} + 2d \log{\frac{5}{\delta}} }\Big)$ \\
$c(A, \delta) = T_{s}(\frac{2\delta}{3T})$\\%C\Big({ {d} \log{( \text{tr}(\Gamma_T(A))+1)} + {2d}\log{\frac{15T}{2\delta}}}\Big)$\\
$\beta_0(\delta) = \inf{\Big\{\beta|\beta^2\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta}\rfloor}(A)) \geq \Big(\frac{ 16ec(A, \delta)}{ T\sigma_{\min}(A A^{\prime})}\Big)\Big\}}$\\
$T_{ms}(\delta) = \inf{\Big\{T \Big| T \geq \frac{Cc(A, \delta)}{ \sigma_{\min}(A A^{\prime})}\Big\}}$\\
$T_{u}(\delta)={\Big\{T \Big| \Big(4T^2 \sigma_1^2(A^{-\lfloor \frac{T+1}{2} \rfloor}) \text{tr}(\Gamma_T(A^{-1})) + \frac{T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})}{\delta}\Big) \leq \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2}{2\sigma_{\max}(P)^2} \Big\}}$ \\
$\gamma(A, \delta)=\frac{4 \phi_{\max}(A)^2 \sigma_{\max}^2(A)}{\phi_{\min}(A)^2 \sigma_{\min}^2(A) \psi(A)^2 \delta^2} (1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1}))P^{\prime})I$ \\
$\gamma_s(A, \delta) = \sqrt{8d \Big(\log{\Big(\frac{5}{\delta}\Big) + \frac{1}{2}\log{\Big(4\text{tr}(\Gamma_T(A)) + 1 \Big)}}\Big)}$\\
$\gamma_{ms}(A, \delta) = \sqrt{16 d \log{(\text{tr}(\Gamma_T(A)) + 1)} + 32d \log{\Big(\frac{15T}{2\delta}\Big)}}$\\
$\gamma_e(A, \delta) = \frac{\sqrt{d}\sigma_{\max}(P) }{\phi_{\min}(A) \psi(A)\delta}\sqrt{ \log{\frac{2}{\delta}} + 2 \log{5} + \log {(1 + \gamma(A, \delta))}}$\\
\hline
\end{tabular}
\caption{Notation} \label{notation}
\end{center}
\end{table*}
We will occasionally replace $X_t$ (or $X(t)$) with the lower-case counterparts $x_t$ (or $x(t)$) to denote state at time $t$, whenever this does not cause confusion. Further, we will use $C,c$ to indicate universal constants that can change from line to line.
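As a quick illustration (a minimal simulation sketch; the dimension, horizon, and the particular stable $A$ below are arbitrary choices, not from the analysis), one can simulate \eqref{lti} and form the OLS estimate directly:
\begin{verbatim}
# Simulate X_{t+1} = A X_t + eta_{t+1} with X_0 = 0 and compute the
# OLS estimator; all numerical choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 3
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.2],
              [0.0, 0.0, 0.3]])   # an example stable matrix (rho < 1)

X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A @ X[t] + rng.standard_normal(d)

# Regress X_{t+1} on X_t: least squares gives B with Xmat @ B ~ Xnext,
# so B estimates A' and A_hat = B'.
Xmat, Xnext = X[:-1], X[1:]
A_hat = np.linalg.lstsq(Xmat, Xnext, rcond=None)[0].T
print(np.linalg.norm(A_hat - A, 2))   # operator-norm estimation error
\end{verbatim}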
Define the \emph{Gramian} as
\begin{equation}
\label{gramian}
\Gamma_t(A) = \sum_{k=0}^t A^k A^{k\prime}
\end{equation}
and a Jordan block matrix $J_d(\lambda)$ as
\begin{equation}
\label{jordan}
J_d(\lambda) =\begin{bmatrix}
\lambda & 1 & 0 & \hdots & 0 \\
0 & \lambda & 1 & \hdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & \lambda & 1 \\
0 & 0 & \hdots & 0 & \lambda
\end{bmatrix}_{d \times d}
\end{equation}
We present the three classes of matrices that will be of interest to us:
\begin{itemize}
\item The perfectly stable matrix class, $\mathcal{S}_0$
$$\rho_{i}(A) \leq 1 - \frac{C}{T}$$
for $1 \leq i \leq d$.
\item The marginally stable matrix, $\mathcal{S}_1$
$$1 - \frac{C}{T} < \rho_i(A) \leq 1+\frac{C}{T}$$
for $1 \leq i \leq d$.
\item The regular and explosive matrix, $\mathcal{S}_2$
$$\rho_i > 1 + \frac{C}{T}$$
for $1 \leq i \leq d$.
\end{itemize}
Slightly abusing the notation, whenever we write $A \in \mathcal{S}_i \cup \mathcal{S}_j$ we mean that $A$ has eigenvalues in both $\mathcal{S}_i, \mathcal{S}_j$.
Critical to obtaining refined error rates will be a result from the theory of self--normalized martingales. We let $\bm{\mathcal{F}}_t = \sigma(\eta_1, \eta_2, \ldots, \eta_t, X_1, \ldots, X_t)$ denote the filtration generated by the noise and covariate processes.
\begin{prop}
\label{selfnorm_bnd}
Let $V$ be a deterministic matrix with $V \succ 0$. For any $0 < \delta < 1$ and $\{\eta_t, X_t\}_{t=1}^{T}$ defined as before, we have with probability $1 - \delta$
\begin{align}
&||(\bar{Y}_{T-1})^{-1/2} \sum_{t=0}^{T-1} X_t \eta_{t+1}^{\prime}||_2 \nonumber\\
&\leq R\sqrt{8d \log {\Bigg(\dfrac{5 \text{det}(\bar{Y}_{T-1})^{1/2d} \text{det}(V)^{-1/2d}}{\delta^{1/d}}\Bigg)}}
\end{align}
where $\bar{Y}^{-1}_{\tau} = (Y_{\tau} + V)^{-1}$ and $R^2$ is the subGaussian parameter of $\eta_t$.
\end{prop}
The proof can be found in the appendix as Proposition~\ref{selfnorm_bnd_proof}. It rests on Theorem 1 in \cite{abbasi2011improved}, which is itself an application of the pseudo-maximization technique in \cite{pena2008self} (see Theorem 14.7).
Finally, we define several $A$-dependent quantities that will appear in time complexities in the next section.
\begin{definition}[Outbox Set]
\label{outbox}
For the space $\mathbb{R}^{d}$ define the $a$--outbox, $S_d(a)$, as the following set
\[
S_d(a) = \{v | \min_{1 \leq i \leq d} |v_i| \geq a\}
\]
$S_d(a)$ will be used to quantify the following norm--like quantities of a matrix:
\begin{align}
\phi_{\min}(A) &= \sqrt{\inf_{v \in S_d(1)} \sigma_{\min}\Big(\sum_{i=1}^T \Lambda^{-i+1} vv^{\prime} \Lambda^{-i+1 \prime}\Big)} \\
\phi_{\max}(A) &= \sqrt{\sup_{||v||_2 = 1} \sigma_{\max}\Big(\sum_{i=1}^T \Lambda^{-i+1} vv^{\prime} \Lambda^{-i+1 \prime}\Big)}\label{anticonc_norm}
\end{align}
where $A = P^{-1} \Lambda P$ is the Jordan normal form of $A$.
\end{definition}
$\psi(A)$ is defined in Proposition~\ref{anti_conc1} and is needed for error bounds for explosive matrices.
\begin{prop}[Proposition 2 in~\cite{faradonbeh2017finite}]
\label{anti_conc1}
Let $\rho_{\min}(A) > 1$ and $P^{-1} \Lambda P = A$ be the Jordan decomposition of $A$. Define $z_T = A^{-T}\sum_{i=1}^TA^{T-i}\eta_i$ and
$$\psi(A, \delta) = \sup \Bigg\{y \in \mathbb{R} : \mathbb{P}\Bigg(\min_{1 \leq i \leq d}|P_i^{'}z_T| < y \Bigg) \leq \delta \Bigg\}$$
where $P = [P_1, P_2, \ldots, P_d]^{'}$. Then
$$\psi(A, \delta) \geq \psi(A) \delta > 0$$
Here $\psi(A) = \frac{1}{2 d \sup_{1 \leq i \leq d}C_{|P_i^{'}z_T|}}$ where $C_{X}$ is the essential supremum of the pdf of $X$.
\end{prop}
We summarize some notation in Table~\ref{notation} for convenience in representing our results.
\section{Probabilistic Inequalities}
\label{prob_ineq}
\begin{prop}[Eq. 2.20~\cite{wainwright_notes}]
\label{noise_energy_bnd}
Let $\eta_t$ be i.i.d sub--Gaussian vectors. Then for every $z \in (0, 1)$
\begin{align*}
\mathbb{P}\Big(\Big|\Big|\dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} - I\Big|\Big| \geq z\Big) \leq 2 \cdot 5^d \exp\Big(-\frac{Tz^2}{32}\Big)
\end{align*}
Then with probability at least $1 - \delta$,
\[
||\dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} - I|| \leq \sqrt{\dfrac{32\Big(\log{\dfrac{2}{\delta}} + d\log{5}\Big)}{T}}
\]
\end{prop}
Then for $T > T_{\eta}(\delta)$ we have, with probability at least $1-\delta$, that
\begin{align}
\frac{3}{4}I &\preceq \dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} \preceq \frac{5}{4}I \nonumber \\
T_{\eta}(\delta) &= 512 \Big(\log{\frac{2}{\delta}} + d \log{5}\Big) \label{tight_noise_bound}
\end{align}
Further with the same probability
\begin{align}
\frac{3\sigma_{\min}^{2}(P)}{4}I &\preceq \dfrac{1}{T}\sum_{t=1}^T P\eta_t \eta_t^{\prime}P^{\prime} \preceq \frac{5\sigma_{\max}^{2}(P)}{4}I \nonumber \\
T_{\eta}(\delta) &= 512 \Big(\log{\frac{2}{\delta}} + d \log{5}\Big) \label{dep_tight_noise_bound}
\end{align}
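A quick check of the constant in \eqref{tight_noise_bound}: the deviation bound in Proposition~\ref{noise_energy_bnd} is at most $1/4$ precisely when
\[
\sqrt{\dfrac{32\Big(\log{\dfrac{2}{\delta}} + d\log{5}\Big)}{T}} \leq \frac{1}{4}, \qquad \text{\textit{i.e.}, when } T \geq 512 \Big(\log{\frac{2}{\delta}} + d \log{5}\Big) = T_{\eta}(\delta),
\]
which yields the two-sided bounds above.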
\begin{prop}[\cite{vershynin2010introduction}]
\label{eps_net}
We have for any $\epsilon < 1$ that
\[
\mathbb{P}(||(Y_T^{+})^{1/2} \sum_{t=1}^T X_t \eta_t^{\prime}|| > z) \leq (1 + 2/\epsilon)^d \sup_{w \in \mathcal{S}^{d-1}}\mathbb{P}(||(Y_T^{+})^{1/2} \sum_{t=1}^T X_t \eta_t^{\prime}w|| > \frac{z}{2})
\]
\end{prop}
Proposition~\ref{eps_net} helps us in using the tools developed in de la Pena et al. and~\cite{abbasi2011improved} for self--normalized martingales. We will define $\tilde{S}_t = \sum_{\tau=0}^{t-1} X_{\tau} \tilde{\eta}_{\tau+1}$ where $\tilde{\eta}_t=w^{\prime} \eta_t$ is a unit-variance sub-Gaussian random variable when $w$ is a unit vector. Specifically, we use Theorem 1 of~\cite{abbasi2011improved}, which we state here for convenience.
\begin{thm}[Theorem 1 in~\cite{abbasi2011improved}]
\label{selfnorm_main}
Let $\{\bm{\mathcal{F}}_t\}_{t=0}^{\infty}$ be a filtration. Let $\{\eta_{t}\}_{t=1}^{\infty}$ be a real valued stochastic process such that $\eta_t$ is $\bm{\mathcal{F}}_t$ measurable and $\eta_t$ is conditionally $R$-sub-Gaussian for some $R > 0$, \textit{i.e.},
\[
\forall \lambda \in \mathbb{R} \hspace{2mm} \mathbb{E}[e^{\lambda \eta_t} | \bm{\mathcal{F}}_{t-1}] \leq e^{\frac{\lambda^2 R^2}{2}}
\]
Let $\{X_t\}_{t=1}^{\infty}$ be an $\mathbb{R}^d$--valued stochastic process such that $X_t$ is $\bm{\mathcal{F}}_{t}$ measurable. Assume that $V$ is a $d \times d$ positive definite matrix. For any $t \geq 0$ define
\[
\bar{V}_t = V + \sum_{s=1}^t X_s X_s^{\prime}, \quad S_t = \sum_{s=1}^t \eta_{s+1} X_s
\]
Then for any $\delta > 0$ with probability at least $1-\delta$ for all $t \geq 0$
\[
||S_{t}||^2_{\bar{V}^{-1}_{t}} \leq 2 R^2 \log{\Bigg(\dfrac{\text{det}(\bar{V}_{t})^{1/2} \text{det}(V)^{-1/2}}{\delta}\Bigg)}
\]
\end{thm}
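As a sanity check of Theorem~\ref{selfnorm_main}, the following Monte Carlo sketch (a hypothetical scalar AR(1) illustration with Gaussian noise, $R = 1$ and $V = 1$; it is not part of the formal argument) estimates how often the self--normalized bound is violated; the frequency should be at most $\delta$:
\begin{verbatim}
import numpy as np

# Scalar check: X_s is F_s-measurable, eta_{s+1} is the next innovation.
rng = np.random.default_rng(1)
T, trials, delta = 200, 2000, 0.05
violations = 0
for _ in range(trials):
    eta = rng.standard_normal(T + 1)
    X = np.zeros(T)
    for s in range(1, T):
        X[s] = 0.9 * X[s - 1] + eta[s]
    Vbar = 1.0 + np.sum(X ** 2)              # V + sum_s X_s^2
    S = np.sum(eta[1:T + 1] * X)             # S_T = sum_s eta_{s+1} X_s
    lhs = S ** 2 / Vbar
    rhs = 2 * np.log(np.sqrt(Vbar) / delta)  # det(Vbar)^{1/2} det(V)^{-1/2}
    violations += lhs > rhs
print("violation frequency:", violations / trials, "target: <=", delta)
\end{verbatim}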
\begin{prop}
\label{selfnorm_bnd_proof}
For any $0 < \delta < 1$ and $\{\eta_t, S_t, \bm{\mathcal{F}}_t \}_{t=1}^{T}$ defined as before, we have with probability $1 - \delta$
\begin{align}
&||(\bar{Y}_{T-1})^{-1/2} \sum_{t=0}^{T-1} X_t \eta_{t+1}^T||_2 \nonumber\\
&\leq R\sqrt{8d \log {\Bigg(\dfrac{5 \text{det}(\bar{Y}_{T-1})^{1/2d} \text{det}(V)^{-1/2d}}{\delta^{1/d}}\Bigg)}}
\end{align}
where $\bar{Y}_{\tau} = Y_{\tau} + V$ for any deterministic $V$ with $V \succ 0$.
\end{prop}
\begin{proof}
Using Proposition~\ref{eps_net} with $\epsilon=1/2$, we have that
\begin{align*}
\mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}||_2 > y) \leq 5^d \sup_{w \in \mathcal{S}^{d-1}}\mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}w||_2 > \frac{y}{2})
\end{align*}
Setting $S_{T-1} w = \tilde{S}_{T-1}$ and replacing
$$y^2 = 8R^2 \log{\Bigg(\dfrac{\text{det}(\bar{Y}_{T-1})^{1/2} \text{det}(V)^{-1/2}}{5^{-d}\delta}\Bigg)}$$
we get from Theorem~\ref{selfnorm_main}
\[
\mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}||_2 > y) \leq \delta
\]
\end{proof}
\begin{thm}[Hanson--Wright Inequality]
\label{hanson-wright}
Let $X=(X_1, X_2, \ldots, X_n) \in \mathbb{R}^n$ be a random vector with independent sub--Gaussian coordinates satisfying $\sup_i ||X_i||_{\psi_2}\leq K$. Then for any $B \in \mathbb{R}^{n \times n}$ and $t \geq 0$
\begin{align}
&\Pr(|X^{\prime} B X - \mathbb{E}[X^{\prime} B X]| \geq t) \nonumber\\
&\leq 2 \exp\Bigg\{- c \min{\Big(\frac{t}{K^2 ||B||}, \frac{t^2}{K^4 ||B||^2_{HS}}\Big)}\Bigg\}
\end{align}
\end{thm}
\begin{cor}[Dependent Hanson--Wright Inequality]
\label{dep-hanson-wright}
Given independent sub--Gaussian vectors $X_i \in \mathbb{R}^d$ such that the coordinates $X_{ij}$ are independent and $\sup_{ij} ||X_{ij}||_{\psi_2}\leq K$. Let $P$ be a full rank matrix. Define
$$X=\begin{bmatrix}PX_1 \\
PX_2 \\
\vdots \\
PX_n \end{bmatrix} \in \mathbb{R}^{dn}$$
Then for any $B \in \mathbb{R}^{dn \times dn}$ and $t \geq 0$
\begin{align}
&\Pr(|X^{\prime} B X - \mathbb{E}[X^{\prime} B X]| \geq t) \nonumber\\
&\leq 2 \exp\Bigg\{- c \min{\Big(\frac{t}{K^2\sigma_1^2(P) ||B||}, \frac{t^2}{K^4 \sigma_1^4(P)||B||^2_{HS}}\Big)}\Bigg\}
\end{align}
\end{cor}
\begin{proof}
Define
$$\tilde{X} = \begin{bmatrix}X_1 \\
X_2 \\
\vdots \\
X_n \end{bmatrix}$$
Now $\tilde{X}$ is such that its coordinates are independent. Observe that $X = (I_{n \times n}\otimes P)\tilde{X} $. Then $X^{\prime} B X = \tilde{X}^{\prime} (I_{n \times n}\otimes P^{\prime}) B (I_{n \times n}\otimes P) \tilde{X}$. Since
\begin{align*}
||(I_{n \times n}\otimes P^{\prime}) B (I_{n \times n}\otimes P)|| &\leq \sigma_1^2(P) ||B|| \\
\text{tr}((I_{n \times n}\otimes P^{\prime}) B (I_{n \times n}\otimes P)&(I_{n \times n}\otimes P^{\prime}) B (I_{n \times n}\otimes P)) \\
\leq \sigma_1^2(P) \text{tr}((I_{n \times n}\otimes P^{\prime}) &B^2 (I_{n \times n}\otimes P)) \\
\leq \sigma_1^4(P) &\text{tr}(B^2)
\end{align*} and now we can use the Hanson--Wright inequality of Theorem~\ref{hanson-wright} to get the desired bound.
\end{proof}
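The two matrix inequalities invoked in the proof can be verified numerically. The following minimal sketch (with arbitrary illustrative dimensions and randomly drawn $P$ and symmetric $B$) checks both the spectral--norm and the Frobenius--norm versions of the Kronecker conjugation bound:
\begin{verbatim}
import numpy as np

# Check ||(I_n (x) P') B (I_n (x) P)||   <= sigma_1(P)^2 ||B||
# and   ||(I_n (x) P') B (I_n (x) P)||_F <= sigma_1(P)^2 ||B||_F.
rng = np.random.default_rng(2)
d, n = 3, 4
P = rng.standard_normal((d, d))
B = rng.standard_normal((d * n, d * n))
B = (B + B.T) / 2                      # symmetric test matrix
K = np.kron(np.eye(n), P)
M = K.T @ B @ K
s1 = np.linalg.norm(P, 2)
assert np.linalg.norm(M, 2) <= s1 ** 2 * np.linalg.norm(B, 2) + 1e-9
assert np.linalg.norm(M, 'fro') <= s1 ** 2 * np.linalg.norm(B, 'fro') + 1e-9
print("Kronecker conjugation bounds hold")
\end{verbatim}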
Let $X_t = \sum_{j=0}^{t-1} A^j \eta_{t-j}$.
\begin{prop}
\label{energy_markov}
With probability at least $1-\delta$, we have
\begin{align*}
||\sum_{t=1}^T X_t X_t^{\prime}||_2 &\leq \frac{T\text{tr}(\Gamma_{T-1}(A))}{\delta} \\
||\sum_{t=1}^T AX_t X_t^{\prime}A^{\prime}||_2 &\leq \frac{T\text{tr}(\Gamma_{T}(A) - I)}{\delta}
\end{align*}
Let $\delta \in (0, e^{-1})$ then with probability at least $1-\delta$
\[
||\sum_{t=1}^T X_t X_t^{\prime}||_2 \leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
for some universal constant $c$.
\end{prop}
\begin{proof}
Define $\tilde{\eta} = \begin{bmatrix} \eta_1 \\ \eta_2 \\ \vdots \\ \eta_T\end{bmatrix}$ and $\tilde{A}$ as
\[
\tilde{A} = \begin{bmatrix} I & 0 & 0 & \hdots & 0 \\
A & I & 0 & \hdots & 0 \\
\vdots & \vdots & \ddots & \vdots &\vdots\\
\vdots & \vdots & \vdots & \ddots&\vdots\\
A^{T-1} & A^{T-2} & A^{T-3} & \hdots & I
\end{bmatrix}
\]
Then
\[
\tilde{A} \tilde{\eta} = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_T\end{bmatrix}
\]
Since
\[
||X_t X_t^{\prime}|| = X_t^{\prime} X_t
\]
We have that
$$||\sum_{t=1}^T X_t X_t^{\prime}|| \leq \sum_{t=1}^T X_t^{\prime} X_t = \tilde{\eta}^{\prime} \tilde{A}^{\prime} \tilde{A}\tilde{\eta} = \text{tr}(\tilde{A}\tilde{\eta} \tilde{\eta}^{\prime} \tilde{A}^{\prime})$$
The first assertion of the proposition follows by applying Markov's inequality to $\text{tr}(\tilde{A}\tilde{\eta} \tilde{\eta}^{\prime} \tilde{A}^{\prime})$. For the second assertion observe that each block of $\tilde{A}$ is scaled by $A$, and the proof remains the same.
For the final assertion, in the notation of Theorem~\ref{hanson-wright} set $B=\tilde{A}^{\prime} \tilde{A}, X=\tilde{\eta}$. Then
\begin{align*}
||B||_S &= \text{tr}(\tilde{A}^{\prime}\tilde{A}) \\
&= \sum_{t=0}^{T-1}\text{tr}(\Gamma_t(A)) \\
||B||_F^2 &\leq ||B||_S ||B||_2
\end{align*}
Define $c^{*} = \min{(c, 1)}$. Set $t = \frac{||B||_F^2}{c^{*}||B||} {\log{(\frac{1}{\delta})}}$ and assume $\delta \in (0, e^{-1})$ then
\begin{align*}
\frac{t}{c^{*}||B||} \leq \frac{t^2}{c^{*}||B||_F^2}
\end{align*}
we get from Theorem~\ref{hanson-wright} that
\begin{align*}
\tilde{\eta}^{\prime}\tilde{A}^{\prime}\tilde{A}\tilde{\eta} &\leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A)) + \frac{||B||_F^2}{c^{*}||B||} \log{\Big(\frac{1}{\delta}\Big)} \\
&\leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A)) + \frac{||B||_S}{c^{*}} \log{\Big(\frac{1}{\delta}\Big)} \\
&\leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\end{align*}
with probability at least $1 - \exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)}$. Since
\[
\frac{c||B||_F^2}{c^{*}||B||_2^2} \geq 1
\]
it follows that
\[
\exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)} \leq \delta
\]
and we can conclude that with probability at least $1-\delta$
\[
\tilde{\eta}^{\prime}\tilde{A}^{\prime}\tilde{A}\tilde{\eta} \leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
\end{proof}
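The block lower--triangular construction above is easy to check numerically. The following sketch (with an arbitrary illustrative $A$ and Gaussian noise) verifies that $\tilde{A}\tilde{\eta}$ stacks $X_1, \ldots, X_T$ and that $\sum_{t=1}^T X_t^{\prime} X_t = \tilde{\eta}^{\prime}\tilde{A}^{\prime}\tilde{A}\tilde{\eta}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, T = 3, 6
A = rng.standard_normal((d, d)) / np.sqrt(d)
eta = [rng.standard_normal(d) for _ in range(T)]   # eta[i] = eta_{i+1}

# Block row i holds A^{i-j} at block column j <= i.
Atil = np.zeros((d * T, d * T))
for i in range(T):
    for j in range(i + 1):
        Atil[i*d:(i+1)*d, j*d:(j+1)*d] = np.linalg.matrix_power(A, i - j)

eta_stack = np.concatenate(eta)
X_stack = Atil @ eta_stack                          # stacks X_1, ..., X_T
X = [sum(np.linalg.matrix_power(A, j) @ eta[t - 1 - j] for j in range(t))
     for t in range(1, T + 1)]
assert np.allclose(X_stack, np.concatenate(X))
assert np.isclose(sum(x @ x for x in X),
                  eta_stack @ Atil.T @ Atil @ eta_stack)
print("block construction verified")
\end{verbatim}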
\begin{cor}
\label{sub_sum}
Whenever $\delta \in (0, e^{-1})$, we have with probability at least $1-\delta$
\[
||\sum_{t=k+1}^T X_t X_t^{\prime}||_2 \leq \text{tr}(\sum_{t=k}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
for some universal constant $c$.
\end{cor}
\begin{proof}
The proof follows the same steps as Proposition~\ref{energy_markov}. Define
\[
\tilde{A} = \begin{bmatrix} I & 0 & 0 & \hdots & 0 \\
A & I & 0 & \hdots & 0 \\
\vdots & \vdots & \ddots & \vdots &\vdots\\
\vdots & \vdots & \vdots & \ddots&\vdots\\
A^{T-1} & A^{T-2} & A^{T-3} & \hdots & I
\end{bmatrix}
\]
Define $\tilde{A}_k$ as the matrix formed by zeroing out all the block rows of $\tilde{A}$ from the $(k+1)$-th block row onwards. Then observe that
\begin{align*}
||\sum_{t=k+1}^T X_t X_t^{\prime}|| &\leq \text{tr}(\sum_{t=k+1}^T X_t X_t^{\prime}) \\
&= \text{tr}(\sum_{t=1}^T X_t X_t^{\prime} - \sum_{t=1}^k X_t X_t^{\prime}) \\
&= \tilde{\eta}^{\prime}(\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k)\tilde{\eta}
\end{align*}
Since $ \text{tr}(\sum_{t=1}^T X_t X_t^{\prime} - \sum_{t=1}^k X_t X_t^{\prime}) \geq 0$ for any $\tilde{\eta}$ it implies $B = (\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k) \succeq 0$.
\begin{align*}
||B||_S &= \text{tr}(\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k) \\
&= \sum_{t=k}^{T-1}\text{tr}(\Gamma_t(A)) \\
||B||_F^2 &\leq ||B||_S ||B||_2
\end{align*}
Define $c^{*} = \min{(c, 1)}$. Set $t = \frac{||B||_F^2}{c^{*}||B||} {\log{(\frac{1}{\delta})}}$ and assume $\delta \in (0, e^{-1})$ then
\begin{align*}
\frac{t}{c^{*}||B||} \leq \frac{t^2}{c^{*}||B||_F^2}
\end{align*}
we get from Theorem~\ref{hanson-wright} that
\begin{align*}
\tilde{\eta}^{\prime}B\tilde{\eta} &\leq ||B||_S + \frac{||B||_F^2}{c^{*}||B||} \log{\Big(\frac{1}{\delta}\Big)} \\
&\leq ||B||_S + \frac{||B||_S}{c^{*}} \log{\Big(\frac{1}{\delta}\Big)} \\
&\leq ||B||_S\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\end{align*}
with probability at least $1 - \exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)}$. Since
\[
\frac{c||B||_F^2}{c^{*}||B||_2^2} \geq 1
\]
it follows that
\[
\exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)} \leq \delta
\]
and we can conclude that with probability at least $1-\delta$
\[
\tilde{\eta}^{\prime}B\tilde{\eta} \leq \text{tr}(\sum_{t=k}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
\end{proof}
\begin{prop}
\label{anti_conc}
Let $P^{-1} \Lambda P = A$ be the Jordan decomposition of $A$ and define
$$\psi(A, \delta) = \sup \Bigg\{y \in \mathbb{R} : \mathbb{P}\Bigg(\min_{1 \leq i \leq d}|P_i^{'}z_T| < y \Bigg) \leq \delta \Bigg\}$$
where $P = [P_1, P_2, \ldots, P_d]^{'}$. If $\rho_{\min}(A) > 1$, then
$$\psi(A, \delta) \leq C d\sigma_{\max}(\Gamma_T(A^{-1})) \delta$$
where $C$ is a universal constant.
\end{prop}
\section{Lower Bound for $Y_T$ when $A \in \mathcal{S}_0 \cup \mathcal{S}_1$}
\label{short_proof}
Here we will prove our results when $\rho(A) \leq 1+ C/T$.
Define
\begin{align*}
P &= A Y_{T-1} A^{\prime} \\
Q &= \sum_{\tau=0}^{T-1}{Ax_\tau \eta_{\tau+1}^{\prime} }\\
V &= TI \\
T_{\eta} &= 512\Big(\log{\frac{2}{\delta}} + d \log{5}\Big) \\
\mathcal{E}_{1}(\delta) &= \Bigg\{||Q||^2_{(P+V)^{-1}} \leq 8 \log{\Bigg(\dfrac{5^d\text{det}(P+V)^{1/2} \text{det}(V)^{-1/2}}{\delta}\Bigg)}\Bigg\} \\
\mathcal{E}_{2}(\delta) &= \Bigg\{||\sum_{\tau=0}^{T-1} Ax_{\tau} x_{\tau}^{\prime}A^{\prime}|| \leq \frac{T \text{tr}(\Gamma_{T}(A) - I)}{\delta}\Bigg\}\\
\mathcal{E}_{\eta}(\delta) &= \{T > T_{\eta}(\delta), \frac{3}{4}I \preceq \dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} \preceq \frac{5}{4}I\} \\
\mathcal{E}(\delta) &= \mathcal{E}_{\eta}(\delta) \cap \mathcal{E}_{1}(\delta) \cap \mathcal{E}_{2}(\delta) \\
\end{align*}
Recall that
\begin{equation}
\label{lb_step1}
{Y}_T \succeq A {Y}_{T-1} A^{\prime} + \sum_{t=0}^{T-1} {Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} + \sum_{t=1}^T \eta_t \eta_t^{\prime}
\end{equation}
Our goal here will be to control
\begin{equation}
\label{cross_terms}
||Q||_2
\end{equation}
Following Proposition~\ref{selfnorm_bnd_proof} and Proposition~\ref{energy_markov}, it is true that $\mathbb{P}(\mathcal{E}_{1}(\delta) \cap \mathcal{E}_{2}(\delta)) \geq 1-2\delta$. We will show that
$$\mathcal{E}(\delta) = \mathcal{E}_{\eta}(\delta) \cap \mathcal{E}_{1}(\delta) \cap \mathcal{E}_{2}(\delta) \implies \sigma_{\min}\Big(\frac{{Y}_T}{T}\Big) \geq 1/4$$
Under $\mathcal{E}_{\eta}(\delta)$, we get
\begin{align}
{Y}_T &\succeq A {Y}_{T-1} A^{\prime} + \sum_{t=0}^{T-1} {Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} + \sum_{t=1}^T \eta_t \eta_t^{\prime} \nonumber \\
{Y}_T &\succeq A {Y}_{T-1} A^{\prime} + \sum_{t=0}^{T-1} {Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} + \frac{3}{4}TI \nonumber \\
U^{\prime} {Y}_T U &\geq U^{\prime} A Y_{T-1} A^{\prime} U + U^{\prime} \sum_{t=0}^{T-1} \Bigg({Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} \Bigg) U + \frac{3}{4}T \hspace{3mm} \forall U\in \mathcal{S}^{d-1} \label{contra_eq}
\end{align}
Intersecting Eq.~\eqref{contra_eq} with $\mathcal{E}_1(\delta) \cap \mathcal{E}_2(\delta)$, we find under $\mathcal{E}(\delta)$
\begin{align*}
&||Q||^2_{(P+V)^{-1}} \leq 8 \log{\Bigg(\dfrac{5^d\text{det}(P+V)^{1/2} \text{det}(V)^{-1/2}}{\delta}\Bigg)} \\
&\leq 8 \log{\Bigg(\dfrac{5^d \Big(\frac{ \text{tr}(\Gamma_{T}(A) - I)}{\delta} + 1\Big)^{d/2}}{\delta}\Bigg)} \\
&\leq 8 \log{\Bigg(\dfrac{5^d ({ \text{tr}(\Gamma_{T}(A) - I)} + 1)^{d/2}}{\delta^d}\Bigg)}
\end{align*}
Using Proposition~\ref{psd_result_2} and letting $\kappa^2 = U^{\prime} P U$, we obtain
\begin{align*}
&||QU||_2 \\
&\leq \sqrt{\kappa^2 + T}\sqrt{8 \log{\Bigg(\dfrac{5^d ({ \text{tr}(\Gamma_{T}(A) - I)} + 1)^{d/2}}{\delta^d}\Bigg)}}
\end{align*}
So Eq.~\eqref{contra_eq} implies
\begin{align}
U^{\prime} {Y}_T U &\geq \kappa^2 \nonumber - \sqrt{(\kappa^2 + T)}{ \sqrt{ 16d \log{( \text{tr}(\Gamma_T - I)+1)} + 32d \log{\frac{5}{\delta}}}} + \frac{3}{4}T
\end{align}
which gives us
\begin{align}
U^{\prime} \frac{{Y}_T}{T} U &\geq \frac{\kappa^2}{T} - \sqrt{(\frac{\kappa^2}{T} + 1)}\underbrace{ \sqrt{ \frac{16d}{T} \log{( \text{tr}(\Gamma_T - I)+1)} + \frac{32d}{T}\log{\frac{5}{\delta}}}}_{=\beta} + \frac{3}{4} \label{contra_eq2}
\end{align}
If we can ensure
\begin{equation}
\label{t_req}
\frac{T}{128} \geq { \frac{d}{2} \log{( \text{tr}(\Gamma_T - I)+1)} + d \log{\frac{5}{\delta}} }
\end{equation}
then $\beta \leq 1/2$, \textit{i.e.},
\[
\sqrt{ \frac{16d}{T} \log{( \text{tr}(\Gamma_T - I)+1)} + \frac{32d}{T}\log{\frac{5}{\delta}}} \leq \frac{1}{2}
\]
Let $T$ be large enough that Eq.~\eqref{t_req} is satisfied; then Eq.~\eqref{contra_eq2} implies
\begin{equation}
\label{final_eq}
U^{\prime} \frac{{Y}_T}{T} U \geq \frac{\kappa^2}{T} - \frac{\sqrt{(\frac{\kappa^2}{T} + 1)}}{2} + \frac{3}{4} \geq \frac{1}{4} + \frac{\kappa^2}{2T}
\end{equation}
Since $U \in \mathcal{S}^{d-1}$ is arbitrary, Eq.~\eqref{final_eq} implies
\begin{align}
Y_T \succeq \frac{T}{4}I \label{lower_bnd}
\end{align}
with probability at least $1 - 3\delta$ whenever
\begin{align}
\rho_i(A) &\leq 1 + \frac{c}{T} \nonumber\\
T &\geq \max{\Big(512\Big(\log{\frac{2}{\delta}} + d \log{5}\Big), 128\Big({ \frac{d}{2} \log{( \text{tr}(\Gamma_T - I)+1)} + d \log{\frac{5}{\delta}} }\Big)\Big)} \label{t_req_comb}
\end{align}
\begin{remark}
Eq.~\eqref{t_req} is satisfied for all sufficiently large $T$ whenever $\text{tr}(\Gamma_T - I)$ grows at most polynomially in $T$. This is the case whenever $\rho(A) \leq 1 +\frac{c}{T}$.
\end{remark}
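The lower bound of Eq.~\eqref{lower_bnd} can be observed in simulation. The following minimal sketch (a planar rotation, so $\rho(A) = 1$; standard Gaussian noise and arbitrary parameters) reports $\sigma_{\min}(Y_T)/T$, which should comfortably exceed $1/4$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
theta, T = 0.7, 5000
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.zeros(2)
Y = np.zeros((2, 2))
for _ in range(T):
    x = A @ x + rng.standard_normal(2)   # X_{t+1} = A X_t + eta_{t+1}
    Y += np.outer(x, x)                  # Y_T = sum_t X_t X_t'
print("sigma_min(Y_T)/T =", np.linalg.eigvalsh(Y)[0] / T, "(bound: 0.25)")
\end{verbatim}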
\section{Discussion}
\label{discussion}
In this work we provided finite time guarantees for OLS identification of LTI systems. We showed that whenever $A$ is regular, with an otherwise arbitrary distribution of eigenvalues, OLS can be used for identification; more specifically, we gave the sharpest possible rates when $A$ belongs to one of $\{\mathcal{S}_0, \mathcal{S}_1, \mathcal{S}_2 \}$. When the assumption of regularity is violated, we showed that OLS is statistically inconsistent. This suggests that, for explosive matrices, statistical consistency relies on the conditioning of the sample covariance matrix and \textit{not} so much on the signal-to-noise ratio. Despite substantial differences between the distributional properties of the covariates, we find that the time taken to reach a given error threshold scales the same way (up to a constant that depends only on $A$) across all regimes in terms of the probability of error. To see this, observe that Theorem~\ref{main_result} gives us with probability at least $1-\delta$
\begin{align}
A \in \mathcal{S}_0 &\implies ||A - \hat{A}|| \leq \sqrt{\frac{C_0(d)\log{\frac{1}{\delta}}}{T}} \nonumber \\
A \in \mathcal{S}_1 &\implies ||A - \hat{A}|| \leq \frac{C_1(d)}{T}{\log{\Big(\frac{T}{\delta}\Big)}} \nonumber \\
A \in \mathcal{S}_2 &\implies ||A - \hat{A}|| \leq \frac{C_2(d) \sigma_{\max}(A^{-T})}{\delta} \label{ub}
\end{align}
The lower bounds for $A \in \mathcal{S}_0$ and $A \in \mathcal{S}_1$ are given in~\cite{simchowitz2018learning} Appendix B, F.1 which are
\begin{align}
A \in \mathcal{S}_0 &\implies ||A - \hat{A}|| \geq \sqrt{\frac{B_0(d)\log{\frac{1}{\delta}}}{T}} \nonumber\\
A \in \mathcal{S}_1 &\implies ||A - \hat{A}|| \geq \frac{B_1(d)}{T}{\log{\Big(\frac{1}{\delta}\Big)}} \label{lbb}
\end{align}
with probability at least $\delta$. For $A \in \mathcal{S}_2$ we provide a tighter lower bound in Proposition~\ref{minimax}, \textit{i.e.}, with probability at least $\delta$
\begin{equation}
A \in \mathcal{S}_2 \implies ||A - \hat{A}|| \geq \frac{B_2(d) \sigma_{\max}(A^{-T})}{-\delta \log{\delta}} \label{lbb2}
\end{equation}
Now fix an error threshold $\epsilon$; from Eq.~\eqref{ub} we get with probability $\geq 1 - \delta$
\begin{align*}
A \in \mathcal{S}_0 &\implies ||A - \hat{A}|| \leq \epsilon \text{ if } T \geq \frac{ \log{\frac{1}{\delta}}}{\epsilon^2 C_0(d)} \\
A \in \mathcal{S}_1 &\implies ||A - \hat{A}|| \leq \epsilon \text{ if } T \geq \frac{ \log{\frac{T}{\delta}}}{\epsilon C_1(d)} \\
A \in \mathcal{S}_2 &\implies ||A - \hat{A}|| \leq \epsilon \text{ if } T \geq \frac{\log{\frac{1}{\delta {\epsilon}}} + \log{C_2(d)}}{\log{\rho_{\min}}}
\end{align*}
From Eq.~\eqref{lbb},\eqref{lbb2} we also know this is tight. In summary, to reach a given error threshold $\epsilon$, $T$ must grow at least as $\log{\frac{1}{\delta}}$ in every regime.
Another key contribution of this work is providing finite time guarantees for a general distribution of eigenvalues. A major hurdle towards applying Theorem~\ref{main_result} to the general case is the mixing between separate components (corresponding to the stable, marginally stable and explosive parts). Despite these difficulties we provide error bounds where each component, stable, marginally stable or explosive, has (almost) the same behavior as in Theorem~\ref{main_result}. The techniques introduced here can be used to analyze extensions such as identification in the presence of a control input $U_t$ or a heavy tailed noise distribution (see Sections~\ref{extensions} and \ref{noise_ind}).
\section{Extensions of analysis}
\label{extensions}
\subsection{Extension with control input}
Here we sketch how to extend our results to the general case when we also have a control input, \textit{i.e.},
\begin{equation}
\label{control_eq}
X_{t+1} = AX_t + BU_{t} + \eta_{t+1}
\end{equation}
Here $A, B$ are unknown but we can choose $U_t$. Pick independent vectors $\{U_t \sim \bm{\mathcal{N}}(0, I)\}_{t=1}^T$. We can represent this as a variant of Eq.~\eqref{lti} as follows
\begin{align*}
\underbrace{\begin{bmatrix}
X_{t+1} \\
U_{t+1}
\end{bmatrix}}_{\bar{X}_{t+1}} &= \underbrace{\begin{bmatrix}
A & B \\
0 & 0
\end{bmatrix}}_{\bar{A}}\begin{bmatrix}
X_{t} \\
U_{t}
\end{bmatrix} + \underbrace{\begin{bmatrix}
\eta_{t+1} \\
U_{t+1}
\end{bmatrix}}_{\bar{\eta}_{t+1}}
\end{align*}
Since
\begin{align*}
\text{det}\Bigg(\begin{bmatrix}
A -\lambda I & B \\
0 & -\lambda I
\end{bmatrix}\Bigg) = \text{det}(A - \lambda I)(-\lambda)^{p}
\end{align*}
where $p$ is the dimension of $U_t$, the characteristic polynomial of $\bar{A}$ vanishes exactly when $\lambda$ is an eigenvalue of $A$ or $\lambda = 0$. Thus the eigenvalues of $\bar{A}$ are the same as those of $A$, together with some additional zero eigenvalues. Now we can simply use Theorem~\ref{composite_result}.
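The eigenvalue claim for $\bar{A}$ can be confirmed numerically; in the following sketch $A$ and $B$ are arbitrary illustrations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d, p = 3, 2
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, p))
Abar = np.block([[A, B],
                 [np.zeros((p, d)), np.zeros((p, p))]])
print("eig(A):   ", np.round(np.sort_complex(np.linalg.eigvals(A)), 3))
print("eig(Abar):", np.round(np.sort_complex(np.linalg.eigvals(Abar)), 3))
# eig(Abar) equals eig(A) together with p additional zeros.
\end{verbatim}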
\subsection{Extension with heavy tailed noise}
It is claimed in~\cite{faradonbeh2017finite} that techniques involving inequalities for subgaussian distributions cannot be used for the class of sub--Weibull distributions considered there. However, by bounding the noise process, as \cite{faradonbeh2017finite} itself does, we can convert the heavy tailed process into a zero mean independent subgaussian one. In such a case our techniques can still be applied, and they incur only an extra logarithmic factor. This is discussed in detail in the appendix, Section~\ref{noise_ind}.
\section{Appendix}
\label{appendix_matrix}
\begin{prop}
\label{psd_result_2}
Let $P, V$ be a psd and pd matrix respectively and define $\bar{P} = P + V$. Let there exist some matrix $Q$ for which we have the following relation
\[
||\bar{P}^{-1/2} Q|| \leq \gamma
\]
For any vector $v$ such that $v^{\prime} P v = \alpha, v^{\prime} V v =\beta$ it is true that
\[
||v^{\prime}Q|| \leq \sqrt{\beta+\alpha} \gamma
\]
\end{prop}
\begin{proof}
Since
\[
||\bar{P}^{-1/2} Q||_2^2 \leq \gamma^2
\]
for any vector $v \in \mathcal{S}^{d-1}$ we will have
\[
\frac{v^{\prime} \bar{P}^{1/2}\bar{P}^{-1/2} Q Q^{\prime}\bar{P}^{-1/2}\bar{P}^{1/2} v}{v^{\prime} \bar{P} v} \leq \gamma^2
\]
and substituting $v^{\prime} \bar{P} v = \alpha + \beta$ gives us
\begin{align*}
{v^{\prime} Q Q^{\prime} v} &\leq \gamma^2{v^{\prime} \bar{P} v} \\
&= (\alpha + \beta) \gamma^2
\end{align*}
\end{proof}
\begin{prop}
\label{inv_jordan}
Consider a Jordan block matrix $J_d(\lambda)$ given by \eqref{jordan}, then $J_d(\lambda)^{-k}$ is a matrix where each off--diagonal (and the diagonal) has the same entries, \textit{i.e.},
\begin{equation}
J_d(\lambda)^{-k} =\begin{bmatrix}
a_1 & a_2 & a_3 & \hdots & a_d \\
0 & a_1 & a_2 & \hdots & a_{d-1} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & a_1 & a_2 \\
0 & 0 & \hdots & 0 & a_1
\end{bmatrix}_{d \times d}
\end{equation}
for some $\{a_i\}_{i=1}^d$.
\end{prop}
\begin{proof}
$J_d(\lambda) = (\lambda I + N)$ where $N$ is the matrix with all ones on the $1^{st}$ (upper) off-diagonal. $N^k$ is just all ones on the $k^{th}$ (upper) off-diagonal and $N$ is a nilpotent matrix with $N^d = 0$. Then
\begin{align*}
(\lambda I + N)^{-1} &= \sum_{l=0}^{d-1} (-1)^{l}\lambda^{-l-1}N^{l} \\
(-1)^{k-1}(k-1)!(\lambda I + N)^{-k} &= \sum_{l=0}^{d-1} (-1)^{l}\frac{d^{k-1}}{d \lambda^{k-1}}\big(\lambda^{-l-1}\big)N^{l} \\
&= \sum_{l=0}^{d-1} (-1)^{l}c_{l, k}N^{l}
\end{align*}
and the proof follows in a straightforward fashion.
\end{proof}
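The structure asserted in Proposition~\ref{inv_jordan} is straightforward to verify numerically; the block size, eigenvalue and power below are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np

d, lam, k = 5, 1.3, 3
J = lam * np.eye(d) + np.diag(np.ones(d - 1), 1)    # Jordan block J_d(lam)
Jinv_k = np.linalg.matrix_power(np.linalg.inv(J), k)
for off in range(d):
    diag = np.diag(Jinv_k, off)
    assert np.allclose(diag, diag[0])   # constant along each off-diagonal
assert np.allclose(np.tril(Jinv_k, -1), 0)  # strictly lower part vanishes
print("upper-triangular Toeplitz structure verified")
\end{verbatim}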
\begin{prop}
\label{reg_invertible}
Let $A$ be a regular matrix and $A = P^{-1} \Lambda P$ be its Jordan decomposition. Then
\[
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 > 0
\]
Further $\phi_{\min}(A) > 0$ where $\phi_{\min}(\cdot)$ is defined in Definition~\ref{outbox}.
\end{prop}
\begin{proof}
When $A$ is regular, the geometric multiplicity of each eigenvalue is $1$. This implies that $A^{-1}$ is also regular. Regularity of a matrix $A$ is equivalent to its minimal polynomial being equal to its characteristic polynomial (see Section~\ref{lemmab} in the appendix); in particular the minimal polynomial of $A^{-1}$ has degree $d$, so no non--trivial polynomial in $A^{-1}$ of degree at most $d-1$ can vanish, \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i A^{-i+1}||_2 &> 0
\end{align*}
Since $A^{-j} = P^{-1} \Lambda^{-j} P$ we have
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}P||_2 &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}||_2 \sigma_{\min}(P) &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 \sigma_{\min}(P) \sigma_{\min}(P^{-1}) &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 &>0
\end{align*}
Since $\Lambda$ is Jordan matrix of the Jordan decomposition, it is of the following form
\begin{equation}
\Lambda =\begin{bmatrix}
J_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
where $J_{k_i}(\lambda_i)$ is a $k_i \times k_i$ Jordan block corresponding to eigenvalue $\lambda_i$. Then
\begin{equation}
\Lambda^{-k} =\begin{bmatrix}
J^{-k}_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J^{-k}_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J^{-k}_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J^{-k}_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
Since $||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 >0$, without loss of generality assume that there is a non--zero element in the first $k_1 \times k_1$ block. This implies
\begin{align*}
||\underbrace{\sum_{i=1}^d a_i J_{k_1}^{-i+1}(\lambda_1)}_{=S}||_2 > 0
\end{align*}
By Proposition~\ref{inv_jordan} we know that each off--diagonal (including the diagonal) of $S$ has a constant entry. Let $j_0 = \inf{\{j \mid S_{ij} \neq 0 \text{ for some } i\}}$ and, in column $j_0$, pick the non--zero element with the highest row number, say in row $i_0$. By construction $S_{i_0, j_0} \neq 0$ and further
$$S_{k_1 -(j_0 - i_0), k_1} = S_{i_0, j_0}$$
because they are part of the same off--diagonal (or diagonal) of $S$. Thus the row $k_1 - (j_0 - i_0)$ has only one non--zero element because of the minimality of $j_0$.
We proved that for any $||a||=1$ there exists a row with only one non--zero element in the matrix $\sum_{i=1}^d a_i \Lambda^{-i+1}$. This implies that if $v$ is a vector with all non--zero elements, then $||\sum_{i=1}^d a_i \Lambda^{-i+1} v||_2 > 0$, \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1} v ||_2 &> 0
\end{align*}
This implies
\begin{align*}
\inf_{||a||_2 = 1}||[v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v] a||_2 &> 0\\
\sigma_{\min}([v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v]) &> 0 \\
\end{align*}
By Definition~\ref{outbox} we have
\begin{align*}
\phi_{\min}(A) &> 0
\end{align*}
\end{proof}
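The conclusion of Proposition~\ref{reg_invertible} can be illustrated numerically. The sketch below builds a regular $\Lambda$ (one Jordan block per eigenvalue; the sizes and eigenvalues are arbitrary) and evaluates the least singular value of the matrix $[v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v]$ from the proof for a vector $v$ with all non--zero entries:
\begin{verbatim}
import numpy as np

d = 4
# Blocks: {1.5}, {0.9 of size 2}, {0.5}; distinct eigenvalues => regular.
Lam = np.diag([1.5, 0.9, 0.9, 0.5]) + np.diag([0.0, 1.0, 0.0], 1)
v = np.ones(d)
cols = [np.linalg.matrix_power(np.linalg.inv(Lam), i) @ v for i in range(d)]
K = np.stack(cols, axis=1)             # [v, Lam^{-1} v, ..., Lam^{-d+1} v]
print("sigma_min:", np.linalg.svd(K, compute_uv=False)[-1])  # > 0
\end{verbatim}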
\begin{prop}[Corollary 2.2 in~\cite{ipsen2011determinant}]
\label{det_lb}
For any positive definite matrix $M$ with diagonal entries $m_{jj}$, $1 \leq j \leq d$ and $\rho$ is the spectral radius of the matrix $C$ with elements
\begin{align*}
c_{ij} &= 0 \hspace{3mm} \text{if } i=j \\
&=\frac{m_{ij}}{\sqrt{m_{ii}m_{jj}}} \hspace{3mm} \text{if } i\neq j
\end{align*}
then
\begin{align*}
0 < \frac{\prod_{j=1}^d m_{jj} - \text{det}(M)}{\prod_{j=1}^d m_{jj}} \leq 1 - e^{-\frac{d \rho^2}{1+\lambda_{\min}}}
\end{align*}
where $\lambda_{\min} = \min_{1 \leq j \leq d} \lambda_j(C)$.
\end{prop}
\begin{prop}
\label{gramian_lb}
Let $1 - C/T \leq \rho_i(A) \leq 1 + C/T$ and $A$ be a $d \times d$ matrix. Then there exists $\alpha(d)$ depending only on $d$ such that for every $8 d \leq t \leq T$
\[
\sigma_{\min}(\Gamma_t(A)) \geq t \alpha(d)
\]
\end{prop}
\begin{proof}
We write $A = P^{-1} \Lambda P$ where $\Lambda$ is the Jordan matrix. Since $\Lambda$ can be complex we will use the adjoint instead of the transpose. This gives
\begin{align*}
\Gamma_T(A) &= I + \sum_{t=1}^T A^t (A^{t})^{\prime} \\
&= I + P^{-1}\sum_{t=1}^T \Lambda^tPP^{\prime} (\Lambda^t)^{*} P^{-1 \prime} \\
&\succeq I + \sigma_{\min}(P)^2P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{*} P^{-1 \prime}
\end{align*}
This implies that
\begin{align*}
\sigma_{\min}( \Gamma_T(A)) &\geq 1 +\sigma_{\min}(P)^2 \sigma_{\min}(P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} P^{-1 \prime}) \\
&\geq 1 + \sigma_{\min}(P)^4 \sigma_{\min}(\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} )
\end{align*}
Now
\begin{align*}
\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} &= \begin{bmatrix}
\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*} & 0 & \hdots & 0 \\
0 & \sum_{t=0}^T J^{t}_{k_2}(\lambda_2)(J^{t}_{k_2}(\lambda_2))^{*} & 0 & \hdots \\
\vdots & \vdots & \ddots & \ddots \\
0 & \hdots & 0 & \sum_{t=0}^T J^{t}_{k_{l}}(\lambda_l) (J^{t}_{k_l}(\lambda_l))^{*}
\end{bmatrix}
\end{align*}
Since $\Lambda$ is block diagonal we only need to worry about the least singular value corresponding to some block. Let this block be the one corresponding to $J_{k_1}(\lambda_1)$, \textit{i.e.},
\begin{equation}
\sigma_{\min}(\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} ) =\sigma_{\min}(\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}) \label{bnd_1}
\end{equation}
Define $B = \sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}$. Note that $J_{k_1}(\lambda_1) = (\lambda_1 I + N)$ where $N$ is the nilpotent matrix that is all ones on the first off--diagonal and $N^{k_1} = 0$. Then
\begin{align*}
(\lambda_1 I + N)^t &= \sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j} \\
(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} &= \Big(\sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j}\Big)\Big(\sum_{k=0}^t {t \choose k} (\lambda_1^{*})^{t-k}N^{k \prime}\Big) \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2(t-j)} (\lambda_1^{*})^{j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On $(j-k)$ upper off--diagonal}} \\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2(t-k)} \lambda_1^{k-j} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On $(k-j)$ lower off--diagonal}}
\end{align*}
Let $\lambda_1 = r e^{i\theta}$; then, similar to~\cite{erxiong1994691}, there is $D = \text{Diag}(1, e^{-i\theta}, e^{-2i\theta}, \ldots, e^{-i(k_1-1)\theta})$ such that
\[
\underbrace{D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}}_{\text{Real Matrix}}
\]
is real. Indeed, any term on the $(j-k)$ upper off--diagonal of $(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}$ is of the form $r_0 e^{i(j-k)\theta}$, and in the product $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}$ any such term becomes $e^{-ij\theta + ik\theta} r_0 e^{i(j-k)\theta} = r_0$, which is real. Then we have
\begin{align}
D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} &= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2t-j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On $(j-k)$ upper off--diagonal}} \nonumber\\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2t-j-k} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On $(k-j)$ lower off--diagonal}} \label{real}
\end{align}
Since $D$ is unitary and $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} =(|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime} $, we can simply work with the case when $\lambda_1$ is real. Now we study the growth of the entries of this product.
Define $B=\sum_{t=1}^T (|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}$
with
\begin{align}
B_{ll} &=\sum_{t=1}^T [(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}]_{ll} \\
&= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)}. \label{bll}
\end{align}
Since $1-C/T \leq |\lambda_1| \leq 1+C/T$, then for every $t \leq T$ we have
$$e^{-C} \leq |\lambda_1|^t \leq e^{C}.$$
Then
\begin{align}
B_{ll} &= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
&\geq e^{-C} \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 \nonumber\\
& \geq e^{-C} \sum_{t=T/2}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 \geq e^{-C} \sum_{t=T/2}^T c_{k_1} \frac{t^{2k_1-2l+2} - 1}{t^2 - 1} \geq C(k_1) T^{2k_1 - 2l+1}. \label{lb}
\end{align}
An upper bound can be achieved in an equivalent fashion:
\begin{align}
B_{ll} &= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
& \leq e^{C} T \sum_{j=0}^{k_1-l} T^{2j} \leq C(k_1) T^{2k_1 - 2l + 1} \label{ub1}
\end{align}
Similarly, any $B_{jk} = C(k_1)T^{2k_1-j-k +1}$. For brevity we use the same $C(k_1)$ to indicate different functions of $k_1$ as we are interested only in the growth with respect to $T$. To summarize
\begin{align}
B_{jk} &= C(k_1)T^{2k_1 - j - k +1} \label{jordan_value}
\end{align}
whenever $T \geq 8d$. Recall Proposition~\ref{det_lb}, and let the $M$ there be equal to $B$. Then since
\[
C_{ij} = C(k_1)\frac{B_{ij}}{\sqrt{B_{ii} B_{jj}}} = C(k_1)\frac{T^{2k_1 - i - j +1}}{\sqrt{T^{4k_1 - 2i - 2j + 2}}}
\]
it turns out that $C_{ij}$ is independent of $T$ and, consequently, $\lambda_{min}(C), \rho$ are independent of $T$ and depend only on $k_1$: the Jordan block size. Then $\prod_{j=1}^{k_1} B_{jj} \geq \text{det}(B) \geq \prod_{j=1}^{k_1} B_{jj} e^{-\frac{d\rho^2}{1 + \lambda_{\min}}} = C(k_1) \prod_{j=1}^{k_1} B_{jj}$. This means that $\text{det}(B) = C(k_1) \prod_{j=1}^{k_1} B_{jj}$ for some function $C(k_1)$ depending only on $k_1$. Further, using the values for $B_{jj}$ we get
\begin{equation}
\label{det}
\text{det}(B) = C(k_1) \prod_{j=1}^{k_1} B_{jj} = \prod_{j=1}^{k_1} C(k_1) T^{2k_1 - 2j +1} = C(k_1) T^{k_1^2}.
\end{equation}
Next we use Schur-Horn theorem, \textit{i.e.}, let $\sigma_i(B)$ be the ordered singular values of $B$ where $\sigma_i(B) \geq \sigma_{i+1}(B)$. Then $\sigma_i(B)$ majorizes the diagonal of $B$, \textit{i.e.}, for any $k \leq k_1$
\[
\sum_{i=1}^k \sigma_i(B) \geq \sum_{i=1}^{k} B_{ii}.
\]
Observe that $B_{ii} \geq B_{jj}$ when $i \leq j$. Then from Eq.~\eqref{jordan_value} it implies that
\begin{align*}
B_{k_1 k_1}=C_1(k_1)T &\geq \sigma_{k_1}(B) \\
\sum_{j=k_1-1}^{k_1} B_{jj} &= C_{2}(k_1)T^{3} + C_1(k_1)T \geq \sigma_{k_1 - 1}(B) + \sigma_{k_1}(B).
\end{align*}
Since $k_1 \geq 1$, it can be checked that for $T \geq T_1 =k_1\sqrt{\frac{C_1(k_1)}{C_2(k_1)}}$ we have $\sigma_{k_1-1}(B) \leq {(1+k_1^{-2})C_2(k_1)T^3} \leq {(1+k_1^{-1})C_2(k_1)T^3}$ as for every $T \geq T_1$ we have $C_2(k_1)T^3 \geq k_1^2C_1(k_1)T$. Again to upper bound $\sigma_{k_1-2}(B)$ we will use a similar argument
\begin{align*}
\sum_{j=k_1-2}^{k_1} B_{jj} &= C_3(k_1)T^{5} + C_2(k_1)T^{3} + C_1(k_1)T \geq \sigma_{k_1-2}(B) +\sigma_{k_1-1}(B) + \sigma_{k_1}(B)
\end{align*}
and show that whenever
\[
T \geq \max{\Big(T_1, k_1\sqrt{\frac{C_2(k_1)}{C_3(k_1)}}\Big)}
\]
we get $\sigma_{k_1-2}(B) \leq (1+k_1^{-2} + k_1^{-4}){C_3(k_1)T^5} \leq (1+k_1^{-1}){C_3(k_1)T^5}$ because $T \geq T_1$ ensures $C_2(k_1)T^3 \geq k_1^2C_1(k_1)T$ and $T \geq T_2 = k_1\sqrt{\frac{C_2(k_1)}{C_3(k_1)}}$ ensures $C_3(k_1)T^5 \geq k_1^2 C_2(k_1)T^3$. The $C_i(k_1)$ are not important; the goal is to show that for a sufficiently large $T$ we have an upper bound on each singular value (roughly) corresponding to the diagonal element. Similarly we can ensure for every $i$ we have $\sigma_i(B) \leq (1+k_1^{-1})C_{k_1 -i+1}(k_1)T^{2k_1 - 2i + 1}$, whenever
\[
T > T_{i} = \max{\Big(T_{i-1}, k_1\sqrt{\frac{C_{i}(k_1)}{C_{i+1}(k_1)}}\Big)}
\]
Recall Eq.~\eqref{det} where $\text{det}(B) = C(k_1) T^{k_1^2}$. Assume that $\sigma_{k_1}(B) < \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}$. Then whenever $T \geq \max{\Big(8d, \sup_{i}k_1\sqrt{\frac{C_{i}(k_1)}{C_{i+1}(k_1)}}\Big)}$
\begin{align*}
\text{det}(B) &= C(k_1) T^{k_1^2} \\
\prod_{i=1}^{k_1}\sigma_i(B) &= C(k_1) T^{k_1^2} \\
\sigma_{k_1}(B)(1+k_1^{-1})^{k_1-1} T^{k_1^2-1}\prod_{i=2}^{k_1}C_{i+1} &\geq C(k_1) T^{k_1^2} \\
\sigma_{k_1}(B) &\geq \frac{C(k_1)T}{(1+k_1^{-1})^{k_1-1}\prod_{i=2}^{k_1}C_{i+1}} \\
&\geq \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}
\end{align*}
which is a contradiction. This means that $\sigma_{k_1}(B) \geq \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}$. This implies
\[
\sigma_{\min}(\Gamma_T(A)) \geq 1 + \sigma_{\min}(P)^4 C(k_1)T
\]
for some function $C(k_1)$ that depends only on $k_1$.
\end{proof}
It is possible that $\alpha(d)$ might be exponentially small in $d$; however, for many cases such as orthogonal or diagonal matrices $\alpha(d)=1$, as shown in~\cite{simchowitz2018learning}. We are not interested in finding the best bound $\alpha(d)$; rather, we show that the bound of Proposition~\ref{gramian_lb} exists and assume that such a bound is known.
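The (at least) linear growth of $\sigma_{\min}(\Gamma_t(A))$ asserted in Proposition~\ref{gramian_lb} can also be observed numerically; the following sketch tracks $\sigma_{\min}(\Gamma_t(A))/t$ for a marginally stable Jordan block ($\lambda = 1$, illustrative size $d = 3$):
\begin{verbatim}
import numpy as np

d = 3
A = np.eye(d) + np.diag(np.ones(d - 1), 1)   # Jordan block with lambda = 1
G = np.eye(d)                                # Gamma_0(A) = I
Ak = np.eye(d)
for t in range(1, 201):
    Ak = A @ Ak
    G = G + Ak @ Ak.T                        # Gamma_t = sum_k A^k (A^k)'
    if t % 50 == 0:
        print(t, np.linalg.eigvalsh(G)[0] / t)
\end{verbatim}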
\begin{prop}
\label{gramian_ratio}
Let $t_1/t_2 = \beta > 1$ and $A$ be a $d \times d$ matrix. Then
\[
\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \leq C(d, \beta)
\]
where $C(d, \beta)$ is a polynomial in $\beta$ of degree at most $d^2$ whenever $t_i \geq 8d$.
\end{prop}
\begin{proof}
Since $\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \geq 0$
\begin{align*}
\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) &\leq \text{tr}(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \\
&= \text{tr}(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d \sigma_1(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x}
\end{align*}
Now
\begin{align*}
\Gamma_{t_i}(A) &= P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}PP^{\prime}(\Lambda^{t})^{*}P^{-1 \prime} \\
&\preceq \sigma_{\max}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}P^{-1 \prime} \\
\Gamma_{t_i}(A) &\succeq \sigma_{\min}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}P^{-1 \prime}
\end{align*}
Then this implies
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\sigma_{\max}(P)^2}{\sigma_{\min}(P)^2} \sup_{||x|| \neq 0}\frac{x^{\prime} \sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*} x}{x^{\prime}\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*} x}
\]
Then from Lemma 12 in~\cite{abbasi2011improved} we get that
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*} x}{x^{\prime}\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*} x} \leq \frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})}
\]
Then
\begin{align*}
\frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})} &= \frac{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\prod_{i=1}^l\text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}
\end{align*}
Here $l$ is the number of Jordan blocks of $A$. Then our assertion follows from Eq.~\eqref{det}, which implies that the determinant of $\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*} $ is equal to the product of the diagonal elements (times a factor that depends only on the Jordan block size), \textit{i.e.}, $C(k_i)t_2^{k_i^2}$. As a result the ratio is given by
\[
\frac{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\prod_{i=1}^l\text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})} = \prod_{i=1}^l \beta^{k_i^2}
\]
whenever $t_2, t_1 \geq 8d$. Summarizing we get
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\sigma_{\max}(P)^2}{\sigma_{\min}(P)^2} \prod_{i=1}^l \beta^{k_i^2}
\]
\end{proof}
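The following sketch illustrates Proposition~\ref{gramian_ratio} for a marginally stable Jordan block: the spectral radius of $\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)$ depends on the ratio $\beta = t_1/t_2$, growing at most polynomially in $\beta$ (the sizes below are arbitrary illustrations):
\begin{verbatim}
import numpy as np

d = 3
A = np.eye(d) + np.diag(np.ones(d - 1), 1)

def gram(t):
    G, Ak = np.eye(d), np.eye(d)
    for _ in range(t):
        Ak = A @ Ak
        G = G + Ak @ Ak.T
    return G

t2 = 50
for beta in (2, 4, 8):
    ratio = gram(beta * t2) @ np.linalg.inv(gram(t2))
    print("beta =", beta, " lambda_1 =",
          np.max(np.abs(np.linalg.eigvals(ratio))))
\end{verbatim}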
\section{Composite Result}
\label{composite_result_proof}
In this section we discuss error rates for regular matrices which may have eigenvalues anywhere in the complex plane. The key step is to recall that for every matrix $A$ it is possible to find $\tilde{P}$ such that
\begin{align}
A = \tilde{P}^{-1} \underbrace{\begin{bmatrix}
A_{e} & 0 & 0 \\
0 & A_{ms} & 0 \\
0 & 0 & A_s
\end{bmatrix}}_{=\tilde{A}}\tilde{P} \label{partition}
\end{align}
Here $A_{e}, A_{ms}, A_s$ are the purely explosive, marginally stable and stable portions of $A$. This follows because any matrix $A$ has a Jordan normal form $A = P^{-1} \Lambda P$, where $\Lambda$ is a block diagonal matrix and each block corresponds to an eigenvalue. We can always find $Q$ (a rearrangement matrix) such that $\Lambda$ is partitioned into three diagonal parts: explosive, marginally stable and stable, \textit{i.e.},
\begin{align}
A = P^{-1}Q^T \begin{bmatrix}
\Lambda_{e} & 0 & 0 \\
0 & \Lambda_{ms} & 0 \\
0 & 0 & \Lambda_s
\end{bmatrix}QP
\end{align}
Clearly, $\tilde{P} = QP$.
Since
\begin{align}
X_t &= \sum_{\tau=1}^t A^{\tau-1}\eta_{t -\tau+1} \nonumber \\
\tilde{X}_t = \tilde{P}X_t &= \sum_{\tau=1}^t \tilde{A}^{\tau-1}\underbrace{\tilde{P}\eta_{t -\tau+1}}_{\tilde{\eta}_{t-\tau+1}}
\end{align}
Now, the transformed dynamics are as follows:
\begin{align*}
\tilde{X}_{t+1} &= \tilde{A}\tilde{X}_t + \tilde{\eta}_{t+1}
\end{align*}
where $\tilde{A}$ has been partitioned into explosive and stable components as Eq.~\eqref{partition}. Corresponding to $\tilde{A}$ partition $\tilde{X}_t, \tilde{\eta}_t$
\begin{align}
\tilde{X}_t = \begin{bmatrix}
X^{e}_t \\
X^{ms}_t \\
X^{s}_t
\end{bmatrix}&, \tilde{\eta}_t = \begin{bmatrix}
\eta^{e}_t \\
\eta^{ms}_t \\
\eta^{s}_t
\end{bmatrix}
\end{align}
\begin{align}
\tilde{Y}_T = \sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime} &= \sum_{t=1}^T\begin{bmatrix}
X^{e}_t (X^{e}_t)^{\prime} & X^{e}_t (X^{ms}_t)^{\prime} & X^{e}_t (X^{s}_t)^{\prime}\\
X^{ms}_t (X^{e}_t)^{\prime} & X^{ms}_t (X^{ms}_t)^{\prime} & X^{ms}_t (X^{s}_t)^{\prime} \\
X^{s}_t (X^{e}_t)^{\prime} & X^{s}_t (X^{ms}_t)^{\prime} & X^{s}_t (X^{s}_t)^{\prime}
\end{bmatrix}
\end{align}
We analyze the error of identification in the transformed system instead and show how it relates to the actual error. Note that $\tilde{P}$ is unknown; the transformation is done only for ease of analysis. The invertibility of the submatrix corresponding to the stable and marginally stable components, \textit{i.e.},
\begin{align*}
X^{mss}_t = \begin{bmatrix}
X^{ms}_{t} \\
X^{s}_{t}
\end{bmatrix}
\end{align*}
follows from Theorem~\ref{main_result}. To see this let $A_e$ be a $d_e \times d_e$ matrix. Define
\[
P_{mss} = \tilde{P}[d_e+1:d, :]
\]
\textit{i.e.}, $P_{mss}$ is the rectangular matrix formed by removing the rows of $\tilde{P}$ corresponding to the explosive part. Then, by definition, we have that
\[
\begin{bmatrix}
\eta^{ms}_t \\
\eta^{s}_t
\end{bmatrix} = P_{mss} \eta_t
\]
and
\[
X_{t+1}^{mss}= \underbrace{\begin{bmatrix}
A_{ms} &0 \\
0 & A_s
\end{bmatrix}}_{A_{mss}}X_{t}^{mss} + \begin{bmatrix}
\eta^{ms}_{t+1} \\
\eta^{s}_{t+1}
\end{bmatrix}
\]
Further
\[
\mathbb{E}[P_{mss}\eta_t \eta_t^{\prime} P_{mss}^{\prime}] = P_{mss}P_{mss}^{\prime} \succ 0
\]
Since all rows of $\tilde{P}$ are linearly independent, $P_{mss}P_{mss}^{\prime}$ is invertible and $\{P_{mss}\eta_t\}_{t=1}^T$ are independent subGaussian vectors. Now this is the same set up as the general version of Theorem~\ref{main_result} discussed in Section~\ref{short_proof}. Since $A_{mss} \in \mathcal{S}_0 \cup \mathcal{S}_1$ only has stable and marginally stable components, it follows from the Eq.~\eqref{lower_bnd} that
$$\sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} \succeq \frac{T}{4} \sigma_{\min}(P_{mss}P_{mss}^{\prime})I$$
with high probability. Then since $\sigma_{\min}(P_{mss}P_{mss}^{\prime}) \geq \sigma_{\min}(\tilde{P})^2 = R^2$, we have that $\sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} \succeq \frac{TR^2}{4}I$. Let $\sigma_{\max}(\tilde{P}) =1$. (this makes no difference to the results and $R$ can be interpreted as the inverse condition number)
Recall the definition of $\beta_0(\delta)$
$$\beta_0(\delta) = \inf{\Big\{\beta|\beta^2\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta}\rfloor}(A)) \geq \Big(\frac{ 8ec(A, \delta)}{ TR^2\sigma_{\min}(A A^{\prime})}\Big)\Big\}}$$
we refer to $\beta_0(\delta)$ as $\beta_0$. Following our discussion in Proposition~\ref{gramian_lb} we see that $\beta_0 > 0$ and since $\sigma_{\min}(\Gamma_t(A)) \geq \alpha(d)t$ we have that
\[
\beta_0 \leq \frac{8ec(A, \delta)}{TR^2 \sigma_{\min}^2(A)\alpha(d)} \implies \frac{1}{\beta_0} \geq \frac{T R^2\sigma_{\min}^2(A)\alpha(d)}{8ec(A, \delta)}
\]
Define
$$
V_e = (\sum_{t=1}^T X^{e}_t (X^{e}_t)^{\prime}), V_{s} = \frac{TR^2}{4}I, V_{ms} = \Big(\frac{TR^2}{8e} \Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A_{ms}) \Big)
$$
where the invertibility of $V_e$ holds with high probability. Observe that $V_{ms} \preceq (\sum_{t=1}^T X^{ms}_t (X^{ms}_t)^{\prime}), V_{s} \preceq (\sum_{t=1}^T X^{s}_t (X^{s}_t)^{\prime})$ with high probability (this follows from Eq.~\eqref{lower_bnd},\eqref{stable_yt}). This observation will be useful in proving the composite invertibility.
Although the technique to prove the invertibility of $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$ is similar in spirit to that of~\cite{faradonbeh2017finite}, it addresses additional difficulties arising due to the presence of a marginally stable block.
\begin{align}
B_{d \times d} &= \begin{bmatrix}
V_e^{-1/2} & 0 & 0 \\
0 & V_{ms}^{-1/2} & 0\\
0 & 0 & V_s^{-1/2}
\end{bmatrix}
\end{align}
We will show that $B \sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime} B^{\prime}$ is positive definite with high probability, \textit{i.e.},
\begin{align}
\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime} &= \begin{bmatrix}
I & \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}\\
\sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{e}_t)^{\prime} V_{e}^{-1/2 \prime} & \sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime} \\
\sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{e}_t)^{\prime} V_{e}^{-1/2 \prime} & \sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}
\end{bmatrix}
\end{align}
We already showed that the lower $2 \times 2$ block submatrix is invertible. To show that the entire matrix is invertible we need to show
\[
||V_{e}^{-1/2} \sum_{t=1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime}||, ||V_{e}^{-1/2} \sum_{t=1}^T X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}|| < \gamma/8
\]
with high probability for some appropriate $\gamma$ and
\[
\sigma_{\min}\Bigg(\begin{bmatrix}
V_{ms}^{-1/2} & 0\\
0 & V_s^{-1/2}
\end{bmatrix} \sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} \begin{bmatrix}
V_{ms}^{-1/2} & 0\\
0 & V_s^{-1/2}
\end{bmatrix}\Bigg) \geq \gamma > 0
\]
\subsection{Cross Terms have low norm}
\label{cross_low}
Define the following quantities:
\begin{align}
\alpha(A_e, \delta) &= \frac{3\phi_{\max}(A_e)^2 \sigma_{\max}^2(A_e)}{\phi_{\min}(A_e)^2 \sigma_{\min}(A_e)^2}\frac{\Big(1 + \frac{1}{c} \log{\frac{1}{\delta}}\Big)\text{tr}(P_e(\Gamma_T(A_e^{-1}) - I)P_e^{\prime})}{\psi(A_e)^2 \delta^2} \label{alpha_exp} \\
T_{mc}(\delta) &= {\Bigg\{T \Bigg | \alpha(A_e, \delta)\text{tr}(A_{e}^{-T + k_{mc}(T)}(A_{e}^{-T + k_{mc}(T)})^{\prime}) \leq \frac{\gamma^2}{256} \Bigg\}} \label{ms_exp} \\
k_{mc} &= k_{mc}(T) = T \Bigg(1 - \frac{R^2\gamma^2}{2048 de \lambda_1\Big(\Gamma_T(A_{ms})\Gamma^{-1}_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms}){\Big(1 + \frac{1}{c}\log{\frac{1}{\delta}}\Big)}\Big)}\Bigg) \label{k_mc} \\
T_{sc}(\delta) &= {\Bigg\{T \Bigg | \alpha(A_e, \delta)\text{tr}(A_{e}^{-T + k_{sc}(T)}(A_{e}^{-T + k_{sc}(T)})^{\prime}) \leq \frac{\gamma^2}{256} \Bigg\}} \label{s_exp} \\
k_{sc} &= k_{sc}(T) = T \Bigg(1 - \frac{R^2\gamma^2}{1024 d\lambda_1\Big(\Gamma_T(A_{s}){\Big(1 + \frac{1}{c}\log{\frac{1}{\delta}}\Big)}\Big)}\Bigg) \label{k_s}
\end{align}
\begin{remark}
Note that $T_{mc}(\delta)$ (and $T_{sc}(\delta)$) is a set where there exists a minimum $T_{*} < \infty$ such that $T \in T_{mc}(\delta)$ whenever $T \geq T_{*} $. However, there might be $T < T_{*}$ for which the inequality of $T_{mc}(\delta)$ holds. Whenever we write $T \in T_{mc}(\delta)$ we mean $T \geq T_{*}$.
\end{remark}
Note also that for every $T$, since $R, \gamma < 1$, we have
\[
k_{sc}(T), k_{mc}(T) \geq \frac{T}{2}
\]
These quantities will be useful in stating the error bounds. We have
\begin{align*}
|| V_{e}^{-1/2} \sum_{t=1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime}|| &\leq ||V_{e}^{-1/2} \sum_{t=1}^k X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime}|| + ||V_{e}^{-1/2} \sum_{t= k+1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2\prime}||
\end{align*}
We will need a more nuanced argument than that provided in~\cite{faradonbeh2017finite} (although it will be similar in flavor) to upper bound
\begin{align}
||V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} || \label{err0}
\end{align}
For any $v_1, v_2$ we break $|v_1^{\prime}V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2}v_2|$ into two parts $$|v_1^{\prime}V_{e}^{-1/2} \sum_{t= 1}^k X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2}v_2|$$
and
$$|v_1^{\prime}V_{e}^{-1/2} \sum_{t= k+1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2}v_2|$$
For $|v_1^{\prime}V_{e}^{-1/2} \sum_{t= k+1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2}v_2|$ we have
\begin{align}
|v_1^{\prime} V_{e}^{-1/2} \sum_{t=k+1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} v_2| &\leq \underbrace{\sqrt{v_1^{\prime} V_{e}^{-1/2} \sum_{t=k+1}^T X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1/2} v_1}}_{\leq 1} \sqrt{v_2^{\prime} V_{ms}^{-1/2} \sum_{t=k+1}^T X^{ms}_t (X^{ms}_t)^{\prime}V_{ms}^{-1/2} v_2} \nonumber\\
&\leq \sqrt{v_2^{\prime} V_{ms}^{-1/2}\sum_{t=k+1}^T X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} v_2} \leq \sqrt{\sigma_1( V_{ms}^{-1/2}\sum_{t=k+1}^T X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} )} \nonumber\\
&\leq \sqrt{\lambda_1( \sum_{t=k+1}^T X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1} )}\label{err_2}
\end{align}
To upper bound Eq.~\eqref{err_2} we simply need to upper bound $V_{ms}^{-1/2}\sum_{t=k+1}^T X^{ms}_t (X^{ms}_t)^{\prime}V_{ms}^{-1/2}$. We can use the dependent Hanson--Wright inequality (Corollary~\ref{dep-hanson-wright}) and Corollary~\ref{sub_sum}.
Then from Corollary~\ref{sub_sum}, and since $V_{ms}$ is deterministic, we can conclude that with probability at least $1-\delta$
\begin{equation}
V_{ms}^{-1/2} \sum_{t=k+1}^T X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} \preceq \sum_{t=k+1}^T\text{tr}( V_{ms}^{-1/2} \Gamma_t(A_{ms}) V_{ms}^{-1/2} )\Big(1 + \frac{1}{c}\log{\frac{1}{\delta}}\Big)I \label{errf_1}
\end{equation}
We can upper bound the deterministic quantity in Eq.~\eqref{errf_1} as
\begin{align}
\sum_{t=k+1}^T\text{tr}( V_{ms}^{-1/2} \Gamma_t(A_{ms}) V_{ms}^{-1/2}) &\leq d\lambda_1(\sum_{t=k+1}^T\Gamma_t(A_{ms}) V_{ms}^{-1}) \nonumber\\
&= d\lambda_1\Big(\frac{8e}{TR^2}\sum_{t=k+1}^T\Gamma_t(A_{ms}) \Gamma_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms})^{-1}\Big) \nonumber \\
&\leq d\lambda_1\Big(\frac{8e(T-k)}{TR^2}\Gamma_T(A_{ms}) \Gamma_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms})^{-1}\Big)\label{errf_0}
\end{align}
The last inequality holds because the eigenvalues of $P^{-1/2} Q P^{-1/2}$ are the same as $QP^{-1}$ and non--negative whenever $P, Q$ are psd matrices. The normalized gramian term, $\Gamma_t(A_{ms}) \Gamma_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms})^{-1}$, appears in Eq.~\eqref{errf_0} only because $V_{ms}$ is deterministic. This will help us in getting non--trivial upper bounds for the cross terms of explosive and marginally stable pair. The key is the choice of $k$. In Proposition~\ref{gramian_ratio} we showed that $\lambda_1(\Gamma_{t_1} \Gamma_{t_2}^{-1})$ only depends on the ratio of $t_1/t_2$ and $A_{ms}$ and not on the specific values of $t_1, t_2$. Note that due to Proposition~\ref{gramian_ratio} the normalized gramian term $\Gamma_T(A_{ms})\Gamma^{-1}_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms})$ has spectral radius that is at most polynomial in $T\beta_0(\delta)$. Since $\beta_0(\delta) \approx \frac{\log{T}}{T} \times \log{\frac{1}{\delta}}$, we get that $$\lambda_1(\Gamma_T(A_{ms})\Gamma^{-1}_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms})) = \text{poly}\Big(\log{T}, \log{\frac{1}{\delta}}\Big)$$
Our choices of $T_{mc}(\delta), k_{mc}(T)$ in Eq.~\eqref{ms_exp},\eqref{k_mc} are motivated by the preceding discussion. We set $k = k_{mc}(T)$ and we have that $d\lambda_1\Big(\frac{8e(T-k)}{TR^2}\Gamma_T(A_{ms}) \Gamma_{\lfloor \frac{1}{\beta_0(\delta)} \rfloor}(A_{ms})^{-1}\Big) \leq \frac{\gamma^2}{256}$ (check by directly substituting $k=k_{mc}(T)$ in Eq.~\eqref{errf_0}) and as a result from Eq.~\eqref{err_2}
\[
|v_1^{\prime} V_{e}^{-1/2} \sum_{t=k+1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} v_2| \leq \frac{\gamma}{16}
\]
for arbitrary $v_1, v_2$. Similarly for the second part
\begin{align}
|v_1^{\prime} V_{e}^{-1/2} \sum_{t=1}^k X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} v_2| &\leq \underbrace{\sqrt{v_1^{\prime} V_{e}^{-1/2} \sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1/2} v_1}}_{a_1} \underbrace{\sqrt{v_2^{\prime}V_{ms}^{-1/2} \sum_{t=1}^k X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} v_2}}_{\leq 1} \label{err_3}
\end{align}
For the choice of $k=k_{mc}$ the other term can be simplified as
\begin{align}
a_1 &= \sqrt{v_1^{\prime} V_{e}^{-1/2} \sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1/2} v_1} \leq \sqrt{\sigma_1(V_{e}^{-1/2} \sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1/2} )} \leq \sqrt{\lambda_1(\sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1})} \nonumber\\
&\leq \sqrt{\text{tr}(\sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1})} \label{err_1}
\end{align}
By ensuring that both $T$ and $k = k_{mc}$ (which is $\geq T/2$) lie in $T_u(\delta)$ (from Table~\ref{notation}), we have from Eq.~\eqref{exp_bnds} that
\begin{align*}
\sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} &\preceq\frac{3\phi_{\max}(A_e)^2}{2\sigma_{\min}(P_e)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P_e(\Gamma_T(A_e^{-1})-I)P_e^{\prime}) A_e^k A_e^{k \prime} \nonumber \\
V_e &\succeq \frac{\phi_{\min}(A_e)^2 \psi(A_e)^2 \delta^2}{2\sigma_{\max}(P_e)^2}A_e^T A_e^{T \prime}\nonumber\\
\end{align*}
Recalling the definition of $\alpha(A_e, \delta)$ in Eq.~\eqref{alpha_exp}, we can conclude
\[
\sqrt{\text{tr}(\sum_{t=1}^k X^{e}_t (X^{e}_t)^{\prime} V_{e}^{-1})} \leq \sqrt{\alpha(A_e, \delta)\text{tr}(A_e^{-T + k}(A_e^{-T + k})^{\prime})}
\]
with probability at least $1-2\delta$. Since $T \in T_{mc}(\delta)$ we have
\begin{equation}
a_1 \leq \sqrt{\alpha(A_e, \delta)\text{tr}(A_e^{-T + k}(A_e^{-T + k})^{\prime})} \leq \frac{\gamma}{16} \label{errf_22}
\end{equation}
with probability at least $1-2\delta$. Then combining Eq.~\eqref{err_2},\eqref{errf_1},\eqref{err_3},\eqref{errf_22} we get with probability at least $1-4\delta$ that
\begin{align}
|v_1^{\prime}V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} v_2| &\leq \frac{\gamma}{8}
\end{align}
This implies that with probability at least $1-4 \delta$ we have
\begin{align}
||V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2} || &\leq \frac{\gamma}{8}\label{fin_err}
\end{align}
We have a similar assertion for the stable--explosive block but with $T \in T_{sc}(\delta)$ and $k= k_{sc}(T)$.
\begin{align}
||V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2} || &\leq \frac{\gamma}{8}\label{fin_err2}
\end{align}
It should be noted that the requirements $T \in T_{sc}(\delta)$ and $T \in T_{mc}(\delta)$ force $T$ to be only poly--logarithmic in $1/\delta$, because the $A_{e}^{-T + k_{mc}}$ (or $A_{e}^{-T + k_{sc}}$) term is exponentially decaying in $T$.
\begin{remark}
Whenever $T \in T_{sc}(\delta) , T_{mc}(\delta)$, the other conditions on $T$ such as $T/2 \in T_{u}(\delta)$ or $T \geq T_{s}(\delta) \vee T_{ms}(\frac{\delta}{2T})$ for the invertibility of the individual stable, marginally stable blocks are satisfied simultaneously (or are trivial to satisfy) and we do not state them explicitly.
\end{remark}
\subsection{Norm of scaled $\sum_{t=1}^T X^{mss}_t(X^{mss}_t)^{\prime}$ is high}
\label{highnorm_gen}
Now we need to check
\[
\sigma_{\min}\Bigg(\begin{bmatrix}
V_{ms}^{-1/2} & 0\\
0 & V_s^{-1/2}
\end{bmatrix} \sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} \begin{bmatrix}
V_{ms}^{-1/2} & 0\\
0 & V_s^{-1/2}
\end{bmatrix}\Bigg) \geq \gamma > 0
\]
From Theorem~\ref{main_result} and its extension in Section~\ref{short_proof} it is known that with probability at least $1-\delta$ we have $\sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} \succeq R^{2} \frac{T}{4}I$ for some fixed $R =\sigma_{\min}(\tilde{P}) > 0$; it follows that the Schur complement of $\sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime}$ is invertible too. For shorthand let
\[
M = \sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} = \begin{bmatrix}
M_{11} & Q^{\prime} \\
Q & M_{22}
\end{bmatrix}
\]
Then the Schur complement is
\[
M/M_{11} = M_{22} - Q M_{11}^{-1} Q^{\prime}
\]
Since $\sigma_{\min}(M) \geq R^2 \frac{T}{4}$ then from Corollary 2.3 in~\cite{Liu2005} we have that
\[
\sigma_{\min}(M/M_{11}) \geq R^2 \frac{T}{4}
\]
Since $M_{22} \preceq \sum_{t=0}^{T-1}\text{tr}(\Gamma_t(A_s))\Big(1 + \frac{1}{c}\log{\frac{1}{\delta}}\Big)I$ with probability at least $1-\delta$, we see that with probability at least $1-\delta$
\begin{equation}
M_{22}^{-1/2}(M/M_{11})M_{22}^{-1/2} = I - M_{22}^{-1/2} Q M_{11}^{-1/2} M_{11}^{-1/2} Q^{\prime} M_{22}^{-1/2}\succeq \frac{R^2}{4 \text{tr}(\Gamma_T(A_s))(1 + \frac{1}{c}\log{\frac{1}{\delta}})}I \label{lb_schur}
\end{equation}
Since $A_s$ is stable $\text{tr}(\Gamma_T(A_s)) \leq \text{tr}(\Gamma_{\infty}(A_s)) < \infty$. Define
\begin{equation}
\omega(\delta) = \frac{R^2}{4 \text{tr}(\Gamma_T(A_s))(1 + \frac{1}{c}\log{\frac{1}{\delta}})} > 0 \label{omega}
\end{equation}
Then this implies that
\[
\begin{bmatrix}
M_{11}^{-1/2} & 0 \\
0 & M_{22}^{-1/2}
\end{bmatrix} M \begin{bmatrix}
M_{11}^{-1/2} & 0 \\
0 & M_{22}^{-1/2}
\end{bmatrix}= \begin{bmatrix}
I &M_{11}^{-1/2}Q^{\prime}M_{22}^{-1/2} \\
M_{22}^{-1/2}QM_{11}^{-1/2} & I
\end{bmatrix} \succeq \frac{\omega(\delta)}{4} I
\]
because for any $v = \begin{bmatrix}
v_1 \\
v_2
\end{bmatrix}$ we have
\begin{align*}
v^{\prime}\begin{bmatrix}
I &\underbrace{M_{11}^{-1/2}Q^{\prime}M_{22}^{-1/2}}_{=D^{\prime}} \\
M_{22}^{-1/2}QM_{11}^{-1/2} & I
\end{bmatrix}v &= v_1^{\prime} v_1 + v_1^{\prime} D^{\prime} v_2 + v_2^{\prime} D v_1 + v_2^{\prime}v_2 \\
&\geq v_1^{\prime}v_{1}-2 ||D|| \, ||v_2|| ||v_1|| + v_2^{\prime} v_2
\end{align*}
Since from Eq.~\eqref{lb_schur} it follows that $||D||^2 \leq 1 - \omega(\delta)$, and $\sqrt{1 - \omega(\delta)} \leq 1 - \frac{\omega(\delta)}{2}$, we obtain
\begin{align*}
v_1^{\prime}v_{1}-2||D|| \, ||v_2|| ||v_1|| + v_2^{\prime} v_2 &\geq v_1^{\prime}v_{1}-2 \Big({1 - \frac{\omega(\delta)}{2}}\Big) ||v_2|| ||v_1|| + v_2^{\prime} v_2 \\
&= \Big({1 - \frac{\omega(\delta)}{2}}\Big)(||v_1|| -||v_2||)^2 + \frac{\omega(\delta)}{2}(||v_1||^2 + ||v_2||^2)\\
&\geq \Big(\frac{\omega(\delta)}{4}\Big)(||v_1||^2 + ||v_2||^2)
\end{align*}
Hence
\[
\sigma_{\min}\Big(\begin{bmatrix}
M_{11}^{-1/2} & 0 \\
0 & M_{22}^{-1/2}
\end{bmatrix} M \begin{bmatrix}
M_{11}^{-1/2} & 0 \\
0 & M_{22}^{-1/2}
\end{bmatrix} \Big) \geq \frac{\omega(\delta)}{4}
\]
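As a quick numerical illustration of the last two displays (a sketch under our own choice of dimension and $\omega$, not part of the proof), the normalized block matrix has eigenvalues $1 \pm \sigma_i(D)$, so its least eigenvalue is exactly $1 - ||D||$:
\begin{verbatim}
# Check that [[I, D'], [D, I]] with ||D||^2 = 1 - omega has least
# eigenvalue 1 - ||D|| >= 1 - sqrt(1 - omega) >= omega/2.
import numpy as np

rng = np.random.default_rng(1)
omega = 0.3
D = rng.standard_normal((4, 4))
D *= np.sqrt(1 - omega) / np.linalg.norm(D, 2)   # enforce ||D||^2 = 1 - omega
blk = np.block([[np.eye(4), D.T], [D, np.eye(4)]])
lam_min = np.linalg.eigvalsh(blk).min()

assert abs(lam_min - (1 - np.linalg.norm(D, 2))) < 1e-10
assert lam_min >= omega / 2
\end{verbatim}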
Since $M_{22} \succeq V_s$ and $M_{11} \succeq V_{ms}$, we have with probability at least $1-\delta$
\begin{equation}
\sigma_{\min}\Bigg(\begin{bmatrix}
V_{ms}^{-1/2} & 0\\
0 & V_s^{-1/2}
\end{bmatrix} \sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime} \begin{bmatrix}
V_{ms}^{-1/2} & 0\\
0 & V_s^{-1/2}
\end{bmatrix}\Bigg) \geq \Big(\frac{\omega(\delta)}{4}\Big) > 0 \label{fin_term3}
\end{equation}
Now in Eq.~\eqref{fin_err},\eqref{fin_err2} we take $\gamma = \frac{\sqrt{{\omega(\delta)}}}{8}$, which gives
\begin{align*}
||\ V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2}|| &\leq \frac{\sqrt{{\omega(\delta)}}}{64} \\
||\ V_{e}^{-1/2} \sum_{t= 1}^T X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2}|| &\leq \frac{\sqrt{{\omega(\delta)}}}{64}
\end{align*}
\subsection{Lower Bound on $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$}
\label{lower_gen}
Recalling that
\begin{align*}
\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime} &= \begin{bmatrix}
I & \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}\\
\sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{e}_t)^{\prime} V_{e}^{-1/2 \prime} & \sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime} \\
\sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{e}_t)^{\prime} V_{e}^{-1/2 \prime} & \sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}
\end{bmatrix}
\end{align*}
then it follows from Eq.~\eqref{fin_term3} that
\begin{align*}
\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime} &\succeq \begin{bmatrix}
I & \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} & \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}\\
\sum_{t=1}^T V_{ms}^{-1/2} X^{ms}_t (X^{e}_t)^{\prime} V_{e}^{-1/2 \prime} & \frac{\omega(\delta)}{4} I & 0 \\
\sum_{t=1}^T V_{s}^{-1/2} X^{s}_t (X^{e}_t)^{\prime} V_{e}^{-1/2 \prime} & 0 & \frac{\omega(\delta)}{4} I
\end{bmatrix}
\end{align*}
Let $v = \begin{bmatrix}
v_1 \\
v_2 \\
v_3
\end{bmatrix}$. Then
\begin{align*}
v^{\prime} \sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime} v &\geq ||v_1||^2 + \frac{\omega(\delta)}{4} (||v_2||^2 + ||v_3||^2) + 2 v_1^{\prime}\sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{ms}_t)^{\prime} V_{ms}^{-1/2 \prime} v_2 \\
&+ 2 v_1^{\prime} \sum_{t=1}^T V_{e}^{-1/2} X^{e}_t (X^{s}_t)^{\prime} V_{s}^{-1/2 \prime}v_3 \\
&\geq ||v_1||^2 + \frac{\omega(\delta)}{4} (||v_2||^2 + ||v_3||^2) - \frac{\sqrt{\omega(\delta)}}{32} ||v_1||\, ||v_2|| - \frac{\sqrt{\omega(\delta)}}{32} ||v_1||\, ||v_3||
\end{align*}
Applying the weighted AM--GM inequality $\frac{\sqrt{\omega(\delta)}}{32}ab \leq \frac{a^2}{512} + \frac{\omega(\delta)}{8}b^2$ to each cross term gives
\[
v^{\prime} \sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime} v \geq \Big(1 - \frac{1}{256}\Big)||v_1||^2 + \frac{\omega(\delta)}{8} (||v_2||^2 + ||v_3||^2) \geq \frac{\omega(\delta)}{8}||v||^2
\]
since $\omega(\delta) \leq 1$. Thus $\sigma_{\min}(\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime}) \geq \frac{\omega(\delta)}{8}$. Summarizing, with probability at least $1 - C\delta$ (the factor $C$ arises because we intersect the events guaranteeing invertibility of $\sum_{t=1}^T X^{mss}_t (X^{mss}_t)^{\prime}$ and of $\sum_{t=1}^T X^{e}_t (X^{e}_t)^{\prime}, \sum_{t=1}^T X^{s}_t (X^{s}_t)^{\prime}, \sum_{t=1}^T X^{ms}_t (X^{ms}_t)^{\prime}$) we have
\[
\sigma_{\min}(\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime}) \geq \frac{\omega(\delta)}{8}
\]
whenever
\begin{align}
T \in T_{mc}(\delta) \cap T_{sc}(\delta) \label{t_mc}
\end{align}
Replacing $\delta \rightarrow \frac{\delta}{C}$ we get with probability at least $1-\delta$ that
\[
\sigma_{\min}(\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime}) \geq \frac{\omega( \frac{\delta}{C})}{8}
\]
Define
\begin{align*}
V^e_{dn}(\delta) &= \frac{\phi_{\min}(A_e)^2 \psi(A_e)^2 \delta^2}{2\sigma_{\max}(P)^2}A_e^T A_e^{T \prime}, V^s_{dn}(\delta) = \frac{TR^2}{4}I, V^{ms}_{dn}(\delta) = \Big(\frac{TR^2}{8e} \Gamma_{\lfloor \frac{1}{\beta_0(\delta)}\rfloor}(A_{ms})\Big)
\end{align*}
This implies that with probability at least $1-2\delta$ we have that
\begin{align}
\sum_{t=1}^T B \tilde{X}_t \tilde{X}_t^{\prime}B^{\prime} &\succeq \frac{\omega( \frac{\delta}{C})}{8} I \implies \sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}\succeq \frac{\omega( \frac{\delta}{C})}{8}B^{-2} \nonumber \\
\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime} &\succeq \underbrace{\frac{\omega( \frac{\delta}{ C})}{8} \begin{bmatrix}
V^e_{dn}(\delta) & 0 & 0 \\
0 & V^{ms}_{dn}( \frac{\delta}{ C}) & 0\\
0 & 0 & V^{s}_{dn}( \frac{\delta}{ C})
\end{bmatrix}}_{=V_{dn}}
\end{align}
$V^e_{dn}$ depends on $\delta$ differently from the rest because $V_e$ was chosen to be data dependent, and we only apply the lower bound on $\sum_{t=1}^T X^{e}_t (X^{e}_t)^{\prime}$ at the very end.
\subsection{Finding the Upper Bound on $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$}
\label{ub_general}
For the upper bound on $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$ we use Lemma A.5 of~\cite{simchowitz2018learning}. Consider an arbitrary matrix $M = \begin{bmatrix} M_1 \\
M_2 \\
M_3 \end{bmatrix}$. Then $ \begin{bmatrix} 3M_1M_1^{\prime} & 0 & 0 \\
0 & 3M_2M_2^{\prime}& 0\\
0 & 0 & 3M_3M_3^{\prime} \end{bmatrix} \succeq MM^{\prime}$. This is because
\begin{align*}
\begin{bmatrix} 2M_1M_1^{\prime} & -M_1 M_2^{\prime} & -M_1 M_3^{\prime} \\
-M_2 M_1^{\prime} & 2M_2M_2^{\prime}& -M_2 M_3^{\prime}\\
-M_3 M_1^{\prime} & -M_3 M_2^{\prime} & 2M_3M_3^{\prime} \end{bmatrix} &= (\begin{bmatrix} M_1 \\
0 \\
0 \end{bmatrix}- \begin{bmatrix} 0 \\
M_2 \\
0 \end{bmatrix})(\begin{bmatrix} M_1 \\
0 \\
0 \end{bmatrix}- \begin{bmatrix} 0 \\
M_2 \\
0 \end{bmatrix})^{\prime} \\
&+ (\begin{bmatrix} M_1 \\
0 \\
0 \end{bmatrix}- \begin{bmatrix} 0 \\
0 \\
M_3 \end{bmatrix})(\begin{bmatrix} M_1 \\
0 \\
0 \end{bmatrix}- \begin{bmatrix} 0 \\
0 \\
M_3 \end{bmatrix})^{\prime} + (\begin{bmatrix} 0 \\
0 \\
M_3 \end{bmatrix}- \begin{bmatrix} 0 \\
M_2 \\
0 \end{bmatrix})(\begin{bmatrix} 0 \\
0 \\
M_3 \end{bmatrix}- \begin{bmatrix} 0 \\
M_2 \\
0 \end{bmatrix})^{\prime}
\end{align*}
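The block inequality can also be verified numerically; the following sketch (random shapes and seed are our choices, not part of the argument) checks positive semidefiniteness of the difference:
\begin{verbatim}
# Sanity check of diag(3 M1 M1', 3 M2 M2', 3 M3 M3') - M M' >= 0.
import numpy as np

rng = np.random.default_rng(2)
M1, M2, M3 = (rng.standard_normal((3, 5)) for _ in range(3))
M = np.vstack([M1, M2, M3])
D = np.zeros((9, 9))
for i, Mi in enumerate([M1, M2, M3]):
    D[3*i:3*i+3, 3*i:3*i+3] = 3 * Mi @ Mi.T

assert np.linalg.eigvalsh(D - M @ M.T).min() >= -1e-10
\end{verbatim}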
Define
\begin{align*}
V^e_{up}(\delta) &= \frac{3\phi_{\max}(A)^2\sigma_{\max}(\tilde{P})^4}{\sigma_{\min}(\tilde{P})^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(\Gamma_T(A_e^{-1})) A_e^T A_e^{T \prime} \\
V^s_{up}(\delta) &= 3\sigma_{\max}(\tilde{P})^2T\text{tr}(\Gamma_T(A_{s}))\Big(1 + \frac{1}{c}\log{\Big(\frac{1}{\delta}\Big)}\Big)I \\
V^{ms}_{up}(\delta) &= 3\sigma_{\max}(\tilde{P})^2T\text{tr}(\Gamma_T(A_{ms}))\Big(1 + \frac{1}{c}\log{\Big(\frac{1}{\delta}\Big)}\Big) I
\end{align*}
Then with probability at least $1-4\delta$ we have
\begin{align*}
\begin{bmatrix}
\sum_{t=1}^T X^{e}(X^{e}_t)^{\prime} & 0 & 0 \\
0 & \sum_{t=1}^T X^{ms}(X^{ms}_t)^{\prime} & 0\\
0 & 0 & \sum_{t=1}^T X^{s}(X^{s}_t)^{\prime}
\end{bmatrix} \preceq \begin{bmatrix}
V^e_{up}(\delta) & 0 & 0 \\
0 & V^{ms}_{up}(\delta) & 0\\
0 & 0 & V^s_{up}(\delta)
\end{bmatrix}
\end{align*}
We get these upper bounds from Proposition~\ref{energy_markov} for stable and marginally stable matrices and from Eq.~\eqref{exp_bnds} for explosive matrices. Then with probability at least $1-4\delta$ we have
\begin{align}
\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime} \preceq \underbrace{\begin{bmatrix}
3V^e_{up}(\delta) & 0 & 0 \\
0 & 3V^{ms}_{up}(\delta) & 0\\
0 & 0 & 3V^s_{up}(\delta)
\end{bmatrix}}_{{=V_{up}}} \label{ub_comp}
\end{align}
Note that the time requirement in Eq.~\eqref{t_mc} is sufficient to ensure the upper bounds with high probability and we do not state them explicitly.
\subsection{Getting Error Bounds}
\label{error}
We recall the discussion in the proof of Theorem~\ref{main_result}. We have $V_{up}, V_{dn}$, so we compute $V_{up}V_{dn}^{-1}$, which gives us
\begin{align*}
V_{up}V_{dn}^{-1} &= \frac{8}{\omega(\frac{\delta}{C})}\begin{bmatrix}
3V^e_{up}(\delta)(V^e_{dn}(\delta))^{-1} & 0 & 0 \\
0 & 3V^{ms}_{up}(\delta)(V^{ms}_{dn}( \frac{\delta}{ C}))^{-1} & 0\\
0 & 0 & 3V^s_{up}(\delta)(V^{s}_{dn}( \frac{\delta}{ C}))^{-1}
\end{bmatrix}\\
\text{det}(V_{up}V_{dn}^{-1}) &= \Big(\frac{24}{\omega(\frac{\delta}{C})}\Big)^d \text{det}(V^e_{up}(\delta)(V^e_{dn}(\delta))^{-1}) \text{det}(V^{ms}_{up}(\delta)(V^{ms}_{dn}( \frac{\delta}{ C}))^{-1})\text{det}(V^s_{up}(\delta)(V^{s}_{dn}( \frac{\delta}{ C}))^{-1})
\end{align*}
Further, $V^{s}_{dn}( \frac{\delta}{ C}) = V^{s}_{dn}(\delta)$ (only the time required for the bound to hold with high probability changes). Then
\begin{align*}
\log{(\text{det}(V_{up}V_{dn}^{-1}))} &=d(\log{24} - \log{\omega(\frac{\delta}{C})}) + \log{\text{det}(V^e_{up}(\delta)(V^e_{dn}(\delta))^{-1})} \\
&+ \log{\text{det}(V^{ms}_{up}(\delta)(V^{ms}_{dn}( \frac{\delta}{ C}))^{-1})} + \log{\text{det}(V^s_{up}(\delta)(V^{s}_{dn}( \frac{\delta}{ C}))^{-1})}
\end{align*}
Following this, the bounds are straightforward and can be computed as shown in Eq.~\eqref{error_form}. It should be noted that Proposition~\ref{selfnorm_bnd} applies to the general class of noise processes that $\tilde{\eta}_t$ satisfies.
Now we only know the error of the transformed dynamics, \textit{i.e.},
\begin{align*}
\Big(\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}\Big)^{+}\Big(\sum_{t=1}^T \tilde{X}_t \tilde{\eta}_{t+1}^{\prime}\Big)
\end{align*}
Since $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$ is invertible with high probability,
\begin{align*}
\Big(\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}\Big)^{+}\Big(\sum_{t=1}^T \tilde{X}_t \tilde{\eta}_{t+1}^{\prime}\Big)&= \Big(\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}\Big)^{-1}\Big(\sum_{t=1}^T \tilde{X}_t \tilde{\eta}_{t+1}^{\prime}\Big) \\
&= \tilde{P}^{\prime}\Big(\sum_{t=1}^T X_t X_t^{\prime}\Big)^{-1}\tilde{P}\, \tilde{P}^{-1}\Big(\sum_{t=1}^T X_t \eta_{t+1}^{\prime}\Big)\tilde{P}^{-1 \prime} \\
&= \tilde{P}^{\prime}\Big(\sum_{t=1}^T X_t X_t^{\prime}\Big)^{-1}\Big(\sum_{t=1}^T X_t \eta_{t+1}^{\prime}\Big)\tilde{P}^{-1 \prime}
\end{align*}
Then it is clear that
\[
\Bigg|\Bigg|\Big(\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}\Big)^{-1}\Big(\sum_{t=1}^T \tilde{X}_t \tilde{\eta}_{t+1}^{\prime}\Big)\Bigg|\Bigg| \geq \sigma_{\min}(\tilde{P}) \Bigg|\Bigg| \Big(\sum_{t=1}^T X_t X_t^{\prime}\Big)^{-1}\Big(\sum_{t=1}^T X_t \eta_{t+1}^{\prime}\Big)\Bigg|\Bigg| \sigma_{\min}(\tilde{P}^{-1})
\]
and we have bounded the original error term in terms of the unknown $\sigma_{\min}(\tilde{P}), \sigma_{\min}(\tilde{P}^{-1})$. However this factor only depends on $d$ and not $T$.
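This identity is easy to verify numerically; the following sketch (our code, with a random matrix playing the role of $\tilde{P}^{-1}$ and i.i.d. data standing in for the covariates, and with the estimator written in the row convention $\sum_t \eta_{t+1}X_t^{\prime}(\sum_t X_tX_t^{\prime})^{-1}$) checks that the transformed OLS error is an exact similarity conjugation of the original one:
\begin{verbatim}
# Verify that transforming covariates and noises by the same matrix P
# conjugates the OLS error: err_transformed = P err P^{-1}.
import numpy as np

rng = np.random.default_rng(7)
d, T = 3, 50
P = rng.standard_normal((d, d))          # plays the role of tilde{P}^{-1}
X = rng.standard_normal((d, T))          # covariates x_t (columns)
E = rng.standard_normal((d, T))          # noises eta_{t+1} (columns)
err = E @ X.T @ np.linalg.inv(X @ X.T)   # (sum eta x')(sum x x')^{-1}
Xt, Et = P @ X, P @ E                    # transformed covariates / noises
err_t = Et @ Xt.T @ np.linalg.inv(Xt @ Xt.T)

assert np.allclose(err_t, P @ err @ np.linalg.inv(P))
\end{verbatim}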
\subsection*{Bounding $\alpha_t$}
Here the hard step is to control the quantity
\[
\alpha_t =\Gamma_{T}^{-1/2} A x(t) x(t)^{'}A^{'} \Gamma_T^{-1/2}
\]
One way to deal with this is by observing that $x(t) = \sum_{s=0}^{t-1} A^{t-s} \eta_s$, then $\Gamma_{T}^{-1/2} A x(t) = \sum_{s=0}^t \Gamma_{T}^{-1/2} A^{t+1-s} \eta_s$. This can be written as
\[
\Gamma_{T}^{-1/2} A x(t) = \sum_{j=1}^d \sum_{s=0}^t \Gamma_{T}^{-1/2} A^{t+1-s} e_j\eta_{sj}
\]
Now, we look at the Hermitian dilation of $\Gamma_{T}^{-1/2} A x(t)$ which is given by
\[
\mathcal{H}(A x(t)) = \begin{bmatrix}
0 & \Gamma_{T}^{-1/2} A x(t) \\
x^{'}(t) A^{'} \Gamma_{T}^{-1/2} & 0
\end{bmatrix}
\]
which can be written as
\[
\mathcal{H}(A x(t)) = \sum_{j=1}^d \sum_{s=0}^{t} \eta_{sj}\begin{bmatrix}
0 & \Gamma_{T}^{-1/2} A^{t+1-s} e_j \\
e_j^{'} (A^{t+1-s})^{'} \Gamma_{T}^{-1/2} & 0
\end{bmatrix}
\]
where $A_{sj} = \begin{bmatrix}
0 & \Gamma_{T}^{-1/2} A^{t+1-s} e_j \\
e_j^{'} (A^{t+1-s})^{'} \Gamma_{T}^{-1/2} & 0
\end{bmatrix}$
Then it can be shown that
$$
\sum_{s, j} A_{sj}^2 = \begin{bmatrix}
\sum_{s=0}^{t} \Gamma_{T}^{-1/2} A^{t+1-s} (A^{t+1-s})^{'} \Gamma_{T}^{-1/2} & 0 \\
0 & \text{trace}(\sum_{s=0}^{t} \Gamma_{T}^{-1/2} A^{t+1-s} (A^{t+1-s})^{'} \Gamma_{T}^{-1/2})
\end{bmatrix}
$$
Here $\sigma_1(\sum_{s, j}A^2_{sj}) \leq \text{trace}(\Gamma_T^{-1/2} \Gamma_t \Gamma_T^{-1/2})$. Since the Hermitian dilation is of the form
\[
D_t = \sum_{k} \gamma_k A_k
\]
where the $A_k$ are fixed matrices and the $\gamma_k$ are i.i.d. subgaussian, we can bound its operator norm by the results in~\cite{tropp2012user}, which gives a bound of the form
\[
\mathbb{P}(||\alpha_t||_{\text{op}} > z) \leq 2d \exp{\Bigg(-\dfrac{z^2}{2 \text{trace}(\Gamma_T^{-1/2} \Gamma_t \Gamma_T^{-1/2})}\Bigg)}
\]
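The following is a small illustration (ours, not from~\cite{tropp2012user}) of why the Hermitian dilation is convenient: it is symmetric, has the same operator norm as the dilated vector, and its square is block diagonal, which is exactly the quantity $\sum_{s,j} A_{sj}^2$ controlled above:
\begin{verbatim}
# Hermitian dilation of a column vector u: H(u) = [[0, u], [u', 0]].
import numpy as np

def dilation(u):
    d = u.shape[0]
    H = np.zeros((d + 1, d + 1))
    H[:d, d] = u
    H[d, :d] = u
    return H

u = np.array([3.0, 4.0])
H = dilation(u)
# operator norm of the dilation equals the Euclidean norm of u (= 5)
assert abs(np.linalg.norm(H, 2) - np.linalg.norm(u)) < 1e-12
# H(u)^2 is block diagonal: diag(u u', ||u||^2)
assert np.allclose(H @ H, np.block([[np.outer(u, u), np.zeros((2, 1))],
                                    [np.zeros((1, 2)), np.array([[25.0]])]]))
\end{verbatim}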
Observe the term $x_t = A^t x_0 + \sum_{\tau=1}^t A^{\tau-1} \eta_{\tau}$. One can alternately write this as
\begin{align*}
x_t &= \sum_{j=1}^d \sum_{s=0}^t A^{t-s} e_j\eta_{sj} \\
x_t \eta_{t+1}^{'} &= \sum_{j=1}^d \sum_{s=0}^t \eta_{sj} A^{t-s} e_j\eta_{t+1}^{'} \\
A x_t \eta_{t+1}^{'} &= \sum_{j=1}^d\sum_{k=1}^d \sum_{s=0}^t \eta_{sj}\eta_{t+1, k} A^{t+1-s} e_je_k^{'}
\end{align*}
The proof of our fact will hinge on bounding the operator norm of $Ax_t \eta_{t+1}^{\prime}$. Observe that $\{A x_t \eta_{t+1}^{\prime}\}_t$ is a martingale difference sequence. We now give a concentration result for $C_T$.
\subsection*{Bounding $C_T$}
We work after conditioning on $\mathcal{W}$. Let $Z_t = \Gamma_{T}^{-1/2}A x(t) \eta_{t+1}^{'} \Gamma_T^{-1/2}$; then almost surely
\[
C_T = \sum_{t=0}^{T-1} (Z_t + Z_t^{'})
\]
Each $Z_t$ is rank one, $Z_t = uv^{'}$, and it is known that $uv^{'} + v u^{'} \succeq -uu^{'} - vv^{'}$, which follows from the fact that $(u+v)(u+v)^{'} \succeq 0$. Then $C_T \succeq \sum_{t=1}^T (- Z_t Z_t^{'} - Z_t^{'} Z_t)$.
\begin{align*}
Z_t Z_t^{'} &= (\eta^{'}_{t+1}\Gamma_T^{-1} \eta_{t+1} )(\Gamma_{T}^{-1/2}A x(t) x^{'}(t) A^{'} \Gamma_{T}^{-1/2} )\\
Z_t^{'} Z_t &= (x(t)^{'}A^{'}\Gamma_T^{-1} A x(t))\,\Gamma_T^{-1/2} \eta_{t+1} \eta^{'}_{t+1} \Gamma_T^{-1/2}
\end{align*}
All noise terms are less than $\sqrt{2d \log{(Td/\delta)}}$ with probability at least $1 - \delta$.
\begin{lem}
\label{noise_bnd}
Under the event $\mathcal{W}$, we have that almost surely
\[
\eta_{t+1}^{'} \Gamma_T^{-1} \eta_{t+1} \leq 2d \log{(2Td/\delta)}\text{trace}(\Gamma_T^{-1})
\]
\end{lem}
\begin{proof}
This follows because we are working under the assumption that $||\eta_t||_{\infty} \leq \sqrt{\log{(2Td/\delta)}}$.
\end{proof}
\begin{lem}
Under the event $\mathcal{W}$, we have that
\label{state_bnd}
\[
x(t)^{'}A^{'}\Gamma_T^{-1} A x(t) \leq \log{(2Td/\delta)}\text{trace}(\Gamma_T^{-1/2} \Gamma_t \Gamma_T^{-1/2})
\]
\end{lem}
Lemmas~\ref{noise_bnd} and~\ref{state_bnd} suggest that, for an explosive system, the term $Z_t Z_t^{'}$ is dominated by $\Gamma_T^{-1/2} Ax(t)x(t)^{'}A^{'}\Gamma_T^{-1/2}$, while $Z_t ^{'} Z_t$ can be shown to concentrate around $\dfrac{2\log{(Td)}}{\delta^2}$. Formally, define $
N_T = \sqrt{2\log{(Td)}}/\delta \begin{bmatrix}
\sqrt{\Gamma_T^{-1/2} \Gamma_1\Gamma_T^{-1/2}}\eta^{'}_1\\
\vdots \\
\sqrt{\Gamma_T^{-1/2} \Gamma_T\Gamma_T^{-1/2}}\eta^{'}_T
\end{bmatrix}$
\begin{prop}
\label{scale_noise}
Then $\mathbb{P}(||N_T||_{\text{op}} \geq \epsilon T) \leq 2d\exp{\big(-\epsilon^2 T \delta / \log{T}\big)}$
\end{prop}
The key conclusion here is that $C_T$ is dominated by the psd terms, \textit{i.e.}, $\sum_{t=1}^T \Gamma_T^{-1/2} (\eta_t \eta_t^{'} + A x(t-1) x^{'}(t-1) A^{'}) \Gamma_T^{-1/2}$. That is, with high probability we can conclude that
\[
V_T \succeq (1 - \epsilon) \sum_{t=1}^T \Gamma_T^{-1/2} (\eta_t \eta_t^{'} + A x(t-1) x^{'}(t-1) A^{'}) \Gamma_T^{-1/2}
\]
As a result $\sigma_{\min}(\sum_{t=0}^{T}x(t)x(t)^{'}) \geq \Omega(T)$. We have shown that the convergence for explosive systems is at least as fast as for stable systems. A closer analysis would give us the exponential dependence, but that proof would not be very different from the analysis in Proposition 9 of~\cite{faradonbeh2017finite}.
\begin{cor}
\label{lower_bnd}
There exist absolute constants $C, C^{'}$ such that for $T \geq (C/\delta)( \log{(1/\delta)} + d \log{d})$ we have that
\[
||A - \hat{A}||_{\text{op}} \leq (C^{'}/\sqrt{T})\sqrt{d \log{d /\delta}}
\]
\end{cor}
\section{Main Results}
\label{main_results}
We will first show non--asymptotic rates for the three separate regimes, followed by the case when $A$ has a general eigenvalue distribution.
\begin{thm}
\label{main_result}
The following non-asymptotic bounds hold, with probability at least $1-\delta$, for the least squares estimator:
\begin{itemize}
\item For $A \in \mathcal{S}_0 \cup \mathcal{S}_1$
$$||A - \hat{A}||_{2} \leq \sqrt{\frac{C}{T}}\underbrace{\gamma_s\Big(A, \frac{\delta}{4}\Big)}_{=O(\sqrt{\log{(\frac{1}{\delta}})})}$$
whenever
$$T \geq \max{\Big(T_{\eta}\Big(\frac{\delta}{4}\Big), T_s\Big(\frac{\delta}{4}\Big)\Big)}$$
\item For $A \in \mathcal{S}_1$
$$||A - \hat{A}||_{2} \leq \frac{C \sigma_{\max}(A^{-1})}{\sqrt{T\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_0(\delta)}\rfloor}(A))}}\underbrace{\gamma_{ms}\Big(A, \frac{\delta}{2}\Big)^2}_{=O(\log{(\frac{T}{\delta})})}$$
whenever
$$T \geq \max{\Big(2T_{\eta}\Big(\frac{\delta}{3T}\Big), 2T_s\Big(\frac{\delta}{3T}\Big), T_{ms}\Big(\frac{\delta}{2}\Big)\Big)}$$
Since $\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_0(\delta)}\rfloor}(A)) \geq \alpha(d)\frac{T}{\log{T}}$, we have that
$$||A - \hat{A}||_{2} \leq \sqrt{\frac{\log{T}}{\alpha(d)}}\frac{\gamma_{ms}\Big(A, \frac{\delta}{2}\Big)^2}{T}$$
\item For $A \in \mathcal{S}_2$
$$||A - \hat{A}||_{2} \leq C\sigma_{\max}(A^{-T}) \underbrace{\gamma_e\Big(A, \frac{\delta}{5}\Big)}_{=O(\frac{1}{\delta})}$$
whenever
$$T \in T_{u}\Big(\frac{\delta}{5}\Big)$$
Since $\sigma_{\max}(A^{-T}) \leq \alpha(d) (\rho_{\min}(A))^{-T}$ for $A \in \mathcal{S}_2$, the identification error decays exponentially with $T$.
\end{itemize}
Here $C, c$ are absolute constants and $\alpha(d)$ is a function that depends only on $d$.
\end{thm}
\begin{remark}
$T_u(\delta)$ is a set for which there exists a minimum $T_{*} < \infty$ such that $T \in T_u(\delta)$ whenever $T \geq T_{*}$. However, there might be $T < T_{*}$ for which the defining inequality of $T_{u}(\delta)$ holds. Whenever we write $T \in T_u(\delta)$ we mean $T \geq T_{*}$.
\end{remark}
\begin{proof}
We start by writing an upper bound
\begin{align}
\label{err}
||A - \hat{A}||_{\text{op}} &\leq ||Y_T^{+}S_T||_{\text{op}} \nonumber \\
&\leq ||(Y_T^{+})^{1/2}||_{\text{op}}||(Y_T^{+})^{1/2}S_T||_{\text{op}}.
\end{align}
The rest of the proof can be broken into two parts:
\begin{itemize}
\item Showing invertibility of $Y_T$ and lower bounds on the least singular value
\item Bounding the self-normalized martingale term given by $(Y_T^{+})^{1/2}S_T$
\end{itemize}
The invertibility of $Y_T$ is where most of the work lies. Once we have a tight characterization of $Y_T$, one can simply obtain the error bound by using Proposition~\ref{selfnorm_bnd}. Here we sketch the basis of our approach. First, we find deterministic $V_{up}, V_{dn}, T_0$ such that
\begin{align}
\mathcal{E}_0 &= \{0 \prec V_{dn} \preceq Y_T \preceq V_{up}, T \geq T_0\} \\
\mathbb{P}(\mathcal{E}_0) &\geq 1 - \delta
\end{align}
The next step is to bound the self--normalized term. Under $\mathcal{E}_0$, it is clear that $Y_T$ is invertible and we have
\[
(Y_T^{+})^{1/2}S_T = Y_T^{-1/2} S_T.
\]
Define event $\mathcal{E}_1$ in the following way
\begin{align*}
&\mathcal{E}_1 = \\
&\Bigg\{||S_{T}||_{(Y_T + V_{dn})^{-1}} \leq \sqrt{8d \log {\Bigg(\dfrac{5 \text{det}(Y_TV_{dn}^{-1} + I)^{1/2d}}{\delta^{1/d}}\Bigg)}}\Bigg\}
\end{align*}
It follows from Proposition~\ref{selfnorm_bnd} that $\mathbb{P}(\mathcal{E}_1) \geq 1- \delta$. Then
\[
\mathcal{E}_0 \implies Y_T + V_{dn} \preceq 2 Y_T \implies (Y_T + V_{dn})^{-1} \succeq \frac{1}{2}Y_T^{-1},
\]
and we have that under $\mathcal{E}_0$
\[
||S_T||_{Y_T^{-1}} \leq \sqrt{2} ||S_T||_{(Y_T + V_{dn})^{-1}}.
\]
Now considering the intersection $\mathcal{E}_0 \cap \mathcal{E}_1$, we get
\begin{align}
&\mathcal{E}_0 \cap \mathcal{E}_1 \implies \nonumber \\
&\mathcal{E}_0 \cap \Bigg\{||S_{T}||_{Y_T^{-1}} \leq \sqrt{16d \log {\Bigg(\dfrac{5 \text{det}(V_{up}V_{dn}^{-1} + I)^{1/2d}}{\delta^{1/d}}\Bigg)}}\Bigg\}
\end{align}
We replaced the LHS of $\mathcal{E}_1$ by the lower bound obtained above and in the RHS replaced $Y_T$ by its upper bound under $\mathcal{E}_0$, $V_{up}$. Further, observe that $\mathbb{P}(\mathcal{E}_0 \cap \mathcal{E}_1) \geq 1 - 2\delta$. Under $\mathcal{E}_0 \cap \mathcal{E}_1$ we get
\begin{equation}
||A - \hat{A}||_{\text{op}} \leq \underbrace{\frac{1}{\sqrt{\sigma_{\min}(V_{dn})}}}_{\alpha_T}\underbrace{\sqrt{16d \log {\Bigg(\dfrac{5 \text{det}(V_{up}V_{dn}^{-1} + I)^{1/2d}}{\delta^{1/d}}\Bigg)}}}_{\beta_T} \label{error_form}
\end{equation}
where $\alpha_T$ goes to zero with $T$ and $\beta_T$ is typically a constant. This shows that OLS learns $A$ with increasing accuracy as $T$ grows. The deterministic $V_{up}, V_{dn}, T_0$ differ for each regime of $\rho(A)$ and typically depend on the probability threshold $\delta$. We now sketch the approach for finding these for each regime.
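As an illustration of the object being analyzed, the following minimal simulation sketch (our code, with arbitrary test matrices and horizons; not the paper's experiments) computes the OLS error $||\hat{A} - A||_2$ in the three regimes of $\rho(A)$:
\begin{verbatim}
# OLS estimate hat{A} = (sum_t x_{t+1} x_t')(sum_t x_t x_t')^{+} and its
# spectral-norm error for stable / marginally stable / explosive A.
import numpy as np

def ols_error(A, T, seed=0):
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    x = np.zeros(d)
    XX = np.zeros((d, d))   # sum_t x_t x_t'
    YX = np.zeros((d, d))   # sum_t x_{t+1} x_t'
    for _ in range(T):
        x_next = A @ x + rng.standard_normal(d)
        XX += np.outer(x, x)
        YX += np.outer(x_next, x)
        x = x_next
    A_hat = YX @ np.linalg.pinv(XX)
    return np.linalg.norm(A_hat - A, 2)

for rho in (0.5, 1.0, 1.5):   # stable / marginally stable / explosive
    A = np.diag([rho, 0.9 * rho])
    print(rho, [round(ols_error(A, T), 4) for T in (100, 200)])
\end{verbatim}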
\subsection*{$Y_T$ behavior when $A \in \mathcal{S}_0 \cup \mathcal{S}_1$}
The key step here is to characterize $Y_T$ in terms of $Y_{T-1}$.
\begin{align}
Y_T &= x_0 x_0^{'} + A Y_{T-1} A^{'} + \nonumber \\
&+ \sum_{t=0}^{T-1}(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'}) + \sum_{t=1}^{T}\eta_t \eta_t^{'} \nonumber \\
&\succeq A Y_{T-1} A^{'} + \nonumber \\
&+ \sum_{t=0}^{T-1}(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'}) + \sum_{t=1}^{T}\eta_t \eta_t^{'}. \label{energy_bnd}
\end{align}
Since $\{\eta_t\}_{t=1}^T$ are i.i.d. subgaussian we can show that $\sum_{t=1}^T \eta_t \eta_t^{\prime}$ concentrates near $TI_{d \times d}$ with high probability. Using Proposition~\ref{selfnorm_bnd} once again, we will show that with high probability
\begin{align*}
\sum_{t=0}^{T-1}(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'}) &\succeq -\epsilon ( A Y_{T-1} A^{'} + \sum_{t=1}^{T}\eta_t \eta_t^{\prime})
\end{align*}
where $\epsilon \leq 1/2$ whenever $\rho_i(A) \leq 1 + C/T$ and $T \geq T_0$ for some $T_0$ depending only on $A$. As a result with high probability we have
\begin{align}
Y_T &\succeq (1-\epsilon)A Y_{T-1} A^{'} + (1 - \epsilon)\sum_{t=1}^T \eta_t \eta_t^{\prime} \nonumber\\
&\succeq (1 - \epsilon)\sum_{t=1}^T \eta_t \eta_t^{\prime}. \label{bnd1}
\end{align}
The details of this proof are provided in the appendix as Section~\ref{short_proof}. When $1 - C/T \leq \rho_i(A) \leq 1 + C/T$ we note that the bound in Eq.~\eqref{bnd1} is not tight. The key to sharpening the lower bound is the following observation: for $T >\max{\Big(2T_{\eta}\Big(\frac{\delta}{3T}\Big), 2T_s\Big(\frac{\delta}{3T}\Big), T_{ms}\Big(\frac{\delta}{2}\Big)\Big)}$ we can ensure with high probability
\begin{align}
\sum_{\tau=1}^t \eta_{\tau} \eta_{\tau}^{\prime} &\succeq (1-\epsilon)tI \nonumber\\
Y_t &\succeq (1-\epsilon)A Y_{t-1} A^{'} + (1 - \epsilon)tI \label{bnd2}
\end{align}
simultaneously for all $t \geq T/2$. Then we will show that $\epsilon = \beta_0(\delta)$ in Table~\ref{notation}. The sharpening of $\epsilon$ from $1/2$ to $\beta_0(\delta)$ is only possible because all the eigenvalues of $A$ are close to unity. In that case by successively expanding Eq.~\eqref{bnd2} we get
\begin{equation}
Y_T \succeq (1 - \epsilon)^{1/\beta_{0}(\delta)}A Y_{T/2-1} A^{'} + \frac{T}{2}\sum_{t=1}^{1/\beta_{0}(\delta)}(1-\epsilon)^{t}A^{t}A^{t \prime} \label{bnd3}
\end{equation}
and then Eq.~\eqref{bnd3} can be reduced to
\[
Y_T \succeq (1 - \epsilon)^{1/\beta_{0}(\delta)}A Y_{T/2-1} A^{'} + \frac{T (\Gamma_{1/\beta_0(\delta)}(A)-I)}{ 4e}.
\]
We show that
$$1/\beta_0(\delta) \geq \frac{\alpha(d)TR^2\sigma_{\min}(A A^{\prime})}{8ec(A, \delta)}$$
and by Proposition~\ref{gramian_lb}, $Y_T \succeq \alpha(d)T^2 I$ for some function $\alpha(\cdot)$ that depends only on $d$. The details of the proof are provided in the appendix as Section~\ref{sharp_bounds}.
To get deterministic upper bounds for $Y_T$ with high probability, we note that
\begin{align*}
Y_T &\preceq \text{tr}\left(\sum_{t=1}^T X_t X_t^{\prime}\right) I.
\end{align*}
Then we can use Hanson--Wright inequality or Markov inequality to get an upper bound as shown in appendix as Proposition~\ref{energy_markov}.
\subsection*{$Y_T$ behavior when $A \in \mathcal{S}_2$}
The concentration arguments used to show the convergence for stable systems do not work for unstable systems. As discussed before, $X_t = \sum_{\tau=1}^t A^{t-\tau} \eta_\tau$ and, consequently, $X_T$ depends strongly on $X_1, X_2, \ldots$. Due to this dependence we are unable to use the typical techniques where the $X_i$s are divided into roughly independent blocks of covariates to obtain concentration results. Motivated by~\cite{lai1983asymptotic}, we instead work by transforming $x_t$ as
\begin{align}
z_t &= A^{-t}x_t \nonumber\\
&= x_0 + \sum_{\tau=1}^{t} A^{-\tau} \eta_{\tau}. \label{zt_form}
\end{align}
The steps of the proof proceed as follows. Define
\begin{align}
U_T &= A^{-T}\sum_{t=1}^T x_t x_t^{\prime} A^{-T \prime} = A^{-T} Y_T A^{-T \prime} \nonumber \\
&= \sum_{t=1}^{T} A^{-T+t}z_t z_t^{\prime} A^{-T+t \prime} \nonumber \\
F_{T} &= \sum_{t=0}^{T-1} A^{-t} z_T z_T^{'} A^{-t \prime} \label{ut_ft}
\end{align}
We show that
\[
||F_T - U_T||_{\text{op}} \leq \epsilon.
\]
Here $\epsilon$ decays exponentially fast with $T$. Then the lower and upper bounds on $U_T$ can be shown by proving corresponding bounds for $F_T$. A necessary condition for invertibility of $F_T$ is that the matrix $A$ be regular (in a later section we show that it is also sufficient). If $A$ is regular, the deterministic lower bound for $F_T$ is fairly straightforward and depends on $\phi_{\min}(A)$ defined in Definition~\ref{outbox}. The upper bound can be obtained by using the Hanson--Wright inequality. The complete steps are given in the appendix as Section~\ref{explosive}.
\end{proof}
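The reduction from $Y_T$ to the almost-deterministic $F_T$ in the explosive regime can be illustrated numerically; the sketch below (our choice of $A$, horizon and seed) computes $||U_T - F_T||$, which is exponentially small in $T$:
\begin{verbatim}
# z_t = A^{-t} x_t; U_T = A^{-T} Y_T A^{-T}' is close to
# F_T = sum_{t=0}^{T-1} A^{-t} z_T z_T' A^{-t}' for explosive A.
import numpy as np

rng = np.random.default_rng(3)
A = np.diag([1.3, 1.7])
d, T = 2, 40
x = np.zeros(d)
Y = np.zeros((d, d))
for _ in range(T):
    x = A @ x + rng.standard_normal(d)
    Y += np.outer(x, x)
A_inv = np.linalg.inv(A)
A_mT = np.linalg.matrix_power(A_inv, T)
U = A_mT @ Y @ A_mT.T
z_T = A_mT @ x                        # z_T = A^{-T} x_T
F = sum(np.linalg.matrix_power(A_inv, t) @ np.outer(z_T, z_T)
        @ np.linalg.matrix_power(A_inv, t).T for t in range(T))
print(np.linalg.norm(U - F, 2))       # tiny, decaying exponentially in T
\end{verbatim}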
The analysis presented here is sharper than~\cite{faradonbeh2017finite} as we use subgaussian matrix inequalities such as Hanson--Wright Inequality (Theorem~\ref{hanson-wright}) to bound the error terms in contrast to uniformly bounding each noise variable and applying a less efficient Bernstein inequality. Another minor difference is that~\cite{lai1983asymptotic},\cite{faradonbeh2017finite} consider $||U_T-F_{\infty}||$ instead and as a result they require a martingale concentration argument to show the existence of $z_{\infty}$.
Lower bounds for identification error when $\rho(A) \leq 1$ have been derived in~\cite{simchowitz2018learning}. In Theorem~\ref{main_result}, the error in identification for explosive matrices depends on $\delta$ as $\frac{1}{\delta}$, unlike for stable and marginally stable matrices where the dependence is $\log{\frac{1}{\delta}}$. Typical minimax analyses, such as the one in~\cite{simchowitz2018learning}, are unable to capture this relation between error and $\delta$. Here we show that such a dependence is unavoidable:
\begin{prop}
\label{minimax}
Let $A=a \geq 1.1$ be a 1--D matrix and $\hat{A} = \hat{a}$ be its OLS estimate. Then whenever $Ca^2T^2a^{-T} > \delta^2$, we have with probability at least $\delta$ that
\[
|a - \hat{a}| \geq \frac{C(1-a^{-2}) \delta}{ -a^2 (\log{\delta})^3}
\]
where $C$ is a universal constant. If $Ca^2T^2a^{-T} \leq \delta^2$ then with probability at least $\delta$ we have
\[
|a - \hat{a}| \geq \Big(\frac{C(1-a^{-2})}{-\delta \log{\delta}}\Big)a^{-T}
\]
\end{prop}
Our lower bounds indicate that $\frac{1}{\delta}$ is inevitable in Theorem~\ref{main_result}, \textit{i.e.}, when $Ca^2T^2a^{-T} \leq \delta^2$. Second, when $Ca^2T^2a^{-T} > \delta^2$, our bound sharpens Theorem B.2 in~\cite{simchowitz2018learning}. The proof and an explicit comparison is provided in Section~\ref{optimal_bnd}.
For the general case we use a well known fact for matrices, namely, that there exists a similarity transform $\tilde{P}$ such that
\begin{align}
A = \tilde{P}^{-1} \begin{bmatrix}
A_{e} & 0 & 0 \\
0 & A_{ms} & 0 \\
0 & 0 & A_s
\end{bmatrix}\tilde{P} \label{partition0}
\end{align}
Here $A_{e} \in \mathcal{S}_2, A_{ms} \in \mathcal{S}_1, A_s \in \mathcal{S}_0$. Although one might be tempted to use Theorem~\ref{main_result} to provide error bounds, mixing between different components due to the transformation $\tilde{P}$ requires a careful analysis of the identification error. We show that the error bounds are limited by the slowest component, as we describe below. We do not provide the exact characterization due to a shortage of space. The details are given in the appendix as Section~\ref{composite_result_proof}.
\begin{thm}
\label{composite_result}
For any regular matrix $A$ we have with probability at least $1-\delta$,
\begin{itemize}
\item For $A \in \mathcal{S}_1 \cup \mathcal{S}_2$
$$||A - \hat{A}||_{2} \leq \frac{\text{poly}(\log{T}, \log{\frac{1}{\delta}})}{T}$$
whenever
$$T \geq \text{poly}\Big(\log{\frac{1}{\delta}}\Big)$$
\item For $A \in \mathcal{S}_0 \cup \mathcal{S}_1 \cup \mathcal{S}_2$
$$||A - \hat{A}||_{2} \leq \frac{\text{poly}(\log{T}, \log{\frac{1}{\delta}})}{\sqrt{T}}$$
whenever
$$T \geq \text{poly}\Big(\log{\frac{1}{\delta}}\Big)$$
\end{itemize}
Here $\text{poly}(\cdot)$ is a polynomial function.
\end{thm}
\begin{proof}
Define the partition of $A$ as Eq.~\eqref{partition0}. Since
\begin{align}
X_t &= \sum_{\tau=1}^t A^{\tau-1}\eta_{t -\tau+1} \nonumber \\
\tilde{X}_t = \tilde{P}^{-1}X_t &= \sum_{\tau=1}^t \tilde{A}^{\tau-1}\underbrace{\tilde{P}^{-1}\eta_{t -\tau+1}}_{\tilde{\eta}_{t-\tau+1}}
\end{align}
then the transformed dynamics are as follows:
\begin{align*}
\tilde{X}_{t+1} &= \tilde{A}\tilde{X}_t + \tilde{\eta}_{t+1}.
\end{align*}
Here $\{\tilde{\eta}_t\}_{t=1}^T$ are still independent. Correspondingly we also have a partition for $\tilde{X}_t, \tilde{\eta}_t$
\begin{align}
\tilde{X}_t = \begin{bmatrix}
X^{e}_t \\
X^{ms}_t \\
X^{s}_t
\end{bmatrix}&, \tilde{\eta}_t = \begin{bmatrix}
\eta^{e}_t \\
\eta^{ms}_t \\
\eta^{s}_t
\end{bmatrix}
\end{align}
Then we have
\begin{align}
\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime} &= \sum_{t=1}^T\begin{bmatrix}
X^{e}_t (X^{e}_t)^{\prime} & X^{e}_t (X^{ms}_t)^{\prime} & X^{e}_t (X^{s}_t)^{\prime}\\
X^{ms}_t (X^{e}_t)^{\prime} & X^{ms}_t (X^{ms}_t)^{\prime} & X^{ms}_t (X^{s}_t)^{\prime} \\
X^{s}_t (X^{e}_t)^{\prime} & X^{s}_t (X^{ms}_t)^{\prime} & X^{s}_t (X^{s}_t)^{\prime}
\end{bmatrix} \label{mixed_matrix}
\end{align}
The next step is to show the invertibility of $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$. Although reminiscent of our previous set up, there are some critical differences. First, unlike before, coordinates of $\tilde{\eta}_t$, \textit{i.e.}, $\{\eta^{e}_t,\eta^{ms}_t,\eta^{s}_t\}$ are not independent. A major implication is that it is no longer obvious that the cross terms between different submatrices, such as $\sum_{t=1}^T X^{e}_t (X^{ms}_t)^{\prime}$, go to zero. Our proof will have three major steps:
\begin{itemize}
\item First we will show that the diagonal submatrices are invertible. This follows from Theorem~\ref{main_result} by arguing that the result can be extended to a noise process $\{P\eta_t\}_{t=1}^T$ where $\{\eta_t\}_{t=1}^T$ are independent subgaussian and the elements of $\eta_t$ are also independent for all $t$. The only change will be the appearance of an additional $\sigma_1^2(P)$ factor in the subgaussian parameter (see Corollary~\ref{dep-hanson-wright}). We will then show that
\begin{align*}
X_{mss}= \sum_{t=1}^T\begin{bmatrix}
X^{ms}_t (X^{ms}_t)^{\prime} & X^{ms}_t (X^{s}_t)^{\prime} \\
X^{s}_t (X^{ms}_t)^{\prime} & X^{s}_t (X^{s}_t)^{\prime}
\end{bmatrix}
\end{align*}
is invertible. This will follow from Theorem~\ref{main_result} (its dependent extension). Specifically, since $X_{mss}$ contains only stable and marginally stable components, it falls under $A \in \mathcal{S}_0 \cup \mathcal{S}_1$. It should be noted that since $X^{ms}_t, X^{s}_t$ are not independent in general, the invertibility of $X_{mss}$ can be shown only through Theorem~\ref{main_result}. In a similar fashion, $\sum_{t=1}^TX^{e}_t (X^{e}_t)^{\prime}$ is also invertible as it corresponds to $A \in \mathcal{S}_2$.
\item Since invertibility of block diagonal submatrices in $\sum_{t=1}^T \tilde{X}_t \tilde{X}_t^{\prime}$ does not imply the invertibility of the entire matrix we also need to show that the cross terms $||X^{e}_t (X^{ms}_t)^{\prime}||_2,||X^{e}_t (X^{s}_t)^{\prime}||_2$ are sufficiently small relative to the appropriate diagonal blocks.
\item Along the way we also obtain deterministic lower and upper bounds for the sample covariance matrix following which the steps for bounding the error are similar to Theorem~\ref{main_result}.
\end{itemize}
The details are in the appendix as Section~\ref{composite_result_proof}.
\end{proof}
\section{Extension to presence of control input}
\label{extensions}
Here we sketch how to extend our results to the general case when we also have a control input, \textit{i.e.},
\begin{equation}
\label{control_eq}
X_{t+1} = AX_t + BU_{t} + \eta_{t+1}
\end{equation}
Here $A, B$ are unknown but we can choose $U_t$. Pick independent vectors $\{U_t \sim \bm{\mathcal{N}}(0, I)\}_{t=1}^T$. We can represent this as a variant of Eq.~\eqref{lti} as follows
\begin{align*}
\underbrace{\begin{bmatrix}
X_{t+1} \\
U_{t+1}
\end{bmatrix}}_{\bar{X}_{t+1}} &= \underbrace{\begin{bmatrix}
A & B \\
0 & 0
\end{bmatrix}}_{\bar{A}}\begin{bmatrix}
X_{t} \\
U_{t}
\end{bmatrix} + \underbrace{\begin{bmatrix}
\eta_{t+1} \\
U_{t+1}
\end{bmatrix}}_{\bar{\eta}_{t+1}}
\end{align*}
Since
\begin{align*}
\text{det}\Bigg(\begin{bmatrix}
A -\lambda I & B \\
0 & -\lambda I
\end{bmatrix}\Bigg) = 0
\end{align*}
holds precisely when $\lambda$ equals an eigenvalue of $A$ or $\lambda = 0$, the eigenvalues of $\bar{A}$ are the same as those of $A$ together with some additional zero eigenvalues. Now we can simply use Theorem~\ref{composite_result}.
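A minimal simulation sketch of this augmentation (our code; $A$, $B$, the horizon and the seed are arbitrary choices) recovers $A$ and $B$ from the top block row of the OLS estimate of $\bar{A}$:
\begin{verbatim}
# Augmented state bar{x}_t = (x_t, U_t); OLS on bar{x} recovers [A B]
# in the top block row of bar{A}.
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
d, p, T = 2, 1, 5000
x, u = np.zeros(d), rng.standard_normal(p)
XX = np.zeros((d + p, d + p))
YX = np.zeros((d + p, d + p))
for _ in range(T):
    xbar = np.concatenate([x, u])
    x = A @ x + B @ u + rng.standard_normal(d)   # state update
    u = rng.standard_normal(p)                   # fresh input U_{t+1}
    xbar_next = np.concatenate([x, u])
    XX += np.outer(xbar, xbar)
    YX += np.outer(xbar_next, xbar)
Abar_hat = YX @ np.linalg.inv(XX)
print(Abar_hat[:d, :d])   # approx A
print(Abar_hat[:d, d:])   # approx B
\end{verbatim}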
\section{Extension to heavy tailed noise}
\label{noise_ind}
It is claimed in~\cite{faradonbeh2017finite} that techniques involving inequalities for subgaussian distributions cannot be used for the class of sub-Weibull distributions they consider. However, by bounding the noise process, as even \cite{faradonbeh2017finite} does, we can convert the heavy tailed process into a zero mean independent subgaussian one. In such a case our techniques can still be applied, and they incur only an extra logarithmic factor. We consider the class of distributions introduced in~\cite{faradonbeh2017finite} called sub--Weibull distribution. Let $\eta_{t, i}$ be the $i^{th}$ element of $\eta_t$ then $\eta_{t, i}$ has sub--Weibull distribution if
\begin{equation}
\label{sub_weibull}
\mathbb{P}(|\eta_{t, i}| > y) \leq b \exp{\Big(\frac{-y^{\alpha}}{m}\Big)}
\end{equation}
When $\alpha = 2$ it is subgaussian, when $\alpha =1$ it is subexponential, and when $\alpha < 1$ it is sub--Weibull. Assume for now that $\eta_{t, i}$ has a symmetric distribution; the extension to the asymmetric case needs some additional computation and is not discussed here. Consider the event
$$\mathcal{W}(\delta) = \Bigg\{\max_{1 \leq t \leq T} ||\eta_{t}||_{\infty} \leq \nu_T(\delta)\Bigg\}$$
where $\nu_T(\delta) = \Big(m \log{\Big(\frac{bTd}{\delta}\Big)}\Big)^{1/\alpha}$. Then Proposition 3 in~\cite{faradonbeh2017finite} shows that $\mathbb{P}(\mathcal{W}(\delta)) \geq 1 - \delta$. Clearly, because the $\{\eta_{t, i}\}_{t=1, i=1}^{t=T, i=d}$ are i.i.d. and have symmetric distribution,
\begin{equation}
\label{zero_mean}
\mathbb{E}[\eta_{t, i} | \mathcal{W}(\delta)] = \mathbb{E}[\eta_{t, i} | \{|\eta_{t, i}| \leq \nu_T(\delta)\}] = 0
\end{equation}
Then under $\mathcal{W}(\delta)$, the $\eta_{t, i}$ have mean zero and $\{\eta_{t, i}\}_{t=1, i=1}^{t=T, i=d}$ remain independent. Further, since under $\mathcal{W}(\delta)$ they are bounded, they are also subgaussian, with variance proxy $R^2 \leq \nu_T(\delta)^2$, which is logarithmic in $T$. This appears as simply a scaling factor in Theorem~\ref{selfnorm_main} and Proposition~\ref{selfnorm_bnd}. We can now use all our techniques from before.
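The truncation step can be sketched as follows (our illustration; the generator \texttt{sub\_weibull} and all parameters are our own choices, correct only up to the constants $b, m$):
\begin{verbatim}
# Truncate symmetric sub-Weibull noise at nu_T(delta); the result is
# bounded, (empirically) zero-mean, hence subgaussian.
import numpy as np

def sub_weibull(shape, alpha, m, rng):
    # symmetric heavy-tailed noise: |g|^(2/alpha) with a random sign has
    # a tail of the form exp(-y^alpha / m) up to constants, g ~ N(0, 1)
    g = rng.standard_normal(shape)
    return np.sign(g) * (m * np.abs(g)) ** (2.0 / alpha)

rng = np.random.default_rng(5)
T, d, alpha, m, b, delta = 10_000, 3, 0.5, 1.0, 2.0, 0.01
eta = sub_weibull((T, d), alpha, m, rng)
nu = (m * np.log(b * T * d / delta)) ** (1.0 / alpha)
eta_trunc = np.clip(eta, -nu, nu)
print(np.mean(np.abs(eta) > nu))   # rare: truncation is seldom active
print(abs(eta_trunc.mean()))       # approx 0: truncated mean vanishes
\end{verbatim}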
\section{Optimality of Bound}
\label{optimal_bnd}
Let $A = a$ be a 1-D system and assume that $T \in T_{u}(\delta)$ (as in Table~\ref{notation}); then $X_t, \eta_{t}$ are just scalars. Let $E$ be the error, \textit{i.e.},
\begin{align*}
E &= (\sum_{t=1}^T x_t^2)^{-1}(\sum_{t=1}^T x_t \eta_{t+1}) \\
&= a^{-T}(\sum_{t=1}^T a^{-2T}x_t^2)^{-1}(\sum_{t=1}^T a^{-T}x_t \eta_{t+1})
\end{align*}
In this section, we will show that the bound obtained for explosive systems is optimal in terms of $\delta$. Assume the $\eta_t \sim \bm{\mathcal{N}}(0, 1)$ are i.i.d. Gaussian. Let $S_T = \sum_{t=1}^T a^{-T}x_t \eta_{t+1}, U_T=\sum_{t=1}^T a^{-2T}x_t^2$. Now $E = a^{-T}U_T^{-1} S_T$ and $S_T$ has the following form
\begin{equation}
2S_T = [\eta_{T+1}, \ldots, \eta_1] \underbrace{\begin{bmatrix}
0 & a^{-T} & a^{-T+1}&\hdots & a^{-1} \\
a^{-T} & 0 & a^{-T} & \hdots & a^{-2} \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \ddots \\
a^{-1} & a^{-2} & a^{-3} & \hdots & 0
\end{bmatrix}}_{=M} \underbrace{\begin{bmatrix}
\eta_{T+1} \\
\vdots \\
\eta_1
\end{bmatrix}}_{=\tilde{\eta}} \label{ST_form_dist}
\end{equation}
Define $F_T = \sum_{i=1}^T a^{-2i+2} (a^{-2T} x_T^2) = \frac{1-a^{-2T}}{1-a^{-2}}a^{-2T}x_T^2$ and $\sigma^2 = \text{Var}(a^{-T}x_T)$. It is clear that $a^{-T}x_T$ is a Gaussian random variable. Note that $F_T, U_T$ are the same as in Eq.~\eqref{ut_ft} and Section~\ref{explosive} when $A = a$. We can easily calculate that
\[
a^{-2} \leq \sigma^2 \leq \frac{1}{a^2 - 1}
\]
Consider four events
\begin{align*}
\mathcal{E}_1(\delta) &= \Bigg\{|U_T -F_T| \leq \frac{\delta^2 \sigma^2}{C} \vee \Big(\frac{C T^2 a^{-T}}{1-a^{-2}} + \Big(1 + \frac{1}{c} \log{\frac{1}{\delta}}\Big)\frac{Ta^{-2T}}{(1-a^{-2})}\Big) \Bigg\} ,\mathcal{E}_2(\delta) = \Bigg\{|S_T| \geq \frac{\delta}{-Ca^2\log{\delta}} \Bigg\} \\
\mathcal{E}_3(\delta) &= \Bigg\{0 \leq F_T \leq C_2\delta^2 \sigma^2\Bigg\} , \mathcal{E}_4(\delta) = \Bigg\{ 0 \leq U_T \leq \Big((C_2 + 1/C)\delta^2 \sigma^2\Big) \vee \Big(\frac{C T^2 a^{-T}}{1-a^{-2}} + \Big(1 + \frac{1}{c} \log{\frac{1}{\delta}}\Big)\frac{Ta^{-2T}}{(1-a^{-2})}\Big) \Bigg\}
\end{align*}
From Eq.~\eqref{tight_error_cum} we have with probability at least $1-\frac{\delta}{2}$ that
\begin{align*}
||U_T - F_T||_{2} &\underbrace{\leq}_{\text{Eq.}~\eqref{tight_error_cum}} \Bigg(4T^2 \sigma_1^2(A^{-\frac{(T+1)}{2}}) \text{tr}(\Gamma_T(A^{-1}))+ \Big(T + \frac{T}{c}\log{\frac{1}{\delta}}\Big)\sigma^2_1(A^{-T-1})\text{tr}(\Gamma_T(A^{-1}))\Bigg) \\
&\leq \frac{4T^2 a^{-T}}{1-a^{-2}} + \Big(1 + \frac{1}{c} \log{\frac{1}{\delta}}\Big)\frac{Ta^{-2T}}{(1-a^{-2})}
\end{align*}
Assume $\delta^2 \in (0, \frac{1}{128}]$ then
\begin{align*}
\mathbb{P}(\mathcal{E}_3(\delta)) &= \frac{2}{\sqrt{2 \pi} \sigma}\int_{2 \delta \sigma}^{16\delta \sigma} e^{-\frac{x^2}{2\sigma^2}}dx \\
&\geq \frac{14 \delta}{\sqrt{2 \pi}} e^{-\frac{256 \delta^2}{2}} \\
&\geq \frac{14 \delta}{\sqrt{2 \pi}e} \geq 2 \delta
\end{align*}
Recall $T_u(\delta)$ is the set of $T$ that satisfies Eq.~\eqref{t_exp_req} when $A = a$.
\subsection{$T \in T_u({\delta})$}
\label{t_tu}
For $T \in T_u(\delta)$ and from Eq.~\eqref{error_cum}, we have with probability at least $1 -\frac{\delta}{2}$ that
\begin{align*}
||U_T - F_T||_{2} &\leq \frac{4T^2 a^{-T}}{1-a^{-2}} + \frac{Ta^{-2T}}{\delta(1-a^{-2})} \underbrace{\leq}_{T \in T_u(\delta), \text{Eq.}~\eqref{t_exp_req}} \frac{\phi_{\min}(a)^2 \psi(a)^2 \delta^2}{2\sigma_{\max}(P)^2} \leq \frac{C\delta^2}{(a^2-1)}
\end{align*}
The last inequality follows because for $1$-D systems $\phi_{\min}(A), \psi(A), \sigma_{\max}(P)$ are just constants, for example $P = 1, \phi_{\min}(a) = 1, \psi(a)^2 = C\sigma^2 \leq \frac{C}{a^2 - 1}$ which follows by definition. Note $T \in T_{u}(\delta)$ if and only if we have
\[
{\delta^2 \sigma^2} > \frac{C T^2 a^{-T}}{1-a^{-2}}
\]
Thus, $ \mathbb{P}(\mathcal{E}_1(\delta)) \geq 1 -\frac{\delta}{2}$. Clearly $\mathcal{E}_1(\delta) \cap \mathcal{E}_3(\delta) \implies \mathcal{E}_1(\delta) \cap \mathcal{E}_4(\delta)$ and
$$\mathcal{E}_2(\delta) \cap \mathcal{E}_4(\delta) \implies \Big\{ |S_T| U_T^{-1} \geq \frac{C}{-\sigma^2 a^2 \delta \log{\delta}} \Big\}$$
We bound $\mathbb{P}(\mathcal{E}_2(\delta))$ in Section~\ref{char_fn} and Eq.~\eqref{ST_xT_lb}, which gives $\mathbb{P}(\mathcal{E}_2(\delta)) \geq 1 - \frac{\delta}{2}$ and then
\begin{align*}
\mathbb{P}(\mathcal{E}_1(\delta) \cap \mathcal{E}_2(\delta) \cap \mathcal{E}_4(\delta)) &\geq \mathbb{P}(\mathcal{E}_1(\delta) \cap \mathcal{E}_2(\delta) \cap \mathcal{E}_3(\delta)) \\
&\geq \mathbb{P}(\mathcal{E}_1(\delta)) + \mathbb{P}(\mathcal{E}_2(\delta) \cap \mathcal{E}_3(\delta)) - 1 \\
&\geq \mathbb{P}(\mathcal{E}_1(\delta)) + \mathbb{P}(\mathcal{E}_2(\delta)) +\mathbb{P}( \mathcal{E}_3(\delta)) - 2 \\
&\geq \frac{\delta}{2}
\end{align*}
Since $\mathcal{E}_2(\delta) \cap \mathcal{E}_4(\delta) \implies \{ |S_T| U_T^{-1} \geq \frac{C}{-\sigma^2 a^2 \delta \log{\delta} }\}$ when $T \in T_{u}(\delta)$ then
$$ \mathbb{P}(\{ |S_T| U_T^{-1} \geq \frac{C}{-\sigma^2 a^2 \delta \log{\delta}}\}) \geq \frac{\delta}{2}$$
Thus we have proved our claim that with probability at least $\delta$ we have
\begin{equation}
|E_T| \geq \Big(\frac{C}{-\sigma^2 a^2 \delta \log{\delta} }\Big)a^{-T} \geq \frac{C(1-a^{-2})}{-\delta \log{\delta}}a^{-T} \label{final_err_dist_lb}
\end{equation}
whenever $Ca^2T^2 a^{-T} \leq \delta^2$.
\subsection{$T \not \in T_u(\delta)$}
\label{t_not_tu}
If $Ca^2T^2 a^{-T} > \delta^2$, then with probability at least $1-\frac{\delta}{2}$
$$|U_T-F_T| \leq \frac{CT^2 a^{-T}}{1-a^{-2}} + \Big(1 + \frac{1}{c} \log{\frac{1}{\delta}}\Big)\frac{Ta^{-2T}}{(1-a^{-2})}$$
and we have with probability at least $\delta$ that
$$\Big\{ |S_T| U_T^{-1} \geq \frac{C(1-a^{-2}) \delta a^{T}}{-T^2 a^2 \log{\delta} + \Big(1 - \frac{\log{\delta}}{c} \Big)T a^{-T}}\Big\}$$
and we can conclude with probability at least $\delta$
\[
|E_T| \geq \frac{C(1-a^{-2}) \delta}{ -a^2 (\log{\delta})^3}
\]
where we used that $Ca^2T^2 a^{-T} \geq \delta^2 \implies T \leq -C\log{\delta}$.
\subsection{Comparison to existing bounds}
\label{comparison}
\begin{thm}[Theorem B.2~\cite{simchowitz2018learning}]
\label{b2}
Fix an $a_* \in \mathbb{R}$ and define $\Gamma_T = \sum_{t=0}^{T}a_{*}^{2t}$. Fix an alternative $a^{\prime} \in \{a_* - 2\epsilon, a_* + 2\epsilon\}$ and $\delta \in (0, 1/4)$. Then for any estimator $\hat{a}$
\[
\sup_{a \in \{a_*, a^{\prime}\}} \mathbb{P}(|\hat{a}(T) - a_*| \geq \epsilon) \geq \delta
\]
for any $T$ such that $T \Gamma_T \leq \frac{\log{(1/2\delta)}}{8 \epsilon^2}$.
\end{thm}
Note $\Gamma_T = \frac{a^{2T+2}-1}{a^2 -1 }$. Theorem~\ref{b2} suggests that for a given $T, \delta$ if $\epsilon \leq a^{-T}\sqrt{\frac{-C\log{\delta}}{T}}$ then $\mathbb{P}(|a_* - \hat{a}(T)| \geq \epsilon) \geq \delta$. However we show that whenever $Ca^2 T^2 a^{-T} \leq \delta^2$, we have that
\[
\mathbb{P}\Big(|a_* - \hat{a}(T)| \geq a^{-T} \frac{C(1-a^{-2})}{-\delta \log{\delta}} \Big) \geq \delta
\]
Since $a^{-T}\sqrt{\frac{-C\log{\delta}}{T}} \leq a^{-T} \frac{C(1-a^{-2})}{-\delta \log{\delta}}$ our lower bound is tighter.
\begin{thm}[Theorem B.1~\cite{simchowitz2018learning}]
\label{b1}
Let $\epsilon \in (0, 1)$ and $\delta \in (0, 1/2)$. Then $\mathbb{P}(|\hat{a}(T) - a_*| \leq \epsilon) \geq 1 -\delta$ as long as
\[
T \geq \max \Big\{\frac{8}{(|a_* - \epsilon|)^2 - 1} \log{\frac{2}{\delta}}, \frac{4 \log{\frac{1}{\epsilon}}}{\log{(|a_*| - \epsilon)}} + 8 \log{\frac{2}{\delta}}\Big\}
\]
\end{thm}
We now compare Eq.~\eqref{final_err_dist_lb} to the upper bound in Theorem~\ref{b1}. Eq.~\eqref{final_err_dist_lb} gives us that if
\[
\epsilon \leq \frac{C(1-a^{-2})}{-\delta \log{\delta}}a^{-T}
\]
we have with probability at least $\delta$ that $|E_T| \geq \epsilon$. This reduces to whenever
\begin{equation}
\label{t_lb_req}
T_{-} \leq \frac{\log{\frac{1}{\epsilon}}}{\log{a}} + \frac{\log{\frac{C(1-a^{-2})}{\delta}}}{\log{a}}
\end{equation}
we have with probability at least $\delta$ that $|E_T| \geq \epsilon$. We focus on the case $a_{*} > 1+\epsilon$ of Theorem~\ref{b1}. Let $a_{*} = 1 + \epsilon + \gamma$, then the bounds in Theorem~\ref{b1} indicate that whenever
\[
T_{+} \geq \frac{8}{2\gamma + \gamma^2} \log{\frac{2}{\delta}} + \frac{4 \log{\frac{1}{\epsilon}}}{\log{(\gamma + 1)}} + \log{\frac{2}{\delta}}
\]
we have with probability at least $1 -\delta$ that $|E_T| \leq \epsilon$. If $\gamma = o(\epsilon)$, then the requirement on $T$ reduces to
\[
T_{+} \geq \frac{8}{o(\epsilon)} \log{\frac{2}{\delta}} + \frac{4 \log{\frac{1}{\epsilon}}}{o(\epsilon)} + \text{ smaller terms}
\]
By substituting $\log{a} \approx \epsilon$ in $T_{-}$ we note that $ T_{-} \leq T_{+}$. For the case when $\gamma = \Omega(\epsilon)$ for $T_{+}$ we get
\[
T_{+} \geq \Big(\frac{8}{\Omega(\epsilon)} \vee 1\Big) \log{\frac{2}{\delta}} + \frac{4 \log{\frac{1}{\epsilon}}}{\log{(1+\Omega(\epsilon))}} \approx \underbrace{\Big(\frac{8}{\Omega(\epsilon)} \vee 1\Big)}_{\geq (\log{a})^{-1}} \log{\frac{2}{\delta}} + \frac{2 \log{\frac{1}{\epsilon}}}{\log{a}}
\]
In either case $T_- \leq T_+$.
\section{Distribution of $S_T$}
\label{char_fn}
Recall $S_T$ from Eq.~\eqref{ST_form_dist}. Since $\sum_{i, j}|M|_{i, j} \geq ||M||_{*}$ (the nuclear norm), we have that $||M||_{*} \leq \frac{2a^{-1}}{1-a^{-1}}$, and it is obvious that $||M||_2 \geq a^{-1}$. Since $M = U^{\top} \Lambda U$ (because it is symmetric) and the $\eta_t$ are i.i.d. Gaussian, $U \tilde{\eta}$ is also Gaussian with i.i.d. entries. This implies that $2S_T = \sum_{j=1}^{T+1}\lambda_j g_j^2$ where the $\lambda_j$ are the eigenvalues of $M$ and the $g_j$ are i.i.d. Gaussian, with $\sum_j \lambda_j = 0, \sum_j |\lambda_j| \leq \frac{2a^{-1}}{1-a^{-1}}$. The characteristic function of $S_T$ is
\[
\phi_{S_T}(t) = \prod_{j=1}^{T+1} \Big(\frac{1}{1-2it \lambda_j}\Big)^{1/2} = \Big(\frac{1}{1 - 4t^2 (\sum_{l \neq j} \lambda_l \lambda_j) - i 8t^3 (\sum_{l \neq j \neq k} \lambda_l \lambda_j \lambda_k) + 16t^4 (\sum_{l \neq j \neq k \neq p} \lambda_l \lambda_j \lambda_k \lambda_p) \hdots}\Big)^{1/2}
\]
where the coefficient of $t$ vanishes because $\sum_{j=1}^{T+1} \lambda_j = 0$. Further, since $\sum_{l \neq j} 2\lambda_l \lambda_j = - \sum_{j} \lambda_j^2$, the coefficient of $t^2$ equals $2\sum_j \lambda_j^2 = 2\,\text{tr}(M^2) > 0$, and
\begin{align*}
(\sum_{l \neq j \neq k \neq m} \lambda_l \lambda_j \lambda_k \lambda_m) &= \sum_{l} \lambda_l (\sum_{l \neq j \neq k \neq m} \lambda_j \lambda_k \lambda_m) = \sum_{l} \lambda_l (\sum_{l \neq j \neq k \neq m} \lambda_j \lambda_k \lambda_m + \sum_{l \neq p \neq m} \lambda_l \lambda_p \lambda_m - \sum_{l \neq p \neq m} \lambda_l \lambda_p \lambda_m) \\
&= \sum_{l} \lambda_l (\sum_{j \neq k \neq m} \lambda_j \lambda_k \lambda_m - \sum_{l \neq p \neq m} \lambda_l \lambda_p \lambda_m - \sum_{l \neq m } \lambda_l^2 \lambda_m + \sum_{l \neq m } \lambda_l^2 \lambda_m) \\
&= \sum_{l} \lambda_l (- \lambda_l \sum_{ p \neq m} \lambda_p \lambda_m + \sum_{l \neq m } \lambda_l^2 \lambda_m) = \frac{(\sum_l \lambda_l^2)^2}{2} - \sum_l \lambda_l^4 = {\frac{\text{tr}(M^2)^2}{2}} - \text{tr}(M^4)
\end{align*}
The coefficients of the even powers of $t$ can be obtained in a similar fashion. Then recall, by L\'evy's inversion theorem, that
\[
f_{S_T}(x) = \int_{-\infty}^{\infty} e^{-itx} \phi_{S_T}(t) dt \implies \sup_x f_{S_T}(x) \leq \int_{-\infty}^{\infty} |\phi_{S_T}(t)| dt \leq \int_{-\infty}^{\infty} \frac{1}{\sqrt{1 + c_1 t^2 + c_2 t^4 + \hdots}} dt
\]
Now whenever $c_k > 0$ (and not decaying asymptotically to zero) for some $k \geq 2$, we get $\sup_x f_{S_T}(x) \leq C $ for some universal constant $C$ and we can use Proposition~\ref{cont_rand} to get $\mathbb{P}(|S_T| \leq \delta) \leq C \delta$. But since that may not always be true, we can explicitly calculate the integral
\begin{align*}
f_{S_T}(x) &= \int_{-\infty}^{\infty} e^{-itx} \phi_{S_T}(t) dt \approx \underbrace{\int_{-\infty}^{\infty} \frac{e^{itx}}{\sqrt{1 + 2a^{-2}t^2}} dt}_{\text{Modified Bessel Function of the Second Kind} } \\
\int_{-\delta}^{\delta} f_{S_T}(x) dx &= \int_{-\delta}^{\delta}\int_{-\infty}^{\infty} \frac{e^{itx}}{\sqrt{1 + 2a^{-2}t^2}} dt dx = 2\int_{-\infty}^{\infty} \int_{-\delta}^{\delta} \frac{\cos(tx)}{\sqrt{1 + 2a^{-2}t^2}} dx dt \\
&= C\delta \int_{0}^{\infty} \frac{\sin(t\delta)}{\delta t\sqrt{1 + 2a^{-2}t^2}} dt = C\delta \int_{0}^{\delta} \frac{\sin(t\delta)}{\delta t\sqrt{1 + 2a^{-2}t^2}} dt + C\delta \int_{\delta}^{\infty} \frac{\sin(t\delta)}{\delta t\sqrt{1 + 2a^{-2}t^2}} dt \\
&\leq C \delta^2 - C a\delta \log(\delta)
\end{align*}
Thus
\[
\mathbb{P}(|S_T| \leq \delta) \leq -Ca \delta \log{\delta}
\]
and replacing $\delta \rightarrow \frac{-C\delta}{2a\log{\delta}}$ we get
\begin{equation}
\mathbb{P}\Big(|S_T| \leq \frac{-C\delta}{a\log{\delta}}\Big) \leq \frac{\delta}{2} \label{ST_xT_lb}
\end{equation}
\section{Lemma B}
\label{lemmab}
Let the characteristic and minimal polynomials be $\chi(t), \mu(t)$ respectively.
\[
\chi(t) = \prod_{i=1}^k (t-\lambda_i)^{a_i}, \mu(t) = \prod_{i=1}^k (t-\lambda_i)^{b_i}
\]
where $b_i \leq a_i$. Here $b_i$ is the size of the largest Jordan block corresponding to $\lambda_i$ in the Jordan normal form, and $a_i$ is the sum of the sizes of all Jordan blocks corresponding to $\lambda_i$. Now, if $\chi(t) = \mu(t)$ then $a_i=b_i$, \textit{i.e.}, there is only one Jordan block corresponding to each $\lambda_i$. On the other hand, if there is only one Jordan block corresponding to each eigenvalue (geometric multiplicity $=1$), then $a_i=b_i$ and $\chi(t) = \mu(t)$.
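A small numerical example (ours) of this dichotomy, using a matrix with a single Jordan block per eigenvalue:
\begin{verbatim}
# For J with one Jordan block per eigenvalue, each geometric
# multiplicity is 1 and mu(t) = chi(t).
import numpy as np

J = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # one block per eigenvalue: regular
for lam in (2.0, 3.0):
    geo = J.shape[0] - np.linalg.matrix_rank(J - lam * np.eye(3))
    print(lam, geo)               # geometric multiplicity 1 for each
# mu(t) = (t-2)^2 (t-3) annihilates J and has degree 3 = deg chi(t):
mu_J = (J - 2*np.eye(3)) @ (J - 2*np.eye(3)) @ (J - 3*np.eye(3))
assert np.allclose(mu_J, 0)
\end{verbatim}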
\section{Inconsistency of explosive systems}
\label{inconsistent}
Recall that $A = a I$ where $a \geq 1.1$ and
\[
\begin{bmatrix}
X^{(1)}_{t+1} \\
X^{(2)}_{t+1}
\end{bmatrix} = A \begin{bmatrix}
X^{(1)}_{t} \\
X^{(2)}_{t}
\end{bmatrix} + \begin{bmatrix}
\eta^{(1)}_{t+1} \\
\eta^{(2)}_{t+1}
\end{bmatrix}
\]
Since $A$ is a scaled identity we have that $X^{(1)}_{T} = \sum_{t=1}^{T}a^{T-t} \eta^{(1)}_t, X^{(2)}_T = \sum_{t=1}^{T}a^{T-t} \eta^{(2)}_t$. The scaled sample covariance matrix $a^{-2T}Y_T = a^{-2T}\sum_{t=1}^T X_t X_t^{\top}$ is of the following form
\begin{align}
a^{-2T}Y_T &= \begin{bmatrix}
a^{-2T}\sum_{t=1}^T (X^{(1)}_t)^2 & a^{-2T} \sum_{t=1}^T X^{(1)}_t X^{(2)}_t \\
a^{-2T} \sum_{t=1}^T X^{(1)}_t X^{(2)}_t & a^{-2T}\sum_{t=1}^T (X^{(2)}_t)^2 \end{bmatrix} \label{sample_cov}
\end{align}
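Before the formal argument, a short simulation sketch (our parameters and seed) shows the phenomenon: the scaled sample covariance is nearly rank one, with one $\Theta(1)$ eigenvalue and one exponentially small eigenvalue:
\begin{verbatim}
# With A = aI, a^{-2T} Y_T is nearly rank one: one Theta(1) eigenvalue,
# one of order a^{-2T} T, so the inverse blows up along Z_T-perp.
import numpy as np

rng = np.random.default_rng(6)
a, T = 1.2, 60
X = np.zeros((2, T + 1))
for t in range(T):
    X[:, t + 1] = a * X[:, t] + rng.standard_normal(2)
Y = X[:, 1:] @ X[:, 1:].T              # sum_t X_t X_t'
scaled = a ** (-2 * T) * Y
print(np.linalg.eigvalsh(scaled))      # one Theta(1), one exponentially small
\end{verbatim}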
Define $a^{-T}X_T = Z_T$ with $Z_T^{(i)}$ corresponding to appropriate coordinates, and recall that $Z^{(i)}_T$ is a Gaussian random variable with variance in $(a^{-2}, \frac{a^{-2}}{1-a^{-2}})$ and each $a^{-T} X_t = \langle a^{-T} X_t, Z_T\rangle Z_T + \langle a^{-T} X_t, Z_T^{\perp} \rangle Z_T^{\perp}$. This implies
\begin{align*}
a^{-2T}\sum_{t=1}^T X_t X_t^{\top} &= \sum_{t=1}^T (\underbrace{a^{-T}\langle X_t, Z_T\rangle}_{=\alpha_t})^2 Z_T Z_T^{\top} + \sum_{t=1}^T a^{-2T} \langle X_t, Z_T \rangle \langle X_t, Z_T^{\perp} \rangle Z_T (Z_T^{\perp})^{\top} \\
&+ \sum_{t=1}^T \underbrace{\langle a^{-T}X_t, Z_T\rangle}_{=\alpha_t} \underbrace{\langle a^{-T}X_t, Z_T^{\perp} \rangle}_{=\beta_t} Z_T^{\perp} Z_T^{\top} + \sum_{t=1}^T (\underbrace{a^{-T}\langle X_t, Z_T^{\perp} \rangle}_{=\beta_t})^2 Z_T^{\perp} (Z_T^{\perp})^{\top} \\
&= \underbrace{||\alpha||^2 Z_T Z_T^{\top} + ||\beta||^2 Z_T^{\perp} (Z_T^{\perp})^{\top}}_{=M} + \langle \alpha, \beta \rangle (Z_T^{\perp} Z_T^{\top} + Z_T (Z_T^{\perp})^{\top}) \\
&= M + \underbrace{\langle \alpha, \beta \rangle [Z_T Z_T^{\perp}]}_{=U} \underbrace{\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}}_{=C}\underbrace{\begin{bmatrix}
Z_T^{\top} \\
(Z_T^{\perp})^{\top}
\end{bmatrix}}_{=V}
\end{align*}
By using Woodbury's matrix identity and since $M^{-1} = ||\alpha||^{-2} Z_T Z_T^{\top} + ||\beta||^{-2} Z_T^{\perp} (Z_T^{\perp})^{\top}, C=C^{-1}$ we get
\begin{align*}
(a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} &= M^{-1} - \langle \alpha, \beta \rangle M^{-1} U(C + \langle \alpha, \beta \rangle U^{\top} M^{-1} U)^{-1} U^{\top} M^{-1} \\
&= M^{-1} - \langle \alpha, \beta \rangle [||\alpha||^{-2}Z_T \hspace{2mm} ||\beta||^{-2}Z_T^{\perp}] \Big(\begin{bmatrix}
\langle \alpha, \beta \rangle ||\alpha||^{-2} & 1 \\
1 & ||\beta||^{-2} \langle \alpha, \beta \rangle
\end{bmatrix} \Big)^{-1}\begin{bmatrix}
||\alpha||^{-2}Z_T^{\top} \\
||\beta||^{-2}(Z_T^{\perp})^{\top}
\end{bmatrix}
\end{align*}
Then the error term is
\begin{align*}
\hat{A}_o - A_o &= \Big(\sum_{t=1}^{T}a^{-2T} \eta_{t+1} X_t^{\prime}\Big) (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} \\
&= \Big(\sum_{t=1}^T \langle a^{-T} X_t, Z_T \rangle a^{-T}\eta_{t+1} Z_T^{\prime} + \sum_{t=1}^T \langle a^{-T} X_t, Z_T^{\perp} \rangle a^{-T}\eta_{t+1} (Z_T^{\perp})^{\prime}\Big) (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1}
\end{align*}
We now check the projection of $Z_T, Z_T^{\perp}$ on $(a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1}$
\begin{align}
Z_T^{\top} (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} &= ||\alpha||^{-2} Z_T^{\top} - \langle \alpha, \beta \rangle [||\alpha||^{-2} \hspace{2mm} 0] \Big(\begin{bmatrix}
\langle \alpha, \beta \rangle ||\alpha||^{-2} & 1 \\
1 & \langle \alpha, \beta \rangle ||\beta||^{-2}
\end{bmatrix} \Big)^{-1}\begin{bmatrix}
||\alpha||^{-2}Z_T^{\top} \\
||\beta||^{-2}(Z_T^{\perp})^{\top}
\end{bmatrix} \nonumber \\
&= \frac{-||\alpha||^{-2}Z_T^{\top} + \langle \alpha, \beta \rangle ||\alpha||^{-2}||\beta||^{-2}(Z_T^{\perp})^{\top}}{\langle \alpha, \beta \rangle^2 ||\alpha||^{-2} ||\beta||^{-2} - 1} \label{zt_proj}\\
(Z_T^{\perp})^{\top} (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} &= ||\beta||^{-2} (Z_T^{\perp})^{\top} - \langle \alpha, \beta \rangle[0 \hspace{2mm} ||\beta||^{-2}] \Big(\begin{bmatrix}
\langle \alpha, \beta \rangle ||\alpha||^{-2} & 1 \\
1 & \langle \alpha, \beta \rangle ||\beta||^{-2}
\end{bmatrix} \Big)^{-1}\begin{bmatrix}
||\alpha||^{-2}Z_T^{\top} \\
||\beta||^{-2}(Z_T^{\perp})^{\top}
\end{bmatrix} \nonumber\\
&= \frac{-||\beta||^{-2}(Z_T^{\perp})^{\top} + \langle \alpha, \beta \rangle ||\alpha||^{-2}||\beta||^{-2}Z_T^{\top}}{\langle \alpha, \beta \rangle^2 ||\alpha||^{-2} ||\beta||^{-2} - 1} \label{ztp_proj}
\end{align}
We will show that with high probability $||\alpha||^{-2} = \Theta(1), ||\beta||^{-2} = \Omega(a^{2T}), \langle \alpha, \beta \rangle = O(a^{-T})$ as a result Eq.~\eqref{zt_proj} is $\Omega(a^{T})$ and Eq.~\eqref{ztp_proj} is $\Omega(a^{2T})$. Note that $Z_T^{\perp} = \begin{bmatrix}
Z^{(2)}_T \\
-Z^{(1)}_T
\end{bmatrix}$ where we have ignored the scaling (as these will be of constant order with high probability). First taking a closer look at $\alpha_t = a^{-T} X^{(1)}_t Z^{(1)}_T + a^{-T} X^{(2)}_t Z^{(2)}_T$ reveals the following behaviour
\begin{align*}
a^{-T} X^{(1)}_{T-1} Z^{(1)}_T &= a^{-1} (Z^{(1)}_T)^2 - a^{-T-1} Z^{(1)}_T \eta^{(1)}_T \\
\alpha_{T-1} &= a^{-1}( (Z^{(1)}_T)^2 +(Z^{(2)}_T)^2) - a^{-T-1} (Z^{(1)}_T \eta^{(1)}_T + Z^{(2)}_T \eta^{(2)}_T) \\
a^{-T} X^{(1)}_{T-2} Z^{(1)}_T &= a^{-2} (Z^{(1)}_T)^2 - a^{-T-1} Z^{(1)}_T \eta^{(1)}_{T-1} - a^{-T-2} Z^{(1)}_T \eta^{(1)}_T \\
\alpha_{T-2} &= a^{-2}( (Z^{(1)}_T)^2 +(Z^{(2)}_T)^2) - a^{-T-1} (Z^{(1)}_T \eta^{(1)}_{T-1} + Z^{(2)}_T \eta^{(2)}_{T-1}) - a^{-T-2} (Z^{(1)}_T \eta^{(1)}_T + Z^{(2)}_T \eta^{(2)}_T)
\end{align*}
Since $Z^{(1)}_T$ is a Gaussian random variable with bounded variance, we see that $\alpha_t$ decays exponentially as $t$ decreases (up to some $a^{-T}$ additive terms). In a similar fashion one can show that $\sum_{t=1}^{T} \alpha_t^2 = \frac{1-a^{-2T}}{1-a^{-2}}((Z^{(1)}_T)^2 +(Z^{(2)}_T)^2)^2 + O(T^2a^{-T})$ with high probability. Clearly $||\alpha||^{-2} = \Theta(1)$ with high probability. For $\beta$, note that $Z^{(2)}_T$ is independent of $X^{(1)}_t$, and observe that $\{a^T \beta_t \}_{t=1}^{T-1}$ are non--decaying and non--trivial random variables. Specifically, these are subexponential random variables with $||a^{T} \beta_t ||_{\psi_1} = Ca^{-1}$, where $||\cdot||_{\psi_1}$ is as in Definition 2.7.5 of~\cite{vershynin2018high}. To see this consider for example $t=T-1, T-2$; then
\begin{align}
a^{T}\beta_{T-1} &= \langle X_{T-1}, Z_T^{\perp} \rangle = X^{(1)}_{T-1} Z^{(2)}_{T} - X^{(2)}_{T-1} Z^{(1)}_{T} = a^{-1}( \eta^{(2)}_T Z^{(1)}_T -\eta^{(1)}_T Z^{(2)}_T) \nonumber\\
a^{T} \beta_{T-2} &= \langle X_{T-2}, Z_T^{\perp} \rangle = X^{(1)}_{T-2} Z^{(2)}_{T} - X^{(2)}_{T-2} Z^{(1)}_{T} = a^{-1}(( \eta^{(2)}_{T-1} + a^{-1} \eta^{(2)}_{T})Z^{(1)}_T - ( \eta^{(1)}_{T-1} + a^{-1} \eta^{(1)}_{T})Z^{(2)}_T ) \label{scale_beta}
\end{align}
Clearly, $a^{2T}|| \beta||_2^2 = \Omega(1)$ and $a^{2T}|| \beta||_2^2 = O(T)$ with high probability. Recall the error term
\begin{align}
\hat{A}_o - A_o &= \Big(\sum_{t=1}^{T}a^{-2T} \eta_{t+1} X_t^{\prime}\Big) (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} \nonumber\\
&= \Big(\sum_{t=1}^T \langle a^{-T} X_t, Z_T \rangle a^{-T}\eta_{t+1} Z_T^{\prime} + \sum_{t=1}^T \langle a^{-T} X_t, Z_T^{\perp} \rangle a^{-T}\eta_{t+1} (Z_T^{\perp})^{\prime}\Big) (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} \nonumber\\
(\hat{A}_o - A_o)Z_T^{\perp} &= (\sum_{t=1}^T \langle a^{-T} X_t, Z_T \rangle a^{-T}\eta_{t+1} Z_T^{\prime}) (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} Z_T^{\perp} \nonumber \\
&+ (\sum_{t=1}^T \langle a^{-T} X_t, Z_T^{\perp} \rangle a^{-T}\eta_{t+1} (Z_T^{\perp})^{\prime}) (a^{-2T}\sum_{t=1}^T X_t X_t^{\top})^{-1} Z_T^{\perp} \nonumber \\
&= \frac{\langle \alpha, \beta \rangle ||\alpha||^{-2}||\beta||^{-2}}{\langle \alpha, \beta \rangle^2 ||\alpha||^{-2} ||\beta||^{-2} - 1}\sum_{t=1}^T \langle a^{-T} X_t, Z_T \rangle a^{-T}\eta_{t+1} - \frac{-||\beta||^{-2}}{\langle \alpha, \beta \rangle^2 ||\alpha||^{-2} ||\beta||^{-2} - 1}\sum_{t=1}^T \langle a^{-T} X_t, Z_T^{\perp} \rangle a^{-T}\eta_{t+1} \nonumber \\
&= \frac{||\alpha||^{-2}||a^{T}\beta||^{-2}}{\langle \alpha, a^T \beta \rangle^2 ||\alpha||^{-2} ||a^T \beta||^{-2} - 1} \Big(\sum_{t=1}^T (\langle \alpha, a^T\beta \rangle \alpha_t - a^T \beta_t ||\alpha||^2) \eta_{t+1} \Big) = \gamma_T\label{error_dist}
\end{align}
Observe the term $a^T\beta_t||\alpha||^2 \eta_{t+1}$
\begin{align*}
&a^T\beta_t||\alpha||^2 \eta_{t+1} = ||\alpha||^2\begin{bmatrix}
(a^{-1} (\eta^{(2)}_{t+1} Z^{(1)}_T - \eta^{(1)}_{t+1} Z^{(2)}_T) + a^{-2}(\eta^{(2)}_{t+2} Z^{(1)}_T - \eta^{(1)}_{t+2} Z^{(2)}_T) + \hdots) \eta^{(1)}_{t+1} \\
(a^{-1} (\eta^{(2)}_{t+1} Z^{(1)}_T - \eta^{(1)}_{t+1} Z^{(2)}_T) + a^{-2}(\eta^{(2)}_{t+2} Z^{(1)}_T - \eta^{(1)}_{t+2} Z^{(2)}_T) + \hdots) \eta^{(2)}_{t+1}
\end{bmatrix} \\
&= ||\alpha||^2\begin{bmatrix}
a^{-1} (\eta^{(2)}_{t+1}\eta^{(1)}_{t+1} Z^{(1)}_T - (\eta^{(1)}_{t+1})^2 Z^{(2)}_T) + (a^{-2}(\eta^{(2)}_{t+2} Z^{(1)}_T - \eta^{(1)}_{t+2} Z^{(2)}_T) + \hdots) \eta^{(1)}_{t+1} \\
a^{-1} ((\eta^{(2)}_{t+1})^2 Z^{(1)}_T - \eta^{(2)}_{t+1}\eta^{(1)}_{t+1}Z^{(2)}_T) + (a^{-2}(\eta^{(2)}_{t+2} Z^{(1)}_T - \eta^{(1)}_{t+2} Z^{(2)}_T) + \hdots) \eta^{(2)}_{t+1}
\end{bmatrix}\\
&\sum_{t=1}^T a^T\beta_t||\alpha||^2 \eta_{t+1} = a^{-1}||\alpha||^2 \Big(\underbrace{\begin{bmatrix}
-\sum_{t=1}^T (\eta^{(1)}_{t+1})^2 Z^{(2)}_T \\
\sum_{t=1}^T (\eta^{(2)}_{t+1})^2 Z^{(1)}_T
\end{bmatrix}}_{=\Theta(T)} + \sum_{t=1}^T \begin{bmatrix}
\eta^{(2)}_{t+1}\eta^{(1)}_{t+1} Z^{(1)}_T + (a^{-1}(\eta^{(2)}_{t+2} Z^{(1)}_T - \eta^{(1)}_{t+2} Z^{(2)}_T) + \hdots) \eta^{(1)}_{t+1} \\
\eta^{(2)}_{t+1}\eta^{(1)}_{t+1}Z^{(2)}_T + (a^{-1}(\eta^{(2)}_{t+2} Z^{(1)}_T - \eta^{(1)}_{t+2} Z^{(2)}_T) + \hdots) \eta^{(2)}_{t+1}
\end{bmatrix}\Big) \\
&= a^{-1}||\alpha||^2 \Big(\Theta(T) \\
&+\underbrace{\sum_{t=1}^T \begin{bmatrix}
\eta^{(2)}_t \eta^{(1)}_t Z^{(1)}_T + a^{-1} \eta^{(2)}_{t+1} \eta^{(1)}_t Z^{(1)}_T + a^{-2} \eta^{(2)}_{t+2} \eta^{(1)}_t Z^{(1)}_T + \ldots - a^{-1} \eta^{(1)}_{t+1} \eta^{(1)}_t Z^{(2)}_T - a^{-2} \eta^{(1)}_{t+2} \eta^{(1)}_t Z^{(2)}_T - \ldots\\
\eta^{(2)}_t \eta^{(1)}_t Z^{(2)}_T + a^{-1} \eta^{(2)}_{t+1} \eta^{(2)}_t Z^{(1)}_T + a^{-2} \eta^{(2)}_{t+2} \eta^{(2)}_t Z^{(1)}_T + \ldots - a^{-1} \eta^{(1)}_{t+1} \eta^{(2)}_t Z^{(2)}_T - a^{-2} \eta^{(1)}_{t+2} \eta^{(2)}_t Z^{(1)}_T - \ldots
\end{bmatrix}}_{=O(\sqrt{T} \log{\frac{T}{\delta}})}\Big)
\end{align*}
The $O(\sqrt{T} \log{\frac{T}{\delta}})$ bound follows by applying the Hanson--Wright inequality to each of the $a^{-j}\sum_{t=1}^T \eta^{(2)}_{t+j} \eta^{(1)}_t$ terms: with probability at least $1-\delta/T$ we have $a^{-j}\sum_{t=1}^T \eta^{(2)}_{t+j} \eta^{(1)}_t \leq c a^{-j} O(\sqrt{T} \log{\frac{T}{\delta}})$. Therefore, by a union bound, simultaneously for all $j \leq T$ we have with probability at least $1 -\delta$ that $a^{-j}\sum_{t=1}^T \eta^{(2)}_{t+j} \eta^{(1)}_t \leq c a^{-j} O(\sqrt{T}\log{\frac{T}{\delta}})$, and summing the geometric series ($\sum_{j \geq 1} a^{-j} = \frac{1}{a-1} = O(1)$ since $a > 1$) gives $\sum_{j=1}^T a^{-j}\sum_{t=1}^T \eta^{(2)}_{t+j} \eta^{(1)}_t \leq O(\sqrt{T}\log{\frac{T}{\delta}})$. Plugging this into Eq.~\eqref{error_dist} we get
\begin{align*}
\gamma_T &= \frac{||\alpha||^{-2}||a^{T}\beta||^{-2}}{\langle \alpha, a^T \beta \rangle^2 ||\alpha||^{-2} ||a^T \beta||^{-2} - 1} \Big(\underbrace{\sum_{t=1}^T \langle \alpha, a^T\beta \rangle \alpha_t \eta_{t+1}}_{=O(\sqrt{T})} - \underbrace{\sum_{t=1}^T a^T \beta_t ||\alpha||^2 \eta_{t+1}}_{=\Theta(T)} \Big)
\end{align*}
Clearly then $\gamma_T$ in Eq.~\eqref{error_dist} has a non--trivial pdf, \textit{i.e.}, the error does not decay to zero.
Another interesting observation is that $\sum_{t=1}^T a^{-2T} \eta_{t+1} X_t^{\top}$ decays as $O(a^{-T})$ with high probability, yet the error is a non--decaying random variable. This immediately gives us the following
\begin{prop}
\label{condition_number}
The sample covariance matrix $\sum_{t=1}^T X_t X_t^{\top}$ has the following singular values
\[
\sigma_1(\sum_{t=1}^T X_t X_t^{\top}) = \Theta(a^{2T}), \sigma_2(\sum_{t=1}^T X_t X_t^{\top}) = O(\sqrt{T}a^{T})
\]
\end{prop}
\begin{proof}
The largest singular value satisfies $\sigma_1(\sum_{t=1}^T X_t X_t^{\top}) = \Theta(a^{2T})$; this follows because $$||\sum_{t=1}^T a^{-2T} X_t X_t^{\top} - \frac{1-a^{-2T}}{1-a^{-2}} Z_T Z_T^{\top}||_2 \leq O(a^{-T})$$ with high probability, which follows from the claims of Eq.~\eqref{zt_form}, \eqref{ut_ft} in Theorem~\ref{main_result} and the discussion in Section~\ref{explosive}. The second claim follows because $\sum_{t=1}^T a^{-2T} \eta_{t+1} X_t^{\top}$ decays as $O(\sqrt{T}\,a^{-T})$ with high probability. To see this, $$||\sum_{t=1}^T a^{-2T} \eta_{t+1} X_t^{\top}||_2 \leq a^{-T} \sqrt{\sum_{t=1}^T \eta_t^{\prime} \eta_t}\sqrt{\sum_{t=1}^T a^{-2T}X_{t}^{\prime} X_t } \approx \sqrt{T} a^{-T}.$$
The $\sqrt{T}$ factor can be removed by arguments similar to those above. Since the identification error is a non--degenerate random variable, it follows that $\sigma_2(\sum_{t=1}^T a^{-2T} X_t X_t^{\top}) = O(\sqrt{T} a^{-T})$, \textit{i.e.}, $\sigma_2(\sum_{t=1}^T X_t X_t^{\top}) = O(\sqrt{T} a^{T})$.
\end{proof}
\section{Appendix}
\label{appendix_matrix}
\begin{prop}
\label{psd_result_2}
Let $P$ be a psd matrix and $V$ a pd matrix, and define $\bar{P} = P + V$. Suppose there exists a matrix $Q$ for which we have the following relation
\[
||\bar{P}^{-1/2} Q|| \leq \gamma
\]
For any vector $v$ such that $v^{\prime} P v = \alpha, v^{\prime} V v =\beta$ it is true that
\[
||v^{\prime}Q|| \leq \sqrt{\beta+\alpha} \gamma
\]
\end{prop}
\begin{proof}
Since
\[
||\bar{P}^{-1/2} Q||_2^2 \leq \gamma^2
\]
for any vector $v \in \mathcal{S}^{d-1}$ we will have
\[
\frac{v^{\prime} \bar{P}^{1/2}\bar{P}^{-1/2} Q Q^{\prime}\bar{P}^{-1/2}\bar{P}^{1/2} v}{v^{\prime} \bar{P} v} \leq \gamma^2
\]
and substituting $v^{\prime} \bar{P} v = \alpha + \beta$ gives us
\begin{align*}
{v^{\prime} Q Q^{\prime} v} &\leq \gamma^2{v^{\prime} \bar{P} v} \\
&= (\alpha + \beta) \gamma^2
\end{align*}
\end{proof}
\begin{prop}
\label{inv_jordan}
Consider a Jordan block matrix $J_d(\lambda)$ given by \eqref{jordan}, then $J_d(\lambda)^{-k}$ is a matrix where each off--diagonal (and the diagonal) has the same entries, \textit{i.e.},
\begin{equation}
J_d(\lambda)^{-k} =\begin{bmatrix}
a_1 & a_2 & a_3 & \hdots & a_d \\
0 & a_1 & a_2 & \hdots & a_{d-1} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & a_1 & a_2 \\
0 & 0 & \hdots & 0 & a_1
\end{bmatrix}_{d \times d}
\end{equation}
for some $\{a_i\}_{i=1}^d$.
\end{prop}
\begin{proof}
$J_d(\lambda) = (\lambda I + N)$ where $N$ is the matrix with all ones on the $1^{st}$ (upper) off-diagonal. $N^k$ is just all ones on the $k^{th}$ (upper) off-diagonal and $N$ is a nilpotent matrix with $N^d = 0$. Then
\begin{align*}
(\lambda I + N)^{-1} &= \sum_{l=0}^{d-1} (-1)^{l}\lambda^{-l-1}N^{l} \\
(-1)^{k-1}(k-1)!(\lambda I + N)^{-k} &= \sum_{l=0}^{d-1} (-1)^{l}\frac{\mathrm{d}^{k-1}\lambda^{-l-1}}{\mathrm{d} \lambda^{k-1}}N^{l} \\
&= \Big(\sum_{l=0}^{d-1} (-1)^{l}c_{l, k}N^{l}\Big)
\end{align*}
and the proof follows in a straightforward fashion.
\end{proof}
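As a quick illustration (a special case worked out for concreteness), take $d = 2$ and $k = 1$:
\[
J_2(\lambda)^{-1} = \begin{bmatrix}
\lambda & 1 \\
0 & \lambda
\end{bmatrix}^{-1} = \begin{bmatrix}
\lambda^{-1} & -\lambda^{-2} \\
0 & \lambda^{-1}
\end{bmatrix},
\]
so $a_1 = \lambda^{-1}$ and $a_2 = -\lambda^{-2}$, matching the claimed Toeplitz structure.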
\begin{prop}
\label{reg_invertible}
Let $A$ be a regular matrix and $A = P^{-1} \Lambda P$ be its Jordan decomposition. Then
\[
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 > 0
\]
Further $\phi_{\min}(A) > 0$ where $\phi_{\min}(\cdot)$ is defined in Definition~\ref{outbox}.
\end{prop}
\begin{proof}
When $A$ is regular, the geometric multiplicity of each eigenvalue is $1$. This implies that $A^{-1}$ is also regular. Regularity of a matrix $A$ is equivalent to the case when minimal polynomial of $A$ equals characteristic polynomial of $A$ (See Section~\ref{lemmab} in appendix), \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i A^{-i+1}||_2 &> 0
\end{align*}
Since $A^{-j} = P^{-1} \Lambda^{-j} P$ we have
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}P||_2 &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}||_2 \sigma_{\min}(P) &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 \sigma_{\min}(P) \sigma_{\min}(P^{-1}) &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 &>0
\end{align*}
Since $\Lambda$ is Jordan matrix of the Jordan decomposition, it is of the following form
\begin{equation}
\Lambda =\begin{bmatrix}
J_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
where $J_{k_i}(\lambda_i)$ is a $k_i \times k_i$ Jordan block corresponding to eigenvalue $\lambda_i$. Then
\begin{equation}
\Lambda^{-k} =\begin{bmatrix}
J^{-k}_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J^{-k}_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J^{-k}_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J^{-k}_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
Since $||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 >0$, without loss of generality assume that there is a non--zero element in $k_1 \times k_1$ block. This implies
\begin{align*}
||\underbrace{\sum_{i=1}^d a_i J_{k_1}^{-i+1}(\lambda_1)}_{=S}||_2 > 0
\end{align*}
By Proposition~\ref{inv_jordan} we know that each off--diagonal (including the diagonal) of $S$ has identical entries. Let $j_0 = \inf{\{j \,|\, S_{ij} \neq 0 \text{ for some } i\}}$, and in column $j_0$ pick the non--zero element with the highest row number, $i_0$. By design $S_{i_0, j_0} \neq 0$ and further
$$S_{k_1 -(j_0 - i_0), k_1} = S_{i_0, j_0}$$
because they are part of the same off--diagonal (or diagonal) of $S$. Thus the row $k_1 - (j_0 - i_0)$ has only one non--zero element because of the minimality of $j_0$.
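As a concrete illustration of this argument (a hypothetical $3 \times 3$ instance), suppose $k_1 = 3$ and
\[
S = \begin{bmatrix}
0 & s & \ast \\
0 & 0 & s \\
0 & 0 & 0
\end{bmatrix}, \qquad s \neq 0.
\]
Then $j_0 = 2$ and $i_0 = 1$, so $k_1 - (j_0 - i_0) = 2$, and row $2$ indeed contains the single non--zero entry $S_{2,3} = s$.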
We proved that for any $||a||=1$ there exists a row with only one non--zero element in the matrix $\sum_{i=1}^d a_i \Lambda^{-i+1}$. This implies that if $v$ is a vector with all non--zero elements, then $||\sum_{i=1}^d a_i \Lambda^{-i+1} v||_2 > 0$, \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1} v ||_2 &> 0
\end{align*}
This implies
\begin{align*}
\inf_{||a||_2 = 1}||[v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v] a||_2 &> 0\\
\sigma_{\min}([v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v]) &> 0 \\
\end{align*}
By Definition~\ref{outbox} we have
\begin{align*}
\phi_{\min}(A) &> 0
\end{align*}
\end{proof}
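To see the mechanism in the simplest setting (a diagonalizable special case, included for intuition): if $\Lambda = \text{diag}(\lambda_1, \ldots, \lambda_d)$ with distinct eigenvalues, then
\[
[v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v] = \text{diag}(v_1, \ldots, v_d)\, V,
\]
where $V$ is the Vandermonde matrix of $\lambda_1^{-1}, \ldots, \lambda_d^{-1}$; this is invertible precisely when all $v_i \neq 0$ and the $\lambda_i^{-1}$ are distinct, which is where both regularity and the all--non--zero assumption on $v$ enter.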
\begin{prop}[Corollary 2.2 in~\cite{ipsen2011determinant}]
\label{det_lb}
For any positive definite matrix $M$ with diagonal entries $m_{jj}$, $1 \leq j \leq d$ and $\rho$ is the spectral radius of the matrix $C$ with elements
\begin{align*}
c_{ij} &= 0 \hspace{3mm} \text{if } i=j \\
&=\frac{m_{ij}}{\sqrt{m_{ii}m_{jj}}} \hspace{3mm} \text{if } i\neq j
\end{align*}
then
\begin{align*}
0 < \frac{\prod_{j=1}^d m_{jj} - \text{det}(M)}{\prod_{j=1}^d m_{jj}} \leq 1 - e^{-\frac{d \rho^2}{1+\lambda_{\min}}}
\end{align*}
where $\lambda_{\min} = \min_{1 \leq j \leq d} \lambda_j(C)$.
\end{prop}
\begin{prop}
\label{gramian_lb}
Let $1 - C/T \leq \rho_i(A) \leq 1 + C/T$ and $A$ be a $d \times d$ matrix. Then there exists $\alpha(d)$ depending only on $d$ such that for every $8 d \leq t \leq T$
\[
\sigma_{\min}(\Gamma_t(A)) \geq t \alpha(d)
\]
\end{prop}
\begin{proof}
Let $A = P^{-1} \Lambda P$, where $\Lambda$ is the Jordan matrix. Since $\Lambda$ can be complex, we use the adjoint instead of the transpose. This gives
\begin{align*}
\Gamma_T(A) &= I + \sum_{t=1}^T A^t (A^{t})^{\prime} \\
&= I + P^{-1}\sum_{t=1}^T \Lambda^tPP^{\prime} (\Lambda^t)^{*} P^{-1 \prime} \\
&\succeq I + \sigma_{\min}(P)^2P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{*} P^{-1 \prime}
\end{align*}
Then this implies that
\begin{align*}
\sigma_{\min}( \Gamma_T(A)) &\geq 1 +\sigma_{\min}(P)^2 \sigma_{\min}(P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} P^{-1 \prime}) \\
&\geq 1 + \sigma_{\min}(P)^2 \sigma_{\min}(P^{-1})^2\sigma_{\min}(\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} ) \\
&\geq 1 + \frac{\sigma_{\min}(P)^2}{\sigma_{\max}(P)^2}\sigma_{\min}(\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} )
\end{align*}
Now
\begin{align*}
\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} &= \begin{bmatrix}
\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*} & 0 & \hdots & 0 \\
0 & \sum_{t=1}^T J^{t}_{k_2}(\lambda_2)(J^{t}_{k_2}(\lambda_2))^{*} & 0 & \hdots \\
\vdots & \vdots & \ddots & \ddots \\
0 & \hdots & 0 & \sum_{t=1}^T J^{t}_{k_{l}}(\lambda_l) (J^{t}_{k_l}(\lambda_l))^{*}
\end{bmatrix}
\end{align*}
Since $\Lambda$ is block diagonal we only need to worry about the least singular value corresponding to some block. Let this block be the one corresponding to $J_{k_1}(\lambda_1)$, \textit{i.e.},
\begin{equation}
\sigma_{\min}(\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} ) =\sigma_{\min}(\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}) \label{bnd_1}
\end{equation}
Define $B = \sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}$. Note that $J_{k_1}(\lambda_1) = (\lambda_1 I + N)$ where $N$ is the nilpotent matrix that is all ones on the first off--diagonal and $N^{k_1} = 0$. Then
\begin{align*}
(\lambda_1 I + N)^t &= \sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j} \\
(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} &= \Big(\sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j}\Big)\Big(\sum_{j=0}^t {t \choose j} (\lambda_1^{*})^{t-j}N^{j \prime}\Big) \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j \neq k}^{j=t, k=t} {t \choose k}{t \choose j} \lambda_1^j (\lambda_1^{*})^{k} N^j (N^k)^{\prime} \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \lambda_1^j (\lambda_1^{*})^{k} N^j (N^k)^{\prime} \\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \lambda_1^j (\lambda_1^{*})^{k} N^j (N^k)^{\prime} \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2k} \lambda_1^{j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On $(j-k)$ upper off--diagonal}} \\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2j} (\lambda_1^{*})^{k-j} N^j(N^{j})^{\prime} (N^{j-k})^{\prime}}_{\text{On $(k-j)$ lower off--diagonal}}
\end{align*}
Let $\lambda_1 = r e^{i\theta}$, then similar to~\cite{erxiong1994691}, there is $D = \text{Diag}(1, e^{-i\theta}, e^{-2i\theta}, \ldots, e^{-i(k_1-1)\theta})$ such that $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}$ is a real matrix. Observe that any term on $(j-k)$ upper off--diagonal of $(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}$ is of the form $r_0 e^{i(j-k)\theta}$. In the product $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}$ any term on the $(j-k)$ upper off diagonal term now looks like $e^{-ij\theta + ik\theta} r_0 e^{i(j-k)\theta} = r_0$, which is real. Then we have
\begin{align}
D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} &= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2k} |\lambda_1|^{j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On $(j-k)$ upper off--diagonal}} \nonumber\\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2j} |\lambda_1|^{k-j} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On $(k-j)$ lower off--diagonal}} \label{real}
\end{align}
Since $D$ is unitary and $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} =(|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime} $, we can simply work with the case when $\lambda_1 > 0$ and real, as the singular values remain invariant under unitary transformations. We now track the growth of the entries of this product. Define $B=\sum_{t=1}^T (|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}$. Then
\begin{align}
B_{ll} &=\sum_{t=1}^T [(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}]_{ll} \\
&= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \label{bll}
\end{align}
Since $1-C/T \leq |\lambda_1| \leq 1+C/T$, for every $t \leq T$ (and $T \geq 2C$) we have
$$e^{-2C} \leq |\lambda_1|^t \leq e^{C}$$
using $1+x \leq e^{x}$ and $1-x \geq e^{-2x}$ for $x \in [0, 1/2]$; the precise constants in the exponents are immaterial in what follows.
Then
\begin{align}
B_{ll} &= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
&\geq e^{-2C} \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 \nonumber\\
& \geq e^{-2C} \sum_{t=T/2}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 \geq e^{-2C} \sum_{t=T/2}^T c_{k_1} \frac{t^{2k_1-2l+2} - 1}{t^2 - 1} \geq C(k_1) T^{2k_1 - 2l+1} \label{lb}
\end{align}
An upper bound can be achieved in an equivalent fashion.
\begin{align}
B_{ll} &= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
& \leq e^{2C} T \sum_{j=0}^{k_1-l} T^{2j} \leq C(k_1) T^{2k_1 - 2l + 1} \label{ub1}
\end{align}
Similarly, for any $B_{k,k+l} $ we have
\begin{align}
B_{k, k+l} &=\sum_{t=1}^T \sum_{j=0}^{k_1-k - l} {t \choose j}{t \choose j+l} |\lambda_1|^{2j} |\lambda_1|^{l} \\
&\geq e^{-2C} \sum_{t=T/2}^T \sum_{j=0}^{k_1-k - l} {t \choose j}{t \choose j+l} \\
&\geq e^{-2C} \frac{T}{2}\sum_{j=0}^{k_1-k - l} {T/2 \choose j}{T/2 \choose j+l} \\
&\geq C(k_1) T^{2k_1 - 2k -l +1}
\end{align}
and by a similar argument as before we get $B_{jk} = C(k_1)T^{2k_1-j-k +1}$. For brevity we use the same $C(k_1)$ to indicate different functions of $k_1$ as we are interested only in the growth with respect to $T$. To summarize
\begin{align}
B_{jk} &= C(k_1)T^{2k_1 - j - k +1} \label{jordan_value}
\end{align}
whenever $T \geq 8d$. Recall Proposition~\ref{det_lb}, and let the $M$ there equal $B$. Then, since
\[
C_{jk} = \frac{B_{jk}}{\sqrt{B_{jj} B_{kk}}} = C(k_1)\frac{T^{2k_1 - j -k +1}}{\sqrt{T^{4k_1 - 2j - 2k + 2}}}
\]
it turns out that $C_{jk}$ is independent of $T$, and consequently $\lambda_{\min}(C)$ and $\rho$ are independent of $T$ and depend only on $k_1$, the Jordan block size. Then $\prod_{j=1}^{k_1} B_{jj} \geq \text{det}(B) \geq \prod_{j=1}^{k_1} B_{jj} e^{-\frac{d\rho^2}{1 + \lambda_{\min}}} = C(k_1) \prod_{j=1}^{k_1} B_{jj}$. This means that $\text{det}(B) = C(k_1) \prod_{j=1}^{k_1} B_{jj}$ for some function $C(k_1)$ depending only on $k_1$. Further, using the values for $B_{jj}$ we get
\begin{equation}
\label{det}
\text{det}(B) = C(k_1) \prod_{j=1}^{k_1} B_{jj} = \prod_{j=1}^{k_1} C(k_1) T^{2k_1 - 2j +1} = C(k_1) T^{k_1^2}
\end{equation}
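(To check the exponent: $\sum_{j=1}^{k_1} (2k_1 - 2j + 1) = k_1^2$; for instance, for $k_1 = 3$ the diagonal entries grow as $T^5, T^3, T$, and $5 + 3 + 1 = 9 = 3^2$.)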
Next we use the Schur--Horn theorem: let $\sigma_i(B)$ be the ordered singular values of $B$, with $\sigma_i(B) \geq \sigma_{i+1}(B)$. Then $\{\sigma_i(B)\}$ majorizes the diagonal of $B$, \textit{i.e.}, for any $k \leq k_1$
\[
\sum_{i=1}^k \sigma_i(B) \geq \sum_{i=1}^{k} B_{ii}
\]
Observe that $B_{ii} \geq B_{jj}$ when $i \leq j$. Then from Eq.~\eqref{jordan_value} it follows that
\begin{align*}
B_{k_1 k_1}=C_1(k_1)T &\geq \sigma_{k_1}(B) \\
\sum_{j=k_1-1}^{k_1} B_{jj} &= C_{2}(k_1)T^{3} + C_1(k_1)T \geq \sigma_{k_1 - 1}(B) + \sigma_{k_1}(B)
\end{align*}
Since $k_1 \geq 1$ it can be checked that for $T \geq T_1 =2k_1\sqrt{\frac{C_1(k_1)}{C_2(k_1)}}$ we have $\sigma_{k_1-1}(B) \leq {(1+(2k_1)^{-2})C_2(k_1)T^3} \leq {(1+k_1^{-1})C_2(k_1)T^3}$, since for every $T \geq T_1$ we have $C_2(k_1)T^3 \geq 4k_1^2C_1(k_1)T$. Again, to upper bound $\sigma_{k_1-2}(B)$ we use a similar argument:
\begin{align*}
\sum_{j=k_1-2}^{k_1} B_{jj} &= C_3(k_1)T^{5} + C_2(k_1)T^{3} + C_1(k_1)T \geq \sigma_{k_1-2}(B) +\sigma_{k_1-1}(B) + \sigma_{k_1}(B)
\end{align*}
and show that whenever
\[
T \geq \max{\Big(T_1, 2k_1\sqrt{\frac{C_2(k_1)}{C_3(k_1)}}\Big)}
\]
we get $\sigma_{k_1-2}(B) \leq (1+(2k_1)^{-2} + (2k_1)^{-4}){C_3(k_1)T^5} \leq (1+k_1^{-1}){C_3(k_1)T^5}$, because $T \geq T_1$ ensures $C_2(k_1)T^3 \geq 4k_1^2C_1(k_1)T$ and $T \geq T_2 = 2k_1\sqrt{\frac{C_2(k_1)}{C_3(k_1)}}$ ensures $C_3(k_1)T^5 \geq 4k_1^2 C_2(k_1)T^3$. The precise $C_i(k_1)$ are not important; the goal is to show that for sufficiently large $T$ each singular value is upper bounded by (roughly) the corresponding diagonal element. Similarly, we can ensure for every $i$ that $\sigma_i(B) \leq (1+k_1^{-1})C_{k_1 -i+1}(k_1)T^{2k_1 - 2i + 1}$, whenever
\[
T > T_{i} = \max{\Big(T_{i-1}, 2k_1\sqrt{\frac{C_{i}(k_1)}{C_{i+1}(k_1)}}\Big)}
\]
Recall Eq.~\eqref{det}, where $\text{det}(B) = C(k_1) T^{k_1^2}$. Assume that $\sigma_{k_1}(B) < \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}$. Then, whenever $T \geq \max{\Big(8d, \sup_{i}2k_1\sqrt{\frac{C_{i}(k_1)}{C_{i+1}(k_1)}}\Big)}$,
\begin{align*}
\text{det}(B) &= C(k_1) T^{k_1^2} \\
\prod_{i=1}^{k_1}\sigma_i(B) &= C(k_1) T^{k_1^2} \\
\sigma_{k_1}(B)(1+k_1^{-1})^{k_1-1} T^{k_1^2-1}\prod_{i=2}^{k_1}C_{i+1}(k_1) &\geq C(k_1) T^{k_1^2} \\
\sigma_{k_1}(B) &\geq \frac{C(k_1)T}{(1+k_1^{-1})^{k_1-1}\prod_{i=2}^{k_1}C_{i+1}(k_1)} \\
&\geq \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}
\end{align*}
which is a contradiction. This means that $\sigma_{k_1}(B) \geq \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}$. This implies
\[
\sigma_{\min}(\Gamma_T(A)) \geq 1 + \frac{\sigma_{\min}(P)^2}{\sigma_{\max}(P)^2} C(k_1)T
\]
for some function $C(k_1)$ that depends only on $k_1$.
\end{proof}
It is possible that $\alpha(d)$ is exponentially small in $d$; however, for many cases, such as orthogonal or diagonal matrices, one may take $\alpha(d)=1$, as shown in~\cite{simchowitz2018learning}. We are not interested in finding the best bound $\alpha(d)$; rather, we show that the bound of Proposition~\ref{gramian_lb} exists and assume that such a bound is known.
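For instance, if $A$ is orthogonal then $A^t (A^t)^{\prime} = I$, so $\Gamma_t(A) = (t+1)I$ and $\sigma_{\min}(\Gamma_t(A)) = t + 1 \geq t$; hence $\alpha(d) = 1$ suffices in that case.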
\begin{prop}
\label{gramian_ratio}
Let $t_1/t_2 = \beta > 1$ and $A$ be a $d \times d$ matrix. Then
\[
\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \leq C(d, \beta)
\]
where $C(d, \beta)$ is a polynomial in $\beta$ of degree at most $d^2$ whenever $t_i \geq 8d$.
\end{prop}
\begin{proof}
Since $\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \geq 0$
\begin{align*}
\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) &\leq \text{tr}(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \\
&\leq \text{tr}(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d \sigma_1(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x}
\end{align*}
Now
\begin{align*}
\Gamma_{t_i}(A) &= P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}PP^{\prime}(\Lambda^{t})^{*}P^{-1 \prime} \\
&\preceq \sigma_{\max}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}P^{-1 \prime} \\
\Gamma_{t_i}(A) &\succeq \sigma_{\min}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}P^{-1 \prime}
\end{align*}
Then this implies
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\sigma_{\max}(P)^2}{\sigma_{\min}(P)^2} \sup_{||x|| \neq 0}\frac{x^{\prime} \sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*} x}{x^{\prime}\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*} x}
\]
Then from Lemma 12 in~\cite{abbasi2011improved} we get that
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*} x}{x^{\prime}\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*} x} \leq \frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})}
\]
Then
\begin{align*}
\frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})} &= \frac{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}
\end{align*}
Here $l$ is the number of Jordan blocks of $A$, and the determinant of a block--diagonal matrix is the product of the determinants of its blocks. Our assertion then follows from Eq.~\eqref{det}, which implies that the determinant of $\sum_{t=0}^{t_i} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*}$ equals the product of its diagonal elements (times a factor that depends only on the Jordan block size), \textit{i.e.}, $C(k_i)t_i^{k_i^2}$. As a result the ratio is given by
\[
\frac{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})} = \prod_{i=1}^l \beta^{k_i^2}
\]
whenever $t_2, t_1 \geq 8d$. Summarizing we get
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\sigma_{\max}(P)^2}{\sigma_{\min}(P)^2} \prod_{i=1}^l \beta^{k_i^2}
\]
\end{proof}
\section{Probabilistic Inequalities}
\label{prob_ineq}
\begin{prop}[\cite{vershynin2010introduction}]
\label{eps_net}
Let $M$ be a random matrix. Then we have for any $\epsilon < 1$ and any $w \in \mathcal{S}^{d-1}$ that
\[
\mathbb{P}(||M|| > z) \leq (1 + 2/\epsilon)^d \mathbb{P}(||Mw|| > (1-\epsilon)z)
\]
\end{prop}
The proof of the Proposition can be found, for instance, in \cite{vershynin2010introduction}.
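For instance, the choice $\epsilon = 1/2$ gives $\mathbb{P}(||M|| > z) \leq 5^d\, \mathbb{P}(||Mw|| > z/2)$, which is the form used repeatedly below.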
Proposition~\ref{eps_net} helps us use the tools developed by de la Pe\~{n}a et al. and in~\cite{abbasi2011improved} for self--normalized martingales. We will define $\tilde{S}_t = \sum_{\tau=0}^{t-1} X_{\tau} \tilde{\eta}_{\tau+1}$, where $\tilde{\eta}_t=w^T \eta_t$ is standard normal when $w$ is a unit vector. Specifically, we use Theorem 1 of~\cite{abbasi2011improved}, which we state here for convenience:
\begin{thm}[Theorem 1 in~\cite{abbasi2011improved}]
\label{selfnorm_main}
Let $\{\bm{\mathcal{F}}_t\}_{t=0}^{\infty}$ be a filtration. Let $\{\eta_{t}\}_{t=1}^{\infty}$ be a real valued stochastic process such that $\eta_t$ is $\bm{\mathcal{F}}_t$ measurable and $\eta_t$ is conditionally $R$-sub-Gaussian for some $R > 0$, \textit{i.e.},
\[
\forall \lambda \in \mathbb{R} \hspace{2mm} \mathbb{E}[e^{\lambda \eta_t} | \bm{\mathcal{F}}_{t-1}] \leq e^{\frac{\lambda^2 R^2}{2}}
\]
Let $\{X_t\}_{t=1}^{\infty}$ be an $\mathbb{R}^d$--valued stochastic process such that $X_t$ is $\bm{\mathcal{F}}_{t}$ measurable. Assume that $V$ is a $d \times d$ positive definite matrix. For any $t \geq 0$ define
\[
\bar{V}_t = V + \sum_{s=1}^t X_s X_s^{\prime} \hspace{2mm} S_t = \sum_{s=1}^t \eta_{s+1} X_s
\]
Then for any $\delta > 0$ with probability at least $1-\delta$ for all $t \geq 0$
\[
||{S}_{t}||^2_{\bar{V}^{-1}_{t}} \leq 2 R^2 \log{\Bigg(\dfrac{\text{det}(\bar{V}_{t})^{1/2} \text{det}(V)^{-1/2}}{\delta}\Bigg)}
\]
\end{thm}
\begin{prop}
\label{selfnorm_bnd_proof}
Let $P$ have full row rank and
\[
X_{t+1} = AX_t + P \eta_{t+1}
\]
where $\{\eta_t\}_{t=1}^T$ is an i.i.d. subGaussian process with variance proxy $=1$ and each $\eta_t$ has independent elements.
For any $0 < \delta < 1$, we have with probability $1 - \delta$
\begin{align}
&||(\bar{Y}_{T-1})^{-1/2} \sum_{t=0}^{T-1} X_t \eta_{t+1}^{\prime}P^{\prime}||_2 \nonumber\\
&\leq R\sqrt{8d \log {\Bigg(\dfrac{5 \text{det}(\bar{Y}_{T-1})^{1/2d} \text{det}(V)^{-1/2d}}{\delta^{1/d}}\Bigg)}}
\end{align}
where $\bar{Y}^{-1}_{\tau} = (\sum_{t=1}^{\tau} X_{t}X_t^{\prime}+ V)^{-1}$ for any deterministic $V$ with $V \succ 0$.
\end{prop}
\begin{proof}
Note that $P\eta_t$ is a non--trivial subGaussian vector when $P$ has full row rank.
Define $S_t = \sum_{s=1}^t X_s\eta_{s+1}^{\prime}P^{\prime}$. Using Proposition~\ref{eps_net} and setting $\epsilon=1/2$, we have that
\begin{align}
\mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}||_2 > y) \leq 5^d \mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}w||_2 > \frac{y}{2}) = 5^d\, \mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}w||^2_2 > \frac{y^2}{4})\label{eq1}
\end{align}
Writing $S_{T-1} w = \sum_{s=1}^{T-1}X_s \eta_{s+1}^{\prime}P^{\prime}w$, we observe that $\eta_{s+1}^{\prime}P^{\prime}w$ satisfies the conditions of Theorem~\ref{selfnorm_main} with variance proxy $\sigma_{\max}(P)^2$. Then, substituting in Eq.~\eqref{eq1}
$$y^2 = 8R^2 \log{\Bigg(\dfrac{\text{det}(\bar{Y}_{T-1})^{1/2} \text{det}(V)^{-1/2}}{5^{-d}\delta}\Bigg)}$$
which gives us from Theorem~\ref{selfnorm_main}
\[
\mathbb{P}(||\bar{Y}_{T-1}^{-1/2}S_{T-1}||_2 > y) \leq \delta
\]
\end{proof}
\begin{thm}[Hanson--Wright Inequality]
\label{hanson-wright}
Given a subGaussian vector $X=(X_1, X_2, \ldots, X_n) \in \mathbb{R}^n$ with $\sup_i ||X_i||_{\psi_2}\leq K$ and $X_i$ are independent. Then for any $B \in \mathbb{R}^{n \times n}$ and $t \geq 0$
\begin{align}
&\Pr(|X^{\prime} B X - \mathbb{E}[X^{\prime} B X]| \geq t) \nonumber\\
&\leq 2 \exp\Bigg\{- c \min{\Big(\frac{t}{K^2 ||B||}, \frac{t^2}{K^4 ||B||^2_{HS}}\Big)}\Bigg\}
\end{align}
\end{thm}
\begin{prop}[Theorem 5.39~\cite{vershynin2010introduction}]
\label{noise_energy_bnd}
Let $E$ be a $T \times d$ matrix whose rows $\eta_i^{\prime}$ are independent sub--Gaussian isotropic random vectors in $\mathbb{R}^{d}$ with variance proxy $1$. Then for every $t \geq 0$, with probability at least $1 - 2 e^{-ct^2}$, one has
\begin{equation}
\sqrt{T} - C\sqrt{d} - t \leq \sigma_{\min}(E) \leq \sqrt{T} + C\sqrt{d} + t
\end{equation}
\end{prop}
The implication of Proposition~\ref{noise_energy_bnd} is as follows: $E^{\prime}E \succeq (\sqrt{T} - C\sqrt{d} - t)^2 I$ with probability at least $1 - 2 e^{-ct^2}$. Let $t = \sqrt{\frac{1}{c}\log{\frac{2}{\delta}}}$, and ensure that
\[
T \geq T_{\eta}(\delta) = C \Big( d + \log{\frac{2}{\delta}} \Big)
\]
for some large enough universal constant $C$. Then for $T > T_{\eta}(\delta)$ we have, with probability at least $1-\delta$, that
\begin{align}
\frac{3}{4}I &\preceq \dfrac{1}{T}\underbrace{\sum_{t=1}^T \eta_t \eta_t^{\prime}}_{E^{\prime}E} \preceq \frac{5}{4}I \label{tight_noise_bound}
\end{align}
Further with the same probability
\begin{align}
\frac{3\sigma_{\min}^{2}(P)}{4}I &\preceq \dfrac{1}{T}\sum_{t=1}^T P\eta_t \eta_t^{\prime}P^{\prime} \preceq \frac{5\sigma_{\max}^{2}(P)}{4}I \nonumber \\
T_{\eta}(\delta) &= C \Big( d + \log{\frac{2}{\delta}} \Big) \label{dep_tight_noise_bound}
\end{align}
\begin{cor}[Dependent Hanson--Wright Inequality]
\label{dep-hanson-wright}
Let $X_i \in \mathbb{R}^d$ be independent subGaussian vectors such that the $X_{ij}$ are independent and $\sup_{ij} ||X_{ij}||_{\psi_2}\leq K$. Let $P$ have full row rank. Define
$$X=\begin{bmatrix}PX_1 \\
PX_2 \\
\vdots \\
PX_n \end{bmatrix} \in \mathbb{R}^{dn}$$
Then for any $B \in \mathbb{R}^{dn \times dn}$ and $t \geq 0$
\begin{align}
&\Pr(|X^{\prime} B X - \mathbb{E}[X^{\prime} B X]| \geq t) \nonumber\\
&\leq 2 \exp\Bigg\{- c \min{\Big(\frac{t}{K^2\sigma_1^2(P) ||B||}, \frac{t^2}{K^4 \sigma_1^4(P)||B||^2_{HS}}\Big)}\Bigg\}
\end{align}
\end{cor}
\begin{proof}
Define
$$\tilde{X} = \begin{bmatrix}X_1 \\
X_2 \\
\vdots \\
X_n \end{bmatrix}$$
Now $\tilde{X}$ is such that the $\tilde{X}_i$ are independent. Observe that $X = (I_{n \times n}\otimes P)\tilde{X}$. Then $X^{\prime} B X = \tilde{X}^{\prime} (I_{n \times n}\otimes P^{\prime}) B (I_{n \times n}\otimes P) \tilde{X}$. Since
\begin{align*}
||(I_{n \times n}\otimes P) B (I_{n \times n}\otimes P')|| &\leq \sigma_1^2(P) ||B|| \\
\text{tr}((I_{n \times n}\otimes P) B (I_{n \times n}\otimes P')(I_{n \times n}\otimes P) B (I_{n \times n}\otimes P')) &\leq \sigma_1^2(P) \text{tr}((I_{n \times n}\otimes P) B^2 (I_{n \times n}\otimes P')) \\
&\leq \sigma_1^4(P) \text{tr}(B^2)
\end{align*} and now we can use Hanson--Wright in Theorem~\ref{hanson-wright} and get the desired bound.
\end{proof}
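As a sanity check, when $P = I_{d \times d}$ we have $\sigma_1(P) = 1$ and $X = \tilde{X}$, and Corollary~\ref{dep-hanson-wright} reduces exactly to Theorem~\ref{hanson-wright}.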
Let $X_t = \sum_{j=0}^{t-1} A^j P \eta_{t-j}$.
\begin{prop}
\label{energy_markov}
Let $P$ have full row rank and
\[
X_{t+1} = AX_t + P \eta_{t+1}
\]
where $\{\eta_t\}$ is an i.i.d. process and each $\eta_t$ has independent elements. Then with probability at least $1-\delta$, we have
\begin{align*}
||\sum_{t=1}^T X_t X_t^{\prime}||_2 &\leq \sigma_1(P)^2\frac{ T\text{tr}(\Gamma_{T-1}(A))}{\delta} \\
||\sum_{t=1}^T AX_t X_t^{\prime}A^{\prime}||_2 &\leq \sigma_1(P)^2 \frac{T\text{tr}(\Gamma_{T}(A) - I)}{\delta}
\end{align*}
Let $\delta \in (0, e^{-1})$ then with probability at least $1-\delta$
\[
||\sum_{t=1}^T X_t X_t^{\prime}||_2 \leq \sigma_1(P)^2 \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
for some universal constant $c$.
\end{prop}
\begin{proof}
Define $\tilde{\eta} = \begin{bmatrix} P\eta_1 \\ P\eta_2 \\ \vdots \\ P\eta_T\end{bmatrix}$. Then $\tilde{\eta}$ is a non--trivial subGaussian whenever $P$ has full row rank.
As in Corollary~\ref{dep-hanson-wright} by defining $\tilde{A}$ as
\[
\tilde{A} = \begin{bmatrix} I & 0 & 0 & \hdots & 0 \\
A & I & 0 & \hdots & 0 \\
\vdots & \vdots & \ddots & \vdots &\vdots\\
\vdots & \vdots & \vdots & \ddots&\vdots\\
A^{T-1} & A^{T-2} & A^{T-3} & \hdots & I
\end{bmatrix}
\]
observe that
\[
\tilde{A} \tilde{\eta} = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_T\end{bmatrix}.
\]
Since
\[
||X_t X_t^{\prime}|| = X_t^{\prime} X_t,
\]
we have that
$$||\sum_{t=1}^T X_t X_t^{\prime}|| \leq \sum_{t=1}^T X_t^{\prime} X_t = \tilde{\eta}^{\prime} \tilde{A}^{\prime} \tilde{A}\tilde{\eta} = \text{tr}(\tilde{A}\tilde{\eta}\tilde{\eta}^{\prime} \tilde{A}^{\prime}).$$
The assertion of the proposition follows by applying Markov's inequality to $\text{tr}(\tilde{A}\tilde{\eta}\tilde{\eta}^{\prime} \tilde{A}^{\prime})$. For the second part, observe that each block of $\tilde{A}$ is scaled by $A$, and the proof remains the same.
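To spell out the Markov step (with the convention above that the $P$ factor is carried by $\tilde{\eta}$): $\text{tr}(\tilde{A}\tilde{\eta}\tilde{\eta}^{\prime}\tilde{A}^{\prime})$ is a non--negative random variable with $\mathbb{E}[\text{tr}(\tilde{A}\tilde{\eta}\tilde{\eta}^{\prime}\tilde{A}^{\prime})] \leq \sigma_1(P)^2\, \text{tr}(\tilde{A}\tilde{A}^{\prime}) = \sigma_1(P)^2 \sum_{t=0}^{T-1}\text{tr}(\Gamma_t(A)) \leq \sigma_1(P)^2\, T\, \text{tr}(\Gamma_{T-1}(A))$, where the last inequality uses that $\Gamma_t(A)$ is non--decreasing in $t$; Markov's inequality then gives the first claim with probability at least $1-\delta$.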
Then in the notation of Theorem~\ref{hanson-wright} $B=\tilde{A}^{\prime} \tilde{A}, X=\tilde{\eta}$
\begin{align*}
||B||_S &= \text{tr}(\tilde{A}^{\prime}\tilde{A}) \\
&= \sum_{t=0}^{T-1}\text{tr}(\Gamma_t(A)) \\
||B||_F^2 &\leq ||B||_S ||B||_2
\end{align*}
Define $c^{*} = \min{(c, 1)}$. Set $t = \frac{||B||_F^2}{c^{*}||B||} {\log{(\frac{1}{\delta})}}$ and assume $\delta \in (0, e^{-1})$ then
\begin{align*}
\frac{t}{c^{*}||B||} \leq \frac{t^2}{c^{*}||B||_F^2}
\end{align*}
we get from Theorem~\ref{hanson-wright} that
\begin{align*}
\tilde{\eta}^{\prime}\tilde{A}^{\prime}\tilde{A}\tilde{\eta} &\leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A)) + \frac{||B||_F^2}{c^{*}||B||} \log{\Big(\frac{1}{\delta}\Big)} \\
&\leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A)) + \frac{||B||_s}{c^{*}} \log{\Big(\frac{1}{\delta}\Big)} \\
&\leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\end{align*}
with probability at least $1 - \exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)}$. Since
\[
\frac{c||B||_F^2}{c^{*}||B||_2^2} \geq 1
\]
it follows that
\[
\exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)} \leq \delta
\]
and we can conclude that with probability at least $1-\delta$
\[
\tilde{\eta}^{\prime}\tilde{A}^{\prime}\tilde{A}\tilde{\eta} \leq \text{tr}(\sum_{t=0}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
\end{proof}
\begin{cor}
\label{sub_sum}
Whenever $\delta \in (0, e^{-1})$, we have with probability at least $1-\delta$
\[
||\sum_{t=k+1}^T X_t X_t^{\prime}||_2 \leq \sigma^2_1(P)\text{tr}(\sum_{t=k}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
for some universal constant $c$.
\end{cor}
\begin{proof}
The proof follows the same steps as Proposition~\ref{energy_markov}. Define
\[
\tilde{A} = \begin{bmatrix} I & 0 & 0 & \hdots & 0 \\
A & I & 0 & \hdots & 0 \\
\vdots & \vdots & \ddots & \vdots &\vdots\\
\vdots & \vdots & \vdots & \ddots&\vdots\\
A^{T-1} & A^{T-2} & A^{T-3} & \hdots & I
\end{bmatrix}
\]
Define $\tilde{A}_k$ as the matrix formed by zeroing out all the rows of $\tilde{A}$ from $k+1$ row onwards. Then observe that
\begin{align*}
||\sum_{t=k+1}^T X_t X_t^{\prime}|| &\leq \text{tr}(\sum_{t=k+1}^T X_t X_t^{\prime}) = \text{tr}(\sum_{t=1}^T X_t X_t^{\prime} - \sum_{t=1}^k X_t X_t^{\prime}) \\
&= \tilde{\eta}^{\prime}(\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k)\tilde{\eta}
\end{align*}
Since $ \text{tr}(\sum_{t=1}^T X_t X_t^{\prime} - \sum_{t=1}^k X_t X_t^{\prime}) \geq 0$ for any $\tilde{\eta}$ it implies $B = (\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k) \succeq 0$.
\begin{align*}
||B||_S &= \text{tr}(\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k) = \sum_{t=k}^{T-1}\text{tr}(\Gamma_t(A)) \\
||B||_F^2 &\leq ||B||_S ||B||_2
\end{align*}
Define $c^{*} = \min{(c, 1)}$. Set $t = \frac{||B||_F^2}{c^{*}||B||} {\log{(\frac{1}{\delta})}}$ and assume $\delta \in (0, e^{-1})$ then
\begin{align*}
\frac{t}{c^{*}||B||} \leq \frac{t^2}{c^{*}||B||_F^2}
\end{align*}
we get from Theorem~\ref{hanson-wright} that
\begin{align*}
\tilde{\eta}^{\prime}(\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k)\tilde{\eta} &\leq ||B||_S + \frac{||B||_F^2}{c^{*}||B||} \log{\Big(\frac{1}{\delta}\Big)} \leq ||B||_S + \frac{||B||_S}{c^{*}} \log{\Big(\frac{1}{\delta}\Big)} \leq ||B||_S\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\end{align*}
with probability at least $1 - \exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)}$. Since
\[
\frac{c||B||_F^2}{c^{*}||B||_2^2} \geq 1
\]
it follows that
\[
\exp{\Big(- \frac{c||B||_F^2}{c^{*}||B||_2^2}\log{\frac{1}{\delta}}\Big)} \leq \delta
\]
and we can conclude that with probability at least $1-\delta$
\[
\tilde{\eta}^{\prime}(\tilde{A}^{\prime}\tilde{A} - \tilde{A}^{\prime}_k\tilde{A}_k)\tilde{\eta} \leq \text{tr}(\sum_{t=k}^{T-1}\Gamma_t(A))\Big(1 + \frac{1}{c^{*}}\log{\Big(\frac{1}{\delta}\Big)}\Big)
\]
\end{proof}
\begin{prop}
\label{cont_rand}
Whenever the pdf of $X$, $f(\cdot)$, satisfies $\text{ess sup}_x f(x) = C_X < \infty$ we have
\[
\mathbb{P}(|X| \leq \delta) \leq 2C_X \delta
\]
\end{prop}
\begin{proof}
Since the essential supremum of $f(\cdot)$ is bounded, we have
\[
\mathbb{P}(|X| \leq \delta) = \int_{x=-\delta}^{\delta}f(x)dx \leq 2C_X \delta
\]
\end{proof}
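For example, if $X$ is standard Gaussian then $C_X = \frac{1}{\sqrt{2\pi}}$, and the proposition gives $\mathbb{P}(|X| \leq \delta) \leq \sqrt{2/\pi}\,\delta$.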
\begin{prop}[Proposition 2 in~\cite{faradonbeh2017finite}]
\label{anti_conc}
Let $P^{-1} \Lambda P = A$ be the Jordan decomposition of $A$ and define $z_T = A^{-T}\sum_{i=1}^TA^{T-i}\eta_i$. Further assume that $\eta_t$ is continuous and subGaussian with variance proxy $1$. Then
$$\psi(A, \delta) = \sup \Bigg\{y \in \mathbb{R} : \mathbb{P}\Bigg(\min_{1 \leq i \leq d}|P_i^{'}z_T| < y \Bigg) \leq \delta \Bigg\}$$
where $P = [P_1, P_2, \ldots, P_d]^{'}$. If $\rho_{\min}(A) > 1$, then
$$\psi(A, \delta) \geq \psi(A) \delta > 0$$
where $\psi(A)$ depends only on $A$.
\end{prop}
\begin{proof}
Define the events $\mathcal{E} = \{\min_{1 \leq i \leq d}|P_i^{'}z_T| < y\}$ and $\mathcal{E}_i = \{|P_i^{'}z_T| < y\}$. Clearly $\mathcal{E} \subseteq \cup_{i=1}^d \mathcal{E}_i$, so
\begin{align*}
\mathbb{P}(\mathcal{E}) &\leq \mathbb{P}( \cup_{i=1}^d \mathcal{E}_i) \leq \sum_{i=1}^d \mathbb{P}( \mathcal{E}_i)
\end{align*}
From Proposition~\ref{cont_rand} and Assumption~\ref{subgaussian_noise}, we have $\mathbb{P}( \mathcal{E}_i) \leq 2 C_{|P_i^{'}z_T|} y$. Then we get
\begin{align*}
\mathbb{P}(\mathcal{E}) &\leq (2\sum_{i=1}^d C_{|P_i^{'}z_T|}) y \leq 2 d \sup_{1 \leq i \leq d}C_{|P_i^{'}z_T|} y
\end{align*}
where $C_{|P_i^{'}z_T|}$ is the essential supremum of the pdf of $|P_i^{'}z_T|$. Then $\psi(A) = \frac{1}{2 d \sup_{1 \leq i \leq d}C_{|P_i^{'}z_T|}}$.
\end{proof}
\subsection*{Bounding $C_T$}
In this section we will bound $C_T$, which comprises the cross terms between the noise and the past state variables. We will show that, with high probability, this is only a small fraction of the total energy of the past terms. Specifically, recall
\begin{align*}
S_T &= \Gamma_T^{-1/2}\sum_{t=0}^{T-1} A x(t) x(t)^{'} A^{'} \Gamma_T^{-1/2} \\
C_T &= \sum_{t=0}^{T-1}\Gamma_T^{-1/2}Ax(t)\eta_{t+1}^{'}\Gamma_T^{-1/2}+ \Gamma_T^{-1/2}\eta_{t+1}x(t)^{'}A^{'}\Gamma_T^{-1/2}
\end{align*}
We want to show that $C_T + S_T \succeq (1 - \epsilon) S_T$. We will approach this by showing that $-\epsilon S_T \preceq C_T \preceq \epsilon S_T$. Formally, with high probability
\[
||S_T^{-1/2} C_T S_T^{-1/2}||_{\text{op}} < \epsilon
\]
Next, observe that $\hat{\eta}_t = \Gamma_T^{-1/2} \eta_t$ is a transformed Gaussian vector with reduced variance. Before proving this we will need an essential result, which is a variant (and stronger version) of Lemma 2 in~\cite{faradonbeh2017finite}; we defer its proof.
\begin{lem}
\label{least_eigv}
For a regular, explosive dynamical system, \textit{i.e.}, $\rho_{\min}(A) > 1$. If $T \geq C \log{\Big(\frac{-\log{\delta}}{\delta}\Big)}$ where $C$ is an absolute constant, then with probability at least $1-\delta$ it holds that,
\begin{align*}
\rho_{\min}(\Gamma_T^{-1/2} Y_T \Gamma_T^{-1/2}) \geq \frac{c\phi(A)^2 \psi(A)^2\delta^2}{2}
\end{align*}
\end{lem}
\begin{lem}
\label{small_opnorm}
For any $v \in \mathcal{S}^{n}$ we have that
\begin{align*}
\mathbb{P}(v^T S_T^{-1/2} C_T S_T^{-1/2} v )
\end{align*}
\end{lem}
\section{Invertibility of $Y_T$ in explosive systems}
\label{explosive}
Assume for this case that $\eta_{t} = L \bar{\eta}_t$, where $\{\bar{\eta}_t\}_{t=1}^T$ are i.i.d. and all elements of $\bar{\eta}_t$ are independent. Further, $L$ has full row rank. Define $\sigma_{\min}(LL^{\prime}) = R^2 > 0$ and let $\sigma_{\max}(LL^{\prime})=1$. Recall that
\begin{align*}
z_t &= A^{-t}x_t\\
&= x_0 + \sum_{\tau=1}^{t} A^{-\tau} \eta_{\tau}
\end{align*}
Define
\begin{align*}
z(T, t) &= \Bigg(\sum_{s=0}^{t-1}A^{-s} \eta_{T+1-t+s}\Bigg)
\end{align*}
where $z(T, t) = 0$ for $t \leq 0$ and $t \geq T+1$. A useful observation is that $z(T-t)$ is statistically independent of $z(T) - z(T-t)$: the former depends only on $\eta_1, \ldots, \eta_{T-t}$ (and $x_0$), while the latter depends only on $\eta_{T-t+1}, \ldots, \eta_T$, as Eq.~\eqref{zt_diff} below makes explicit. Recall that $U_T = A^{-T}\sum_{t=1}^T x_t x_t^{\prime}A^{-T \prime}, F_T = \sum_{t=1}^T A^{-t+1} z_T z_T^{\prime}A^{-t+1 \prime}$
\subsection*{Bounding $||F_T - U_T||_{\text{op}}$}
Observe that
\begin{align}
z(T) - z(T-t) &= A^{-T+t-1}\Bigg(\sum_{s=0}^{t-1}A^{-s} \eta_{T+1-t+s}\Bigg) =A^{-T+t-1} z(T, t) \label{zt_diff}
\end{align}
Then
\begin{align*}
|| U_T - F_T||_{\text{op}} = &||\sum_{t=1}^{T} A^{-t}(z(T-t)z(T-t)^{'} - z(T)z(T)^{'})(A^{-t})^{'}||_2
\end{align*}
Let $u = z(T-t), v=z(T)$ and since $uu^{\prime} - vv^{\prime}= (u-v)u^{\prime} + u(u-v)^{\prime} - (u-v)(u-v)^{\prime}$ we have
\begin{align}
|| U_T - F_T||_{\text{op}} &\leq ||\sum_{t=1}^{T} A^{-t}(z(T-t) - z(T))(z(T-t) - z(T))^{'}A^{-t'}||_2 \nonumber\\
&+||\sum_{t=1}^{T} A^{-t} ((z(T-t) - z(T))z(T-t)^{'} + z(T-t)(z(T-t)^{'} - z(T)^{'}) A^{-t '}||_2 \label{ft_ut}
\end{align}
The reason we decompose it in such a way is so that we can represent the cross terms $(z(T-t) - z(T))z(T-t)^{\prime}$ as the product of independent terms. This will be useful in using Hanson--Wright bounds as we show later.
First we bound
$$ ||\sum_{t=1}^{T} A^{-t}(z(T-t) - z(T))(z(T-t) - z(T))^{'}A^{-t'}||_2$$
From Eq.~\eqref{zt_diff} we see that $A^{-t}(z(T-t) - z(T)) = -A^{-T-1}z(T, t)$, then
\begin{align*}
A^{-T-1}z(T, t) &= A^{-T-1}[0, 0, \ldots, \underbrace{I}_{T-t+1 \text{ term}}, A^{-1}, A^{-2} , \ldots, A^{-t+1}] \begin{bmatrix}
\eta_1 \\
\eta_2 \\
\vdots \\
\eta_T
\end{bmatrix}
\end{align*}
Note that $\sum_{t=1}^{T} (z(T-t) - z(T))(z(T-t) - z(T))^{'} \preceq \sum_{t=1}^{T} \text{trace}((z(T-t) - z(T))(z(T-t) - z(T))^{'})\, I$. Based on these observations we have
\begin{align*}
&||\sum_{t=1}^{T} A^{-t}(z(T-t) - z(T))(z(T-t) - z(T))^{'}A^{-t'}||_2 = ||\sum_{t=1}^{T} A^{-T-1} z(T, t)z(T, t)^{'} A^{-T-1 '}||_2 \\
&\leq \text{trace}(A^{-T-1}\sum_{t=1}^{T} z(T, t)z(T, t)^{'} A^{-T-1 '}) = \sum_{t=1}^{T} z(T, t)^{'} A^{-T-1 '} A^{-T-1} z(T, t) = \tilde{\eta}^{'} \tilde{A}^{'} \tilde{A} \tilde{\eta} \\
\end{align*}
where $\tilde{\eta} = \begin{bmatrix}
\eta_1 \\
\eta_2 \\
\vdots \\
\eta_T
\end{bmatrix}$ and
\[
\tilde{A} = \begin{bmatrix}
0 & 0 &\ldots & 0 & A^{-T-1} \\
0 & 0 & \ldots & A^{-T-1} & A^{-T-2} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
A^{-T-1} & A^{-T-2} & \ldots & A^{-2T+1} & A^{-2T}
\end{bmatrix}
\]
Note that $\text{tr}(\tilde{A}\tilde{A}^{\prime}) =T\,\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})$. Applying Markov's inequality (see Proposition~\ref{energy_markov}), we have with probability at least $1-\delta$ that
\begin{align}
\tilde{\eta}^{'} \tilde{A}^{'} \tilde{A} \tilde{\eta} &\leq \frac{\text{tr}(\mathbb{E}[\tilde{A} \tilde{\eta}\tilde{\eta}^{'} \tilde{A}^{'}])}{\delta} \leq \frac{\sigma_1(L)^2T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})}{\delta} \label{final_diff}
\end{align}
%
Although this bound can be tightened by the dependent Hanson--Wright inequality (see Corollary~\ref{dep-hanson-wright}), the gain is only a replacement of $1/\delta$ by $\log(1/\delta)$. In fact, we get with probability at least $1 -\delta$ that
\begin{equation}
\tilde{\eta}^{'} \tilde{A}^{'} \tilde{A} \tilde{\eta} \leq \Big(1 + \frac{1}{c} \log{\frac{1}{\delta}}\Big)(\sigma_1(L)^2T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})) \label{tight_final_diff}
\end{equation}
Next we analyze the second term
\begin{align*}
&||\sum_{t=1}^{T} A^{-t} ((z(T-t) - z(T))z(T-t)^{'} + z(T-t)(z(T-t)^{'} - z(T)^{'}) A^{-t '}||_2
\end{align*}
Consider the summand $\sum_{t=1}^{T} A^{-t} ((z(T-t) - z(T))z(T-t)^{'} A^{-t \prime}$, then
\begin{align}
\label{cross_summand}
\sum_{t=1}^{T} A^{-t} ((z(T-t) - z(T))z(T-t)^{'}A^{-t \prime} &= A^{-T-1}\sum_{t=1}^{T} z(T, t) z(T-t)^{'}A^{-t \prime}
\end{align}
We define scaled versions of $z(T, t)$ and $z(T-t)$:
\begin{align*}
\tilde{z}(T, t)&= A^{-T-1}z(T, t) = A^{-T-1}\underbrace{[0, 0, \ldots, \underbrace{I}_{T-t+1 \text{ term}}, A^{-1}, A^{-2} , \ldots, A^{-t+1}]}_{A(T, t)} \begin{bmatrix}
\eta_1 \\
\eta_2 \\
\vdots \\
\eta_T
\end{bmatrix} \\
\tilde{z}(T-t)^{\prime} &= z(T-t)^{\prime}A^{-t \prime} = \underbrace{[\eta_1^{\prime}, \eta_2^{\prime}, \ldots, \eta_T^{\prime}]}_{\tilde{\eta}^{\prime}}\underbrace{\begin{bmatrix}
A^{-t-1 \prime} \\
A^{-t-2 \prime} \\
\vdots \\
A^{-T \prime} \\
0 \\
\vdots \\
0
\end{bmatrix}}_{A(T-t)^{\prime}} + x_0
\end{align*}
Then the probability of the second term can be written as
\begin{align}
&\mathbb{P}(||\sum_{t=1}^{T} (\tilde{z}(T, t)\tilde{z}(T-t)^{'} + \tilde{z}(T-t)\tilde{z}(T, t)^{'})||_2 \geq z) \underbrace{\leq}_{\frac{1}{2}-\text{net}} 2 \times 5^{2d} \times\mathbb{P}\Bigg(\Bigg | \sum_{t=1}^{T} 2u^{'} \tilde{z}(T, t)\tilde{z}(T-t)^{'}v \Bigg | \geq z/4\Bigg) \nonumber \\
&\leq 2 \times 5^{2d} \times \mathbb{P}\Bigg(\Bigg | \tilde{\eta}^{\prime}{\Big(\sum_{t=1}^{T} A(T, t)^{\prime}A^{-T-1 \prime} uv^{\prime} A(T-t) + A(T-t)^{\prime} vu^{\prime}A^{-T-1} A(T, t)\Big)} \tilde{\eta} \Bigg | \geq z/4 \Bigg) \label{cross}
\end{align}
We now apply the Hanson--Wright inequality to Eq.~\eqref{cross}. For any $u, v$, due to the statistical independence of $z(T-t)$ and $z(T, t)$ we have
$$\mathbb{E}[\sum_{t=1}^{T} 2u^{'} \tilde{z}(T, t)\tilde{z}(T-t)^{'}v] = 0$$
We now need an upper bound on $||S||_2, ||S||_F$ for the matrix $S$ defined below. Since $CD^{\prime} + DC^{\prime} \preceq CC^{\prime} + DD^{\prime}$ (which follows from $(C-D)(C-D)^{\prime} \succeq 0$),
\begin{align*}
S &= \sum_{t=1}^{T} A(T, t)^{\prime}A^{-T-1 \prime} uv^{\prime} A(T-t) + A(T-t)^{\prime} vu^{\prime}A^{-T-1} A(T, t) \\
&=\sum_{t=1}^{T} \underbrace{A(T, t)^{\prime}A^{-(T+1)\epsilon \prime}}_{=C} \underbrace{A^{-(T+1)(1-\epsilon) \prime} uv^{\prime} A(T-t)}_{=D^{\prime}} + A(T-t)^{\prime} vu^{\prime}A^{-(T+1)(1-\epsilon)}A^{-(T+1)\epsilon} A(T, t)\\
&\preceq \sum_{t=1}^{T} \underbrace{A(T, t)^{\prime}A^{-(T+1)\epsilon \prime}A^{-(T+1)\epsilon}A(T, t) }_{=CC^{\prime}}
+\sum_{t=1}^{T} \underbrace{A(T-t)^{\prime} v u^{\prime} A^{-(T+1)(1-\epsilon)} A^{-(T+1)(1-\epsilon) \prime} uv^{\prime} A(T-t)}_{=DD^{\prime}} \\
&\preceq \sigma_1^2(A^{-(T+1)\epsilon}) \sum_{t=1}^{T} A(T, t)^{\prime}A(T, t) + u^{\prime} A^{-(T+1)(1-\epsilon)} A^{-(T+1)(1-\epsilon) \prime} u \sum_{t=1}^{T} A(T-t)^{\prime} v v^{\prime} A(T-t) \\
&\preceq \sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}\Big(\sum_{t=1}^{T} A(T, t)^{\prime}A(T, t)\Big)I + \sigma_1^{2}(A^{-(T+1)(1-\epsilon)}) \text{tr}\Big(\sum_{t=1}^{T} A(T-t)^{\prime} v v^{\prime} A(T-t)\Big)I \\
&\overset{(a)}{\preceq} 2T\sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}(\Gamma_T(A^{-1})) I
\end{align*}
Here $(a)$ follows because
$$A(T, t) A(T, t)^{\prime} = \Gamma_{t-1}(A^{-1}), \qquad A(T-t)A(T-t)^{\prime} \preceq \Gamma_{T}(A^{-1})$$
Then whenever
\begin{equation}
\label{T_req}
T \geq T_0=\frac{2}{c}\Big(\log{\frac{1}{\delta}} + \log{2} + 2d \log{5}\Big)
\end{equation}
Eq.~\eqref{cross} becomes with probability at least $1-\delta$ that
\begin{equation}
||\sum_{t=1}^{T} A^{-t}\big((z(T-t) - z(T))z(T-t)^{'} + z(T-t)(z(T-t) - z(T))^{'}\big)A^{-t \prime}||_2 \leq 4T^2 \sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}(\Gamma_T(A^{-1})) \label{cross_bound}
\end{equation}
Then combining Eq.~\eqref{final_diff},\eqref{cross_bound} we get for $T \geq T_0$ given in Eq.~\eqref{T_req},
\begin{equation}
||U_T - F_T||_{2} \leq \Bigg(4T^2 \sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}(\Gamma_T(A^{-1}))+ \frac{T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})}{\delta}\Bigg) \label{error_cum}
\end{equation}
with probability at least $1-2\delta$. We pick $\epsilon$ such that $(T+1)\epsilon = \lfloor \frac{T+1}{2} \rfloor$. In fact using Eq.~\eqref{tight_final_diff} instead of Eq.~\eqref{final_diff} we get
\begin{equation}
||U_T - F_T||_{2} \leq \Bigg(4T^2 \sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}(\Gamma_T(A^{-1}))+ \Big(1 + \frac{1}{c}\log{\frac{1}{\delta}}\Big)T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})\Bigg) \label{tight_error_cum}
\end{equation}
\subsection*{Bounding $U_T$}
To give lower and upper bounds on $U_T$, we need to bound $F_T$. The steps involve
\begin{align*}
||U_T - F_T||_2 &\leq \Delta \\
F_T \succeq V_{dn} \succ 0 &\implies U_T \succeq V_{dn} - \Delta I \\
F_T \preceq V_{up} &\implies U_T \preceq V_{up} + \Delta I
\end{align*}
From Proposition~\ref{ft_inv} we get, with probability at least $1-2\delta$,
\begin{align*}
F_T &\succeq \phi_{\min}(A)^2 \psi(A)^2 \delta^2 \sigma_{\min}(P^{-1})^2I \\
F_T &\preceq \frac{\phi_{\max}(A)^2}{\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}) I
\end{align*}
Define
\begin{align*}
\Delta &= \frac{1}{2}\min{ \Bigg(\frac{\phi_{\max}(A)^2}{\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}), \phi_{\min}(A)^2 \psi(A)^2 \delta^2 \sigma_{\min}(P^{-1})^2\Bigg)} \\
&= \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2 \sigma_{\min}(P^{-1})^2}{2}
\end{align*}
Then in Eq.~\eqref{error_cum} by ensuring that
\begin{align}
&\Bigg(4T^2 \sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}(\Gamma_T(A^{-1})) + \frac{T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})}{\delta}\Bigg) \leq \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2}{2\sigma_{\max}(P)^2} \nonumber
\end{align}
we get with probability at least $1-4\delta$ (since this is the intersection of events governed by Eq.~\eqref{error_cum},\eqref{lb_ft},\eqref{ub_ft})
\begin{align}
U_T &\succeq \phi_{\min}(A)^2 \psi(A)^2 \delta^2 \sigma_{\min}(P^{-1})^2I - \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2}{2\sigma_{\max}(P)^2} I \succeq \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2}{2\sigma_{\max}(P)^2} I \label{ut-ft}
\end{align}
Similarly, for the upper bound
\begin{equation}
U_T \preceq \frac{3\phi_{\max}(A)^2}{2\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}) I \label{ub-ft}
\end{equation}
Thus with probability at least $1-4\delta$ we have
\begin{align}
Y_T&\succeq \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2}{2\sigma_{\max}(P)^2}A^T A^{T \prime}\nonumber\\
Y_T&\preceq\frac{3\phi_{\max}(A)^2}{2\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}) A^T A^{T \prime} \label{exp_bnds}
\end{align}
whenever
\begin{align}
&\Bigg(4T^2 \sigma_1^2(A^{-(T+1)\epsilon}) \text{tr}(\Gamma_T(A^{-1})) + \frac{T\text{tr}(A^{-T-1}\Gamma_T(A^{-1})A^{-T-1 \prime})}{\delta}\Bigg) \leq \frac{\phi_{\min}(A)^2 \psi(A)^2 \delta^2}{2\sigma_{\max}(P)^2}\label{t_exp_req}
\end{align}
\section{Regularity and Invertibility}
\label{regularity_inv}
Through a counterexample in~\cite{nielsen2008singular} and Remark 4 in~\cite{phillips2013inconsistent}, it is shown that unless a matrix is regular, the estimation of the parameters may be asymptotically inconsistent.
Recall $F_T$ from Eq.~\eqref{ut_ft}. Assume again that $\eta_{t} = L \bar{\eta}_t$, where $\{\bar{\eta}_t\}_{t=1}^T$ are i.i.d. isotropic subGaussian and all elements of $\bar{\eta}_t$ are independent. Further, $L$ has full row rank. Define $\sigma_{\min}(LL^{\prime}) = R^2 > 0$ and let $\sigma_{\max}(LL^{\prime})=1$ (this does not affect the main result, as it appears only as a scaling). For the invertibility of $Y_T$ in explosive systems, it will be important that $F_T$ is invertible with high probability. It will turn out that invertibility of $F_T$ can be ensured by assuming regularity of $A$. This is Proposition 1 in~\cite{faradonbeh2017finite} and is presented here for completeness. It will be useful to recall the definitions of $\phi_{\min}(A), \phi_{\max}(A)$ from Definition~\ref{outbox}.
\vspace{2mm}
We will show $F_{T}$ indeed has rank $d$ with probability $1$. Formally,
\begin{prop}
\label{ft_inv}
Let $A$ be regular, then we have with probability at least $1 - 2\delta$
\begin{align*}
\sigma_{\min}(F_T) &\geq \frac{\phi_{\min}(A)^2}{\sigma_{\max}(P)^2} \psi(A)^2 \delta^2 \\
\sigma_{\max}(F_T) &\leq \frac{\phi_{\max}(A)^2}{\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime})
\end{align*}
where $A = P^{-1}\Lambda P$ is the Jordan decomposition of $A$.
\end{prop}
\begin{proof}
Let $S_k = [z_T, A^{-1}z_T, \ldots, A^{-k+1}z_T]$, where $z_T = A^{-T}x_T = A^{-T}(\sum_{k=0}^{T-1} A^{k} L \bar{\eta}_{T-k})$. Note that $L \bar{\eta}_t$ is continuous whenever $L$ has full row rank. Then $F_T = S_T S_T^{\prime}$. Observe that
\[
A^{-t}z_{T} = P^{-1} \Lambda^{-t} P z_{T}
\]
Define the event
$$\mathcal{E}_{+}(\delta) = \{\min_{1 \leq i \leq d} |P_i^{\prime}z_T| > \psi(A) \delta\}$$
where $\psi(A)$ is the lower bound shown in Proposition~\ref{anti_conc} (which we can use due to the continuity of $L\bar{\eta}_t$) and $v = Pz_T$. Under $\mathcal{E}_{+}(\delta)$ we have $|v_i| > \psi(A)\delta > 0$ for every $i$. Now we need a lower bound for $\sigma_{\min}(F_T)$ under $\mathcal{E}_{+}(\delta)$:
\begin{align}
F_T &= P^{-1} \sum_{i=1}^T \Lambda^{-i+1} Pz_T z_T^{\prime}P^{\prime} \Lambda^{-i+1 \prime} P^{-1 \prime} = P^{-1} \sum_{i=1}^T \Lambda^{-i+1} vv^{\prime} \Lambda^{-i+1 \prime} P^{-1 \prime} \label{ft_eq} \\
&\succeq \phi_{\min}(A)^2 \psi(A)^2 \delta^2 P^{-1} P^{-1 \prime} {\succeq} \frac{\phi_{\min}(A)^2}{\sigma_{\max}(P)^2} \psi(A)^2 \delta^2 I\label{lb_ft}
\end{align}
Further, since $A$ is regular we have that $\phi_{\min}(A) > 0$ from Proposition~\ref{reg_invertible}. Then with probability at least $1-\delta$ we have
\[
\sigma_{\min}(F_T) \geq \frac{\phi_{\min}(A)^2}{\sigma_{\max}(P)^2} \psi(A)^2 \delta^2 > 0
\]
For the upper bound, observe that $Pz_T$ is a sub-Gaussian random variable. Since
$$||Pz_T z_T^{\prime}P^{\prime}|| \leq z_T^{\prime} P^{\prime} P z_T$$
and recalling that
\[
z_T = \underbrace{[A^{-1}, A^{-2}, \ldots, A^{-T}]}_{\tilde{A}} \begin{bmatrix}
\eta_1 \\\eta_2 \\
\vdots \\
\eta_T
\end{bmatrix}
\]
we can use the dependent Hanson--Wright inequality (Corollary~\ref{dep-hanson-wright}) to bound $z_T^{\prime} P^{\prime} P z_T$. In Theorem~\ref{hanson-wright},
\begin{align*}
B &= \tilde{A}^{\prime} P^{\prime} P \tilde{A} \\
\mathbb{E}[z_T^{\prime} P^{\prime} P z_T] &= \text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}) \sigma_1(L)^2 = \text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime})\\
||B||_2, ||B||_F \leq \text{tr}(\tilde{A}^{\prime} P^{\prime} P \tilde{A}) &= \text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime})
\end{align*}
Then with probability at least $1 -\delta$ we have
\[
z_T^{\prime} P^{\prime} P z_T \leq (1 + \frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime})
\]
and we get from Eq.~\eqref{ft_eq}
\begin{align}
F_T &\preceq P^{-1} \sum_{i=1}^T \Lambda^{-i+1} Pz_T z_T^{\prime}P^{\prime} \Lambda^{-i+1 \prime} P^{-1 \prime} \nonumber \\
&\preceq (z_T^{\prime}P^{\prime} Pz_T) \sup_{||v||_2=1}\sigma_{\max}\Big(P^{-1} \sum_{i=1}^T \Lambda^{-i+1} vv^{\prime} \Lambda^{-i+1 \prime} P^{-1 \prime}\Big)I \nonumber \\
&\preceq\frac{\phi_{\max}(A)^2}{\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}) I \label{ub_ft}
\end{align}
Then we have with probability at least $1-2\delta$
\begin{align}
F_T &\succeq \frac{\phi_{\min}(A)^2}{\sigma_{\max}(P)^2} \psi(A)^2 \delta^2 I \\
F_T &\preceq \frac{\phi_{\max}(A)^2}{\sigma_{\min}(P)^2}(1+\frac{1}{c}\log{\frac{1}{\delta}})\text{tr}(P(\Gamma_T(A^{-1})-I)P^{\prime}) I \label{bnds_fT}
\end{align}
\end{proof}
\section{Appendix}
\label{appendix_matrix}
\begin{prop}
\label{psd_result_2}
Let $P$ be a positive semi--definite matrix and $V$ a positive definite matrix, and define $\bar{P} = P + V$. Suppose there exists a matrix $Q$ satisfying
\[
||\bar{P}^{-1/2} Q|| \leq \gamma
\]
Then, for any vector $v$ such that $v^{\prime} P v = \alpha$ and $v^{\prime} V v =\beta$, it is true that
\[
||v^{\prime}Q|| \leq \sqrt{\beta+\alpha} \gamma
\]
\end{prop}
\begin{proof}
Since
\[
||\bar{P}^{-1/2} Q||_2^2 \leq \gamma^2
\]
for any vector $v \neq 0$ we have
\[
\frac{v^{\prime} \bar{P}^{1/2}\bar{P}^{-1/2} Q Q^{\prime}\bar{P}^{-1/2}\bar{P}^{1/2} v}{v^{\prime} \bar{P} v} \leq \gamma^2
\]
and substituting $v^{\prime} \bar{P} v = v^{\prime} P v + v^{\prime} V v = \alpha + \beta$ gives us
\begin{align*}
{v^{\prime} Q Q^{\prime} v} &\leq \gamma^2{v^{\prime} \bar{P} v} \\
&= (\alpha + \beta) \gamma^2
\end{align*}
Taking square roots yields the claim.
\end{proof}
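As an example of how this proposition is used later (see Section~\ref{short_proof}), taking $V = TI$ shows that any unit vector $U$ with $U^{\prime} P U = \kappa^2$ satisfies $||U^{\prime}Q|| \leq \sqrt{\kappa^2 + T}\,\gamma$, which is precisely the form in which the cross term $Q$ is controlled there.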
\begin{prop}
\label{inv_jordan}
Consider a Jordan block matrix $J_d(\lambda)$ given by \eqref{jordan}. Then $J_d(\lambda)^{-k}$ is an upper triangular matrix in which each off--diagonal (and the diagonal) has constant entries, \textit{i.e.},
\begin{equation}
J_d(\lambda)^{-k} =\begin{bmatrix}
a_1 & a_2 & a_3 & \hdots & a_d \\
0 & a_1 & a_2 & \hdots & a_{d-1} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & a_1 & a_2 \\
0 & 0 & \hdots & 0 & a_1
\end{bmatrix}_{d \times d}
\end{equation}
for some $\{a_i\}_{i=1}^d$.
\end{prop}
\begin{proof}
$J_d(\lambda) = (\lambda I + N)$ where $N$ is the matrix with all ones on the $1^{st}$ (upper) off-diagonal. $N^k$ is just all ones on the $k^{th}$ (upper) off-diagonal and $N$ is a nilpotent matrix with $N^d = 0$. Then
\begin{align*}
(\lambda I + N)^{-1} &= \sum_{l=0}^{d-1} (-1)^{l}\lambda^{-l-1}N^{l} \\
(-1)^{k-1}(k-1)!\,(\lambda I + N)^{-k} &= \frac{d^{k-1}}{d\lambda^{k-1}}(\lambda I + N)^{-1} = \sum_{l=0}^{d-1} (-1)^{l}\frac{d^{k-1}\lambda^{-l-1}}{d\lambda^{k-1}}N^{l} = \Big(\sum_{l=0}^{d-1} (-1)^{l}c_{l, k}N^{l}\Big)
\end{align*}
where the second identity is obtained by differentiating the first one $k-1$ times with respect to $\lambda$, and the $c_{l,k}$ are the resulting scalars. Since $N^{l}$ has constant entries along its $l^{th}$ upper off--diagonal and zeros elsewhere, any linear combination $\sum_{l=0}^{d-1} c_{l}N^{l}$ has the claimed structure, and the proof follows.
\end{proof}
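As a concrete illustration (an easy check of the proposition, not needed in what follows), for $d = 2$ a direct computation gives
\[
J_2(\lambda)^{-1} =\begin{bmatrix}
\lambda^{-1} & -\lambda^{-2} \\
0 & \lambda^{-1}
\end{bmatrix}, \qquad
J_2(\lambda)^{-2} =\begin{bmatrix}
\lambda^{-2} & -2\lambda^{-3} \\
0 & \lambda^{-2}
\end{bmatrix}
\]
so each diagonal and off--diagonal is indeed constant.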
\begin{prop}
\label{reg_invertible}
Let $A$ be a regular matrix and $A = P^{-1} \Lambda P$ be its Jordan decomposition. Then
\[
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 > 0
\]
Further $\phi_{\min}(A) > 0$ where $\phi_{\min}(\cdot)$ is defined in Definition~\ref{outbox}.
\end{prop}
\begin{proof}
When $A$ is regular, the geometric multiplicity of each eigenvalue is $1$. This implies that $A^{-1}$ is also regular. Regularity of a matrix $A$ is equivalent to its minimal polynomial being equal to its characteristic polynomial (see Section~\ref{lemmab} in the appendix). Hence no non--trivial linear combination of $I, A^{-1}, \ldots, A^{-(d-1)}$ vanishes, and by compactness of the unit sphere,
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i A^{-i+1}||_2 &> 0
\end{align*}
Since $A^{-j} = P^{-1} \Lambda^{-j} P$ and $||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}P||_2 \leq \sigma_{\max}(P^{-1})\,\sigma_{\max}(P)\,||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2$, we have
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 &\geq \frac{1}{\sigma_{\max}(P)\sigma_{\max}(P^{-1})}\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}P||_2 \\
&= \frac{1}{\sigma_{\max}(P)\sigma_{\max}(P^{-1})}\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i A^{-i+1}||_2 > 0
\end{align*}
Since $\Lambda$ is Jordan matrix of the Jordan decomposition, it is of the following form
\begin{equation}
\Lambda =\begin{bmatrix}
J_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
where $J_{k_i}(\lambda_i)$ is a $k_i \times k_i$ Jordan block corresponding to eigenvalue $\lambda_i$. Then
\begin{equation}
\Lambda^{-k} =\begin{bmatrix}
J^{-k}_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J^{-k}_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J^{-k}_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J^{-k}_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
Since $||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 >0$, without loss of generality assume that there is a non--zero element in the first, \textit{i.e.}, $k_1 \times k_1$, diagonal block. This implies
\begin{align*}
||\underbrace{\sum_{i=1}^d a_i J_{k_1}^{-i+1}(\lambda_1)}_{=S}||_2 > 0
\end{align*}
By Proposition~\ref{inv_jordan} we know that each off--diagonal (including the diagonal) of $S$ has constant entries. Let $j_0 = \inf{\{j \,|\, S_{ij} \neq 0\}}$, and in column $j_0$ pick the non--zero element with the largest row index, $i_0$. By design $S_{i_0, j_0} \neq 0$ and further
$$S_{k_1 -(j_0 - i_0), k_1} = S_{i_0, j_0}$$
because the two entries are part of the same off--diagonal (or diagonal) of $S$. Thus row $k_1 - (j_0 - i_0)$ has exactly one non--zero element, by the minimality of $j_0$.
We proved that for any $||a||=1$ there exists a row with only one non--zero element in the matrix $\sum_{i=1}^d a_i \Lambda^{-i+1}$. This implies that if $v$ is a vector with all non--zero elements, then $||\sum_{i=1}^d a_i \Lambda^{-i+1} v||_2 > 0$, \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1} v ||_2 &> 0
\end{align*}
This implies
\begin{align*}
\inf_{||a||_2 = 1}||[v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v] a||_2 &> 0\\
\sigma_{\min}([v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v]) &> 0 \\
\end{align*}
By Definition~\ref{outbox} we have
\begin{align*}
\phi_{\min}(A) &> 0
\end{align*}
\end{proof}
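The regularity assumption cannot be dropped here. For example, if $A = \lambda I$ with $d \geq 2$ (so the eigenvalue has geometric multiplicity $d$), then $\Lambda = \lambda I$ and $\sum_{i=1}^d a_i \Lambda^{-i+1} = \big(\sum_{i=1}^d a_i \lambda^{-i+1}\big) I$; choosing a unit vector $a$ orthogonal to $(1, \lambda^{-1}, \ldots, \lambda^{-d+1})$ makes this sum zero, so the infimum above vanishes and $\phi_{\min}(A) = 0$.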
\begin{prop}[Corollary 2.2 in~\cite{ipsen2011determinant}]
\label{det_lb}
Let $M$ be a positive definite matrix with diagonal entries $m_{jj}$, $1 \leq j \leq d$, and let $\rho$ be the spectral radius of the matrix $C$ with elements
\begin{align*}
c_{ij} &= 0 \hspace{3mm} \text{if } i=j \\
&=\frac{m_{ij}}{\sqrt{m_{ii}m_{jj}}} \hspace{3mm} \text{if } i\neq j
\end{align*}
Then
\begin{align*}
0 < \frac{\prod_{j=1}^d m_{jj} - \text{det}(M)}{\prod_{j=1}^d m_{jj}} \leq 1 - e^{-\frac{d \rho^2}{1+\lambda_{\min}}}
\end{align*}
where $\lambda_{\min} = \min_{1 \leq j \leq d} \lambda_j(C)$.
\end{prop}
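As a concrete instance, take $d = 2$ and $M = \begin{bmatrix} 1 & r \\ r & 1\end{bmatrix}$ with $0 < r < 1$: then $\rho = r$, $\lambda_{\min} = -r$, $\prod_{j} m_{jj} = 1$ and $\text{det}(M) = 1 - r^2$, so the proposition asserts $r^2 \leq 1 - e^{-2r^2/(1-r)}$, which can be verified directly.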
\begin{prop}
\label{gramian_lb}
Let $1 - C/T \leq \rho_i(A) \leq 1 + C/T$ and $A$ be a $d \times d$ matrix. Then there exists $\alpha(d)$ depending only on $d$ such that for every $8 d \leq t \leq T$
\[
\sigma_{\min}(\Gamma_t(A)) \geq t \alpha(d)
\]
\end{prop}
\begin{proof}
Write $A = P^{-1} \Lambda P$ where $\Lambda$ is the Jordan matrix. Since $\Lambda$ can be complex, we use the adjoint instead of the transpose. This gives
\begin{align*}
\Gamma_T(A) &= I + \sum_{t=1}^T A^t (A^{t})^{\prime} \\
&= I + P^{-1}\sum_{t=1}^T \Lambda^tPP^{\prime} (\Lambda^t)^{*} P^{-1 \prime} \\
&\succeq I + \sigma_{\min}(P)^2P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{*} P^{-1 \prime}
\end{align*}
Then this implies that
\begin{align*}
\sigma_{\min}( \Gamma_T(A)) &\geq 1 +\sigma_{\min}(P)^2 \sigma_{\min}(P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} P^{-1 \prime}) \\
&\geq 1 + \sigma_{\min}(P)^2 \sigma_{\min}(P^{-1})^2\sigma_{\min}(\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} ) \\
&\geq 1 + \frac{\sigma_{\min}(P)^2}{\sigma_{\max}(P)^2}\sigma_{\min}(\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} )
\end{align*}
Now
\begin{align*}
\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} &= \begin{bmatrix}
\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*} & 0 & \hdots & 0 \\
0 & \sum_{t=0}^T J^{t}_{k_2}(\lambda_2)(J^{t}_{k_2}(\lambda_2))^{*} & 0 & \hdots \\
\vdots & \vdots & \ddots & \ddots \\
0 & \hdots & 0 & \sum_{t=0}^T J^{t}_{k_{l}}(\lambda_l) (J^{t}_{k_l}(\lambda_l))^{*}
\end{bmatrix}
\end{align*}
Since $\Lambda$ is block diagonal we only need to worry about the least singular value corresponding to some block. Let this block be the one corresponding to $J_{k_1}(\lambda_1)$, \textit{i.e.},
\begin{equation}
\sigma_{\min}(\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} ) =\sigma_{\min}(\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}) \label{bnd_1}
\end{equation}
Define $B = \sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}$. Note that $J_{k_1}(\lambda_1) = (\lambda_1 I + N)$ where $N$ is the nilpotent matrix that is all ones on the first off--diagonal and $N^{k_1} = 0$. Then
\begin{align*}
(\lambda_1 I + N)^t &= \sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j} \\
(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} &= \Big(\sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j}\Big)\Big(\sum_{j=0}^t {t \choose j} (\lambda_1^{*})^{t-j}N^{j \prime}\Big) \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j \neq k}^{j=t, k=t} {t \choose k}{t \choose j} \lambda_1^j (\lambda_1^{*})^{k} N^j (N^k)^{\prime} \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \lambda_1^j (\lambda_1^{*})^{k} N^j (N^k)^{\prime} \\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \lambda_1^j (\lambda_1^{*})^{k} N^j (N^k)^{\prime} \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2k} \lambda_1^{j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On $(j-k)$ upper off--diagonal}} \\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2j} (\lambda_1^{*})^{k-j} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On $(k-j)$ lower off--diagonal}}
\end{align*}
Let $\lambda_1 = r e^{i\theta}$. Then, similar to~\cite{erxiong1994691}, there is $D = \text{Diag}(1, e^{-i\theta}, e^{-2i\theta}, \ldots, e^{-i(k_1-1)\theta})$ such that
\[
D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}
\]
is a real matrix.
Observe that any term on $(j-k)$ upper off--diagonal of $(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}$ is of the form $r_0 e^{i(j-k)\theta}$. In the product $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}$ any term on the $(j-k)$ upper off diagonal term now looks like $e^{-ij\theta + ik\theta} r_0 e^{i(j-k)\theta} = r_0$, which is real. Then we have
\begin{align}
D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} &= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{j > k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2k} |\lambda_1|^{j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On $(j-k)$ upper off--diagonal}} \nonumber\\
&+ \sum_{j< k}^{j=t, k=t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2j} |\lambda_1|^{k-j} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On $(k-j)$ lower off--diagonal}} \label{real}
\end{align}
Since $D$ is unitary and $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} =(|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}$, we can simply work with the case when $\lambda_1 > 0$ is real, as singular values are invariant under unitary transformations. We now quantify the growth of the entries of this product.
Redefine $B=\sum_{t=1}^T (|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}$ (dropping the $t=0$ term from the earlier definition only changes $B$ by $I$). Then
\begin{align}
B_{ll} &=\sum_{t=1}^T [(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}]_{ll} \\
&= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \label{bll}
\end{align}
Since $1-C/T \leq |\lambda_1| \leq 1+C/T$, for every $t \leq T$ we have
$$e^{-C} \leq |\lambda_1|^t \leq e^{C}$$
Then
\begin{align}
B_{ll} &= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
&\geq e^{-2C} \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 \nonumber\\
& \geq e^{-2C} \sum_{t=T/2}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 \geq e^{-2C} \sum_{t=T/2}^T c_{k_1} \frac{t^{2k_1-2l+2} - 1}{t^2 - 1} \geq C(k_1) T^{2k_1 - 2l+1} \label{lb}
\end{align}
An upper bound can be achieved in an equivalent fashion.
\begin{align}
B_{ll} &= \sum_{t=1}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
& \leq e^{2C} T \sum_{j=0}^{k_1-l} T^{2j} \leq C(k_1) T^{2k_1 - 2l + 1} \label{ub1}
\end{align}
Similarly, for any $B_{k,k+l} $ we have
\begin{align}
B_{k, k+l} &=\sum_{t=1}^T \sum_{j=0}^{k_1-k - l} {t \choose j}{t \choose j+l} |\lambda_1|^{2j} |\lambda_1|^{l} \\
&\geq e^{-2C} \sum_{t=T/2}^T \sum_{j=0}^{k_1-k - l} {t \choose j}{t \choose j+l} \\
&\geq e^{-2C} \frac{T}{2}\sum_{j=0}^{k_1-k - l} {T/2 \choose j}{T/2 \choose j+l} \\
&\geq C(k_1) T^{2k_1 - 2k -l +1}
\end{align}
and by a similar argument as before we get $B_{jk} = C(k_1)T^{2k_1-j-k +1}$. For brevity we use the same $C(k_1)$ to indicate different functions of $k_1$ as we are interested only in the growth with respect to $T$. To summarize
\begin{align}
B_{jk} &= C(k_1)T^{2k_1 - j - k +1} \label{jordan_value}
\end{align}
whenever $T \geq 8d$. Recall Proposition~\ref{det_lb} and let the $M$ there equal $B$. Then, since
\[
C_{ij} = \frac{B_{ij}}{\sqrt{B_{ii} B_{jj}}} = C(k_1)\frac{T^{2k_1 - i - j +1}}{\sqrt{T^{2k_1 - 2i + 1}}\sqrt{T^{2k_1 - 2j + 1}}} = C(k_1)
\]
it turns out that $C_{ij}$ is independent of $T$, and consequently $\lambda_{\min}(C)$ and $\rho$ are independent of $T$ and depend only on $k_1$, the Jordan block size. Then $\prod_{j=1}^{k_1} B_{jj} \geq \text{det}(B) \geq \prod_{j=1}^{k_1} B_{jj} e^{-\frac{d\rho^2}{1 + \lambda_{\min}}} = C(k_1) \prod_{j=1}^{k_1} B_{jj}$. This means that $\text{det}(B) = C(k_1) \prod_{j=1}^{k_1} B_{jj}$ for some function $C(k_1)$ depending only on $k_1$. Further, using the values for $B_{jj}$ we get
\begin{equation}
\label{det}
\text{det}(B) = C(k_1) \prod_{j=1}^{k_1} B_{jj} = \prod_{j=1}^{k_1} C(k_1) T^{2k_1 - 2j +1} = C(k_1) T^{k_1^2}
\end{equation}
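As a sanity check, for a scalar block ($k_1 = 1$) Eq.~\eqref{jordan_value} gives $B_{11} = C(1)T$, and Eq.~\eqref{det} correctly reduces to $\text{det}(B) = C(1)T = C(1)T^{k_1^2}$.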
Next we use the Schur--Horn theorem, \textit{i.e.}, let $\sigma_i(B)$ be the ordered singular values of $B$ with $\sigma_i(B) \geq \sigma_{i+1}(B)$. Then the singular values of $B$ majorize its diagonal, \textit{i.e.}, for any $k \leq k_1$
\[
\sum_{i=1}^k \sigma_i(B) \geq \sum_{i=1}^{k} B_{ii}
\]
Observe that $B_{ii} \geq B_{jj}$ when $i \leq j$, \textit{i.e.}, the diagonal entries are non--increasing. Then from Eq.~\eqref{jordan_value}, since majorization also implies that the sum of the $m$ smallest singular values is at most the sum of the $m$ smallest diagonal entries, we get
\begin{align*}
B_{k_1 k_1}=C_1(k_1)T &\geq \sigma_{k_1}(B) \\
\sum_{j=k_1-1}^{k_1} B_{jj} &= C_{2}(k_1)T^{3} + C_1(k_1)T \geq \sigma_{k_1 - 1}(B) + \sigma_{k_1}(B)
\end{align*}
Since $k_1 \geq 1$ it can be checked that for $T \geq T_1 =k_1\sqrt{\frac{C_1(k_1)}{C_2(k_1)}}$ we have $\sigma_{k_1-1}(B) \leq {(1+k_1^{-2})C_2(k_1)T^3} \leq {(1+k_1^{-1})C_2(k_1)T^3}$, since for every $T \geq T_1$ we have $C_2(k_1)T^3 \geq k_1^2C_1(k_1)T$. Again, to upper bound $\sigma_{k_1-2}(B)$ we use a similar argument
\begin{align*}
\sum_{j=k_1-2}^{k_1} B_{jj} &= C_3(k_1)T^{5} + C_2(k_1)T^{3} + C_1(k_1)T \geq \sigma_{k_1-2}(B) +\sigma_{k_1-1}(B) + \sigma_{k_1}(B)
\end{align*}
and show that whenever
\[
T \geq \max{\Big(T_1, k_1\sqrt{\frac{C_2(k_1)}{C_3(k_1)}}\Big)}
\]
we get $\sigma_{k_1-2}(B) \leq (1+k_1^{-2} + k_1^{-4}){C_3(k_1)T^5} \leq (1+k_1^{-1}){C_3(k_1)T^5}$, because $T \geq T_1$ ensures $C_2(k_1)T^3 \geq k_1^2C_1(k_1)T$ and $T \geq T_2 = k_1\sqrt{\frac{C_2(k_1)}{C_3(k_1)}}$ ensures $C_3(k_1)T^5 \geq k_1^2 C_2(k_1)T^3$. The precise constants $C_i(k_1)$ are not important; the goal is to show that for sufficiently large $T$ each singular value admits an upper bound (roughly) matching the corresponding diagonal element. Similarly, we can ensure that for every $i$ we have $\sigma_i(B) \leq (1+k_1^{-1})C_{k_1 -i+1}(k_1)T^{2k_1 - 2i + 1}$, whenever
\[
T > T_{i} = \max{\Big(T_{i-1}, k_1\sqrt{\frac{C_{i}(k_1)}{C_{i+1}(k_1)}}\Big)}
\]
Recall Eq.~\eqref{det} where $\text{det}(B) = C(k_1) T^{k_1^2}$. Assume, for contradiction, that $\sigma_{k_1}(B) < \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}$. Then whenever $T \geq \max{\Big(8d, \sup_{i}k_1\sqrt{\frac{C_{i}(k_1)}{C_{i+1}(k_1)}}\Big)}$
\begin{align*}
\text{det}(B) &= C(k_1) T^{k_1^2} \\
\prod_{i=1}^{k_1}\sigma_i(B) &= C(k_1) T^{k_1^2} \\
\sigma_{k_1}(B)(1+k_1^{-1})^{k_1-1} T^{k_1^2-1}\prod_{i=2}^{k_1}C_{i+1}(k_1) &\geq C(k_1) T^{k_1^2} \\
\sigma_{k_1}(B) &\geq \frac{C(k_1)T}{(1+k_1^{-1})^{k_1-1}\prod_{i=2}^{k_1}C_{i+1}(k_1)} \\
&\geq \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}
\end{align*}
which is a contradiction. This means that $\sigma_{k_1}(B) \geq \frac{C(k_1) T}{e \prod_{i=1}^{k_1} C_{i+1}(k_1)}$. This implies
\[
\sigma_{\min}(\Gamma_T(A)) \geq 1 + \frac{\sigma_{\min}(P)^2}{\sigma_{\max}(P)^2} C(k_1)T
\]
for some function $C(k_1)$ that depends only on $k_1$.
\end{proof}
It is possible that $\alpha(d)$ might be exponentially small in $d$; however, for many cases, such as orthogonal or diagonal matrices, $\alpha(d)=1$, as shown in~\cite{simchowitz2018learning}. We are not interested in finding the best bound $\alpha(d)$; rather, we show that the bound of Proposition~\ref{gramian_lb} exists and assume that such a bound is known.
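The linear growth asserted in Proposition~\ref{gramian_lb} is easy to verify numerically. The following sketch (Python with NumPy; the matrix, a $2 \times 2$ Jordan block with eigenvalue $1$, and the horizons are illustrative choices, not quantities from the paper) computes $\sigma_{\min}(\Gamma_t(A))/t$ and shows it stays bounded away from zero:
\begin{verbatim}
import numpy as np

# Illustrative 2x2 Jordan block with eigenvalue 1, so rho_i(A) = 1 <= 1 + C/T.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
d = A.shape[0]

def gramian(A, t):
    # Gamma_t(A) = I + sum_{k=1}^{t} A^k (A^k)'
    G, Ak = np.eye(d), np.eye(d)
    for _ in range(t):
        Ak = Ak @ A
        G += Ak @ Ak.T
    return G

for t in [16, 32, 64, 128, 256]:
    smin = np.linalg.svd(gramian(A, t), compute_uv=False)[-1]
    print(t, smin / t)  # ratio settles near a positive constant (~1/4 here)
\end{verbatim}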
\begin{prop}
\label{gramian_ratio}
Let $t_1/t_2 = \beta > 1$ and $A$ be a $d \times d$ matrix. Then
\[
\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \leq C(d, \beta)
\]
where $C(d, \beta)$ is a polynomial in $\beta$ of degree at most $d^2$ whenever $t_i \geq 8d$.
\end{prop}
\begin{proof}
Since $\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \geq 0$
\begin{align*}
\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) &\leq \text{tr}(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \\
&\leq \text{tr}(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d \sigma_1(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x}
\end{align*}
Now
\begin{align*}
\Gamma_{t_i}(A) &= P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}PP^{\prime}(\Lambda^{t})^{*}P^{-1 \prime} \\
&\preceq \sigma_{\max}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}P^{-1 \prime} \\
\Gamma_{t_i}(A) &\succeq \sigma_{\min}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}P^{-1 \prime}
\end{align*}
Then this implies
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\sigma_{\max}(P)^2}{\sigma_{\min}(P)^2} \sup_{||x|| \neq 0}\frac{x^{\prime} \sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*} x}{x^{\prime}\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*} x}
\]
Then from Lemma 12 in~\cite{abbasi2011improved} we get that
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*} x}{x^{\prime}\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*} x} \leq \frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})}
\]
Then
\begin{align*}
\frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})} &= \frac{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}
\end{align*}
where $l$ is the number of Jordan blocks of $A$. Our assertion then follows from Eq.~\eqref{det}, which implies that the determinant of $\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*}$ is equal to the product of its diagonal elements (times a factor that depends only on the Jordan block size), \textit{i.e.}, $C(k_i)t_2^{k_i^2}$. As a result the ratio is given by
\[
\frac{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\prod_{i=1}^l \text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})} = \prod_{i=1}^l \beta^{k_i^2}
\]
whenever $t_2, t_1 \geq 8d$. Summarizing we get
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\sigma_{\max}(P)^2}{\sigma_{\min}(P)^2} \prod_{i=1}^l \beta^{k_i^2}
\]
\end{proof}
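As a quick sanity check, in the scalar case ($d = l = k_1 = 1$) with $A = 1$ we have $\Gamma_t(A) = t+1$, so $\lambda_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) = \frac{t_1+1}{t_2+1} \approx \beta$, a polynomial in $\beta$ of degree $1 = d^2$, consistent with Proposition~\ref{gramian_ratio}.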
\section{Inconsistency of OLS}
\label{inconsistent}
We will now show that when a matrix is irregular, it cannot be consistently estimated by OLS despite a high signal-to-noise ratio. Consider the two matrices
\begin{align*}
A_r &= \begin{bmatrix}
1.1 & 1 \\
0 & 1.1
\end{bmatrix}, A_o = \begin{bmatrix}
1.1 & 0 \\
0 & 1.1
\end{bmatrix}
\end{align*}
Here $A_r$ is a regular matrix and $A_o$ is not. Now we run Eq.~\eqref{lti} with $A=A_r, A_o$ for $T=10^3$. Let the OLS estimates of $A_r, A_o$ be $\hat{A}_r, \hat{A}_o$ respectively. Define
\begin{align*}
\beta_r &= [A_r]_{1,2}, \beta_o = [A_o]_{1,2} \\
\hat{\beta_r} &= [\hat{A}_r]_{1,2}, \hat{\beta}_o = [\hat{A}_o]_{1,2}
\end{align*}
Although $\beta_r \approx \hat{\beta}_r$, $\hat{\beta}_o$ does not equal zero. Instead, Fig.~\ref{beta_dist} shows that $\hat{\beta}_o$ has a non--trivial distribution, bimodal with modes near $\{-0.55, 0.55\}$, and as a result OLS is inconsistent for $A_o$. This happens because the sample covariance matrix for $A_o$ is nearly singular despite the fact that $\Gamma_T(A_o) \succeq (1.1)^{2T} I$, \textit{i.e.}, a high signal to noise ratio. In general, the relation between OLS identification of $A$ and its controllability Gramian, $\Gamma_T(A)$, is tenuous for unstable systems, unlike what is suggested in~\cite{simchowitz2018learning}.
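To make the experiment concrete, the following minimal simulation sketch (Python with NumPy; the seed and the number of trials are illustrative choices, not taken from the paper) generates trajectories from both systems and compares the OLS estimates of the $(1,2)$ entry:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 1000
A_r = np.array([[1.1, 1.0], [0.0, 1.1]])  # regular
A_o = np.array([[1.1, 0.0], [0.0, 1.1]])  # irregular

def ols_estimate(A):
    # Simulate X_{t+1} = A X_t + eta_{t+1}, standard Gaussian noise, X_0 = 0,
    # then solve min_B sum_t ||X_{t+1} - B X_t||^2.
    d = A.shape[0]
    X = np.zeros((T + 1, d))
    for t in range(T):
        X[t + 1] = A @ X[t] + rng.standard_normal(d)
    return np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

beta_r_hat = [ols_estimate(A_r)[0, 1] for _ in range(200)]
beta_o_hat = [ols_estimate(A_o)[0, 1] for _ in range(200)]
print(np.median(np.abs(np.array(beta_r_hat) - 1.0)))  # close to 0
print(np.median(np.abs(beta_o_hat)))                  # bounded away from 0
\end{verbatim}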
\begin{figure}
\includegraphics[width=\linewidth]{content/cdf.pdf}
\caption{CDF and PDF of $\hat{\beta}_o$}
\label{beta_dist}
\end{figure}
To see this singularity observe that
\begin{align*}
X_{t+1} &= A_{o} \begin{bmatrix} X^{(1)}_t \\
X^{(2)}_t \end{bmatrix} + \begin{bmatrix} \eta_{t+1}^{(1)} \\
\eta_{t+1}^{(2)} \end{bmatrix}\\
Y_T &= \begin{bmatrix}
\sum_{t=1}^T (X^{(1)}_t)^2 & \sum_{t=1}^T (X^{(1)}_t)(X^{(2)}_t)\\
\sum_{t=1}^T (X^{(1)}_t)(X^{(2)}_t) & \sum_{t=1}^T (X^{(2)}_t)^2
\end{bmatrix}
\end{align*}
where $X^{(1)}_t, X^{(2)}_t$ are independent of each other. Define $a=1.1$.
\begin{prop}
\label{singular}
Let $\{\eta_t\}_{t=1}^T$ be i.i.d. standard Gaussian. Then, whenever $T^2 \leq a^T$, we have that
\[
||\hat{A}_o - A_{o}|| = \gamma_T
\]
where $\gamma_T$ is a random variable that admits a continuous pdf and does not decay to zero as $T \rightarrow \infty$. Further, the sample covariance matrix has the following singular values
\begin{align*}
\sigma_1(\sum_{t=1}^T X_t X_t^{\top}) &= \Theta(a^{2T}), \sigma_2(\sum_{t=1}^T X_t X_t^{\top}) = O(\sqrt{T}a^{T})
\end{align*}
\end{prop}
The proof is given in Section~\ref{inconsistent} and Proposition~\ref{condition_number}. Proposition~\ref{singular} suggests that the consistency of the OLS estimate depends directly on the condition number of the sample covariance matrix. In fact, OLS is inconsistent when the condition number grows exponentially fast in $T$ (as in the case of $A_o$). The proof requires a careful expansion of the (appropriately scaled) inverse of the sample covariance matrix using Woodbury's identity. Since the sample covariance matrix is highly ill--conditioned, it magnifies the noise--covariate cross terms, so the identification error no longer decays as time increases. Although for stable and marginally stable $A$ this invertibility can be characterized by $\sigma_{\min}(\Gamma_T(A))$, such an intuition does not extend to explosive systems. This is because the behavior of $Y_T$ is dominated by the ``past'' noise terms, such as $\eta_1, \eta_2$, much more than by the recent ones, such as $\eta_{T-1}, \eta_{T}$. When $A$ is explosive, all singular values of $A^T$ grow exponentially fast. Since $X_T = A^{T-1} \eta_1 + A^{T-2} \eta_2 + \ldots + A \eta_{T-1} + \eta_T$, the behavior of $X_T$ is dominated by $A^{T-1} \eta_1$. This causes a very strong dependence between $X_T$ and $X_{T+1}$, and some structural constraints (such as regularity) are necessary for OLS identification.
\section{Contributions}
\label{contributions}
In this paper we offer a new statistical analysis of the ordinary least squares estimator of the dynamics $X_{t+1} = A X_t + \eta_{t+1}$ with no inputs. Unlike previous work, we do not impose any restrictions on the spectral radius of $A$ and provide nearly optimal rates (up to logarithmic factors) for every regime of $\rho(A)$. The contributions of our paper can be summarized as follows
\begin{itemize}
\item At the center of our techniques is a systematic analysis of the sample covariance $\sum_{t=1}^T X_t X_t^{\prime}$ and a certain self normalized martingale difference term. Although such a coupled analysis is similar in flavor to~\cite{simchowitz2018learning}, it comes without the overhead of choosing a block size and applies to a general case when covariates grow exponentially in time.
\item Specifically, for the case when $\rho(A) \leq 1$, we recover the optimal finite time identification error rates previously derived in~\cite{simchowitz2018learning}. For the case when all eigenvalues are outside the unit circle, we argue that small ball methods cannot be used. Instead we use anti--concentration arguments discussed in~\cite{faradonbeh2017finite,lai1983asymptotic}. By leveraging subgaussian tail inequalities we sharpen previous error bounds by removing polynomial factors. We also show that this analysis is indeed tight by deriving a matching lower bound.
\item We provide the first analysis of the general case when eigenvalues of $A$ are arbitrarily distributed in three regimes: stable, marginally stable and explosive. This involves a careful analysis of the noise-covariate cross terms as the underlying process behaves differently in each of these regimes.
\item We show that when $A$ does not satisfy certain regularity conditions, OLS identification is statistically inconsistent, even when the signal-to-noise ratio is high. Our result indicates that consistency of OLS identification depends on the condition number of the sample covariance matrix, rather than on the signal-to-noise ratio itself.
\end{itemize}
\section{Lower Bound for $Y_T$ when $A \in \mathcal{S}_0 \cup \mathcal{S}_1$}
\label{short_proof}
Here we will prove our results when $\rho(A) \leq 1+ C/T$. Assume for this case that $\eta_{t} = L\bar{\eta}_t$, where the $\{\bar{\eta}_t\}_{t=1}^T$ are i.i.d. and all elements of $\bar{\eta}_t$ are independent. Further, $L$ has full row rank. Define $\sigma_{\min}(LL^{\prime}) = R^2 > 0$ and let $\sigma_{\max}(LL^{\prime})=1$ (this normalization does not affect our result: $R^2$ is just the inverse of the condition number of $LL^{\prime}$).
Define
\begin{align*}
P &= A Y_{T-1} A^{\prime} \\
Q &= \sum_{\tau=0}^{T-1}{Ax_{\tau} \eta_{\tau+1}^{\prime} }\\
V &= TI \\
T_{\eta}(\delta) &= C\Big(\log{\frac{2}{\delta}} + d \log{5}\Big) \\
\mathcal{E}_{1}(\delta) &= \Bigg\{||Q||^2_{(P+V)^{-1}} \leq 8 \log{\Bigg(\dfrac{5^d\text{det}(P+V)^{1/2} \text{det}(V)^{-1/2}}{\delta}\Bigg)}\Bigg\} \\
\mathcal{E}_{2}(\delta) &= \Bigg\{||\sum_{\tau=0}^{T-1} Ax_{\tau} x_{\tau}^{\prime}A^{\prime}|| \leq \frac{T \text{tr}(\Gamma_{T}(A) - I)}{\delta}\Bigg\}\\
\mathcal{E}_{\eta}(\delta) &= \{T > T_{\eta}(\delta), \frac{3R^2}{4}I \preceq \dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{\prime} \preceq \frac{5}{4}I\} \\
\mathcal{E}(\delta) &= \mathcal{E}_{\eta}(\delta) \cap \mathcal{E}_{1}(\delta) \cap \mathcal{E}_{2}(\delta)
\end{align*}
Recall that
\begin{equation}
\label{lb_step1}
{Y}_T \succeq A {Y}_{T-1} A^{\prime} + \sum_{t=0}^{T-1} {Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} + \sum_{t=1}^T \eta_t \eta_t^{\prime}
\end{equation}
Our goal here will be to control
\begin{equation}
\label{cross_terms}
||Q||_2
\end{equation}
By Proposition~\ref{selfnorm_bnd} and Proposition~\ref{energy_markov}, it is true that $\mathbb{P}(\mathcal{E}_{1}(\delta) \cap \mathcal{E}_{2}(\delta)) \geq 1-2\delta$. We will show that
$$\mathcal{E}(\delta) = \mathcal{E}_{\eta}(\delta) \cap \mathcal{E}_{1}(\delta) \cap \mathcal{E}_{2}(\delta) \implies Y_T \succeq \frac{TR^2}{4}I$$
Under $\mathcal{E}_{\eta}(\delta)$, we get
\begin{align}
{Y}_T &\succeq A {Y}_{T-1} A^{\prime} + \sum_{t=0}^{T-1} {Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} + \sum_{t=1}^T \eta_t \eta_t^{\prime} \nonumber \\
{Y}_T &\succeq A {Y}_{T-1} A^{\prime} + \sum_{t=0}^{T-1} {Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} + \frac{3}{4}R^2TI \nonumber \\
U^{\prime} {Y}_T U &\geq U^{\prime} A Y_{T-1} A^{\prime} U + U^{\prime} \sum_{t=0}^{T-1} \Bigg({Ax_t \eta_{t+1}^{\prime} + \eta_{t+1} x_t^{\prime}A^{\prime}} \Bigg) U + \frac{3}{4}TR^2 \hspace{3mm} \forall U\in \mathcal{S}^{d-1} \label{contra_eq}
\end{align}
Intersecting Eq.~\eqref{contra_eq} with $\mathcal{E}_1(\delta) \cap \mathcal{E}_2(\delta)$, we find under $\mathcal{E}(\delta)$
\begin{align*}
&||Q||^2_{(P+V)^{-1}} \leq 8 \log{\Bigg(\dfrac{5^d\text{det}(P+V)^{1/2} \text{det}(V)^{-1/2}}{\delta}\Bigg)} \\
&\leq 8 \log{\Bigg(\dfrac{5^d \text{det}(\frac{T \text{tr}(\Gamma_{T}(A) - I)}{\delta}I + TI)^{1/2}\text{det}(TI)^{-1/2}}{\delta}\Bigg)} \\
&\leq 8 \log{\Bigg(\dfrac{5^d ({ \text{tr}(\Gamma_{T}(A) - I)} + 1)^{d/2}}{\delta^d}\Bigg)}
\end{align*}
Using Proposition~\ref{psd_result_2} and letting $\kappa^2 = U^{\prime} P U$, we get
\begin{align*}
||U^{\prime}Q||_2 &\leq \sqrt{\kappa^2 + T}\sqrt{8 \log{\Bigg(\dfrac{5^d ({ \text{tr}(\Gamma_{T}(A) - I)} + 1)^{d/2}}{\delta^d}\Bigg)}}
\end{align*}
So Eq.~\eqref{contra_eq} implies
\begin{align}
U^{\prime} {Y}_T U &\geq \kappa^2 - \sqrt{(\kappa^2 + T)}{ \sqrt{ 16d \log{( \text{tr}(\Gamma_T - I)+1)} + 32d \log{\frac{5}{\delta}}}} + \frac{3}{4}TR^2 \nonumber
\end{align}
which gives us
\begin{align}
U^{\prime} \frac{{Y}_T}{T} U &\geq \frac{\kappa^2}{T} - \sqrt{(\frac{\kappa^2}{T} + 1)}\underbrace{ \sqrt{ \frac{16d}{T} \log{( \text{tr}(\Gamma_T - I)+1)} + \frac{32d}{T}\log{\frac{5}{\delta}}}}_{=\beta} + \frac{3}{4}R^2 \label{contra_eq2}
\end{align}
If we can ensure
\begin{equation}
\label{t_req}
\frac{TR^4}{128} \geq { \frac{d}{2} \log{( \text{tr}(\Gamma_T - I)+1)} + d \log{\frac{5}{\delta}} }
\end{equation}
then $\beta \leq R^2/2$, \textit{i.e.},
\[
\sqrt{ \frac{16d}{T} \log{( \text{tr}(\Gamma_T - I)+1)} + \frac{32d}{T}\log{\frac{5}{\delta}}} \leq \frac{R^2}{2}
\]
Let $T$ be large enough that Eq.~\eqref{t_req} is satisfied; then Eq.~\eqref{contra_eq2} implies
\begin{equation}
\label{final_eq}
U^{\prime} \frac{{Y}_T}{T} U \geq \frac{\kappa^2}{T} - \frac{\sqrt{(\frac{\kappa^2}{T} + 1)}R^2}{2} + \frac{3R^2}{4} \geq \frac{R^2}{4} + \frac{\kappa^2}{2T}
\end{equation}
Since $U$ is arbitrary, Eq.~\eqref{final_eq} implies
\begin{align}
Y_T \succeq \frac{TR^2}{4}I \label{lower_bnd}
\end{align}
with probability at least $1 - 3\delta$ whenever
\begin{align}
\rho_i(A) &\leq 1 + \frac{c}{T} \nonumber\\
T &\geq \max{\Big(C\Big(\log{\frac{2}{\delta}} + d \log{5}\Big), \frac{C}{R^4}\Big({ \frac{d}{2} \log{( \text{tr}(\Gamma_T - I)+1)} + d \log{\frac{5}{\delta}} }\Big)\Big)} \label{t_req_comb}
\end{align}
\begin{remark}
Eq.~\eqref{t_req} is satisfied whenever $\text{tr}(\Gamma_T - I)$ grows at most polynomially in $T$. This is true whenever $\rho(A) \leq 1 +\frac{c}{T}$.
\end{remark}
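For example, in the scalar case $A = 1 + c/T$ we have $\text{tr}(\Gamma_T - I) = \sum_{t=1}^{T}(1+c/T)^{2t} \leq T e^{2c}$, so the right hand side of Eq.~\eqref{t_req} grows only logarithmically in $T$ and the condition is eventually satisfied.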
\section{Sharpened bounds when $1 - \frac{c}{T}\leq \rho_i(A) \leq 1 + \frac{c}{T}$}
\label{sharp_bounds}
Here we show that the bound for $Y_T$ in Eq.~\eqref{lower_bnd} can be sharpened to quadratic growth in $T$. The key idea is to require Eq.~\eqref{lower_bnd} to hold for every $t \geq \frac{T}{2}$ simultaneously, \textit{i.e.}, we need
\begin{align}
Y_t \succeq \frac{tR^2}{4}I \label{lower_bnd_t}
\end{align}
simultaneously for all $t \geq \frac{T}{2}$ with high probability. By similar arguments as before, as long as we have
\begin{align}
\rho_i(A) &\leq 1 + \frac{c}{T} \nonumber\\
t &\geq \max{\Big(C\Big(\log{\frac{2}{\delta}} + d \log{5}\Big), \frac{C}{R^4}\Big({ \frac{d}{2} \log{( \text{tr}(\Gamma_t - I)+1)} + d \log{\frac{5}{\delta}} }\Big)\Big)} \label{t_req_comb_t}
\end{align}
we can conclude with probability at least $1 - 3\delta$ that $Y_t \succeq \frac{tR^2}{4}I$. By a union bound, this means that with probability at least $1 - \frac{3\delta T}{2}$ we have, simultaneously for all $t \geq \frac{T}{2}$,
\[
Y_t \succeq \frac{tR^2}{4}I
\]
when Eq.~\eqref{t_req_comb_t} is satisfied for each $t$. Since the LHS of Eq.~\eqref{t_req_comb_t} is least at $t = T/2$ and RHS is greatest at $t=T$, a sufficient condition for every $t \geq \frac{T}{2}$ satisfying Eq.~\eqref{t_req_comb_t} is the following
\[
T \geq \max{\Big(C\Big(\log{\frac{2}{\delta}} + d \log{5}\Big), \frac{C}{R^4}\Big({ \frac{d}{2} \log{( \text{tr}(\Gamma_T - I)+1)} + d \log{\frac{5}{\delta}} }\Big)\Big)}
\]
Then by substituting $\delta \rightarrow \frac{2\delta}{3T}$ we can conclude with probability at least $1-\delta$ that
\[
Y_t \succeq \frac{tR^2}{4}I
\]
simultaneously for every $t \geq \frac{T}{2}$ whenever
\begin{equation}
T \geq \max{\Big(C\Big(\log{\frac{3T}{2\delta}} + d \log{5}\Big), \frac{C}{R^4}\Big({ \frac{d}{2} \log{( \text{tr}(\Gamma_T - I)+1)} + d \log{\frac{15T}{2\delta}} }\Big)\Big)} \label{T_sharp_cond}
\end{equation}
Define $\gamma_{t-1} = \sqrt{U^{\prime} A^{\prime} Y_{t-1} A U}$ and Eq.~\eqref{final_eq} becomes
\begin{align}
U^{\prime} Y_t U &\geq \gamma_{t-1}^2 - \sqrt{(\gamma_{t-1}^2 + t)}\underbrace{ \sqrt{ {16d} \log{( \text{tr}(\Gamma_t - I)+1)} + {32d}\log{\frac{15T}{2\delta}}}}_{\text{Under Eq.~\eqref{T_sharp_cond} is}\leq \frac{R^2\sqrt{t}}{2}} + \frac{3}{4}tR^2 \nonumber \\
&\geq \gamma_{t-1}^2 -(\gamma_{t-1} + \sqrt{t})\sqrt{ {16d} \log{( \text{tr}(\Gamma_t - I)+1)} + {32d}\log{\frac{15T}{2\delta}}} + \frac{3t}{4}R^2 \nonumber \\
&\geq \gamma_{t-1}^2 -\gamma_{t-1} \sqrt{ {16d} \log{( \text{tr}(\Gamma_t - I)+1)} + {32d}\log{\frac{15T}{2\delta}}} + \frac{3tR^2}{4} - \sqrt{t} \underbrace{\sqrt{ {16d} \log{( \text{tr}(\Gamma_t - I)+1)} + {32d}\log{\frac{15T}{2\delta}}}}_{\leq R^2\frac{\sqrt{t}}{2}} \nonumber \\
&\geq \gamma_{t-1}^2 \Big(1 - \sqrt{\frac{{ {16d} \log{( \text{tr}(\Gamma_t - I)+1)} + {32d}\log{\frac{15T}{2\delta}}}}{\gamma_{t-1}^2}}\Big) + \frac{tR^2}{4} \nonumber \\
&\geq \gamma_{t-1}^2 \Big(1 - \underbrace{\sqrt{\frac{{ {16d} \log{( \text{tr}(\Gamma_T - I)+1)} + {32d}\log{\frac{15T}{2\delta}}}}{\gamma_{t-1}^2}}}_{=\sqrt{\frac{c(A, \delta)}{\gamma_{t-1}^2}}}\Big) + \frac{TR^2}{8} \label{contra_eq3}
\end{align}
Observe that
\begin{equation}
\gamma_{t-1} = \sqrt{U^{\prime} A^{\prime} Y_{t-1} A U}\geq \sigma_{\min}(A)\sqrt{\frac{TR^2}{8e}} \label{gamma_t}
\end{equation}
Eq.~\eqref{contra_eq3} will give us a non--trivial bound only when $\frac{c(A, \delta)}{\gamma_{t-1}^2} \leq 1/4$ which is true whenever
\begin{equation}
T \geq \frac{64ec(A, \delta)}{R^2\sigma_{\min}^2(A)} \label{bet_cond}
\end{equation}
The scaling $1 - \sqrt{\frac{c(A,\delta)}{\gamma_{t-1}^2 }}$ in Eq.~\eqref{contra_eq3} depends on $\gamma_{t-1}$ itself. We will show that
\begin{align*}
&\gamma^2_{t-1} = T\Omega(1) \implies \gamma^2_{t-1} = T\Omega\Big(\sqrt{\frac{T}{c(A, \delta)}}\Big) \\
&\gamma^2_{t-1} = T\Omega\Big(\Big(\frac{T}{c(A, \delta)}\Big)^{1/2}\Big) \implies \gamma^2_{t-1} = T\Omega\Big(\Big(\frac{T}{c(A, \delta)}\Big)^{3/4}\Big) \\
&\gamma^2_{t-1} = T\Omega\Big(\Big(\frac{T}{c(A, \delta)}\Big)^{\frac{2^k-1}{2^k}}\Big) \implies \gamma^2_{t-1} = T\Omega\Big(\Big(\frac{T}{c(A, \delta)}\Big)^{\frac{2^{k+1} - 1}{2^{k+1}}}\Big)\\
&\implies \hdots \implies \gamma^2_{t-1} = T\Omega\Big(\frac{T}{c(A, \delta)}\Big)
\end{align*}
From Eq.~\eqref{contra_eq3},\eqref{gamma_t} since
\[
\sqrt{\frac{c(A,\delta)}{\gamma_{t-1}^2 }} \leq \sqrt{\frac{16ec(A,\delta)}{\sigma_{\min}(AA^{\prime}) R^2 T}} = \beta_1
\]
it follows that
\begin{align}
{Y_t} &{\succeq} \Bigg(1 - \underbrace{\sqrt{\frac{16ec(A,\delta)}{\sigma_{\min}(AA^{\prime}) TR^2}}}_{=\beta_1}\Bigg)A {Y_{t-1}} A^{\prime} + \frac{R^2TI}{8}\label{recur_eq}
\end{align}
The goal here is to refine the upper bound for $\sqrt{\frac{c(A,\delta)}{\gamma_{t-1}^2 }}$ such that
\[
\sqrt{\frac{c(A,\delta)}{\gamma_{t-1}^2 }} \leq \frac{C}{T}
\]
Eq.~\eqref{recur_eq} implies that
\begin{align*}
Y_t &\overset{(a)}{\succeq} \frac{TR^2}{8}\sum_{k=1}^{\min{(\lfloor \frac{1}{\beta_1}\rfloor, \frac{T}{4})}} (1 - \beta_1)^k A^k A^{k \prime} + \frac{R^2TI}{16} \\
&\overset{(b)}{\succeq} \frac{TR^2}{16e}\sum_{k=1}^{\min{(\lfloor \frac{1}{\beta_1}\rfloor, \frac{T}{4})}} A^k A^{k \prime} + \frac{R^2TI}{16} \\
&{\succeq} \frac{R^2T}{16e} \Gamma_{\lfloor \frac{1}{\beta_1}\rfloor}(A) + \frac{R^2TI}{16}
\end{align*}
Here
\begin{equation}
\label{beta}
\beta_1 = \sqrt{\frac{16ec(A,\delta)}{\sigma_{\min}(AA^{\prime}) R^2T}}
\end{equation}
Due to the choice of $T, d$ we will usually have $\lfloor \frac{1}{\beta_1}\rfloor^2 \leq \frac{T}{4}$. $(a)$ follows by successively expanding Eq.~\eqref{recur_eq}, $(b)$ follows because $(1 - \beta_1)^{\lfloor \frac{1}{\beta_1}\rfloor} \geq \frac{e^{-1}}{2}$ since $\beta_1 \leq 1/2$ by Eq.~\eqref{bet_cond}. Then we can conclude that
\begin{align}
\gamma_{t-1}^2 &\geq {\sigma_{\min}(AY_{t-1}A^{\prime})} \nonumber \\
&\geq \frac{R^2T\sigma_{\min}(A A^{\prime})\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_1}\rfloor}(A)) }{16e} \label{beta_inter}
\end{align}
which gives us
\begin{align}
\sqrt{\frac{c(A, \delta)}{\gamma_{t-1}^2}} &\leq \Big(\frac{ 16ec(A, \delta)}{ R^2T\sigma_{\min}(A A^{\prime})\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_1}\rfloor}(A))}\Big)^{1/2}=\beta_2 \label{recursion}
\end{align}
It is clear from Eq.~\eqref{recursion} that we get a recursion during the refinement process. Specifically at the $k^{th}$ repetition of Eq.~\eqref{recur_eq} up to Eq.~\eqref{recursion} we get,
\begin{equation}
\label{betak_rec}
\beta_k = \Big(\frac{ 16ec(A, \delta)}{ R^2T\sigma_{\min}(A A^{\prime})\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_{k-1}}\rfloor}(A))}\Big)^{1/2}
\end{equation}
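As an illustration of the refinement, the sketch below (Python with NumPy; the matrix $A$, the horizon $T$, and the constant \texttt{c0}, which stands in for $16ec(A,\delta)/(R^2\sigma_{\min}(AA^{\prime}))$, are placeholder values rather than quantities from the paper) iterates Eq.~\eqref{betak_rec} and exhibits the monotone decrease of $\beta_k$:
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # placeholder marginally stable matrix
d = A.shape[0]
T, c0 = 10_000, 50.0  # c0 stands in for 16 e c(A, delta) / (R^2 sigma_min(A A'))

def sigma_min_gramian(A, t):
    # sigma_min of Gamma_t(A) = I + sum_{k=1}^{t} A^k (A^k)'
    G, Ak = np.eye(d), np.eye(d)
    for _ in range(t):
        Ak = Ak @ A
        G += Ak @ Ak.T
    return np.linalg.svd(G, compute_uv=False)[-1]

beta = np.sqrt(c0 / T)  # beta_1, obtained by replacing the Gramian with I
for k in range(1, 10):
    beta_next = np.sqrt(c0 / (T * sigma_min_gramian(A, int(1 / beta))))
    print(k, beta, beta_next)  # beta_{k+1} <= beta_k
    if abs(beta_next - beta) < 1e-12:
        break
    beta = beta_next
\end{verbatim}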
Now $\beta_k$ is a non-increasing sequence. We show this by induction. Since $\sigma_{\min}(\Gamma_t(A)) \geq 1$ and
\[
\sqrt{\frac{16ec(A,\delta)}{\sigma_{\min}(AA^{\prime}) R^2T}} \leq 1
\]
it follows trivially that $\beta_2 \leq \beta_1$. Assume our hypothesis holds for all $k \leq m$. Then since $\Gamma_{t_1}(A) \succeq \Gamma_{t_2}(A)$ whenever $t_1 \geq t_2$ we have
\begin{align*}
\Big(\frac{ 16ec(A, \delta)}{ R^2T\sigma_{\min}(A A^{\prime})\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_{m}}\rfloor}(A))}\Big)^{1/2} &\leq \Big(\frac{ 16ec(A, \delta)}{R^2 T\sigma_{\min}(A A^{\prime})\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_{m-1}}\rfloor}(A))}\Big)^{1/2} \\
\beta_{m+1} &\leq \beta_{m}
\end{align*}
and we have proven our hypothesis. To now find the best upper bound for $\sqrt{\frac{c(A, \delta)}{\gamma_{t-1}^2}}$ we find the steady state solution for Eq.~\eqref{betak_rec}, \textit{i.e.}
\begin{equation}
\beta_0^2\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A)) = \Big(\frac{ 16ec(A, \delta)}{ R^2T\sigma_{\min}(A A^{\prime})}\Big) \label{final_sol}
\end{equation}
Now a solution satisfies $\beta_0 \in (\frac{2C}{\sigma_{\min}(A A^{\prime})TR^2}, 1)$. To see this, first set $\beta_0 = 1$; then LHS $>$ RHS. Next set $\beta_0 = \frac{2C}{\sigma_{\min}(A A^{\prime})TR^2}$; then, since $\rho_{\min}(A^t) \geq \sigma_{\min}(A^{t})$ and $\rho_i \leq 1+C/T$, we see that
\begin{align*}
\frac{4C^2\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A))}{\sigma_{\min}(A A^{\prime})^2 T^2} &\leq \frac{4\sum_{t=0}^{\sigma_{\min}(A)^2R^2T/2C} \rho_{\min}(A)^{2t}}{R^4\sigma_{\min}(A A^{\prime})^2T^2/C^2} \\
&\leq \frac{2eC}{\sigma_{\min}(A)^2T}\leq \Big(\frac{ 16ec(A, \delta)}{ R^2T\sigma_{\min}(A A^{\prime})}\Big)
\end{align*}
and LHS $<$ RHS because $C$ is a constant but $c(A, \delta)$ is growing logarithmically with $T$ (and we can pick $T$ accordingly). By ensuring that
\begin{align*}
T \geq \frac{64ec(A, \delta)}{R^2\sigma_{\min}(A)^2}
\end{align*}
we also ensure that $\beta_1 < 1/2$ and as a result all subsequent $\beta_k < 1/2$. Now we can conclude that whenever $T \geq \frac{64ec(A, \delta)}{R^2\sigma_{\min}(A)^2}$ we get Eq.~\eqref{recur_eq}
\begin{equation}
\label{recur_eq2}
{Y_t} {\succeq} (1 - \beta_0 )A {Y_{t-1}} A^{\prime} + \frac{TR^2I}{8}
\end{equation}
and following as before we get with probability at least $1-\delta$
\begin{align}
Y_T &\succeq \frac{TR^2}{16e} \Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A) + \frac{TR^2I}{16} \label{stable_yt}
\end{align}
where $\beta_0$ is solution to
\[
\beta_0^2\sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A)) = \Big(\frac{ 16ec(A, \delta)}{ TR^2\sigma_{\min}(A A^{\prime})}\Big)
\]
and
\[
c(A, \delta) = { {16d} \log{( \text{tr}(\Gamma_T - I)+1)} + {32d}\log{\frac{15T}{2\delta}}}
\]
It should be noted that $\frac{1}{\beta_0}$ will equal $\frac{\sqrt{\alpha(d)}TR^2\sigma_{\min}(A A^{\prime})}{16ec(A, \delta)}$, \textit{i.e.}, grow linearly with $T$, as shown in Proposition~\ref{gramian_lb}. Then it can be seen from Eq.~\eqref{stable_yt} that
\begin{align}
Y_T &\succeq \frac{TR^2}{16e} \Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A) + \frac{TR^2I}{16} \nonumber\\
Y_T &\succeq \frac{TR^2}{16e} \sigma_{\min}(\Gamma_{\lfloor \frac{1}{\beta_0}\rfloor}(A))I + \frac{TR^2I}{16}\nonumber \\
&\succeq \frac{TR^2}{16e} \frac{TR^2\sqrt{\alpha(d)}\sigma_{\min}(A A^{\prime})}{16ec(A, \delta)C(d)} I = \frac{\sqrt{\alpha(d)} T^2R^4\sigma_{\min}(A A^{\prime})}{256e^2c(A, \delta)C(d)}I \label{quad}
\end{align}
\subsection*{Bounding $\sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T}$}
We want to show that the probability
\[
\mathbb{P}\Bigg(\Bigg | \Bigg | \sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T}\Bigg | \Bigg | \geq z \Bigg)
\]
is small. To show this, observe
\begin{align*}
\mathbb{P}\Bigg(\Bigg | \Bigg | \sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T}\Bigg | \Bigg | \geq z \Bigg) &\leq \mathbb{P}\Bigg( \Bigg | \Bigg | \sum_{t=0}^{T-1}\frac{A x_t\eta_{t+1}^{'}}{T}\Bigg | \Bigg | \geq z/2\Bigg) \\
&\leq 5^{2d} \mathbb{P}\Bigg(\Bigg | \sum_{t=0}^{T-1}\frac{w^{\prime}A x_t\eta_{t+1}^{\prime} v}{T}\Bigg | \geq z/8 \Bigg) \\
&= 5^{2d} \mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq z/8\Bigg)
\end{align*}
where $Z_t = \eta_{t}^{\prime} v$ and $W_t = w^{\prime}A x_t$ for unit vectors $w, v$ ranging over a $1/4$--net of $\mathcal{S}^{d-1}$. Instead of $\mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq z/8\Bigg)$, we analyze
$$\mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq z/8, \sum_{t=1}^T W_t^2 \leq \frac{3T}{2}\text{tr}(\Gamma_T - I)\Bigg)$$
In Proposition~\ref{cumulative_energybnd} we substitute $\delta$ by $5^{-2d}\delta$ and for $T > \frac{16 (\gamma_T-1)^2 \text{tr}(\Gamma_T - I)}{c\text{tr}(\Gamma_{T/2}-I)^2}(\log{\frac{2}{\delta}} + 2d \log{5})$ we obtain
\begin{align*}
\mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq z/8, \sum_{t=1}^T W_t^2 \leq \frac{3T}{2}\text{tr}(\Gamma_T-I)\Bigg) + 5^{-2d}\delta &\geq \mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq z/8\Bigg)
\end{align*}
Using Lemma~\ref{simchowitz_learning} we get
\begin{equation}
\label{simchowitz_bnd}
2 \exp{\Bigg(-\frac{Tz^2}{192 \text{tr}(\Gamma_T-I)}\Bigg)} +5^{-2d} \delta \geq \mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq z/8\Bigg)
\end{equation}
Let
$$T_1(\delta) = \max{\Bigg\{768 \text{tr}(\Gamma_T-I)(2d \log{5} + \log{\frac{2}{\delta}}), \frac{16 (\gamma_T-1)^2 \text{tr}(\Gamma_T - I)}{c\text{tr}(\Gamma_{T/2}-I)^2} (\log{\frac{2}{\delta}} + 2d \log{5})\Bigg\}}$$
then it is easy to check that for $T > T_1(\delta)$ and $z=1/4$ in Eq.~\eqref{simchowitz_bnd}
\[
2 \exp{\Bigg(-\frac{T(1/4)^2}{192 \text{tr}(\Gamma_T-I)}\Bigg)} \leq 5^{-2d} \delta
\]
and we have
\begin{align*}
\mathbb{P}\Bigg(\Bigg |\sum_{t=0}^{T-1}\frac{W_tZ_{t+1}}{T}\Bigg | \geq 1/32\Bigg) &\leq 5^{-2d} 2 \delta \\
\mathbb{P}\Bigg(\Bigg | \Bigg | \sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T}\Bigg | \Bigg | \geq 1/4 \Bigg) &\leq 2 \delta
\end{align*}
For brevity, define $\mathcal{E}_1(\delta) = \Bigg\{T > T_1(\delta), \Bigg | \Bigg | \sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T}\Bigg | \Bigg | \leq 1/4\Bigg\}$
\subsection*{Bounding $\sum_{t=1}^T \frac{\eta_t \eta_t^{\prime}}{T}$}
From Proposition~\ref{noise_energy_bnd} it follows that $\sum_{t=1}^T \frac{\eta_t \eta_t^{\prime}}{T}$ concentrates around $I$. We have, with probability at least $1 - \delta$, that
\begin{align*}
||\dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} - I|| \leq \sqrt{\dfrac{32\Big(\log{\dfrac{2}{\delta}} + d\log{5}\Big)}{T}}
\end{align*}
Again we want $T$ large enough so that
\[
\sqrt{\dfrac{32\Big(\log{\dfrac{2}{\delta}} + d\log{5}\Big)}{T}} \leq 1/4
\]
This holds whenever
$$T \geq T_2(\delta) = 512\Big(\log{\dfrac{2}{\delta}} + d\log{5}\Big) $$
For $T > T_2(\delta)$ we will have, with at least probability $1 - \delta$, that
\[
\frac{3}{4} I \preceq \dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} \preceq \frac{5}{4}I
\]
Define $\mathcal{E}_2(\delta) = \Bigg\{T \geq T_2(\delta), \frac{3}{4} I \preceq \dfrac{1}{T}\sum_{t=1}^T \eta_t \eta_t^{'} \preceq \frac{5}{4}I\Bigg\}$
\subsection*{Bounding $\frac{x(0) x(0)^{\prime}}{T} - \frac{Ax(T) x(T)^{\prime}A^{\prime}}{T}$}
Since the initial condition is a constant, $\frac{x(0) x(0)^{\prime}}{T}$ decays as $1/T$. From Assumption~\ref{bnd_init}, $\frac{||x(0) x(0)^{\prime}||}{T} \leq \frac{\gamma^2_0}{T}$. For $\frac{Ax(T) x(T)^{\prime}A^{\prime}}{T}$ we use Proposition~\ref{stateOp_bnd}, from which we can conclude that
\begin{align}
\mathbb{P}\Bigg(\frac{||Ax_T||_2}{\sqrt{T}} \geq \gamma + \frac{||A^{T+1}x_0||_2}{\sqrt{T}} \Bigg) &\leq 2d \exp\Big\{-\frac{T \gamma^2}{2 \text{tr}(\Gamma_T - I)}\Big\} \label{state_bnd}
\end{align}
Now, fix $0 < \epsilon < 1/4$. Then in Eq.~\eqref{state_bnd} we set $\gamma = \frac{1}{4} - \frac{\beta_0 \gamma_0}{\sqrt{T}}$, which is at least $\epsilon$ whenever $T > \frac{16 \beta_0^2 \gamma_0^2}{(1 -4 \epsilon)^2}$, and we get
\begin{align}
\mathbb{P}\Bigg(\frac{||Ax_T||_2}{\sqrt{T}} \geq 1/4 \Bigg) &\leq 2d \exp\Big\{-\frac{T\epsilon^2}{2 \text{tr}(\Gamma_T - I)}\Big\} \label{state_bnd2}
\end{align}
Define $T_3(\delta) = \max{\{\frac{16 \beta_0^2 \gamma_0^2}{(1 -4 \epsilon)^2}, \log{\frac{2d}{\delta}} \frac{2 \text{tr}(\Gamma_T - I)}{\epsilon^2}\}}$. Then, if $T > T_3(\delta)$, with probability at least $1 - \delta$ we have
\[
\frac{||Ax_T||_2}{\sqrt{T}} \leq 1/4
\]
Define $\mathcal{E}_3(\delta) = \Bigg\{T > T_3(\delta), \frac{||Ax_T||_2}{\sqrt{T}} \leq 1/4\Bigg\}$.
\subsection*{Event $\mathcal{E}_1(\delta) \cap \mathcal{E}_2(\delta) \cap \mathcal{E}_3(\delta)$}
Since $\mathbb{P}(\mathcal{E}_1) \geq 1-2\delta, \mathbb{P}(\mathcal{E}_2) \geq 1-\delta, \mathbb{P}(\mathcal{E}_3) \geq 1-\delta$, we have $\mathbb{P}(\mathcal{E}_1(\delta) \cap \mathcal{E}_2(\delta) \cap \mathcal{E}_3(\delta)) \geq 1-4\delta$.
\vspace{3mm}
Let $T_0 = \max\{T_1(\delta), T_2(\delta), T_3(\delta)\}$. Then for $T \geq T_0$ with probability at least $1 - 4 \delta$ we have
\begin{align*}
\hat{Y}_T &= \frac{x_0 x_0^{'}}{T} + A \hat{Y}_T A^{'} \\
&+ \sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T} + \sum_{t=1}^{T}\frac{\eta_t \eta_t^{'}}{T} \\
&- \frac{Ax(T)x(T)^{'}A^{'}}{T} \\
\hat{Y}_T &\succeq 0 + A \hat{Y}_T A^{'} \\
&- \frac{1}{4}I + \frac{3}{4} I \\
&- \frac{1}{16}I \\
&= \frac{7}{16} I + A \hat{Y}_T A^{'}
\end{align*}
On the other hand, by a similar argument
\begin{align*}
\hat{Y}_T &= \frac{x_0 x_0^{'}}{T} + A \hat{Y}_T A^{'} \\
&+ \sum_{t=0}^{T-1}\frac{(A x_t\eta_{t+1}^{'} + \eta_{t+1}x_t^{'}A^{'})}{T} + \sum_{t=1}^{T}\frac{\eta_t \eta_t^{'}}{T} \\
&- \frac{Ax(T)x(T)^{'}A^{'}}{T} \\
\hat{Y}_T &\preceq \frac{x_0 x_0^{'}}{T} + A \hat{Y}_T A^{'} \\
&+\frac{1}{4}I + \frac{5}{4} I \\
&\preceq 2I + A \hat{Y}_T A^{'}
\end{align*}
where the last step uses $\frac{x_0 x_0^{\prime}}{T} \preceq \frac{\gamma_0^2}{T}I \preceq \frac{I}{2}$, valid for $T \geq 2\gamma_0^2$. Define the solutions of the following Lyapunov equations
\begin{align*}
P_1 &= A P_1 A^{'} + \dfrac{7}{16}I \\
P_2 &= A P_2 A^{'} + 2I
\end{align*}
Then, for $T > T_0$, with probability at least $1- 4\delta$ we have $P_1 \preceq \hat{Y}_T \preceq P_2$, \textit{i.e.},
$$T P_1 \preceq Y_T \preceq T P_2$$
\end{proof}
\section{Appendix}
We want to bound the quantity
\[
||(\textbf{X}^T \textbf{X})^{+} \textbf{X}^T E||_2
\]
We provide a simplified analysis of this case here
\begin{align*}
||(\textbf{X}^T \textbf{X})^{+} \textbf{X}^T E||_2 &\leq ||((\textbf{X}^T \textbf{X})^{+})^{1/2}||_2||((\textbf{X}^T \textbf{X})^{+})^{1/2} \textbf{X}^T E||_2 \\
&= ||(Y_T^{+})^{1/2}||_2 ||(Y_T^{+})^{1/2} \sum_{t=1}^T X_t \eta_t^T||_2
\end{align*}
Recall that $Y_T$ is the cumulative state energy at time $T$.
\begin{remark}
We will attempt to show invertibility of $\textbf{X}^{T}\textbf{X}$ without a small ball argument. The case of explosive systems requires a more nuanced argument, since the control over the state needed for a small--ball type argument is not available.
\end{remark}
Lemma~\ref{selfnorm_bnd} gives us a way to control the norm of $S_T$. Unfortunately, we do not know for which $V$ we get the best bound. A key observation is the following:
\begin{prop}
\label{matrix_prop}
Let $Y = U \Sigma^Y U^T$ be positive semi--definite and let $V = U \Lambda^V U^T \succ 0$ have the same eigenvectors. Then, for any vector $S$ that lies in the subspace spanned by the rows of $Y$, we have
\[
||S||_{(V +Y)^{-1}} \geq \inf_{1 \leq i \leq d, \sigma^{Y}_i > 0}\dfrac{\sigma^{Y}_i}{\lambda^V_i + \sigma^{Y}_i} ||S||_{Y^{+}}
\]
Here $\sigma_i^Y$ is the $i^{th}$ singular value of $Y$ and $\sigma_i^Y \geq \sigma_{i+1}^Y$.
\end{prop}
\begin{proof}
The proof follows by simple linear algebra. Write $S = \sum_{i=1}^d \lambda_i u_i$, where the $u_i$ are the eigenvectors of $Y$; note that $\lambda_i = 0$ whenever the eigenvalue of $Y$ corresponding to $u_i$ is $0$, since $S$ lies in the row space of $Y$.
\[
S^T (V + Y)^{-1} S = \sum_{i=1}^d \lambda_i^2 (\sigma_i + \lambda_i^V)^{-1}
\]
Similarly, if $\sigma_K$ is the smallest singular value that is greater than $0$, we have
\[
S^T Y^{+} S = \sum_{i=1}^K \lambda_i^2 \sigma_i^{-1}
\]
and the claim follows in a straightforward way.
\end{proof}
\begin{prop}
\label{psd_result_2}
Let $P, V$ be a psd and pd matrix respectively and define $\bar{P} = P + V$. Let there exist some matrix $Q$ for which we have the following relation
\[
||\bar{P}^{-1/2} Q|| \leq \gamma
\]
For any vector $v$ such that $v^{\prime} P v = \alpha, v^{\prime} V v =\beta$ it is true that
\[
||v^{\prime}Q|| \leq \sqrt{\beta+\alpha} \gamma
\]
\end{prop}
\begin{proof}
Since
\[
||\bar{P}^{-1/2} Q||_2^2 \leq \gamma^2
\]
for any vector $v \in \mathcal{S}^{d-1}$ we will have
\[
\frac{v^{\prime} \bar{P}^{1/2}\bar{P}^{-1/2} Q Q^{\prime}\bar{P}^{-1/2}\bar{P}^{1/2} v}{v^{\prime} \bar{P} v} \leq \gamma^2
\]
and substituting $v^{\prime} \bar{P} v = \alpha + \beta$ gives us
\begin{align*}
{v^{\prime} Q Q^{\prime} v} &\leq \gamma^2{v^{\prime} \bar{P} v} \\
&= (\alpha + \beta) \gamma^2
\end{align*}
\end{proof}
\begin{prop}
\label{inv_jordan}
Consider a Jordan block matrix $J_d(\lambda)$ given by Definition~\ref{jordan}, then $J_d(\lambda)^{-k}$ is a matrix where each off--diagonal (and the diagonal) has the same entries, \textit{i.e.},
\begin{equation}
J_d(\lambda)^{-k} =\begin{bmatrix}
a_1 & a_2 & a_3 & \hdots & a_d \\
0 & a_1 & a_2 & \hdots & a_{d-1} \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & a_1 & a_2 \\
0 & 0 & \hdots & 0 & a_1
\end{bmatrix}_{d \times d}
\end{equation}
for some $\{a_i\}_{i=1}^d$.
\end{prop}
\begin{proof}
$J_d(\lambda) = (\lambda I + N)$ where $N$ is the matrix with all ones on the $1^{st}$ (upper) off-diagonal. $N^k$ is just all ones on the $k^{th}$ (upper) off-diagonal and $N$ is a nilpotent matrix with $N^d = 0$. Then
\begin{align*}
(\lambda I + N)^{-1} &= (\sum_{l=0}^{d-1} (-1)^{l}\lambda^{-l-1}N^{l}) \\
(-1)^{k-1}(k-1 )!(\lambda I + N)^{-k} &= \Big(\sum_{l=0}^{d-1} (-1)^{l}\frac{d^{k-1}\lambda^{-l-1}}{d \lambda^{k-1}}N^{l}\Big) \\
&= \Big(\sum_{l=0}^{d-1} (-1)^{l}c_{l, k}N^{l}\Big)
\end{align*}
and the proof follows in a straightforward fashion.
\end{proof}
\begin{prop}
\label{reg_invertible}
Let $A$ be a regular matrix and $A = P^{-1} \Lambda P$ be its Jordan decomposition. Then
\[
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 > 0
\]
Further $\phi_{\min}(A) > 0$ where $\phi_{\min}(\cdot)$ is defined in Definition~\ref{outbox}.
\end{prop}
\begin{proof}
When $A$ is regular, the geometric multiplicity of each eigenvalue is $1$. This implies that $A^{-1}$ is also regular. Regularity is equivalent to the case when minimal polynomial equals characteristic polynomial [Cite source], \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i A^{-i+1}||_2 &> 0
\end{align*}
Since $A^{-j} = P^{-1} \Lambda^{-j} P$ we have
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}P||_2 &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i P^{-1}\Lambda^{-i+1}||_2 \sigma_{\min}(P) &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 \sigma_{\min}(P) \sigma_{\min}(P^{-1}) &> 0 \\
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 &>0
\end{align*}
Since $\Lambda$ is Jordan matrix of the Jordan decomposition, it is of the following form
\begin{equation}
\Lambda =\begin{bmatrix}
J_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
where $J_{k_i}(\lambda_i)$ is a $k_i \times k_i$ Jordan block corresponding to eigenvalue $\lambda_i$. Then
\begin{equation}
\Lambda^{-k} =\begin{bmatrix}
J^{-k}_{k_1}(\lambda_1) & 0 & \hdots & 0 &0 \\
0 & J^{-k}_{k_2}(\lambda_2) & 0 & \hdots &0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & \hdots & 0 & J^{-k}_{k_{l}}(\lambda_l) & 0 \\
0 & 0 & \hdots & 0 & J^{-k}_{k_{l+1}}(\lambda_{l+1})
\end{bmatrix}
\end{equation}
Since $||\sum_{i=1}^d a_i \Lambda^{-i+1}||_2 >0$, without loss of generality assume that there is a non--zero element in $k_1 \times k_1$ block. This implies
\begin{align*}
||\underbrace{\sum_{i=1}^d a_i J_{k_1}^{-i+1}(\lambda_1)}_{=S}||_2 > 0
\end{align*}
By Proposition~\ref{inv_jordan} we know that each off--diagonal (including diagonal) of $S$ will have same element. Let $j_0 = \inf{\{j | S_{ij} \neq 0\}}$ and in column $j_0$ pick the element that is non--zero and highest row number, $i_0$. By design $S_{i_0, j_0} > 0$ and further
$$S_{k_1 -(j_0 - i_0), k_1} = S_{i_0, j_0}$$
because they are part of the same off--diagonal (or diagonal) of $S$. Thus the row $k_1 - (j_0 - i_0)$ has only one non--zero element because of the minimality of $j_0$.
We proved that for any $||a||=1$ there exists a row with only one non--zero element in the matrix $\sum_{i=1}^d a_i \Lambda^{-i+1}$. This implies that if $v$ is a vector with all non--zero elements, then $||\sum_{i=1}^d a_i \Lambda^{-i+1} v||_2 > 0$, \textit{i.e.},
\begin{align*}
\inf_{||a||_2 = 1}||\sum_{i=1}^d a_i \Lambda^{-i+1} v ||_2 &> 0
\end{align*}
This implies
\begin{align*}
\inf_{||a||_2 = 1}||[v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v] a||_2 &> 0\\
\sigma_{\min}([v, \Lambda^{-1} v, \ldots, \Lambda^{-d+1}v]) &> 0 \\
\end{align*}
By Definition~\ref{outbox} we have
\begin{align*}
\phi_{\min}(A) &> 0
\end{align*}
\end{proof}
\begin{prop}[Error for Hadamard's Inequality in~\cite{ipsen2011determinant}]
\label{det_lb}
For any positive definite matrix $M$ with diagonal entries $m_{jj}$, $1 \leq j \leq d$ and $\rho$ is the spectral radius of the matrix $C$ with elements
\begin{align*}
c_{ij} &= 0 \hspace{3mm} \text{if } i=j \\
&=\frac{m_{ij}}{\sqrt{m_{ii}m_{jj}}} \hspace{3mm} \text{if } i\neq j
\end{align*}
then
\begin{align*}
0 < \frac{\prod_{j=1}^d m_{jj} - \text{det}(M)}{\prod_{j=1}^d m_{jj}} \leq 1 - e^{-\frac{d \rho^2}{1+\lambda_{\min}}}
\end{align*}
where $\lambda_{\min} = \min_{1 \leq j \leq d} \lambda_j(C)$.
\end{prop}
\begin{prop}
\label{gramian_lb}
Let $1 - C/T \leq \rho_i(A) \leq 1 + C/T$. Then there exists $\alpha(A)$ depending only on $A$ such that for every $8 d \leq t \leq T$
\[
\sigma_{\min}(\Gamma_t(A)) \geq t \alpha(A) > 1
\]
\end{prop}
\begin{proof}
Since $A = P^{-1} \Lambda P$ where $\Lambda$ is the Jordan matrix. Since $\Lambda$ can be complex we will assume that adjoint instead of transpose. This gives
\begin{align*}
\Gamma_T(A) &= I + \sum_{t=1}^T A^t (A^{t})^{\prime} \\
&= I + P^{-1}\sum_{t=1}^T \Lambda^tPP^{\prime} (\Lambda^t)^{*} P^{-1 \prime} \\
&\succeq I + \sigma_{\min}(P)^2P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{*} P^{-1 \prime}
\end{align*}
Then this implies that
\begin{align*}
\sigma_{\min}( \Gamma_T(A)) &\geq 1 +\sigma_{\min}(P)^2 \sigma_{\min}(P^{-1}\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} P^{-1 \prime}) \\
&\geq 1 + \sigma_{\min}(P)^4 \sigma_{\min}(\sum_{t=1}^T \Lambda^t(\Lambda^t)^{\prime} )
\end{align*}
Now
\begin{align*}
\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} &= \begin{bmatrix}
\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*} & 0 & \hdots & 0 \\
0 & \sum_{t=1}^T J^{t}_{k_2}(\lambda_2)(J^{t}_{k_2}(\lambda_2))^{*} & 0 & \hdots \\
\vdots & \vdots & \ddots & \ddots \\
0 & \hdots & 0 & \sum_{t=1}^T J^{t}_{k_{l}}(\lambda_l) (J^{t}_{k_l}(\lambda_l))^{*}
\end{bmatrix}
\end{align*}
Since $\Lambda$ is block diagonal we only need to worry about the least singular value corresponding to some block. Let this block be the one corresponding to $J_{k_1}(\lambda_1)$, \textit{i.e.},
\begin{equation}
\sigma_{\min}(\sum_{t=0}^T \Lambda^t(\Lambda^t)^{*} ) =\sigma_{\min}(\sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}) \label{bnd_1}
\end{equation}
Define $B = \sum_{t=0}^T J^{t}_{k_1}(\lambda_1)(J^{t}_{k_1}(\lambda_1))^{*}$. Note that $J_{k_1}(\lambda_1) = (\lambda_1 I + N)$ where $N$ is the nilpotent matrix that is all ones on the first off--diagonal and $N^{k_1} = 0$. Then
\begin{align*}
(\lambda_1 I + N)^t &= \sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j} \\
(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} &= \Big(\sum_{j=0}^t {t \choose j} \lambda_1^{t-j}N^{j}\Big)\Big(\sum_{k=0}^t {t \choose k} (\lambda_1^{*})^{t-k}(N^{k})^{\prime}\Big) \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{0 \leq k < j \leq t} {t \choose k}{t \choose j} \lambda_1^{t-j} (\lambda_1^{*})^{t-k} N^j (N^k)^{\prime} \\
&\qquad+ \sum_{0 \leq j < k \leq t} {t \choose k}{t \choose j} \lambda_1^{t-j} (\lambda_1^{*})^{t-k} N^j (N^k)^{\prime} \\
&= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{0 \leq k < j \leq t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2(t-j)} (\lambda_1^{*})^{j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On the $(j-k)$ upper off--diagonal}} \\
&\qquad+ \sum_{0 \leq j < k \leq t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2(t-k)} \lambda_1^{k-j} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On the $(k-j)$ lower off--diagonal}}
\end{align*}
Let $\lambda_1 = r e^{i\theta}$. Then, similar to~\cite{erxiong1994691}, there is a unitary matrix $D = \text{Diag}(1, e^{-i\theta}, e^{-2i\theta}, \ldots, e^{-i(k_1-1)\theta})$ such that the matrix
\[
D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*}
\]
is real. Indeed, any term on the $(j-k)$ upper off--diagonal of $(\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*}$ is of the form $r_0 e^{-i(j-k)\theta}$ with $r_0 \geq 0$ real, and conjugation by $D$ multiplies the entries of this off--diagonal by $e^{i(j-k)\theta}$; the resulting entry is $e^{i(j-k)\theta} r_0 e^{-i(j-k)\theta} = r_0$, which is real (the lower off--diagonals are handled symmetrically). Then we have
\begin{align}
D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} &= \sum_{j=0}^t {t \choose j}^2 |\lambda_1|^{2(t-j)} \underbrace{N^j (N^j)^{\prime}}_{\text{Diagonal terms}} + \sum_{0 \leq k < j \leq t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2t-j-k} N^{j-k} N^k(N^k)^{\prime}}_{\text{On the $(j-k)$ upper off--diagonal}} \nonumber\\
&\qquad+ \sum_{0 \leq j < k \leq t} {t \choose k}{t \choose j} \underbrace{|\lambda_1|^{2t-j-k} N^j(N^{j})^{\prime} (N^{k-j})^{\prime}}_{\text{On the $(k-j)$ lower off--diagonal}} \label{real}
\end{align}
Since $D$ is unitary and $D (\lambda_1 I + N)^t((\lambda_1 I + N)^t)^{*} D^{*} = (|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}$, we can simply work with the case in which $\lambda_1$ is real and non--negative: replacing $\lambda_1$ by $|\lambda_1|$ leaves the singular values unchanged. We therefore take $B=\sum_{t=0}^T (|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}$ and examine the growth of its entries.
\begin{align}
B_{ll} &=\sum_{t=0}^T [(|\lambda_1| I + N)^t((|\lambda_1| I + N)^t)^{\prime}]_{ll} \nonumber \\
&= \sum_{t=0}^T \sum_{j=0}^{k_1-l} {t \choose j}^2 |\lambda_1|^{2(t-j)} \label{bll}
\end{align}
Since $1-C/T \leq |\lambda_1| \leq 1+C/T$, for every $t \leq T$ (and $T \geq 2C$) we have
$$e^{-2C} \leq |\lambda_1|^t \leq e^{C},$$
and hence $e^{-4C} \leq |\lambda_1|^{2(t-j)} \leq e^{2C}$ for all $0 \leq j \leq t$.
Then
\begin{align}
B_{11} &= \sum_{t=0}^T \sum_{j=0}^{k_1-1} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
&\geq e^{-4C} \sum_{t=0}^T \sum_{j=0}^{k_1-1} {t \choose j}^2 \nonumber\\
& \geq e^{-4C} \sum_{t=T/2}^T \sum_{j=0}^{k_1-1} {t \choose j}^2 \geq e^{-4C} \sum_{t=T/2}^T c_{k_1} \frac{t^{2k_1} - 1}{t^2 - 1} \geq C(A) T^{2k_1 - 1} \label{lb}
\end{align}
An upper bound follows in a similar fashion:
\begin{align}
B_{11} &= \sum_{t=0}^T \sum_{j=0}^{k_1-1} {t \choose j}^2 |\lambda_1|^{2(t-j)} \nonumber\\
& \leq e^{2C} (T+1) \sum_{j=0}^{k_1-1} T^{2j} \leq C(A) T^{2k_1 - 1} \label{ub1}
\end{align}
Similarly, $B_{ll} = C(A)T^{2k_1-2l +1}$ and $B_{jk} = C(A)T^{2k_1-j-k +1}$. For brevity we use the same symbol $C(A)$ to denote different functions of $A$, as we are interested only in the growth with respect to $T$. To summarize,
\begin{align}
B_{jk} &= C(A)T^{2k_1 - j - k +1} \label{jordan_value}
\end{align}
whenever $T \geq 8d$. Recall Proposition~\ref{det_lb} and let the $M$ there be equal to $B$. Since
\[
C_{ij} = \frac{B_{ij}}{\sqrt{B_{ii} B_{jj}}} = C(A)\frac{T^{2k_1 - i - j +1}}{\sqrt{T^{2k_1 - 2i + 1}\, T^{2k_1 - 2j + 1}}} = C(A),
\]
it turns out that $C_{ij}$ is independent of $T$, and consequently $\lambda_{\min}(C)$ and $\rho$ are independent of $T$. Then $\text{det}(B) \geq \prod_{j=1}^{k_1} B_{jj} e^{-\frac{d\rho^2}{1 + \lambda_{\min}}} = C(A) \prod_{j=1}^{k_1} B_{jj}$. Further, using the values for $B_{jj}$ we get
\begin{equation}
\label{det}
\text{det}(B) = C(A) \prod_{j=1}^{k_1} B_{jj} = \prod_{j=1}^{k_1} C(A) T^{2k_1 - 2j +1} = C(A) T^{k_1^2},
\end{equation}
since $\sum_{j=1}^{k_1}(2k_1 - 2j + 1) = k_1^2$. Next we use the Schur--Horn theorem: let $\sigma_i(B)$ be the ordered singular values of $B$, where $\sigma_i(B) \geq \sigma_{i+1}(B)$ (as $B$ is positive semidefinite, these coincide with its eigenvalues). Then the eigenvalues of $B$ majorize its diagonal, \textit{i.e.}, for any $k \leq k_1$
\[
\sum_{i=1}^k \sigma_i(B) \geq \sum_{i=1}^{k} B_{ii},
\]
where, by Eq.~\eqref{jordan_value}, the diagonal entries are already in decreasing order: $B_{ii} \geq B_{jj}$ when $i \leq j$. Since $\sum_{i=1}^{k_1} \sigma_i(B) = \text{tr}(B) = \sum_{i=1}^{k_1} B_{ii}$, this implies that
\begin{align*}
B_{k_1 k_1}=C_0(A)T &\geq \sigma_{k_1}(B) \\
\sum_{j=k_1-1}^{k_1} B_{jj} = C_1(A)T^{3} + C_2(A)T^2 + C_3(A)T &\geq \sigma_{k_1-1}(B) + \sigma_{k_1}(B)
\end{align*}
Then it can be checked that for $T \geq T_1 = \max{\Big(\sqrt{\frac{4C_0(A) + 4C_3(A)}{C_1(A)}}, \frac{4C_2(A)}{C_1(A)}\Big)}$ we have $\sigma_{k_1-1}(B) \leq \frac{3C_1(A)T^3}{2}$. Again, to upper bound $\sigma_{k_1-2}(B)$ we use a similar argument:
\begin{align*}
\sum_{j=k_1-2}^{k_1} B_{jj} &= C_1(A)T^{5} + C_2(A)T^{4} + C_3(A)T^2 + C_4(A)T \geq \sigma_{k_1-2}(B) +\sigma_{k_1-1}(B) + \sigma_{k_1}(B)
\end{align*}
and show that whenever
\[
T \geq \max{\Big(T_1, \frac{6C_2(A)}{C_1(A)}\Big)}
\]
we get $\sigma_{k_1-2}(B) \leq \frac{3C_1(A)T^5}{2}$. The constants $C_i(A)$ are not important; the goal is to show that, for sufficiently large $T$, each singular value is (roughly) upper bounded by the corresponding diagonal element, \textit{i.e.}, $\sigma_i(B) \leq C(A)T^{2k_1 - 2i + 1}$. Suppose now, for contradiction, that $\sigma_{k_1}(B) = o(T)$. Then, as shown in Eq.~\eqref{det}, we would have
\[
\text{det}(B) = C(A)T^{k_1^2} = \prod_{i=1}^{k_1}\sigma_i(B) \leq \Big(\prod_{i=1}^{k_1-1} C(A)T^{2k_1 - 2i + 1}\Big)\, o(T) = o(T^{k_1^2}),
\]
which is a contradiction. This means that $\sigma_{k_1}(B) \geq C(A)T$. This implies
\[
\sigma_{\min}(\Gamma_T(A)) \geq 1 + \frac{\sigma_{\min}(P)^2}{\sigma_{\max}(P)^2}\, C(A)T
\]
for some function $C(A)$ that depends only on $A$; this gives the desired $\alpha(A)$.
\end{proof}
It is possible that $\alpha(A)$ depends exponentially on $d$; however, in many cases, such as orthogonal or diagonal matrices, $\alpha(A)=1$ (as shown in~\cite{simchowitz2018learning}). We are not interested in finding the sharpest bound $\alpha(A)$; rather, we show that the bound of Proposition~\ref{gramian_lb} exists, and we assume that such a bound is known.
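The linear growth asserted by Proposition~\ref{gramian_lb} is also easy to observe numerically. The sketch below (ours; the single Jordan block with a unit--magnitude eigenvalue is an illustrative choice) prints $\sigma_{\min}(\Gamma_T(A))/T$, which settles near a constant, consistent with the existence of $\alpha(A)$:
\begin{verbatim}
import numpy as np

def min_sv_gramian(A, T):
    d = A.shape[0]
    G = np.eye(d, dtype=complex)
    At = np.eye(d, dtype=complex)
    for _ in range(T):
        At = At @ A                        # A^t
        G += At @ At.conj().T              # Gamma_T = I + sum A^t (A^t)*
    return np.linalg.svd(G, compute_uv=False).min()

d = 3
A = np.exp(1j * 0.5) * np.eye(d) + np.diag(np.ones(d - 1), 1)
for T in (100, 200, 400, 800):
    print(T, min_sv_gramian(A, T) / T)     # ratio stabilizes
\end{verbatim}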
\begin{prop}
\label{gramian_ratio}
Let $t_1/t_2 = \beta > 1$. Then
\[
\sigma_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \leq C(A, \beta)
\]
where $C(A, \beta)$ is a function depending only on $A$ and $\beta$.
\end{prop}
\begin{proof}
Since
\begin{align*}
\sigma_1(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) &\leq \text{tr}(\Gamma_{t_1}(A)\Gamma_{t_2}^{-1}(A)) \\
&= \text{tr}(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d \sigma_1(\Gamma_{t_2}^{-1/2}(A)\Gamma_{t_1}(A)\Gamma_{t_2}^{-1/2}(A)) \\
&\leq d\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x}
\end{align*}
Then from Lemma 12 in~\cite{abbasi2011improved}, and since $\Gamma_{t_1}(A) \succeq \Gamma_{t_2}(A)$ for $t_1 \geq t_2$, we get that
\[
\sup_{||x|| \neq 0}\frac{x^{\prime} \Gamma_{t_1}(A) x}{x^{\prime}\Gamma_{t_2}(A) x} \leq \frac{\text{det}(\Gamma_{t_1}(A))}{\text{det}(\Gamma_{t_2}(A))}
\]
Now
\begin{align*}
\Gamma_{t_i}(A) &= P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}PP^{*}(\Lambda^{t})^{*}(P^{-1})^{*} \\
&\preceq \sigma_{\max}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}(P^{-1})^{*} \\
\text{det}(\Gamma_{t_i}(A)) &\leq \text{det}\Big(\sigma_{\max}(P)^2 P^{-1}\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}(P^{-1})^{*}\Big) \\
&= |\text{det}(P^{-1})|^2\, \sigma_{\max}(P)^{2d}\, \text{det}\Big(\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}\Big)
\end{align*}
Similarly
$$\text{det}(\Gamma_{t_i}(A)) \geq |\text{det}(P^{-1})|^2\sigma_{\min}(P)^{2d}\, \text{det}\Big(\sum_{t=0}^{t_i} \Lambda^{t}(\Lambda^{t})^{*}\Big)$$
Then
\begin{align*}
\frac{\text{det}(\Gamma_{t_1}(A))}{\text{det}(\Gamma_{t_2}(A))} &\leq \Big(\frac{\sigma_{\max}(P)}{\sigma_{\min}(P)}\Big)^{2d} \frac{\text{det}(\sum_{t=0}^{t_1} \Lambda^{t}(\Lambda^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} \Lambda^{t}(\Lambda^{t})^{*})} \\
&= \Big(\frac{\sigma_{\max}(P)}{\sigma_{\min}(P)}\Big)^{2d} \prod_{i=1}^l \frac{\text{det}(\sum_{t=0}^{t_1} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}{\text{det}(\sum_{t=0}^{t_2} J_{k_i}(\lambda_i)^{t}(J_{k_i}(\lambda_i)^{t})^{*})}
\end{align*}
By our discussion in the previous proposition, specifically Eq.~\eqref{real}, we can again restrict attention to the case when each $\lambda_i$ is real. For this case we use Proposition~\ref{det_lb}, which bounds the error between the product of the diagonal elements of a positive definite matrix and its determinant. Combined with the growth rates of Eq.~\eqref{jordan_value}, each determinant above is, up to a factor depending only on $A$, the product of its diagonal entries, so every per--block ratio is bounded by $C(A)\,\beta^{k_i^2}$, and the full product is a function $C(A, \beta)$ of $A$ and $\beta$ alone, as claimed.
\end{proof}
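Numerically, the boundedness claimed in Proposition~\ref{gramian_ratio} can be observed in the same setting (a sketch of ours; $\beta = 2$ and the matrix $A$ are illustrative choices):
\begin{verbatim}
import numpy as np

def gramian(A, T):
    d = A.shape[0]
    G = np.eye(d, dtype=complex)
    At = np.eye(d, dtype=complex)
    for _ in range(T):
        At = At @ A
        G += At @ At.conj().T
    return G

A = np.exp(1j * 0.5) * np.eye(3) + np.diag(np.ones(2), 1)
beta = 2
for t2 in (50, 100, 200, 400):
    M = gramian(A, beta * t2) @ np.linalg.inv(gramian(A, t2))
    print(t2, np.linalg.svd(M, compute_uv=False).max())  # stays bounded
\end{verbatim}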
\usepackage{times}
\setlength\parindent{0pt}
\usepackage{graphicx}
\usepackage{tikz}
\usetikzlibrary{fit,positioning,arrows,automata,calc}
\tikzset{
main/.style={circle, minimum size = 5mm, thick, draw =black!80, node distance = 10mm},
connect/.style={-latex, thick},
box/.style={rectangle, draw=black!100}
}
\usepackage{epigraph}
\newcommand\tab[1][1cm]{\hspace*{#1}}
\usepackage{csquotes}
\usepackage{hyperref}
\usepackage{url}
\usepackage{relsize}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage{color}
\usepackage{scalerel}
\usepackage[toc,page]{appendix}
\usepackage{blindtext}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{dsfont}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{float}
\graphicspath{ {images/} }
\usepackage{letltxmacro}%
\usepackage{thmtools,thm-restate}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{cuted}
\newcommand\givenbase[1][]{\:#1\lvert\:}
\let\given\givenbase
\newcommand\sgiven{\givenbase[\delimsize]}
\DeclarePairedDelimiterX\Basics[1](){\let\given\sgiven #1}
\DeclarePairedDelimiter\abs{\lvert}{\rvert}
\DeclarePairedDelimiter\norm{\lVert}{\rVert}
\DeclarePairedDelimiterX{\infdivx}[2]{(}{)}{#1\;\delimsize\|\;#2}
\newcommand{D\infdivx}{D\infdivx}
\newcommand{\thickhat}[1]{\mathbf{\hat{\text{$#1$}}}}
\newcommand{\thickbar}[1]{\mathbf{\bar{\text{$#1$}}}}
\newcommand{\thicktilde}[1]{\mathbf{\tilde{\text{$#1$}}}}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\makeatletter
\newcommand{\distas}[1]{\mathbin{\overset{#1}{\kern\z@\sim}}}%
\newsavebox{\mybox}\newsavebox{\mysim}
\newcommand{\distras}[1]{%
\savebox{\mybox}{\hbox{\kern3pt$\scriptstyle#1$\kern3pt}}%
\savebox{\mysim}{\hbox{$\sim$}}%
\mathbin{\overset{#1}{\kern\z@\resizebox{\wd\mybox}{\ht\mysim}{$\sim$}}}%
}
\makeatother
\usepackage{enumitem}
\newlist{inparaenum}{enumerate}{2}
\setlist[inparaenum]{nosep}
\setlist[inparaenum,1]{label=\bfseries\arabic*.}
\setlist[inparaenum,2]{label=\arabic{inparaenumi}\emph{\alph*})}
\usepackage{bm}
\usepackage{breqn}
\usepackage{physics}
\usepackage{bbold}
\usepackage[colorinlistoftodos]{todonotes}
\newtheorem{cor}{Corollary}[section]
\newtheorem{lem}{Lemma}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{property}{Property}[section]
\newtheorem{assumption}{Assumption}
\newtheorem{definition}{Definition}
\newtheorem{remark}{Remark}
\newtheorem{theorem}{Theorem}
\newtheorem{thm}{Theorem}
\newcommand{\boldsymbol{X}}{\boldsymbol{X}}
\newcommand{\boldsymbol{Y}}{\boldsymbol{Y}}
\newcommand{\boldsymbol{M}}{\boldsymbol{M}}
\newcommand{\boldsymbol{E}}{\boldsymbol{E}}
\newcommand{\boldsymbol{Q}}{\boldsymbol{Q}}
\newcommand{\boldsymbol{A}}{\boldsymbol{A}}
\newcommand{\boldsymbol{B}}{\boldsymbol{B}}
\newcommand{\boldsymbol{C}}{\boldsymbol{C}}
\newcommand{\boldsymbol{P}}{\boldsymbol{P}}
\newcommand{\boldsymbol{W}}{\boldsymbol{W}}
\newcommand{\boldsymbol{I}}{\boldsymbol{I}}
\newcommand{\boldsymbol{U}}{\boldsymbol{U}}
\newcommand{\boldsymbol{V}}{\boldsymbol{V}}
\newcommand{\boldsymbol{D}}{\boldsymbol{D}}
\newcommand{\boldsymbol{Z}}{\boldsymbol{Z}}
\newcommand{\boldsymbol{N}}{\boldsymbol{N}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\tilde{M}}{\widetilde{M}}
\newcommand{\widetilde{Y}}{\widetilde{Y}}
\newcommand{\widehat{M}}{\widehat{M}}
\newcommand{\widetilde{\bA}}{\widetilde{\boldsymbol{A}}}
\newcommand{\widetilde{\bM}}{\widetilde{\boldsymbol{M}}}
\newcommand{\widetilde{\bY}}{\widetilde{\boldsymbol{Y}}}
\newcommand{\widehat{\bB}}{\widehat{\boldsymbol{B}}}
\newcommand{\widehat{\bM}}{\widehat{\boldsymbol{M}}}
\newcommand{\widehat{\bQ}}{\widehat{\boldsymbol{Q}}}
\newcommand{\widehat{\bZ}}{\widehat{\boldsymbol{Z}}}
\newcommand{\widehat{\bW}}{\widehat{\boldsymbol{W}}}
\newcommand{X^{(1)}}{X^{(1)}}
\newcommand{Z^{(1)}}{Z^{(1)}}
\newcommand{X^{(2)}}{X^{(2)}}
\newcommand{Z^{(2)}}{Z^{(2)}}
\newcommand{\eta^{(1)}}{\eta^{(1)}}
\newcommand{\eta^{(2)}}{\eta^{(2)}}
\newcommand{\tilde{\eta}^{(2)}}{\tilde{\eta}^{(2)}}
\newcommand{\tilde{\eta}^{(1)}}{\tilde{\eta}^{(1)}}
\newcommand{\widehat{\widetilde{\bM}}}{\widehat{\widetilde{\boldsymbol{M}}}}
\newcommand{\widehat{\widetilde{\bM}^{(k)}}}{\widehat{\widetilde{\boldsymbol{M}}^{(k)}}}
\newcommand{\bm{\mathcal{F}}}{\bm{\mathcal{F}}}
\newcommand{\bm{\mathcal{T}}}{\bm{\mathcal{T}}}
\newcommand{\bm{\mathcal{N}}}{\bm{\mathcal{N}}}
\newcommand{\overline{M}}{\overline{M}}
\newcommand{\overline{\bM}}{\overline{\boldsymbol{M}}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\boldsymbol{\Sigma}}{\boldsymbol{\Sigma}}
\newcommand{\boldsymbol{\Lambda}}{\boldsymbol{\Lambda}}
\newcommand{\textbf{X}}{\textbf{X}}
\newcommand{\hat{\bm{\eta}}}{\hat{\bm{\eta}}}
\newcommand{\hat{w}}{\hat{w}}
\newcommand{\hat{\eta}}{\hat{\eta}}
\newcommand{\bm{\eta}}{\bm{\eta}}
\newcommand{\bm{\varepsilon}}{\bm{\varepsilon}}
\newcommand{\Bigg |}{\Bigg |}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{X^{e}}{X^{e}}
\newcommand{X^{s}}{X^{s}}
\newcommand{X^{ms}}{X^{ms}}
\newcommand{X^{mss}}{X^{mss}}
\newcommand{\eta^{e}}{\eta^{e}}
\newcommand{\eta^{s}}{\eta^{s}}
\newcommand{\eta^{ms}}{\eta^{ms}}
\newcommand{\tilde{X}}{\tilde{X}}
\newcommand{\tilde{\eta}}{\tilde{\eta}}
\newcommand{\mathbb{V}\text{ar}}{\mathbb{V}\text{ar}}
\setlist{nolistsep}
\newcommand{2}{2}
\newcommand{m^{\mbox{\tiny{syn}}}}{m^{\mbox{\tiny{syn}}}}
\newcommand{\emph{MSE}}{\emph{MSE}}
\newcommand{\text{lcm}}{\text{lcm}}
\newcommand{\langle}{\langle}
\newcommand{\rangle}{\rangle}
\newcommand{Z_T^{\perp}}{Z_T^{\perp}}
\usepackage[compact]{titlesec}
\titlespacing{\section}{0pt}{0pt}{0pt}
\usepackage{etoolbox}
\AfterEndEnvironment{strip}{\leavevmode}
\begin{document}
\twocolumn[
\aistatstitle{Near optimal finite time identification of arbitrary linear dynamical systems}
\aistatsauthor{ Tuhin Sarkar \And Alexander Rakhlin }
\aistatsaddress{ MIT \And MIT } ]
\begin{abstract}
We derive finite time error bounds for estimating general linear time-invariant (LTI) systems from a single observed trajectory using the method of least squares. We provide the first analysis of the general case when the eigenvalues of the LTI system are arbitrarily distributed in three regimes: stable, marginally stable, and explosive. Our analysis yields sharp upper bounds for each of these cases separately. We observe that although the underlying process behaves quite differently in each of these three regimes, a systematic analysis of a self--normalized martingale difference term helps bound the identification error up to logarithmic factors of the lower bound. On the other hand, we demonstrate that the least squares solution may be statistically inconsistent under certain conditions even when the signal-to-noise ratio is high.
\end{abstract}
\smallskip
\input{content/intro}
\input{content/contributions}
\input{content/model}
\input{content/main_results}
\input{content/inconsistent}
\input{content/discussion}
\bibliographystyle{alpha}
\label{sec:intro}
Recently, Brown et al$.$~(2007, hereafter BBRS) announced the
discovery of a collisional family associated with the Kuiper belt
object known as (136108) 2003~EL$_{61}$ (hereafter referred to as
EL$_{61}$). With a diameter of $\sim\!1500\,$km (Rabinowitz et
al$.$~2006), EL$_{61}$\ is the third largest known Kuiper belt object.
The family so far consists of EL$_{61}$\ plus seven other objects that
range from 150 to $400\,$km in diameter (BBRS; Ragozzine \&
Brown~2007\Red{, hereafter RB07}). Their proper semi-major axes ($a$)
are spread over only $1.6\,$AU, their eccentricities ($e$) differ by
less than 0.08, and their inclinations ($i$) differ by less than
$1.5^\circ$ (see Table~\ref{tab:fam}). After correcting for some drift
in the eccentricity of EL$_{61}$\ \Red{and 1999 OY$_3$ due to Neptune's
mean motion resonances}, this corresponds to a velocity dispersion
of $\lesssim\!150\,$m/s (\Red{RB07}). BBRS estimate that there is
only a one in a million chance that such a grouping of objects would
occur at random.
\begin{table}[h]
\singlespace
\begin{center}
\begin{tabular}{|lccc|}
\hline
name & $a$ & $e$ & $i$ \\
& (AU) & & (deg) \\
\hline
2003 EL$_{61}$ & 43.3 & 0.19 & 28.2 \\
1995 SM$_{55}$ & 41.7 & 0.10 & 27.1 \\
1996 TO$_{66}$ & 43.2 & 0.12 & 27.5 \\
1999 OY$_3$ & 44.1 & 0.17 & 24.2 \\
2002 TX$_{300}$ & 43.2 & 0.12 & 25.9 \\
2003 OP$_{32}$ & 43.3 & 0.11 & 27.2 \\
2003 UZ$_{117}$ & 44.1 & 0.13 & 27.4 \\
2005 RR$_{43}$ & 43.1 & 0.14 & 28.5 \\
\hline
\end{tabular}
\caption{\label{tab:fam} The orbital elements of the known EL$_{61}$\
family as supplied by the {\it Minor Planet Center} on July 20,
2007.}
\end{center}
\end{table}
Based on the size and density of EL$_{61}$\ and on the hydrodynamic
simulations of Benz \& Asphaug~(1999), and assuming an impact velocity
of $3\,$km/s, BBRS estimate that this family is the result of an
impact between two objects with diameters of $\sim\!1700\,$km and
$\sim\!1000\,$km. Such a collision is surprising (M$.$~Brown, pers.
comm.), because there are so few objects this big in the Kuiper belt
that the probability of the collision occurring in the age of the
Solar System is very small.
Thus, in this paper we investigate the circumstances under which a
collision like the one needed to create the EL$_{61}$\ family could have
occurred. In particular, in \S{\ref{sec:KB}} we look again at the
idea that the larger of the two progenitors of this family (the
target) originally resided in the Kuiper belt and carefully determine
the probability that the impact could have occurred there. We show
that this probability is small, and so we have to search for an
alternative idea. In \S{\ref{sec:SD}}, we investigate a new scenario
where the EL$_{61}$\ family formed as a result of a collision between two
scattered disk objects (hereafter SDOs). The implications of these
calculations are discussed in \S{\ref{sec:end}}.
\section{The Kuiper belt as the source of the target}
\label{sec:KB}
In this section we evaluate the chances that a Kuiper belt object
roughly $1700\,$km in diameter could have been struck by a
$\sim\!1000\,$km body over the age of the Solar System. There are two
plausible sources of the impactor: the Kuiper belt itself, and the
scattered disk (Duncan \& Levison~1997 hereafter DL97; see Gomes et
al$.$~2007 for a review). We evaluate each of these separately.
\subsection{The Kuiper belt as the source of the impactor}
\label{ssec:KBKB}
We start our discussion with an estimate of the likelihood that a
collision similar to the one described in BBRS could have
occurred between two Kuiper belt objects over the age of the Solar
System. Formally, the probability ($p$) that an impact will occur
between two members of a population in time $t_l$ is:
\begin{equation}
\label{eq:p}
p = N_i\, N_t\, t_l\, (R_i+R_t)^2\,\bar\varrho,
\end{equation}
where $N$ is the number of objects, $R$ their radius, and the
subscripts $i$ and $t$ refer to the impactors and targets,
respectively. In addition, $\bar\varrho$ is the {\it mean intrinsic}
impact rate which is the average of the probability that any two
members of the population in question will strike each other in any
given year assuming that they have a combined radius of $1\,$km. As
such, $\bar\varrho$ is only a function of the orbital element
distribution of the population. For the remainder of this subsection
we append the subscript $kk$ to both $p$ and $\bar\varrho$ to indicate
that we are calculating these values for Kuiper belt --- Kuiper belt
collisions.
To evaluate Eq$.$~\ref{eq:p}, we first use the Bottke et al$.$~(1994)
algorithm to calculate the intrinsic impact rate between each pair of
orbits in a population. The average of these rates is
$\bar\varrho_{kk}$. Using the currently known multi-opposition Kuiper
belt objects, we find that $\bar\varrho_{kk}\!=\!1.8 \times
10^{-22}\,{\rm km}^{-2}\,{\rm y}^{-1}$. However, this distribution
suffers from significant observational biases, which could, in
principle, affect our estimate. As a check, we apply this calculation
to the synthetic Kuiper belts resulting from the formation simulations
by Levison et al$.$~(2008). These synthetic populations are clearly
not affected by observational biases, but may not represent the real
distribution very well. As such, although they suffer from their own
problems, these problems are entirely orthogonal to those of the
observed distribution. We find $\bar\varrho_{kk}$'s between $1.5
\times 10^{-22}$ and $1.6 \times 10^{-22}\,{\rm km}^{-2}\,{\rm
y}^{-1}$. The fact that the models and observations give similar
results gives us confidence that our answer is accurate despite the
weaknesses of the datasets we used. We adopt a value of
$\bar\varrho_{kk}\!=\!1.7 \times 10^{-22}\,{\rm km}^{-2}\,{\rm
y}^{-1}$.
Next, we need to estimate $N_i$ and $N_t$. Roughly 50\% of the sky
has been searched for Kuiper belt objects (KBOs) larger than
$1000\,$km in radius, and two have been found: Pluto and Eris (Brown
et al$.$~2005; Brown \& Schaller~2007). Thus, given that almost all
of the ecliptic has been searched, let us assume that there are 3 such
objects in total. Recent pencil beam surveys have found that the
cumulative size distribution of the Kuiper belt is $N(>\!R) \propto
R^{-3.8}$ for objects the size of interest here (Petit et al$.$~2006).
Thus, there are roughly 5 KBOs consistent with
BBRS's estimate of the size of the target body
($R_t\!=\!850\,$km) and $\sim\!40$ impactors ($R_i\!=\!500\,$km).
Plugging these numbers into Eq$.$~\ref{eq:p}, we find that the
probability that the impact that formed the EL$_{61}$\ family could have
occurred in the current Kuiper belt is only $2.5\times 10^{-4}$ in the
age of the Solar System.
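The arithmetic behind this estimate can be reproduced directly from Eq$.$~\ref{eq:p}. The following minimal sketch is ours and purely illustrative; it assumes $t_l = 4\times 10^9\,$yr for the age of the Solar System:
\begin{verbatim}
# Back-of-the-envelope evaluation of the collision probability above.
N_i, N_t = 40, 5              # impactors (R=500 km), targets (R=850 km)
R_i, R_t = 500.0, 850.0       # radii in km
rho_kk = 1.7e-22              # mean intrinsic impact rate (km^-2 yr^-1)
t_l = 4.0e9                   # age of the Solar System (yr)
p_kk = N_i * N_t * t_l * (R_i + R_t)**2 * rho_kk
print(p_kk)                   # ~2.5e-4
\end{verbatim}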
In the above discussion, we are assuming that the Kuiper belt has
always looked the way we see it today. However, it (and the rest of
the trans-Neptunian region) most likely went through three distinct
phases of evolution (see Morbidelli et al$.$~2007 for a review):
\begin{enumerate}[1)]
\item At the earliest times, Kuiper belt objects had to have been in
an environment where they could grow. This implies that the disk
had to have been massive (so that collisions were common) and
dynamically quiescent (so that collisions were gentle and led to
accretion; Stern~1996; Stern \& Colwell~1997a; Kenyon \& Luu~1998;
1999). Indeed, numerical experiments suggest that the disk needed
to contain tens of Earth-masses of material and have eccentricities
significantly less than 0.01 (see Kenyon et al$.$~2008 for a
review). In what follows we refer to this quiescent period as {\it
Stage~I}.
\item The Kuiper belt that we see today is neither massive nor
dynamically quiescent. The average eccentricity of the Kuiper belt
is $\sim\!0.14$ and estimates of its total mass range from
0.01~$M_\oplus$ (Bernstein et al$.$~2004) to 0.1~$M_\oplus$ (Gladman
et al$.$~2001). Thus, there must have been some sort of dynamical
event that significantly excited the orbits of the KBOs. This event
was either violent enough to perturb $>\!99\%$ of the primordial
objects onto planet-crossing orbits thereby directly leading to the
Kuiper belt's small mass (Morbidelli \& Valsecchi~1997; Nagasawa \&
Ida~2000; Levison \& Morbidelli~2003; Levison et al$.$~2008), or
excited the Kuiper belt enough that collisions became erosional
(Stern \& Colwell~1997b; Davis \& Farinella~1997; Kenyon \&
Bromley~2002, 2004). It was during this violent period that most of
the structure of the Kuiper belt was established. As we discuss
below, the Kuiper belt's resonant populations might be the only
exception to this. Indeed, the inclination distribution in the
resonances shows that these populations either formed during this
period or post-date it (Hahn \& Malhotra~2005). It is difficult to
date this event. However, there has been some work that suggests
that it might be associated with the late heavy bombardment of the
Moon, which occurred 3.9 Gy ago (Levison et al$.$~2008). In what
follows we refer to this violent period as {\it Stage~II}.
\item Since this dramatic event, the Kuiper belt has been relatively
quiet. Indeed, the only significant dynamical changes may have
resulted from \Red{the gradual decay of intrinsically unstable
populations and} the slow outward migration of Neptune. As
discussed in more detail in \S{\ref{sec:SD}}, this migration occurs
as a result of a massive scattered disk that formed during Stage~II.
It might be responsible for creating at least some of the resonant
structure seen in the Kuiper belt (Malhotra~1995; Hahn \&
Malhotra~2005). This migration continues today, although at an
extremely slow rate. We refer to this modern period of Kuiper belt
evolution as {\it Stage~III}.
\end{enumerate}
Perhaps the simplest way to resolve the problem of the low probability
of an EL$_{61}$-like collision is to consider whether this event could
have occurred during Stage~I, when the Kuiper belt may have been 2 to
3 orders of magnitude more populous than today (see Morbidelli et
al$.$~2007 for a review). Increasing the $N$'s in Eq$.$~\ref{eq:p} by
a factor of 100--1000 would not only make a collision like the one
needed to make the EL$_{61}$\ family much more likely, but it would make
them ubiquitous. Indeed, this explains why many large KBOs (Pluto and
Eris, for example) have what appear to be impact-generated satellites
(e$.$g$.$ Canup~2005; Brown et al$.$~2005).
However, the fact that we see the EL$_{61}$\ family in a tight clump in
orbital element space implies that if the collision occurred during
Stage~I, then whatever mechanism molded the final structure of the
Kuiper belt during Stage~II must have left the clump intact. Three
general scenarios have been proposed to explain the Kuiper belt's
small mass: {\it (i)} the Kuiper belt was originally massive, but the
strong dynamical event in Stage~II caused the ejection of most of the
bodies from the Kuiper belt to the Neptune-crossing region (Morbidelli
\& Valsecchi~1997; Nagasawa \& Ida~2000), {\it (ii)} the Kuiper belt
was originally massive, but the dynamical excitation in Stage~II
caused collisions to become erosive\footnote{Recall that by
definition, most of the collisions that occurred during Stage~I were
accretional.} and thus most of the original Kuiper belt mass was
ground to dust (Stern \& Colwell~1997b; Davis \& Farinella~1997;
Kenyon \& Bromley~2002, 2004), and {\it (iii)} the observed KBOs
accreted closer to the Sun, and during Stage~II a small fraction of
them were transported outward and trapped in the Kuiper belt by the
dynamical evolution of the outer planets (Levison \& Morbidelli~2003;
Levison et al$.$~2008).
Scenario~{\it (ii)} cannot remove objects as large as the EL$_{61}$\
precursors because the collisions are not energetic enough. Indeed,
in order to get this mechanism to explain the Kuiper belt's small
mass, almost all of the original mass must have been in objects with
radii less than $\sim\!10\,$km (Kenyon \& Bromley~2004). Thus, in
this scenario, the number of $\sim\!500\,$km objects present at early
epochs is no different than what is currently observed. Thus, this
scenario cannot solve our problem. Scenarios~{\it (i)} and {\it
(iii)} invoke the wholesale dynamical transport of most of the
Kuiper belt. While this can remove most of the targets and
impactors, the dynamical shakeup of the Kuiper belt would obviously
destroy the coherence of the family. This is due to the fact that any
dynamical mechanism that could cause such an upheaval would cause the
orbits of the KBOs to become wildly chaotic, and thus any tight clump
of objects would spread exponentially in time. From these
considerations we conclude that the collision that created the EL$_{61}$\
family could not have occurred between two KBOs (see Morbidelli~2007
for further discussion).
\subsection{The scattered disk as the source of the impactor}
\label{ssec:KBSD}
In this section we evaluate the probability that the larger progenitor
of the EL$_{61}$\ family originally was found in the Kuiper belt, but the
impactor was a member of the scattered disk. For reasons described
above, the family-forming impact must have occurred some time during
Stage~{III} when the main dynamical structure of the Kuiper belt was
already in place. However, the scattered disk is composed of
trans-Neptunian objects that have perihelia near enough to Neptune's
orbit that their orbits are not stable over the age of the Solar
System (see Gomes~2007 for a review). As a result, they are part of a
dynamically active population where objects are slowly diffusing
through orbital element space and occasionally leave the scattered
disk by either being ejected from the Solar System, evolving into the
Oort cloud, or being handed inward by Neptune, thereby becoming
Centaurs.
Therefore, unlike the Kuiper belt, the population of the scattered
disk has slowly been decreasing since its formation and this decay is
ongoing even today. It is an ancient structure (Morbidelli et
al$.$~2004; Duncan et al$.$~2004) that was probably constructed during
Stage~II, and thus has slowly evolved and decayed in number during all
of Stage~III. DL97 estimated that the primordial scattered
disk\footnote{In what follows, when we refer to the `primordial
scattered disk' we mean the scattered disk that existed at the end
of Stage~II and at the beginning of Stage~III.} may have contained
roughly 100 times more material at the beginning of Stage~III than we
see today. We need to include this evolution in our estimate of the
collision probability.
The above requirement forces us to modify Eq$.$~\ref{eq:p}. In
particular, since we have to assume that the number of Kuiper belt
targets ($N_t$) has not significantly changed since the beginning of
Stage~III (Duncan et al$.$~1995),
\begin{equation}
\label{eq:pt}
p_{sk} = (R_i+R_t)^2\,\bar\varrho_{sk}\,N_t\, \int{N_i(t) dt},
\end{equation}
where the subscript $sk$ refers to the fact that we are calculating
these values for SDO---KBO collisions. \Red{In writing
Eq.~\ref{eq:pt} in this manner, we are assuming that the scattered
disk orbital element distribution, and thus $\bar\varrho_{sk}$, does
not significantly change with time. In all the calculations
discussed below, we find that this is an accurate assumption.}
Assuming that the size distribution of SDOs does not change with time,
we can define $f(t) \equiv N_{i}(t)/N_{i0}$, where $N_{i0}$ is the
number of impactors at the beginning of Stage~III. As a result,
$\int{N_i(t) dt}$ in Eq$.$~\ref{eq:pt} becomes $N_{i0}\int{f\,dt}$.
Now, if we define $\bar{t} \equiv \int{f\,dt}$, then $p_{sk}$ takes on
the same form as in Eq$.$~\ref{eq:p}:
\begin{equation}
\label{eq:ptt2}
p_{sk} = N_{i0}\, N_{t}\, \bar{t}\, (R_i+R_t)^2\,\bar\varrho_{sk}.
\end{equation}
We must rely on dynamical simulations in order to estimate $f(t)$ and
$\bar{t}$. In addition, our knowledge of the orbital element
distribution (and thus $\varrho_{sk}$) of SDOs is hampered by
observational biases on a scale that is much worse than exists for the
Kuiper belt because of the larger semi-major axes involved. Thus, we
are required to use dynamical models to estimate $\varrho_{sk}$ as
well. For this purpose, we employ three previously published models
of the evolution of the scattered disk:
\begin{enumerate}
\item {\it LD/DL97:} Levison \& Duncan~(1997) and DL97 studied the
evolution of a scattered disk whose members originated in the Kuiper
belt. In particular, they performed numerical orbital integrations
of massless particles as they evolved from Neptune-encountering
orbits in the Kuiper belt. The initial orbits for these particles
were chosen from a previous set of integrations whose test bodies
were initially placed on low-eccentricity, low-inclination orbits in
the Kuiper belt but then evolved onto Neptune-crossing orbits
(Duncan et al$.$~1995). The solid curve in Figure~\ref{fig:noft}
shows the relative number of SDOs as a function of time in this
simulation. After $4\times 10^{9}\,$yr, 1.25\% of the particles
remain in the scattered disk. We refer to this fraction as $f_s$
(see Table~\ref{tab:val}). Note that $f_s$ is equivalent to
$f(4\,{\rm Gy})$. In addition, we find that $\bar{t}=1.9\times 10^8\,$y in
this integration.
\item {\it DWLD04:} Dones et al$.$~(2004) studied the formation of the
Oort cloud and dynamical evolution of the scattered disk from a
population of massless test particles initially spread from 4 to
$40\,$AU with a surface density proportional to $r^{-3/2}$. For the
run employed here, the \Red{initial} RMS eccentricity and
inclination were 0.2 and $5.7^\circ$, respectively. Also, we
restricted ourselves to use only those objects with initial
perihelion distances $<\!32\,$AU. The dotted curve in
Figure~\ref{fig:noft} shows the relative number of SDOs as a
function of time in this simulation. For this model $f_s = 0.63\%$
and $\bar{t}=3.9\times 10^8\,$y.
\item {\it TGML05:} Tsiganis et al$.$~(2005, hereafter TGML05)
proposed a new comprehensive scenario --- now often called `the Nice
model' --- that reproduces, for the first time, many of the
characteristics of the outer Solar System. It quantitatively
recreates the orbital architecture of the giant planet system
(orbital separations, eccentricities, inclinations; Tsiganis et
al$.$~2005). It also explains the origin the Trojan populations of
Jupiter (Morbidelli et al$.$~2005) and Neptune (Tsiganis et
al$.$~2005; Sheppard \& Trujillo~2006), and the irregular satellites
of the giant planets (Nesvorn\'y et al$.$ 2007a). Additionally, the
planetary evolution that is described in this model can be
responsible for the early Stage~II evolution of the Kuiper belt
(Levison et al$.$~2008). Indeed, it reproduces many of the Kuiper
belt's characteristics for the first time. It also naturally
supplies a trigger for the so-called Late Heavy Bombardment (LHB) of
the terrestrial planets that occurred $\sim\!3.9$ billion years ago
(Gomes et al$.$~2005).
\medskip
TGML05 envisions that the giant planets all formed within
$\sim\!15\,$AU of the Sun, while the known KBOs formed in a massive
disk that extended from just beyond the orbits of the giant planets to
$\sim\!30\,$AU. A global instability in the orbits of the giant
planets led to a violent phase of close planetary encounters
(Stage~II). This, in turn, caused Uranus and Neptune to be
scattered into the massive disk. Gravitational interactions between
the disk and the planets caused the dispersal of the disk (some
objects being pushed into the Kuiper belt; Levison et al$.$~2008) and
forced the planets to evolve onto their current orbits (see also
Thommes et al$.$~1999; 2002). After this violent phase (i$.$e$.$ at
the beginning of Stage~III), the scattered disk is massive. As in the
other models above, it subsequently decays slowly due to the
gravitational effects of Neptune. The gray curve in
Figure~\ref{fig:noft} shows the relative number of SDOs as a function
of time in TGML05's nominal simulation\footnote{TGML05 stopped their
integrations at 348 Myr. Here we continued their simulation to
$4\,$Gyr using the RMVS integrator (Levison \& Duncan 1994),
assuming that the disk particles were massless.}. We set $t=0$ to
be the point at which the orbits of Uranus and Neptune no longer
cross. For this model $f_s = 0.41\%$ and $\bar{t}=1.5\times 10^8\,$y.
\end{enumerate}
Once the $f$'s are known, all we need in order to calculate
Eq$.$~\ref{eq:ptt2} is $N_{i0}$, which, recall, is the initial number
of $1000\,$km diameter impactors in the scattered disk, and
$\varrho_{sk}$. To evaluate $N_{i0}$, we need to combine our
dynamical models with observational estimates of the scattered disk.
The most complete analysis of this kind to date is by Trujillo et
al$.$~(2000). These authors performed a survey of a small area of the
sky in which they discovered three scattered disk objects. They
combined these data with those of previous surveys, information about
their sky coverage, limiting magnitudes, and dynamical models of the
structure of the scattered disk to calculate the number of SDOs with
radii larger than $50\,$km. To perform this calculation, they needed,
however, to assume a size distribution for the scattered disk. In
particular, they adopted $N(>\!R) \propto R^{-q}$, and studied cases
with $q\!=\!2$ and 3.
Unfortunately, if we are to adopt Trujillo et al$.$'s estimates of the
number of SDOs, we must also adopt their size distributions, because
the former is dependent on the latter. This might be perceived to be
a problem because we employed a much steeper size distribution for the
Kuiper belt in \S{\ref{ssec:KBKB}}. Fortunately, $q\!=\!3$ is in
accordance with the available observations for this population. In
particular, it is in agreement with the most modern estimate of
Bernstein et al$.$~(2004), who found $q=3.3^{+0.7}_{-0.4}$ for bright
objects (as these are) in a volume limited sample of what they call
the `excited class' (which includes the scattered disk). In addition,
it is consistent with the results of Morbidelli et al$.$~(2004), who
found that $2.5\,\lesssim\,q\,\lesssim\,3.5$ for the scattered disk.
Also note that Bernstein et al$.$~(2004) concluded that the size
distribution of their `excited class' is different from the rest of
the Kuiper belt at the 96\% confidence level, which again supports the
choices we make here. Thus, we adopt Trujillo et al$.$'s estimate for
$q\!=\!3$, which is that there are between 18,000 and 50,000 SDOs with
$R\!>\!50\,$km and $50\!<\!a\!<\!200\,$AU. We also adopt $q\!=\!3$ in
the remainder of this discussion.\footnote{Note that if we had adopted
$q\!=\!3$ in \S{\ref{ssec:KBKB}}, our final estimate of the
probability that the EL$_{61}$\ family was the result of a collision
between two KBOs ($p_{kk}$) would have actually been {\it smaller}
by about a factor of two. This, therefore, would strengthen the
basic result of this paper.}
\begin{figure}[h!]
\vglue 3.0truein
\special{psfile=Levison_EL61_fig1.eps angle=0 hoffset=120 voffset=-50 vscale=40
hscale=40}
\caption{\footnotesize \label{fig:noft}
{The fraction of scattered disk objects remaining in a simulation as
a function of time. The solid curve shows the results from LD97
and DL97. The dotted curve shows the results from DWLD04. The
gray curve shows TGML05. Time is measured from the beginning of
Stage~III.}}
\end{figure}
LD/DL97's model of the scattered disk places about 66\% of its SDOs in
Trujillo et al$.$'s range of semi-major axes. This fraction is 47\%
in DWLD04 and 40\% in TGML05. Thus, we estimate that there are
currently between 27,000 and 125,000 SDOs larger than $50\,$km
($N_{\rm 50km}$) depending on the model. So, the initial number of
objects in the scattered disk of radius $R$ is
\begin{equation}
\label{eq:n0}
N_{0}(R) = \frac{N_{\rm 50km}}{f_s} \left(\frac{R}{50{\rm km}}\right)^{-q}.
\end{equation}
The values of $N_{i0}$ derived from this equation are presented in
Table~\ref{tab:val} for Trujillo et al$.$'s value of $q=3$.
\begin{table}[h]
\singlespace
\begin{center}
\begin{tabular}{|l|ccc|}
\hline
& LD/DL97 & DWLD04 & TGML05 \\
\hline
$f_s$ & 1.3\% & 0.63\% & 0.41\% \\
$\bar{t}$ (y) & $1.9\times 10^8$ & $3.9\times 10^8$ & $1.5\times 10^8$ \\
$\bar\varrho_{sk}$ (km$^{-2}$y$^{-1}$)
& $\Red{6.7}\times 10^{-23}$ & $\Red{7.2}\times 10^{-23}$ & $\Red{1.1 \times 10^{-22}}$ \\
$N_{i0}$ & 2180 -- 6060 & 6980 -- 16900 & 11,000 -- 30,400 \\
$p_{sk}$ & $\Red{2.4}\times 10^{-4}$ -- $\Red{6.9}\times 10^{-4}$ &
$\Red{1.7}\times 10^{-3}$ -- $\Red{4.3}\times 10^{-3}$ & $\Red{1.6}\times 10^{-3}$ -- $\Red{4.5}\times 10^{-3}$ \\
\hline
$\Delta V_{\rm min}$ (m/s) & \Red{198} & \Red{263} & \Red{93} \\
$t_l$ (y) & $3.4\times 10^7$ & $1.5\times 10^8$ & $4.6\times 10^7$ \\
$N_{t0}$ & 440 -- 1230 & 1240 -- 3440 & 2230 -- 6260 \\
$\bar\varrho_{ss}$ (km$^{-2}$y$^{-1}$)
& $1.1\times 10^{-22}$ & $8.0\times 10^{-23}$ & $7.9\times 10^{-23}$ \\
$p_{ss}$ & 0.007 -- 0.051 & 0.16 -- 1.27 & 0.16 -- 1.26 \\
$p_{\rm KB}$ & 0.19 & 0.076 & 0.32 \\
$p_{\rm SD}$ & $1.5\times 10^{-3}$ -- 0.011 & 0.012 --
0.10 & 0.061 -- 0.47 \\
\hline
\end{tabular}
\caption{\label{tab:val} Important dynamical parameters derived from
the three pre-existing scattered disk models. See text for a full
description.}
\end{center}
\end{table}
Finally, we need $\bar\varrho_{sk}$, which, recall, only depends on
the orbital element distribution of the targets and impactors. We can
take the orbital element distribution of the impactors directly from
our scattered disk numerical models, but we need to assume the orbit
of the target. \Red{We place the target on the center of mass orbit
for the family as determined by RB07. This orbit has
$a\!=\!43.1\,$AU, $e\!=\!0.12$, and $i\!=\!28.2^\circ$.} As before,
the values of $\bar\varrho_{sk}$ are calculated using the Bottke et
al$.$~(1994) algorithm and are also given in the table. \Red{It was
somewhat surprising to us that the values for $\bar\varrho_{sk}$ are
so similar to $\bar\varrho_{kk}$ because the scattered disk is
usually thought of as a much more extended structure. However, we
found the median semi-major axis of objects in our scattered disk
simulations is only about $60\,$AU. This is similar enough to the
Kuiper belt to explain the similarity.}
It should be noted that \Red{the Bottke et al$.$} algorithm assumes a
uniform distribution of orbital angles, which might be of some doubt
for the scattered disk. As a result, we tested these distributions
for our objects with semi-major axes between 40 and $200\,$AU and
found that, although there was a slight preference for arguments of
perihelion near 0 and 180$^\circ$, the distributions were uniform to
better than one part in ten.
We can now evaluate $p_{sk}$ for the various models. These too are
given in Table~\ref{tab:val}. We find that the probability that the
EL$_{61}$\ family is the result of a collision between a Kuiper belt
target with a radius of $850\,$km and a scattered-disk impactor with a
radius of $500\,$km is less than 1 in 220. Although this number is
larger than that for Kuiper belt -- Kuiper belt collisions, it is
still small. Thus, we conclude that we can rule out the idea that
the progenitor (i$.$e$.$ the target) of the EL$_{61}$\ family was in the
Kuiper belt.
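For the reader who wishes to verify the $p_{sk}$ row of Table~\ref{tab:val}, the following sketch (ours) recombines the other table entries through Eq$.$~\ref{eq:ptt2}; the results agree with the tabulated ranges up to rounding:
\begin{verbatim}
# Consistency check of the p_sk row of Table 2 (not new data).
R2 = (500.0 + 850.0)**2                    # (R_i + R_t)^2 in km^2
N_t = 5                                    # Kuiper belt targets
models = {                                 # (N_i0 range, t_bar, rho_sk)
    "LD/DL97": ((2180, 6060),   1.9e8, 6.7e-23),
    "DWLD04":  ((6980, 16900),  3.9e8, 7.2e-23),
    "TGML05":  ((11000, 30400), 1.5e8, 1.1e-22),
}
for name, ((lo, hi), t_bar, rho_sk) in models.items():
    print(name, lo * N_t * t_bar * R2 * rho_sk,
                hi * N_t * t_bar * R2 * rho_sk)
\end{verbatim}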
\section{The scattered disk as the source of both the target and impactor}
\label{sec:SD}
In the last section we found that an SDO-KBO collision is much more
likely than a KBO-KBO collision because the scattered disk was more
massive in the past. Thus, in order to increase the overall
probability of an EL$_{61}$\ family forming event even further, we need to
investigate whether {\it both} the target and the impactor could have
been in the scattered disk at the time of the collision. This
configuration has the advantage of increasing the number of potential
targets by roughly 2 orders of magnitude relative to the estimate
employed in \S{\ref{ssec:KBSD}}, at least at the beginning of
Stage~III. At first sight, the assumption that both progenitors were
in the scattered disk may seem at odds with the fact that the family
is found in the Kuiper belt today. Remember, however, that collisions
preserve the total linear momentum of the target and the impactor. As
a result, the family is dispersed around the center of mass of the two
colliding bodies, {\bf not} around the orbit of the target. If the
relative velocity of the colliding objects is comparable to their
orbital velocity and the two bodies have comparable masses, then the
center-of-mass of the resulting family can be on a very different
orbit than the progenitors.
With this in mind, we propose that at some time near the beginning of
Stage~III, two big scattered disk objects collided. Before the
collision, each of them was on an eccentric orbit typical of the
scattered disk. At the time of the collision, one object was moving
inward while the other was moving outward, so that the center of mass
of the target-projectile pair had an orbit typical of a Kuiper belt
object. As a result, we should find the family clustered around this
orbit today.
We start our investigation of the above hypothesis by determining
whether it is possible for the center of mass of two colliding SDOs to
have a Kuiper belt orbit like that of the EL$_{61}$\ family. We
accomplish this by comparing $\Delta V_{\rm min}$ to $\delta V_{\rm
min}$, where $\Delta V_{\rm min}$ is defined to be the \Red{minimum}
difference in velocity between the EL$_{61}$ family orbit and the
scattered disk region, and $\delta V_{\rm min}$ is the possible
difference in velocity between the center of mass of the collision and
the original orbit of the target. \Red{If $\Delta V_{\rm
min}\!>\!\delta V_{\rm min}$ then a collision between two SDOs
cannot lead to EL$_{61}$'s orbit. If, on the other hand, $\Delta V_{\rm
min}\!<\!\delta V_{\rm min}$, our scenario is at least possible.
Note, however, that this condition is necessary, but not sufficient,
because the orientation of the impact is also important. This
effect will be accurately taken into account in the numerical models
performed later in this section.}
We start \Red{our simple comparison} with $\Delta V_{\rm min}$. The
green areas in Figure~\ref{fig:aei} show the regions of orbital
element space visited by SDOs during our three $N$-body simulations.
It is important to note that these are two-dimensional projections of
the six-dimensional distribution consisting of all the orbital
elements. Therefore, the fact that an area of one of the plots is
green does not imply that all the orbits that project into that region
belong to the scattered disk, only that some of them do. The red
\Red{dot represents RB07's center of mass orbit for} the EL$_{61}$\
family. Note that the family is close to the region visited by SDOs.
\begin{figure}[h!]
\vglue 7.5truein
\special{psfile=Levison_EL61_fig2a.eps angle=0 hoffset=-80 voffset=200 vscale=60 hscale=60}
\special{psfile=Levison_EL61_fig2b.eps angle=0 hoffset=170 voffset=200 vscale=60 hscale=60}
\special{psfile=Levison_EL61_fig2c.eps angle=0 hoffset=45 voffset=-100 vscale=60 hscale=60}
\caption{\footnotesize \label{fig:aei}
{The green area illustrates the regions (eccentricity --- semi-major
axis distribution on the top panels and inclination --- semi-major
axis distribution on the bottom panels) visited by scattered disk
objects with no collisions included (left panels: from LD/DL97;
right panels: from DWLD04; bottom panels: from TGML05). The blue curve in the
top panels marks $q\!=\!34\,$AU and the \Red{red dot shows the
center of mass orbit of the EL$_{61}$\ family (RB07).} The
\Red{minimum} difference in orbital velocity between the EL$_{61}$\
family and the scattered disk visited region is $\Red{265}\,$m/s.
For reference, a typical impact of scattered disk bodies with a
mass ratio of 5 (as for the target/impactor estimated in BBRS)
gives a $\delta V_{\rm min}$ of $\sim 450$~m/s. The black dots
show stable Kuiper belt orbits that result from actual simulations
of SD evolution, accounting for such collisions. In all cases the
osculating orbits are shown.}}
\end{figure}
The distance in velocity space between the location of the family and
the scattered disk region, $\Delta V_{\rm min}$, can be computed using
the techniques developed in Nesvorn{\' y} et al$.$~(2007b). Given two
crossing orbits this algorithm uses Gauss' equations to seek the
minimum relative velocity ($\Delta V$) needed to move an object from
one orbit to another. In particular, it searches through all values
of true longitudes and orbital orientations in space to find the
smallest $\Delta V$ while holding $a$, $e$, and $i$ of each orbit
fixed. Using this algorithm, we take each entry from the orbital
distributions saved during the scattered disk $N$-body simulations and
compare it to \Red{RB07's center of mass orbit for} the EL$_{61}$\ family.
We then take $\Delta V_{\rm min}$ to be the \Red{minimum} difference
in orbital velocity between the EL$_{61}$\ family and the region visited
by scattered disk particles during our simulations. These values are
listed in Table~\ref{tab:val}, and we find that all the $N$-body
simulations have particles which get within $\Red{265}\,$m/s of the
family.
Next we estimate $\delta V_{\rm min}$. The center of mass velocity,
$\vec{V}_{\rm CM}$, of the \Red{target--impactor system} is
$(m_i\vec{V}_i+m_t\vec{V}_t)/(m_i+m_t)$, where $m$ is the mass of
\Red{each body}. So, $\delta V \equiv \vert\vec{V}_{\rm CM} -
\vec{V}_t\vert = \vert\vec{V}_i-\vec{V}_t\vert/(1+\frac{m_t}{m_i})$,
where $\vert\vec{V}_i-\vec{V}_t\vert$ is the impact speed, which BBRS
argues is roughly $3\,$km/s (in the simulations below we find the
average to be about $2.7\,$km/s). Therefore, assuming a mass ratio
between the target and impactor of 5 (as argued by BBRS), we expect
that the center of mass velocity (from which the fragments are
ejected) to be offset from the initial velocity of the target by about
$450\,$m/s. Since this is larger than the minimum velocity distance
that separates the scattered disk from the EL$_{61}$\ family ($\Delta
V_{\rm min}$; $<\!\Red{265}\,$m/s, as discussed above), it is possible
that the observed orbit of the EL$_{61}$\ family could result from such a
collision.
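In code, this momentum--conservation estimate is a one--line calculation (a sketch of ours, using the mean impact speed found in the simulations):
\begin{verbatim}
v_impact = 2.7e3          # m/s, mean impact speed in the simulations
mass_ratio = 5.0          # m_t / m_i (BBRS)
delta_v = v_impact / (1.0 + mass_ratio)
print(delta_v)            # 450.0 m/s
\end{verbatim}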
We now estimate the likelihood that such a collision will happen. To
accomplish this, we divide the problem into two parts. We first
evaluate the probability ($p_{ss}$) that a collision occurred in the
age of the Solar System between two SDOs with $R_i\!=\!500\,$km and
$R_t\!=\!850\,$km. Then, we calculate the likelihood ($p_{KB}$) that
the center of mass of the two colliding bodies was on a stable Kuiper
belt orbit. Since fragments of the collision will be centered on this
orbit, the family members should span it. In what follows, we refer
to this \Red{theoretical} orbit as the {\it `collision orbit'}. The
probability that the EL$_{61}$\ family originated in the scattered disk is
thus $p_{SD} = p_{ss} \times p_{KB}$.
As with the determination of $p_{sk}$ in \S{\ref{ssec:KBSD}}, we need
to modify Eq$.$~\ref{eq:p} to take into account that the number of
objects in the scattered disk is changing with time. In this case,
however, both the number of targets and the number of impactors vary.
As a result,
\begin{equation}
\label{eq:ptt}
p_{ss} = (R_i+R_t)^2\,\bar\varrho_{ss}\, \int{N_i(t)\,
N_t(t)\, dt}.
\end{equation}
Assuming that the size distribution of SDOs does not change with time,
$N_t(t)/N_{t0} = N_i(t)/N_{i0} = f(t)$, where $f(t)$ was defined
above. Thus, $\int{N_i(t)\, N_t(t)\, dt}$ becomes
$N_{i0}N_{t0}\int{f^2\,dt}$. Now, if we define $t_l \equiv
\int{f^2dt}$, then $p_{ss}$ again takes on the same form as in
Eq$.$~\ref{eq:p}:
\begin{equation}
\label{eq:pt2}
p_{ss} = N_{i0}\, N_{t0}\, t_l\, (R_i+R_t)^2\,\bar\varrho_{ss},
\end{equation}
where the subscript $ss$ refers to the fact that we are calculating
these values for SDO---SDO collisions. Note that $t_l$ is not the same
as $\bar{t}$ used in Eq$.$~\ref{eq:ptt2}, but it is a measure of the
characteristic time of the collision. The values of $t_l$ for our
three scattered disk models are given in Table~\ref{tab:val}.
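Given a tabulated decay curve $f(t)$ from any of the $N$-body runs, both $\bar{t}$ and $t_l$ follow by simple quadrature. The sketch below is ours; the exponential is an illustrative stand-in for $f(t)$, not actual simulation output:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 4.0e9, 200000)       # yr
f = np.exp(-t / 8.0e8)                    # stand-in decay curve
dt = t[1] - t[0]
t_bar = float(np.sum(f) * dt)             # enters p_sk (SDO-KBO)
t_l = float(np.sum(f * f) * dt)           # enters p_ss (SDO-SDO)
print(t_bar, t_l)                         # t_l < t_bar, as in Table 2
\end{verbatim}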
The values of $N_{i0}$ are the same as we calculated in
\S{\ref{ssec:KBSD}} using Eq$.$~\ref{eq:n0} because in both cases the
impacting population is the same. In this case, we can also use
Eq$.$~\ref{eq:n0} to estimate $N_{t0}$. These values are given in
Table~\ref{tab:val}. The table also shows the values of
$\bar\varrho_{ss}$ for each of the models, which were again calculated
using the Bottke et al$.$~(1994) algorithm. Recall that this
parameter only depends on the orbital element distribution of the
scattered disk.
We can now evaluate $p_{ss}$ for the various models. These too are
given in Table~\ref{tab:val}. Again, we are assuming a target radius
of $850\,$km and an impactor radius of $500\,$km. We find that our
scenario is least likely in the LD/DL97 model, with $p_{ss} \lesssim
0.06$, while it is most likely in the TGML05 model with
$p_{ss}\!\sim\!1$. The fact that $p_{ss}$ can be close to one is
encouraging. After all, we see one family and there are probably not
many more in this size range. However, we urge caution in
interpreting these $p_{ss}$ values because there are significant
uncertainties in several of the numbers used to calculate them ---
particularly the $N$'s. Indeed, we believe that the differences
between the $p_{ss}$ values from the various models are probably more
a result of the intrinsic uncertainties in our procedures rather than
the merit of one model over another.
Next, we need to calculate the probability that the impacts described
above have collision orbits in the Kuiper belt ($p_{KB}$). We
accomplish this with the use of a Monte Carlo simulation where we take
the output of our three orbital integrations and randomly choose
particle pairs to collide with one another based on their location and
the local number density. We apply the following procedures to the
LD/DL97, DWLD04, and TGML05 datasets, separately.
Our preexisting $N$-body simulations supply us with a series of {\it
snapshots} of the evolving scattered disk as a function of time. In
particular, the original $N$-body code recorded the position and
velocity of each object in the system at fixed time intervals. For
two objects to collide, they must be at the same place at the same
time. However, because of the small number of particles in our
simulations (compared to the real scattered disk) and the fact that
the time intervals between snapshots are long, it is very unlikely to
find any actual collisions in our list of snapshots. Thus, we must
bin our data in both space and time in order to generate pairs of
particles to collide. For this purpose, we divided the Solar System
into a spatial grid. We assumed that the spatial distribution of
particles is both axisymmetric and symmetric about the mid-plane.
Thus, our grid covers the upper part of the meridional plane. The
cylindrical radius ($\varpi$) was divided into 300 bins between 30 and
$930\,$AU, while the positive part of the vertical coordinate was
divided into 100 bins with $z\!\leq\!100\,$AU. We also binned time.
However, since the number of particles in the $N$-body simulations
decreases with time (see Figure~\ref{fig:noft}), we increased the
width of the time bins ($\Delta t_{\rm bin}$) at later times in order
to insure we had enough particles in each bin to collide with one
another. In particular, we choose the width of each time bin so that
the total number of particles in the bin (summing over the spatial
bins) is the same.
We assigned each entry (meaning a particular particle at a particular
time) in the dataset of our original $N$-body simulation to a bin in
the 3-dimensional space discussed above (i$.$e$.$ $\varpi$--$z$--$t$).
As a result, the entries associated with each bin represent a list of
objects that were roughly at the same location at roughly the same
time in the $N$-body simulation.
Finally, we generated collisions at random. This was accomplished by
first randomly choosing a bin based on the local collision rate, as
determined by a particle-in-the-box calculation. It is important to
note that since the bins were populated using the $N$-body
simulations, this choice is consistent with the collision rates used
to calculate the mean collision probability $p_{ss}$ above. As a
result, we are justified multiplying $p_{ss}$ and $p_{KB}$ together at
the end of this process. Once we had chosen a bin, we randomly chose
a target and impactor from the list of objects in that bin. From the
velocities of the colliding pair we determined the orbit of the pair's
center-of-mass assuming a mass ratio of 5.
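Schematically, the pair-drawing step can be written as follows (a
minimal Python sketch of our procedure, not the actual code; the bin
structure and the constant-kernel weighting $\propto N^2/V$ are
simplifying assumptions, with the relative-velocity dependence already
folded into $p_{ss}$):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def draw_collision(bins):
    """bins: list of dicts with 'v' (array of velocity vectors of the
    objects in the bin, AU/yr) and 'volume' (bin volume).  The bin is
    chosen with probability proportional to a particle-in-the-box
    rate ~ n^2 V = N^2 / V; the pair is then drawn at random."""
    n_obj = np.array([len(b['v']) for b in bins])
    w = np.where(n_obj > 1, n_obj.astype(float)**2, 0.0) / \
        np.array([b['volume'] for b in bins])
    k = rng.choice(len(bins), p=w/w.sum())
    i, j = rng.choice(n_obj[k], size=2, replace=False)
    v_t, v_i = bins[k]['v'][i], bins[k]['v'][j]   # target, impactor
    return (5.0*v_t + v_i)/6.0                    # COM velocity, 5:1 mass
\end{verbatim}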
The next issue is to determine whether these collision orbits are in
the Kuiper belt. For this purpose, we define a KBO as an object on a
stable (for at least a long period of time) orbit with a perihelion
distance, $q$, greater than $34\,$AU (indicated by the blue curves in
Figure~\ref{fig:aei}). To test stability, we performed a $50\,$Myr
integration of the orbit under the gravitational influence of the Sun
and the four giant planets. As previous studies of the stability of
KBOs have shown (Duncan et al$.$~1995; Kuchner et al$.$~2002), a
time-span of $50\,$Myr adequately separates the stable from the
unstable regions of the Kuiper belt. Any object that evolved onto an
orbit with $q\!<\!33\,$AU during this period of time was assumed to be
unstable. The remainder were assumed to be stable and are shown as
the black dots in Figure~\ref{fig:aei}.
We find that collisions can effectively fill the Kuiper belt out to
near Neptune's 1:2 mean motion resonance at $48\,$AU. We created
stable, non-resonant objects with $q$'s as large as $46.5\,$AU.
Indeed, the object with the largest $q$ has $a\!=\!47.3\,$AU,
$e\!=\!0.017$, and $i\!=\!18.2^\circ$ and thus it is fairly typical of
the KBOs that we see. With regard to the EL$_{61}$\ family, we easily
reproduce stable orbits with the same $a$ and $e$. However, we find
that it is difficult to reproduce the family's inclination. Although
we do produce a few orbits with inclinations larger than the family's,
$\sim\!90\%$ of the orbits in our simulations have inclinations less
than that of the EL$_{61}$\ family.
The lack of high inclination objects is clearly a limitation of our
model. We believe, however, that this mismatch is more the result of
limitations in our scattered disk models than of our collisional
mechanism for capture in the Kuiper belt. Neither the LD/DL97,
DWLD04, nor TGML05 simulations produce high enough inclinations to
explain what we see in the scattered disk. So, if we had a more
realistic scattered disk model, we would probably be able to produce
more objects with inclinations like EL$_{61}$\ and its cohorts. One
concern of such a solution is that the higher inclinations would
affect our collision probabilities, particularly through their effects
on $\varrho_{ss}$. To check this, we performed a new set of
calculations where we arbitrarily increased the inclinations of the
scattered disk particles by a factor of 2. We find that the increased
inclinations decrease $\varrho_{ss}$ by less than 20\%. Thus, we
conclude that if we had access to a scattered disk model with more
realistic inclinations, we should be able to better reproduce the
orbit of the family without significantly affecting the probability of
producing it.
The values of $p_{\rm KB}$ (the fraction of {EL$_{61}$}-forming impacts
that lead to objects that are trapped in the Kuiper belt) resulting
from our main Monte Carlo simulations are listed in
Table~\ref{tab:val}. Combining $p_{\rm KB}$ and $p_{ss}$ we find that
the probability that, in the age of the Solar System, two SDOs with
radii of $500\,$km and $850\,$km hit one another leading to a family
in the Kuiper belt (which we called $p_{\rm SD}$) is between 0.1\% and
47\%, depending on the assumptions we use. For comparison, in
\S{\ref{sec:KB}} we computed that the probability that the EL$_{61}$\
family is the result of the collision between two Kuiper belt objects
is $\sim\!0.02\%$, or is the result of a KBO--SDO collision is
$\lesssim\!0.1\%$\footnote{In \S{\ref{ssec:KBSD}}, we did not take
into account the fact that collisions between a larger KBO and a
somewhat smaller SDO could result in a family on an unstable orbit,
i$.$e$.$ on an orbit that is not in the Kuiper belt. Applying the
above procedures to the collisions described in \S{\ref{ssec:KBSD}},
we find that there is only a 29\% chance that the resulting family
would be on a stable Kuiper belt orbit. The values of $p_{sk}$ in
Table~\ref{tab:val} should be multiplied by this factor.}. Thus, we
conclude that the progenitors of the EL$_{61}$\ family are much more
likely to have originated in the scattered disk than in the Kuiper
belt.
Up to this point, we have been concentrating on whether our model
can reproduce the observed center of mass orbit of the EL$_{61}$\
family. However, the spread of orbits could also represent an
important observational constraint (Morbidelli et al$.$~1995). In
particular, assuming that the ejection velocities of the collision
were isotropic around the center of mass, the family members should
fall inside an ellipse in $a$--$e$ and $a$--$i$ space. The
orientation and axis ratio of the ellipse in $a$--$e$ space are
strong functions of the mean anomaly of the collision orbit at the
time of the impact ($M$), while the axis ratio of the ellipse in
$a$--$i$ space is a function of both $M$ and the argument of
perihelion ($\omega$). The major axis of the ellipse in $a$--$i$
space should always be parallel to the $a$ axis\footnote{This
is indeed observed for the EL$_{61}$\ family. This fact strongly
supports the idea that these objects really are the result of a
collision and not simply a statistical fluke.}. Using this
information, RB07 estimated that at the time of the collision, the
center of mass orbit had $M\!=\!76^\circ$ and
$\omega\!=\!271^\circ$.
Given that we are arguing that the target and impactor originated
in the scattered disk, we might expect that certain impact
geometries are preferred, while others are forbidden. Thus, we
examined the $M$ and $\omega$ of all the collisions shown in
Figure~\ref{fig:aei} (black dots) with orbits near that of RB07's
center-of-mass orbit. In particular, we chose collision orbits with
$42\!<\!a\!<\!44\,$AU, $0.08\!<\!e\!<\!0.14$, and $i\!>\!15^\circ$.
We found that we cannot constrain the values of $\omega$. Indeed,
these orbits are roughly uniform in this angle. However, our model
avoids values of $M$ between $-37^\circ$ and $62^\circ$, i$.$e$.$
near perihelion. This is a result of the fact that the collision
must conserve momentum, lose energy, and that the initial orbits of
the progenitors were in the scattered disk while the family
must end up in the Kuiper belt. RB07's value of $M$ falls in the
range covered by our models.
Figure~\ref{fig:orbel} shows a comparison between the spread of
the EL$_{61}$\ family in orbital element space (black dots) and two of
our fictitious families. The fictitious family members were
generated by isotropically ejecting particles from the point of
impact with a velocity of 150$\,$m/s (BBRS). The collision orbits
for these families are consistent with the center of mass orbit for
the family. The collision orbit of the family shown in green has
$M\!=\!71^\circ$ and $\omega\!=\!273^\circ$ --- similar to the
values inferred by RB07. For comparison, the family shown in red
has an orbit with similar $a$, $e$, and $i$, but $M\!=\!174^\circ$
and $\omega\!=\!294^\circ$. We can conclude that, although this
test is not very constraining because our model can reproduce most
values of $M$ and $\omega$, we can match what is seen.
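As an illustration of how such fictitious families can be generated,
the following self-contained Python sketch (our reconstruction, not the
original code) ejects particles isotropically at 150$\,$m/s from the
impact point of a collision orbit and converts the resulting state
vectors back to $(a,e,i)$; the element values are representative ones
taken from the text and the figure caption.
\begin{verbatim}
import numpy as np

GM = 4.0*np.pi**2              # heliocentric two-body, AU^3/yr^2
MS_TO_AUYR = 1.0/4.74047e3     # m/s -> AU/yr
rng = np.random.default_rng(2)

def state_from_elements(a, e, inc, omega, M, Omega=0.0):
    """Heliocentric position/velocity (AU, AU/yr); angles in radians."""
    E = M
    for _ in range(60):                       # Newton solve of Kepler
        E -= (E - e*np.sin(E) - M)/(1.0 - e*np.cos(E))
    n = np.sqrt(GM/a**3)
    r_pf = np.array([a*(np.cos(E)-e), a*np.sqrt(1-e**2)*np.sin(E), 0.0])
    v_pf = np.array([-np.sin(E), np.sqrt(1-e**2)*np.cos(E), 0.0]) * \
           a*n/(1.0 - e*np.cos(E))
    cO, sO = np.cos(Omega), np.sin(Omega)
    ci, si = np.cos(inc), np.sin(inc)
    cw, sw = np.cos(omega), np.sin(omega)
    R = np.array([[cO*cw-sO*sw*ci, -cO*sw-sO*cw*ci,  sO*si],
                  [sO*cw+cO*sw*ci, -sO*sw+cO*cw*ci, -cO*si],
                  [sw*si,           cw*si,           ci   ]])
    return R @ r_pf, R @ v_pf

def elements_from_state(r, v):
    h = np.cross(r, v)
    a = 1.0/(2.0/np.linalg.norm(r) - v @ v / GM)      # vis-viva
    evec = np.cross(v, h)/GM - r/np.linalg.norm(r)    # Runge-Lenz
    return a, np.linalg.norm(evec), \
           np.degrees(np.arccos(h[2]/np.linalg.norm(h)))

# a collision orbit like that of the green family in the text
r0, v0 = state_from_elements(42.0, 0.09, np.radians(21.0),
                             np.radians(273.0), np.radians(71.0))
kick = rng.normal(size=(500, 3))                      # isotropic kicks
kick *= 150.0*MS_TO_AUYR/np.linalg.norm(kick, axis=1)[:, None]
family = np.array([elements_from_state(r0, v0 + dv) for dv in kick])
\end{verbatim}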
\begin{figure}[h!]
\vglue 7.3truein
\special{psfile=Levison_EL61_fig3.eps angle=0 hoffset=-50 voffset=-80 vscale=90 hscale=90}
\caption{\footnotesize \label{fig:orbel}
A comparison of the spread of families in $\Delta
a$--$\Delta e$ and $\Delta a$--$\Delta i$ space, where $\Delta
x$ is defined to be the difference between a particular orbital
element of the family member and that of the collision orbit.
The black dots show the proper orbital elements of the real
family members as determined by RB07. We did not plot
2003~EL$_{61}$ or 1999~OY$_{3}$ because their orbits have
changed since the family formed (RB07). The green dots show a
fictitious family with a collision orbit of $a\!=\!42.0\,$AU,
$e\!=\!0.09$, $i\!=\!21^\circ$, $\omega\!=\!111^\circ$,
$M\!=\!-72^\circ$. For comparison, the red dots show a
fictitious family with a collision orbit of $a\!=\!42.2\,$AU,
$e\!=\!0.09$, $i\!=\!23^\circ$, $\omega\!=\!294^\circ$,
$M\!=\!174^\circ$. This shows that this diagnostic is a
sensitive test for the models and that we can reproduce the
observations.}
\end{figure}
There is one more issue we must consider. In \S{\ref{ssec:KBKB}}, we
described the three phases of Kuiper belt evolution: 1) A quiescent
phase of growth (Stage~I), 2) a violent phase of dynamical excitation
and, perhaps, mass depletion (Stage~II), and 3) a relatively benign
modern phase (Stage~III). We argued that any collisional family that
formed during Stage~I or Stage~II would have been dispersed during the
chaotic events that excited the orbits in the Kuiper belt. Thus, the
family forming impact must have occurred during Stage~III. However,
the fact that the violent evolution is over before the collision does
not mean that the orbits of the planets must have remained unchanged.
As a matter of fact, the decay of the scattered disk population
actually causes Neptune's orbit to slowly migrate outward. This, in
turn, causes resonances to sweep through the Kuiper belt, potentially
affecting the orbits of some KBOs. So, as a final step in our
analysis, we must determine whether the dynamical coherence of the
EL$_{61}$\ family would be preserved during this migration.
To address the above issue, we performed an integration of 100
fictitious family members on orbits initially with the same $a$, $e$,
and $i$ as RB07's center of mass orbit, under the gravitational
influence of the four giant planets as they migrate. We adopted the
case presented in Malhotra~(1995), where Neptune migrated from 23 to
$30\,$AU. Note that the model in Levison et al$.$~(2008) has Neptune
migrating from $\sim\!27\,$AU, so we are adopting an extreme range of
migration here. We found that only 12\% of the family members were
trapped in and pushed outward by Neptune's mean motion resonances
(i$.$e$.$ they were removed from the family). The orbits of the
remaining particles were only slightly perturbed and thus they
remained recognizable family members. Thus, we conclude that the
family would have survived the migration and that the SDO-SDO
collision is still a valid model for the origins of the EL$_{61}$\ family.
Interestingly, however, this simulation predicts that we might find
family members (which can be identified by their IR spectra; BBRS) in
the more distant Neptune resonances (1:2, 2:5...). If so, the
location of these objects can be used to constrain Neptune's location at
the time when the EL$_{61}$\ family formed.
So, we conclude that the most probable scenario for the origin of the
EL$_{61}$\ family is that it resulted from a collision between two SDOs.
If true, this result has implications far beyond the origin of a
single collisional family because it shows, for the first time, that
collisions can affect the dynamical evolution of the Kuiper belt, in
particular, and small body populations, in general. Indeed, this
process might be especially important for the so-called `hot'
classical Kuiper belt. Brown~(2001) argued that the de-biased
inclination distribution of the classical Kuiper belt is bi-modal and
can be fitted with two Gaussian functions, one with a standard
deviation $\sigma \sim 2^\circ$ (the low-inclination {\it `cold'}
core), and the other with $\sigma \sim 12^\circ$ (the high-inclination
{\it `hot'} population). Since the work of Brown, it has been shown
that the members of these two populations have different physical
properties (Tegler \& Romanishin~2000; Levison \& Stern~2001;
Doressoundiram et al$.$~2001), implying different origins.
Gomes~(2003) suggested that one way to explain the differences between
the hot and cold populations is that the hot population originated in
the scattered disk, because a small fraction of the scattered disk
could be captured into the Kuiper belt due to the gravitational
effects of planets as they migrated. Here we show that collisions can
accomplish the same result. Indeed, a collisional origin for these
objects may have the advantage of explaining why binaries with equal
mass-components are rarer in this population than in other parts of
the trans-Neptunian region. Using HST, Noll et al$.$~(2008)
found that 29\% of classical Kuiper belt objects (see their paper for
a precise definition) with inclination $<\!5.5^\circ$ are similar-mass
binary objects, while this fraction is only 2\% for objects with
larger inclinations. A collisional origin for the hot population
might explain this discrepancy because a collision that is violent
enough to kick an object from the scattered disk to the Kuiper belt
would also disrupt the binary (the binary member that was not struck
would have continued in the scattered disk).
One might expect that if the majority of the hot population was put in
place by collisions, we should be able to predict a relationship
between the size distribution of its members and that of the scattered
disk. Eq$.$~\ref{eq:pt2} shows that the collision probability scales
roughly as $N^2$. And since in the scattered disk, $N(R) \propto
R^{-q}$, we might predict that the size distribution of the hot
population is $N_h(R) \propto R^{2-2q}$ (one power of $-q$ from both
$N_{i0}$ and $N_{0t}$, and a power of 2 from $(R_i+R_t)^2$, see
Eq$.$~\ref{eq:pt2}). In this case $q\!\sim\!3$ (see above) and thus
$N_h(R) \sim R^{-4}$. However, this estimate does not take into
account the fact that the collisions themselves could affect the size
distribution of the resulting hot population. Unfortunately, it is
not yet clear what the size of the fragments would be because of poor
understanding of the collisional physics of icy objects at these
energies. As a result, it is not yet possible to investigate this
intriguing idea.
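The slope estimate in the preceding paragraph is easy to verify
numerically; the toy check below (ours, for illustration only)
evaluates the pairwise rate for a fixed impactor-to-target size ratio
and recovers $N_h(R)\propto R^{2-2q}$.
\begin{verbatim}
import numpy as np

q, ratio = 3.0, 500.0/850.0               # SD slope; R_i/R_t as for EL61
R_t = np.logspace(2.0, 3.0, 50)           # target radii (km)
rate = R_t**(-q) * (ratio*R_t)**(-q) * (R_t + ratio*R_t)**2
slope = np.polyfit(np.log(R_t), np.log(rate), 1)[0]
print(slope)                              # -> 2 - 2q = -4 for q = 3
\end{verbatim}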
\section{Conclusions}
\label{sec:end}
The recent discovery of the EL$_{61}$\ family in the Kuiper belt
(BBRS) is surprising because its formation is, at first
glance, a highly improbable event. BBRS argues that this
family is the result of a collision between two objects with radii of
$\sim\!850\,$km and $\sim\!500\,$km. The chance that such an event
would have occurred in the current Kuiper belt in the age of the Solar
System is roughly 1 in $4000$ (see \S{\ref{sec:KB}}). In addition, it
is not possible for the collision to have occurred in a massive
primordial Kuiper belt because the dynamical coherence of the family
would not have survived whatever event molded the final Kuiper belt
structure. We also investigated the idea that the family could be the
result of a target KBO being struck by a SDO projectile, and found
that the probability of such an event forming a family on a stable
Kuiper belt orbit is $\lesssim\!10^{-3}$.
In this paper, we argue that the EL$_{61}$\ family is the result of a
collision between two scattered disk objects. In particular, we
present the novel idea that the collision between two SDOs on highly
eccentric unstable orbits could damp enough orbital energy so that the
family members would end up on stable Kuiper belt orbits. This idea
of using the scattered disk as the source of both of the family's
progenitors has the advantage of significantly increasing the
probability of a collision because the population of the scattered
disk was much larger in the early Solar System (it is currently
eroding away due to the gravitational influence of Neptune --- DL97;
DWLD04). With the use of three pre-existing models of the
dynamical evolution of the scattered disk (DWLD04, LD/DL97, and
TGML05) we show that the probability that a collision between a
$\sim\!850\,$km SDO and $\sim\!500\,$km SDO occurred and that the
resulting collisional family was spread around a stable Kuiper belt
orbit can be as large as 47\%. Given the uncertainties involved, this
can be considered on the order of unity. Thus, we conclude that the
EL$_{61}$\ family progenitors are significantly more likely to have
originated in the scattered disk than the Kuiper belt.
If true, this result has important implications for the origin of the
Kuiper belt because it is the first direct indication that collisions
can affect the dynamical evolution of this region. Indeed, we
suggest at the end of \S{\ref{sec:SD}} that this process might
be responsible for the emplacement of the so-called `hot' classical
belt (Brown~2001) because it naturally explains why so few of these
objects are found to be binaries (Noll et al$.$~2008).
\acknowledgments HFL is grateful for funding from NASA's Origins, OPR,
and PGG programs. AM acknowledges funding from the French National
Programme of Planetology (PNP). DV acknowledges funding from
the Grant Agency of the Czech Republic (grant
205/08/0064) and the Research Program MSM0021620860 of the Czech
Ministry of Education. WB's contribution was paid for by NASA's
PGG, Origins, and NSF's Planetary Astronomy programs. We would also
like to thank Mike Brown, Luke Dones, Darin Ragozzine, and Paul
Weissman for comments on early versions of this manuscript.
\section*{References}
\begin{itemize}
\setlength{\itemindent}{-30pt}
\setlength{\labelwidth}{0pt}
\item[] Benz, W., Asphaug, E.\ 1999.\ Catastrophic Disruptions
Revisited.\ Icarus 142, 5-20.
\item[] Bernstein, G.~M., Trilling, D.~E., Allen, R.~L., Brown, M.~E.,
Holman, M., Malhotra, R.\ 2004.\ The Size Distribution of
Trans-Neptunian Bodies.\ Astronomical Journal 128, 1364-1390.
\item[] Bottke, W.~F., Nolan, M.~C., Greenberg, R., Kolvoord, R.~A.\
1994.\ Velocity distributions among colliding asteroids.\ Icarus
107, 255-268.
\item[] Brown, M.~E.\ 2001.\ The Inclination Distribution of the
Kuiper Belt.\ Astronomical Journal 121, 2804-2814.
\item[] Brown, M.~E., and 14 colleagues 2005.\ Keck Observatory Laser
Guide Star Adaptive Optics Discovery and Characterization of a
Satellite to the Large Kuiper Belt Object 2003 EL$_{61}$.\
Astrophysical Journal 632, L45-L48.
\item[] Brown, M.~E., Schaller, E.~L.\ 2007.\ The Mass of Dwarf Planet
Eris.\ Science 316, 1585.
\item[] Brown, M.~E., Trujillo, C.~A., Rabinowitz, D.~L.\ 2005.\
Discovery of a Planetary-sized Object in the Scattered Kuiper Belt.\
Astrophysical Journal 635, L97-L100.
\item[] Brown, M.~E., Barkume, K.~M., Ragozzine, D., Schaller, E.~L.\
2007. A Collisional Family of Icy Objects in the Kuiper Belt.\
Nature 446, 294-296.
\item[] Canup, R.~M.\ 2005.\ A Giant Impact Origin of Pluto-Charon.\
Science 307, 546-550.
\item[] Davis, D.~R., Farinella, P.\ 1997.\ Collisional Evolution of
Edgeworth-Kuiper Belt Objects.\ Icarus 125, 50-60.
\item[] Dones, L., Weissman, P.~R., Levison, H.~F., Duncan, M.~J.\
2004.\ Oort cloud formation and dynamics.\ Comets II 153-174.
\item[] Doressoundiram, A., Barucci, M.~A., Romon, J., Veillet, C.\
2001.\ Multicolor Photometry of Trans-neptunian Objects.\ Icarus
154, 277-286.
\item[] Duncan, M.~J., Levison, H.~F.\ 1997.\ A scattered comet disk
and the origin of Jupiter family comets.\ Science 276, 1670-1672.
\item[] Duncan, M.~J., Levison, H.~F., Budd, S.~M.\ 1995.\ The
Dynamical Structure of the Kuiper Belt.\ Astronomical Journal 110,
3073-3081.
\item[] Duncan, M., Levison, H., Dones, L.\ 2004.\ Dynamical evolution
of ecliptic comets.\ Comets II 193-204.
\item[] Gladman, B., Kavelaars, J.~J., Petit, J.-M., Morbidelli, A.,
Holman, M.~J., Loredo, T.\ 2001.\ The Structure of the Kuiper Belt:
Size Distribution and Radial Extent.\ Astronomical Journal 122,
1051-1066.
\item[] Gomes, R.~S.\ 2003.\ The origin of the Kuiper Belt
high-inclination population.\ Icarus 161, 404-418.
\item[] Gomes, R., Levison, H.~F., Tsiganis, K., Morbidelli, A.\
2005.\ Origin of the cataclysmic Late Heavy Bombardment period of
the terrestrial planets.\ Nature 435, 466-469.
\item[] Gomes, R.~S., Fern\'andez, J., Gallardo, T., Brunini,
A.~2007.\ The Scattered Disk: Origins, Dynamics and End States.\ The
Solar System Beyond Neptune. 259-273.
\item[] Hahn, J.~M., Malhotra, R.\ 2005.\ Neptune's Migration into a
Stirred-Up Kuiper Belt: A Detailed Comparison of Simulations to
Observations.\ Astronomical Journal 130, 2392-2414.
\item[] Kenyon, S.~J., Luu, J.~X.\ 1998.\ Accretion in the Early
Kuiper Belt. I. Coagulation and Velocity Evolution.\ Astronomical
Journal 115, 2136-2160.
\item[] Kenyon, S.~J., Luu, J.~X.\ 1999.\ Accretion in the Early
Outer Solar System.\ Astrophysical Journal 526, 465-470.
\item[] Kenyon, S.~J., Bromley, B.~C.\ 2002.\ Collisional Cascades in
Planetesimal Disks. I. Stellar Flybys.\ Astronomical Journal 123,
1757-1775.
\item[] Kenyon, S.~J., Bromley, B.~C.\ 2004.\ The Size Distribution of
Kuiper Belt Objects.\ Astronomical Journal 128, 1916-1926.
\item[] Kenyon, S.~J., Bromley, B.~C., O'Brien, D.~P., Davis, D.~R.\
2008. Formation and Collisional Evolution of Kuiper belt Objects.
The Solar System Beyond Neptune 293-314.
\item[] Kuchner, M.~J., Brown, M.~E., Holman, M.\ 2002.\ Long-Term
Dynamics and the Orbital Inclinations of the Classical Kuiper Belt
Objects.\ Astronomical Journal 124, 1221-1230.
\item[] Levison, H.~F., Duncan, M.~J.\ 1994.\ The long-term dynamical
behavior of short-period comets.\ Icarus 108, 18-36.
\item[] Levison, H.~F., Duncan, M.~J.\ 1997.\ From the Kuiper Belt to
Jupiter-Family Comets: The Spatial Distribution of Ecliptic Comets.\
Icarus 127, 13-32.
\item[] Levison, H.~F., Stern, S.~A.\ 2001.\ On the Size Dependence of
the Inclination Distribution of the Main Kuiper Belt.\ Astronomical
Journal 121, 1730-173
\item[] Levison, H.~F., Morbidelli, A., Van Laerhoven, C., Gomes, R.,
Tsiganis, K.~ 2008. Origin of the structure of the Kuiper Belt
during a Dynamical Instability in the Orbits of Uranus and Neptune.\
Icarus 196, 258-273.
\item[] Malhotra, R.\ 1995.\ The Origin of Pluto's Orbit: Implications
for the Solar System Beyond Neptune.\ Astronomical Journal 110, 420.
\item[] Morbidelli, A.\ 2007.\ Solar system: Portrait of a suburban
family.\ Nature 446, 273-274.
\item[] Morbidelli, A., Zappala, V., Moons, M., Cellino, A.,
  Gonczi, R.\ 1995.\ Asteroid families close to mean motion
  resonances: dynamical effects and physical implications.\ Icarus
  118, 132.
\item[] Morbidelli, A., Valsecchi, G.~B.\ 1997.\ NOTE: Neptune
Scattered Planetesimals Could Have Sculpted the Primordial
Edgeworth-Kuiper Belt.\ Icarus 128, 464-468.
\item[] Morbidelli, A., Emel'yanenko, V.~V., Levison, H.~F.\ 2004.\
Origin and orbital distribution of the trans-Neptunian scattered
disc.\ Monthly Notices of the Royal Astronomical Society 355,
935-940.
\item[] Morbidelli, A., Levison, H.~F., Tsiganis, K., Gomes, R.\
2005.\ Chaotic capture of Jupiter's Trojan asteroids in the early
Solar System.\ Nature 435, 462-465.
\item[] Morbidelli, A., Levison, H.~F., Gomes, R.\ 2007.\ The
Dynamical Structure of the Kuiper Belt and its Primordial Origin.\
The Kuiper Belt, in press.
\item[] Nagasawa, M., Ida, S.\ 2000.\ Sweeping Secular Resonances in
the Kuiper Belt Caused by Depletion of the Solar Nebula.\
Astronomical Journal 120, 3311-3322.
\item[] Nesvorn{\'y}, D., Vokrouhlick{\'y}, D., Morbidelli, A.\
2007a.\ Capture of Irregular Satellites during Planetary
Encounters.\ Astronomical Journal 133, 1962-1976.
\item[] Nesvorn{\'y}, D., Vokrouhlick{\'y}, D., Bottke, W.~F.,
Gladman, B., H{\"a}ggstr{\"o}m, T.\ 2007b.\ Express delivery of
fossil meteorites from the inner asteroid belt to Sweden.\ Icarus
188, 400-413.
\item[] Noll, K.~S., Grundy, W.~M., Stephens, D.~C., Levison, H.~F.,
  Kern, S.~D.\ 2008.\ Evidence for two populations of classical
  transneptunian objects: The strong inclination dependence of
  classical binaries.\ Icarus 194, 758-768.
\item[] Petit, J.-M., Holman, M.~J., Gladman, B.~J., Kavelaars, J.~J.,
Scholl, H., Loredo, T.~J.\ 2006.\ The Kuiper Belt luminosity
function from $m_{R}=$ 22 to 25.\ Monthly Notices of the Royal
Astronomical Society 365, 429-438.
\item[] Rabinowitz, D.~L., Barkume, K., Brown, M.~E., Roe, H.,
Schwartz, M., Tourtellotte, S., Trujillo, C.\ 2006.\ Photometric
Observations Constraining the Size, Shape, and Albedo of 2003 EL61,
a Rapidly Rotating, Pluto-sized Object in the Kuiper Belt.\
Astrophysical Journal 639, 1238-1251.
\item[] Ragozzine, D., Brown, M.~E.\ 2007.\ Candidate Members and Age
Estimate of the Family of Kuiper Belt Object 2003 EL61.\
Astronomical Journal 134, 2160-2167.
\item[] Sheppard, S.~S., Trujillo, C.~A.\ 2006.\ A Thick Cloud of
Neptune Trojans and Their Colors.\ Science 313, 511-514.
\item[] Stern, S.~A.\ 1996.\ On the Collisional Environment, Accretion
Time Scales, and Architecture of the Massive, Primordial Kuiper
Belt.\ Astronomical Journal 112, 1203.
\item[] Stern, S.~A., Colwell, J.~E.\ 1997a.\ Accretion in the
Edgeworth-Kuiper Belt: Forming 100-1000 KM Radius Bodies at 30 AU
and Beyond.\ Astronomical Journal 114, 841.
\item[] Stern, S.~A., Colwell, J.~E.\ 1997b.\ Collisional Erosion in
the Primordial Edgeworth-Kuiper Belt and the Generation of the 30-50
AU Kuiper Gap.\ Astrophysical Journal 490, 879.
\item[] Tegler, S.~C., Romanishin, W.\ 2000.\ Extremely red
Kuiper-belt objects in near-circular orbits beyond 40 AU.\ Nature
407, 979-981.
\item[] Thommes, E.~W., Duncan, M.~J., Levison, H.~F.\ 1999.\ The
formation of Uranus and Neptune in the Jupiter-Saturn region of the
Solar System.\ Nature 402, 635-638.
\item[] Thommes, E.~W., Duncan, M.~J., Levison, H.~F.\ 2002.\ The
Formation of Uranus and Neptune among Jupiter and Saturn.\
Astronomical Journal 123, 2862-2883.
\item[] Trujillo, C.~A., Jewitt, D.~C., Luu, J.~X.\ 2000.\ Population
of the Scattered Kuiper Belt.\ Astrophysical Journal 529, L103-L106.
\item[] Tsiganis, K., Gomes, R., Morbidelli, A., Levison, H.~F.\
2005.\ Origin of the orbital architecture of the giant planets of
the Solar System.\ Nature 435, 459-461.
\end{itemize}
\clearpage
\end{document}
\section{Introduction}
It is believed that Galactic cosmic rays (CRs) are
accelerated by supernova remnants (SNRs) or massive star clusters in our Galaxy. CR protons
interact with the interstellar gas and produce neutral pions
(schematically written as $p+p\rightarrow \pi^0+{\rm other \,
products}$), which in turn decay into $\gamma$-rays
($\pi^0\rightarrow\gamma+\gamma$). Seven star-forming galaxies have been firmly detected in $\gamma$-rays with the Fermi Large Area Telescope (LAT), including the Large Magellanic Cloud \citep[LMC;][]{2010A&A...512A...7A,2016A&A...586A..71A}, the Small Magellanic Cloud \citep{2010A&A...523A..46A}, the Andromeda galaxy M31 \citep{2010A&A...523L...2A}, starburst galaxies M82 and NGC~253 \citep{2010ApJ...709L.152A}, NGC~2146 \citep{2014ApJ...794...26T} and Arp~220 \citep{2016ApJ...821L..20P,2016ApJ...823L..17G}. In addition, a few star-forming galaxies with
obscured active galactic nuclei (AGNs), such as NGC~1068, NGC~4945, NGC 3424 and UGC 11041, have been detected by \textit {Fermi}--LAT\/ \citep{2012ApJ...755..164A,2019ApJ...884...91P}.
Noting the connection between star formation and CRs
in starburst galaxies, some authors have proposed scaling
relationships between star formation rates (SFRs) and $\gamma$-ray luminosities \citep{2002ApJ...575L...5P,2004ApJ...617..966T,2007ApJ...654..219T,2007APh....26..398S,2010MNRAS.403.1569P,2011ApJ...734..107L}. SFR indicators include the total infrared (IR)
luminosity in $8-1000\,\mu$m \citep{1998ApJ...498..541K}, and radio
continuum luminosity at 1.4 GHz produced by synchrotron emitting CR electrons \citep{2001ApJ...554..803Y}. With the accumulation
of \textit {Fermi}--LAT\/ data, the correlation
between $\gamma$-ray luminosities and SFR indicators was first
found in \cite{2010A&A...523L...2A}. \cite{2012ApJ...755..164A} studied a sample of 69 dwarf, spiral, and luminous and
ultraluminous IR galaxies using 3 years of data collected by
\textit {Fermi}--LAT\/. They found further evidence for a quasi-linear
scaling relation between the $\gamma$-ray luminosity and total
infrared luminosity.
This correlation was later extended to
higher luminosity galaxies with $\gamma$-ray detection from a luminous IR galaxy, NGC~2146 \citep{2014ApJ...794...26T}, and an ultraluminous IR galaxy, Arp~220 \citep{2016ApJ...821L..20P,2016ApJ...823L..17G}.
In this paper, we report a systematic search for possible $\gamma$-ray emission from galaxies in the IRAS Revised Bright Galaxies Sample, using 11.4 years of $\gamma$-ray data collected by the \textit {Fermi}--LAT\/ telescope. While the detection of GeV emission from M33 and Arp~299 was briefly mentioned in our previous paper (\citealt{Xi2020}, hereafter Paper I), here we present the details of these two $\gamma$-ray sources and discuss the nature of their emission. The paper is organized as follows.
In \S 2, we present a description of the galaxy sample selection and the \textit {Fermi}--LAT\/ data analysis procedure. In \S 3, we present the results of the analysis. In \S 4, we discuss the nature of the $\gamma$-ray emissions from M33 and Arp~299.
\section{Data set and analysis methods}
The scaling relation reported in \cite{2012ApJ...755..164A} implies the $\gamma$-ray detection is likely associated with bright IR galaxies. We selected our sample galaxies from the IRAS Revised Bright Galaxies Sample\footnote{This is a complete flux-limited sample of all extragalactic objects brighter than 5.24\,Jy at 60\,$\mu m$, covering the entire sky surveyed by IRAS at Galactic latitudes $ |b|> 5^{\circ}$.} \citep{2003AJ....126.1607S}, excluding the 15 IR-bright galaxies that have been detected in $\gamma$-rays with \textit {Fermi}--LAT\/ and listed in the \textit {Fermi}--LAT\/ Fourth Source Catalog \citep[4FGL;][]{2020ApJS..247...33A}. We performed the standard sequence of analysis steps for each galaxy (described in Paper I), resulting in the detection of two new $\gamma$-ray sources that are, respectively, spatially coincident with M33 and Arp~299. The details of the analysis for these two galaxies are given below.
\textit {Fermi}--LAT\/ is a pair-conversion telescope covering the energy range from 20~MeV to more than 300~GeV with a field of view of 2.4~sr \citep{2009ApJ...697.1071A}. For the analysis in this work, we employed recent developments of the Science Tools and used the \textit {Fermi}--LAT\/ Pass 8 Source class events collected in $\sim 11.4$ yr, which include both front- and back-converted LAT events and correspond to the P8R3\_SOURCE\_V2 instrument response functions, but exclude the events with a zenith angle larger than $90^\circ$ in order to remove the contamination from the Earth limb.
For the galaxy M33, we selected the events in the energy range $0.3-500$\,GeV and within a rectangular region of interest (ROI) of size $17^\circ \times 17^\circ$ centered at the M33 IR center ($\alpha_{2000}=23.475^\circ,\delta_{2000}=30.669^\circ$). We used the $gtmktime$ tool to select time intervals expressed by (DATA\_QUAL $> 0$) \&\& (LAT\_CONFIG ==1), and binned the data in 20 logarithmically spaced bins in energy and in spatial pixels of $0.025^\circ$. The $\gamma$-ray background model consists of the latest template $\rm gll\_iem\_v7.fits$ for Galactic interstellar emission and the isotropic template with a spectrum described by the file $\rm iso\_P8R3\_SOURCE\_V2\_v01.txt$, as well as the sources listed in the 4FGL catalog within $20^\circ$ around M33. One possible shortcoming of using the 4FGL catalog (based on 8 years of LAT observations) to perform the search within a data set covering 11.4 yr is that unrelated new point sources may be discovered inside the ROI of the target source, which may influence the analysis. We produced a $6^\circ \times 6^\circ$ map of the Test Statistic (TS)\footnote{TS is defined as $\rm{TS}=-2({\rm ln}L_0-{\rm ln}L)$, where $L_0$ is the maximum-likelihood value for the null hypothesis and $L$ is the maximum likelihood with the additional point source with a power-law spectrum.} centered at the M33 IR center
to search for new background $\gamma$-ray sources\footnote{For a new $\gamma$-ray point source, the best location and uncertainty can be determined by maximizing the likelihood value with respect to its position only and using the distribution of the Localization Test Statistic (LTS), defined as twice the logarithm of the likelihood ratio of any position with respect to the maximum. Using the $6^\circ \times 6^\circ$ TS map with a resolution of $0.05^\circ$ per pixel, we preliminarily located each new source at the pixel with the local peak TS value. For each new source, we produced $0.6^\circ \times 0.6^\circ$ TS and LTS maps with a resolution of $0.01^\circ$ per pixel around the preliminary position to determine the best-fit location and uncertainty, respectively.}. The criterion for a new background source is that the $\gamma$-ray excess has a significance of $\rm TS > 25$ above the diffuse background and has an angular separation larger than $0.3^\circ$ from the center of M33. We find three new background sources and include them in our background model for M33 (see Appendix A).
For the galaxy Arp~299, we selected the events in the energy range $0.3-500$\,GeV within a rectangular ROI of size $17^\circ \times 17^\circ$ centered at the galaxy IR center ($\alpha_{2000}=172.136^\circ,\delta_{2000}=58.561^\circ$). The remaining steps, i.e., the data filtering, data binning and background modeling, are carried out in the same way as for M33. There is one new background source in the $6^\circ \times 6^\circ$ region centered at Arp~299.
In the likelihood analysis, we allow each source within $6.5^\circ$ from the ROI center to have a free normalization (the $68\%$ containment radius of photons at normal incidence with an energy of 300~MeV is roughly $2.5^\circ$). This choice ensures that $99.9\%$ of the predicted $\gamma$-ray counts is contained within the chosen radius. The normalizations of the Galactic and isotropic diffuse components are always left free. For the background-only fitting of M33 and Arp~299, we first freed the spectral parameters (normalization and index) of sources within the $6.5^\circ$ region around the galaxies and then fixed the indices for the subsequent analysis.
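The fits above were performed with the standard Fermi Science Tools; a
roughly equivalent setup can be expressed with the fermipy wrapper, as
in the sketch below (our illustration only, with a hypothetical
configuration file name; the actual pipeline used here may differ in
detail):
\begin{verbatim}
from fermipy.gtanalysis import GTAnalysis

# 'config_m33.yaml' (hypothetical) would declare the 17x17 deg ROI,
# 0.3-500 GeV, 0.025 deg pixels, zmax=90, P8R3_SOURCE_V2 IRFs, and
# the gll_iem_v7 / iso_P8R3_SOURCE_V2_v01 diffuse templates.
gta = GTAnalysis('config_m33.yaml')
gta.setup()

# free the normalizations within 6.5 deg of the ROI center; the
# Galactic and isotropic diffuse components are always left free
gta.free_sources(distance=6.5, pars='norm')
gta.free_source('galdiff')
gta.free_source('isodiff')
fit_result = gta.fit()

# TS map for a putative power-law point source with Gamma = 2
tsmap = gta.tsmap('m33', model={'SpatialModel': 'PointSource',
                                'Index': 2.0})
\end{verbatim}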
\section{Data analysis results}
\begin{table*}
\begin{deluxetable}{lcccccc}
\tabletypesize{\small}
\tablecaption{Spatial template analysis and spectral results of M 33 and Arp 299\label{tab:1}}
\tablewidth{0pt}
\tablehead{
\colhead{Spatial model} &\colhead{$\rm TS$} &\colhead{$R.A.$}& \colhead{(Decl.)} & \colhead{$F_{\rm 0.1-100 \rm GeV}$}& \colhead{$\Gamma$} &\colhead{$N_{\rm dof}$} \\
& & \colhead{[deg]}& \colhead{[deg]} & \colhead{[$10^{-12}$\,${\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}$\,]} & & \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)}}
\startdata
\multicolumn{7}{c}{M33}\\
\hline%
Point source (free)\tablenotemark{a} & 25.1 & 23.609 & 30.784 & 1.28$\pm$0.42 & 2.23$\pm$0.24 & 4 \\
Point source (fixed)\tablenotemark{b} & 16.7 & 23.475 & 30.669 & 1.34$\pm$0.47&2.41$\pm$0.26 & 2 \\
$0.23^\circ$ Disk \tablenotemark{c} & 23.2 & 23.475 & 30.669 & 1.55$\pm$0.35&2.22$\pm$0.42 & 3\\
Herschel/PACS map (160 $\mu {\rm m}$) & 22.8 & & & 1.48$\pm$0.40&2.22$\pm$0.42 & 2 \\
IRAS map (60 $\mu {\rm m}$) & 23.9 & & & 1.52$\pm$0.40&2.20$\pm$0.43 & 2 \\
\hline%
\multicolumn{7}{c}{Arp~299}\\
\hline%
Point source (free)\tablenotemark{a} & 27.8 & 172.050 & 58.526 & 1.08$\pm$0.28 & 2.07$\pm$0.20 & 4 \\
\enddata
\tablenotetext{a}{The point source is at the best-fit location, determined at the position of the peak TS value using the TS map with a resolution of $0.01^\circ$ per pixel.}
\tablenotetext{b}{The point source is at the IR center of the galaxy M33.}
\tablenotetext{c}{The center of the disk model is fixed to the IR center of the galaxy M33.}
\tablecomments{(1) Spatial model name;
(2) $\rm TS$ value for each spatial model;
(3) right ascension J2000;
(4) declination J2000;
(5) 100 MeV--100 GeV $\gamma$-ray average flux;
(6) power-law spectral photon index derived from broad band spectral fitting;
(7) degrees of freedom for each spatial model.}
\end{deluxetable}
\end{table*}
\subsection{M33}
\subsubsection{Morphological analysis}
Fig.~\ref{fig:1} shows the $0.6^\circ\times 0.6^\circ$ TS map in $0.3-500$\,GeV around M33. We find that the position of the TS peak is located in the northeast part of the galaxy. We tested various spatial models and assessed which model is better by employing $\Delta \rm TS = TS_{1}-TS_{2}$, where $\rm TS_{1}$ and $\rm TS_{2}$ are the TS values of model 1 and model 2, respectively\footnote{If the difference in the number of degrees of freedom between model 1 and model 2 is $\Delta N_{\rm dof}$, we expect the cumulative density of $\Delta \rm TS$ to follow a $\chi^2$ distribution with $\Delta N_{\rm dof}$ degrees of freedom. When the two models have the same number of degrees of freedom, we simply assume a $\chi^2$ distribution with one degree of freedom.}. We first explored the point source models at the best-fit location (i.e., the position of the peak TS value) and at the center of M33, respectively. The TS values are, respectively, 25.1 and 16.7, which suggests that the source is likely to be offset from the galaxy center at a significance of $2.4\sigma$. In addition, we considered spatially extended templates based on the Herschel/PACS map at $160\,\mu$m and the IRAS map at $60\, \mu$m. These templates are used to test the spatial correlation of the $\gamma$-ray emission with star formation sites. The Herschel/PACS and IRAS map models provide better fits ($\sim 2.5\sigma$) to the data than the point source model at the center of M33, but give almost equally good fits as the point source model at the best-fit location. We also tested a uniform-brightness disk model with free radius centered at the optical center of M33. The TS value peaks when the radius is $0.23^\circ$. We do not find any improvement over the point source model at the best-fit location. The results for all the considered morphological tests are shown in Table \ref{tab:1}.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig1.pdf}
\caption{TS map in the energy band $0.3-500$\,GeV around M33. The purple contours correspond to the $68\%$ and $95\%$ confidence regions assuming a template of a point-like source at the best-fit location. The dark green contours correspond to the map of IR flux measured by IRAS at $60\,\mu$m. }
\label{fig:1}
\end{figure}
\subsubsection{Flux variability}\label{M33_FV}
We retained the point source model at the best-fit location for examining the variability of the $\gamma$-ray flux. We computed light curves in four and eight time bins over 11.4 yr, for events in the energy range $0.3-500$\,GeV. For the analysis in each time bin, all sources within the $6.5^\circ$ region around M33 have their spectra fixed to the shapes obtained from the above broad band analysis. The result is shown in Figure \ref{fig:2}. We then used a likelihood-based statistic to test the significance of the variability. Following the definition in 2FGL \citep{2012ApJS..199...31N}, the variability index is constructed from the likelihoods in the null hypothesis, in which the source flux is constant across the full time period, and in the alternative hypothesis, in which the flux in each bin is optimized: ${\rm TS_{var}} = 2\sum_{i=1}^{N} [\ln L_i(F_i)-\ln L_i(F_{\rm mean})]$, where $L_i$ is the likelihood corresponding to bin $i$, $F_i$ is the best-fit flux for bin $i$, and $F_{\rm mean}$ is the best-fit flux for the full time period assuming a constant flux. We get $1.3-1.6 \sigma$ significance for the flux variability for the analyses using the above two time binnings, which suggests no significant variability for the $\gamma$-ray emission from M33.
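Given the per-bin log-likelihoods, the variability index and its
Gaussian-equivalent significance follow from a one-line $\chi^2$
computation, as in this short sketch (ours; under the null hypothesis
${\rm TS_{var}}$ is distributed as $\chi^2$ with $N-1$ degrees of
freedom):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2, norm

def variability_significance(lnL_free, lnL_const):
    """lnL_free: ln L_i with the flux fitted in each bin; lnL_const:
    ln L_i with the flux fixed to the full-mission mean flux."""
    ts_var = 2.0*np.sum(np.asarray(lnL_free) - np.asarray(lnL_const))
    ndof = len(lnL_free) - 1
    return ts_var, norm.isf(chi2.sf(ts_var, ndof))  # sigma equivalent
\end{verbatim}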
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig2.pdf}
\caption{Light curves of M33 with 4 and 8 time bins. The mean flux is the averaged flux over the $\sim11.4$ year analysis. The upper limits at $95\%$ confidence level are derived when the TS value for the data points is lower than 4.}
\label{fig:2}
\end{figure}
\subsubsection{Spectral Analysis}\label{M33_SA}
For the spectral analysis of M33, we performed a binned maximum likelihood fitting in the $0.3-500$\,GeV energy range with 20 logarithmic energy bins in total. The power law indices are consistent with each other for the four spatial models, as shown in Table \ref{tab:1}. We generated the spectral points based on a maximum likelihood analysis with 4 logarithmic energy bins over $0.3-500$\,GeV. Within each bin, we used the point source model at the best-fit location and a power law spectrum with a fixed photon index of ${\rm \Gamma = 2}$ and a free normalization. For the background diffuse components and sources within $6.5^\circ$ of M33, we fixed their spectral indices to the best-fit values obtained from the above background fitting, but allowed the normalizations to vary. We also obtained the spectral point in the energy band $0.1-0.3$\,GeV to check whether the power-law model provides a good description. We find that the best-fit wide-band power-law model is consistent with all the spectral points, as shown in Fig.~\ref{fig:3}.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig3.pdf}
\caption{SED for M33 in the energy band 0.1-500 GeV. The red line and yellow band represent the wide-band best-fit power-law spectrum and its uncertainty, obtained in the energy band 0.3-500 GeV. The upper limits at $95\%$ confidence level are derived when the TS value for the data point is lower than 4 ($<2\sigma$). }
\label{fig:3}
\end{figure}
\subsection{Arp~299}
\subsubsection{Morphological analysis}
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig4.pdf}
\caption{TS map in the energy band $0.3-500$\,GeV for Arp~299. The purple contours correspond to the $68\%$ and $95\%$ confidence regions assuming the point-like source template. The dark green diamond corresponds to the optical position of Arp~299.}
\label{fig:4}
\end{figure}
Fig.~\ref{fig:4} shows the TS map in the $0.3-500$\,GeV energy range around the galaxy Arp~299.
We find that the galaxy position is located within the $68\%$ confidence region of the $\gamma$-ray excess.
\subsubsection{Flux variability}
To examine the variability of the $\gamma$-ray flux from Arp~299, we computed light curves in four and eight time bins over 11.4 yr for events in the energy range $0.3-500\,$GeV. We followed a procedure similar to that used for M33 in Section \ref{M33_FV}, and the result is shown in Figure \ref{fig:5}. We obtain a variability significance of $2.6-2.9 \sigma$ for the analyses using the above two time bins, which suggests a mildly significant variability in the $\gamma$-ray emission of Arp~299. We also checked the flux variability with 16 and 40 time bins. We get $3.0 \sigma$ significance for the flux variability using the 16 time bins, which agrees with the analysis using four and eight time bins. For the 40 time-bin case, we get $0.7 \sigma$ significance for the flux variability, which seems to indicate less variability on such short timescales. However, we note that, for such a weak $\gamma$-ray source, the statistics in each bin in the 40 time-bin analysis may be too low for a reliable analysis.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig5.pdf}
\caption{ Light curves of Arp~299 with 4 and 8 time bins. The mean flux is the averaged flux over the $\sim11.4$ year analysis. The upper limits at $95\%$ confidence level are derived when the TS value for the data points is lower than 4.}
\label{fig:5}
\end{figure}
\subsubsection{Spectral Analysis}
For the spectral analysis of Arp~299, we performed a binned maximum likelihood fitting in the $0.3-500\,$
GeV energy range with 20 logarithmic energy bins in total, using the point source model. The result is shown in Table~\ref{tab:1}. We also generated the spectral points determined by performing a maximum likelihood analysis in five energy bins over $0.1-500$\,GeV, similar to the case of M33 in Section \ref{M33_SA}. As shown in Figure \ref{fig:6}, the wide-band power-law model is
consistent with these spectral points.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig6.pdf}
\caption{SED for Arp 299 in the energy band 0.1-500 GeV. The red line and yellow band represent the wide-band best-fit power-law spectrum and its uncertainty, obtained in the energy band 0.3-500 GeV. The upper limits at $95\%$ confidence level are derived when the TS value for the data point is lower than 4. }
\label{fig:6}
\end{figure}
\subsection{Non-detected IR galaxies}
We derived the 95\% C.L. upper limits (UL) for each non-detected galaxy (i.e., ${\rm TS<25}$) using the Bayesian method, assuming a power-law spectrum with a fixed photon index of $2.2$. For NGC~2403, we attribute the $\gamma$-ray emission, which is present only in the first 5.7 years of the \textit {Fermi}--LAT\/ observation (4 Aug. 2008 - 25 Mar. 2014), to SN~2004dj (see Paper I). Using the last 5.7 years of \textit {Fermi}--LAT\/ data (25 Mar. 2014 - 14 Nov. 2019), we derived an upper limit for NGC~2403 assuming a point source model at the galaxy center. We compare the ULs on the $\gamma$-ray luminosities ($0.1-100$\,GeV) to the total IR luminosities ($8-1000\,\mu $m) for these non-detected IR galaxies in Fig.~\ref{fig:7}. We find that these non-detected IR galaxies are basically consistent with the empirical $L_\gamma-L_{\rm IR}$ correlation.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{fig7.pdf}%
\caption{$\gamma$-ray luminosities ($0.1-100$\,GeV) vs. total IR luminosities ($8-1000 \mu$m) for nearby star-forming galaxies. The yellow band represents the empirical correlation ${\rm log}(L_{0.1-100\,{\rm GeV}}/ {\rm erg\,s^{-1}})=(1.17\pm0.07){\rm log}(L_{8-1000\,\mu \rm m}/{10^{10}L_{\odot}})+(39.28\pm0.08)$ with an intrinsic dispersion normally distributed in logarithmic space with a standard deviation of $\sigma_{D}=0.24$ \citep{2012ApJ...755..164A}. The $\gamma$-ray luminosities for the galaxies reported in 4FGL are derived from the catalog values \citep{2020ApJS..247...33A}. The IR luminosities are from \cite{2003AJ....126.1607S}. The blue downward triangle represents the upper limit for NGC~2403 from the last 5.7 years of \textit {Fermi}--LAT\/ data.}
\label{fig:7}
\end{figure}
\section{Discussions and Conclusions}\label{discussion}
\subsection{M33}
As the third largest galaxy in our Local Group, M33 has been considered to be a
promising $\gamma$-ray source due to its proximity and relatively high gas mass and star formation activity. By using nearly 2 years of \textit {Fermi}--LAT\/ data, \citet{2010A&A...523L...2A} searched for $\gamma$-ray emission from M33, but no significant emission was detected. \citet{2017ApJ...836..208A} revisited the $\gamma$-ray emission in the direction of M33 using more than 7\,yr of LAT Pass 8 data in the energy range $0.1-100$\,GeV, but still found no significant detection. More recently, \citet{2019ApJ...880...95K} found a positive residual towards the M33 region. Di Mauro et al. (2019) reported M33 as a point-like source with $TS=39.4$ in the energy band $0.3-1000\rm\ GeV$. However, we find that the three new background sources near M33 identified by us (see the Appendix for details) are not in their background model. We repeated our analysis using a background model excluding these three new sources and find similar values of $\rm TS= 35.9$ and $F_{0.1-100\rm GeV}\sim2.1\times10^{-12}$\,${\rm erg}\,{\rm cm}^{-2}\,{\rm s}^{-1}$\, for the best-fit point source model. Thus, we think that the difference between our results and those of Di Mauro et al. (2019) is due to the new background sources.
Our measurement gives a flux of $(1.28\pm0.42) \times 10^{-12}{\rm \,erg\ cm^{-2}\ s^{-1}}$ in the energy range $0.1-100$\,GeV, implying a luminosity of $\sim1.1\times 10^{38} {\rm erg\ s^{-1}}$. In Figure \ref{fig:7}, we show the position of M33 on the empirical $L_\gamma-L_{\rm IR}$ correlation for local group galaxies and nearby star forming galaxies \citep{2012ApJ...755..164A,2016ApJ...821L..20P}. M33 agrees well with this correlation, indicating that the $\gamma$-ray emission of M33 may arise from the CR-ISM interaction process.
The TS map of M33 shows that the $\gamma$-ray emission is located in the northeast region of the galaxy, where a supergiant H~II region, NGC~604, resides. NGC~604 is the second most massive H~II region in the Local Group and it has a relatively high star-formation rate. In the H~I density distribution map of M33, an over-dense H~I gas filament is seen around that region, with a column density of ${\rm \sim 30 M_\odot \,pc^{-2}}$ \citep{2003ApJS..149..343E}. Such a high-density region provides a thick target for CR-ISM interaction, so the efficiency of $pp$ collisions is expected to be high.
The energy-loss time of protons due to the $pp$ collision $t_{\text{loss}}$ can be expressed as $\left(
0.5n\sigma _{pp}c\right) ^{-1}$, where 0.5 is the inelasticity, $n$
is the hydrogen atom number density and $\sigma_{pp}$ is the inelastic $pp$
collision cross section. Converting the atom number density to gas
surface density, $\Sigma_{g}=m_{p}nH$, where $m_{p}$ is the proton mass
and $H$ is the size of the over-density region, the energy-loss time is
\begin{equation}
t_{\text{loss}}=6.5\times 10^{6}\left(\frac{H}{200\,\text{pc}}\right)\left( \frac{\sigma
_{pp}}{30\,\text{mb}}\right) ^{-1}\left( \frac{\Sigma _{g}}{30\,M_\odot \,\text{pc}^{-2}}
\right) ^{-1}\text{yr},\end{equation}
where $H$ is the typical width of an H~I filament.
CRs are scattered off small-scale
magnetic field inhomogeneities randomly and diffuse out of the H~I filament.
The diffusive escape time is $t_{\text{diff}}=H^{2}/4D$. Here $D=D_{0}\left(
E/E_{0}\right) ^{\delta }$ is the diffusion coefficient, where $D_{0}$ and $%
E_{0}={\rm 3 GeV}$ are normalization factors, and $\delta =0-1$
depending on the spectrum of interstellar magnetic turbulence. The
diffusion time is
\begin{equation}
t_{\text{diff}}=3\times 10^{5}\left( \frac{H}{200\text{pc}}\right)
^{2}\left( \frac{D_{0}}{10^{28}\text{cm}^{2}\text{s}^{-1}}\right)
^{-1}\left( \frac{E_{p}}{3\text{GeV}}\right) ^{-\delta }\text{yr}.
\end{equation}%
With $D_0\sim {10^{27}\text{cm}^{2}\text{s}^{-1}}$,
the escape time is comparable to the $pp$ cooling time, and the region may be considered to be a proton calorimeter. Although this value is one order of magnitude smaller than the standard diffusion coefficient in the ISM of our Galaxy, it is not {\it a priori} impossible. A recent polarisation analysis of Cygnus-X, a massive star-forming region in our Galaxy, has revealed that the turbulence in the region is dominated by the magnetosonic mode \citep{2018arXiv180801913Z}, which is more effective than the commonly considered Alfv{\'e}nic mode in confining CRs \citep{2002PhRvL..89B1102Y}. There are also a few giant molecular clouds with masses up to $10^6M_\odot$ spatially associated with NGC~604 \citep{2003ApJS..149..343E}, which would enhance the average atom density and consequently the $\gamma$-ray emissivity by a factor of at least a few in that region. In addition, massive stellar winds are probably efficient CR factories \citep{1980ApJ...237..236C, 1983SSRv...36..173C, 2019NatAs...3..561A}, so we expect the CR density around NGC~604 to be higher than the average CR density in the ISM. This may explain why the peak of the $\gamma$-ray emission is located in the northeast region of M33.
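These scalings are easy to evaluate; the snippet below (our numerical
restatement of the two timescale formulas above, not independent code)
shows that the calorimetric condition $t_{\rm diff}\simeq t_{\rm loss}$
at 3 GeV indeed requires $D_0$ of order a few
$\times10^{26}$--$10^{27}\,{\rm cm^2\,s^{-1}}$:
\begin{verbatim}
def t_loss_yr(H_pc=200.0, sigma_mb=30.0, Sigma_g=30.0):
    """pp energy-loss time in yr (Sigma_g in Msun/pc^2)."""
    return 6.5e6 * (H_pc/200.0) / (sigma_mb/30.0) / (Sigma_g/30.0)

def t_diff_yr(H_pc=200.0, D0=1e28, Ep_GeV=3.0, delta=0.5):
    """Diffusive escape time from the filament in yr."""
    return 3e5 * (H_pc/200.0)**2 / (D0/1e28) / (Ep_GeV/3.0)**delta

# D0 for which escape balances pp losses at 3 GeV: ~5e26 cm^2/s
D0_cal = 1e28 * t_diff_yr() / t_loss_yr()
\end{verbatim}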
\subsection{Arp~299}
Arp~299 is one of the most powerful merging galaxy systems in the local Universe, at a distance of 44\,Mpc \citep{1999ApJ...517..130H}. The system consists of two galaxies in an advanced merging state, NGC~3690 to the west and IC~694 to the east, plus a small compact galaxy \citep{1999AJ....118..162H}. The total IR luminosity of the two galaxies (NGC~3690+IC~694) is ${L_{\rm IR}= 5.16\times 10^{11}L_{\odot}}$ \citep{2002ApJ...571..282C}, so the system belongs to the class of Luminous IR Galaxies (LIRGs). BeppoSAX revealed for the first time the existence of a deeply buried (${N_H=2.5\times10^{24} \rm cm^{-2}}$) AGN with an unabsorbed luminosity of $L_{0.5-100\,\rm keV}= 1.9\times10^{43}{\rm erg~s^{-1}}$ \citep{2002ApJ...581L...9D}. Chandra and XMM-Newton observations later confirmed the existence of a strongly absorbed AGN and located it in the nucleus of NGC~3690, while there is evidence that the second nucleus, IC~694, might also host an AGN of lower luminosity \citep{2003ApJ...594L..31Z,2004ApJ...600..634B,2009ApJ...695L.103I,2010A&A...519L...5P,2002ApJ...581L...9D,2013ApJ...779L..14A}. According to the correlation between the X-ray luminosity ($L_{2-10\,{\rm keV}}$) and the bolometric luminosity ($L_{\rm bol}$) of X-ray selected AGN \citep{2012A&A...545A..45R}, we find an intrinsic luminosity of $L_{\rm bol}\simeq 5\times 10^{43}\rm erg~s^{-1}$ for the obscured AGN. Even if all the AGN luminosity were reprocessed into the IR band, its contribution would be negligible compared to the measured IR luminosity of the galaxy, and hence the latter is related to the star-forming process in Arp~299, as in other star-forming galaxies.
As shown in Fig.~\ref{fig:5}, there is tentative evidence of flux variability in Arp~299. If this variability is real, it may be due to the contribution from the obscured AGN. Some other merging galaxy systems, such as NGC~3424, also show flux variability in $\gamma$-ray emission \citep{2019ApJ...884...91P}. However, different from NGC~3424, Arp~299 lies on the empirical $L_\gamma-L_{\rm IR}$ scaling (see Fig.~\ref{fig:7}). Considering also that the hint of variability is not very significant ($<3\sigma$), this may indicate that the obscured AGN contributes only a subdominant part of the total $\gamma$-ray flux.
\subsection{Conclusions}
To summarize, our analysis using 11.4 years of \textit {Fermi}--LAT\ data results in new detections of $\gamma$-ray emission from M33 and Arp~299. The fluxes of both sources are consistent with the correlation between the $\gamma$-ray luminosities and the total IR luminosities for star-forming galaxies, suggesting that the $\gamma$-ray emission from the two sources should arise mainly from CRs interacting with the ISM. However, there is tentative evidence of variability in the $\gamma$-ray flux of Arp~299. The variability can be tested in the future with a longer observation time. If the variability is real, part of the $\gamma$-ray emission should come from the obscured AGN in Arp~299. The morphological analysis of the $\gamma$-ray emission from M33 shows that the peak of the TS map is not located at the galaxy center, but is coincident with the supergiant H~II region NGC~604. This implies that
some bright star-forming regions could dominate over the bulk of the galaxy disk in producing $\gamma$-ray emission.
{\em A note added:} We note that during the final stage of the present work, an independent study \citep{Ajello2020} appeared online, which conducted an analysis similar to that of the present work.
\acknowledgments The work is supported by NSFC grants 11625312 and 11851304, and the National Key
R \& D program of China under the grant 2018YFA0404203.
\section{Introduction} \label{sec:intro}
In the restricted three-body problem, the Lidov-Kozai resonance
provides a way
for an external perturber to torque test particle orbits
to high eccentricity and inclination \citep{lidov62,kozai62}.
When the perturber's orbit is circular ($e_2 = 0$),
and when the inclination $I$
between the test particle's orbit and the perturber's exceeds
$\arccos \sqrt{3/5} \simeq 39^\circ$, the test particle's
argument of periastron $\omega_1$ can librate (oscillate) about either
90$^\circ$ or 270$^\circ$: these are the fixed points of the
Lidov-Kozai (``Kozai'' for short) resonance.\footnote{Throughout this
paper, subscript ``1'' denotes the interior body and subscript ``2''
denotes the exterior body; by definition, the orbital semimajor axes
$a_1<a_2$. These subscripts apply regardless of whether the body is a test
particle or a perturber.}
The larger the libration amplitude,
the greater the eccentricity
variations. For circular perturbers, the test particle
eccentricity $e_1$ can cycle between 0 and 1 as the inclination $I$
cycles between $90^\circ$ and $39^\circ$;\footnote{There is also a
retrograde branch for the standard Kozai resonance in which
$I$ cycles between $90^\circ$ and $141^\circ$. In this paper we will
encounter several resonances for which retrograde fixed points
are paired with prograde fixed points,
but will sometimes focus on the prograde branches for
simplicity.} $e_1$ and $I$ seesaw
to conserve $J_{1z} \propto \sqrt{1-e_1^2} \cos I$,
the test particle's vector angular momentum projected onto
the perturber's orbit normal.
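At the quadrupole level this seesaw admits a simple closed form: for a
test particle started on a nearly circular orbit with inclination $I_0$
to a circular perturber, the familiar result is
$e_{1,\max}=\sqrt{1-(5/3)\cos^2 I_0}$. A minimal numerical illustration
(ours, in Python, not drawn from the works cited here):
\begin{verbatim}
import numpy as np

def kozai_emax(I0_deg):
    """Quadrupole-level maximum eccentricity reached by an initially
    near-circular inner test particle inclined by I0 to a circular
    outer perturber (valid for 39 deg < I0 < 141 deg)."""
    c2 = np.cos(np.radians(I0_deg))**2
    return np.sqrt(max(0.0, 1.0 - 5.0*c2/3.0))

print(kozai_emax(60.0))   # ~0.76
print(kozai_emax(90.0))   # -> 1: e_1 can approach unity
\end{verbatim}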
For eccentric external perturbers ($e_2 \neq 0$),
the gravitational potential is no longer
axisymmetric, and the test particle's $J_{1z}$ is now free to vary,
which it can do with a vengeance: the test particle can start from a
nearly coplanar, prograde orbit ($J_{1z} > 0$) and ``flip'' to being
retrograde ($J_{1z} < 0$; e.g., \citealt{lithwick11} and
\citealt{katz11}).
The large eccentricities and inclinations
accessed by the Kozai mechanism have found application in numerous
settings: enabling Jupiter to send comets onto sun-grazing trajectories
(e.g., \citealt{bailey92});
delineating regions of orbital stability for
planetary satellites perturbed by exterior satellites and the Sun
(e.g., \citealt{carruba02};
\citealt{nesvorny03}; \citealt{tremaine14});
merging compact object binaries in triple systems
(e.g., \citealt{kushnir13}; \citealt{silsbee17});
and explaining the orbits of eccentric or short-period
extrasolar planets (e.g., \citealt{wu03}), including
warm Jupiters (\citealt{dawson14}), and hot Jupiters
with their large spin-orbit obliquities \citep{naoz11}.
See \cite{naoz16} for a review.
The Kozai resonance is a secular effect (i.e., it does not depend on
orbital longitudes, which are time-averaged away from the equations
of motion) that applies to an interior
test particle perturbed by an exterior body.
Curiously, the ``inverse'' secular problem---an
exterior test particle and an interior perturber---does not seem to
have received as much attention as the conventional problem.
\citet{gallardo12} studied the inverse problem in the context
of Kuiper belt objects perturbed by Neptune and the other giant planets,
idealizing the latter as occupying circular ($e_1=0$) and coplanar orbits,
and expanding the disturbing function (perturbation potential)
to hexadecapolar order in $\alpha \equiv a_1/a_2$.
They discovered an analogous Kozai resonance
in which $\omega_2$ librates
about either $+90^\circ$ or $-90^\circ$,
when $I \simeq \arccos \sqrt{1/5} \simeq 63^\circ$
(see also \citealt{tremaine14}). Eccentricity variations are
stronger inside this ``inverse Kozai'' or $\omega_2$ resonance
than outside. \citet{thomas96}
also assumed the solar system giant planets to be on circular
coplanar orbits, dispensing with a multipole expansion and numerically
computing secular Hamiltonian level curves for an exterior
test particle. For $a_2 = 45$ AU, just outside Neptune's orbit,
resonances appear at high $I$ and $e_2$ that are
centered on $\omega_2 = 0^\circ$ and $180^\circ$.
\citet{naoz17} also studied the inverse
problem, expanding the potential to octopolar order and considering non-zero
$e_1$. Orbit flipping was found to be possible via a quadrupole-level
resonance
that exists only when $e_1 \neq 0$ and
for which $\Omega_2-\varpi_1$ librates about either $+90^\circ$ or $-90^\circ$.
Here $\Omega$ and $\varpi$ are the longitudes of ascending node
and of periastron, respectively.
As $e_1$ increases, the minimum $I$ at which orbits can flip decreases.
All of this inclination behavior obtains at the quadrupole level;
the $\Omega_2-\varpi_1$ resonance was also uncovered by
\citet{verrier09} and studied analytically by \citet{farago10}.
Octopole terms were shown by \citet{naoz17}
to enable test particles to alternate between
one libration center ($\Omega_2-\varpi_1 = +90^\circ$) and another
($\Omega_2-\varpi_1 = -90^\circ$), modulating the inclination evolution
and introducing chaos, particularly at high $e_1$.
In this paper we explore more systematically
the inverse problem, expanding the perturbation potential
to hexadecapolar order
and considering non-zero $e_1$. In addition to studying more closely
the hexadecapolar inverse Kozai resonance found by \citet{gallardo12}
and how it alters when $e_1$ increases, we will uncover
strong, octopolar resonances not identified by \citet{naoz17}.
By comparison with the latter work, we focus more on the test
particle's eccentricity variations than on its inclination variations.
We are interested, for example, in identifying dynamical channels
that can connect planetary systems with more distant reservoirs
of minor bodies, e.g., the ``extended scattered'' or ``detached''
Kuiper belt (e.g., \citealt{sheppard16}), or the Oort cloud (e.g.,
\citealt{silsbee16}). Another application is to extrasolar debris disks,
some of whose scattered light morphologies appear sculpted by
eccentric perturbers (e.g., \citealt{lee16}).
We seek to extend such models to large mutual inclinations
(e.g., \citealt{verrier08}; \citealt{pearce14}; \citealt{nesvold16}; \citealt{zanardi17}).
This paper is organized as follows. In Section \ref{sec:disturb}
we write down the secular disturbing function of an interior
perturber to hexadecapolar order in $\alpha = a_1/a_2$. There we
also fix certain parameters (masses and semimajor axes; e.g., $\alpha = 0.2$)
for our subsequent numerical survey.
Results for a circular perturber are given in Section \ref{sec:circular}
and for an eccentric perturber in Section \ref{sec:eccentric},
with representative secular integrations tested
against $N$-body integrations. We wrap up in Section \ref{sec:conclude}.
\section{Secular Disturbing Function for Exterior Test Particle}
\label{sec:disturb}
The disturbing function of
\citet[][hereafter Y03]{yokoyama03} is
for the conventional problem: an interior test particle
(satellite of Jupiter)
perturbed by an exterior body (the Sun).
We may adapt their $R$ to our inverse problem
of an exterior test particle perturbed by
an interior body by a suitable reassignment of variables.
This reassignment is straightforward because the disturbing function
is proportional to $1/\Delta$, where $\Delta$ is the distance between
the test particle and the perturber (Y03's equation 2, with the indirect
term omitted because that term vanishes after secular averaging).
This distance $\Delta$ is obviously the same between the conventional and inverse
problems, and so the Legendre polynomial expansion of $1/\Delta$
performed by Y03 for their problem holds just as well for ours.
The change of variables begins with a simple replacement of subscripts.
We replace Y03's subscript $\odot$ (representing the Sun, their exterior
perturber) with ``2'' (our exterior test particle). For their unsubscripted
variables (describing their interior test particle),
we add a subscript ``1'' (for
our interior perturber). Thus we have:
\begin{align}
a &\rightarrow a_1 \label{eq:a1}\\
a_\odot &\rightarrow a_2 \\
e &\rightarrow e_1 \\
e_\odot &\rightarrow e_2 \, ,
\end{align}
where $a$ is the semimajor axis and $e$ is the eccentricity.
Since we are interested in the inverse problem
of an interior perturber, we replace their perturber mass $M_\odot$
with our perturber mass:
\begin{equation}
M_\odot \rightarrow m_1 \,.
\end{equation}
Their inclination $I$ is the mutual inclination between the interior
and exterior orbits; we leave it as is.
Mapping of the remaining angular variables requires more care.
Y03's equations (6) and (7) take the reference plane to coincide with
the orbit of the exterior body---this is the invariable plane
when the interior body is a test particle. We want the reference
plane to coincide instead
with the orbit of the interior body (the invariable plane
for our inverse problem). To convert to the new reference
plane, we use the relation
\begin{equation}
\Omega_1 - \Omega_2 = \pi
\end{equation}
for longitude of ascending node $\Omega$,
valid whenever the reference plane is the invariable plane
for arbitrary masses 1 and 2
(the vector pole of the invariable plane is co-planar
with the orbit normals of bodies 1 and 2, and lies between them).
We therefore map Y03's $\Omega$ to
\begin{equation} \label{eq:Omega}
\Omega \rightarrow \Omega_1 \rightarrow \Omega_2 + \pi \,,
\end{equation}
and their argument of periastron $\omega$ to
\begin{equation} \label{eq:omega}
\omega \equiv (\varpi - \Omega) \rightarrow (\varpi_1 - \Omega_1) \rightarrow (\varpi_1 -
\pi - \Omega_2)
\end{equation}
where $\varpi$ is the longitude of periastron.
Although $\varpi_1$ remains meaningful in our new reference plane,
$\Omega_1$ and $\omega_1$ are no longer meaningful, and are
swapped out using (\ref{eq:Omega}) and (\ref{eq:omega}).
Finally
\begin{equation} \label{eq:varpi2}
\varpi_\odot \rightarrow \varpi_2 \,.
\end{equation}
Armed with (\ref{eq:a1})--(\ref{eq:varpi2}), we
re-write equations (6)--(8) of
Y03 to establish the secular
disturbing function $R$ for an exterior test particle
perturbed by an interior body of mass $m_1$, expanded to hexadecapole
order:
\begin{align}
b_1 &= -(5/2)e_1 - (15/8)e_1^3 \\
b_2 &= -(35/8)e_1^3 \\
c_1 &= Gm_1a_1^2 \frac{1}{a_2^3 (1-e_2^2)^{3/2}} \\
c_2 &= Gm_1a_1^3 \frac{e_2}{a_2^4 (1-e_2^2)^{5/2}} \\
c_3 &= Gm_1a_1^4 \frac{1}{a_2^5 (1-e_2^2)^{7/2}} \\
d_1^\ast &= 1 + (15/8)e_1^2 + (45/64)e_1^4 \label{eq:d1star} \\
d_2 &= (21/8) e_1^2 (2 + e_1^2) \\
d_3 & = (63/8)e_1^4 \\
cI &\equiv \cos I \\
sI &\equiv \sin I \\
R_2 &= \frac{1}{8} \left(1 + \frac{3}{2}e_1^2\right) (3 cI^2 - 1) +
\frac{15}{16} e_1^2 sI^2 \cos 2(\Omega_2 - \varpi_1) \label{eq:quad}\\
R_3 &= \frac{1}{64} \left[
\left(-3 + 33 cI + 15cI^2 - 45cI^3\right) b_1
\cos (\varpi_2 +\varpi_1 - 2\Omega_2) \right. \nonumber \\
&+ \left(-3 - 33 cI + 15cI^2 + 45cI^3\right) b_1
\cos (\varpi_2 - \varpi_1) \nonumber \\
&+ \left(15 - 15cI -15cI^2 + 15cI^3\right) b_2
\cos (\varpi_2 +3 \varpi_1 - 4\Omega_2) \nonumber \\
&+ \left. \left(15 + 15cI -15cI^2 - 15cI^3\right) b_2
\cos (\varpi_2 - 3 \varpi_1 + 2\Omega_2)
\right] \label{eq:oct}\\
R_4 &= \frac{3}{16} \left( 2 + 3 e_2^2 \right) d_1^\ast - \frac{495}{1024}
e_2^2 - \frac{135}{256} cI^2 - \frac{165}{512} \nonumber \\
&+ \frac{315}{512}
cI^4 + \frac{945}{1024} cI^4 e_2^2 - \frac{405}{512} cI^2e_2^2
\nonumber \\
&+ \left( \frac{105}{512} cI^4 + \frac{315}{1024} e_2^2 -
\frac{105}{256} cI^2 + \frac{105}{512} - \frac{315}{512}
cI^2e_2^2 \right .\nonumber \\
&\left. + \frac{315}{1024} cI^4 e_2^2 \right) d_3 \cos 4(\varpi_1
- \Omega_2) \nonumber \\
&+ \frac{105}{512} \left( cI^3 - cI - \frac{1}{2} cI^4 +
\frac{1}{2} \right) d_3 e_2^2
\cos (4\varpi_1 +2 \varpi_2 - 6\Omega_2)
\nonumber \\
&+ \frac{105}{512} \left( -cI^3 + cI - \frac{1}{2} cI^4 +
\frac{1}{2} \right) d_3 e_2^2
\cos (4\varpi_1 -2 \varpi_2 - 2\Omega_2)
\nonumber \\
&+ \left( \frac{45}{64}cI^2 - \frac{45}{512} - \frac{315}{512}cI^4
\right) e_2^2
\cos (2\varpi_2 - 2\Omega_2) \nonumber \\
&+ \left( \frac{15}{16} cI^2 + \frac{45}{32} cI^2 e_2^2 -
\frac{45}{256} e_2^2 - \frac{15}{128} - \frac{315}{256} cI^4 e_2^2 \right.
\nonumber \\
&\left. - \frac{105}{128} cI^4 \right) d_2 \cos (2 \varpi_1 - 2 \Omega_2) \nonumber \\
&+ \left( \frac{15}{256} - \frac{45}{128} cI^2 + \frac{75}{256} cI
+ \frac{105}{256} cI^4 \right. \nonumber \\
&- \left. \frac{105}{256} cI^3 \right) d_2 e_2^2
\cos (2\varpi_1 + 2 \varpi_2 - 4\Omega_2) \nonumber \\
&+ \left( \frac{15}{256} - \frac{45}{128} cI^2 - \frac{75}{256} cI
+ \frac{105}{256} cI^4 \right. \nonumber \\
&+ \left. \frac{105}{256} cI^3 \right) d_2 e_2^2
\cos (2\varpi_1 - 2 \varpi_2) \label{eq:hex}\\
R &= R_2 c_1 + R_3 c_2 + R_4 c_3 \,.
\end{align}
A few notes: ($i$) this disturbing function
includes only the quadrupole ($R_2c_1$), octopole
($R_3c_2$), and hexadecapole ($R_4c_3$) terms;
the monopole term has been dropped (it is equivalent
to adding $m_1$ to the central mass $m_0$), as has the dipole term
which orbit-averages to zero;
($ii$) there are typos in equation (6) of Y03:
their $c_i$'s are missing factors of $M_\odot$ ($\rightarrow m_1$);
($iii$) we have starred $d_1^\ast$
in equation (\ref{eq:d1star}) to highlight that this term
as printed in Y03 is in error, as brought to our
attention by Matija \'Cuk, who also provided the correction.
We have verified this correction independently by
computing the hexadecapole disturbing function
in the limit $I=0$ and $e_2 = 0$.
We insert the disturbing function $R$ into Lagrange's planetary
equations for $\dot{e}_2$, $\dot{\Omega}_2$, $\dot{\varpi}_2$, and $\dot{I}$
(equations 6.146, 6.148, 6.149, and 6.150 of \citealt{murray00}).
These coupled ordinary differential equations are
solved numerically using a Runge-Kutta-Dormand-Prince method with
stepsize control and dense output ({\tt runge\_kutta\_dopri5} in {\tt C++}).
\subsection{Fixed Parameters} \label{sec:fixed}
The number of parameters is daunting,
even for the restricted, secular three-body problem considered here.
Throughout this paper, we fix the following parameters:
\begin{align}
a_1 &= 20 \, {\rm AU} \\
a_2 &= 100 \, {\rm AU} \\
m_0 &= 1 M_\odot \\
m_1 &= 0.001\, m_0 \,.
\end{align}
The ratio of orbital semimajor axes is fixed at
$\alpha \equiv a_1/a_2 = 0.2$, the largest value we thought
might still be amenable to a truncated expansion in $\alpha$
of the disturbing function (the smaller $\alpha$ is,
the better the agreement with $N$-body integrations,
as we have verified explicitly; see also
Section \ref{sec:nbody}). Many of our results---the existence of
secular resonances, and the amplitudes
of eccentricity and inclination variations---fortunately do not depend
critically on $\alpha$; for a different $\alpha$, we
can obtain the same qualitative results by
adjusting initial eccentricities
(see, e.g., Section \ref{sec:circular}).
The above parameter choices do directly determine the timescales
of the test particle's evolution, which should (and mostly do)
fall within the Gyr ages of actual planetary systems
(see Section \ref{sec:prectime}).
Our parameters are those of a
distant Jupiter-mass planet (like 51 Eri b; \citealt{macintosh15})
perturbing an exterior collection of minor bodies (like the Kuiper belt).
With no loss of generality, we align the apsidal line
of the perturber's orbit with the $x$-axis:
\begin{equation}
\varpi_1 = 0^\circ \,.
\end{equation}
The remaining variables of the problem are $e_1$, $e_2$, $I$, $\omega_2$,
and $\Omega_2$. Often in lieu of $e_2$ we will plot the
periastron distance $q_2 = a_2 (1-e_2)$ to see how close the test particle
comes to the perturber. Once the orbits cross or nearly cross
(i.e., once $q_2 \lesssim a_1 (1+e_1) = 20$--35 AU),
our secular equations break down
and the subsequent evolution cannot be trusted.
Nevertheless we will sometimes show orbit-crossing trajectories
just to demonstrate that channels exist whereby test particle
periastra can be lowered from large distances to
near the perturber (if not conversely).
To the extent that $R$ is dominated by
the first quadrupole term in (\ref{eq:quad})
proportional to $(3 \cos^2 I - 1)$,
more positive $R$ corresponds to more co-planar orbits (i.e.,
wires 1 and 2 closer together).
The numerical values for $R$ quoted below
have been scaled to avoid large unwieldy numbers;
they should be multiplied by $3.55 \times 10^6$
to bring them into cgs units.
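As a quick check, and under our assumption that this scale factor is the quadrupole coefficient $Gm_1a_1^2/a_2^3 = c_1(1-e_2^2)^{3/2}$ evaluated for the fixed parameters above, the quoted number is recovered directly:
\begin{verbatim}
G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13    # cgs
m1, a1, a2 = 1.0e-3*Msun, 20.0*AU, 100.0*AU
print(G*m1*a1**2/a2**3)                       # ~3.55e6 cm^2 s^-2
\end{verbatim}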
\section{Circular Perturber} \label{sec:circular}
When $e_1=0$, the octopole contribution to the potential vanishes, but the
quadrupole and hexadecapole contributions do not.
Because the potential
for a circular perturber is axisymmetric, the $z$-component of the test
particle's angular momentum is conserved (we omit the subscript 2 on
$J_z$ for convenience):
\begin{equation} \label{eq:jz}
J_{z} \equiv \sqrt{1-e_2^2} \cos I
\end{equation}
where we have dropped the dependence of the angular momentum on $a_2$,
since semimajor axes never change in secular dynamics.
We therefore have two constants of the motion: $J_{z}$ and the disturbing
function itself (read: Hamiltonian), $R$.
Smaller $J_{z}$ corresponds to more highly inclined and/or more eccentric
test particle orbits.
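The mapping between $q_2$ and $I$ on a surface of constant $J_z$ follows directly from equation (\ref{eq:jz}); a minimal Python sketch (our production integrations use {\tt C++}, as noted in Section \ref{sec:disturb}):
\begin{verbatim}
import numpy as np

def inclination_deg(q2, Jz, a2=100.0):
    """I on a constant-Jz surface (e1 = 0): Jz = sqrt(1 - e2^2) cos I."""
    e2 = 1.0 - q2/a2
    return np.degrees(np.arccos(Jz/np.sqrt(1.0 - e2**2)))

print(inclination_deg(87.5, 0.45))    # ~63 deg
\end{verbatim}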
\subsection{The Inverse Kozai ($\omega_2$) Resonance} \label{sec:ikr}
Figure \ref{fig1} gives a quick survey of the test particle dynamics
for $e_1 = 0$.
For a restricted range in $J_z \simeq $ 0.40--0.45,
the test particle's argument of periastron $\omega_2$
librates about either $90^\circ$ or 270$^\circ$,
with concomitant oscillations
in $q_2$ (equivalently $e_2$) and $I$. This is the
analogue of the conventional Kozai resonance, exhibited
here by an exterior test particle;
we refer to it as the ``inverse Kozai'' resonance
or the $\omega_2$ resonance. The inverse Kozai resonance
appears only at hexadecapole order; it originates from the term
in (\ref{eq:hex}) proportional to
$e_2^2 \cos (2\varpi_2 - 2\Omega_2) = e_2^2 \cos 2\omega_2$.\footnote{\citet{naoz17} refer to their octopole-level treatment as exploring the ``eccentric Kozai-Lidov mechanism for an outer test particle.'' Our terminology here differs; we consider the analogue of the Kozai-Lidov resonance the $\omega_2$ resonance, which appears only at hexadecapole order, not the $\Omega_2-\varpi_1$
resonance that they highlight.}
The inverse Kozai resonance appears near
\begin{equation}
I (\omega_2{\rm-res})
= \arccos (\pm \sqrt{1/5}) \simeq 63^\circ \, {\rm and} \, 117^\circ
\end{equation}
which, by Lagrange's planetary equations and (\ref{eq:quad}),
are the special inclinations
at which the quadrupole precession rate
\begin{equation}
\left. \frac{d\omega_2}{dt} \right|_{{\rm quad},e_1=0} = \frac{3}{8} \frac{m_1}{m_0} \left( \frac{a_1}{a_2} \right)^2 \frac{n_2}{(1-e_2^2)^2} \left( 5 \cos^2 I - 1 \right) \label{eq:domegadt}
\end{equation}
vanishes, where $n_2$ is the test particle mean motion;
see Gallardo et al.~(\citeyear{gallardo12}, their equation 11 and
subsequent discussion). At $I = I (\omega_2{\rm-res})$, fixed points
appear at $\omega_2 = 90^\circ$ and $\omega_2 = 270^\circ$.
The critical angles $63^\circ$ and $117^\circ$ are related
to their well-known Kozai counterparts
of $39^\circ$ and $141^\circ$ (i.e., $\arccos (\pm \sqrt{3/5})$),
but the correspondence is not exact. In the conventional
problem, the inclinations at which the fixed points
($\omega_1 = \pm 90^\circ$, $\dot{\omega}_1=0$) appear vary
from case to case; they are given by
$I = \arccos \left[ \pm \sqrt{(3/5) (1-e_1^2)} \right] = \arccos \left[ \pm (3/5)^{1/4} |J_{1z}|^{1/2} \right]$,
where $J_{1z} \equiv \sqrt{1-e_1^2} \cos I$ is conserved
at quadrupole order (e.g., \citealt{lithwick11}).
But for our inverse problem, the fixed points ($\omega_2 = \pm 90^\circ$,
$\dot{\omega}_2 = 0$) appear at fixed inclinations $I$ of $63^\circ$
and $117^\circ$ that are independent of $J_z$ (for $e_1 = 0$).
In this sense, the inverse Kozai resonance is ``less flexible''
than the conventional Kozai resonance.
\begin{figure*}
\includegraphics[width=\textwidth]{Figure1.eps}
\caption{Periastron distances $q_2$ vs.~arguments of periastron $\omega_2$, for $e_1 = 0$. Because the potential presented by a circular perturber is axisymmetric,
$J_z = \sqrt{1-e_2^2} \cos I$ is conserved; trajectories in a given panel
have $J_z$ as annotated. For given $J_z$, the
inclination $I$ increases monotonically but not linearly
with $q_2$; the maximum and minimum inclinations
for the trajectories plotted are labeled on the right of each panel.
In a narrow range of $J_z = 0.40$--0.45 (for our chosen
$\alpha = a_1/a_2 = 0.2$), the inverse Kozai (a.k.a.~$\omega_2$)
resonance appears, near $I \simeq 63^\circ$.
Near $J_z = 0.40$, the inverse Kozai resonance can force the test particle to cross orbits with the perturber.}
\label{fig1}
\end{figure*}
The $\omega_2$ resonance exists only in a narrow range
of $J_z$ that is
specific to a given $\alpha = a_1/a_2$, as we have determined
by numerical experimentation.
Outside this range,
$\omega_2$ circulates and $e_2$ and $I$ hardly vary (Figure \ref{fig1}).
Fine-tuning $J_z$ can produce large
resonant oscillation amplitudes in $e_2$ and $I$;
some of these trajectories lead to orbit crossing
with the perturber, as seen in the panel for $J_z = 0.400$
in Figure \ref{fig1}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure2.eps}
\caption{Time evolution within the inverse Kozai resonance.
The trajectory chosen is the one in Figure \ref{fig1} with
$J_z = 0.45$ and the largest libration amplitude.
The nodal ($\Omega_2$) precession arises from the
quadrupole potential and is therefore on the order of
$1/\alpha^2 \sim 25$ times faster than the libration
timescale for $\omega_2$, which is determined
by the hexadecapole potential.
Initial conditions:
$\varpi_2 = 90^\circ$,
$\Omega_2 = 0^\circ$,
$q_2 = 87.5$ AU, $I = 63^\circ$.
Overplotted in red dashed lines are the results of an $N$-body integration
with identical initial conditions (and initial true anomalies
$f_1 = 0^\circ$ and $f_2 = 180^\circ$) inputted as Jacobi coordinates.
The $N$-body integration is carried out
using \texttt{WHFast}, part of the \texttt{REBOUND} package
(\citealt{rein15}; \citealt{wisdom91}).}
\label{fig:gallardo_time}
\end{figure}
\subsubsection{Precession Timescales} \label{sec:prectime}
To supplement (\ref{eq:domegadt}),
we list here for ease of reference the remaining equations of motion
of the test particle, all to leading order,
as derived by \citet{gallardo12}
for the case $e_1 = 0$.
We have verified that the disturbing function
we have derived in Section \ref{sec:disturb} yields identical expressions:
\begin{align}
\left. \frac{de_2}{dt} \right|_{{\rm hex}, e_1=0} &= +\frac{45}{512} \frac{m_1}{m_0} \left( \frac{a_1}{a_2} \right)^4 \frac{ e_2 n_2 }{(1-e_2^2)^3} \nonumber \\
& \times \left( 5 + 7 \cos 2I \right) \sin^2 I \sin 2\omega_2\\
\left. \frac{dI}{dt} \right|_{{\rm hex}, e_1 = 0} &= -\frac{45}{1024} \frac{m_1}{m_0} \left( \frac{a_1}{a_2} \right)^4 \frac{ e_2^2 n_2 }{(1-e_2^2)^4} \nonumber \\
& \times \left( 5 + 7 \cos 2I \right) \sin 2I \sin 2\omega_2 \\
\left. \frac{d\Omega_2}{dt} \right|_{{\rm quad}, e_1 = 0} &= -\frac{3}{4} \frac{m_1}{m_0} \left( \frac{a_1}{a_2} \right)^2 \frac{ n_2 }{(1-e_2^2)^2} \cos I \label{eq:dOmegadt}\\
\left. \frac{d\varpi_2}{dt} \right|_{{\rm quad}, e_1 = 0} &= +\frac{3}{16} \frac{m_1}{m_0} \left( \frac{a_1}{a_2} \right)^2 \frac{ n_2 }{(1-e_2^2)^2} \left( 3 - 4 \cos I + 5 \cos 2I \right) \,. \label{eq:dvarpidt}
\end{align}
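These leading-order rates, together with equation (\ref{eq:domegadt}), can be integrated directly. A minimal Python sketch (standing in for our {\tt C++} {\tt runge\_kutta\_dopri5} integrations) is below; note that the full hexadecapole $R$ contributes additional terms to $\dot{\omega}_2$ that this truncation omits, so the sketch captures the fast precession but not the full $\omega_2$ libration of Figure \ref{fig:gallardo_time}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, alpha, a2 = 1.0e-3, 0.2, 100.0     # m1/m0, a1/a2, a2 in AU
n2 = 2.0*np.pi/np.sqrt(a2**3)           # rad/yr (m0 = 1 Msun, G = 4 pi^2)

def rhs(t, y):                          # y = [e2, I, Omega2, omega2] (rad)
    e2, I, Om2, w2 = y
    quad = eps*alpha**2*n2/(1.0 - e2**2)**2
    de2 = (45.0/512.0)*eps*alpha**4*n2*e2/(1.0 - e2**2)**3 \
          *(5.0 + 7.0*np.cos(2.0*I))*np.sin(I)**2*np.sin(2.0*w2)
    dI = -(45.0/1024.0)*eps*alpha**4*n2*e2**2/(1.0 - e2**2)**4 \
          *(5.0 + 7.0*np.cos(2.0*I))*np.sin(2.0*I)*np.sin(2.0*w2)
    dOm = -(3.0/4.0)*quad*np.cos(I)
    dw = (3.0/8.0)*quad*(5.0*np.cos(I)**2 - 1.0)   # d(omega2)/dt, quad order
    return [de2, dI, dOm, dw]

y0 = [0.125, np.radians(63.0), 0.0, np.radians(90.0)]  # Figure 2's start
sol = solve_ivp(rhs, (0.0, 1.0e8), y0, method="RK45",
                rtol=1e-10, atol=1e-12, dense_output=True)
\end{verbatim}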
As equations (\ref{eq:dOmegadt}) and (\ref{eq:dvarpidt}) show,
the magnitudes of the precession rates
for $\Omega_2$ and $\varpi_2$ are typically similar
to within order-unity factors. We define a fiducial secular
precession period
\begin{equation} \label{eq:fiducial}
\left. t_{\rm prec} \right|_{{\rm quad}, e_1=0} \sim \frac{2\pi}{n_2} \frac{m_0}{m_1} \left( \frac{a_2}{a_1} \right)^2 \left( 1-e_2^2 \right)^2
\end{equation}
which reproduces the precession period for $\Omega_2$
seen in the sample evolution
of Figure \ref{fig:gallardo_time} to within a factor of 3.
The scaling factors in (\ref{eq:fiducial}) are more reliable than the
overall magnitude; the dependencies on $m_0$, $m_1$, $a_1$, and $a_2$
can be used to scale the time coordinate
of one numerical computation to another.
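For the fixed parameters of Section \ref{sec:fixed}, equation (\ref{eq:fiducial}) evaluates (before the $(1-e_2^2)^2$ factor) to
\begin{verbatim}
import numpy as np
P2 = np.sqrt(100.0**3)                 # = 1000 yr (a2 = 100 AU, m0 = 1 Msun)
t_prec = P2*1.0e3*(100.0/20.0)**2      # x (m0/m1)(a2/a1)^2 ~ 2.5e7 yr
\end{verbatim}
comfortably shorter than the Gyr ages of actual planetary systems.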
Figure \ref{fig:gallardo_time} is made for a particle in the inverse
Kozai resonance; note how the oscillation periods for $\omega_2$, and
by extension $I$ and $q_2$, are each a few dozen times longer
than the nodal precession period. This is expected since for the
inverse Kozai resonance, $d\omega_2/dt$ vanishes at quadrupole order,
leaving the hexadecapole contribution, which is smaller by
$\sim$$(a_1/a_2)^2 = 1/25$, dominant.
As shown in Figure \ref{fig:gallardo_time},
the secular trajectory within the $\omega_2$ resonance
is confirmed qualitatively by the $N$-body
symplectic integrator \texttt{WHFast} (\citealt{rein15}; \citealt{wisdom91}), part of the \texttt{REBOUND} package (version 3.5.8; \citealt{rein12}). A timestep of 0.25 yr was used (0.28\% of the orbital
period of the interior perturber) for the $N$-body integration shown;
it took less than 3 wall-clock hours to complete the 5 Gyr integration using
a 2.2 GHz Intel Core i7 processor on a 2015 MacBook Air laptop.
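For reference, a minimal setup consistent with the initial conditions of Figure \ref{fig:gallardo_time} might look as follows; the call signatures are those of the {\tt REBOUND} Python interface (version 3.x), and the output cadence is an arbitrary choice of ours:
\begin{verbatim}
import numpy as np
import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.integrator = "whfast"
sim.dt = 0.25                                  # yr; 0.28% of perturber period

sim.add(m=1.0)                                 # central star
sim.add(m=1e-3, a=20.0, e=0.0, f=0.0)          # interior perturber, e1 = 0
sim.add(m=0.0, a=100.0, e=0.125,               # q2 = 87.5 AU
        inc=np.radians(63.0), Omega=0.0,
        omega=np.radians(90.0), f=np.pi)       # elements are Jacobi by default

for t in np.linspace(0.0, 5.0e9, 2001):
    sim.integrate(t)
    orb = sim.calculate_orbits()[1]            # test particle's Jacobi orbit
    # record orb.e, orb.inc, orb.Omega, orb.omega ...
\end{verbatim}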
\subsubsection{Inverse Kozai vs.~Kozai} \label{sec:ikk}
In the top panel of Figure \ref{fig:inverse_Kozai_curves}, we show
analogues to the ``Kozai curves''
made by \citet{lithwick11} for the conventional problem.
This top panel delineates the allowed values of test particle
eccentricity and inclination for given $J_z$ and $R$ when $e_1 = 0$.
Contrast these ``inverse Kozai curves'' with the Kozai
curves calculated by \citet{lithwick11} in their Figure 2 (left panel):
for the inverse problem, the range of allowed
eccentricities and inclinations is much
more restricted (at fixed $J_z$ and $R$)
than for the conventional problem.
For the inverse problem when $e_1 = 0$,
$e_2$ and $I$ are strictly constant at quadrupole order;
variations in $e_2$ and $I$ for the case of a circular perturber
are possible starting only at hexadecapole order,
via the small inverse Kozai resonant term
in $R$ proportional to $e_2^2 \cos 2\omega_2$ (variations in $\omega_2$
directly drive the variations in $e_2$ and $I$
when $e_1 = 0$).
By comparison, in the conventional problem, variations in test particle
eccentricity and inclination are possible even at quadrupole order, and large.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure3.eps}
\caption{
Inclination $I$ vs.~eccentricity $e_2$ for a constant disturbing function
$R = -0.05032$ (see Section \ref{sec:fixed} for the units of $R$).
When $e_1 = 0$, $J_z$ is an additional constant of the motion;
the resultant ``inverse Kozai curves'' (top panel) for our external test
particle are analogous to the conventional ``Kozai curves'' shown in
Figure 2 of \citet{lithwick11} for an internal test particle.
Compared to the conventional case, the ranges of $I$ and $e_2$ in the
inverse case are much more restricted; what variation there is
is only possible because of the $e_2^2 \cos (2\omega_2)$ term
that appears at hexadecapolar order (see equation \ref{eq:hex}).
As $e_1$ increases above zero (middle and bottom panels),
$J_z$ varies more and variations in $I$ grow larger.
In each of the middle and bottom panels, points are generated
by integrating the equations of motion for six sets of initial
conditions specified in Table \ref{tab:inverse_Kozai_curves}.}
\label{fig:inverse_Kozai_curves}
\end{figure}
This key difference between the conventional and inverse
problems stems from the difference between the interior
and exterior expansions of the $1/\Delta$ potential.
The conventional interior expansion involves the sum
$P_\ell r_{\rm test}^{\ell}$,
where $P_\ell$ is the Legendre polynomial of order $\ell$
and $r_{\rm test}$ is the radial position of the test particle.
The inverse exterior expansion
involves a qualitatively different sum,
$P_\ell r_{\rm test}^{-(\ell+1)}$.
The time-averaged potentials in
the conventional and inverse problems therefore involve
different integrals;
what averages to a term proportional to
$\cos(2\omega_{\rm test})$
in the conventional quadrupole problem averages
instead in the inverse problem to a constant,
independent of the test particle's argument of periastron
$\omega_{\rm test}$ (we have verified this last statement by evaluating
these integrals).
An interior multipole moment of order $\ell$
is not the same as an exterior multipole moment of the same order.
We could say that the inverse exterior potential looks ``more Keplerian''
insofar as its monopole term scales as $1/r_{\rm test}$.
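For reference, the relevant time averages over one orbit (with $f$ the true anomaly) are
\begin{align}
\left\langle (r/a)^2 \right\rangle &= 1 + \frac{3}{2}e^2\,, &
\left\langle (r/a)^2 \cos 2(f+\omega) \right\rangle &= \frac{5}{2}\,e^2 \cos 2\omega\,, \nonumber \\
\left\langle (a/r)^3 \right\rangle &= (1-e^2)^{-3/2}\,, &
\left\langle (a/r)^3 \cos 2(f+\omega) \right\rangle &= 0 \,; \nonumber
\end{align}
the first pair enters the conventional (interior) quadrupole average and retains the $\cos 2\omega$ dependence, while the second pair enters the inverse (exterior) average, whose $\omega$-dependent piece vanishes.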
\section{Eccentric Perturber} \label{sec:eccentric}
When $e_1 \neq 0$, all orders (quadrupole, octopole, and hexadecapole)
contribute to the potential seen by the test particle.
The potential is no longer axisymmetric, and so $J_z$ is no longer
conserved. This opens the door to orbit ``flipping'', i.e.,
a prograde ($I < 90^\circ$) orbit can switch to being retrograde
($I > 90^\circ$) and vice versa (e.g., \citealt{naoz17}).
There is only one constant of the motion, $R$.
\subsection{First Survey}
Whereas when $e_1=0$ the evolution did not depend on
$\Omega_2$, it does when $e_1 \neq 0$. For our first foray
into this large multi-dimensional phase space, we divided up
initial conditions as follows. For
each of four values of $e_1 \in \{0.03, 0.1, 0.3, 0.7\}$,
we scanned systematically through
different initial values of $q_{2,{\rm init}}$
(equivalently $e_{2,{\rm init}}$)
ranging between $a_2 = 100$ AU and $a_1 = 20$ AU.
For each $q_{2,{\rm init}}$, we assigned $I_{\rm init}$ according
to one of three values of $J_{z,{\rm init}} \equiv \sqrt{1-e_{2,{\rm init}}^2} \cos I_{\rm init} \in \{0.8, 0.45, 0.2\}$,
representing ``low'', ``intermediate'', and ``high'' inclination
cases, broadly speaking.
Having set $e_1$, $q_{2,{\rm init}} (e_{2,{\rm init}})$,
and $J_{z,{\rm init}} (I_{\rm init})$, we cycled through five values of
$\varpi_{2,{\rm init}} \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ, 180^\circ\}$
and three values of $\omega_{2,{\rm init}} \in \{0^\circ,90^\circ,270^\circ\}$.
We studied all integrations from this large ensemble, adding more
with slightly different initial conditions as our curiosity led us.
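A sketch of how such an ensemble can be generated is given below; the grid spacing in $q_{2,{\rm init}}$ and the handling of $\Omega_{2,{\rm init}}$, which the enumeration above leaves unspecified, are illustrative choices, not a record of our actual survey script:
\begin{verbatim}
import numpy as np
from itertools import product

e1_vals   = [0.03, 0.1, 0.3, 0.7]
Jz_vals   = [0.8, 0.45, 0.2]                  # low / intermediate / high I
q2_vals   = np.linspace(20.0, 100.0, 17)      # AU; spacing is an assumption
pom2_vals = np.radians([0., 45., 90., 135., 180.])
w2_vals   = np.radians([0., 90., 270.])

runs = []
for e1, q2, Jz, pom2, w2 in product(e1_vals, q2_vals, Jz_vals,
                                    pom2_vals, w2_vals):
    e2 = 1.0 - q2/100.0
    cI = Jz/np.sqrt(1.0 - e2**2)
    if abs(cI) <= 1.0:                        # skip unphysical (Jz, e2) pairs
        runs.append(dict(e1=e1, e2=e2, I=np.arccos(cI),
                         varpi2=pom2, omega2=w2, Omega2=pom2 - w2))
\end{verbatim}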
In what follows, we present a subset of the results from this first
survey, selecting those we thought representative or interesting.
Later, in Section \ref{sec:sos}, we will provide a second and
more thorough survey using surfaces of section.
A few sample integrations from both surveys will be tested
against $N$-body calculations in Section \ref{sec:nbody} (see also
Figure \ref{fig:gallardo_time}).
\subsubsection{Low Perturber Eccentricity $e_1 \leq 0.1$}
Comparison of Figure \ref{fig:different_jz} with Figure \ref{fig1} shows that at low
perturber eccentricity, $e_1 \lesssim 0.1$, the test particle
does not much change its behavior from when $e_1 = 0$
(for a counter-example, see Figure \ref{fig:flip}).
The same inverse Kozai resonance appears for $J_{z,{\rm init}} = 0.45$
and $e_1 = 0.1$ as it does for $e_1 = 0$. The maximum libration
amplitude of the resonance is somewhat higher at the larger $e_1$.
The trajectories
shown in Figure \ref{fig:different_jz} are for
$\varpi_{2,{\rm init}} = 0^\circ$,
but qualitatively similar results obtain for other choices
of $\varpi_{2,{\rm init}}$.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure4.eps}
\caption{Analogous to Figure \ref{fig1},
but now for a mildly eccentric perturber ($e_1 = 0.1$).
Because $e_1 \neq 0$, $J_z$ is not conserved and cannot be used
to connect $q_2$ and $I$ uniquely; we have to plot
$q_2$ and $I$ in separate panels.
Nevertheless, $e_1$ is still small enough that $J_z$ is approximately
conserved; $q_2$ and $I$ still roughly follow one another
for a given $J_{z,{\rm init}}$, i.e., the family of trajectories
proceeding from lowest $I$ (marked by vertical bars) to highest $I$
corresponds to the same family of trajectories proceeding from lowest
$q_2$ (marked by vertical bars) to highest $q_2$.
The $\omega_2$ resonance can still
be seen near $I \simeq 63^\circ$,
in the center panels for $J_{z,{\rm init}} = 0.45$.
All the non-resonant trajectories are
initialized with $\varpi_2 = 0^\circ$ and $\omega_2 = 0^\circ$.
For the four resonant
trajectories, the initial $\varpi_2 = 0^\circ$ and
$\omega_2 = \pm 90^\circ$.
}
\label{fig:different_jz}
\end{figure}
The middle panel of Figure \ref{fig:inverse_Kozai_curves}
elaborates on this result,
showing that even though $J_{z,{\rm init}}$
is not strictly conserved when $e_1\neq 0$, it can be approximately
conserved (again, see the later Figure \ref{fig:flip}
for a counter-example).
Test particles explore
more of $e_2$-$I$ space when $e_1 = 0.1$ than when $e_1 = 0$,
but they still largely
respect (for the specific $R$ of Figure \ref{fig:inverse_Kozai_curves})
the constraints imposed by $J_z$ when $e_1 = 0$.
This statement also holds at $e_1 = 0.3$ (lower panel), but to a lesser
extent.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure5.eps}
\caption{
Comparing quadrupole (quad), octopole (oct), and
hexadecapole (hex) evolutions for $e_1 = 0.1$ and $\varpi_1 = 0^\circ$
and the same test particle initial
conditions ($e_2 = 0.2$, $I = 62.66^\circ$,
$\varpi_2 = 0^\circ$, and $\omega_2 = 90^\circ$).
The hex panel features a second set of initial conditions
identical to the first except that $\omega_2 = 270^\circ$;
the two hex trajectories map to the same quad and oct trajectories as shown.
The inverse Kozai resonance, featuring libration of
$\omega_2$ about $\pm 90^\circ$ and $\sim$20\%
variations in $e_2$, appears only in a
hex-level treatment.
}
\label{fig:gallardo_hex_v_oct}
\end{figure}
Figure \ref{fig:gallardo_hex_v_oct}
illustrates how the hexadecapole (hex) potential---specifically
the inverse Kozai resonance---can qualitatively change
the test particle dynamics at octopole (oct) order.
Only at hex order is the $\omega_2$ resonance evident.
Compared to the oct level dynamics, the periastron distance
$q_2$ varies more strongly, hitting its maximum and minimum values
at $\omega_2 = 90^\circ$ or $270^\circ$
(instead of at $0^\circ$ and $180^\circ$,
as an oct treatment would imply).
Orbit flipping becomes possible when $e_1 \neq 0$, for sufficiently
large $I$ or $e_2$ \citep{naoz17}. Figure \ref{fig:flip} is analogous to Figure \ref{fig:inverse_Kozai_curves}
except that it is made for a more negative $R$, corresponding
to larger $I$ (insofar as $R$ is dominated by the quadrupole
term). For this $R = -0.1373$, as with the previous $R = -0.05032$,
$e_2$ and $I$
hardly vary when $e_1 = 0$ (Section \ref{sec:ikk}).
But when $e_1 = 0.1$, the constraints imposed by fixed $J_z$
come loose; Figure \ref{fig:flip} shows that a single particle's $J_z$
can vary dramatically from positive (prograde) to negative (retrograde)
values. As shown by Naoz et al.~(\citeyear{naoz17}, see their Figure 1),
such orbit flipping
is possible even at quadrupole order; flipping is not associated
with the $\omega_2$ resonance, but rather with librations of
$\Omega_2-\varpi_1$
about $90^\circ$ or $270^\circ$.
We verify the influence of this
$\Omega_2-\varpi_1$ resonance
in the middle panel of Figure \ref{fig:flip}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure6.eps}
\caption{Top panel: Inclination $I$ vs.~eccentricity $e_2$ for a fixed disturbing function $R = -0.1373$ (see Section \ref{sec:fixed} for the units of $R$).
Different colored points, corresponding
to different $J_z$ values as marked,
are for $e_1 = 0$, and are analogous to those shown in the top panel of
Figure \ref{fig:inverse_Kozai_curves}. The black points represent the
trajectory of a single test particle, integrated for $e_1 = 0.1$
and using the following initial conditions:
$e_2 = 0.3691$, $I = 85^\circ$, $\varpi_2 = 0^\circ$,
and $\Omega_2 = 0^{\circ}$.
When $e_1 \neq 0$, $J_z$ is no longer conserved, and $e_2$ and $I$
vary dramatically for this value of $R$; $J_z$ even changes sign
as the orbit flips.
The variation in $e_2$ is so large that eventually
the test particle crosses the orbit of the perturber ($e_2 > 0.8$),
at which point we terminate the trajectory.
Center panel: Inclination $I$ vs.~longitude of ascending node $\Omega_2$
(referenced to $\varpi_1$, the periapse longitude of the perturber)
for the
same
black trajectory shown in the top panel.
The two lobes of the $\Omega_2-\varpi_1$ resonance \citep{naoz17},
around which the
particle lingers, are visible.
Bottom panel: The same test particle trajectory shown in black
for the top and middle panels, now in $q_2$ vs.~$\omega_2$ space.
The evolution is evidently chaotic.
}
\label{fig:flip}
\end{figure}
\subsubsection{High Perturber Eccentricity $e_1 = 0.3, 0.7$}
We highlight a few comparisons between an oct level
treatment and a hex level treatment. We begin with Figure \ref{fig:no_diff}
which shows practically no difference. Many of the integrations
in our first survey showed no significant difference in going
from oct to hex. We also tested some of the cases
showcased in \citet{naoz17} and found that including the hex
dynamics did not substantively alter their evolution.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure7.eps}
\caption{Analogous to Figure \ref{fig:gallardo_hex_v_oct}, but for
$e_1 = 0.3$ and the following test particle initial
conditions: $e_2 = 0.2$,
$I = 62.66^\circ$,
$\Omega_2 = \pm 90^\circ$,
$\varpi_2 = 0^\circ$.
The two test particle
trajectories
overlap in $q_2$-$\omega_2$ space (top panels).
For these initial conditions,
the $\Omega_2-\varpi_1$ resonance \citep{naoz17} appears at all orders
quad through hex (bottom panels).
The oct and hex trajectories appear qualitatively
similar in all respects.
}
\label{fig:no_diff}
\end{figure}
Cases where the hex terms matter are shown in Figures \ref{fig:gallardo_stab}--\ref{fig:nudge_retro}.
The $\omega_2$ resonance, seen only at hex order, can stabilize
the motion; in Figure \ref{fig:gallardo_stab}, the $\omega_2$ resonance eliminates
the chaotic variations seen at the oct level in $q_2$ and $I$.
Even when the $\omega_2$ resonance is not active,
hex level terms can dampen eccentricity and inclination
variations (Figures \ref{fig:hex_damp1} and \ref{fig:hex_damp2}). But the hex terms do not necessarily
suppress; in Figure \ref{fig:nudge_retro} they are seen to nudge the test particle
from a prograde to a retrograde orbit,
across the separatrix of the $\Omega_2-\varpi_1$ resonance.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure8.eps}
\caption{Analogous to Figure \ref{fig:gallardo_hex_v_oct}, but for
$e_1 = 0.3$ and $\varpi_1 = 0^\circ$
and the following test particle initial conditions:
$e_2 = 0.55$, $I = 57.397^\circ$,
$\Omega_{2} = 45^\circ$ and $225^\circ$,
$\varpi_2 = 135^\circ$. The inverse Kozai ($\omega_2$) resonance
is visible in the hex panels only, with a
more widely varying inclination here for $e_1 = 0.3$ than
for $e_1 = 0$ (compare with Figure
\ref{fig:gallardo_hex_v_oct}).
The phase space available to the $\omega_2$ resonance shrinks
with increasing $e_1$; at $e_1 = 0.7$, we could not find
the resonance (see Figure \ref{fig:SOS_e7}).
Two test particle trajectories are displayed for the hex
panel; since they overlap at the quad and oct levels,
only one trajectory is shown for those panels
(the one for which the initial $\Omega_2 = 45^\circ$).}
\label{fig:gallardo_stab}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Figure9.eps}
\caption{Analogous to Figure \ref{fig:gallardo_hex_v_oct}, but for
$e_1 = 0.3$ and $\varpi_1 = 0^\circ$
and the following test particle initial conditions:
$e_2 = 0.15$, $I = 62.925^\circ$, $\Omega_{2} = 45^\circ$ and
$225^\circ$, $\varpi_2 = 135^\circ$.
Two test particle trajectories are displayed for the hex
panel; since they overlap at the quad and oct levels,
only one trajectory is shown for those panels
(the one for which the initial $\Omega_2 = 225^\circ$).
The hex potential suppresses the eccentricity variation
seen at the oct level, and removes the particle
from the separatrix of the $\Omega_2$ resonance,
bringing it onto one of two islands of libration.}
\label{fig:hex_damp1}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Figure10.eps}
\caption{Analogous to Figure \ref{fig:gallardo_hex_v_oct}, but for
$e_1 = 0.7$ and $\varpi_1 = 0^\circ$
and the following test particle initial conditions:
$e_2 = 0.525$, $I = 58.081^\circ$, $\Omega_{2} = 135^\circ$ and
$315^\circ$, $\varpi_2 = 135^\circ$.
As with Figures \ref{fig:gallardo_stab} and \ref{fig:hex_damp1},
the hex potential helps to stabilize the motion; here it
locks the particle to one of two librating islands of the
$\Omega_2$ resonance and prevents the orbit crossing
seen at the oct level.}
\label{fig:hex_damp2}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Figure11.eps}
\caption{Analogous to Figure \ref{fig:gallardo_hex_v_oct}, but for
$e_1 = 0.7$ and $\varpi_1 = 0^\circ$
and the following test particle initial conditions:
$e_2 = 0.1$, $I = 84.232^\circ$, $\Omega_{2} = 180^\circ$,
$\varpi_2 = 180^\circ$. Here the hex potential nudges the particle
from a circulating trajectory onto the separatrix of the $\Omega_2-\varpi_1$
resonance (contrast with Figures \ref{fig:hex_damp1} and
\ref{fig:hex_damp2}).}
\label{fig:nudge_retro}
\end{figure}
\subsection{Second Survey: Surfaces of Section} \label{sec:sos}
Surfaces of section (SOS's) afford a more global (if also more
abstract) view of the dynamics. By plotting the test particle's
position in phase space only when one of its coordinates
periodically equals some value, we thin its trajectory out,
enabling it to be compared more easily with
the trajectories of other test particles with different initial
conditions. In this lower dimensional projection, it is also possible
to identify resonances, and to distinguish chaotic from regular
trajectories.
Since we are particularly interested in seeing how $\omega_2$ and
its quasi-conjugate $e_2$ behave,
we section using $\Omega_2$, plotting the particle's
position in $q_2$-$\omega_2$ space and $I$-$\omega_2$ space
whenever $\Omega_2 = 180^\circ$ (with zero longitude defined
by $\varpi_1 = 0^\circ$),
regardless of the sign of $\dot{\Omega}_2$. A conventional
SOS would select for $\dot{\Omega}_2$ of a single sign, but
in practice there is no confusion;
prograde orbits all have $\dot{\Omega}_2 < 0$ (see
equation \ref{eq:dOmegadt}) while retrograde orbits have
$\dot{\Omega}_2 > 0$; we focus for simplicity on prograde
orbits and capture a few retrograde branches
at the smallest values of $R$ (see the rightmost panels of
Figures \ref{fig:SOS_e1}--\ref{fig:SOS_e7}).
We have verified in a few cases
that the trajectories so plotted trace the maximum and minimum values
of $q_2$ and $I$; our SOS's contain the bounding
envelopes of the trajectories.
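In practice the crossings can be located by interpolating between densely sampled outputs; a minimal sketch (all angles, including any angular fields to be interpolated, assumed unwrapped on input):
\begin{verbatim}
import numpy as np

def section(Om2, *fields, target=np.pi):
    """Interpolate fields to the times when Om2 crosses target (mod 2 pi)."""
    s = (Om2 - target)/(2.0*np.pi)     # crossings are integer values of s
    k = np.floor(s)
    out = []
    for i in np.nonzero(np.diff(k) != 0)[0]:
        level = k[i+1] if k[i+1] > k[i] else k[i]
        frac = (level - s[i])/(s[i+1] - s[i])
        out.append([f[i] + frac*(f[i+1] - f[i]) for f in fields])
    return np.array(out)

# e.g. points = section(Omega2, q2, omega2, I)  # one row per crossing
\end{verbatim}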
\begin{figure*}
\includegraphics[width=\textwidth]{Figure12.eps}
\caption{Surfaces of section (SOS's)
for perturber eccentricity $e_1 = 0.1$ and $\varpi_1 = 0^\circ$
and various values
of the disturbing function $R$ (the only constant of the motion
when $e_1\neq 0$; see Section \ref{sec:fixed} for the units of $R$)
labeled at the top of the figure.
These SOS's are sectioned using $\Omega_2$: a point is plotted
every time $\Omega_2$ crosses 180$^\circ$, irrespective of
the sign of $\dot{\Omega}_2$ (see text).
Each test particle trajectory is assigned its own color; see
Table \ref{tab:SOS_e1} in the Appendix for the initial conditions.
At $R = 0.0721$, the $\varpi_2-\varpi_1$ resonance appears.
At $R = -0.0721$, the $\omega_2$ (inverse Kozai) resonance appears
(dark blue and turquoise lobes centered on $\omega_2 = \pm 90^\circ$).
At $R = 0$, $-0.0938$, and $-0.1373$, the $\varpi_2+\varpi_1-2\Omega_2$
resonance manifests (this angle librates about $0^\circ$).
These three resonances are accessed at inclinations $I \sim
45^\circ$--75$^\circ$ (and at analogous retrograde inclinations
that are not shown).
The region at large $q_2$ for $R = -0.1373$ is empty because
here the test particle locks into
the $\Omega_2-\varpi_1$ resonance studied by \citet{naoz17},
in which $\Omega_2-\varpi_1$ librates about $90^\circ$ and so does
not trigger our sectioning criterion.}
\label{fig:SOS_e1}
\end{figure*}
Figure \ref{fig:SOS_e1}
shows $\Omega_2$-SOS's for $e_1 = 0.1$ and a sequence of $R$'s
(including those $R$ values used for Figures \ref{fig:inverse_Kozai_curves}
and \ref{fig:flip}).
At the most positive
$R$ (lowest $I$), the trajectories are regular, with small-amplitude
variations in $q_2$ and $I$. At more negative $R$ (larger $I$),
three strong resonances appear, each characterized by
substantial variations in $q_2$:
\begin{enumerate}
\item The first of these (appearing at $R = +0.0721$, second
panel from left) is an ``apse-aligned'' resonance
for which $\varpi_2-\varpi_1$ librates about $0^\circ$ and
\begin{equation}
I (\varpi_2{\rm-}\varpi_1{\rm-res}) \simeq \arccos \left( \frac{+1 \pm \sqrt{6}}{5} \right) \simeq 46^\circ \,
{\rm and} \, 107^\circ \,.
\end{equation}
At these inclinations, by equation (\ref{eq:dvarpidt}),
$\left. d\varpi_2/dt\right|_{{\rm quad},e_1=0}=0$.\footnote{The
apse-aligned resonance identified here is at small $\alpha$ and large $I \simeq 46^\circ/107^\circ$,
but another apse-aligned resonance also exists
for orbits that are co-planar or nearly so (e.g., \citealt{wyatt99}).
The latter can be found using
Laplace-Lagrange secular theory (e.g., \citealt{murray00}),
which does not expand in $\alpha$
but rather in eccentricity and inclination; it corresponds to a purely forced
trajectory with no free oscillation. Laplace-Lagrange (read: low-inclination secular)
dynamics are well understood so we do not discuss them further.}
\item The second of the resonances ($R = -0.0721$, two lobes in the
middle panel)
is the inverse Kozai or $\omega_2$
resonance, appearing at $I (\omega_2{\rm-res}) \simeq 63^\circ$ and
$117^\circ$, and for which $\omega_2$ librates about $\pm 90^\circ$
(Section \ref{sec:ikr}).
\item The third resonance ($R = -0.0721$, $-0.0938$, and $-0.1373$;
middle, fourth, and fifth panels) appears at
\begin{equation}
I (\varpi_2{\rm+}\varpi_1{\rm-}2\Omega_2{\rm-res}) \simeq \arccos \left( \frac{-1 \pm \sqrt{6}}{5} \right) \simeq 73^\circ \,
{\rm and} \, 134^\circ \,,
\end{equation}
inclinations for which $d(\varpi_2+\varpi_1-2\Omega_2)/dt = 0$
or equivalently $\dot{\omega}_2=\dot{\Omega}_2$ (equations
\ref{eq:domegadt} and \ref{eq:dOmegadt}). The resonant
angle $\varpi_2 + \varpi_1 - 2\Omega_2$ $(= \omega_2-\Omega_2)$
librates about $0^\circ$.\footnote{The analogue of this
resonance for the interior test particle problem
has been invoked, together with other secular and mean-motion
resonant effects, in the context of Planet Nine and
Centaur evolution \citep{batygin17}.}
\end{enumerate}
For the above three resonances, we have verified that their
respective resonant arguments
($\varpi_2-\varpi_1$; $\omega_2$; $\varpi_2+\varpi_1-2\Omega_2$)
librate (see also Figure \ref{fig:nbody}), and have omitted their retrograde
branches from the SOS for simplicity.
The $\varpi_2+\varpi_1-2\Omega_2$ and $\varpi_2-\varpi_1$
resonances appear at octopole
order; they are associated with the first two terms
in the octopole disturbing function (\ref{eq:oct}), respectively.
The $\omega_2$ resonance is a hexadecapolar effect, as noted earlier.
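The critical inclinations quoted above follow from setting the quadrupole rates (\ref{eq:domegadt}), (\ref{eq:dOmegadt}), and (\ref{eq:dvarpidt}) to the appropriate commensurabilities; a quick numerical check on the $\cos I$ roots:
\begin{verbatim}
import numpy as np

# omega2 fixed points:             5 c^2 - 1 = 0
# apse-aligned (dvarpi2/dt = 0):   10 c^2 - 4 c - 2 = 0
# domega2/dt = dOmega2/dt:         5 c^2 + 2 c - 1 = 0
for coeffs in ([5., 0., -1.], [10., -4., -2.], [5., 2., -1.]):
    print(np.degrees(np.arccos(np.sort(np.roots(coeffs)))))
# -> [116.6, 63.4], [106.9, 46.4], [133.6, 73.1] deg, as quoted
\end{verbatim}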
\begin{figure*}
\includegraphics[width=\textwidth]{Figure13.eps}
\caption{Same as Figure \ref{fig:SOS_e1} but for $e_1 = 0.3$.
The $\varpi_2-\varpi_1$ resonance appears at $R = 0.0503$;
the $\varpi_2 + \varpi_1 - 2\Omega_2$ resonance appears at
$R = 0$ and $-0.0503$; and the double-lobed
inverse Kozai resonance appears at $R = -0.0503$.
Initial conditions used to make this figure are in Table
\ref{tab:SOS_e3}.}
\label{fig:SOS_e3}
\end{figure*}
The SOS for $e_1 = 0.3$ (Figure \ref{fig:SOS_e3}) reveals dynamics
qualitatively similar to $e_1 = 0.1$, but with larger amplitude variations
in $q_2$. We have verified in Figure \ref{fig:SOS_e3}
that the island of libration seen at $R = 0.0503$
is the $\varpi_2-\varpi_1$ resonance; that the islands near the top
of the panels for $R = 0$ and $-0.0503$ represent the
$\varpi_2+\varpi_1 - 2\Omega_2$
resonance; and that the two islands centered on $\omega_2 = \pm
90^\circ$ at $R = -0.0503$ represent the inverse Kozai resonance.
For both $e_1 = 0.3$ and $0.1$, chaos is more prevalent at
more negative $R$ / larger $I$. The chaotic
trajectories dip to periastron distances $q_2$ near $a_1 = 20$ AU, and
in Figure \ref{fig:SOS_e1} we show a few that actually cross orbits
with the perturber (the gray trajectory for $R = -0.0938$
is situated near the separatrix of the inverse Kozai resonance:
the two resonant lobes are seen in ghostly outline).
The orbit-crossing behavior seen in Figure \ref{fig:SOS_e1}
occurs late in the test particle's evolution---in fact, at times longer
than the age of the universe for our parameter choices!
We nevertheless show these trajectories because the evolutionary
timescales shorten and become realistic for smaller $a_1$ and $a_2$
(Section \ref{sec:prectime}). Unfortunately, no matter how we scale
$a_1$ and $a_2$, finding $N$-body counterparts to the orbit-crossing trajectories
of Figure \ref{fig:SOS_e1}
is necessarily expensive because the $N$-body timestep scales with the
orbital period of the interior body; $N$-body tests of these
particular trajectories are deferred to future work.
\begin{figure*}
\includegraphics[width=\textwidth]{Figure14.eps}
\caption{Same as Figure \ref{fig:SOS_e1} but for $e_1 = 0.7$.
In addition to the $\varpi_2-\varpi_1$ resonance at $R \leq 0.4$, a new
resonance appears at $R \geq 0.5$ for which
$\varpi_2-3\varpi_1+2\Omega_2$ $(= \omega_2 + 3 \Omega_2)$ librates
about $0^\circ$.
This last resonance, however,
is not found in full $N$-body integrations
(by contrast to the other four resonances
identified in this paper; see Figure \ref{fig:nbody}).
Initial conditions used to make this figure
are in Table \ref{tab:SOS_e7}.}
\label{fig:SOS_e7}
\end{figure*}
At $e_1 = 0.7$ (Figure \ref{fig:SOS_e7}) we find, in addition to the
$\varpi_2-\varpi_1$ resonance at $R \leq 0.4$, a new resonance at $R \geq 0.5$.
For this latter resonance,
$\varpi_2 - 3 \varpi_1 + 2 \Omega_2 = \omega_2 + 3 \Omega_2$
librates about $0^\circ$.
Although this resonance is found at octopole order---it is embodied in
the fourth term in equation (\ref{eq:oct})---we found by
experimentation that eliminating the hexadecapole contribution to the
disturbing function removes the test particle from this resonance (for
the same initial conditions as shown in Figure \ref{fig:SOS_e7}).
Evidently the hexadecapole potential helps to enforce
$\dot{\omega}_2 = - 3 \dot{\Omega}_2$ so that this octopole resonance
can be activated. Remarkably, this $\varpi_2-3\varpi_1+2\Omega_2$
resonance enables the test particle to cycle from a nearly (but not
exactly) co-planar orbit to one inclined by
$\sim$60--70$^\circ$, while its eccentricity $e_2$
varies between $\sim$0.2 and 0.6.
We will see in Section \ref{sec:nbody}, however,
that a full $N$-body treatment mutes the effects of this resonance.
\subsection{$N$-Body Tests} \label{sec:nbody}
Having identified five resonances in the above surveys,
we test how robust they are using $N$-body integrations.
We employ the \texttt{WHFast} symplectic
integrator (\citealt{rein15}; \citealt{wisdom91}), part of the
\texttt{REBOUND} package (\citealt{rein12}), adopting
timesteps between 0.1--0.25 yr.
Initial conditions (inputted for the $N$-body experiments
as Jacobi elements, together with initial true anomalies
$f_1 = 0^\circ$ and $f_2 = 180^\circ$)
were drawn from the above surveys
with the goal of finding resonant
libration at as large a perturber eccentricity $e_1$ as possible.
In Figure \ref{fig:nbody} we verify that the
$\omega_2$, $\Omega_2-\varpi_1$, $\varpi_2-\varpi_1$, and
$\varpi_2+\varpi_1-2\Omega_2$
resonances survive a full $N$-body treatment when $e_1$
is as high as $0.1$, $0.7$, $0.7$, and $0.1$, respectively
(see also Figure \ref{fig:gallardo_time}).
Table \ref{tab:nbody} records the initial conditions.
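To classify libration versus circulation in both the secular and $N$-body outputs, each candidate resonant argument can be wrapped and its excursion inspected; a minimal sketch:
\begin{verbatim}
import numpy as np

def librates(theta, center):
    """True if theta (rad) stays within +/- pi of center throughout."""
    d = np.angle(np.exp(1j*(theta - center)))   # wrap to (-pi, pi]
    return np.max(np.abs(d)) < 0.999*np.pi

# e.g. librates(varpi2 + varpi1 - 2*Omega2, 0.0)
#      librates(omega2, np.pi/2) or librates(omega2, -np.pi/2)
\end{verbatim}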
We were unable in $N$-body calculations
to lock the test particle
into the $\varpi_2-3\varpi_1+2\Omega_2$ $(= \omega_2 + 3\Omega_2)$ resonance,
despite exploring the parameter
space in the vicinity where we found it in
the secular surfaces of section.
This is unsurprising insofar as we had found
this particular
resonance to depend on both octopole and hexadecapolar effects
at the largest perturber eccentricity
tested, $e_1 = 0.7$; at such a high eccentricity, effects even higher
order than hexadecapole are likely to be significant, and it appears
from our $N$-body calculations that they are, preventing a resonant lock.
We show in Figure \ref{fig:nbody} an $N$-body
trajectory that comes close to
being in
this
resonance (on average,
$\dot{\omega}_2 \approx -2.7 \dot{\Omega}_2$). Although the
inclination does not vary as dramatically
as in the truncated secular evolution, it can still cycle
between $\sim$$20^\circ$ and $70^\circ$.
\begin{figure*}
\includegraphics[width=\textwidth]{Figure15.eps}
\caption{Comparison of $N$-body (dashed red)
vs. secular (solid black) integrations.
Initial conditions, summarized in Table \ref{tab:nbody},
are chosen to lock the test particle into the $\omega_2$,
$\Omega_2-\varpi_1$, $\varpi_2-\varpi_1$, and $\varpi_2+\varpi_1-2\Omega_2$ resonances
(panels a through d). We failed to obtain a lock for the
$\varpi_2 - 3 \varpi_1 +2\Omega_2 = \omega_2+3\Omega_2$ resonance in our $N$-body calculations
and instead offer an $N$-body trajectory that
comes close to librating
($\dot{\omega}_2 \approx -2.7 \dot{\Omega}_2$; panel e),
together with its secular counterpart which does
appear to librate.
}
\label{fig:nbody}
\end{figure*}
The agreement between the $N$-body and secular integrations shown
in Figure \ref{fig:nbody} is good, qualitatively and even quantitatively
in some cases. We emphasize that these trajectories have not been cherry-picked
to display such agreement; the initial conditions were drawn from the
preceding surveys for the purpose of testing which resonances
survive an $N$-body treatment. In the cases of the $\Omega_2-\varpi_1$
and $\varpi_2-\varpi_1$ resonances, the secular trajectories
show amplitude modulation/beating not seen in their
$N$-body counterparts. Similar behavior was reported
by \citet[][see their Figure 12]{naoz17}. A broader continuum
of forcing frequencies must be present at our standard value of
$\alpha = 0.2$ than is captured by our hex-limited treatment;
certainly we obtain better agreement with $N$-body calculations
at lower values of $\alpha$ (as we have explicitly verified by testing, e.g.,
$\alpha = 0.05$ for the parameters of Figure \ref{fig:nbody}c).
\section{Summary}\label{sec:conclude}
We have surveyed numerically the dynamics
of an external test particle in the restricted,
secular, three-body problem. We wrote down the secular potential
of an internal perturber to hexadecapolar order (where
the expansion parameter is the ratio of
semimajor axes of the internal and external bodies,
$\alpha = a_1/a_2 < 1$)
by adapting the disturbing function for an external perturber as
derived by
\citet[][Y03]{yokoyama03}. In making this adaptation,
we corrected a misprint in the hexadecapolar potential
of Y03 (M.~\'Cuk 2017, personal communication).
Our numerical survey was conducted at fixed $\alpha = 0.2$,
the largest value we thought might still be captured
by a truncated secular expansion (lower values generally do better).
Inclination variations for an external test particle
can be dramatic when the eccentricity of the internal
perturber $e_1$ is non-zero. The variations in mutual
inclination $I$ are effected by a
quadrupole resonance for which $\Omega_2$, the test
particle's longitude of ascending node (referenced to the orbit plane
of the perturber, whose periapse is at longitude $\varpi_1$),
librates about $\varpi_1 \pm 90^\circ$. Within this $\Omega_2-\varpi_1$
resonance, the test particle's
orbit flips (switches from prograde to retrograde). Flipping
is easier---i.e., the minimum
$I$ for which flipping is possible decreases---with increasing
$e_1$. All of this inclination behavior was described by
Naoz et al.~(\citeyear{naoz17}; see also \citealt{verrier09}
and \citealt{farago10})
and we have confirmed these essentially quadrupolar
results here.
Eccentricity variations for an external test particle
rely on octopole or higher-level effects (at the quadrupole
level of approximation, the test particle
eccentricity $e_2$ is strictly constant).
When $e_1 = 0$, octopole effects vanish, and
the leading-order resonance able to produce
eccentricity variations is the hexadecapolar
``inverse Kozai'' resonance
in which the test particle's argument of periastron
$\omega_2$ librates about $\pm 90^\circ$ \citep{gallardo12}.
The resonance demands rather high inclinations,
$I \simeq 63^\circ$ or $117^\circ$.
By comparison to its conventional Kozai counterpart which exists
at quadrupole order, the hexadecapolar
inverse Kozai resonance is more restricted
in scope: it exists only over a narrow range of
$J_z = \sqrt{1-e_2^2}\cos I$ for a given $\alpha$,
and produces eccentricity variations on the order of
$\Delta e_2 \simeq 0.2$. For suitable $J_z$
it can, however, lead to orbit-crossing with the perturber.
In our truncated secular treatment,
we found the inverse Kozai resonance
to persist up to perturber eccentricities of $e_1 = 0.3$;
in $N$-body experiments, we found the resonance only
up to $e_1 = 0.1$.
At higher $e_1$, the hexadecapolar
resonance seems to disappear,
overwhelmed by octopole effects.
Surfaces of section made for $e_1 \neq 0$
and $\Omega_2 = 180^\circ$
revealed two
octopole resonances
characterized by stronger eccentricity variations of
$\Delta e_2$ up to 0.5. The resonant angles are the apsidal difference
$\varpi_2-\varpi_1$, which librates
about $0^\circ$, and
$\varpi_2+\varpi_1-2\Omega_2$, which also librates about $0^\circ$.
The $\varpi_2-\varpi_1$ and $\varpi_2+\varpi_1-2\Omega_2$ resonances
are like the inverse Kozai resonance in that they
also require large inclinations,
$I \simeq 46^\circ$/$107^\circ$ and $73^\circ/134^\circ$,
respectively.
The apse-aligned $\varpi_2-\varpi_1$ resonance survives
full $N$-body integrations up to $e_1 = 0.7$;
the $\varpi_2+\varpi_1-2\Omega_2$ resonance survives up to $e_1 = 0.1$.
At large $e_1$, the requirement on $I$ for the $\varpi_2-\varpi_1$ resonance
lessens to $\sim$20$^\circ$.
We outlined two rough, qualitative trends: (1)
the larger $e_1$ is, the more
the eccentricity and inclination of the test particle can vary;
and (2) the more polar the test particle orbit (i.e.,
the closer $I$ is to $90^\circ$), the more chaotic its evolution.
In some high-inclination trajectories---near the separatrix
of the inverse Kozai resonance, for example---test particle
periastra could be lowered from large distances to near the perturber.
These secular channels of transport
need to be confirmed with $N$-body tests.
This paper is but an initial reconnaissance of the external
test particle problem. How the various resonances we have identified
may have operated in practice to shape actual planetary/star
systems is left for future study. In addition to more
$N$-body tests, we also need to explore the effects
of general relativity (GR). For our chosen parameters,
GR causes the periapse of the perturber to precess
at a rate that is typically several hundreds of times slower
than the rate at which the test particle's node precesses.
Such an additional apsidal precession is not expected to affect
our results materially; still, a check should be made.
A way to do that comprehensively is
to re-compute our surfaces of section with GR.
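For orientation, the rate in question is the standard general-relativistic
apsidal advance of the perturber,
\[
\dot{\varpi}_{1,\rm GR} = \frac{3\,(GM_\ast)^{3/2}}{c^2\,a_1^{5/2}\,(1-e_1^2)}\,,
\]
where $M_\ast$ is the mass of the central body and $c$ is the speed of light;
it is this rate that falls short of the test particle's nodal precession rate
by the factor of several hundred cited above.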
\section*{Acknowledgements}
We are grateful to Edgar Knobloch and Matthias Reinsch for teaching
Berkeley's upper-division mechanics course Physics 105,
and for connecting BV with EC.
This work was supported by a Berkeley Excellence Account for Research
and the NSF.
Matija \'Cuk alerted us to the misprint in Y03
and provided insights that were most helpful.
We thank Smadar Naoz for an encouraging and constructive
referee's report; Konstantin Batygin, Alexandre Correia, Bekki
Dawson,
Eve Lee, Yoram Lithwick,
and Renu Malhotra for useful discussions and feedback;
and Daniel Tamayo for teaching us how to use \texttt{REBOUND}.
\bibliographystyle{mnras}
\section{Introduction}
The Surface Quasigeostrophic equation (SQG) of geophysical origin (\cite{held}) was proposed as a two-dimensional model for the study of the formation of singularities in inviscid incompressible flow (\cite{c}, \cite{cmt}). While the global regularity of all solutions of SQG whose initial data are smooth is still unknown, the original blow-up scenario of \cite{cmt} has been ruled out analytically (\cite{cord}) and numerically (\cite{cnum}), and nontrivial examples of global smooth solutions have been constructed (\cite{cascor}). Solutions of SQG and related equations without dissipation and with non-smooth (piecewise constant) initial data give rise to interface dynamics (\cite{fr}, \cite{castro}) with potential finite-time blow-up (\cite{cornum}).
The addition of fractional Laplacian dissipation produces globally regular solutions if the power of the Laplacian is larger than or equal to one half. When the linear dissipative operator is precisely the square root of the Laplacian, the equation is commonly referred to as the ``critical dissipative SQG'', or ``critical SQG''. This active scalar equation (\cite{c}) has been the object of intensive study in the past decade. The solutions are transported by divergence-free velocities they create, and are smoothed out and decay due to nonlocal diffusion. Transport and diffusion do not add size to a solution: the solution remains bounded, if it starts so (\cite{res}). The space $L^{\infty}({\mathbb R}^2)$ is not a natural phase space for the nonlinear evolution: the nonlinearity involves Riesz transforms and these are not well behaved in $L^{\infty}$. Unfortunately, for the purposes of studies of global in time behavior of solutions, $L^{\infty}$ is unavoidable: it quantifies the most important information freely available. The equation is quasilinear and $L^{\infty}$--critical, and there is no ``wiggle room'', nor a known better (smaller) space which is invariant for the evolution. One must work in order to obtain better information. A pleasant aspect of criticality is that solutions with small initial $L^{\infty}$ norm are smooth (\cite{ccw}). The global regularity of large solutions was obtained independently in \cite{caf} and \cite{knv} by very different methods: using harmonic extension and the De Giorgi methodology of zooming in, and passing from $L^2$ to $L^{\infty}$ and from $L^{\infty}$ to $C^{\alpha}$ in \cite{caf}, and constructing a family of time-invariant moduli of continuity in \cite{knv}. Several subsequent proofs were obtained (please see \cite{cvt} and references therein). All the proofs are dimension-independent, but are set either in ${\mathbb R}^d$ or on the torus ${\mathbb {T}}^d$. The proofs of \cite{cv1} and \cite{cvt} were based on an extension of the C\'{o}rdoba-C\'{o}rdoba inequality (\cite{cc}).
This inequality states that
\begin{equation}
\Phi'(f)\Lambda f - \Lambda\Phi(f)\ge 0
\label{corcorone}
\end{equation}
pointwise. Here $\Lambda = \sqrt{-\Delta}$ is the square root of the Laplacian in the whole space ${\mathbb R}^d$, $\Phi$ is a real valued convex function of one variable, normalized so that $\Phi(0) = 0$ and $f$ is a smooth function. The fractional Laplacian in the whole space has a (very) singular integral representation, and this can be used to obtain (\ref{corcorone}). In \cite{cv1} specific nonlinear maximum principle lower bounds were obtained and used to prove the global regularity. A typical example is
\begin{equation}
D(f) =f\Lambda f- \frac{1}{2}\Lambda\left({f^2}\right) \ge c\left(\|\theta\|_{L^{\infty}}\right)^{-1} {f^3}
\label{corcorv}
\end{equation}
pointwise, for $f=\partial_i\theta$ a component of the gradient of a bounded function $\theta$. This is a useful cubic lower bound for a quadratic expression, when $\|\theta\|_{L^{\infty}}\le\|\theta_0\|_{L^{\infty}}$ is known to be bounded above. The critical SQG equation in ${\mathbb R}^2$ is
\begin{equation}
\partial_t \theta + u\cdot\nabla\theta + \Lambda\theta = 0
\label{critsqg}
\end{equation}
where
\begin{equation}
u = \nabla^{\perp}\Lambda^{-1}\theta = R^{\perp}{\theta}
\label{ucr}
\end{equation}
and $\nabla^{\perp} = (-\partial_2, \partial_1)$ is the gradient rotated by $\frac{\pi}{2}$.
Because of the conservative nature of transport and the good dissipative properties of $\Lambda$ following from (\ref{corcorone}), all $L^p$ norms of $\theta$ are nonincreasing in time. Moreover, because of properties of Riesz transforms, $u$ is essentially of the same order of magnitude as $\theta$. Differentiating the equation we obtain the stretching equation
\begin{equation}
\left(\partial_t + u\cdot\nabla + \Lambda\right)\nabla^{\perp}\theta = (\nabla u)\nabla^{\perp}\theta.
\label{stretch}
\end{equation}
(In the absence of $\Lambda$ this is the same as the stretching equation for three dimensional vorticity in incompressible Euler equations, one of the main reasons SQG was considered in
\cite{c}, \cite{cmt} in the first place.) Taking the scalar product with $\nabla^{\perp}\theta$ we obtain
\begin{equation}
\frac{1}{2}(\partial_t + u\cdot\nabla + \Lambda)q^2 + D(q) = Q
\label{qf}
\end{equation}
for $q^2 = |\nabla^{\perp}\theta|^2$, with
\[
Q = (\nabla u)\nabla^{\perp}\theta\cdot\nabla^{\perp}\theta \le |\nabla u| q^2.
\]
The operator $\partial_t + u\cdot\nabla + \Lambda$ is an operator of advection and fractional diffusion: it does not add size. Using the pointwise bound (\ref{corcorv}) we already see that the dissipative lower bound is potentially capable of dominating the cubic term $Q$, but there are two obstacles. The first obstacle is that constants matter: the two expressions are cubic, but the useful dissipative cubic lower bound $D(q)\ge K |q|^{3}$ has perhaps too small a prefactor $K$ if the $L^{\infty}$ norm of $\theta_0$ is too large. The second obstacle is that although
\[
\nabla u = R^{\perp}(\nabla\theta)
\]
has the same size as $\nabla^{\perp}\theta$ (modulo constants) in all $L^p$ spaces $1<p<\infty$, it fails to be bounded in $L^{\infty}$ by the $L^{\infty}$ norm of $\nabla^{\perp}\theta$. In order to overcome these obstacles, in \cite{cv1} and \cite{cvt}, instead of estimating directly gradients, the proof proceeds by estimating finite differences, with the aim of obtaining bounds for $C^{\alpha}$ norms first. In fact, in critical SQG, once
the solution is bounded in any $C^{\alpha}$ with $\alpha>0$, it follows that it is $C^{\infty}$. More generally, if the equation has a dissipation of order $s$, i.e., $\Lambda$ is replaced by $\Lambda^s$ with $0<s\le 1$, then if $\theta$ is bounded in $C^{\alpha}$ with $\alpha>1-s$, then the solution is smooth (\cite{cw}). (This condition is sharp if one considers general linear advection-diffusion equations, see \cite{svz}.) In \cite{cvt} the smallness of $\alpha$ is used to show that the term corresponding to $Q$ in the finite difference version of the argument is dominated by the term corresponding to $D(q)$.
In this paper we consider the critical SQG equation in bounded domains. We take a bounded open domain $\Omega\subset {\mathbb R}^d$ with smooth (at least $C^{2,\alpha}$) boundary and denote by $\Delta$ the Laplacian operator with homogeneous Dirichlet boundary conditions and by $\l$ its square root defined in terms of eigenfunction expansions. Because no explicit kernel for the fractional Laplacian is available in general, our approach, initiated in \cite{ci} is based on bounds on the heat kernel.
The critical SQG equation is
\begin{equation}
\partial_t \theta + u\cdot\nabla\theta + \l\theta =0
\label{sqg}
\end{equation}
with
\begin{equation}
u = \nabla^{\perp}\l^{-1}\theta = R_D^{\perp}\theta
\label{u}
\end{equation}
and smooth initial data. We obtain global regularity results, in the spirit of the ones in the whole space. There are quite significant differences between the two cases. First of all, the fact that no explicit formulas are available for kernels requires a new approach; this yields as a byproduct new proofs even in the whole space. The main difference and additional difficulty in the bounded domain case is due to the lack of translation invariance.
The fractional Laplacian is not translation invariant, and from the very start, differentiating the equation (or taking finite differences) requires understanding the respective commutators. For the same reason, the Riesz transforms $R_D$ are not spectral operators, i.e., they do not commute with functions of the Laplacian, and so velocity bounds need a different treatment. In \cite{ci} we proved using the heat kernel approach the existence of global weak solutions of (\ref{sqg}) in $L^2(\Omega)$. A proof of local existence of smooth solutions is provided in the present paper in $d=2$. The local existence is obtained in Sobolev spaces based on $L^2$ and uses Sobolev embeddings. Because of this, the proof is dimension dependent. A proof in higher dimensions is also possible but we do not pursue this here. We note that for regular enough solutions (e.g. $\theta\in H_0^1(\Omega)$) the normal component of the velocity
vanishes at the boundary $\left(R_D^{\perp}\theta\cdot N\right)\big|_{\partial\Omega}=0$ because the stream function $\psi = \l^{-1}\theta$ vanishes at the boundary and its gradient is normal to the boundary. Let us remark here that even in the case of a half-space and $\theta\in C_0^{\infty}(\Omega)$, the tangential component of the velocity need not vanish: there is tangential slip.
In order to state our main results, let
\begin{equation}
d(x) = dist(x,\partial\Omega)
\label{dx}
\end{equation}
denote the distance from $x$ to the boundary of $\Omega$. We introduce the $C^{\alpha}(\Omega)$ space for interior estimates:
\begin{defi}\label{calpha}
Let $\Omega$ be a bounded domain and let $0<\alpha<1$ be fixed. We say that $f\in C^{\alpha}(\Omega)$ if $f\in L^{\infty}(\Omega)$ and
\begin{equation}
[f]_{\alpha} = \sup_{x\in\Omega}(d(x))^{\alpha}\left(\sup_{h\neq 0,|h|<d(x)}\frac{|f(x+h)-f(x)|}{|h|^{\alpha}}\right) <\infty.
\label{semi}
\end{equation}
The norm in $C^{\alpha}(\Omega)$ is
\begin{equation}
\|f\|_{C^{\alpha}} = \|f\|_{L^{\infty}(\Omega)} + [f]_{\alpha}.
\label{norm}
\end{equation}
\end{defi}
Our main results are the following:
\begin{thm}\label{alphaint} Let $\theta(x,t)$ be a smooth solution of (\ref{sqg}) on a time interval $[0, T)$, with $T\le \infty$, with initial data $\theta(x,0)= \theta_0(x)$. Then the solution is uniformly bounded,
\begin{equation}
\sup_{0\le t< T}\|\theta(t)\|_{L^{\infty}(\Omega)}\le \|\theta_0\|_{L^{\infty}(\Omega)}.
\label{linftyb}
\end{equation}
There exists $\alpha$ depending only on $\|\theta_0\|_{L^{\infty}(\Omega)}$ and
$\Omega$, and a constant $\Gamma$ depending only on the domain $\Omega$ (and in particular, independent of $T$) such that
\begin{equation}
\sup_{0\le t<T}\|\theta(t)\|_{C^{\alpha}(\Omega)} \le \Gamma\|\theta_0\|_{C^{\alpha}(\Omega)}
\label{calphain}
\end{equation}
holds.
\end{thm}
The second theorem is about global interior gradient bounds:
\begin{thm}\label{gradint} Let $\theta(x,t)$ be a smooth solution of (\ref{sqg}) on a time interval $[0, T)$, with $T\le \infty$, with initial data $\theta(x,0)= \theta_0(x)$. There exists a constant $\Gamma_1$ depending only on $\Omega$ such that
\begin{equation}
\sup_{x\in\Omega, 0\le t<T}d(x)|\nabla_x\theta(x,t)|\le \Gamma_1\left[\sup_{x\in\Omega}d(x)|\nabla_x\theta_0(x)| + \left(1+\|\theta_0\|_{L^{\infty}(\Omega)}\right)^4\right]
\label{gradintb}
\end{equation}
holds.
\end{thm}
\begin{rem} Higher interior regularity can be proved also. In fact, once global interior $C^{\alpha}$ bounds are obtained for any $\alpha>0$, the interior regularity problem becomes subcritical, meaning that ``there is room to spare''. This is already the case for Theorem \ref{gradint} and justifies thinking that the equation is $L^{\infty}$ interior-critical. However, we were not able to obtain global uniform $C^{\alpha}(\bar{\Omega})$ bounds. Moreover, we do not know the implication
$C^{\alpha}(\bar{\Omega}) \Rightarrow C^{\infty}(\bar{\Omega})$ uniformly, and thus the equation is not $L^{\infty}$ critical up to the boundary. This is due to the fact that the commutator between normal derivatives and the fractional Dirichlet Laplacian is not controlled uniformly up to the boundary. The example of half-space is instructive because explicit kernels and calculations are available. In this example odd reflection across the boundary permits the construction of global smooth solutions, if the initial data are smooth and compactly supported away from the boundary. The support of the solution remains compact and cannot reach the boundary in finite time, but the gradient of the solution might grow in time at an exponential rate.
\end{rem}
The proofs of our main results use the following elements. First, the inequality (\ref{corcorone}) which has been proved in (\cite{ci}) for the Dirichlet $\l$ is shown to have a lower bound
\begin{equation}
D(f)(x) = \left(f\l f - \frac{1}{2}\l\left({f^2}\right)\right)(x) \ge c\frac{f^2(x)}{d(x)}
\label{dfdxb}
\end{equation}
with $c>0$ depending only on $\Omega$. Note that in ${\mathbb R}^d$, $d(x)=\infty$, which is consistent with (\ref{corcorone}). This lower bound (valid for general $\Phi$ convex, with $c$ independent of $\Phi$, see (\ref{cor})) provides a strong damping boundary repulsive term, which is essential to overcome boundary effects coming from the lack of translation invariance.
The second element of proofs consists of nonlinear lower bounds in the spirit of (\cite{cv1}). A version for derivatives in bounded domains, proved in (\cite{ci}) is modified for finite differences. In order to make sense of finite differences near the boundary in a manner suitable for transport, we introduce a family of good cutoff functions depending on a scale $\ell$ in Lemma \ref{goodcutoff}. The finite difference nonlinear lower bound is
\begin{equation}
D(f)(x)\ge c\left(|h|\|\theta\|_{L^{\infty}(\Omega)}\right)^{-1}|f(x)|^3+ c\frac{|f(x)|^2}{d(x)}
\label{dfcube}
\end{equation}
when $f=\chi\delta_h\theta$ is large (see (\ref{nlb})), where $\chi$ belongs to the family of good cutoff functions.
Once global interior $C^{\alpha}(\Omega)$ bounds are obtained, in order to obtain global interior bounds for the gradient, we use a different nonlinear lower bound,
\begin{equation}
D(f) = \left(f\l f -\frac{1}{2}(\l f^2)\right)(x) \ge c \frac{|f(x)|^{3+\frac{\alpha}{1-\alpha}}}{\|\theta\|_{C^{\alpha}(\Omega)}^{\frac{1}{1-\alpha}}}(d(x))^{\frac{\alpha}{1-\alpha}} + c\frac{f^2(x)}{d(x)}
\label{nnlbd}
\end{equation}
for large $f=\chi\nabla\theta$ (see (\ref{nlbd})). This is a super-cubic bound, and makes the gradient equation look subcritical. Similar bounds were obtained in the whole space in (\cite{cv1}). Proving the bounds (\ref{dfcube}) and (\ref{nnlbd}) requires a different approach and new ideas because of the absence of explicit formulas and lack of translation invariance.
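A quick check of the exponents in (\ref{nnlbd}): if $\theta$ has typical magnitude $\Theta$ and lengths have typical magnitude $L$, so that $f\sim\Theta L^{-1}$, $d(x)\sim L$ and, by (\ref{semi}), $\|\theta\|_{C^{\alpha}(\Omega)}\sim\Theta$, then, using $3+\frac{\alpha}{1-\alpha} = \frac{3-2\alpha}{1-\alpha}$,
\[
\frac{|f|^{3+\frac{\alpha}{1-\alpha}}}{\|\theta\|_{C^{\alpha}(\Omega)}^{\frac{1}{1-\alpha}}}(d(x))^{\frac{\alpha}{1-\alpha}} \sim \Theta^{\frac{3-2\alpha}{1-\alpha}-\frac{1}{1-\alpha}}\,L^{-\frac{3-2\alpha}{1-\alpha}+\frac{\alpha}{1-\alpha}} = \Theta^2 L^{-3},
\]
which is the common scaling of $D(f)\sim f\l f$ and of the term $\frac{f^2(x)}{d(x)}$.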
The third element of proofs are bounds for $R_D^{\perp}\theta$ based only on global apriori information on $\|\theta\|_{L^{\infty}}$ and the nonlinear lower bounds on $D(f)$ for appropriate $f$. Such an approach was initiated in (\cite{cv1}) and (\cite{cvt}). In the bounded domain case, again, the method of proof is different because the kernels are not explicit, and reference is made to the heat kernels. The boundaries introduce additional error terms. The bound for finite differences is
\begin{equation}
|\delta_h R^{\perp}_D\theta(x)| \le C\left(\sqrt{\rho D(f)(x)} + \|\theta\|_{L^{\infty}}\left(\frac{|h|}{d(x)}+ \frac{|h|}{\rho}\right) + |\delta_h\theta(x)|\right)
\label{dhrtb}
\end{equation}
for $\rho\le cd(x)$, with $f=\chi\delta_h \theta$ and with $C$ a constant depending on $\Omega$ (see (\ref{dhub})). The bound for the gradient is
\begin{equation}
|\nabla R^{\perp}_D\theta(x)| \le C\left(\sqrt{\rho D(f)(x)} + \|\theta\|_{L^{\infty}(\Omega)}\left(\frac{1}{d(x)} + \frac{1}{\rho}\right) + |\nabla\theta(x)|\right)
\label{nartxb}
\end{equation}
for $\rho\le cd(x)$ with $f=\chi\nabla\theta$ with a constant $C$ depending on $\Omega$ (see (\ref{nauxb})). These are remarkable pointwise bounds (clearly not valid for the case of the Laplacian even in the whole space, where $D(f)(x) = |\nabla f(x)|^2$).
The fourth element of the proof are bounds for commutators. These bounds
\begin{equation}
\left |\left[\chi\delta_h, \l\right]\theta(x)\right| \le C\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}(\Omega)},
\label{chidellcomm}
\end{equation}
for $\ell\le d(x)$, (see (\ref{commhb})), and
\begin{equation}
\left|\left[\chi\nabla, \l\right]\theta(x)\right| \le \frac{C}{d(x)^2}\|\theta\|_{L^{\infty}(\Omega)},
\label{nachicommd}
\end{equation}
for $\ell\le d(x)$, (see (\ref{nachi})), reflect the difficulties due to the boundaries. They are remarkable though in that the only price to pay for a second order commutator in $L^{\infty}$ is $d(x)^{-2}$. Note that in the whole space this commutator vanishes ($\chi=1$). This nontrivial situation in bounded domains is due to cancellations and bounds on the heat kernel representing translation invariance effects away from boundaries (see (\ref{cancel1}, \ref{cancel2})). Although the heat kernel in bounded domains has been extensively studied, and the proofs of (\ref{cancel1}) and (\ref{cancel2}) are elementary, we have included them in the paper because we have not found them readily available in the literature and for the sake of completeness.
The paper is organized as follows: after preliminary background, we
prove the nonlinear lower bounds. We have separate sections for bounds for the Riesz transforms and the commutators. The proofs of the main results are then provided, using nonlinear maximum principles. We give some of the explicit calculations in the example of a half-space and conclude the paper by proving the translation invariance bounds for the heat kernel (\ref{cancel1}), (\ref{cancel2}), and a local well-posedness result in two appendices.
\section{Preliminaries}
The $L^2(\Omega)$ - normalized eigenfunctions of $-\Delta$ are denoted $w_j$, and its eigenvalues counted with their multiplicities are denoted $\lambda_j$:
\begin{equation}
-\Delta w_j = \lambda_j w_j.
\label{ef}
\end{equation}
It is well known that $0<\lambda_1\le...\le \lambda_j\to \infty$ and that $-\Delta$ is a positive selfadjoint operator in $L^2(\Omega)$ with domain ${\mathcal{D}}\left(-\Delta\right) = H^2(\Omega)\cap H_0^1(\Omega)$.
The ground state $w_1$ is positive and
\begin{equation}
c_0d(x) \le w_1(x)\le C_0d(x)
\label{phione}
\end{equation}
holds for all $x\in\Omega$, where $c_0, \, C_0$ are positive constants depending on $\Omega$. Functional calculus can be defined using the eigenfunction expansion. In particular
\begin{equation}
\left(-\Delta\right)^{\beta}f = \sum_{j=1}^{\infty}\lambda_j^{\beta} f_j w_j
\label{funct}
\end{equation}
with
\[
f_j =\int_{\Omega}f(y)w_j(y)dy
\]
for $f\in{\mathcal{D}}\left(\left(-\Delta\right)^{\beta}\right) = \{f\;|\; (\lambda_j^{\beta}f_j)\in \ell^2(\mathbb N)\}$.
We will denote by
\begin{equation}
\l^s = \left(-\Delta\right)^{\frac{s}{2}},
\label{lambdas}
\end{equation}
the fractional powers of the Dirichlet Laplacian, with $0\le s \le 2$
and with $\|f\|_{s,D}$ the norm in ${\mathcal{D}}\left (\l^s\right)$:
\begin{equation}
\|f\|_{s,D}^2 = \sum_{j=1}^{\infty}\lambda_j^{s}f_j^2.
\label{norms}
\end{equation}
It is well-known and easy to show that
\[
{\mathcal{D}}\left( \l \right) = H_0^1(\Omega).
\]
Indeed, for $f\in{\mathcal{D}}\left (-\Delta\right)$ we have
\begin{equation}
\|\nabla f\|^2_{L^2(\Omega)} = \int_{\Omega}f\left(-\Delta\right)fdx = \|\l f\|_{L^2(\Omega)}^2 = \|f\|^2_{1,D}.
\label{kat}
\end{equation}
We recall that the Poincar\'{e} inequality implies that the Dirichlet integral on the left-hand side above is equivalent to the norm in $H_0^1(\Omega)$
and therefore the identity map from the dense subset ${\mathcal{D}}\left(-\Delta\right)$ of $H_0^1(\Omega)$ to ${\mathcal D}\left(\l\right)$ is an isometry, and thus $H_0^1(\Omega)\subset {\mathcal{D}}\left(\l\right)$. But ${\mathcal{D}}\left(-\Delta\right)$ is dense in ${\mathcal D}\left(\l\right)$ as well, because finite linear combinations of eigenfunctions are dense in ${\mathcal D}\left(\l\right)$. Thus the opposite inclusion is also true, by the same isometry argument. \\
Note that in view of the identity
\begin{equation}
\lambda^{\frac{s}{2}} = c_{s}\int_0^{\infty}(1-e^{-t\lambda})t^{-1-\frac{s}{2}}dt,
\label{lambdalpha}
\end{equation}
with
\[
1 = c_{s} \int_0^{\infty}(1-e^{-\tau})\tau^{-1-\frac{s}{2}}d\tau,
\]
valid for $0\le s <2$, we have the representation
\begin{equation}
\left(\l^{s}f\right)(x) = c_{s}\int_0^{\infty}\left[f(x)-e^{t\Delta}f(x)\right]t^{-1-\frac{s}{2}}dt
\label{rep}
\end{equation}
for $f\in{\mathcal{D}}\left(\l^{s}\right)$.
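Indeed, for $0<s<2$ the integral in (\ref{lambdalpha}) converges (note $1-e^{-\tau}\sim \tau$ as $\tau\to 0$), and the substitution $\tau = t\lambda$ shows, for each $\lambda>0$,
\[
c_{s}\int_0^{\infty}(1-e^{-t\lambda})t^{-1-\frac{s}{2}}dt = \lambda^{\frac{s}{2}}c_{s}\int_0^{\infty}(1-e^{-\tau})\tau^{-1-\frac{s}{2}}d\tau = \lambda^{\frac{s}{2}};
\]
applying this with $\lambda = \lambda_j$ term by term in (\ref{funct}) gives (\ref{rep}).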
We use precise upper and lower bounds for the kernel $H_D(t,x,y)$ of the heat operator,
\begin{equation}
(e^{t\Delta}f)(x) = \int_{\Omega}H_D(t,x,y)f(y)dy .
\label{heat}
\end{equation}
These are as follows (\cite{davies1},\cite{qszhang1},\cite{qszhang2}).
There exists a time $T>0$ depending on the domain $\Omega$ and constants
$c$, $C$, $k$, $K$, depending on $T$ and $\Omega $ such that
\begin{equation}
\begin{array}{l}
c\min\left (\frac{w_1(x)}{|x-y|}, 1\right)\min\left (\frac{w_1(y)}{|x-y|}, 1\right)t^{-\frac{d}{2}}e^{-\frac{|x-y|^2}{kt}}\le \\H_D(t,x,y)\le C
\min\left (\frac{w_1(x)}{|x-y|}, 1\right)\min\left (\frac{w_1(y)}{|x-y|}, 1\right)t^{-\frac{d}{2}}e^{-\frac{|x-y|^2}{Kt}}
\end{array}
\label{hb}
\end{equation}
holds for all $0\le t\le T$. Moreover
\begin{equation}
\frac{\left |\nabla_x H_D(t,x,y)\right|}{H_D(t,x,y)}\le
C\left\{
\begin{array}{l}
\frac{1}{d(x)},\quad\quad \quad\quad {\mbox{if}}\; \sqrt{t}\ge d(x),\\
\frac{1}{\sqrt{t}}\left (1 + \frac{|x-y|}{\sqrt{t}}\right),\;{\mbox{if}}\; \sqrt{t}\le d(x)
\end{array}
\right.
\label{grbx}
\end{equation}
holds for all $0\le t\le T$.
Note that, in view of
\begin{equation}
H_D(t,x,y) = \sum_{j=1}^{\infty}e^{-t\lambda_j}w_j(x)w_j(y) ,
\label{hphi}
\end{equation}
elliptic regularity estimates and Sobolev embedding which imply uniform absolute convergence of the series (if $\partial\Omega$ is smooth enough), we have that
\begin{equation}
\partial_1^{\beta}H_D(t,y,x) = \partial_2^{\beta}H_D(t,x,y)
= \sum_{j=1}^{\infty}e^{-t\lambda_j}\partial_y^{\beta}w_j(y)w_j(x)
\label{dh}
\end{equation}
for positive $t$, where we denoted by $\partial_1^{\beta}$ and $\partial_2^{\beta}$ derivatives with respect to the first spatial variables and the second spatial variables, respectively.
Therefore, the gradient bounds (\ref{grbx}) result in
\begin{equation}
\frac{\left |\nabla_y H_D(t,x,y)\right|}{H_D(t,x,y)}\le
C\left\{
\begin{array}{l}
\frac{1}{d(y)},\quad\quad \quad\quad\quad {\mbox{if}}\; \sqrt{t}\ge d(y),\\
\frac{1}{\sqrt{t}}\left (1 + \frac{|x-y|}{\sqrt{t}}\right),\;{\mbox{if}}\; \sqrt{t}\le d(y).
\end{array}
\right.
\label{grby}
\end{equation}
We also use a bound
\begin{equation}
\left|\nabla_x\nabla_x H_D(x,y,t)\right| \le Ct^{-1-\frac{d}{2}}e^{-\frac{|x-y|^2}{\tilde{K}t}}
\label{naxnaxb}
\end{equation}
valid for $t\le cd(x)^2$ and $0<t\le T$, which follows from the upper bounds (\ref{hb}), (\ref{grbx}).
Important additional bounds we need are
\begin{equation}
\int_{\Omega}\left |(\nabla_x +\nabla_y)H_D(x,y,t)\right|dy \le Ct^{-\frac{1}{2}}e^{-\frac{d(x)^2}{\tilde{K}t}}
\label{cancel1}
\end{equation}
and
\begin{equation}
\int_{\Omega}\left |\nabla_x(\nabla_x +\nabla_y)H_D(x,y,t)\right|dy \le Ct^{-1}e^{-\frac{d(x)^2}{\tilde{K}t}}
\label{cancel2}
\end{equation}
valid for $t\le cd(x)^2$ and $0<t\le T$. These bounds reflect the fact that translation invariance is remembered in the solution of the heat equation with Dirichlet boundary data for short time, away from the boundary. We sketch the proofs of (\ref{naxnaxb}), (\ref{cancel1}) and (\ref{cancel2}) in the Appendix 1.
\section{Nonlinear Lower Bounds}
We prove bounds in the spirit of (\cite{cv1}). The proofs below are based on the method of (\cite{ci}), but they concern different objects (finite differences, properly localized) or different assumptions ($C^{\alpha}$). Nonlinear lower bounds are an essential ingredient in proofs of global regularity for drift-diffusion equations with nonlocal dissipation.
We start with a couple of lemmas. In what follows we denote by $c$ and $C$ generic positive constants that depend on $\Omega$. When the logic demands it, we temporarily manipulate them and number them to show that the arguments are not circular. There is no attempt to optimize constants, and their numbering is local in the proof, meaning that, if for instance $C_2$ appears in two proofs, it need not be the same constant. However, when emphasis is necessary we single out constants, but then we avoid the letters $c,C$ with or without subscripts.
\begin{lemma}\label{Thet}
The solution of the heat equation with initial datum equal to 1 and zero boundary conditions,
\begin{equation}
\Theta(x,t) = \int_{\Omega}H_D(x,y,t)dy
\label{Theta}
\end{equation}
obeys $0\le \Theta(x,t)\le 1$, because of the maximum principle. There exist constants $T, c, C$ depending only on $\Omega$ such that the following inequalities hold:
\begin{equation}
\Theta(x,t)\ge c\min\left\{1, \left(\frac{d(x)}{\sqrt{t}}\right)^d\right\}
\label{thetalow}
\end{equation}
for all $0\le t\le T$, and
\begin{equation}
\Theta(x,t) \le C \frac{d(x)}{\sqrt{t}}
\label{thetaup}
\end{equation}
for all $0\le t\le T$. Let $0<s<2$. There exists a constant $c$ depending on $\Omega$ and $s$ such that
\begin{equation}
\int_0^{\infty}t^{-1-\frac{s}{2}}(1-\Theta(x,t))dt \ge c d(x)^{-s}
\label{lambdatheta}
\end{equation}
holds.
\end{lemma}
\begin{rem} $\l^{s} 1 $ is defined by duality by the left hand side of (\ref{lambdatheta}) and belongs to $H^{-1}(\Omega)$.
\end{rem}
\noindent{\bf Proof.} Indeed,
\[
\Theta (x,t) = \int_{\Omega}H_D(t,x,y)dy\ge \int_{|x-y|\le \frac{d(x)}{2}} H_D(t,x,y)dy
\]
because $H_D$ is positive. Using the lower bound in (\ref{phione}) we have that
$|x-y|\le \frac{d(x)}{2}$ implies
\[
\frac{w_1(x)}{|x-y|}\ge 2c_0,\quad \frac{w_1(y)}{|x-y|}\ge c_0,
\]
and then, using the lower bound in (\ref{hb}) we obtain
\[
H_D(t,x,y) \ge 2cc_0^2t^{-\frac{d}{2}}e^{-\frac{|x-y|^2}{kt}}.
\]
Integrating it follows that
\[
\Theta(x,t) \ge 2 cc_0^2\omega_{d-1}k^{\frac{d}{2}}\int_0^{\frac{d(x)}{2\sqrt{kt}}}\rho^{d-1}e^{-\rho^2}d\rho.
\]
If $\frac{d(x)}{2\sqrt{kt}}\ge 1$ then the integral is bounded below by
$\int_0^1\rho^{d-1}e^{-\rho^2}d\rho$. If $\frac{d(x)}{2\sqrt{kt}}\le 1$ then
$\rho\le 1$ implies that the exponential is bounded below by $e^{-1}$ and so
(\ref{thetalow}) holds.
Now (\ref{thetaup}) holds immediately from (\ref{phione}) and the upper bound
in (\ref{hb}) because the integral
\[
\int_{{\mathbb R}^d}|\xi|^{-1}e^{-\frac{|\xi|^2}{K}}d\xi <\infty
\]
if $d\ge 2$.
Regarding (\ref{lambdatheta}) we use
\[
\int_0^{\infty}t^{-1-\frac{s}{2}}(1-\Theta(x,t))dt\ge \int_\tau^{T}t^{-1-\frac{s}{2}}(1-\Theta(x,t))dt
\]
and choose appropriately $\tau$. In view of (\ref{thetaup}), if
\[
\frac{d(x)}{\sqrt{\tau}}\le \frac{1}{2C}
\]
then, when $\tau\le t\le T$ we have
\[
1-\Theta(x,t)\ge \frac{1}{2},
\]
and therefore
\[
\int_{\tau}^{T} t^{-1-\frac{s}{2}}\left(1-\Theta(x,t)\right)dt \ge\frac{1}{s} \tau^{-\frac{s}{2}}\left ( 1- \left(\frac{\tau}{T}\right)^{\frac{s}{2}}\right)
\]
holds. The choice
\[
\frac{d(x)}{\sqrt{\tau}} =\frac{1}{2C}
\]
implies (\ref{lambdatheta}) provided $2\tau \le T$ which is the same as $d(x)\le \frac{\sqrt{T}}{2C\sqrt{2}}$. On the other hand, $\Theta$ is exponentially small if $t$ is large enough, so the contribution to the integral in (\ref{lambdatheta}) is bounded below by a nonzero constant. This ends the proof of the lemma.
\begin{lemma}\label{grbnd}
Let $0\le \alpha<1$. There exists a constant $C$ depending on $\Omega$ and
$\alpha$ such that
\begin{equation}
\int_{\Omega}|\nabla_y H_D(t,x,y)| |x-y|^{\alpha}dy \le C t^{-\frac{1-\alpha}{2}}
\label{grup}
\end{equation}
holds for $0\le t\le T$.
\end{lemma}
Indeed, the upper bounds (\ref{hb}) and (\ref{grby}) yield
\[
\begin{array}{l}
\int_{d(y)\ge \sqrt{t}}|\nabla_y H_D(t,x,y)||x-y|^{\alpha}dy \\\le C_2t^{-\frac{1}{2}}
\int_{{\mathbb R}^d}\left (1 + \frac{|x-y|}{\sqrt{t}}\right)t^{-\frac{d}{2}}e^{-\frac{|x-y|^2}{Kt}}|x-y|^{\alpha}dy\\
= C_3t^{-\frac{1-\alpha}{2}}
\end{array}
\]
and, in view of the upper bound in (\ref{phione}), $\frac{1}{d(y)}w_1(y)\le C_0$ and the upper bound in (\ref{hb}), we have
\[
\begin{array}{l}
\int_{d(y)\le \sqrt{t}}|\nabla_y H_D(t,x,y)||x-y|^{\alpha}dy \\\le C_4\int_{{\mathbb R}^d}\frac{1}{|x-y|}t^{-\frac{d}{2}}e^{-\frac{|x-y|^2}{Kt}}|x-y|^{\alpha}dy = C_5t^{-\frac{1-\alpha}{2}}.
\end{array}
\]
This proves (\ref{grup}). We introduce now a good family of cutoff functions $\chi$ depending on a length scale $\ell$.
\begin{lemma}\label{goodcutoff} Let $\Omega$ be a bounded domain with $C^2$ boundary. For $\ell>0$ small enough (depending on $\Omega$) there exist cutoff functions $\chi$ with the properties: $0\le \chi\le 1$, $\chi(y)=0$ if $d(y)\le \frac{\ell}{4}$, $\chi(y)= 1$ for $d(y)\ge \frac{\ell}{2}$, $|\nabla^k\chi|\le C\ell^{-k}$ with $C$ independent of $\ell$ and
\begin{equation}
\int_{\Omega}\frac{(1-\chi(y))}{|x-y|^{d+j}}dy \le C\frac{1}{d(x)^{j}}
\label{chij}
\end{equation}
and
\begin{equation}
\int_{\Omega}|\nabla\chi(y)|\frac{1}{|x-y|^{d-\alpha}}\,dy\le Cd(x)^{-(1-\alpha)}
\label{nachij}
\end{equation}
hold for $j>-d$, $\alpha<d$ and $d(x)\ge \ell$. We will refer to such $\chi$ as a ``good cutoff''.
\end{lemma}
\noindent{\bf Proof.} There exists a length $\ell_0$ such that if $P$ is a point of the boundary $\partial\Omega$, and if $|P-y|\le 2\ell_0$, then $y\in\Omega$ if and only if (after a rotation and a translation) $y_d>F(y')$, where
$y'=(y_1, \dots, y_{d-1})$ and $F$ is a $C^2$ function with $F(0)=0$, $\nabla F(0)=0$, $|\nabla F|\le\frac{1}{10}$. We took thus without loss of generality coordinates such that $P= (0,0)$ and the normal to $\partial\Omega$ at $P$ is $(0,\dots, 0, 1)$. Now if $\ell<\ell_0$ and $d(x)\ge \ell$ and $|y-P|\le \frac{\ell_0}{2}$ satisfies $d(y)\le \frac{\ell}{2}$, then there exists a point $Q\in B(P,\ell_0)$ such that
\[
|x-y|^2\ge \frac{1}{16}(|y-Q|^2 + d(x)^2)\ge \frac{1}{16}(|y'-Q'|^2 + d(x)^2)
\]
Indeed, if $|x-P|\ge \ell_0$ we take $Q=P$ because then $|x-y| =|x-P+P-y|\ge \ell_0-\frac{\ell_0}{2}$, so $|x-y|\ge \frac{|y-Q|}{2}$. But also $|x-y|\ge\frac{d(x)}{2}$ because there exists a point $P_1 =(p, F(p))\in\partial\Omega$ such that $|y-P_1| = d(y)\le\frac{\ell}{2}$ while obviously $|x-P_1|\ge d(x)\ge\ell$.
If, on the other hand $|x-P|< \ell_0$, then $x$ is in the neighborhood of $P$ and we take $Q=x$. Because $y-P_1= (y'-p, y_d-F(p))$ we have
\[
d(y)\le |y_d-F(y')| \le \frac{11}{10}d(y)
\]
for $y\in B(P,\ell_0)$. We take a partition of unity of the form $1= \psi_0 +\sum_{j=1}^N\psi_j$
with $\psi_k\in C_0^{\infty}({\mathbb R}^d)$, subordinated to the cover of the boundary with neighborhoods as above, and with $\psi_0$ supported in
$d(x)\ge \frac{\ell_0}{4}$, identically 1 for $d(x)\ge \frac{\ell_0}{2}$, $\psi_j$ supported near the boundary $\partial\Omega$ in balls of size $2\ell_0$ and identically 1 on balls of radius $\ell_0$.
The cutoff will be taken of the form
$\chi= \psi_0 +\sum_{j=1}^N \chi_j(\frac{y_d-F(y')}{\ell})\psi_j(y)$, where of course the meaning of $y$ changes in each neighborhood. The smooth functions $\chi_j(z)$ are identically zero for $|z|\le \frac{11}{40}$ and identically 1 for $|z| \ge\frac{10}{22}$. The integrals in (\ref{chij}) and (\ref{nachij}) reduce to integrals of the type
\[
\begin{array}{l}
\int_{y_d>F(y'), |y'|\le\ell_0}\frac{\left(1-\chi_1\left(\frac{y_d-F(y')}{\ell}\right)\right)}{|x-y|^{d+j}}dy\le C\left(\int_{0}^{\infty}\left(1-\chi_1\left(\frac{u}{\ell}\right)\right)du\right)\left(\int_{{\mathbb R}^{d-1}}\frac{dy'}{\left({|y'-Q'|^2 +d(x)^2}\right)^{\frac{d+j}{2}}}\right)\\
\le
C\ell d(x)^{-1-j}\le Cd(x)^{-j}
\end{array}
\]
and
\[
\begin{array}{l}
\int_{y_d>F(y'), |y'|\le\ell_0}\frac{\left|\nabla_y\chi_1\left(\frac{y_d-F(y')}{\ell}\right)\right|}{|x-y|^{d-\alpha}}dy\le C\left(\int_{-{\infty}}^{\infty}|\nabla\chi_1(z)|dz\right)\left(\int_{{\mathbb R}^{d-1}}\frac{dy'}{\left({|y'-Q'|^2 +d(x)^2}\right)^{\frac{d-\alpha}{2}}}\right)\\
\le
C d(x)^{-(1-\alpha)}.
\end{array}
\]
This completes the proof.
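One concrete choice of the profile $\chi_1$ (a standard smooth step; any function with the stated plateaus works) is
\[
\chi_1(z) = \phi\left(\frac{|z|-\frac{11}{40}}{\frac{10}{22}-\frac{11}{40}}\right), \quad\quad \phi(z) = \frac{\eta(z)}{\eta(z)+\eta(1-z)}, \quad\quad \eta(z) = \left\{\begin{array}{l} e^{-\frac{1}{z}},\quad z>0,\\ 0, \quad\quad\;\; z\le 0,\end{array}\right.
\]
which vanishes for $|z|\le\frac{11}{40}$, equals $1$ for $|z|\ge\frac{10}{22}$, and has all derivatives bounded by universal constants; composing with $\frac{y_d-F(y')}{\ell}$ then gives the stated bounds $|\nabla^k\chi|\le C\ell^{-k}$.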
We recall from (\cite{ci}) that the C\'{o}rdoba-C\'{o}rdoba inequality (\cite{cc}) holds in bounded domains. In fact, more is true: there is a lower bound that provides a strong boundary repulsive term:
\begin{prop}{\label{cordoba}} Let $\Omega$ be a bounded domain with smooth boundary. Let $0\le s<2$. There exists a constant $c>0$ depending only on the domain $\Omega$ and on $s$, such that, for any
$\Phi$, a $C^2$ convex function satisfying $\Phi(0)= 0$, and any $f\in C_0^{\infty}(\Omega)$, the inequality
\begin{equation}
\Phi'(f)\l^s f - \l^s(\Phi(f))\ge \frac{c}{d(x)^s}\left(f\Phi'(f)-\Phi(f)\right)
\label{cor}
\end{equation}
holds pointwise in $\Omega$.
\end{prop}
The proof follows in a straightforward manner from the proof of (\cite{ci}) using convexity, approximation, and the lower bound (\ref{lambdatheta}).
We prove below two nonlinear lower bounds for the case $\Phi(f)= \frac{f^2}{2}$, one when $f$ is a localized finite difference, and one when $f$ is a localized first derivative. The proof of Proposition \ref{cordoba} can be left as an exercise, following the same pattern as below.
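For orientation, here is the skeleton of that argument. By (\ref{rep}) and (\ref{heat}),
\[
\begin{array}{l}
\Phi'(f(x))\l^{s}f(x) - \l^{s}\Phi(f)(x) = c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}\Big\{\left[f\Phi'(f)-\Phi(f)\right](x)\left(1-\Theta(x,t)\right)\\
\quad\quad +\int_{\Omega}H_D(t,x,y)\left[\Phi(f(y))-\Phi(f(x))-\Phi'(f(x))(f(y)-f(x))\right]dy\Big\}dt,
\end{array}
\]
and both brackets are nonnegative: the second by convexity of $\Phi$, the first because convexity and $\Phi(0)=0$ give $f\Phi'(f)-\Phi(f)\ge 0$. Discarding the second and using (\ref{lambdatheta}) yields (\ref{cor}).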
\begin{thm}\label{nlmax}
Let $f\in L^{\infty}(\Omega)$ be smooth enough ($C^2$, e.g.) and vanish at the boundary, $f\in{\mathcal{D}}(\l^{s})$ with $0\le s<2$.
Then
\begin{equation}
\begin{array}{l}
D(f) = f\l^{s} f -\frac{1}{2}\l^{s} f^2\\
= \gamma_0\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(x,y,t)(f(x)-f(y))^2dy + \gamma_0 f^2(x)\int_0^{\infty}t^{-1-\frac{s}{2}}\left[1-e^{t\Delta}1\right](x)dt\\
= \gamma_0\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(x,y,t)(f(x)-f(y))^2dy + \; f^2(x)\frac{1}{2}\l^{s} 1.
\end{array}
\label{d}
\end{equation}
holds for all $x\in\Omega$. Here $\gamma_0 =\frac{c_{s}}{2}$ with $c_s$ of (\ref{rep}). Let $\ell>0$ be a small number and
let $\chi\in C_0^{\infty}(\Omega)$, $0\le\chi\le 1$ be a good cutoff function, with $\chi(y)=1$ for $d(y)\ge \frac{\ell}{2}$, $\chi(y) =0$ for $d(y)\le\frac{\ell}{4}$ and with
$|\nabla\chi(y)|\le \frac{C}{\ell}$.
There exist constants $\gamma_1>0$ and $M>0$ depending on $\Omega$ such that,
if $q(x)$ is a smooth function in $L^{\infty}(\Omega)$ and
\[
f(x)=\chi(x)(\delta_h q(x)) =\chi(x)(q(x+h)-q(x))
\]
then
\begin{equation}
D(f) = (f\l^{s} f)(x) -\frac{1}{2}(\l^{s} f^2)(x) \ge \gamma_1 |h|^{-s}\frac{|f_d(x)|^{2+s}}{\|q\|_{L^{\infty}}^{s}} + \gamma_1\frac{f^2(x)}{d(x)^{s}}
\label{nlb}
\end{equation}
holds pointwise in $\Omega$ when $|h|\le\frac{\ell}{16}$, and $d(x)\ge \ell$ with
\begin{equation}
|f_d(x)| = \left\{
\begin{array}{l}
|f(x)|,\quad\quad {\mbox{if}}\;\; |f(x)| \ge M\|q\|_{L^{\infty}(\Omega)}\frac{|h|}{d(x)},\\
0,\quad\quad\quad\quad {\mbox{if}}\;\; |f(x)| \le M\|q\|_{L^{\infty}(\Omega)}
\frac{|h|}{d(x)}.
\end{array}
\right.
\label{fd}
\end{equation}
\end{thm}
\noindent{\bf{Proof.}} We start by proving (\ref{d}):
\[
\begin{array}{l}
f(x)\l^{s}f(x) - \frac{1}{2}\l^{s} f^2(x) \\
= c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\Omega}\left\{\left[\frac{1}{|\Omega|} f(x)^2 - f(x)H_D(t,x,y)f(y)\right]- \frac{1}{2|\Omega|}f^{2}(x) + \frac{1}{2}H_D(t,x,y)f^2(y)\right\}dy\\
=c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\Omega}\left\{\frac{1}{2}\left[H_D(t,x,y)(f(x)-f(y))^2\right] + \frac{1}{2}f^2(x)\left[\frac{1}{|\Omega|} -H_D(t,x,y)\right]\right\}dy \\
= c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}dt\left\{\int_{\Omega}\frac{1}{2}\left[H_D(t,x,y)(f(x)-f(y))^2\right]dy + \frac{1}{2}f^2(x)\left[1-e^{t\Delta}1\right](x)\right\} \\
= c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\Omega}\frac{1}{2}\left[H_D(t,x,y)(f(x)-f(y))^2\right]dy + \frac{1}{2}f^2(x)\l^{s}1.
\end{array}
\]
It follows that
\begin{equation}
\begin{array}{l}
\left(f\l^{s}f - \frac{1}{2}\l^{s} f^2\right)(x) \\\ge \frac{1}{2}c_{s}\int_0^{\infty}\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)(f(x)-f(y))^2dy + \frac{1}{2}f^2(x)\l^{s}1
\end{array}
\label{lowone}
\end{equation}
where $\tau>0$ is arbitrary and $0\le \psi(u)\le 1$ is a smooth function of one variable, vanishing identically for $0\le u\le 1$ and equal identically to $1$ for $u\ge 2$.
We restrict to $t\le T$,
\begin{equation}
\begin{array}{l}
\left(f\l^{s}f - \frac{1}{2}\l^{s} f^2\right)(x)\\ \ge
\frac{1}{2}c_{s}\int_0^{T}\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)\left(f(x)-f(y)\right)^2dy +\frac{1}{2}f^2(x)\l^{s}1
\end{array}
\label{lowtwo}
\end{equation}
and open brackets in (\ref{lowtwo}):
\begin{equation}
\begin{array}{l}
\left(f\l^{s}f - \frac{1}{2}\l^{s} f^2\right)(x)
\ge \frac{1}{2}f^2(x)c_{s}\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)dy\\
- f(x)c_{s}\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)f(y)dy + \frac{1}{2}f^2(x)\l^{s}1\\
\ge |f(x)|\left [ \frac{1}{2}|f(x)| I(x) - J(x)\right] +\frac{1}{2}f^2(x)\l^{s}1
\end{array}
\label{lowthree}
\end{equation}
with
\begin{equation}
I(x) = c_{s}\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)dy,
\label{ix}
\end{equation}
and
\begin{equation}
\begin{array}{l}
J(x) = c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)f(y)dy\right |\\
= c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)\chi(y)\delta_hq(y)dy\right |.
\end{array}
\label{jx}
\end{equation}
We proceed with a lower bound on $I$ and an upper bound on $J$. For the lower bound on $I$ we note that, in view of (\ref{thetalow}) and the fact that
\[
I(x) = c_{s}\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}\Theta(x,t)dt
\]
we have
\[
\begin{array}{l}
I(x)\ge c_1\int_0^{\min(T, d^2(x))}\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\\
= c_1\tau^{-\frac{s}{2}}\int_1^{\tau^{-1}(\min(T, d^2(x)))}\psi(u)u^{-1-\frac{s}{2}}du.
\end{array}
\]
Therefore we have that
\begin{equation}
I(x)\ge c_2 \tau^{-\frac{s}{2}}
\label{ilow}
\end{equation}
with $c_2 = c_1\int_1^2\psi(u)u^{-1-\frac{s}{2}}du$, a positive constant depending only on $\Omega$ and $s$, provided $\tau$ is small enough,
\begin{equation}
\tau \le \frac{1}{2}\min(T, d^2(x)).
\label{taucond}
\end{equation}
In order to bound $J$ from above we use (\ref{grup}) with $\alpha=0$. Now
\[
\begin{array}{l}
J \le c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}\delta_{-h}H_D(t,x,y)\chi(y)q(y)dy\right | + \\
c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y-h)(\delta_{-h}\chi(y))q(y)dy\right |
\end{array}
\]
We have that
\[
J_2 = c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y-h)(\delta_{-h}\chi(y))q(y)dy\right |\le C_{6}\frac{|h|}{d(x)}\|q\|_{L^{\infty}}\tau^{-\frac{s}{2}}.
\]
Indeed,
\[
t^{-{\frac{d}{2}}}e^{-\frac{|x-y|^2}{Kt}}\le C_K |x-y|^{-d}
\]
so the bound follows from (\ref{hb}) and (\ref{nachij}). On the other hand,
\[
\begin{array}{l}
J_1= c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}\delta_{-h}H_D(t,x,y)\chi (y)q(y)dy\right | \\
\le
\|q\|_{L^{\infty}(\Omega)}|h|\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}|\nabla_y H_D(t,x,y)|dy
\end{array}
\]
and therefore, in view of (\ref{grup})
\[
J_1\le C_1 |h|\|q\|_{L^{\infty}(\Omega)}\int_0^T\psi\left(\frac{t}{\tau}\right )t^{-\frac{3}{2}-\frac{s}{2}}dt
\]
and therefore
\begin{equation}
J_1 \le C_7|h|\|q\|_{L^{\infty}(\Omega)}\tau^{-\frac{1}{2}-\frac{s}{2}}
\label{jupone}
\end{equation}
with
\[
C_7 = C_1\int_1^{\infty}\psi(u)u^{-\frac{3}{2}-\frac{s}{2}}du
\]
a constant depending only on $\Omega$ and $s$. In conclusion
\begin{equation}
|J| \le C_8\tau^{-\frac{s}{2}}|h| (\tau^{-\frac{1}{2}} + d(x)^{-1})\|q\|_{L^{\infty}}.
\label{jup}
\end{equation}
Now, because of the lower bound (\ref{lowthree}),
if we can choose $\tau$ so that
\[
J(x) \le \frac{1}{4} |f(x)|I(x)
\]
then it follows that
\begin{equation}
\left[f\l^{s}f - \frac{1}{2}\l^{s} f^2\right](x) \ge \frac{1}{4}f^2(x)I(x) + \frac{1}{2}f^2(x)\l^{s}1.
\label{lowfour}
\end{equation}
Because of the bounds (\ref{ilow}), (\ref{jup}), if
\[
|f(x)|\ge \frac{8C_8}{c_2}\frac{|h|}{d(x)}\|q\|_{L^{\infty}},
\]
then a choice
\begin{equation}
\tau(x)^{-\frac{1}{2}} = C_9{\|q\|_{L^{\infty}}^{-1}}|f(x)||h|^{-1}
\label{tauchoice}
\end{equation}
with $C_9 = c_2 (8C_8)^{-1}$ achieves the desired bound, as we now verify.
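Indeed, with this choice of $\tau$ (which, enlarging $M$ in (\ref{fd}) if necessary, is compatible with (\ref{taucond})), the bound (\ref{jup}) gives
\[
|J(x)|\le C_8\tau^{-\frac{s}{2}}|h|\left(\tau^{-\frac{1}{2}} + \frac{1}{d(x)}\right)\|q\|_{L^{\infty}} \le \left(\frac{c_2}{8}+\frac{c_2}{8}\right)|f(x)|\,\tau^{-\frac{s}{2}} \le \frac{1}{4}|f(x)|I(x),
\]
where the first term was computed using $C_8C_9 = \frac{c_2}{8}$, the second using the assumed lower bound on $|f(x)|$, and the last step used (\ref{ilow}). Consequently (\ref{lowfour}) holds, and since $\tau^{-\frac{s}{2}} = C_9^{s}\|q\|_{L^{\infty}}^{-s}|f(x)|^{s}|h|^{-s}$, the inequality (\ref{lowfour}) together with (\ref{ilow}) and (\ref{lambdatheta}) yields (\ref{nlb}). This concludes the proof.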
We now provide a lower bound for $D(f)$ in a different situation.
\begin{thm}\label{nlmaxd}
Let $\ell>0$ be a small number and let $\chi\in C_0^{\infty}(\Omega)$, $0\le\chi\le 1$ be a good cutoff function, with $\chi(y)=1$ for $d(y)\ge \frac{\ell}{2}$, $\chi(y) =0$ for $d(y)\le\frac{\ell}{4}$ and with $|\nabla\chi(y)|\le \frac{C}{\ell}$.
There exist constants $\gamma_2>0$ and $M>0$ depending on $\Omega$ such that, if $q(x)$ is a smooth function in $C^{\alpha}(\Omega)$ with $0<\alpha<1$ and
\[
f(x)=\chi(x)\nabla q(x),
\]
then
\begin{equation}
D(f) = (f\l^{s} f)(x) -\frac{1}{2}(\l^{s} f^2)(x) \ge \gamma_2 \frac{|f_d(x)|^{2+\frac{s}{1-\alpha}}}{\|q\|_{C^{\alpha}(\Omega)}^{\frac{s}{1-\alpha}}}(d(x))^{\frac{s\alpha}{1-\alpha}} + \gamma_2\frac{f^2(x)}{d(x)^{s}}
\label{nlbd}
\end{equation}
holds pointwise in $\Omega$ when $d(x)\ge \ell$, with
\begin{equation}
|f_d(x)| = \left\{
\begin{array}{l}
|f(x)|,\quad\quad {\mbox{if}}\;\; |f(x)| \ge M\|q\|_{L^{\infty}(\Omega)}(d(x))^{-1},\\
0,\quad\quad\quad\quad {\mbox{if}}\;\; |f(x)| \le M\|q\|_{L^{\infty}(\Omega)}(d(x))^{-1}.
\end{array}
\right.
\label{fdd}
\end{equation}
\end{thm}
\noindent{\bf Proof.} We follow exactly the proof of Theorem \ref{nlmax} up to, and including the definition of $I(x)$ given in (\ref{ix}). In particular, the lower bound (\ref{ilow}) is still valid, provided $\tau$ is small enough (\ref{taucond}). The term $J$ starts out the same, but is treated slightly differently,
\begin{equation}
\begin{array}{l}
J(x) = c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)f(y)dy\right |\\
= c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)\chi(y)\nabla_y (q(y)-q(x))dy\right |.
\end{array}
\label{jxd}
\end{equation}
In order to bound $J$ we use (\ref{nachij}) and (\ref{grup}).
\[
\begin{array}{l}
|J(x)| \le c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}\nabla_yH_D(t,x,y)\chi(y)(q(y)-q(x))dy\right | + \\
c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)(\nabla\chi(y))(q(y)-q(x))dy\right |\\
= J_1(x) + J_2(x)
\end{array}
\]
We have from (\ref{hb}) and (\ref{nachij}), as before,
\[
J_2(x) = c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}H_D(t,x,y)(\nabla\chi(y))(q(y)-q(x))dy\right |\le Cd(x)^{-1}\|q\|_{L^{\infty}}\tau^{-\frac{s}{2}}.
\]
On the other hand,
\[
\begin{array}{l}
J_1(x) = c_{s}\left |\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega}\nabla_yH_D(t,x,y)\chi (y)(q(y)-q(x))dy\right | \\
\le
c_s(d(x))^{-\alpha}\|q\|_{C^{\alpha}(\Omega)}\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega\cap |x-y|\le d(x)}|\nabla_y H_D(t,x,y)||x-y|^{\alpha}dy \\
+ c_s\|q\|_{L^{\infty}}\int_0^T\psi\left(\frac{t}{\tau}\right)t^{-1-\frac{s}{2}}dt\int_{\Omega\cap |x-y|\ge d(x) }|\nabla_y H_D(t,x,y)|dy \\
= J_{11}(x) + J_{12}(x).
\end{array}
\]
In view of (\ref{grup})
\[
J_{11}(x)\le C_1 d(x)^{-\alpha} \|q\|_{C^{\alpha}(\Omega)}\int_0^T\psi\left(\frac{t}{\tau}\right )t^{-\frac{3-\alpha}{2}-\frac{s}{2}}dt
\]
and so
\begin{equation}
J_{11}(x) \le C_2(d(x))^{-\alpha}\|q\|_{C^{\alpha}(\Omega)}\tau^{-\frac{1-\alpha}{2}-\frac{s}{2}}
\label{juponed}
\end{equation}
with
\[
C_2 = C_1\int_1^{\infty}\psi(z)z^{-\frac{3-\alpha}{2}-\frac{s}{2}}dz
\]
a constant depending only on $\Omega$ and $s$. Regarding $J_{12}(x)$ we have in view of (\ref{grby})
\[
J_{12}(x) \le C\|q\|_{L^{\infty}(\Omega)}\int_0^T\psi\left(\frac{t}{\tau}\right )t^{-1-\frac{s}{2}}\left (\frac{1}{\sqrt{t}} + \frac{1}{d(x)}\right )e^{-\frac{d(x)^2}{2Kt}}dt\le C_3\tau^{-\frac{s}{2}}d(x)^{-1}\|q\|_{L^{\infty}(\Omega)}
\]
because, in view of (\ref{phione})
\[
\frac{w_1(y)}{|x-y|}\le C_0\frac{d(y)}{|x-y|} \le C_0\frac{d(y)}{d(x)}
\]
on the domain of integration.
In conclusion
\begin{equation}
|J(x)| \le C_3\tau^{-\frac{s}{2}}\left[\tau^{-\frac{1-\alpha}{2}}(d(x))^{-{\alpha}}\|q\|_{C^{\alpha}(\Omega)} + d(x)^{-1}\|q\|_{L^{\infty}(\Omega)}\right].
\label{jupd}
\end{equation}
The rest is the same as in the proof of Theorem \ref{nlmax}:
If $|f(x)|\ge Md(x)^{-1}\|q\|_{L^{\infty}(\Omega)}$ for suitable $M$ ($M= 8C_3c_2^{-1}$), then we choose $\tau$ such that
\[
\frac{|f(x)|}{\|q\|_{C^{\alpha}(\Omega)}} = M\tau^{-\frac{1-\alpha}{2}}(d(x))^{-\alpha},
\]
and this yields $|f(x)| I\ge 4 |J(x)|$, and consequently, in view of
(\ref{lowfour}) which is then valid, the result (\ref{nlbd}) is proved.
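Spelling out the exponents: the choice above reads $\tau^{-\frac{1}{2}} = \left(M^{-1}\|q\|_{C^{\alpha}(\Omega)}^{-1}|f(x)|(d(x))^{\alpha}\right)^{\frac{1}{1-\alpha}}$, so that, by (\ref{lowfour}) and (\ref{ilow}),
\[
\frac{1}{4}f^2(x)I(x) \ge \frac{c_2}{4}f^2(x)\tau^{-\frac{s}{2}} = \frac{c_2}{4M^{\frac{s}{1-\alpha}}}\,\frac{|f(x)|^{2+\frac{s}{1-\alpha}}}{\|q\|_{C^{\alpha}(\Omega)}^{\frac{s}{1-\alpha}}}\,(d(x))^{\frac{s\alpha}{1-\alpha}},
\]
which is the first term in (\ref{nlbd}); the second term comes, as before, from $\frac{1}{2}f^2(x)\l^{s}1$ and (\ref{lambdatheta}).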
We specialize from now on to $s=1$.
\section{Bounds for Riesz transforms}
We consider $u$ given in (\ref{u}),
\[
u = \nabla^{\perp}\l^{-1}\theta.
\]
We are interested in estimates of $u$ in terms of $\theta$, and in particular estimates of finite differences and the gradient.
We fix a length scale $\ell$ and take a good cutoff function $\chi\in C_0^{\infty}(\Omega)$ which satisfies $\chi(x) =1 $ if $d(x)\ge \frac{\ell}{2}$, $\chi(x) = 0$ if $d(x)\le \frac{\ell}{4}$, $|\nabla\chi(x)|\le C\ell^{-1}$, (\ref{chij}) and (\ref{nachij}).
We take $|h|\le \frac{\ell}{16}$. In view of the representation
\begin{equation}
\l^{-1} = c\int_0^{\infty}t^{-\frac{1}{2}}e^{t\Delta}dt
\label{lambdaminusone}
\end{equation}
we have on the support of $\chi$
\begin{equation}
\delta_h u(x) = c\int_0^{\infty}t^{-\frac{1}{2}}dt\int_{\Omega}\delta_h^x\nabla_x^{\perp}H_D(x,y,t)\theta(y)dy.
\label{duh}
\end{equation}
We split
\begin{equation}
\delta_h u = \delta_h u^{in} + \delta_h u^{out}
\label{split}
\end{equation}
with
\begin{equation}
\delta_h u(x)^{in} = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\delta_h^x\nabla_x^{\perp}H_D(x,y,t)\theta(y)dy
\label{duhin}
\end{equation}
and $\rho=\rho(x,h)>0$ a length scale to be chosen later; it will be smaller than a constant times the distance from $x$ to the boundary of $\Omega$:
\begin{equation}
\rho \le c d(x).
\label{rholess}
\end{equation}
We represent
\begin{equation}
\delta_h u^{in}(x) = u_h(x) + v_h(x)
\label{deltahu}
\end{equation}
where
\begin{equation}
u_h(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)(\chi(y)\delta_h\theta(y)-\chi(x)\delta_h\theta(x))dy
\label{uh}
\end{equation}
and where
\begin{equation}
v_h(x) = e_1(x) + e_2(x) + e_3(x) + \chi(x)\delta_h\theta(x)e_4(x)
\label{vh}
\end{equation}
with
\begin{equation}
e_1(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}(H_D(x+h,y,t)-H_D(x,y,t))(1-\chi(y))\theta(y)dy,
\label{e1}
\end{equation}
\begin{equation}
e_2(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}(H_D(x+h,y,t)-H_D(x,y-h,t))\chi(y)\theta(y)dy,
\label{e2}
\end{equation}
\begin{equation}
e_3(x) =c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)(\chi(y+h)-\chi(y))\theta(y+h)dy,
\label{e3}
\end{equation}
and
\begin{equation}
e_4(x) =c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)dy.
\label{e4}
\end{equation}
We used here the fact that $(\chi\theta)(\cdot)$ and $(\chi\theta)(\cdot + h)$
are compactly supported in $\Omega$ and hence
\[
\int_{\Omega}\nabla_x^{\perp}H_D(x,y-h,t)\chi(y)\theta(y)dy = \int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)\chi(y+h)\theta(y+h)dy.
\]
The following elementary lemma will be used in several instances:
\begin{lemma}\label{intpk}
Let $\rho>0$, $p>0$. Then
\begin{equation}
\int_0^{\rho^2} t^{-1-\frac{m}{2}}\left(\frac{p}{\sqrt{t}}\right)^je^{-\frac{p^2}{Kt}}dt \le C_{K,m,j}p^{-m}
\label{pbeta}
\end{equation}
if $m\ge 0$, $j\ge 0$, $m+j>0$, and
\begin{equation}
\int_0^{\rho^2} t^{-1}e^{-\frac{p^2}{Kt}}dt \le C_K\left(1+ 2\log_{+}\left(\frac{\sqrt{K}\rho}{p}\right)\right)
\label{pzero}
\end{equation}
if $m=0$ and $j=0$, with constants $C_{K,m,j}$ and $C_K$ independent of $\rho$ and $p$. Note that when $m+j>0$, $\rho=\infty$ is allowed.
\end{lemma}
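Both bounds follow from the substitution $u = \frac{p^2}{Kt}$, which gives
\[
\int_0^{\rho^2} t^{-1-\frac{m}{2}}\left(\frac{p}{\sqrt{t}}\right)^je^{-\frac{p^2}{Kt}}dt = K^{\frac{m+j}{2}}p^{-m}\int_{\frac{p^2}{K\rho^2}}^{\infty}u^{\frac{m+j}{2}-1}e^{-u}du:
\]
for $m+j>0$ the last integral is at most $\Gamma\left(\frac{m+j}{2}\right)$, proving (\ref{pbeta}), while for $m=j=0$ it is bounded by $1+\log_{+}\left(\frac{K\rho^2}{p^2}\right)$, proving (\ref{pzero}).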
We start estimating the terms in (\ref{vh}).
For $e_1$ we use the inequality (\ref{naxnaxb}), and it then follows from Lemma \ref{intpk} with $m= {d} +1$ that
\[
|e_1(x)|\le C|h|\|\theta\|_{L^{\infty}}\int_0^1d\lambda \int_{\Omega}\frac{1}{|x +\lambda h -y |^{d+1}}(1-\chi(y))dy
\]
and therefore we have from (\ref{chij}) that
\begin{equation}
|e_1(x)| \le C \|\theta\|_{L^{\infty}} \frac{|h|}{d(x)}
\label{e1b}
\end{equation}
holds for $d(x)\ge \ell$. Concerning $e_3$ we use Lemma \ref{intpk} with
$m= d$ and $j=0,1$ in conjunction with (\ref{grbx}), and we obtain
\[
|e_3(x)|\le C{|h|}\|\theta\|_{L^{\infty}}\int_{\Omega}|\nabla\chi(y)|\frac{1}{|x-y|^d}dy
\]
and therefore we obtain from (\ref{nachij})
\begin{equation}
|e_3(x)| \le C \|\theta\|_{L^{\infty}}\frac{|h|}{d(x)}
\label{e3b}
\end{equation}
holds for $d(x)\ge \ell$. Regarding $e_4$ we can split it into
\[
e_4 (x) = e_5(x) + e_6(x)
\]
with
\[
e_{5}(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)\chi(y)dy
\]
and
\[
e_{6}(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)(1-\chi(y))dy.
\]
Now $e_6$ is bounded using Lemma \ref{intpk} with $m= d$ and $j=0,1$ in conjunction with (\ref{grbx}) and (\ref{chij}), and we obtain
\begin{equation}
|e_6(x)|\le C\int_{\Omega}\frac{(1-\chi(y))}{|x-y|^d}dy \le C
\label{e6b}
\end{equation}
for $d(x)\ge \ell$, with a constant independent of $\ell$. For $e_5$ we use the fact that $\chi $ is a fixed smooth function which vanishes at the boundary.
In order to bound the terms $e_2$ and $e_5$ we need to use additional information, namely the inequalities (\ref{cancel1}) and (\ref{cancel2}). For $e_5$ we write
\[
\begin{array}{l}
e_5(x) = \int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\left(\nabla_x^{\perp}H_D(x,y,t) + \nabla_y^{\perp} H_D(x,y,t)\right)\chi(y)dy \\
+ \int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}H_D(x,y,t)\nabla_y^{\perp}\chi(y)dy,
\end{array}
\]
and using (\ref{cancel1}) and Lemma \ref{intpk} with $m=0$, $j=0$ and (\ref{nachij}) we obtain the bound
\[
|e_5(x)|\le C\left(1+ \log_{+}\left(\frac{\rho}{d(x)}\right)\right) + C\rho\int_{\Omega}\frac{|\nabla \chi(y)|}{|x-y|^d}dy
\]
and therefore, in view of (\ref{nachij}) and $\rho\le d(x)$ we have
\begin{equation}
|e_5(x)|\le C
\label{e5b}
\end{equation}
for $d(x)\ge \ell$, with $C$ depending on $\Omega$ but not on $\ell$.
Consequently, we have
\begin{equation}
|e_4(x)|\le C
\label{e4b}
\end{equation}
for $d(x)\ge \ell$, with a constant $C$ depending on $\Omega$ only.
In order to estimate $e_2$ we write
\begin{equation}
H_D(x+h,y,t)- H_D(x,y-h,t) = h\cdot\int_0^1 (\nabla_x+\nabla_y)H_D(x+\lambda h, y + (\lambda-1)h,t)d\lambda
\label{transh}
\end{equation}
and use (\ref{cancel2}) and Lemma \ref{intpk} with $m=1$, $j=0$ to obtain
\[
\begin{array}{l}
|e_2(x)|\le |h|\|\theta\|_{L^{\infty}}\int_0^1d\lambda \int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}|\nabla_x^{\perp}(\nabla_x +\nabla_y)H_D(x+\lambda h, y+(\lambda-1)h)||\chi(y)|dy\\
\le C|h|\|\theta\|_{L^{\infty}}\int_0^1d\lambda\int_0^{\rho^2}t^{-\frac{3}{2}}e^{-\frac{d(x)^2}{4Kt}}dt
\end{array}
\]
and thus
\begin{equation}
|e_2(x)|\le C\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)}
\label{e2b}
\end{equation}
holds for $d(x)\ge \ell$.
Summarizing, we have that
\begin{equation}
|v_h(x)| \le C\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)} + C|\delta_h\theta(x)|
\label{vhbo}
\end{equation}
for $d(x)\ge \ell$. We now estimate $u_h$ using (\ref{grbx}) and a Schwarz inequality:
\[
\begin{array}{l}
|u_h(x)| \le c\int_0^{\rho^2}t^{-1}dt\int_{\Omega}\left(1+\frac{|x-y|}{\sqrt{t}}\right)H_D(x,y,t)\left|\chi(y)(\delta_h\theta)(y)-\chi(x)(\delta_h\theta)(x)\right|dy\\
\le C\sqrt{\rho}\left\{\int_0^{\rho^2}t^{-\frac{3}{2}}dt\int_{\Omega}H_D(x,y,t)\left(\chi(y)(\delta_h\theta)(y)-\chi(x)(\delta_h\theta)(x)\right)^2dy\right\}^{\frac{1}{2}}.
\end{array}
\]
We have therefore
\begin{equation}
|u_h(x)|\le C\sqrt{\rho D(f)(x)}.
\label{uhbo}
\end{equation}
where $f=\chi\delta_h\theta$ and $D(f)$ is given in Theorem \ref{nlmax}.
Regarding $\delta_h u^{out}$ we have
\begin{equation}
|\delta_h u^{out}(x)| \le C\|\theta\|_{L^{\infty}}\frac{|h|}{\rho}
\label{dhoutb}
\end{equation}
in view of (\ref{naxnaxb}). Putting together the estimates (\ref{vhbo}), (\ref{uhbo}) and (\ref{dhoutb}) we have
\begin{prop} Let $\chi$ be a good cutoff, and let $u$ be defined by
(\ref{u}). Then
\begin{equation}
|\delta_h u(x)| \le C\left(\sqrt{\rho D(f)(x)} + \|\theta\|_{L^{\infty}}\left(\frac{|h|}{d(x)}+ \frac{|h|}{\rho}\right) + |\delta_h\theta(x)|\right)
\label{dhub}
\end{equation}
holds for $d(x)\ge\ell$, $\rho\le cd(x)$, $f=\chi\delta_h \theta$ and with $C$ a constant depending on $\Omega$.
\end{prop}
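We note, as a guide to how (\ref{dhub}) is used, that $\rho$ is at our disposal subject to $\rho\le cd(x)$; balancing the first two terms, i.e., taking $\rho\sim \left(\|\theta\|_{L^{\infty}}|h|\right)^{\frac{2}{3}}\left(D(f)(x)\right)^{-\frac{1}{3}}$ when this value is admissible, turns (\ref{dhub}) into
\[
|\delta_h u(x)| \le C\left(\left(D(f)(x)\right)^{\frac{1}{3}}\left(\|\theta\|_{L^{\infty}}|h|\right)^{\frac{1}{3}} + \|\theta\|_{L^{\infty}}\frac{|h|}{d(x)} + |\delta_h\theta(x)|\right),
\]
a form in which it can be compared directly with the nonlinear lower bound (\ref{nlb}).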
Now we will obtain similar estimates for $\nabla u$. We start with the representation
\begin{equation}
\nabla u(x) = \nabla u^{in}(x) + \nabla u^{out}(x)
\label{nauinout}
\end{equation}
where
\begin{equation}
\nabla u^{in}(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x\nabla_x^{\perp}H_D(x,y,t)\theta(y)dy
\label{nauin}
\end{equation}
and $\rho= \rho(x) \le c d(x)$.
In view of (\ref{naxnaxb}) we have
\begin{equation}
|\nabla u^{out}(x)|\le \frac{C}{\rho}\|\theta\|_{L^{\infty}(\Omega)}
\label{nauoutb}
\end{equation}
We split now
\begin{equation}
\nabla u^{in}(x) = g(x) + g_1(x) + g_2(x) + g_3(x) + g_4(x)f(x)
\label{nasplit}
\end{equation}
where
\begin{equation}
f(x) = \chi(x)\nabla\theta(x)
\label{fnatheta}
\end{equation}
and with
\begin{equation}
g(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)(f(y)-f(x))dy,
\label{g}
\end{equation}
and
\begin{equation}
g_1(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x\nabla_x^{\perp}H_D(x,y,t)(1-\chi(y))\theta(y)dy,
\label{g1}
\end{equation}
\begin{equation}
g_2(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}(\nabla_x+ \nabla_y)H_D(x,y,t)\chi(y)\theta(y)dy,
\label{g2}
\end{equation}
\begin{equation}
g_3(x) =c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)(\nabla_y\chi(y))\theta(y)dy,
\label{g3}
\end{equation}
and
\begin{equation}
g_4(x) =c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\nabla_x^{\perp}H_D(x,y,t)dy.
\label{g4}
\end{equation}
Now
\begin{equation}
|g_1(x)| \le \frac{C}{d(x)}\|\theta\|_{L^{\infty}(\Omega)}
\label{g1b}
\end{equation}
holds for $d(x)\ge \ell$ because of (\ref{naxnaxb}), time integration using Lemma \ref{intpk} and then use of (\ref{chij}).
For $g_2(x)$ we use (\ref{cancel2}) and then Lemma \ref{intpk} to obtain
\begin{equation}
|g_2(x)| \le \frac{C}{d(x)}\|\theta\|_{L^{\infty}(\Omega)}
\label{g2b}
\end{equation}
for $d(x)\ge \ell$. Now
\begin{equation}
|g_3(x)| \le \frac{C}{d(x)}\|\theta\|_{L^{\infty}(\Omega)}
\label{g3b}
\end{equation}
holds because of (\ref{grbx}), Lemma \ref{intpk} and then use of (\ref{nachij}).
Regarding $g_4$, in view of
\begin{equation}
\int_{\Omega}\nabla_y^{\perp}H_D(x,y,t)dy = 0
\label{intnah}
\end{equation}
we have
\[
g_4(x) = c\int_0^{\rho^2}t^{-\frac{1}{2}}dt\int_{\Omega}\left(\nabla_x^{\perp} + \nabla_y^{\perp}\right)H_D(x,y,t)dy
\]
and we thus obtain, from (\ref{cancel1}) and from Lemma \ref{intpk} with $m=j=0$,
\begin{equation}
|g_4(x)| \le C
\label{g4b}
\end{equation}
because $\rho\le cd(x)$.
Finally, using a Schwarz inequality as for (\ref{uhbo}), we have
\begin{equation}
|g(x)| \le C\sqrt{\rho D(f)}.
\label{gb}
\end{equation}
Gathering the bounds we have proved
\begin{prop}\label{naub}
Let $\chi$ be a good cutoff with scale $\ell$ and let $u$ be given by (\ref{u}). Then
\begin{equation}
|\nabla u(x)| \le C\left(\sqrt{\rho D(f)} + \|\theta\|_{L^{\infty}(\Omega)}\left(\frac{1}{d(x)} + \frac{1}{\rho}\right) + |\nabla\theta(x)|\right)
\label{nauxb}
\end{equation}
holds for $d(x)\ge \ell$, $\rho\le cd(x)$ and $f=\chi\nabla\theta$ with a constant $C$ depending on $\Omega$.
\end{prop}
\section{Commutators}
We consider the finite difference
\begin{equation}
(\delta_h\l\theta)(x) = \l\theta(x+h)-\l\theta(x)
\label{dht}
\end{equation}
with $d(x)\ge \ell$ and $|h|\le \frac{\ell}{16}$. We use a good cutoff $\chi$ again and denote
\begin{equation}
f(x) = \chi(x)\delta_h\theta(x).
\label{f}
\end{equation}
We start by computing
\begin{equation}
\begin{array}{l}
(\delta_h\l\theta)(x) = (\l f)(x) + c\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}(H_D(x,y,t)-H_D(x+h,y,t))(1-\chi(y))\theta(y)dy\\
-c\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}(H_D(x+h,y,t)-H_D(x,y-h,t))\chi(y)\theta(y)dy\\
-c\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}H_D(x,y,t)(\delta_h\chi)(y)\theta(y+h)dy\\
= (\l f)(x) + E_1(x) + E_2(x) + E_3(x).
\end{array}
\label{compucom}
\end{equation}
\begin{lemma}\label{commuh}
There exists a constant $\Gamma_0$ such that the commutator
\begin{equation}
C_h(\theta) = \delta_h\l\theta -\l(\chi\delta_h\theta)
\label{comh}
\end{equation}
obeys
\begin{equation}
\left |C_h(\theta)(x)\right| \le \Gamma_0\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}(\Omega)}
\label{commhb}
\end{equation}
for $d(x)\ge \ell$, $|h|\le\frac{\ell}{16}$, $f=\chi\delta_h\theta$ and $\theta\in H_0^1(\Omega)\cap L^{\infty}(\Omega)$.
\end{lemma}
\noindent{\bf Proof.} We use (\ref{compucom}). For $E_1(x)$ we use an argument similar to the one for $e_1$ leading to (\ref{e1b}), namely the inequality (\ref{hb}) and Lemma \ref{intpk} with $m=d+2$, $j=0$, and (\ref{chij}), to obtain
\[
|E_1(x)|\le C\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}}.
\]
For $E_2$ we proceed in a manner analogous to the one leading to the bound (\ref{e2b}), by using (\ref{transh}), (\ref{cancel1}), Lemma \ref{intpk} with $m=d+2$, $j=0$, and (\ref{nachij}) to obtain
\[
|E_2(x)|\le C\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}}.
\]
For $E_3$ we use
\[
|E_3(x)|\le |h|\|\theta\|_{L^{\infty}}\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}H_D(x,y,t)|\nabla(\chi)(y)|dy
\]
and using Lemma \ref{intpk} with $m=d+1$, $j=0$ and (\ref{nachij}) we obtain
\[
|E_3(x)|\le C\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}},
\]
concluding the proof.
We consider now the commutator $[\nabla, \l]$.
\begin{lemma}\label{commd} There exists a constant $\Gamma_3$ depending on $\Omega$ such that for any smooth function $f$ vanishing at $\partial\Omega$ and any $x\in\Omega$ we have
\begin{equation}
\left| [\nabla, \l]f(x)\right| \le \frac{\Gamma_3}{d(x)^2}\|f\|_{L^{\infty}(\Omega)}.
\label{nac}
\end{equation}
If $\chi$ is a good cutoff with scale $\ell$ and if $\theta$ is a smooth bounded function in ${\mathcal{D}}\left(\l\right)$, then
\begin{equation}
C_{\chi}(\theta) = \nabla\l\theta -\l\chi\nabla\theta
\label{cchi}
\end{equation}
obeys
\begin{equation}
|C_{\chi}(\theta)(x)| = \left|(\nabla\l\theta - \l(\chi\nabla\theta))(x)\right| \le \frac{\Gamma_3}{d(x)^2}\|\theta\|_{L^{\infty}(\Omega)}
\label{nachi}
\end{equation}
for $d(x)\ge \ell$, with a constant $\Gamma_3$ independent of $\ell$.
\end{lemma}
\noindent{\bf Proof.} We note that
\begin{equation}
[\nabla,\l]f(x) = -c_1\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}\left(\nabla_xH_D(x,y,t)f(y) - H_D(x,y,t)\nabla_y f(y)\right)dy
\label{commde}
\end{equation}
and therefore
\begin{equation}
[\nabla,\l]f(x) = -c_1\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}\left(\nabla_x+\nabla_y\right)H_D(x,y,t)f(y)dy.
\label{commdef}
\end{equation}
The inequality (\ref{nac}) follows from (\ref{cancel1}) and Lemma \ref{intpk}.
For the inequality (\ref{nachi}) we need also to estimate
\[
C(x) = c_1\left|\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\Omega}H_D(x,y,t)(\nabla\chi(y))\theta(y)dy\right|
\]
by the right hand side of (\ref{nachi}), and this follows from (\ref{nachij}) in view of (\ref{hb}).
\section{SQG: H\"{o}lder bounds}
We consider the equation (\ref{sqg}) with $u$ given by (\ref{u}) and with smooth initial data $\theta_0$ compactly supported in $\Omega$. We note that by the C\'{o}rdoba-C\'{o}rdoba inequality we have
\begin{equation}
\|\theta(t)\|_{L^{\infty}}\le \|\theta_0\|_{L^{\infty}}.
\label{linf}
\end{equation}
We prove the following uniform interior H\"{o}lder bound:
\begin{thm}\label{glh} Let $\theta(x,t)$ be a smooth solution of (\ref{sqg}) in the smooth bounded domain $\Omega$. There exists a constant $0<\alpha<1$ depending
only on $\|\theta_0\|_{L^{\infty}(\Omega)}$, and a constant $\Gamma>0$ depending on the domain $\Omega$ such that, for any $\ell>0$ sufficiently small
\begin{equation}
\sup_{d(x)\ge \ell, \; |h|\le\frac{\ell}{16},\; t\ge 0}\frac{|\theta(x+h,t)-\theta(x,t)|}{|h|^{\alpha}} \le \|\theta_0\|_{C^{\alpha}} + \Gamma \ell^{-\alpha}\|\theta_0\|_{L^{\infty}(\Omega)}
\label{hldrbnd}
\end{equation}
holds.
\end{thm}
\noindent{\bf{Proof.}} We take a good cutoff $\chi$ as above, $|h|\le\frac{\ell}{16}$, and observe that from the SQG equation we obtain the equation
\begin{equation}
(\partial_t + u\cdot\nabla + (\delta_h u)\cdot\nabla_h)(\delta_h\theta)+ \l(\chi \delta_h \theta) + C_h(\theta) = 0
\label{le}
\end{equation}
where $C_h(\theta)$ is the commutator given above in (\ref{comh}). Denoting (as before in (\ref{f})) $f=\chi\delta_h\theta$ we have after multiplying by $\delta_h\theta$ and using the fact that $\chi(x)=1$ for $d(x)\ge \ell$,
\begin{equation}
\frac{1}{2}L_{\chi}\left(\delta_h\theta\right)^2 + D(f) + (\delta_h\theta) C_h(\theta) = 0
\label{lchieq}
\end{equation}
where
\begin{equation}
L_{\chi}g = \partial_t g + u\cdot\nabla_x g + \delta_h u\cdot\nabla_h g + \l(\chi^2 g)
\label{lchi}
\end{equation}
and $D(f)$ is given in Theorem \ref{nlmax}.
Multiplying by $|h|^{-2\alpha}$, where $\alpha>0$ will be chosen to be small enough, we obtain
\begin{equation}
\frac{1}{2}L_{\chi}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) + |h|^{-2\alpha} D(f) \le 2\alpha\frac{|\delta_h u|}{|h|}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) + |C_h(\theta)| |\delta_h\theta||h|^{-2\alpha}.
\label{bu}
\end{equation}
The factor $2\alpha$ comes from the differentiation $\delta_h u\cdot\nabla_h (|h|^{-2\alpha})$ and its smallness will be crucial below.
Let us record here the inequality (\ref{d}) in the present case:
\begin{equation}
D(f) \ge \gamma_1 |h|^{-1}\|\theta\|_{L^{\infty}}^{-1}|(\delta_h\theta)_d|^3 + \gamma_1 (d(x))^{-1} |\delta_h\theta|^2,
\label{d1}
\end{equation}
valid pointwise, when $|h|\le \frac{\ell}{16}$ and $d(x)\ge\ell$, where
\[
|(\delta_h\theta)_d| = |\delta_h\theta|, \quad {\mbox{if}}\; |\delta_h\theta(x)|\ge M\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)},
\]
and $|(\delta_h\theta)_d|=0$ otherwise.
We use now the estimates (\ref{dhub}), (\ref{commhb}) and a Young inequality for the term involving $\sqrt{\rho D(f)}$ to obtain
\begin{equation}
\begin{array}{l}
\frac{1}{2}L_{\chi}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) + \frac{1}{2}|h|^{-2\alpha} D(f) \le
C_1\alpha^2|h|^{-2-2\alpha}\rho |\delta_h\theta|^4 \\+
C_1\alpha \|\theta\|_{L^{\infty}}\left(\frac{1}{d(x)} + \frac{1}{\rho}\right)|h|^{-2\alpha}|\delta_h\theta|^2 + C_1\alpha|\delta_h\theta||h|^{-1-2\alpha}|\delta_h\theta|^2\\
+ \Gamma_0\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}} |\delta_h\theta||h|^{-2\alpha}
\end{array}
\label{bo}
\end{equation}
for $d(x)\ge \ell$, $|h|\le \frac{\ell}{16}$. Let us choose $\rho$ now. We set
\begin{equation}
\rho = \left\{
\begin{array}{l}
|\delta_h\theta(x)|^{-1}|h|\|\theta\|_{L^{\infty}}, \quad {\mbox{if}}\;
|\delta_h\theta(x)|\ge M_1\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)},\\
d(x), \quad\quad {\mbox{if}}\quad |\delta_h\theta(x)|\le M_1\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)},
\end{array}
\right.
\label{rhoc}
\end{equation}
where we put
\begin{equation}
M_1 = M + \sqrt{\frac{8\Gamma_0}{\gamma_1}} + 1,
\label{mone}
\end{equation}
where $M$ is the constant from Theorem \ref{nlmax}, $\Gamma_0$ is the constant from (\ref{commhb}) and $\gamma_1$ is the constant from (\ref{d1}). This choice was made in order to use the lower bound on $D(f)$ to estimate the contribution due to the inner piece $u_h$ (see (\ref{uh})) of $\delta_h u$ and the contribution from the commutator $C_h(\theta)$. We distinguish two cases. The first case is when
$|\delta_h\theta(x)|\ge M_1\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)}$. Then we have
\begin{equation}
\begin{array}{l}
\frac{1}{2}L_{\chi}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) + \frac{1}{2}|h|^{-2\alpha}D(f) \le
C_1\left[(\alpha\|\theta\|_{L^{\infty}})^2 +(2+\frac{1}{M_1})\alpha\|\theta\|_{L^{\infty}}\right]|\delta_h\theta|^3|h|^{-1-2\alpha}\|\theta\|_{L^{\infty}}^{-1} \\
+ \Gamma_0\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}} |\delta_h\theta||h|^{-2\alpha}.
\end{array}
\label{boup}
\end{equation}
The choice of $M_1$ was such that, in this case
\[
\Gamma_0\frac{|h|}{d(x)^2}\|\theta\|_{L^{\infty}} |\delta_h\theta(x)||h|^{-2\alpha}
\le \frac{\gamma_1}{8} |\delta_h\theta(x)|^3|h|^{-1-2\alpha}\|\theta\|_{L^{\infty}}^{-1} .
\]
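Indeed, the displayed inequality is equivalent to $|\delta_h\theta(x)|^2\ge \frac{8\Gamma_0}{\gamma_1}\|\theta\|_{L^{\infty}}^2\frac{|h|^2}{d(x)^2}$, which holds because $|\delta_h\theta(x)|\ge M_1\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)}$ and $M_1^2\ge \frac{8\Gamma_0}{\gamma_1}$ by (\ref{mone}).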
We choose now $\alpha$ by requiring
\begin{equation}
\epsilon= \alpha\|\theta\|_{L^{\infty}}
\label{epsilong}
\end{equation}
to satisfy
\begin{equation}
C_1M_1^2(\epsilon^2 + (2+M_1^{-1})\epsilon) \le\frac{\gamma_1}{8}
\label{epsilongre}
\end{equation}
and obtain from (\ref{boup})
\begin{equation}
\frac{1}{2}L_{\chi}\left(\frac{|\delta_h\theta(x)|^2}{|h|^{2\alpha}}\right) + \frac{1}{4}|h|^{-2\alpha}D(f) \le 0
\label{bonu}
\end{equation}
for $d(x)\ge\ell$, $|h|\le\frac{\ell}{16}$, in the case $|\delta_h\theta(x)|\ge M_1\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)}$.
The second case is when the opposite inequality holds, i.e., when
$|\delta_h\theta(x)|\le M_1\|\theta\|_{L^{\infty}}\frac{|h|}{d(x)}$. Then, using $\rho = d(x)$ we obtain from (\ref{bo})
\begin{equation}
\begin{array}{l}
\frac{1}{2}L_{\chi}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) + \frac{1}{2}|h|^{-2\alpha}D(f) \le C_1(M_1^2\epsilon^2 + (M_1+2)\epsilon)\frac{1}{d(x)} (\delta_h\theta(x))^2|h|^{-2\alpha}\\ +
\Gamma_0d(x)^{-2}\|\theta\|_{L^{\infty}}|\delta_h\theta||h|^{1-2\alpha}\\
\le \frac{\gamma_1}{8d(x)}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right)
+ 2\Gamma_0M_1\|\theta\|_{L^{\infty}}^2d(x)^{-3}|h|^{2-2\alpha}.
\end{array}
\label{badu}
\end{equation}
Summarizing, in view of the inequalities (\ref{bonu}) and (\ref{badu}), the damping term $\frac{\gamma_1}{d(x)}|\delta_h\theta(x)|^2$ in (\ref{d1}) and the choice of small $\epsilon$ in (\ref{epsilongre}), we have that
\begin{equation}
L_{\chi}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) + \frac{\gamma_1}{4d(x)}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) \le
B
\label{fin}
\end{equation}
holds for $d(x)\ge \ell$ and $|h|\le\frac{\ell}{16}$ where
\begin{equation}
B= 2(16)^{-2+2\alpha}\Gamma_0M_1\|\theta\|_{L^{\infty}}^2d(x)^{-1-2\alpha} =
\Gamma_1\frac{\gamma_1}{4}\|\theta\|_{L^{\infty}}^2d(x)^{-1-2\alpha}
\label{B}
\end{equation}
with $\Gamma_1$ depending on $\Omega$. Without loss of generality we may
take $\Gamma_1> 4(16)^{2\alpha}$, so that
\[
\frac{|\delta_h\theta|^2}{|h|^{2\alpha}} \le 4\|\theta_0\|_{L^{\infty}}^2\left(\frac{16}{\ell}\right)^{2\alpha} <\Gamma_1\ell^{-2\alpha}\|\theta_0\|_{L^{\infty}}^2
\]
when $|h|\ge \frac{\ell}{16}$, where we used $|\delta_h\theta|\le 2\|\theta\|_{L^{\infty}}\le 2\|\theta_0\|_{L^{\infty}}$. We note that
\begin{equation}
L_\chi \left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}}\right) +
\frac{\gamma_1}{4d(x)}\left(\frac{\delta_h\theta(x)^2}{|h|^{2\alpha}} - \Gamma_1\ell^{-2\alpha}\|\theta\|_{L^{\infty}}^2\right)\le 0
\label{fina}
\end{equation}
holds for any $t$, $x\in\Omega$ with $d(x)\ge \ell$ and $|h|\le\frac{\ell}{16}$.
We take $\delta>0$, $T>0$. We claim that, for any $\delta>0$ and any $T>0$
\[
\sup_{d(x)\ge \ell, |h|\le\frac{\ell}{16}, 0\le t \le T}\frac{|\delta_h\theta(x)|^2}{|h|^{2\alpha}} \le (1+\delta)\left[\|\theta_0\|_{C^{\alpha}}^2 + \Gamma_1 \ell^{-2\alpha}\|\theta_0\|_{L^{\infty}}^2\right]
\]
holds.
The rest of the proof is done by contradiction. Indeed, assume by contradiction that there exists $\tilde{t}\le T$, $\tilde{x}$ and $\tilde{h}$ with $d({\tilde{x}})\ge \ell$ and $|{\tilde{h}}|\le {\frac{\ell}{16}}$ such that
\[
\frac{|\theta(\tilde{x} +\tilde{h}, \tilde{t})- \theta(\tilde{x},\tilde{t}) |^2}{|\tilde{h}|^{2\alpha}} >(1+\delta)\left[\|\theta_0\|_{C^{\alpha}}^2 + \Gamma_1 \ell^{-2\alpha}\|\theta_0\|_{L^{\infty}}^2\right] = R
\]
holds. Because the solution is smooth, we have
\[
\frac{|\delta_h\theta(x,t)|^2}{|h|^{2\alpha}} \le (1+\delta)\|\theta_0\|_{C^{\alpha}}^2
\]
for a short time $0\le t\le t_1$. (Note that this is not a statement about well-posedness in this norm: $t_1$ may depend on higher norms.) Also, because the solution is smooth, it is bounded in $C^1$, and
\[
\sup_{d(x)\ge \ell, |h|\le \frac{\ell}{16}}\frac{|\delta_h\theta(x)|^2}{|h|^2}\le C
\]
on the time interval $[0,T]$. It follows that there exists $\delta_1>0$ such that
\[
\sup_{d(x)\ge \ell, |h|\le \delta_1}\frac{|\delta_h\theta(x)|^2}{|h|^{2\alpha}}
\le C\delta_1^{2-2\alpha}\le \frac{R}{2}.
\]
In view of these considerations, we must have $\tilde{t} >t_1$ and $|\tilde h|\ge\delta_1$. Moreover, the supremum is attained: there exists $\bar{x}\in\Omega$ with $d(\bar{x})\ge \ell$ and $\bar h \neq 0$ with $\delta_1\le |\bar h|\le\frac{\ell}{16}$ such that
\[
\frac{|\theta(\bar{x}+\bar{h}, \tilde{t})-\theta(\bar{x},\tilde{t})|^2}{|\bar{h}|^{2\alpha}} =
s(\tilde{t}) =\sup_{d(x)\ge \ell, |h|\le \frac{\ell}{16}}\frac{|\delta_h\theta (x,\tilde{t})|^2}{|h|^{2\alpha}} > R.
\]
Because of (\ref{fina}) we have that
\[
\frac{d}{dt}\left.\frac{|\theta(\bar{x}+\bar{h},t)-\theta(\bar{x},t)|^2}{|\bar{h}|^{2\alpha}}\right|_{t=\tilde{t}} <0
\]
and therefore there exists $t'<\tilde{t}$ such that $s(t')>s(\tilde{t})$. This implies that
$\inf\{t>t_1\left|\right. s(t)>R\} = t_1$, which is absurd because we made sure that $s(t_1)< R$. Since $\delta$ and $T$ are arbitrary, we have proved
\begin{equation}
\sup_{d(x)\ge \ell, |h|\le\frac{\ell}{16}, t\ge 0}\frac{|\delta_h\theta(x)|^2}{|h|^{2\alpha}} \le \left[\|\theta_0\|_{C^{\alpha}}^2 + \Gamma_1 \ell^{-2\alpha}\|\theta_0\|_{L^{\infty}}^2\right]
\label{final}
\end{equation}
which finishes the proof of the theorem.
\noindent{\bf Proof of Theorem \ref{alphaint}}. The proof follows from (\ref{final}) because $\Gamma_1$ does not depend on $\ell$. For any fixed $x\in\Omega$ we may take $\ell$ such that $\ell\le d(x)\le 2 \ell$. Then (\ref{final}) implies
\begin{equation}
d(x)^{2\alpha}\frac{|\delta_h\theta(x,t)|^2}{|h|^{2\alpha}}\le \left[\|\theta_0\|_{C^{\alpha}}^2 + \Gamma_1 2^{2\alpha}\|\theta_0\|_{L^{\infty}}^2\right].
\label{alpahintb}
\end{equation}
\section{Gradient bounds}
We take the gradient of (\ref{sqg}). We obtain
\[
(\partial_t + u\cdot\nabla)\nabla\theta + (\nabla u)^*\nabla\theta +\nabla\l\theta =0
\]
where $(\nabla u)^*$ is the transposed matrix. Let us take a good cutoff $\chi$. Then $g=\nabla\theta$ obeys everywhere
\begin{equation}
\partial_t g + u\cdot\nabla g +\l(\chi g) + C_{\chi}(\theta) + (\nabla u)^*g = 0
\label{nateq}
\end{equation}
with $C_{\chi}$ given in (\ref{cchi}). We multiply by $g$ and, using the fact that $\chi(x)=1 $ when $d(x)\ge \ell$ we obtain
\begin{equation}
\frac{1}{2}L_{\chi}g^2 + D(f) + gC_{\chi}(\theta) + g(\nabla u)^*g = 0
\label{lchieqn}
\end{equation}
when $d(x)\ge \ell$, where $L_{\chi}$ is similar to the one defined in (\ref{lchi}):
\begin{equation}
L_{\chi}(\phi) = \partial_t\phi + u\cdot\nabla \phi + \l(\chi^2\phi)
\label{lchin}
\end{equation}
and $f=\chi g$. Recall that $D(f) = f\l f-\l\left(\frac{f^2}{2}\right)$. Then, using (\ref{nachi}) and (\ref{nauxb}) we deduce
\begin{equation}
\frac{1}{2}L_{\chi}g^2 + D(f) \le \frac{\Gamma_3}{d(x)^2}|g|\|\theta\|_{L^{\infty}(\Omega)}
+ C\left(\sqrt{\rho D(f)} + \|\theta\|_{L^{\infty}(\Omega)}\left(\frac{1}{d(x)} + \frac{1}{\rho}\right) + |\nabla\theta(x)|\right)g^2
\label{ginone}
\end{equation}
for $d(x)\ge \ell$. Using a Young inequality we deduce
\begin{equation}
L_{\chi}g^2 + D(f) \le \frac{2\Gamma_3}{d(x)^2}\|\theta\|_{L^{\infty}(\Omega)}|g|
+ C_4\rho g^4 + C_4\|\theta\|_{L^{\infty}(\Omega)}\left(\frac{1}{d(x)} + \frac{1}{\rho}\right)g^2 + C_4|g|^3
\label{gintwo}
\end{equation}
for $d(x)\ge \ell$. Now $|g| = |f|$ when $d(x)\ge \ell$. If $|g(x)|\ge M\|\theta\|_{L^{\infty}(\Omega)}d(x)^{-1}$ then, in view of (\ref{nlbd})
\begin{equation}
D(f)\ge \gamma_2\|\theta\|_{C^{\alpha}(\Omega)}^{-\frac{1}{1-\alpha}}|g|^{3+\frac{\alpha}{1-\alpha}}(d(x))^{\frac{\alpha}{1-\alpha}} + \frac{\gamma_1}{d(x)}g^2
\label{dlow}
\end{equation}
which is a super-cubic lower bound. We choose in this case
\begin{equation}
\rho^{-1} = C_5 |g(x)|,
\label{rhoch}
\end{equation}
and the right hand side of (\ref{gintwo}) becomes at most cubic in $g$:
\begin{equation}
L_{\chi}g^2 + D(f) \le |g|^3\left[\frac{2\Gamma_3}{M^2\|\theta\|_{L^{\infty}(\Omega)}} + C_4\left (\frac{1}{C_5} + \frac{1}{M} + C_5\|\theta\|_{L^{\infty}(\Omega)} + 1\right)\right] = K|g|^3.
\label{ginthree}
\end{equation}
In view of (\ref{dlow}) we see that
\begin{equation}
L_{\chi}g^2 + |g|^3\left(\gamma_2\left(\|\theta\|_{C^{\alpha}(\Omega)}^{-\frac{1}{\alpha}}|g(x)|d(x)\right)^{\frac{\alpha}{1-\alpha}} - K\right)\le 0
\label{ginfour}
\end{equation}
holds for $d(x)\ge \ell$, if $|g|\ge M\|\theta\|_{L^{\infty}}d(x)^{-1}$.
In the opposite case, $|g(x)|\le M\|\theta\|_{L^{\infty}}d(x)^{-1}$ we choose
\begin{equation}
\rho(x) = d(x)
\label{rhochoib}
\end{equation}
and obtain from (\ref{gintwo})
\begin{equation}
\begin{array}{l}
L_{\chi}g^2 + D(f) \\
\le \frac{1}{d(x)^3}\left[C_4M^4\|\theta\|_{L^{\infty}(\Omega)}^4 + C_4M^3\|\theta\|_{L^{\infty}(\Omega)}^3 + 2C_4M^2\|\theta\|_{L^{\infty}(\Omega)}^3+ 2M\Gamma_3\|\theta\|_{L^{\infty}(\Omega)}^2\right] = \frac{K_1}{d(x)^3}
\end{array}
\label{gina}
\end{equation}
and using the convex damping inequality (\ref{nlbd})
\[
D(f)\ge \gamma_1\frac{g^2}{d(x)}
\]
we obtain in this case
\begin{equation}
L_{\chi}g^2 + \frac{1}{d(x)}\left(\gamma_1g^2(x) - \frac{K_1}{d(x)^2}\right)\le 0.
\label{gino}
\end{equation}
Putting together (\ref{ginfour}), (\ref{gino}) and (\ref{hldrbnd})
we obtain
\begin{thm}\label{gradb} Let $\theta$ be a smooth solution of (\ref{sqg}). Then
\begin{equation}
\sup_{d(x)\ge \ell}|\nabla \theta(x,t)| \le C\left[\|\nabla\theta_0\|_{L^{\infty}(\Omega)} + \frac{P(\|\theta\|_{L^{\infty}(\Omega)})}{\ell}\right]
\label{fing}
\end{equation}
where $P(\|\theta\|_{L^{\infty}(\Omega)})$ is a polynomial of degree four.
\end{thm}
\noindent{\bf Proof of Theorem \ref{gradint}}. The proof follows by choosing $\ell$ depending on $x$, because the constants in (\ref{fing}) do not depend on $\ell$.
\section{Example: Half Space}
The case of the half space is interesting because global smooth solutions of (\ref{sqg}) are easily obtained by reflection: If the initial data $\theta_0$ is smooth and compactly supported in $\Omega = {\mathbb R}^d_+$ and if we consider its odd reflection
\begin{equation}
\widetilde{\theta_0}(x) = \left\{
\begin{array}{c}
\theta_0(x_1,\dots, x_d),\quad\quad \; {\mbox{if}}\; x_d>0,\\
-\theta_0(x_1,\dots, -x_d),\quad {\mbox{if}}\; x_d<0,
\end{array}
\right.
\label{tildet}
\end{equation}
then the solution of the critical SQG equation in the whole space, with initial data $\widetilde{\theta_0}$, is globally smooth and its restriction to $\Omega$ solves (\ref{sqg}) there. This follows because of reflection properties of the heat kernel and of the Dirichlet Laplacian.
The heat kernel with Dirichlet boundary conditions in $\Omega ={\mathbb R}^d_{+}$ is
\[
H(x,y,t) = ct^{-\frac{d}{2}}\left( e^{-\frac{|x-y|^2}{4t}} - e^{-\frac{|x-{\widetilde {y}}|^2}{4t}}\right)
\]
where $\widetilde{y} = (y_1,\dots, y_{d-1}, -y_d)$. More precisely,
\begin{equation}
H(x,y,t) = G^{(d-1)}_t(x'-y')\left[G_t(x_d-y_d)-G_t(x_d+y_d)\right]
\label{Hsp}
\end{equation}
with $x'= (x_1,\dots, x_{d-1})$,
\begin{equation}
G_t^{(d-1)}(x') = \left(\frac{1}{4\pi t}\right)^{\frac{d-1}{2}}e^{-\frac{|x'|^2}{4t}}
\label{gprime}
\end{equation}
and
\begin{equation}
G_t(\xi) = \left(\frac{1}{4\pi t}\right)^{\frac{1}{2}} e^{-\frac{\xi^2}{4t}}
\label{G}
\end{equation}
Let us note that
\begin{equation}
\nabla_x H = H\left (
\begin{array}{c}
-\frac{x'-y'}{2t}\\
-\frac{x_d -y_d}{2t} + \frac{y_d}{t}\frac{e^{-\frac{x_dy_d}{t}}}{1-e^{-\frac{x_dy_d}{t}}}
\end{array}
\right)
\label{naxhh}
\end{equation}
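The normal component in (\ref{naxhh}) can be checked by factoring the direct Gaussian out of the Dirichlet kernel: since $G_t(x_d+y_d)=G_t(x_d-y_d)e^{-\frac{x_dy_d}{t}}$, we have
\[
G_t(x_d-y_d)-G_t(x_d+y_d) = G_t(x_d-y_d)\left(1-e^{-\frac{x_dy_d}{t}}\right),
\]
and logarithmic differentiation in $x_d$ gives the normal component displayed in (\ref{naxhh}).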
We check that (\ref{grbx}) is obeyed. Indeed, because $1-e^{-p}\ge \frac{p}{2}$ when $0\le p\le 1$ it follows that
\[
\frac{y_d}{t}e^{-\frac{x_dy_d}{t}}(1-e^{-\frac{x_dy_d}{t}})^{-1}\le \frac{y_d}{t}\frac{2t}{x_dy_d}
\]
if $\frac{x_dy_d}{t}\le 1$, and if $p=\frac{x_dy_d}{t} \ge 1$ then
\[
\frac{y_d}{t}e^{-\frac{x_dy_d}{t}}(1-e^{-\frac{x_dy_d}{t}})^{-1}\le \frac{e}{e-1}\frac{y_d}{t}e^{-\frac{x_dy_d}{t}}.
\]
In this case, if $\frac{x_d}{\sqrt{t}} \ge 1$ then $\frac{y_d}{t}\le t^{-\frac{1}{2}}p$ and $pe^{-p}$ is bounded; if $\frac{x_d}{\sqrt{t}}\le 1$ we write
$\frac{y_d}{t} = t^{-\frac{1}{2}}(\frac{y_d-x_d}{\sqrt{t}} + \frac{x_d}{\sqrt{t}})$ and thus we obtain:
\begin{equation}
\left|\nabla_x H\right| \le C H\left [\frac{1}{\sqrt{t}}(1+ \frac{|x-y|}{\sqrt{t}}) + \frac{1}{x_d}\right]
\label{naxhhb}
\end{equation}
We check (\ref{cancel1}): First we have
\begin{equation}
(\nabla_x + \nabla_y)H = \left(
\begin{array}{c}
0\\
\frac{x_d+y_d}{t}G_t(x_d+y_d)G^{(d-1)}_t(x'-y')
\end{array}
\right)
\label{naxplusnay}
\end{equation}
and then
\begin{equation}
\int_{\Omega}\left| (\nabla_x+ \nabla_y)H(x,y,t)\right| dy \le Ct^{-\frac{1}{2}}e^{-\frac{x_d^{2}}{4t}}.
\label{hone}
\end{equation}
Indeed, the only nonzero component occurs when the differentiation is with respect to the normal direction, and then
\begin{equation}
\left|(\partial_{x_d} + \partial_{y_d})H(x,y,t)\right| = ct^{-\frac{d}{2}}e^{-\frac{|x'-y'|^2}{4t}}\left(\frac{x_d+y_d}{t}\right) e^{-\frac{(x_d+y_d)^2}{4t}}
\label{naxplus}
\end{equation}
Therefore
\begin{equation}
\begin{array}{l}
\int_{\Omega} \left|(\nabla_x+ \nabla_y)H(x,y,t)\right| dy \le Ct^{-\frac{1}{2}}\int_0^{\infty}\left(\frac{x_d+y_d}{t}\right )e^{-\frac{(x_d+y_d)^2}{4t}}dy_d\\
= Ct^{-\frac{1}{2}}\int_{\frac{x_d}{\sqrt {t}}}^{\infty}\xi e^{-\frac{\xi^2}{4}}d\xi
\\
= Ct^{-\frac{1}{2}}e^{-\frac{x_d^2}{4t}}.
\end{array}
\label{naxnaxc}
\end{equation}
We check (\ref{cancel2}): first
\begin{equation}
\begin{array}{l}
\partial_{x'}(\nabla_x + \nabla_y)H = -\frac{x_d+y_d}{t}G_t(x_d+y_d)\frac{(x'-y')}{2t}G_t^{(d-1)}(x'-y')\\
\partial_{x_d}(\nabla_x + \nabla_y)H = \left(\frac{1}{t} - \frac{(x_d+y_d)^2}{2t^2}\right)G_t(x_d+y_d)G_t^{(d-1)}(x'-y')
\end{array}
\end{equation}
Consequently
\begin{equation}
|\nabla_x(\nabla_x + \nabla_y)H(x,y,t)|\le Ct^{-\frac{d}{2} -1}\left(1+\frac{|x'-y'|}{\sqrt{t}}\right)\left(1+ \frac{(x_d+y_d)^2}{t}\right)e^{-\frac{|x'-y'|^2}{4t}}e^{-\frac{(x_d+y_d)^2}{4t}}
\label{naxnaxplusnay}
\end{equation}
and (\ref{cancel2}) follows:
\[
\int_{\Omega}|\nabla_x(\nabla_x + \nabla_y)H(x,y,t)|dy \le Ct^{-1}\int_{\frac{x_d}{\sqrt{2t}}}^{\infty}(1+z^2)e^{-\frac{z^2}{2}}dz.
\]
We compute $\Theta$ and $\l 1$:
\begin{equation}
\Theta(x,t)= (e^{t\Delta}1)(x) = \int_{\Omega}H(x,y,t)dy = \frac{1}{\sqrt{2\pi}}\int_{-\frac{x_d}{\sqrt{2t}}}^{\frac{x_d}{\sqrt{2t}}}e^{-\frac{\xi^2}{2}}d\xi
\label{heat1}
\end{equation}
and therefore
\[
\int_0^{\infty}t^{-\frac{3}{2}}(1-e^{t\Delta}1)dt =
\frac{2}{\sqrt{2\pi}}\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\frac{x_d}{\sqrt{2t}}}^{\infty}e^{-\frac{\xi^2}{2}}d\xi= \frac{4}{x_d\sqrt{\pi}}.
\]
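The last identity is obtained by exchanging the order of integration: for fixed $\xi$ the time integral runs over $t\ge \frac{x_d^2}{2\xi^2}$, and so
\[
\int_0^{\infty}t^{-\frac{3}{2}}dt\int_{\frac{x_d}{\sqrt{2t}}}^{\infty}e^{-\frac{\xi^2}{2}}d\xi
=\int_0^{\infty}e^{-\frac{\xi^2}{2}}d\xi\int_{\frac{x_d^2}{2\xi^2}}^{\infty}t^{-\frac{3}{2}}dt
=\frac{2\sqrt{2}}{x_d}\int_0^{\infty}\xi e^{-\frac{\xi^2}{2}}d\xi = \frac{2\sqrt{2}}{x_d},
\]
and $\frac{2}{\sqrt{2\pi}}\cdot\frac{2\sqrt{2}}{x_d} = \frac{4}{x_d\sqrt{\pi}}$.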
\begin{rem}\label{weak} We note here that $\l^{s} 1 = C_sy_d^{-s}$ is calculated by duality:
\[
\begin{array}{l}
\left(\l^{s}1, \phi\right) = \left(1, \l^{s}\phi\right ) \\
=c_{s}\int_{\Omega}dx\int_0^{\infty}t^{-1-\frac{s}{2}}dt\left [\phi(x)-\int_{\Omega}H(x,y,t)\phi(y)dy\right]\\
=c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}dt\left[\int_{\Omega}\phi(x)dx -\int_{\Omega}\Theta(y_d,t)\phi(y)dy\right]\\
= c_{s}\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\Omega}\left(1-\Theta(y_d,t)\right)\phi(y)dy\\
=\frac{2c_{s}}{\sqrt{2\pi}}\int_{\Omega}\phi(y)\int_0^{\infty}t^{-1-\frac{s}{2}}dt\int_{\frac{y_d}{\sqrt{2t}}}^{\infty}e^{-\frac{\xi^2}{2}}d\xi\\
= C_{s}\int_{\Omega}y_d^{-s}\phi(y)dy
\end{array}
\]
where we used the symmetry of the kernel $H$ and (\ref{heat1}).
\end{rem}
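Carrying out the $t$- and $\xi$-integrations in Remark \ref{weak} as in the computation of $\l 1$ above (now with the weight $t^{-1-\frac{s}{2}}$) gives the explicit value
\[
C_s = \frac{2^{s+1}}{s\sqrt{\pi}}\,\Gamma\left(\frac{s+1}{2}\right)c_s,
\]
which reduces to the value $\frac{4c_1}{\sqrt{\pi}}$ obtained above when $s=1$.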
We observe that if we consider horizontal finite differences, i.e.
$h_d= 0$, then $C_h(\theta)$ vanishes, and we deduce that
\begin{equation}
\sup_{x,h',t}|h'|^{-\alpha}|\theta (x'+ h', x_d, t)-\theta (x', x_d,t)|\le C_{1,\alpha}
\label{partialh}
\end{equation}
with $C_{1,\alpha}$ the partial $C^{\alpha}$ norm of the initial data.
This inequality can be used to prove that $u_2$ is bounded when $d=2$. Indeed
\begin{equation}
u_2(x,t) = c\int_{\Omega}\left(\frac{1}{|x-y|^3} -\frac{1}{|x-\widetilde{y}|^3}\right)(x_1-y_1)\theta(y,t)dy
\label{uone}
\end{equation}
and the bound is obtained using the partial H\"{o}lder bound on $\theta$ (\ref{partialh}) and the uniform bounds $\|\theta\|_{L^p}$ for $p=1, \infty$.
The outline of the proof is as follows: we split the integral
\begin{equation}
u_2 = u_{2}^{in} + u_2^{out}
\label{uinout}
\end{equation}
with
\begin{equation}
u_2^{in}(x) = c\int_{|x_1-y_1|\le \delta, |x_2-y_2|\le \delta}\left(\frac{1}{|x-y|^3} -\frac{1}{|x-\widetilde{y}|^3}\right)(x_1-y_1)\left(\theta(y_1, y_2, t) - \theta( x_1, y_2,t)\right)dy
\label{uin}
\end{equation}
and
\begin{equation}
u_2^{out}(x) = c\int_{\max\{|x_1-y_1|, |x_2-y_2|\}\ge \delta}\left(\frac{1}{|x-y|^3} -\frac{1}{|x-\widetilde{y}|^3}\right)(x_1-y_1)\theta(y_1,y_2,t)dy
\label{uout}
\end{equation}
where in (\ref{uin}) we used the fact that the kernel is odd in the first variable. Then, for $u_2^{in}$ we use the bound (\ref{partialh}) to derive
\begin{equation}
|u_2^{in}(x)|\le C_{1,\alpha}C\int_0^{\sqrt{2}\delta}\rho^{-1+\alpha}d\rho =
CC_{1,\alpha}\delta^{\alpha}
\label{uinbound}
\end{equation}
and for $u_2^{out}$, if we have no other information on $\theta$, we just bound
\begin{equation}
|u_2^{out}(x)|\le C\log\left(\frac{L}{\delta}\right )\|\theta_0\|_{L^{\infty}} + CL^{-2}\|\theta_0\|_{L^1}
\label{uoutbou}
\end{equation}
with some $L\ge \delta$. Both $\delta$ and $L$ are arbitrary.
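The two terms in (\ref{uoutbou}) correspond to splitting the outer region at $|x-y|=L$: since $|x-\widetilde{y}|\ge |x-y|$ for $x,y\in{\mathbb R}^2_{+}$, the kernel in (\ref{uout}) is bounded by $C|x-y|^{-2}$, and therefore
\[
\begin{array}{l}
|u_2^{out}(x)|\le C\|\theta_0\|_{L^{\infty}}\int_{\delta\le |x-y|\le L}\frac{dy}{|x-y|^2} + C\int_{|x-y|\ge L}\frac{|\theta(y,t)|}{|x-y|^2}dy\\
\le C\log\left(\frac{L}{\delta}\right)\|\theta_0\|_{L^{\infty}} + CL^{-2}\|\theta_0\|_{L^1}.
\end{array}
\]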
Finally, let us note that even if $\theta\in C_0^{\infty}(\Omega)$, the tangential component of the velocity need not vanish at the boundary because it is given by the integral
\[
u_1(x_1, 0, t) = -c\int_{{\mathbb R}^2_{+}}\frac{2y_2}{\left((x_1-y_1)^2 +y_2^2\right )^{\frac{3}{2}}} \theta(y,t)dy.
\]
\section{Appendix 1}
We sketch here the proofs of (\ref{naxnaxb}), (\ref{cancel1}) and (\ref{cancel2}). We take points $\bar{x}, y\in\Omega$ and
distinguish between two cases, according to whether $d(\bar{x})< \frac{|\bar{x}-y|}{4}$ or
$d(\bar{x})\ge \frac{|\bar{x}-y|}{4}$. In the first case we take a ball $B$ of radius $\delta=\frac{d(\bar{x})}{8}$ centered at $\bar{x}$, and in the second case we also take a ball $B$ centered at $\bar{x}$, but with radius $\delta=\frac{d(\bar{x})}{2}$. We note that in both cases the radius $\delta$ is proportional to $d(\bar{x})$. We take $x\in B(\bar{x}, \frac{\delta}{2})$, we fix $y\in\Omega$, take the function $h(z,t) = H_D(z,y,t)$, and apply Green's identity in the domain
$U = B\times (0,t)$. We obtain
\[
\begin{array}{l}
0=\int_U\left[(\partial_s-\Delta_z)h(z,s)G_{t-s}(x-z) + h(z,s)(\partial_s+\Delta_z)G_{t-s}(x-z)\right]dzds \\
= h(x,t)-G_t(x-y) + \int_0^t\int_{\partial B}\left[\frac{\partial G_{t-s}(x-z)}{\partial n}h(z,s)-\frac{\partial h(z,s)}{\partial n}G_{t-s}(x-z)\right]dS(z)ds
\end{array}
\]
and thus
\[
H_D(x,y,t) = G_t(x-y) - \int_0^t\int_{\partial B}\left[\frac{\partial G_{t-s}(x-z)}{\partial n}h(z,s)-\frac{\partial h(z,s)}{\partial n}G_{t-s}(x-z)\right]dS(z)ds.
\]
We note that the $x$ dependence is only via $G$, and $x-z$ is bounded away from zero.
We differentiate twice under the integral sign, and use the upper bounds (\ref{hb}), (\ref{grbx}). We have
\[
\begin{array}{l}
|\nabla_x\nabla_x H_D(x,y,t)-\nabla_x\nabla_x G_{t}(x-y)| \\
\le C\int_0^t\int_{\partial B}(t-s)^{-\frac{d+3}{2}}p_3(\frac{|x-z|}{\sqrt{t-s}})e^{-\frac{|x-z|^2}{4(t-s)}}s^{-\frac{d}{2}}e^{-\frac{|y-z|^2}{Ks}}dzds\\
+\int_0^{\min\{t; d^2(y)\}}\int_{\partial B} (t-s)^{-\frac{d+2}{2}}p_2(\frac{|x-z|}{\sqrt{t-s}})e^{-\frac{|x-z|^2}{4(t-s)}}s^{-\frac{d+1}{2}}p_1(\frac{|y-z|}{\sqrt{s}})e^{-\frac{|y-z|^2}{Ks}}dzds \\
+\int_{\min\{t; d^2(y)\}}^t\int_{\partial B} (t-s)^{-\frac{d+2}{2}}p_2(\frac{|x-z|}{\sqrt{t-s}})e^{-\frac{|x-z|^2}{4(t-s)}}s^{-\frac{d}{2}}\frac{1}{d(y)}\frac{w_1(y)}{|y-z|}e^{-\frac{|y-z|^2}{Ks}}dzds \\
\end{array}
\]
where $p_k(\xi)$ are polynomials of degree $k$.
The integrals are not singular. In both cases $|x-z|\ge \frac{\delta}{2}$,
and any negative power $(t-s)^{-\frac{k}{2}}$ can be absorbed by $e^{-\frac{|x-z|^2}{8(t-s)}}$ at the price $|x-z|^{-k}\le C\delta^{-k}$, still leaving $e^{-\frac{|x-z|^2}{8(t-s)}}$ available. Similarly,
in the first case $|y-z|\ge |\bar{x}-y| -\delta \ge \delta $ and in the second case $|y-z|\ge |\bar{x}-z| - |\bar{x}-y| \ge \frac{\delta}{2}$. Any power $s^{-\frac{k}{2}}$ can be absorbed by $e^{-\frac{|y-z|^2}{2Ks}}$ at the price $|y-z|^{-k}\le C\delta^{-k}$ still leaving $e^{-\frac{|y-z|^2}{2Ks}}$ available. We note that if $d(y)<d(x)$ so that $d(y)^2<t$ is possible, then, in view of (\ref{phione}) we have $\frac{w_1(y)}{|y-z|d(y)}\le C\delta^{-1}$.
We also note that, in view of the fact that
\[
|x-y|^2t^{-1}\le 2\left(\frac{t-s}{t}\left(\frac{|x-z|^2}{t-s}\right) + \frac{s}{t}\left(\frac{|y-z|^2}{s}\right)\right)
\]
we have a bound
\[
e^{-\frac{|x-z|^2}{8(t-s)}-\frac{|y-z|^2}{2Ks}}\le e^{-\frac{|x-y|^2}{\tilde{K}t}}
\]
with $\tilde{K} = 16 + 4K$. Pulling this exponential out and estimating all the rest in terms of $\delta$ we obtain, in both cases, all the integrals bounded by
$Ct\delta^{-d-4}$ and therefore we have, in both cases,
\[
|\nabla_x\nabla_x H_D(x,y,t)-\nabla_x\nabla_x G_{t}(x-y)|\le Ce^{-\frac{|x-y|^2}{\tilde{K}t}}t\delta^{-d-4}\le Ct^{-1-\frac{d}{2}}e^{-\frac{|x-y|^2}{\tilde{K}t}}
\]
because $t\le c\delta^2$. This proves (\ref{naxnaxb}).
For (\ref{cancel1}) and (\ref{cancel2}) we start by noticing that it is enough to prove the estimates
\begin{equation}
\int_{B(x, \frac{d(x)}{14})}|(\nabla_x+\nabla_y)H_D(x,y,t)|dy \le C t^{-\frac{1}{2}}e^{-\frac{d(x)^2}{Kt}}
\label{cone}
\end{equation}
and
\begin{equation}
\int_{B(x, \frac{d(x)}{14})}|\nabla_x(\nabla_x+\nabla_y)H_D(x,y,t)|dy \le C t^{-1}e^{-\frac{d(x)^2}{Kt}}
\label{ctwo}
\end{equation}
for $t<cd^2(x)$. Indeed, if $|x-y|\ge \frac{d(x)}{14}$, individual Gaussian upper bounds for up to two derivatives of $H_D$ suffice (there is no need for cancellations).
In order to prove (\ref{cone}) and (\ref{ctwo}) we use a good cutoff $\chi$ with a scale $\ell = \frac{d(x)}{100}$. We take $y\in B(x, \frac{d(x)}{14})$. Both $x$ and $y$ are fixed for now. We note that the function
\[
z\mapsto h(z) = \chi(z)G_t(z-y)
\]
solves
\[
(\partial_t-\Delta)h(z,t) = -\left[(\Delta\chi(z))G_t(z-y) +2(\nabla\chi(z))\cdot\nabla G_t(z-y)\right] = F(z,y,t),
\]
vanishes for $z\in \partial\Omega$, and has initial datum $h_0= \chi(z)\delta(z-y)$, so, by Duhamel
\[
h(z,t) = e^{t\Delta}h_0 + \int_0^t e^{(t-s)\Delta}F(s)ds,
\]
which, in view of $(e^{t\Delta}f)(z) = \int_{\Omega}H_D(z,w,t)f(w)dw$ yields
\[
\chi(z)G_t(z-y) = \chi(y)H_D(z,y,t) + \int_0^t\int_{\Omega}H_D(z,w,t-s)F(w,y,s)dwds
\]
for all $z$, and recalling that $\chi(x)=\chi(y) =1$, and reading at $z=x$ we have
\begin{equation}
H_D(x,y,t) = G_t(x-y) + \int_0^t\int_{\Omega}H_D(x,w,t-s)\left[\Delta\chi(w)G_s(w-y) +2\nabla\chi(w)\cdot\nabla G_s(w-y)\right]dwds.
\label{repc}
\end{equation}
The right hand side integral is not singular and can be differentiated because the support of $\nabla\chi$ is far from the ball $B(x,\frac{d(x)}{14})$. Differentiation $\nabla_x+\nabla_y$ cancels the Gaussian $G_t(x-y)$. The estimates of the right hand side
\[
\left|(\nabla_x+\nabla_y) \int_0^t \int_{\Omega}H_D(x,w,t-s)F(w,y,s)dwds\right| \le Ct^{-\frac{d+1}{2}}e^{-\frac{d(x)^2}{Kt}}
\]
and
\[
\left|\nabla_x(\nabla_x+\nabla_y) \int_0^t \int_{\Omega}H_D(x,w,t-s)F(w,y,s)dwds\right| \le Ct^{-{\frac{d+2}{2}}}e^{-\frac{d(x)^2}{Kt}}
\]
for $t<cd^2(x)$ follow from Gaussian upper bounds. Integration $dy$ over the ball
$B(x,\frac{d(x)}{14})$ picks up the volume of the ball, and thus (\ref{cone}) and
(\ref{ctwo}) are verified.
\section{Appendix 2}
We sketch here the proof of local well-posedness of the equation (\ref{sqg}).
We start by defining a Galerkin approximation. We consider the projectors
$P_n$
\begin{equation}
P_n f= \sum_{j=1}^nf_jw_j
\label{pn}
\end{equation}
with $f_j= \int_{\Omega}f(x)w_j(x)dx$. We consider for fixed $n$ the approximate system
\begin{equation}
\partial_t \theta_n + P_n\left(u_n\cdot\nabla\theta_n\right) + \l\theta_n = 0
\label{sqgn}
\end{equation}
where
\begin{equation}
u_n = \nabla^{\perp}\l^{-1}\theta_n = R_D^{\perp}\theta_n
\label{un}
\end{equation}
with
\begin{equation}
(P_n\theta_n)(x,t) = \theta_n(x,t) = \sum_{j=1}^n\theta_{n,j}(t)w_j(x)
\label{pnt}
\end{equation}
and with initial data $\theta_n(0) = P_n\theta_0$ where $\theta_0$ is a fixed smooth function belonging to $H_0^1(\Omega)\cap H^2(\Omega)$. Although it was written as a PDE, the system (\ref{sqgn}) is a system of ODEs for the coefficients $\theta_{n,j}(t)= \int_{\Omega}\theta_nw_jdx$. Let us note that
$P_n$ does not commute with $\nabla$ but does commute with $-\Delta$ and functions of it. The function $u_n$ is divergence-free and it is a finite sum of divergence-free functions,
\begin{equation}
u_n(x) = \sum_{j=1}^n\lambda_j^{-\frac{1}{2}}\theta_{n,j}(t)\nabla^{\perp}w_j(x).
\label{unwj}
\end{equation}
Note however that $u_n\notin P_nL^{2}(\Omega)$. The normal component of $u_n$ vanishes at the boundary because $\nabla^{\perp}w_j\cdot\nu\big|_{\partial\Omega} =0$.
Moreover, because $u_n$ is divergence-free and its normal component vanishes at the boundary,
\[
\int_{\Omega}P_n(u_n\cdot\nabla\theta_n)\theta_n dx = \int_{\Omega}(u_n\cdot\nabla\theta_n)\theta_ndx = \frac{1}{2}\int_{\Omega}u_n\cdot\nabla(\theta_n^2)\,dx = 0,
\]
and it follows that $\|\theta_n(t)\|_{L^2(\Omega)}$ is bounded in time and therefore the solution exists for all time. The following upper bound for higher norms is uniform only for short time, and it is the bound that is used for local existence of smooth solutions. We apply $\l^2=-\Delta$ to (\ref{sqgn}) and use the facts that it is a local operator and that it commutes with $P_n$ and with derivatives:
\begin{equation}
\partial_t\l^2\theta_n + P_n\left(u_n\cdot\nabla\l^2\theta_n -2\nabla u_n\nabla\nabla\theta_n + (\l^2 u_n)\cdot\nabla\theta_n\right) + \l^3\theta_n = 0
\label{lapsqgn}
\end{equation}
We take the scalar product with $\l^2\theta_n$. Because this is a finite linear combination of eigenfunctions, it vanishes at $\partial\Omega$ and integration by parts is allowed. We obtain
\begin{equation}
\begin{array}{l}
\frac{1}{2}\frac{d}{dt}\|\l^2\theta_n\|^2_{L^2(\Omega)} + \|\l^{\frac{5}{2}}\theta_n\|^2_{L^2(\Omega)}\\
\le\|\l^2u_n\|_{L^2(\Omega)}\|\l^2\theta_n\|_{L^2(\Omega)}\|\nabla\theta_n\|_{L^{\infty}(\Omega)} + 2\|\nabla u_n\|_{L^{\infty}(\Omega)}\|\nabla\nabla\theta_n\|_{L^2(\Omega)}\|\l^2\theta_n\|_{L^2(\Omega)}
\end{array}
\label{incomplete}
\end{equation}
We note now that
\begin{equation}
\l^2u_n = \sum_{j=1}^n\theta_{n,j}(-\Delta)\lambda_j^{-\frac{1}{2}}\nabla^{\perp}w_j=
\nabla^{\perp}\l^{-1}(\l^2\theta_n) = R_D^{\perp}(\l^2\theta_n).
\label{delun}
\end{equation}
Now $R_D$ is bounded in $L^2(\Omega)$ (it is in fact an isometry on components; this follows from (\ref{kat})), therefore
\begin{equation}
\|\l^2 u_n\|_{L^2{(\Omega)}} \le \|\l^2\theta_n\|_{L^2(\Omega)}.
\label{delunb}
\end{equation}
The operator $R_D$ is also bounded in $L^4(\Omega)$ (\cite{jerisonkenig}). Then
\begin{equation}
\|\l^2 u_n\|_{L^4{(\Omega)}} \le C\|\l^{2}\theta_n\|_{L^4(\Omega)}.
\label{dellunb}
\end{equation}
Moreover, it is known (see for instance \cite{cabre}) that in $d=2$ we have
\[
\|f\|_{L^4(\Omega)} \le C\|\l^{\frac{1}{2}}f\|_{L^2(\Omega)}
\]
and therefore
\begin{equation}
\|\Delta \theta_n\|_{L^{4}(\Omega)}\le C\|\l^{\frac{5}{2}} \theta_n\|_{L^2{(\Omega)}}
\label{delfourthn}
\end{equation}
and
\begin{equation}
\|\Delta u_n\|_{L^{4}(\Omega)}\le C\|\l^{\frac{5}{2}} \theta_n\|_{L^2{(\Omega)}}.
\label{delfourun}
\end{equation}
Now we use the Sobolev embedding
\begin{equation}
\|\nabla \phi\|_{L^{\infty}(\Omega)}\le C\left(\|\Delta\phi\|_{L^{4}(\Omega)} + \|\phi\|_{L^2(\Omega)}\right)
\label{sob}
\end{equation}
and deduce, using also a Poincar\'{e} inequality
\begin{equation}
\frac{d}{dt}\|\l^2\theta_n\|_{L^2(\Omega)}^2 + \|\l^{\frac{5}{2}}\theta_n\|^2_{L^2(\Omega)} \le C\|\l^2\theta_n\|_{L^2(\Omega)}^2\|\l^{\frac{5}{2}}\theta_n\|_{L^2(\Omega)}.
\label{complete}
\end{equation}
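Explicitly, with $X(t)=\|\l^2\theta_n\|^2_{L^2(\Omega)}$, the Young inequality $CX\|\l^{\frac{5}{2}}\theta_n\|_{L^2(\Omega)}\le \frac{1}{2}\|\l^{\frac{5}{2}}\theta_n\|^2_{L^2(\Omega)} + \frac{C^2}{2}X^2$ turns (\ref{complete}) into
\[
\frac{d}{dt}X + \frac{1}{2}\|\l^{\frac{5}{2}}\theta_n\|^2_{L^2(\Omega)} \le \frac{C^2}{2}X^2,
\quad{\mbox{and hence}}\quad X(t)\le \frac{X(0)}{1-\frac{C^2}{2}X(0)\,t},
\]
which stays bounded by $2X(0)$ for $t\le \left(C^2X(0)\right)^{-1}$.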
Integrating this differential inequality we deduce that
\begin{equation}
\sup_{t\le T} \|\l^2\theta_n\|_{L^2(\Omega)}^2 + \int_0^T\|\l^{\frac{5}{2}}\theta_n\|^2_{L^2(\Omega)}dt \le C\|\l^2\theta_0\|_{L^2(\Omega)}^2
\label{finn}
\end{equation}
holds for $T$ depending only on $\|\l^2\theta_0\|_{L^2(\Omega)}$, with a constant independent of $n$. The following result can now be obtained by passing to the limit in a subsequence and using an Aubin-Lions lemma (\cite{lions}):
\begin{prop} Let $\theta_0\in H_0^1(\Omega)\cap H^{2}(\Omega)$ in $d=2$. There exist $T>0$ and a unique solution of (\ref{sqg}) with initial datum $\theta_0$ satisfying
\begin{equation}
\theta\in L^{\infty}(0,T; H_0^1(\Omega)\cap H^{2}(\Omega))\cap L^2\left(0,T; {\mathcal{D}}\left(\l^{\frac{5}{2}}\right) \right).
\label{loc}
\end{equation}
\end{prop}
Higher regularity can be obtained as well. Because the proof uses $L^2$-based Sobolev spaces and Sobolev embedding, it is dimension dependent. A proof in higher dimensions is also possible, but it requires using higher powers of $\Delta$, and will not be pursued here.
{\bf{Acknowledgment.}} The work of PC was partially supported by NSF grant DMS-1209394
The determination of heavy quarkonium properties from QCD has always
been a major objective in high energy physics. In this respect, the
development of effective field theories (EFT) directly derived from
QCD like NRQCD~\cite{Caswell:1985ui} or pNRQCD~\cite{Pineda:1997bj}
(for a review see Ref.~\cite{Brambilla:2004jw}) has opened the door to
model independent determinations of heavy quarkonium properties.
Instrumental in this development is the fact that heavy quarkonium
systems can be considered to be non-relativistic (NR). They are then
characterized by, at least, three widely separated scales: hard (the
mass $m$ of the heavy quarks), soft (the relative momentum $|{\bf p}|
\sim mv \ll m$ of the heavy-quark--antiquark pair in the center
of mass frame), and ultrasoft (the typical kinetic energy $E \sim
mv^2$ of the heavy quark in the bound state system).
In this paper we focus on pNRQCD. This EFT takes full advantage of the
hierarchy of scales that appear in the system,
\begin{equation}
\label{hierarchy}
m \gg mv \gg mv^2 \cdots
\,,
\end{equation}
and makes a systematic and natural connection between quantum
field theory and the Schr\"odinger equation. Schematically the EFT
takes the form
\begin{eqnarray*}
\,\left.
\begin{array}{ll}
&
\displaystyle{
\left(i\partial_0-{{\bf p}^2 \over m}-V_s^{(0)}(r)\right)\Phi({\bf r})=0}
\\
&
\displaystyle{\ + \ \mbox{corrections to the potential}}
\\
&
\displaystyle{\ +\
\mbox{interactions with other low-energy degrees of freedom}}
\end{array} \right\}
{\rm pNRQCD}
\end{eqnarray*}
where $V_s^{(0)}(r)$ is the static potential and $\Phi({\bf r})$ is
the $Q$-$\bar{Q}$ wave function.
A major issue to be settled is to decide upon the precise form of
$V_s^{(0)}(r)$, in particular whether one works in the weak or strong
coupling regime and how to treat subleading terms. In the strict weak
coupling regime one could approximate the static potential by the
Coulomb potential $V_s^{(0)}(r)\simeq V_C = -C_F\, \alpha_{\rm s}/r$ and include
higher-order terms perturbatively. There seems to be growing consensus
that the weak coupling regime appears to work properly for $t$-$\bar
t$ production near threshold, the bottomonium ground state mass, and
bottomonium sum rules (for a recent discussion on this issue see
\cite{Pineda:2009zz}). One would then expect that other properties of
the bottomonium ground state like the hyperfine splitting or
electromagnetic decay widths could be described as well by the weak
coupling version of pNRQCD. However, in this case the situation is not
that clear. There has been a precise determination of the bottomonium
ground state hyperfine splitting using the renormalization group in
pNRQCD \cite{Kniehl:2003ap}. Nevertheless, the predicted value does
not agree well with the recently obtained experimental number
\cite{:2008vj,Bonvicini:2009hs}. Therefore the situation remains
unsettled. For the inclusive electromagnetic decays the convergence is
not very good \cite{Pineda:2006ri}. Even for top, higher-order
corrections to the normalization appear to be
sizable~\cite{Pineda:2006ri, Hoang:2003ns, Beneke:2005hg,
Beneke:2007gj}.
In principle, the novel feature of these observables (maybe more so
for the decays) compared to the heavy quarkonium ground state mass is a
bigger sensitivity to the value of the wave function at the origin and
to its relativistic corrections. Note that in this case the
relativistic corrections are divergent and their divergences have to
be absorbed by the matching coefficients of the effective theory:
potentials and current matching coefficients. If one considers the
decay ratio, the dependence on the wave function associated to the
static potential drops out and only the relativistic correction
survives. This makes the decay ratio the cleanest possible place on
which to quantify the importance of the relativistic corrections to
the wave function.
In Ref. \cite{Penin:2004ay} the decay ratios have been computed with
NNLL accuracy, accounting for the resummation of logarithms. The scale
dependence has greatly improved over fixed-order computations and the
result is much more stable. The convergence could be classified as
good for the top case, reasonable for the bottom, and not good for the
charm, although in all three cases the scale dependence of the
theoretical result was quite small. For the case of the charm there is
experimental data available, and the agreement with experiment
deteriorates when higher order corrections are introduced. On the
other hand there exists a nice analysis for charmonium in
Ref. \cite{Czarnecki:2001zc}, where they consider a potential model (a
Cornell-like one, yet compatible with perturbation theory at short
distances, since it is Coulomb-like in this regime) for the bound
state dynamics, but a tree-level perturbative potential for the
spin-dependence. They also correctly performed the matching in the
ultraviolet with QCD along the lines of what would be pNRQCD in the
strong coupling regime\footnote{Actually the whole computation would
fit into the strong coupling regime of pNRQCD except for the fact
that the spin-dependent potential is computed in perturbation
theory.}. Their net result was that they were able to obtain
consistency with experiment albeit with large errors. Unfortunately,
this result suffers from model dependence. In particular, since a
perturbative potential has been used for the spin-dependent potential,
it would have been more consistent to treat the static potential also
in a perturbative approach. In this respect, it has been shown in
Refs. \cite{Recksiegel:2001xq,Pineda:2002se,Lee:2002sn,Brambilla:2009bi}
that, once the renormalon cancellation is taken into account, the
inclusion of perturbative corrections to the static potential leads to
a convergent series and that this series gets closer to the lattice
values in the quenched approximation up to scales of around 1 GeV. It
is then natural to ask whether the inclusion of these effects may lead
to a better agreement in the case of charmonium and for sizable
corrections in the case of bottomonium and $t$-$\bar t$ production
near threshold. Note that in this
comparison between lattice and perturbation theory one has to go to
high orders to get good agreement. Therefore, a computation of the
relativistic correction based on the leading order expression for the
static potential, i.e. the Coulomb potential, as the one used in an
strict NNLL computation, may lead to large corrections, since these
corrections, as well as the wave function at the origin, could be
particularly sensitive to the shape of the potential.
Therefore, in this paper we reorganize the perturbative expansion and
consider the static potential exactly, whereas we treat the
relativistic terms as corrections. By doing so we expect to have an
effect similar to the one observed in Ref.~\cite{Czarnecki:2001zc}.
Including also the renormalization group improved expressions, we
expect to obtain results with only a modest scale dependence. The
explicit computation will confirm to a large extent these
expectations. We will be able to give an updated prediction for the
decay of the $\eta_b$ to two photons and obtain a result for the charm
decay ratio compatible with experiment (though in this last case with
rather large errors). Note that our computation is
completely based on a weak coupling analysis derived from QCD
and no non-perturbative input is introduced.
\section{Decay ratio}
The one-photon mediated processes are induced by the electromagnetic
current $j_\mu$, which has the following decomposition in
terms of operators constructed from the non-relativistic quark and
anti-quark two-component Pauli spinors $\psi$ and $\chi$ \cite{BBL}:
\begin{equation}
\bfm{j}=c_v(\mu)\psi^\dagger{\bfm\sigma}\chi+{d_v(\mu)\over6m_q^2}
\psi^\dagger\bfm{\sigma}\mbox{\boldmath$D$}^2\chi
+\ldots,
\label{vcurr}
\end{equation}
where $\mu$ is the
renormalization scale, $\bfm{D}$ is the covariant
derivative, ${\bfm\sigma}$ is the Pauli matrix, and the ellipsis stands
for operators of higher mass dimension. The Wilson coefficients $c_v$
and $d_v$ represent the contributions from the hard modes and may be
evaluated as a series in $\alpha_s$ in
full QCD for free on-shell on-threshold external (anti)quark fields.
We define the perturbative expansion through
\begin{eqnarray}
c_v(\mu) &=& \sum_{i=0}^\infty\left(\alpha_s(\mu)\over
\pi\right)^i c_v^{(i)}(\mu)\
\,, \qquad c_v^{(0)}=1\,,
\end{eqnarray}
and similarly for other coefficients. The coefficients $c_v^{(1)}$ and
$c_v^{(2)}$ have been computed in Refs.~\cite{KalSar} and
\cite{CzaMel1,BSS} respectively.
The operator responsible for the two-photon $S$-wave processes in the
non-relativistic limit is generated by the expansion of the product of
two electromagnetic currents and has the following representation~\cite{BBL}
\begin{equation}
O_{\gamma\gamma}=c_{\gamma\gamma}(\mu)\psi^\dagger\chi
+{d_{\gamma\gamma}(\mu)\over6m_q^2}
\psi^\dagger\mbox{\boldmath$D$}^2\chi
+\ldots,
\label{gcurr}
\end{equation}
which reduces to the pseudo-scalar current in the non-relativistic
limit. The coefficients $c_{\gamma\gamma}^{(1)}$ and
$c_{\gamma\gamma}^{(2)}$ have been computed in Refs.~\cite{HarBro} and
\cite{CzaMel2} (in semi-numerical form) respectively.
Let us define the spin ratio for the production and annihilation of
heavy quarkonium ${\cal Q}$ as
\begin{equation}
{\cal R}_q=
{\sigma(e^+e^- \rightarrow {\cal Q}(n^3S_1) )\over
\sigma(\gamma\gamma \rightarrow {\cal Q}(n^1S_0))}=
{\Gamma({\cal Q}(n^3S_1)\to e^+e^-)\over
\Gamma({\cal Q}(n^1S_0)\to \gamma\gamma)}\,.
\end{equation}
The effective theory expression for the spin ratio reads
\begin{equation}
{\cal R}_q={c_s^{\,2}(\mu)\over 3Q_q^2}
{|\psi_n^{v}(0)|^2\over|\psi_n^{p}(0)|^2}+{\cal O}(\alpha_s v^2)\,,
\label{Rdef}
\end{equation}
where $Q_q$ is the quark electric charge,
$c_s(\mu)=c_v(\mu)/c_{\gamma\gamma}(\mu)$, $\psi_n^{(v,p)}(\bfm{r})$
are the spin triplet (vector) and spin singlet (pseudo-scalar)
quarkonium wave functions with principal quantum number $n$. The wave
functions describe the dynamics of the non-relativistic bound state
and can be computed within pNRQCD. The latter is the
Schr\"odinger-like effective theory of potential (anti)quarks whose
energies scale like $m_qv^2$ and three-momenta scale like $m_qv$, and
their multipole interactions with the ultrasoft gluons~\cite{KniPen1,
BPSV2, Beneke:2007pj, Beneke:2008cr}. The contributions of hard and
soft modes in pNRQCD are represented by the perturbative and
relativistic corrections to the effective Hamiltonian, which is
systematically evaluated order by order in $\alpha_s$ and $v$ around
the leading order (LO) Coulomb approximation.
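As an elementary check of the normalization of \Eqn{Rdef}, note that at strict leading order the spin-dependent interaction is absent, so $\psi_n^{v}=\psi_n^{p}$ and $c_s=1$, and the ratio reduces to pure charge counting,
\begin{eqnarray*}
{\cal R}_q^{\rm LO}={1\over 3Q_q^2}\,,
\qquad
{\cal R}_c^{\rm LO}={\cal R}_t^{\rm LO}={3\over 4}\,,\qquad
{\cal R}_b^{\rm LO}=3\,.
\end{eqnarray*}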
\section{pNRQCD framework}
As we have mentioned before, the framework we use to compute the decay
ratio, and more specifically the wave function, is pNRQCD. For the
purposes of our paper the full setup of pNRQCD is not needed. We will
only need the static potential, $V_s^{(0)}(r)$, and the spin-dependent
potential ${V}^{(2)}_{S^2,s}(r)$. Furthermore, we will
reorganize the perturbative expansion. The static potential will be
treated exactly by including it in the leading-order Hamiltonian
\begin{eqnarray}
\label{H0}
H^{(0)}\equiv -\frac{{\bf \nabla}^2}{2m_r}+V^{(0)}_s(r),
\end{eqnarray}
where $m_r=m_1m_2/(m_1+m_2)$. On the other hand, the spin-dependent
potential (in $D= 1+d= 4-2\epsilon$ dimensions)
\begin{equation}
\label{DeltaH}
\Delta H=
\frac{V^{(2)}_{S^2,s}(\mu)}{m_1m_2}=
- \frac{4\pi C_F D^{(2)}_{S^2,s}}{d\, m_1m_2}\,
[{\bf S}_1^i,{\bf S}_1^j][{\bf S}_2^i,{\bf S}_2^j]
\delta^{(d)}({\bf r})
\end{equation}
is considered to be a perturbation to the result obtained with
$H^{(0)}$. Therefore, we distinguish between an expansion in $v$ and
$\alpha_{\rm s}$. $v$ has an expansion in $\alpha_{\rm s}$ itself but this expansion does
not converge quickly for these relativistic corrections. This remains
so even after the inclusion of the renormalon cancellation,
which has only a minor impact on the determination of the wave
function. This is the reason we choose to take the static potential
exactly.
As mentioned in the introduction there are different options on how
precisely to treat $V_s^{(0)}$ and we will discuss in
Section~\ref{sec:BrVs} the various options we consider. Roughly
speaking we will take the static potential up to NNNLO including also
the leading ultrasoft corrections. We will also need to define a scheme of
renormalon subtraction. Therefore, the general form of the static
potential will be
\begin{equation}
\label{VsRen}
V_s^{(0)}(r)=V_{SD}(r)+2\, \delta m_X
\,,
\end{equation}
where $\delta m_X$ represents a residual mass that encodes the pole
mass renormalon contribution and $X$ stands for the specific
renormalon subtraction scheme. We will show some specific examples in
Section~\ref{sec:RGIpot}. In \Eqn{VsRen}, $V_{SD}$ is the short
distance behavior of the static potential, which is independent of the
scheme for renormalon subtraction (even if we use a non-perturbative
potential). In momentum space it reads
\begin{equation}
\lim_{q \rightarrow \infty} \widetilde{V}^{(0)}_s(q)=\widetilde{V}_{SD}(q)
=
-\frac{4\pi C_F\,
\widetilde{\alpha}_{V^{(0)}_s}(q)}{{\bf q}^2},
\label{eq:StaticPotential}
\end{equation}
with $\widetilde{\alpha}_{V^{(0)}_s}(q) \sim \alpha_s(\mu)$ (for the
precise relation see \Eqn{eq:VtildeSDfo}), where $\alpha_s(\mu)$ is the
QCD coupling constant in the $\overline{\rm MS}$-scheme.
For the spin-dependent potential in momentum space we have
\begin{eqnarray}
\widetilde{V}^{(2)}_{S^2}(\mu)
&=&
- \frac{4\pi C_F D^{(2)}_{S^2,s}(\mu)}{d\,}\,
[{\bf S}_1^i,{\bf S}_1^j][{\bf S}_2^i,{\bf S}_2^j]
\nonumber \\
&=&
- \frac{4\pi C_F D^{(2)}_{S^2,s}(\mu)}{3}\,
\left(\frac{3}{2}-S^{\,2}
+\epsilon\left(\frac{9}{2}-\frac{8}{3}S^{\,2}\right)
\right)+ {\cal O}(\epsilon^2)\,,
\label{eq:spin_dep_pot}
\end{eqnarray}
where ${\bf S}_{1}$ and ${\bf S}_{2}$ are the spin operators for the heavy quark and
anti-quark, respectively, and
$D_{S^2,s}^{(2)}(\mu)=\alpha_s(\mu)+\ldots$. In the second line in
Eq.(\ref{eq:spin_dep_pot}) the spin projection has been done,
resulting in $S^2\equiv 0$ and 2 for spin-singlet and spin-triplet
states, respectively (this expression actually corresponds to the
regularization prescription of \cite{Czarnecki:2001zc} for the
spin-zero states). We have to keep the term of ${\cal O}(\epsilon)$
because the spin-dependent potential generates $1/\epsilon$
divergences. The renormalization procedure for these $1/\epsilon$ poles
will be discussed in the next section.
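The four-dimensional part of the projection in \Eqn{eq:spin_dep_pot} can be checked directly from the spin algebra: using $[{\bf S}^i,{\bf S}^j]=i\epsilon^{ijk}{\bf S}^k$ and $\epsilon^{ijk}\epsilon^{ijl}=2\delta^{kl}$ one finds
\begin{eqnarray*}
[{\bf S}_1^i,{\bf S}_1^j][{\bf S}_2^i,{\bf S}_2^j]
=-\epsilon^{ijk}\epsilon^{ijl}\,{\bf S}_1^k{\bf S}_2^l
=-2\,{\bf S}_1\cdot{\bf S}_2
=\frac{3}{2}-S^{\,2}\,,
\end{eqnarray*}
where the last step uses ${\bf S}_1\cdot{\bf S}_2=\frac{1}{2}\left(S^{\,2}-\frac{3}{2}\right)$ for two spin-$\frac{1}{2}$ particles; the ${\cal O}(\epsilon)$ terms depend on the prescription used to continue the spin algebra to $d$ dimensions.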
\section{Wave function ratio}
We now turn to the computation of
\begin{eqnarray}
\label{rho_n}
\frac{|\psi_n^{v}(0)|^2}{|\psi_n^{p}(0)|^2} &\equiv&
\rho_n(\mu)\,\,\equiv\,\,1+\delta \rho_n(\mu)
\,.
\end{eqnarray}
Applying Rayleigh-Schr\"odinger perturbation theory to the problem we
obtain
\begin{eqnarray}
\psi^{v/p}_n(0)
&=&
\psi_{n}^{(0)}(0)
-
\widehat{G}(E^{(0)}_n)
\frac{\widetilde{V}^{(2)}_{S^2}(\mu)}{m_1m_2}\, \psi_n^{(0)}(0)\,
+ {\cal O}\left(\big(\tilde V_{S^2}^{(2)}\big)^2\right), \,
\label{eq:psi_at_0}
\end{eqnarray}
where $\psi_n^{(0)}(0)$ is the wave function for the LO Hamiltonian
$H^{(0)}$ and $\widehat{G}(E^{(0)}_n)$ is the reduced Green function
at $E=E^{(0)}_n$, which is defined by
\begin{equation}
\widehat{G}(E^{(0)}_n)
\equiv
\sum_m{}^{\prime}
\frac{|\psi^{(0)}_m(0)|^2}{E^{(0)}_m-E^{(0)}_n}
=
\lim_{E\rightarrow E^{(0)}_n}
\bigg(
G(E)-\frac{|\psi^{(0)}_n(0)|^2}{E^{(0)}_n-E}
\bigg)\, .
\label{eq:Gred}
\end{equation}
The prime indicates that the sum does not include the state $n$ and
\begin{equation}
G(E) = G(0,0;E)
\equiv
\lim_{r \rightarrow 0}G(r,r;E)
=
\lim_{r \rightarrow 0}\langle {\bf r}| \frac{1}{H^{(0)}-E-i 0}|{\bf r} \rangle
\,
\end{equation}
is the zero-distance limit of the Green function $G(r,r';E)$, which is
the solution of the Schr\"odinger equation
\begin{eqnarray}
&&
\bigg[-\frac{\mbox{\boldmath $\nabla$}^2}{2m_r}+V_s^{(0)}(r)-E \bigg] G(r, r'; E)
=\delta({\bf r}-{\bf r}').
\end{eqnarray}
The short distance behavior of the static potential $V_s^{(0)}(r)\sim
1/r$ makes $G(E)$ and, therefore, $\delta\rho_n$ divergent. Thus we
will need to regularize the Green function and we will deal with two
different ways to do this: dimensional regularization and finite-$r$
regularization. We start by considering the former and will come back
to finite-$r$ regularization in the next section.
The divergences in $\delta \rho_n$ are cancelled by divergences in the
Wilson coefficient $c^2_s(\mu)$. Since the latter have been computed
in dimensional regularization we will need $G(E)$ in dimensional
regularization as well. We denote the corresponding bare and reduced
Green functions by $G^{(D)}(E) =G^{(D)}(0,0;E)$ and
$\widehat{G}^{(D)}(E^{(0)}_n)$ respectively. We remark that the LO wave
functions (corresponding to $H^{(0)}$) are finite, thus
$|\psi^{(0)(D)}_n(0)|^2=|\psi^{(0)(4)}_n(0)|^2\equiv
|\psi_n^{(0)}(0)|^2$.
Using Eqs.~(\ref{eq:spin_dep_pot})--(\ref{eq:psi_at_0}), the bare
expression of $\delta \rho_n(\mu)$ in dimensional regularization can
be written as
\begin{eqnarray}
&&
\delta \rho_n^{(D)}(\mu)
=
-\frac{16\pi C_F}{3 m_1m_2}
D_{S^2,s}^{(2)}(\mu)
\left(1 +\frac{8}{3}\,\epsilon+{\cal O}(\epsilon^2)\right)
\widehat{G}^{(D)}(E^{(0)}_n).
\label{eq:rho_formula}
\end{eqnarray}
In order to obtain the $\overline{\rm MS}$-renormalized expression of $\delta
\rho_n$, we need to identify the divergences of
$\widehat{G}^{(D)}(E^{(0)}_n)$. They are the same as those of $G^{(D)}(E)$,
are independent of $E$, and can be computed order by order in
perturbation theory, since they are related to the short distance
behavior of the Green function. We thus parameterize the divergent and
finite terms of $G^{(D)}(E)$ and $\widehat{G}^{(D)}(E^{(0)}_n)$ as
\begin{eqnarray}
G^{(D)}(E)&=&\frac{m_r}{2\pi}
\bigg[A_{\overline{\rm MS}}^{(D)}(\epsilon;\mu)+B_{V_s^{(0)}}^{\overline{\rm MS}}(E;\mu)\bigg]\,,
\label{eq:GD}
\\
\widehat{G}^{(D)}(E^{(0)}_n)&=&\frac{m_r}{2\pi}
\bigg[A_{\overline{\rm MS}}^{(D)}(\epsilon;\mu)+\widehat{B}_{V_s^{(0)}}^{\overline{\rm MS}}(E^{(0)}_n;\mu)\bigg]\,,
\label{eq:GDhat}
\end{eqnarray}
where $B_{V^{(0)}_s}^{\overline{\rm MS}}(E;\mu)$ and
$\widehat{B}_{V^{(0)}_s}^{\overline{\rm MS}}(E^{(0)}_n;\mu)$ are finite in 4 dimensions,
but contain terms to all orders in $\alpha_s/v$, since the bound-state
dynamics needs all order resummation in $\alpha_s$. As will be shown in
\Eqn{GCbare}, the ultraviolet divergent part can be expressed in terms
of the (dimensionful) bare coupling $g^2\equiv 4\pi \alpha_{\rm s}\,
\mu^{2\epsilon}$ as
\begin{equation}
A^{(D)}_{\overline{\rm MS}}(\epsilon;\mu)
=\frac{g^2\, C_F\, m_r}{8\pi\epsilon}
\left(\frac{\mu^2 e^{\gamma_E}}{4\pi}\right)^{-2\epsilon}+{\cal O}(\alpha_s^2)
\, .
\label{eq:AD}
\end{equation}
$A_{\overline{\rm MS}}^{(D)}$ will be removed by renormalization. This has to be
done consistently with the calculation of the other parts, order by order
in the expansion in $\alpha_s$, in a given subtraction scheme (in our case
$\overline{\rm MS}$). The divergences
are then absorbed in $c_s$ and we can write
\begin{eqnarray}
&&
\delta \rho_n^{\overline{\rm MS}}(\mu)
=
-\frac{8 m_r C_F}{3 m_1m_2}
D_{S^2,s}^{(2)}(\mu)
\left(\widehat{B}_{V^{(0)}_s}^{\overline{\rm MS}}(E^{(0)}_n;\mu) +
\frac{4}{3}m_rC_F\alpha_{\rm s}+{\cal O}(\alpha_{\rm s}^2)\right).
\label{eq:rho_formulaMS}
\end{eqnarray}
This will have to be combined with the $\overline{\rm MS}$ subtracted matching
coefficient $c_s^2(\mu)$ in \Eqn{Rdef} to obtain the decay ratio.
\section{Green Function in position space}
\label{Gr}
The main goal of the present paper is to compute
$\widehat{G}^{(D)}(E^{(0)}_n)$ or, equivalently,
$\widehat{B}_{V^{(0)}_s}^{\overline{\rm MS}}(E^{(0)}_n;\mu)$, with the effect of the
static potential included exactly. This calls for a numerical
evaluation of the Green function rather than pursuing an analytic
approach. Numerical calculations are most conveniently performed in
coordinate space. It is here where finite-$r$ regularization comes
into play. In Section~\ref{Gr:reg} we will discuss this regularization
and in Section~\ref{Gr:match} we show how to convert the Green
function obtained in finite-$r$ regularization by matching into the
one in dimensional regularization.
\subsection{Regularization of the Green function in position space}
\label{Gr:reg}
The zero-distance Green function with finite-$r$ regularization is
simply defined as $G^{(r)}(E)\equiv G(r_0, r_0; E)$, where $r_0\ll
1/(m\alpha_s)$. In order to compute it, we first have to describe how
to obtain a numerical solution for the Green function $G(r,r';E)$ in
general, given the static potential $V^{(0)}_s(r)$. In fact, the
whole procedure remains valid for a generic potential (not unbounded
from below at long distances) that has the correct, perturbative,
short distance limit\footnote{This opens the possibility of using the
same formalism for pNRQCD in the strong coupling regime but then we
should also consider a non-perturbative potential in
Eq.~(\ref{DeltaH}), albeit with the correct short distance limit.}.
According to \Eqn{VsRen}, renormalon-associated effects are power
suppressed. Therefore, they will not affect properties associated with
the $r \rightarrow 0$ limit of the potential.
For the class of potentials described above, the Green function
$G(r,r'; E)$ can be constructed from the two independent solutions
$u_<(r), u_>(r)$ of the homogeneous Schr\"odinger equation (our
approach follows Ref.~\cite{Strassler:1990nw}, see also
Ref.~\cite{Melnikov:1998pr})
\begin{eqnarray}
\left[\frac{d^2}{dr^2} + 2m_r \left(E-V_s^{(0)}(r)\right)\right]\,u(r)=0.
\label{eq:diff_eq_for_u}
\end{eqnarray}
Here $u(r)$ represents $u_<(r)$ or $u_>(r')$, which are the solutions
to \Eqn{eq:diff_eq_for_u} that are regular for $r\rightarrow 0$ and
$r'\rightarrow \infty$ respectively. The angular-momentum term is
dropped in the Schr\"odinger equation since only the S-wave
contributes in the limit $r,r'\rightarrow 0$ taken later. The Green
function is written as
\begin{eqnarray}
G(r,r'; E)=\left(\frac{m_r}{2\pi}\right)\,\frac{u_<(r)}{r}\,\frac{u_>(r')}{r'}
\hspace{1cm} \mbox{for}~~ r < r'.
\end{eqnarray}
The numerical solution at finite $r$ is obtained by solving the
Schr\"odinger equation with boundary conditions at short distances.
To this end we prepare two independent solutions $u_{0}(r)$ and
$u_{1}(r)$ that are defined by the following initial conditions: For
$u_1(r)$, which we will call the regular solution we set
\begin{equation}
u_1(0)=0 \qquad {\rm and} \qquad u_1^{\prime}(0)=1
\end{equation}
so that
\begin{equation}
u_1(r)= r + {\cal O}(r^2).
\end{equation}
This completely fixes $u_1(r)$.
For the non-regular solution, $u_0(r)$, we cannot proceed in the same
way. Whereas we can still take $u_0(0)=1$, we cannot define
$u_0^{\prime}(0)$, as it becomes singular. Therefore, we first define
$u_0^{\prime}$ for small values of $r$ in the following way
\begin{eqnarray}
u'_0(r)&=&C_0(r_c)+2m_r\,\int_{r_c}^r\,dr'\, V_{SD}(r')+{\cal O}(r),
\end{eqnarray}
where $C_0(r_c)$ is an integration constant. Note that $r_c > 0$ acts
as a cutoff that avoids the singularity of $V_{SD}(r')$ at the
origin. The total solution then reads (at short distances)
\begin{equation}
u_0(r)=1+C_0(r_c)\, r+2m_r\int_0^rdr^{\prime}
\int_{r_c}^{r^{\prime}}dr^{\prime\prime} V_{SD}(r^{\prime\prime})+{\cal O}(r^2)
\,.
\end{equation}
This expression can be rewritten as
\begin{equation}
u_0(r) = 1 + C_0(r_c)\, r +
2m_r r \bigg\{ \int_{r_c}^{r}dr' V_{SD}(r')
- \int_{0}^{r} dr' \frac{r' V_{SD}(r')}{r}
\bigg\} +{\cal O}(r^2).
\label{eq:bc_for_u}
\end{equation}
The values $u_{0,1}(r)$ and the derivatives $u'_{0,1}(r)$ at small $r$
are used as boundary conditions to solve the differential equation
numerically, for instance by the Runge-Kutta method.
For later convenience we take $r_c=1/(\mu e^{\gamma_E})$ and fix
\begin{eqnarray}
C_0(r_c)
=
-\frac{2m_r}{r_c}\int_0^{r_c}dr'\int_{r_c}^{r'}dr^{\prime\prime}
V_{SD}(r^{\prime\prime})
=
2m_r\int_{0}^{r_c} dr'\, \frac{r' V_{SD}(r')}{r_c}\,.
\label{eq:C0}
\end{eqnarray}
With this choice the ${\cal O}(r)$ term of \Eqn{eq:bc_for_u} for $u_0$
is a function of $\ln\left(\mu e^{\gamma_E} r\right)$ with no
log-independent terms
\begin{equation}
u_0(r)
=
1+2m_r r\sum_{n=1}^{\infty}v_n\ln^n(\mu e^{\gamma_E}r)+{\cal O}(r^2)
\,.
\end{equation}
The coefficients $v_n$, which can be written as an expansion in powers
of $\alpha_{\rm s}(\mu)$, only depend on the coefficients $a_n$ of $V_{SD}$ (see
\Eqn{VSDfo}), i.e. only on the pure short distance behavior of the
static potential. This choice will turn out to be very convenient for
the conversion to dimensional regularization, but the final result for
$G(r,r;E)$ does not depend on this choice.
From the two solutions $u_{0}(r)$ and $u_{1}(r)$ we can construct
$u_>(r)$ and $u_<(r)$ as follows: First the solution at short distance
$u_<(r)$ is identified as
\begin{eqnarray}
u_<(r) &=& u_1(r)\,,
\end{eqnarray}
because $\lim_{r\rightarrow 0} u_1(r)=0$. The other solution which
satisfies $\lim_{r\rightarrow \infty} u_>(r)=0$ is given by
\begin{eqnarray}
u_>(r) &=&
u_0(r) + B^{(r)}_{V_s^{(0)}}(E)\, u_1(r)\,,
\\
B^{(r)}_{V_s^{(0)}}(E)&=&-\lim_{r\rightarrow \infty}\,
\left\{\,u_0(r)/u_1(r)\,\right\}\,.
\end{eqnarray}
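Continuing the illustrative sketch above, $B^{(r)}_{V_s^{(0)}}(E)$ can be read off from the plateau of $-u_0(r)/u_1(r)$ at large $r$, for any below-threshold energy away from the bound-state poles:
\begin{verbatim}
# Continues the sketch above: B^(r)(E) from the large-r limit of -u0/u1.
# E_test is an illustrative energy between the n=1 and n=2 Coulomb levels.
def B_r(E, r_match=25.0):
    u0, u1 = integrate(E, r_end=r_match + 5.0)
    return -u0.sol(r_match)[0]/u1.sol(r_match)[0]

E_test = -0.5*m_r*(CF*alpha_s)**2/1.5**2
print(B_r(E_test, 20.0), B_r(E_test, 30.0))  # the plateau signals convergence
\end{verbatim}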
The boundary conditions do not completely fix $u_0$: we may always
admix a $u_1$-component into $u_0(r)$. However, the precise choice of
$u_0(r)$ does not affect $u_>(r)$ because of the invariance under
$u_0(r)\rightarrow u_0(r)+\kappa\, u_1(r)$ with $\kappa$ being an
arbitrary constant. The zero-distance Green function with finite-$r$
regularization is then obtained as
\begin{eqnarray}
G^{(r)}(E)
&=&
\frac{m_r}{2\pi}
\bigg[ A^{(r)}(r_0;\mu)+B_{V_s^{(0)}}^{(r)}(E;\mu)\bigg]\, ,
\label{eq:Gr}
\\
A^{(r)}(r_0;\mu)
&=&
\frac{u_0(r_0)}{r_0}
=
\frac{1}{r_0} -2m_rC_F\alpha_s \ln\left(\mu\, e^{\gamma_E}r_0\right)
+{\cal O}(\alpha_s^2)\,,
\label{eq:Ar}
\end{eqnarray}
where the last equality is a good approximation for $\mu
e^{\gamma_E}r_0 \sim 1$. $A^{(r)}(r_0;\mu)$ encodes the divergence of
$G^{(r)}(E)$ and plays the role of the $1/\epsilon$ pole of
$G^{(D)}(E)$. It is energy independent because it is related to the
overall divergence of the Green function. Nevertheless, note that
according to \Eqn{eq:Gr} we define $B_{V_s^{(0)}}^{(r)}(E;\mu)$ by
subtracting exactly $u_0(r_0)/r_0$, which, depending on the potential,
will include terms with arbitrary powers of $\alpha_{\rm s}$. $B_{V_s^{(0)}}^{(r)}(E;\mu)$
is computed numerically by solving the Schr\"odinger equation and is
independent of the regulator $r_0$. Note however that it is scheme
dependent, i.e. it depends on the specific condition we use for
$u_0'(r_0)$. This dependence cancels between $A^{(r)}$ and
$B_{V_s^{(0)}}^{(r)}$ such that $G^{(r)}(E)$ is independent of the
specific choice for $u_0'(r_0)$. In analogy to \Eqn{eq:GDhat} we also
define
\begin{equation}
\widehat{G}^{(r)}(E^{(0)}_n)=\frac{m_r}{2\pi}
\bigg[A^{(r)}(r_0;\mu)+\widehat{B}_{V_s^{(0)}}^{(r)}(E^{(0)}_n;\mu) \bigg]\, .
\label{eq:Grhat}
\end{equation}
Finally we remark that $B_{V_s^{(0)}}^{(r)}$ is independent of the
renormalon subtraction scheme used, since $\widehat{G}^{(r)}(E^{(0)}_n)$ and
$A^{(r)}(r_0;\mu)$ are; the latter by the definition used in this paper.
\subsection{Conversion to the $\overline{\rm MS}$ scheme}
\label{Gr:match}
Once we have the zero-distance Green function $G^{(r)}(E)\equiv G(r_0,
r_0; E)$, where $r_0\ll 1/(m\alpha_s)$, or more precisely
$B_{V_s^{(0)}}^{(r)}$, we have to convert the result, by matching, into
the dimensionally regularized $B_{V^{(0)}_s}^{\overline{\rm MS}}$, in order
to be able to use \Eqn{eq:rho_formulaMS}. We define the difference
\begin{equation}
c^{\overline{\rm MS}}_r=
B^{\overline{\rm MS}}_{V_s^{(0)}}(E;\mu) - B^{(r)}_{V_s^{(0)}}(E;\mu) =
\widehat{B}^{\overline{\rm MS}}_{V_s^{(0)}}(E^{(0)}_n;\mu) -
\widehat{B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n;\mu) \, .
\label{eq:BD_Br}
\end{equation}
This difference between the schemes can be accounted for by a finite
($r_0$ and $\epsilon$ independent) constant. We also use the fact
that the ultraviolet divergent term of the Green function is energy
independent. This means that $c_r^{\overline{\rm MS}}$ is energy independent and its
perturbative expansion is short distance dominated and can be computed
order by order in $\alpha_{\rm s}$.
Note that $c_r^{\overline{\rm MS}}$ does not depend on the long distance behavior of
$V_s^{(0)}$, only on its short distance behavior, which is universal
and dictated by perturbation theory, i.e. by
\Eqn{eq:StaticPotential}. In particular, the result is independent of
the pole mass renormalon. Therefore, the value obtained for
$c_r^{\overline{\rm MS}}$ holds true for a general potential (not unbounded from
below at long distances) that has the correct short distance limit.
We first consider the lowest order approximation of the static potential,
the Coulomb potential
\begin{equation}
V_s^{(0)}\simeq V_C=-C_F\frac{\alpha_{\rm s}(\mu)}{r}
\,.
\end{equation}
The exact solution for this potential, the Coulomb Green function, is
known in dimensional regularization and can be expressed in terms of
$\lambda\equiv C_F\, \alpha_s/\sqrt{-2 E/m_r}$ as
\begin{equation}
G_c^{(D)}(E)=\frac{g^2\, C_F\, m^2_r}{4\pi^2}
\left(\frac{-8m_r E}{4\pi e^{-\gamma_E}}\right)^{-2\epsilon}
\bigg[
\frac{1}{4 \epsilon}
-\frac{1}{2\lambda}+\frac{1}{2}-\gamma_E-\psi(1-\lambda)
+{\cal O}(\epsilon) \bigg]
\label{GCbare}
\,.
\end{equation}
According to \Eqn{eq:GD} we thus get
\begin{equation}
\label{eq:BMSC}
B^{\overline{\rm MS}}_{V_C} = 2m_r\,C_F\,\alpha_s
\left(
-\frac{1}{2\lambda}
- \frac{1}{2} \ln\left(\frac{-8 m_r E}{\mu^2}\right)
+ \frac{1}{2} - \gamma_E - \psi(1-\lambda)
\right)
\,.
\end{equation}
Turning to finite-$r$ regularization the Coulomb Green function
reads
\begin{eqnarray}
G_c^{(r)}(E)
&=&
\frac{m_r^2 C_F\,\alpha_s}{\pi}
\bigg[
\frac{1}{2\, m_r C_F\,\alpha_s r_0}
-\ln \left(\mu e^{\gamma_E} r_0\right)
\nonumber
\\
&&\hspace{1cm}
-\frac{1}{2\lambda} - \frac{1}{2} \ln\left(\frac{-8 m_r E}{\mu^2}\right)
+ 1
- \gamma_E - \psi(1-\lambda)
\bigg]\,,
\label{eq:rGreenFunc1}
\end{eqnarray}
whereas $u_0(r_0)$ for the Coulomb case is given by
\begin{equation}
u_0(r_0) = 1 - 2m_r\,r_0\, C_F\,\alpha_s\,
\ln\left(\mu e^{\gamma_E}r_0\right)\, .
\end{equation}
The expression terminates at ${\cal O}(\alpha_{\rm s})$: in the Coulomb
approximation there are no ${\cal O}(\alpha_{\rm s}^2)$ terms in
$u_0(r_0)$ and, therefore, none in $A_{V_C}^{(r)}$. Using \Eqn{eq:Gr} we then find
\begin{equation}
\label{eq:BRC}
B_{V_C}^{(r)}(E) =
2 m_r\,C_F\,\alpha_s
\left[
-\frac{1}{2\lambda}
- \frac{1}{2} \ln\left(\frac{-8 m_r E}{\mu^2}\right)
+ 1 - \gamma_E - \psi(1-\lambda)
\right]
\,.
\end{equation}
Note that in a strict NNLO or NNLL computation of the decay ratio
this would be the only term that should be considered.
Thus we compute $c^{\overline{\rm MS}}_r$ in an expansion in $\alpha_s$ and
obtain\footnote{If the constant $e^{\gamma_E}$ were not introduced in
\Eqn{eq:Ar}, $c^{\overline{\rm MS}}_r$ would read
\begin{equation}
\label{eq:CDbis}
c^{\overline{\rm MS}}_r=-2m_rC_F\alpha_{\rm s}\left(\frac{1}{2}-\gamma_E\right)
+{\cal O}(\alpha_{\rm s}^2)
\,.
\end{equation}}
\begin{equation}
c^{\overline{\rm MS}}_r=-2m_r\frac{C_F\alpha_s}{2}+{\cal O}(\alpha_s^2).
\label{eq:CD}
\end{equation}
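The value in \Eqn{eq:CD} can be cross-checked from the analytic Coulomb expressions: the following sketch (with the same placeholder parameters as above) verifies numerically that the difference of \Eqn{eq:BMSC} and \Eqn{eq:BRC} is energy independent and equals $-m_rC_F\alpha_s$:
\begin{verbatim}
# Numerical cross-check (illustrative parameters) that B^MSbar - B^(r) for
# the Coulomb potential is the energy-independent constant of eq. (CD).
import numpy as np
from scipy.special import digamma

CF, alpha_s, m_r, mu, gammaE = 4.0/3.0, 0.3, 2.4, 2.0, np.euler_gamma

def B_Coulomb(E, const):              # const = 1/2 (MSbar) or 1 (finite-r)
    lam = CF*alpha_s/np.sqrt(-2.0*E/m_r)
    return 2*m_r*CF*alpha_s*(-1/(2*lam) - 0.5*np.log(-8*m_r*E/mu**2)
                             + const - gammaE - digamma(1.0 - lam))

for E in (-0.05, -0.2, -0.8):         # energies away from the Coulomb poles
    print(B_Coulomb(E, 0.5) - B_Coulomb(E, 1.0), -m_r*CF*alpha_s)
\end{verbatim}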
This constant can also be obtained from the difference between
dimensional- and $r$-regularized computations at finite order in
$\alpha_{\rm s}$. At the lowest order it corresponds to the computation of one-
and two-loop contributions to the Green function in both schemes, by
considering the difference of their renormalized pieces. We have
checked in an explicit calculation that \Eqn{eq:CD} is reproduced by
the difference of two-loop contributions.
In order to obtain the ${\cal O}(\alpha_{\rm s}^2)$ corrections to
$c^{\overline{\rm MS}}_{r}$ one has to include the ${\cal O}(\alpha_{\rm s}^2)$ corrections
to the static potential and compute the associated corrections to the
Green function in both schemes. In principle, this is possible and
partial results can be found in the literature. Nevertheless, this
would go beyond the aim of this work, since it would produce
corrections that are anyway unmatched by the precision of the hard
matching coefficient.
Finally, for a general potential with the right short distance
structure, we can combine \Eqn{eq:rho_formulaMS} with
\Eqns{eq:BD_Br}{eq:CD} and write
\begin{equation}
\label{deltaMSKiyo}
\delta \rho_n^{\overline{\rm MS}}(\mu) =
-\frac{8 m_r C_F}{3 m_1m_2}
D_{S^2,s}^{(2)}(\mu)
\left(\widehat{B}_{V^{(0)}_s}^{(r)}(E^{(0)}_n;\mu) +
\frac{1}{3} m_rC_F\,\alpha_{\rm s}+{\cal O}(\alpha_{\rm s}^2)\right).
\end{equation}
Once we know the $\overline{\rm MS}$ expression we can also write $\delta \rho_n$
in different schemes. For instance, in the ``hard-matching'' scheme used
in Ref. \cite{Penin:2004ay} we have
\begin{equation}
\label{deltaHMKiyo}
\delta \rho_n^{HM}(\mu) =
-\frac{8 m_r C_F}{3 m_1m_2}
D_{S^2,s}^{(2)}(\mu)
\left(\widehat{B}_{V^{(0)}_s}^{(r)}(E^{(0)}_n;\mu) -
2m_rC_F\,\alpha_{\rm s}+{\cal O}(\alpha_{\rm s}^2)\right),
\end{equation}
which will be relevant afterwards.
These results enable us to compute the decay ratio in terms of
$\widehat{B}_{V^{(0)}_s}^{(r)}$, whose determination will be discussed
in the next section.
\section{Determination of $\widehat{B}^{(r)}_{V_s^{(0)}}$}
\label{sec:BrVs}
In this section we determine
$\widehat{B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n)$ in several approximation
schemes for $V_s^{(0)}$. We have already mentioned that our idea is
to treat the static potential exactly, yet we only know its expression
up to three loops. There is some freedom in how this truncation is
performed. This produces a class of potentials to study, which
introduces some scheme and scale uncertainties. As we have stressed
in previous sections, the analysis applies to any arbitrary potential
with the correct short distance behavior and not unbounded from below
at long distances. Therefore, in what follows we will consider
different approximations to the static potential. One quality that
they have in common is the renormalon cancellation. We have to
preserve renormalon cancellation between the static potential and the
pole mass of the heavy quark. At the same time we will be forced to
consider the resummation of logarithms to reproduce the correct
behavior of the potential at short distances. Thus we will have to
devise schemes where both the resummation of the logarithms and the
renormalon cancellation are achieved order by order in the perturbative
expansion. We illustrate this discussion in the following sections,
where we show the determination of $B^{(r)}_{V_s^{(0)}}$ using
the Coulomb potential, the static potential at different orders in
$\alpha_{\rm s}(\mu_s)$, or the static potential at different orders in
$\alpha_{\rm s}(1/r)$. In this last case we will use different schemes with
renormalon cancellation. The dependence on the scheme of renormalon
subtraction (potential) may give an estimate of the error, since it is
also a measure of the dependence on the long distance behavior of the
potential.
Finally, let us note as well that, in order for our computation to
make sense, the successive approximations to the static potential
should be convergent (or at least small) themselves. We will check
this convergence in this section.
\subsection{Coulomb potential}
If we approximate the static potential by the Coulomb potential $V_C$
we can get an analytic solution for $\widehat{B}^{\overline{\rm MS}}_{V_C}$ by
directly working in dimensional regularization. Expanding
$G_c^{(D)}(E)$ as given in \Eqn{GCbare} around its poles at $E^{(0)}_n
\equiv - m_r C_F^2 \alpha_{\rm s}^2/(2n^2)$ we can write
\begin{equation}
\label{Greenorig}
G^{(D)}_c(E)
= -\frac{\alpha_s\, C_F\, m_r^2}{\pi}
\frac{2\, E^{(0)}_n}{n(E^{(0)}_n-E)}
+\widehat G_c^{(D)}(E^{(0)}_n) + {\cal O}(E-E^{(0)}_n)
\end{equation}
with
\begin{equation}
\widehat G_c^{(D)}(E^{(0)}_n) =
\frac{g^2\, C_F\, m_r^2}{4\pi^2}
\left(\frac{-8m_rE^{(0)}_n}{4\pi e^{-\gamma_E}}\right)^{-2\epsilon}
\bigg[
\frac{1}{4\epsilon}
+ \frac{1}{2} - \gamma_E + \frac{1}{n}-\psi(n)
+{\cal O}(\epsilon) \bigg]\, .
\end{equation}
Comparing to \Eqn{eq:GDhat} we obtain
\begin{equation}
\label{BdCoulomb}
\widehat B_{V_C}^{\overline{\rm MS}}(E^{(0)}_n)
=
2m_rC_F\alpha_{\rm s}
\left(
- \frac{1}{2} \ln \frac{-8 m_r E^{(0)}_n}{\mu^2}
+ \frac{1}{2} - \gamma_E + \frac{1}{n}-\psi(n)
\right)
\end{equation}
and thus $\delta \rho_n^{\overline{\rm MS}}(\mu)$ in the Coulomb approximation
directly from \Eqn{eq:rho_formulaMS}.
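For orientation, \Eqn{BdCoulomb} is straightforward to evaluate numerically; the short sketch below uses the same placeholder parameters as in the previous sections (recall $\psi(1)=-\gamma_E$ for the ground state):
\begin{verbatim}
# Illustrative evaluation of eq. (BdCoulomb) for the lowest levels.
import numpy as np
from scipy.special import digamma

CF, alpha_s, m_r, mu, gammaE = 4.0/3.0, 0.3, 2.4, 2.0, np.euler_gamma
for n in (1, 2, 3):
    En = -m_r*(CF*alpha_s)**2/(2.0*n**2)
    Bhat = 2*m_r*CF*alpha_s*(-0.5*np.log(-8*m_r*En/mu**2)
                             + 0.5 - gammaE + 1.0/n - digamma(n))
    print(n, En, Bhat)
\end{verbatim}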
\subsection{Fixed order $V_s^{(0)}$: $\alpha_s(\mu_s)$ expansion}
\label{sec:alsmuexp}
The standard way to go beyond the Coulomb potential approximation for
the static potential is to make an expansion in $\alpha_{\rm s}(\mu_s)$. Thus we
write
\begin{equation}
\widetilde{V}_{SD}(q) =
-\frac{4\pi C_F\, \alpha_s(\mu_s)}{{\bf q}^2}
\,
\bigg(
1+ \sum_{n=1}^{\infty}\bigg(\frac{\alpha_s(\mu_s)}{4\pi}\bigg)^n\,
\widetilde{a}_n(\mu_s;q)\bigg).
\label{eq:VtildeSDfo}
\end{equation}
This expanded version of the static potential is often used in
quarkonium phenomenology to respect a rigorous expansion according to
the non-relativistic power counting\footnote{In the most rigorous fixed
order computation only the Coulomb part of the static potential is
treated exactly and the $\alpha_s$ corrections corresponding to the
second and remaining terms in \Eqn{eq:VtildeSDfo} are treated
iteratively order by order by insertion.}.
In position space we have
\begin{eqnarray}
\lim_{r \rightarrow 0}V_s^{(0)}(r)=V_{SD}(r)
&=&
-\frac{C_F\,\alpha_s(\mu_s)}{r}\,
\bigg\{1+\sum_{n=1}^{\infty}
\left(\frac{\alpha_s(\mu_s)}{4\pi}\right)^n a_n(\mu_s;r)\bigg\}\,.
\label{VSDfo}
\end{eqnarray}
In practice we will take the static potential up to NNNLO, i.e. up to
${\cal O}(\alpha_{\rm s}^4)$ including also the leading ultrasoft corrections.
This means we take into account the first three terms of this
expansion with coefficients
\begin{eqnarray}
a_1(\mu_s;r)
&=&
a_1+2\beta_0\,\ln\left(\mu_s e^{\gamma_E} r\right)
\,,
\nonumber\\
a_2(\mu_s;r)
&=&
a_2 + \frac{\pi^2}{3}\beta_0^{\,2}
+\left(\,4a_1\beta_0+2\beta_1 \right)\,\ln\left(\mu_s e^{\gamma_E} r\right)\,
+4\beta_0^{\,2}\,\ln^2\left(\mu_s e^{\gamma_E} r\right)\,
\,,
\nonumber \\
a_3(\mu_s;r)
&=&
a_3+ a_1\beta_0^{\,2} \pi^2+
\frac{5\pi^2}{6}\beta_0\beta_1 +16\zeta_3\beta_0^{\,3}
\nonumber \\
&+&\bigg(2\pi^2\beta_0^{\,3}
+ 6a_2\beta_0+4a_1\beta_1+2\beta_2
+\frac{16}{3}C_A^{\,3}\pi^2\bigg)\,
\ln\left(\mu_s e^{\gamma_E} r\right)\,
\nonumber \\
&+&\bigg(12a_1\beta_0^{\,2}+10\beta_0\beta_1\bigg)\,
\ln^2\left(\mu_s e^{\gamma_E} r\right)\,
+8\beta_0^{\,3} \ln^3\left(\mu_s e^{\gamma_E} r\right)\,
\nonumber
\\
&+&\delta a_3^{us}(\mu_s,\mu_{us})\,.
\label{eq:Vr}
\end{eqnarray}
Explicit expressions for the coefficients $a_i$ can be found in the literature
\cite{FSP,Schroder, short,KP1,RG,Anzai:2009tm,Smirnov:2009fh}. For the
ultrasoft corrections to the static potential we take
\begin{equation}
\delta a_3^{us}(\mu_s,\mu_{us})
\simeq \frac{16}{3}C_A^3 \pi^2\ln\left(\frac{\mu_{us}}{\mu_s}\right)
\, .
\end{equation}
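As an illustration, the FO potential truncated at NLO can be coded directly; the sketch below uses the standard one-loop constants $\beta_0=\frac{11}{3}C_A-\frac{4}{3}T_Fn_l$ and $a_1=\frac{31}{9}C_A-\frac{20}{9}T_Fn_l$ (assumed conventions; the full NNNLO coefficients are those of the literature quoted above):
\begin{verbatim}
# Sketch (not the paper's code) of V_SD(r) of eq. (VSDfo) truncated at NLO.
import numpy as np

CA, CF, TF, nl = 3.0, 4.0/3.0, 0.5, 4
beta0 = 11.0/3.0*CA - 4.0/3.0*TF*nl   # one-loop beta-function coefficient
a1 = 31.0/9.0*CA - 20.0/9.0*TF*nl     # one-loop constant of the potential
gammaE = np.euler_gamma

def V_SD_NLO(r, mus, als):            # als = alpha_s(mus), a placeholder input
    L = np.log(mus*np.exp(gammaE)*r)
    return -CF*als/r*(1.0 + als/(4.0*np.pi)*(a1 + 2.0*beta0*L))
\end{verbatim}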
We will not consider the renormalization group improved ultrasoft
contribution in this paper as its numerical impact is small. The
potential is shown in Figure~\ref{fig:potential} (dashed lines) for
$\mu_s = 2$~GeV and the number of light flavors set to $N_l=4$. It is
clear that for small $r$, depicted in the inset of
Figure~\ref{fig:potential}, there are serious issues regarding the
convergence. The potential changes drastically in going from LO to
NLO to NNLO etc. This behavior occurs for the typical values of
$\mu_s$ and $N_l$ that apply for the charm and bottom case. As one
increases the value of $\mu_s$, one has to go to shorter distances to
see this effect, as is the case for top.
\begin{figure}[ht]
\begin{center}
\epsfxsize=13cm
\epsffile{Figure1.eps}
\end{center}
\caption{The FO (dashed) and RGI (solid) static potential $V_{SD}(r)$
according to \Eqn{VSDfo} and \Eqn{VSDrg} respectively. We take $\mu_s =
2$~GeV, $N_l=4$ and $\mu_r=2$~GeV. The potential is shown as a
function of $r$ at LO (yellow), NLO (green), NNLO (blue) and NNNLO
(red) with the small $r$ region shown in the inset. The shaded area
in blue indicates the short distance regime $0<r<1/m_b$.}
\label{fig:potential}
\end{figure}
Ignoring this problem for the moment and working with the Fixed Order
(FO) static potential we can obtain $u_0(r_0)$ and, therefore,
$A^{(r)}_{V_s^{(0)}}(r_0)$ as an expansion in $\alpha_{\rm s}$ as well. We find
\begin{eqnarray}
A^{(r)}_{V_s^{(0)}}(r_0)
&=&
\frac{1}{r_0}
-2m_r \, \alpha_s(\mu)\,C_F\, v(l_0)
\,,
\nonumber
\\
v(l_0)
&=&
\sum_{n=0}^{3} v_n\left(l_0\right)\,
\left(\frac{\alpha_s(\mu)}{4\pi}\right)^n
\,,
\label{eq:rSingularGreenFunction}
\end{eqnarray}
where $l_0=\ln\left(\mu\, e^{\gamma_E}r_0\right)$ and the expansion
coefficients are given by
\begin{eqnarray}
v_0(l_0)&=&l_0
\,,
\nonumber\\
v_1(l_0)
&=&
\big(a_1-2\beta_0\big)\,l_0
+\beta_0\,l_0^{\,2}
\,,
\nonumber\\
v_2(l_0)
&=&
\bigg(a_2-4a_1\beta_0+8\beta_0^{\,2}
+\frac{\pi^2}{3}\beta_0^{\,2}-2\beta_1\bigg)\, l_0
\nonumber \\
&+& \bigg(2a_1\beta_0-4\beta_0^{\,2}+\beta_1\bigg)\, l_0^{\,2}
+\frac{4\beta_0^2}{3}\,l_0^3
\,,
\nonumber \\
v_3(l_0)
&=&
\bigg(a_3 + \delta a_3^{us}
-6a_2\beta_0+24a_1\beta_0^{\,2}+a_1\beta_0^{\,2}\pi^2
-\left(48+2\pi^2\right)\beta_0^{\,3}-4a_1\beta_1
\nonumber \\
&& +\left(20+\frac{5\pi^2}{6}\right)\beta_0\beta_1
-2\beta_2+16\beta_0^{\,3}\zeta_3
-\frac{16}{3}\pi^2C_A^{\,3}
\bigg)\,l_0
\nonumber \\
&+&
\bigg(3a_2\beta_0-12a_1\beta_0^{\,2}+\left(24+\pi^2\right)\beta_0^{\,3}
+2a_1\beta_1-10\beta_0\beta_1+\beta_2 +\frac{8\pi^2}{3}C_A^{\,3}
\bigg)\,l_0^{\,2}
\nonumber\\
&+&
\bigg(4a_1\beta_0^{\,2}-8\beta_0^{\,3}+\frac{10}{3}\beta_0\beta_1
\bigg)\, l_0^{\,3}
+2\beta_0^{\,4}\,l_0^{\,4}.
\end{eqnarray}
The $\mu$ dependence appearing in \Eqn{eq:rSingularGreenFunction}
enters through \Eqn{eq:C0} and should be cancelled in \Eqn{Rdef}. Even
though the exact expression for the static potential is scale
independent, working at a finite order in $\alpha_{\rm s}(\mu_s)$ leaves some
residual $\mu_s$ dependence.
The computation of $B^{(r)}_{V_s^{(0)}}$ is done numerically along the
lines of Section~\ref{Gr}. We use the input values $m_{b,\rm
PS}(2\,{\rm GeV})=4.515\,{\rm GeV}$~\cite{Pineda:2006gx} and
$m_{c,\rm PS}(0.7\,{\rm GeV})=1.50\,{\rm GeV}$~\cite{Signer:2008da}
for bottom and charm quarks, respectively. They can be translated
into the scale-invariant $\overline{\rm MS}$ masses $\overline{m}_b=4.19\,{\rm GeV}$
and $\overline{m}_c=1.25\,{\rm GeV}$. The strong coupling
$\alpha_s^{(n_f=5)}(M_z)=0.118$ is used as an input and evolved down to
the low energy scales using 4-loop running formulae. For the top quark
mass we use $m_{t, \rm PS}(20\,{\rm GeV}) = 173$~GeV for
illustration. The scale $\mu_{us}$ needed for the leading ultrasoft
contribution is set to $\mu_{us}=0.7$~GeV for charm, $\mu_{us}=1$~GeV
for bottom and $\mu_{us}=10$~GeV for top.
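For orientation only, a much simplified one-loop version of this evolution (the actual analysis uses 4-loop running with flavor-threshold matching, which the sketch ignores) reads:
\begin{verbatim}
# Simplified one-loop running, for orientation only; the paper's numbers
# are obtained with 4-loop formulae and flavor thresholds.
import numpy as np

def alpha_s_1loop(mu, nf=5, alpha_ref=0.118, mu_ref=91.1876):
    beta0 = 11.0 - 2.0*nf/3.0
    return alpha_ref/(1.0 + alpha_ref*beta0/(2.0*np.pi)*np.log(mu/mu_ref))

print(alpha_s_1loop(20.0))            # rough value at the top-quark scales
\end{verbatim}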
In Figure~\ref{fig:Brfixed} we show the results for charm, bottom and
top (dashed lines) as a function of the scale $\mu_s$ for fixed
$\mu$. For illustration we have chosen $\mu = 1.5$~GeV for charm,
2~GeV for bottom and 20~GeV for top. Note that, ideally, the result
should be independent of $\mu_s$: any residual dependence reflects the
long distance behavior of the potential. For charm and bottom we see
problems of convergence, in particular for small values of
$\mu_s$. This is due to the behavior of the potential at short
distances, which we have already illustrated in
Figure~\ref{fig:potential}. For top the situation is much better. Note
that the LO curve corresponds to the Coulomb potential. In all three
cases we observe a significant gap between the Coulomb solution and
the higher order corrections (for the range of $\mu_s$ for which the
result can be trusted).
\begin{figure}
\epsfxsize=10cm
\centerline{\epsffile{Figure2a.eps}}
\medskip
\epsfxsize=10cm
\centerline{\epsffile{Figure2b.eps}}
\medskip
\epsfxsize=10cm
\centerline{\epsffile{Figure2c.eps}}
\caption{$\widehat{B}^{(r)}_{V_s^{(0)}}$ as
a function of $\mu_s$ at LO (yellow), NLO (green), NNLO (blue) and
NNNLO (red) with $\mu = 1.5$ GeV for charm, 2 GeV for bottom, and
20 GeV for top. Dashed lines are obtained using
\Eqn{VSDfo}, solid lines are obtained using \Eqn{VSDrg}. }
\label{fig:Brfixed}
\end{figure}
Before we address the problem of the bad convergence, let us remark
that expanding the potential in $\alpha_{\rm s}(\mu)$, the pole mass renormalon
enters as an $r$-independent constant in the potential. This constant
cancels in the evaluation of $B^{(r)}_{V_s^{(0)}}$, which is
independent of the overall normalization of the potential. Thus, in
this evaluation the dependence will only enter through the values of the
masses used. The error associated with this uncertainty is beyond our
accuracy.
\subsection{RG-Improved $V_s^{(0)}$: $\alpha_{\rm s}(1/r)$ expansion}
\label{sec:RGIpot}
In the previous subsection we have seen that the convergence for
$\widehat{B}^{(r)}_{V_s^{(0)}}$ is very unsatisfactory if we use
\Eqn{VSDfo}. Surprisingly the problem comes from short and not long
distances. The solution is to absorb the large logarithms into the
running coupling. However, this has to be done carefully in order not
to destroy the renormalon cancellation achieved order by order in
$\alpha_{\rm s}$. More specifically, we consider different approximations to the
static potential behaving for $r \rightarrow 0$ as
\begin{equation}
V_s^{(0)} \simeq -\frac{C_F\,\alpha_s(1/r)}{r}
\bigg\{ 1+ \sum_{n=1}^{3}a_n(1/r;r)
\left(\frac{\alpha_{\rm s}(1/r)}{4\pi}\right)^n
\bigg\}
\,,
\label{eq:Vsresum}
\end{equation}
and yet achieving renormalon cancellation order by order in
$\alpha_{\rm s}^n(1/r)$. This will give us an estimate of the dependence of the
result on the long distance behavior of the potential. We will
generically name this class of potentials renormalization group
improved (RGI) and denote them by LO, NLO, ... according to the power
of $\alpha_{\rm s}(1/r)$ at which we stop the perturbative expansion in
\Eqn{eq:Vsresum}.
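For orientation, the LO term of \Eqn{eq:Vsresum} can be sketched as follows; the one-loop running and the freezing of the coupling below the scale $\mu_r$ are simplifying assumptions of this illustration, not the schemes defined next:
\begin{verbatim}
# Illustrative LO RGI potential, -CF*alpha_s(1/r)/r, with one-loop running
# and the coupling frozen at mu_r in the infrared (both simplifications).
import numpy as np

CF = 4.0/3.0

def alpha_s_1loop(q, nf=4, alpha_ref=0.33, mu_ref=1.5):
    beta0 = 11.0 - 2.0*nf/3.0
    return alpha_ref/(1.0 + alpha_ref*beta0/(2.0*np.pi)*np.log(q/mu_ref))

def V_RGI_LO(r, mu_r=1.0):
    q = max(1.0/r, mu_r)              # run alpha_s to 1/r, freeze below mu_r
    return -CF*alpha_s_1loop(q)/r
\end{verbatim}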
One possibility that fulfills all these requirements is the PS scheme
\cite{Beneke:1998rk} with the following modification:
\begin{equation}
\label{VPS}
V_{PS}(r) = V_{SD}(r,\mu_r)+2\,\delta m_{PS}
\end{equation}
with
\begin{equation}
V_{SD}(r,\mu_r) \equiv
\int_{q\leq \mu_r} \frac{d^3 {\bf q}}{(2\pi)^3}
e^{i{\bf q}\cdot{\bf r}}
\widetilde{V}_{SD}|_{\mu=\mu_s}(q)
+
\int_{q > \mu_r} \frac{d^3 {\bf q}}{(2\pi)^3}
e^{i{\bf q}\cdot{\bf r}}
\widetilde{V}_{SD}|_{\mu=q}(q)
\label{VSDrg}
\end{equation}
Thus we introduce a factorization scale $\mu_r$. For $q<\mu_r$ we
expand $\widetilde{V}_{SD}(q)$ in $\alpha_{\rm s}(\mu_s)$, as in the previous
subsection. For $q>\mu_r$ however, we use the running coupling in
$\widetilde{V}_{SD}(q)$. As can be seen in Figure~\ref{fig:potential},
the RGI potential (solid lines) shows good convergence for all values
of $r$. We have checked that the results for
$\widehat{B}^{(r)}_{V_s^{(0)}}$ are not sensitive to the precise value
of the factorization scale, as long as $\mu_r$ is large enough. This
definition has the advantage that the renormalon contribution is $r$
independent and achieves the resummation of logarithms. The fact that
the renormalon cancellation is $r$ independent makes it possible to
work also with $\delta m_{PS}=0$ in \Eqn{VPS}, as far as the
determination of $\widehat{B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n)$ is
concerned.
The results for $\widehat{B}^{(r)}_{V_s^{(0)}}$, using \Eqn{VSDrg}
rather than \Eqn{VSDfo} are depicted as solid lines in
Figure~\ref{fig:Brfixed}. We have taken $\mu_r = 1$~GeV, $\mu =
1.5$~GeV for charm, $\mu_r = \mu= 2$~GeV for bottom, and $\mu_r =\mu=
20$~GeV for top. As can be seen, the resummation of logarithms results
in a dramatic improvement in the charm and bottom case, and is also
quite significant in the top case. In all three cases, the RG result
is nearly independent of $\mu_s$. This signals a weak dependence on
the long distance tail of the potential. This is to be contrasted
with the results obtained using \Eqn{VSDfo}, which are completely
unreliable unless unnaturally large values for $\mu_s$ are used
(especially for charm). The RGI curves show a good convergent pattern
for top, and also a reasonable convergence in the case of bottom.
Even for charm we see signs of convergence, albeit marginal. In
particular, in this case, and to a lesser extent in the case of
bottom, the splitting between the NNLO and NNNLO curves is not much
smaller than the splitting between the NLO and NNLO curves. Note,
though, that at NNNLO the potential starts to be sensitive to
ultrasoft physics, which we do not include in our analysis. In this
respect the NNNLO curves are to be considered incomplete (though the
explicit dependence on the ultrasoft factorization scale is
small). Moreover, at some point the asymptotic behavior of the
perturbative series should set in and it cannot be ruled out that we
are approaching this regime. Still, we would like to point out the
smallness of this splitting compared to the total magnitude of the
correction achieved by the reorganization of the perturbative series,
which can be estimated by comparing the Coulomb line versus the NNNLO
curve. In this respect, even if we consider the splitting between the
NNLO and NNNLO curves as an error, its magnitude is rather small
compared with the total gap. From this analysis, we conclude that we
should use the RGI potential instead of the FO one; we will adopt this
approach in the rest of the paper.
Another possibility that we explore is the use of the RS or RS'
potential \cite{Pineda:2001zq}. To avoid numerical instabilities, due
to the behavior of the potential at long distances, we also modify the
potential in the following way:
\begin{equation}
\label{VRS}
V_{\rm RS}(r)=
\,\left\{
\begin{array}{ll}
&
\displaystyle{
(V_{SD}+2\delta m_{\rm RS})|_{\mu=\mu_s}=
\sum_{n=0}^{\infty}V_{RS,n}\alpha_{\rm s}^{n+1}(\mu_s)
\qquad {\rm if} \quad r>1/\mu_r }
\\
&
\displaystyle{
(V_{SD}+2\delta m_{\rm RS})|_{\mu=1/r}=
\sum_{n=0}^{\infty}V_{RS,n}\alpha_{\rm s}^{n+1}(1/r)
\qquad {\rm if} \quad r<1/\mu_r }
\end{array} \right.
\end{equation}
Irrespective of the potential we use, the short distance behavior of
the potential and, consequently, $A^{(r)}(r_0;\mu)$ is the same. The
full expression for $A^{(r)}(r_0;\mu)$ is more complicated in these
cases than in Section~\ref{sec:alsmuexp} and we refrain
from giving the general explicit expression,
only showing (for illustration) how it would look
at the lowest order. If for instance we consider LL
running at short distance, namely
\begin{eqnarray}
V_s^{(0)}(r)\simeq
-\frac{C_F}{r}\,
\frac{\alpha_s(\mu)
}{1-\frac{\beta_0\alpha_s}{2\pi} \ln(\mu r)}
\,,
\end{eqnarray}
we have
\begin{eqnarray}
A^{(r)}(r_0)
&=&
\frac{1}{r_0}
-2m_r\,C_F\,\alpha_s(\mu)\, v(l_0)
\,,
\label{ArRG}
\end{eqnarray}
where
\begin{eqnarray}
v(l_0)
&=&
\frac{2\pi}{\beta_0\alpha_s(\mu)}
\bigg\{
f\bigg [\gamma_E+\frac{2\pi}{\beta_0\alpha_s(\mu)}-l_0\bigg ]
-f\bigg [\gamma_E+\frac{2\pi}{\beta_0\alpha_s(\mu)}\bigg ]
\bigg\}
\,,
\end{eqnarray}
with
$f[x ]\equiv e^x\,{\rm Ei}(-x)-\ln\,x$.
Expanded in $\alpha_s$ and $l_0$, $v(l_0)$ reads
\begin{eqnarray}
v(l_0)
&=&
l_0
+\left(\frac{\beta_0\alpha_s}{4\pi}\right)\,
\bigg\{-2(\gamma_E+1)l_0+l_0^{\,2}\bigg\}
\nonumber
\\
&+&\left(\frac{\beta_0\alpha_s}{4\pi}\right)^2\,
\bigg\{\left(8+8\gamma_E+4\gamma_E^2\right)\,l_0
-(4+4\gamma_E)l_0^{\,2}+\frac{4}{3}l_0^{\,3}
\bigg\}+\cdots
\,.
\end{eqnarray}
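As a consistency check, the closed form and the truncated expansion can be compared numerically; the sketch below uses placeholder values for $\beta_0$ and $\alpha_s$, and ${\rm Ei}$ from \texttt{scipy}:
\begin{verbatim}
# Numerical comparison (illustrative values) of the closed-form v(l0)
# with its truncated expansion in alpha_s.
import numpy as np
from scipy.special import expi        # the exponential integral Ei

beta0, alpha_s, gE = 25.0/3.0, 0.25, np.euler_gamma   # beta0 for nl = 4

def f(x):
    return np.exp(x)*expi(-x) - np.log(x)

def v_closed(l0):
    X = 2.0*np.pi/(beta0*alpha_s)
    return X*(f(gE + X - l0) - f(gE + X))

def v_expanded(l0):
    a = beta0*alpha_s/(4.0*np.pi)
    return (l0 + a*(-2*(gE + 1)*l0 + l0**2)
            + a**2*((8 + 8*gE + 4*gE**2)*l0 - (4 + 4*gE)*l0**2
                    + 4.0/3.0*l0**3))

for l0 in (-1.0, -0.3, 0.4):
    print(l0, v_closed(l0), v_expanded(l0))  # agree up to higher orders
\end{verbatim}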
\begin{figure}
\epsfxsize=10cm
\centerline{\epsffile{Brschemea.eps}}
\medskip
\epsfxsize=10cm
\centerline{\epsffile{Brschemeb.eps}}
\medskip
\epsfxsize=10cm
\centerline{\epsffile{Brschemec.eps}}
\caption{\label{fig:Brscheme2_b} $\widehat{B}^{(r)}_{V_s^{(0)}}$ as a
function of $\mu$ at LO (yellow), NLO (green), NNLO (blue) and NNNLO
(red) with $\mu_s = 1.5$ GeV, $\mu_r = 1$~GeV and
$\mu_F=\mu_{us}=0.7$~GeV for charm, $\mu_s=\mu_r =\mu_F= 2$~GeV and
$\mu_{us}=1$~GeV for bottom, and $\mu_s=\mu_r = \mu_F=20$~GeV, and
$\mu_{us}=10$~GeV for top. Solid lines are obtained in the PS scheme
using \Eqn{VSDrg} and dashed lines are obtained in the RS' scheme
using \Eqn{VRS}. For reference we also include
$\widehat{B}^{(r)}_{V_C}$ (short-dashed black line).}
\end{figure}
We now perform the numerical evaluation of
$\widehat{B}^{(r)}_{V_s^{(0)}}$ at different orders in the static
potential and compare the results obtained using the PS and RS'
scheme. The results are shown in Figure~\ref{fig:Brscheme2_b}. The
difference between the schemes is small and converging for the case of
bottom and top. In these two cases the difference between the two
schemes is quite small for the NNNLO curves. This is again a good
signal, since the dependence on the scheme is an indirect measure of
the dependence on the long distance tail of the potential. For charm,
the situation is less convincing. The gaps between schemes show
marginal convergence at best as we increase the order. Yet, this gap
is still much smaller than the gap between the Coulomb result and the
NNNLO result. Comparing the Coulomb result, shown as the black short-dashed
line, to our results, we can see that in all three cases a rather
significant portion of the correction is already achieved with the LO
RGI potential. In the case of top the NLO RGI potential is already
quite close the most accurate NNNLO result. This behavior is also
seen, to a lesser extent in the case of bottom. Note that the LO RGI
potential exactly incorporates the $r$-dependent leading logarithms.
This is equivalent to introducing an infinite number of corrections to
the Coulomb potential and to iterate them an infinite numbers of
times. This reorganization of perturbation theory seems to produce a
major effect. Another observation is that the RS' scheme produces an
accelerated convergence to the asymptotic regime. This is clearly
seen in the top case, and to a lesser extent, in the bottom case. In
those cases the LO RGI potential produces the bulk of the correction and
the magnitude of the higher order corrections is smaller in the RS'
than in the PS scheme. The price paid is that the splitting between
different orders in the RS' scheme is less convergent.
The dependence of the results on $\mu_r$ is very small. Changing
$\mu_r$ from 2~GeV to 4~GeV for example results in differences that
are an order of magnitude smaller than the changes we find by going
from say LO to NLO.
The dependence on $\mu$ will have to be cancelled by the scale
dependence of the matching coefficient $c_s(\mu)$. Note that our
evaluation of $\widehat{B}^{(r)}_{V_s^{(0)}}$ also includes subleading
logarithms, which are not matched by the precision of the RG (hard)
computation. The fact that the scale dependence roughly corresponds to
the Coulomb potential (with RG running) can be taken as an indication
that subleading logarithms are not very important (see
Figure~\ref{fig:Brscheme2_b} for illustration).
Finally, there is also a dependence on the scale $\mu_s$. This
dependence (as the dependence on the renormalon subtraction scheme)
partly reflects the dependence of the result on the long distance tail
of the potential. On the other hand one cannot take $\mu_s$ very
small, since otherwise $\alpha_{\rm s}(\mu_s)$ becomes very large. We now perform the
numerical evaluation of $\widehat{B}^{(r)}_{V_s^{(0)}}$ at different
orders in the loop expansion in the PS scheme and using different
values of $\mu_s$. The results for varying values of $\mu_s$ are shown
as bands in Figure~\ref{fig:Brmus}. We also show the Coulomb result as
the band enclosed by black dashed lines. This plot also illustrates
that the bulk of the correction is already achieved with the LO/NLO
RGI potential in the top and bottom case, where we have convergence
(in the charm case convergence is marginal at best). The $\mu_s$
dependence tends to diminish as one increases the order of the RGI
potential in the top, and to a lesser extent in the bottom case. In
the charm case the $\mu_s$ dependence remains almost
constant. Overall, we find that the $\mu_s$ dependence is slightly
larger than the scheme dependence, but still smaller than the typical
gap due to working at different orders in the RGI potential.
\begin{figure}
\epsfxsize=10cm
\centerline{\epsffile{musa.eps}}
\medskip
\epsfxsize=10cm
\centerline{\epsffile{musb.eps}}
\medskip
\epsfxsize=10cm
\centerline{\epsffile{musc.eps}}
\caption{\label{fig:Brmus} $\widehat{B}^{(r)}_{V_s^{(0)}}$ using
\Eqn{VSDrg} as a function of $\mu$ at LO (yellow), NLO (green), NNLO
(blue) and NNNLO (red) with $\mu_r=1$~GeV and
$\mu_F=\mu_{us}=0.7$~GeV for charm, $\mu_r =\mu_F= 2$~GeV and
$\mu_{us}=1$~GeV for bottom, and $\mu_r = \mu_F=20$~GeV, and
$\mu_{us}=10$~GeV for top. The bands are obtained by variation of
$\mu_s$ in the range 1--1.5 GeV, 2--4 GeV and 20--60 GeV for charm,
bottom and top respectively. For reference we also include
$\widehat{B}^{(r)}_{V_C}$ (grey band).}
\end{figure}
\section{Phenomenology of the decay ratio}
Using the results obtained for ${\hat
B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n;\mu)$ we can get improved
determinations of the decay ratio, by combining Eqs.~(\ref{Rdef}),
(\ref{rho_n}) and (\ref{deltaMSKiyo}) with the determination of $c_s$
from Ref.~\cite{Penin:2004ay}. We use the results obtained in
Section~\ref{sec:RGIpot} with the RGI potential, since they both
achieve the resummation of logarithms and renormalon cancellation. The
main source of uncertainties in the evaluation of ${\hat
B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n;\mu)$ is reflected by the
computations at different orders in $\alpha_{\rm s}$ in the static potential
and, to a lesser extent, by the dependence on $\mu_s$. In comparison,
the dependence on the quark mass, $\mu_r$, $\mu_F$ and $\mu_{us}$ is
small. Therefore, we will fix those parameters to the values used in
Section~\ref{sec:RGIpot}. In Section~\ref{sec:RGIpot} we also saw
that the scheme dependence for renormalon subtraction was small,
compared with the uncertainty due to the computation at different
orders. Therefore, we will only take one scheme (PS) for reference in
the plots.
In order to explore different power counting expansions for our
results, we will consider and compare different approximations. In
particular we will show the effect of resumming logarithms in the
matching coefficients $D_{S^2,s}^{(2)}$ and $c_s$. We will see that
the RGI in the matching coefficients plays an important role to make
the result more factorization scale independent. The results obtained
within a strict perturbative expansion (see Ref.~\cite{Penin:2004ay})
are labelled as LO, NLO and NNLO respectively and, after resummation
of logarithms, as LL, NLL and NNLL. Taking into account the static
potential exactly, using numerical methods as described in the
previous sections, we obtain improved predictions for the relativistic
corrections that we label by including "I" to the previous labelling:
NLLI (including $c_s$ with NLL precision and the improved relativistic
correction $\delta \rho_n$) and NNLLI ($c_s$ with NNLL precision and
the improved relativistic correction $\delta \rho_n$). For comparison
we will also consider the result without resummation of the logarithms
in the matching coefficient, NNLOI ($c_s$ with NNLO precision and the
improved relativistic correction $\delta \rho_n$). For both NNLLI and
NNLOI, we will consider the results taking the RGI static potential at
LO, NLO, NNLO and NNNLO.
From the point of view of a double counting in $\alpha_{\rm s}$ and $v$ the NLL
result (with NLL precision for $c_s$) can be considered as ${\cal
O}(\alpha_{\rm s}, v^0)$ whereas NLLI is ${\cal O}(\alpha_{\rm s}, v^2)$ and NNLLI is
${\cal O}(\alpha_{\rm s}^2, v^2)$. As a general trend, moving from NLL to NLLI
improves the scale dependence. This is due to the fact that, by using
the RGI, NNLO ${\cal O}(\alpha_{\rm s}^2)$ logarithms count as NLL and can be
matched with a part of the scale dependence of the relativistic ${\cal
O}(v^2)$ correction. Note as well that the inclusion of $c_s$ with
NNLL precision accounts for ${\cal O}(\alpha_{\rm s}^3)$ leading logarithms and
beyond. Those should be cancelled by the inclusion of the subleading
scale dependence of the relativistic correction. Most of it is
actually built in by the numerical evaluation of the relativistic
correction with the RG potential. In principle, this should be
reflected in an improvement in the scale dependence in going from NLLI
to NNLLI. On the other hand, this double counting scheme in $\alpha_{\rm s}$ and
$v$ produces an unmatched scheme dependence, which can only be
matched by working at the same order in $\alpha_{\rm s}$ and $v$.
We have also studied the dependence on the specific
$r$-renormalization scheme of ${\hat
B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n;\mu)$. This dependence should vanish
when combined with $c_r^{\overline{\rm MS}}$. In particular we have studied the
effect of eliminating $\gamma_E$ in the logarithms in \Eqn{eq:Ar} and
consequently using \Eqn{eq:CDbis} for $c_r^{\overline{\rm MS}}$. Note that this is
actually equivalent to using ${\hat B}^{(r)}_{V_s^{(0)}}(E^{(0)}_n;\mu
e^{-\gamma_E})$. We have checked that (at least in the cases where
the series converges) this dependence fades away when considering the
potential with increasing accuracy. The reason is that the $\gamma_E$
terms that appear at higher orders get more accurately described as we
increase the order of our computation. This increases our confidence
in the perturbative approach. The introduction of $\gamma_E$ in the
scale $\mu$ makes the different terms in the expansion approach the
asymptotic result faster, but the effect is not very significant.
In the following subsections we will consider in turn the cases of
top, bottom and charm.
\subsection{Top}
We start with the top since it is the cleanest possible case, where we
expect the best convergence. The scales are fixed as
$\mu_F=\mu_r=\mu_s=20$~GeV and $\mu_{us}=10$ GeV and we work in the PS
scheme.
\begin{figure}[t]
\epsfxsize=0.8\textwidth
\centerline{\epsffile{ratioRSp2_t.eps}}
\caption{\label{fig:ratioRSp2_t} Decay ratio in the PS scheme at NNLOI
(dashed) and NNLLI (solid) at different orders in $\alpha_{\rm s}$ in the
static potential (${\cal O}(\alpha_{\rm s})$: yellow; ${\cal O}(\alpha_{\rm s}^2)$:
green; ${\cal O}(\alpha_{\rm s}^3)$: blue; ${\cal O}(\alpha_{\rm s}^4)$: red). For
reference we also include the LL, NLL, and NNLL results
(short-dashed).}
\end{figure}
In Figure~\ref{fig:ratioRSp2_t} we show the decay ratio at NNLOI
(dashed lines) and NNLLI (solid lines) at different orders in $\alpha_{\rm s}$
in the static potential (LO: yellow; NLO: green; NNLO: blue; NNNLO:
red). For reference we also include the LL, NLL, and NNLL results
(short-dashed lines) obtained within a strict perturbative expansion.
Comparing the NNLOI with the NNLLI curves, it can be seen that the
inclusion of the RG matching coefficients has a significant impact in
reducing the scale dependence. Also, there is a sizable gap when
moving from NNLL to NNLLI even if we take the LO RGI static potential
(which includes the $r$ running producing the shift we observe in the
plot). The inclusion of subleading corrections to the potential
produces a convergent effect. Actually, the NLO RGI static potential
result is already quite close to the asymptotic result. This may allow
us to define a counting in $v$, by taking the asymptotic limit of the
series. A potential problem is that this counting in $v$ is scheme
dependent.
\begin{figure}[t]
\begin{center}
\epsfxsize=0.8\textwidth
\epsffile{decaytop_scheme.eps}
\end{center}
\caption{\label{fig:decay_top} Decay ratio in the PS scheme at NLLI in
the $\overline{\rm MS}$ (grey dashed) and hard-matching scheme (grey solid) and at
NNLLI (red solid). For reference we also include the LL, NLL, and
NNLL results. }
\end{figure}
To study this scheme dependence, in Figure~\ref{fig:decay_top} we show
the decay ratio at NLLI in the $\overline{\rm MS}$ and hard-matching scheme (see
Ref.~\cite{Penin:2004ay} and Eq. (\ref{deltaHMKiyo})) and at NNLLI,
all of them at ${\cal O}(\alpha_{\rm s}^4)$ in the static potential. These
results are compared to the LL, NLL and NNLL results. Moving from NLL
to NLLI improves the scale dependence no matter what scheme is used.
As we have already discussed, this is due to the fact that, by using
the RG, NNLO ${\cal O}(\alpha_{\rm s}^2)$ logarithms count as NLL and can be
matched with a part of the scale dependence of the relativistic ${\cal
O}(v^2)$ correction. On the other hand there is a sizable gap
between the NLLI result obtained in the $\overline{\rm MS}$ and hard-matching
scheme. The latter is much closer to the full NNLLI result. The reason
is that the two-loop hard correction is much smaller in the
hard-matching scheme compared with the $\overline{\rm MS}$ scheme. This could
indicate that the hard-matching scheme leads to a more convergent
series but it cannot be ruled out that this smallness is accidental
for ${\cal O}(\alpha_{\rm s}^2)$. Therefore, we believe this gap gives a
conservative estimate of the remaining uncertainties. Note that it is
much larger than the other sources of uncertainties considered in this
paper. For instance, we have also investigated the $\mu_s$ dependence
and observed that it gets smaller when we consider higher orders in
the static potential, pointing to the fact that the long-distance tail
of the potential does not have a significant impact on the
determination of the decay ratio. A similar comment applies to the
renormalon scheme dependence. Therefore, in summary we find nice
convergence in the top quark case.
\subsection{Bottom}
\begin{figure}[t]
\begin{center}
\epsfxsize=0.8\textwidth
\epsffile{ratioRSp2_b.eps}
\end{center}
\caption{\label{fig:ratioRSp2_b} Decay ratio in the PS scheme at NNLOI
(dashed) and NNLLI (solid) at different orders in $\alpha_{\rm s}$
in the static potential (${\cal O}(\alpha_{\rm s})$: yellow; ${\cal
O}(\alpha_{\rm s}^2)$: green; ${\cal O}(\alpha_{\rm s}^3)$: blue; ${\cal O}(\alpha_{\rm s}^4)$:
red). For reference we also include the LL, NLL, and NNLL results
(short-dashed). }
\end{figure}
Turning to the bottom case, in Figure~\ref{fig:ratioRSp2_b} we show the
decay ratio in the PS scheme at NNLOI and NNLLI at different orders in
$\alpha_{\rm s}$ in the static potential. For reference we also include the LL,
NLL, and NNLL results. We use $\mu_r=\mu_F=\mu_s=2$~GeV and
$\mu_{us}=1$~GeV. Again we can see that the inclusion of the RG
matching coefficients has a significant impact in reducing the scale
dependence. As in the top case, there is a sizable gap when moving
from NNLL to NNLLI. The bulk of it is already obtained by taking the
NLO(LO) RGI static potential in the PS(RS') scheme.
The inclusion of subleading corrections to the potential
produces a smaller, yet sizable, effect. Compared to the top case the
magnitude of the corrections is larger and the convergence using the
static potential at different orders is worse, in particular in going
from the ${\cal O}(\alpha_{\rm s}^3)$ to the ${\cal O}(\alpha_{\rm s}^4)$ approximation of
the static potential. Nevertheless, one can still see a band (though
much wider than for top) where to roughly define a counting in $v$. We
should also stress that using the ${\cal O}(\alpha_{\rm s}^4)$ RGI potential has
some ambiguities, since ultrasoft effects enter at this
order. Therefore, it cannot be considered complete.
We study the scheme dependence in Figure~\ref{fig:decay_bottom},
showing the decay ratio at NLLI in the $\overline{\rm MS}$ and hard-matching scheme
and at NNLLI. In all cases the static potential is taken at ${\cal
O}(\alpha_{\rm s}^4)$. These results are compared with the LL, NLL and NNLL
results. The general pattern of the results is similar to the top
case. Moving from NLL to NLLI improves the scale dependence
irrespective of the scheme used. However, there is a sizable gap
between the NLLI result obtained in the $\overline{\rm MS}$ and hard-matching
scheme; the latter is much closer to the full NNLLI result. As for
top, we take this gap as a conservative estimate of the remaining
uncertainty. Again, this gap is larger than other sources of
uncertainties considered in this paper, like the splitting associated
to different orders in the static potential, the $\mu_s$ or renormalon
scheme dependence. Either way the errors are obviously larger here
than in the top case. In particular we have found a larger sensitivity
to $\mu_s$ and the specific implementation of the initial conditions.
\begin{figure}[t]
\begin{center}
\epsfxsize=0.8\textwidth
\epsffile{decaybottom_scheme.eps}
\end{center}
\caption{\label{fig:decay_bottom} Decay ratio in the PS scheme at NLLI
in the $\overline{\rm MS}$ (grey dashed) and hard-matching scheme (grey solid) and
at NNLLI (red solid). For reference we also include the LL, NLL, and
NNLL results. }
\end{figure}
We use this analysis to obtain an updated prediction for
$\Gamma(\eta_b(1S) \rightarrow \gamma\gamma)$. For the central value
we use the NNLLI result with $\mu=2$~GeV and the set of parameters
described before, obtaining 0.544~keV. The theoretical error has been
estimated by considering the difference between the NLLI (in the $\overline{\rm MS}$)
and NNLLI result for $\mu=2$~GeV. We obtain 0.146~keV for this
error. As we have already mentioned, we have checked that the
uncertainties due to the variation of these parameters, the scheme, or
the consideration of different orders in $\alpha_{\rm s}$ in the potential, are
much smaller than the error quoted. Another source of error is
experimental, coming from $\Gamma(\Upsilon(1S) \rightarrow
e^+e^-)=1.340\pm0.018$~keV \cite{Amsler:2008zzb}. This produces a
very small error: $\pm 0.007$~keV. Finally, we have also computed the
error associated to the indetermination of $\alpha_{\rm s}(M_z)=0.118 \pm
0.003$. This error is even smaller: ${}^{+0.002}_{-0.004}$~keV. We
combine the last two errors in quadrature and add them linearly to the
theoretical error (which completely dominates the total error). After
rounding we obtain $\Gamma(\eta_b(1S) \rightarrow \gamma\gamma) = 0.54
\pm 0.15$~keV.
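For transparency, the snippet below reproduces the arithmetic of this error budget from the numbers quoted above:
\begin{verbatim}
# Error budget of the text: experimental and alpha_s errors combined in
# quadrature, then added linearly to the (dominant) theoretical error.
import math
theory = 0.146                         # NLLI(MSbar) vs NNLLI at mu = 2 GeV
exp_err, als_err = 0.007, 0.004        # Gamma(Upsilon -> e+ e-) and alpha_s
print(round(theory + math.sqrt(exp_err**2 + als_err**2), 2))   # -> 0.15 keV
\end{verbatim}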
\subsection{Charm}
Finally we consider the charmonium ground state. The applicability of
our weak coupling approach to this system is doubtful. Nevertheless,
we will find it rewarding that the reorganization of the perturbative
expansion significantly improves the agreement with the experimental
data. Again we will use the PS scheme and set $\mu_s=1.5$~GeV,
$\mu_r=1$~GeV and $\mu_F=\mu_{us}=0.7$~GeV.
\begin{figure}[t]
\begin{center}
\epsfxsize=0.8\textwidth
\epsffile{ratioRSp1_c.eps}
\end{center}
\caption{\label{fig:ratioRSp1_c} Decay ratio in the PS scheme at NNLOI
(dashed) and NNLLI (solid) at different orders in $\alpha_{\rm s}$ in the
static potential (${\cal O}(\alpha_{\rm s})$: yellow; ${\cal O}(\alpha_{\rm s}^2)$:
green; ${\cal O}(\alpha_{\rm s}^3)$: blue; ${\cal O}(\alpha_{\rm s}^4)$: red). For
reference we also include the LL, NLL, and NNLL results
(short-dashed). The light blue band represents the experimental
error of the ratio where the central value is given by the
horizontal solid line.}
\end{figure}
In Figure~\ref{fig:ratioRSp1_c} we show the decay ratio at NNLOI and
NNLLI at different orders in $\alpha_{\rm s}$ in the static potential. For
reference we also include the LL, NLL, and NNLL results. The
experimental result, using $\Gamma(J/\psi\to e^+ e^-)=5.55\pm
0.14$~keV and
$\Gamma(\eta_c\to\gamma\gamma)=7.2\pm0.7\pm2.0$~keV~\cite{Amsler:2008zzb}
is shown as the light blue band, with the central value indicated by
the horizontal solid light blue line. Once more we can see that the
inclusion of the RG matching coefficients improves the scale
dependence and there is a sizable gap when moving from NNLL to NNLLI.
The inclusion of subleading corrections to the potential produces a
slightly smaller though still quite large effect. Compared to the bottom
case the magnitude of the corrections is larger and the convergence is
worse. We find the same problem in the associated evaluations of the
energy and the wave function at the origin. Despite these
shortcomings, the effect goes in the direction of bringing agreement
with experiment.
We study the scheme dependence by showing the decay ratio at NLLI in
the $\overline{\rm MS}$ and hard-matching scheme and at NNLLI in
Figure~\ref{fig:decay_charm}. The static potential is taken at ${\cal
O}(\alpha_{\rm s}^4)$. The discussion is quite similar to the top and bottom
cases. Moving from NLL to NLLI improves the scale dependence no matter
what scheme is used. On the other hand there is a sizable gap between
the NLLI result obtained in the $\overline{\rm MS}$ and hard-matching scheme, the
latter being much closer to the full NNLLI result. The reason is the
same as for top and bottom. Taking this gap as an estimate of the
typical size of the uncertainties produces an error of around 50\% in
the ratio. This covers most of the experimental band and is
significantly larger than the typical splitting produced by working at
different orders in $\alpha_{\rm s}$ in the static potential.
\begin{figure}[t]
\begin{center}
\epsfxsize=0.8\textwidth
\epsffile{decaycharm_scheme.eps}
\end{center}
\caption{\label{fig:decay_charm} Decay ratio in the PS scheme at NLLI
in the $\overline{\rm MS}$ (grey dashed) and hard-matching scheme (grey solid) and
at NNLLI (red solid). For reference we also include the LL, NLL,
and NNLL results and the experimental ratio.}
\end{figure}
\section{Conclusions}
We have considered a different power counting in potential NRQCD by
incorporating the static potential exactly in the leading order
Hamiltonian. In this scheme we compute the leading relativistic
corrections to the inclusive electromagnetic decay ratios. The effect
of this new power counting is dramatic for charm, large for bottom,
and sizable even for top. In the case of bottom, we produce an updated
value for the $\eta_b$ decay to two photons
\begin{equation}
\Gamma(\eta_b(1S) \rightarrow \gamma\gamma)=0.54 \pm 0.15 \, {\rm keV}.
\label{GammaEta}
\end{equation}
In the case of charmonium, this scheme brings consistency between the
weak coupling computation and the experimental value of the decay
ratio, but the theoretical error is large.
It is worth emphasizing that in the case where our expansion is more
reliable, i.e. the top and bottom case, the bulk of the correction
comes from using the first two orders of the RGI potential. The
effect of higher-order corrections in the RGI potential is relatively
small. The detailed importance of higher-order corrections
depends on the scheme. In the RS' scheme already the LO RGI potential gives
the bulk of the correction, whereas in the PS scheme two terms in the
expansion are needed. Irrespective of the scheme, both converge as one goes
to higher orders.
This approach could open the possibility to reorganize the
perturbative series in a controlled way. We stress again that this is
also relevant for top. Therefore, it is not a strong coupling effect
but rather reflects the need for a more optimal resummation of
perturbation theory. This might call for a reanalysis of previous
results in this new scheme. It is an open question whether there is a
similar effect in the case of the hyperfine splitting. We leave this
discussion for a forthcoming paper.
It would be misleading to estimate the theoretical error from the
scale dependence alone. This is particularly obvious in the charm case,
where the scale dependence by no means reflects a reasonable estimate
of the size of higher-order corrections, which are difficult to
estimate and can only be inferred from the apparent convergence of the
expansion.
Our formalism is flexible enough so that, with little effort, we could
replace the perturbative static potential by any potential, in
particular, by one fitted to non-perturbative lattice data. This could
be of particular relevance for charmonium but it could also be of help
for bottomonium, provided the static potential is known with enough accuracy
in the unquenched approximation. This would eliminate the error associated
with higher-order terms in the static potential, but not the error due to
higher-order terms in the hard matching coefficient and the associated RG
improvement.
|
1,941,325,220,075 | arxiv | \subsection{Bi-superintuitionistic and Tense Deductive Systems}\label{sec:preliminaries2}
We begin by reviewing definitions and basic facts concerning the structures dealt with in this section.
\subsubsection{Bi-superintuitionistic Deductive Systems, bi-Heyting Algebras, and bi-Esakia Spaces}
We work in the \emph{bi-superintuitionistic signature}, \[bsi:=\{\land, \lor, \to,\from, \bot, \top\}.\]
The set $\mathit{Frm_{bsi}}$ of bi-superintuitionistic (bsi) formulae is defined recursively as follows.
\[\varphi::= p\sep \bot\sep\top\sep \varphi\land \varphi\sep\varphi\lor \varphi\sep\varphi\to \varphi\sep\varphi\from\varphi\]
We let $\gen\varphi:=\top\from \varphi$ and $\varphi\leftrightarrow \psi:=(\varphi\to \psi)\land (\psi\to \varphi)$. The \emph{bi-intuitionistic propositional calculus} $\mathtt{bi\text{-}IPC}$ is defined as the least logic over $\mathit{Frm_{bsi}}$ containing $\mathtt{IPC}$ together with the axioms
\begin{align*}
&p\to (q\lor (p\from q)) & (p\from q)\to \gen(p\to q)\\
&((p\from q)\from r)\to (p\from (q\lor r)) &\neg (p\from q)\to (p\to q) \\
&\neg\gen (p\to p)
\end{align*}
and such that if $\varphi, \varphi\to \psi\in \mathtt{bi\text{-}IPC}$ then $\psi\in \mathtt{bi\text{-}IPC}$, and if $\varphi\in \mathtt{bi\text{-}IPC}$ then $\neg \gen \varphi\in \mathtt{bi\text{-}IPC}$. The logic $\mathtt{bi\text{-}IPC}$ was introduced and extensively studied by \citet{Rauszer1974AFotPCoHBL,Rauszer1974SBAaTAtILwDO,Rauszer1977AoKMtHBL}. It was also investigated by \citet{Esakia1975TPoDitILaBL}, and more recently by \citet{Gore2000DILR}.
\begin{definition}
A \emph{bsi-logic} is a logic $\mathtt{L}$ over $\mathit{Frm_{bsi}}$ containing $\mathtt{bi\text{-}IPC}$ and satisfying the following conditions:
\begin{itemize}
\item If $\varphi, \varphi\to \psi\in \mathtt{L}$ then $\psi\in \mathtt{L}$ (MP);
\item If $\varphi\in \mathtt{L}$ then $\neg \gen \varphi\in \mathtt{L}$ (DN).
\end{itemize}
A \emph{bsi-rule system} is a rule system $\mathtt{L}$ over $\mathit{Frm_{bsi}}$ satisfying the following conditions:
\begin{itemize}
\item $\varphi, \varphi\to \psi/\psi\in \mathtt{L}$ (MP-R);
\item $\varphi/\neg\gen\varphi\in \mathtt{L}$ (DN-R);
\item $/\varphi\in \mathtt{L}$ for every $\varphi\in \mathtt{bi\text{-}IPC}$.
\end{itemize}
\end{definition}
If $\mathtt{L}$ is a bsi-logic let $\mathbf{Ext}(\mathtt{L})$ be the set of bsi-logics containing $\mathtt{L}$, and similarly for bsi-rule systems. Then $\mathbf{Ext}(\mathtt{bi\text{-}IPC})$ is the set of all bsi-logics. It is easy to see that $\mathbf{Ext}(\mathtt{bi\text{-}IPC})$ forms a complete lattice, with $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC})}$ as join and intersection as meet. Observe that for every $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ there is a least bsi-rule system containing $/\varphi$ for each $\varphi\in \mathtt{L}$, which we denote by $\mathtt{L_R}$. Then $\mathtt{bi\text{-}IPC_R}$ is the least bsi-rule system and $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ is the set of all bsi-rule systems. Again, it is not hard to verify that $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ forms a complete lattice with $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})}$ as join and intersection as meet. Henceforth we write both $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC})}$ and $\oplus_{\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})}$ simply as $\oplus$, leaving context to clarify any ambiguity.
We generalise \Cref{deductivesystemisomorphismsi} to the bsi setting.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{Ext}(\mathtt{bi\text{-}IPC})$ and the sublattice of $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ consisting of all bsi-rule systems $\mathtt{L}$ such that $\mathsf{Taut}(\mathtt{L})_\mathtt{R}=\mathtt{L}$.\label{deductivesystemisomorphismbsi}
\end{proposition}
A \emph{bi-Heyting algebra} is a tuple $\mathfrak{H}=(H, \land, \lor, \to,\from, 0, 1)$ such that the $\from$-free reduct of $\mathfrak{H}$ is a Heyting algebra, and such that for all $a, b, c\in H$ we have
\[a\from b\leq c\iff a\leq b\lor c.\]
Bi-Heyting algebras are discussed at length in \cite{Rauszer1974AFotPCoHBL,Rauszer1974SBAaTAtILwDO,Rauszer1977AoKMtHBL} and more recently in \cite{Taylor2017DHA,PedrosoDeLimaMartins2021BGAaCT}. Let $\mathsf{bi\text{-}HA}$ denote the class of all bi-Heyting algebras. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{bi\text{-}HA}$ is a variety.
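As a small worked illustration (ours, not drawn from the cited sources), the residuation law determines $\from$ outright in any bounded chain, where $a\leq b\lor c$ holds iff $a\leq b$ or $a\leq c$:
\[
a\from b=\bigwedge\{c\in H: a\leq b\lor c\}=
\begin{cases}
0 &\text{if } a\leq b,\\
a &\text{otherwise.}
\end{cases}
\]
In particular, in the three-element chain $0<x<1$ we have $1\from x=1$, $x\from 0=x$ and $x\from x=0$.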
Let $\mathfrak{L}=(L, \land, \lor, 0, 1)$ be a bounded lattice. The \emph{order dual} of $\mathfrak{L}$ is the lattice $\bar{\mathfrak{L}}=(L, \lor, \land, 1, 0)$, where $\lor$ is viewed as the meet operation and $\land$ as the join operation. We have the following elementary but important fact.
\begin{proposition}[Order duality principle for bi-Heyting algebras]
For every bi-Heyting algebra $\mathfrak{H}$, the order dual $\bar{\mathfrak{H}}$ of $\mathfrak{H}$ is a Heyting algebra, in which the implication $a\to_{\bar{\mathfrak{H}}} b$ is given by the coimplication $b\from a$ of $\mathfrak{H}$, where for all $a, b\in H$ \label{orderdual2ha}
\[a\from b:=\bigwedge\{c\in H:a\leq b\lor c\}.\]
\end{proposition}
This observation can be leveraged to establish a number of properties about bi-Heyting algebras via straightforward adaptations of the theory of Heyting algebras. We shall see numerous examples of this strategy in this section.
We write $\mathbf{Var}(\mathsf{bi\text{-}HA})$ and $\mathbf{Uni}(\mathsf{bi\text{-}HA})$ respectively for the lattice of subvarieties and of universal subclasses of $\mathsf{bi\text{-}HA}$. The following result may be proved via the same techniques used to prove \Cref{thm:algebraisationHA}. A recent self-contained proof of \Cref{algebraisation2havar} may be found in \cite[Theorem 2.8.3]{PedrosoDeLimaMartins2021BGAaCT}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisation2ha}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{Var}(\mathsf{bi\text{-}HA})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{bi\text{-}HA})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC})$;\label{algebraisation2havar}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{Uni}(\mathsf{bi\text{-}HA})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{bi\text{-}HA})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$.\label{algebraisation2hauni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every bsi-logic $($resp. bsi-rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of bi-Heyting algebras. \label{completeness_bsi}
\end{corollary}
A \emph{bi-Esakia space} is an Esakia space $\mathfrak{X}=(X, \leq, \mathcal{O})$, satisfying the following additional conditions:
\begin{itemize}
\item ${{\downarrow}}x$ is closed for every $x\in X$;
\item ${{\uparrow}}[U]\in \mathsf{Clop}(\mathfrak{X})$ whenever $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{itemize}
Bi-Esakia spaces were introduced by \citet{Esakia1975TPoDitILaBL}. We let $\mathsf{bi\text{-}Esa}$ denote the class of all bi-Esakia spaces. For $\mathfrak{X}\in \mathsf{bi\text{-}Esa}$, we write $\mathsf{ClopDown}(\mathfrak{X})$ for the set of clopen downsets in $\mathfrak{X}$. If $\mathfrak{X}, \mathfrak{Y}\in \mathsf{bi\text{-}Esa}$, a map $h:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$ we have that $x\leq y$ implies $h(x)\leq h(y)$, and moreover:
\begin{itemize}
\item $h(x)\leq y$ implies that there is $z\in X$ with $x\leq z$ and $h(z)=y$;
\item $h(x)\geq y$ implies that there is $z\in X$ with $x\geq z$ and $h(z)=y$.
\end{itemize}
If $\mathfrak{X}=(X, \leq , \mathcal{O})$ is an Esakia space, the \emph{order dual} $\bar{\mathfrak{X}}$ of $\mathfrak{X}$ is the structure $\bar{\mathfrak{X}}=(X, \geq, \mathcal{O})$, where $\geq$ is the converse of $\leq$. The algebraic order duality principle of \Cref{orderdual2ha} has the following geometric counterpart.
\begin{proposition}
For every bi-Esakia space $\mathfrak{X}$, the order dual $\bar{\mathfrak{X}}$ of $\mathfrak{X}$ is an Esakia space.
\end{proposition}
As in the case of algebras, a number of results from the theory of Esakia spaces can be transferred smoothly to bi-Esakia spaces in virtue of this fact. For example, we may generalise \Cref{propesa} to the following result.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{bi\text{-}Esa}$. Then for all $x, y\in X$ we have:\label{propesa2}
\begin{enumerate}
\item If $x\not\leq y$ then there is $U\in \mathsf{ClopUp}(\mathfrak{X})$ such that $x\in U$ and $y\notin U$;\label{propesa21}
\item If $y\not\leq x$ then there is $U\in \mathsf{ClopDown}(\mathfrak{X})$ such that $x\in U$ and $y\notin U$.\label{propesa22}
\end{enumerate}
\end{proposition}
\begin{proof}
(\ref{propesa21}) is just \Cref{propesa}, whereas (\ref{propesa22}) follows from (\ref{propesa21}) and the order-duality principle.
\end{proof}
A \emph{valuation} on a bi-Esakia space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{ClopUp}(\mathfrak{X})$. Bsi formulae are interpreted over bi-Esakia spaces the same way si formulae are interpreted over Esakia spaces, except for the following additional clause for co-implication (here $\mathfrak{X}\in \mathsf{bi\text{-}Esa}$, $x\in X$ and $V$ is a valuation on $\mathfrak{X}$).
\[\mathfrak{X}, V, x\models \varphi\from \psi\iff \text{ there is }y\in {\downarrow} x: \mathfrak{X}, V, y\models \varphi\text{ and }\mathfrak{X}, V, y\nvDash \psi.\]
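For a quick illustration of this clause (our example), consider the two-element chain $u<v$ viewed as a bi-Esakia space, with $V(p)=\{v\}$ and $V(q)=\varnothing$. Then
\begin{align*}
v&\models p\from q &&\text{witnessed by } y=v\in {\downarrow} v,\\
u&\nvDash p\from q &&\text{since no point of } {\downarrow} u=\{u\}\text{ satisfies } p,
\end{align*}
so $p\from q$ denotes the clopen upset $\{v\}$. Note that this clause always yields an upset, since $x\leq x'$ implies ${\downarrow} x\subseteq {\downarrow} x'$.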
It is known that the category of bi-Heyting algebras with corresponding homomorphisms is dually equivalent to the category of bi-Esakia spaces with continuous bounded morphisms. This result generalises Esakia duality, and is proved in \cite{Esakia1975TPoDitILaBL}. We denote the bi-Esakia space dual to a bi-Heyting algebra $\mathfrak{H}$ as $\mathfrak{H_*}$, and the bi-Heyting algebra dual to a bi-Esakia space $\mathfrak{X}$ as $\mathfrak{X}^*$.
\subsubsection{Tense Deductive Systems, Tense Algebras, and Tense Spaces}
We now work in the \emph{tense signature},
\[ten:=\{\land, \lor, \neg, \square_F, \lozenge_P, \bot, \top\}.\]
We prefer this signature to one with two primitive boxes to strengthen the connection between bi-Heyting coimplication and backwards-looking modalities. As usual, we write $\lozenge_F=\neg\square_F\neg$ and $\square_P=\neg \lozenge_P\neg$. The set $\mathit{Frm_{ten}}$ of \emph{tense formulae} is defined recursively as follows:
\[\varphi::= p\sep \bot\sep\top\sep \varphi\land \varphi\sep\varphi\lor \varphi\sep\neg\varphi\sep\square_F \varphi\sep\lozenge_P\varphi.\]
We introduce \emph{tense deductive systems}. Good references on tense logics include \cite[Ch. 1, Ch. 4]{BlackburnEtAl2001ML} and \cite{GabbayEtAl1994TLMFaCA}. Tense rule systems have not received much attention in the literature.
\begin{definition}
A \emph{(normal) tense logic} is a logic $\mathtt{M}$ over $\mathit{Frm}_{ten}$ satisfying the following conditions:
\begin{enumerate}
\item $\mathtt{S4}_{\square_F},\mathtt{S4}_{\lozenge_P}\subseteq \mathtt{M}$, where $\mathtt{S4}_{\heartsuit}$ is the normal modal logic $\mathtt{S4}$ formulated in the modal signature with modal operator $\heartsuit\in \{\square_F, \lozenge_P\}$;
\item $\varphi\to \square_F\lozenge_P\varphi\in \mathtt{M}$ and $\lozenge_P\square_F\varphi\to \varphi\in \mathtt{M}$;
\item $\varphi\to \psi, \varphi\in \mathtt{M}$ implies $\psi\in \mathtt{M}$ (MP);
\item $\varphi\in \mathtt{M}$ implies $\square_F \varphi\in \mathtt{M}$ (NEC$_F$);
\item $\varphi\in \mathtt{M}$ implies $\square_P\varphi\in \mathtt{M}$ (NEC$_P$);
\end{enumerate}
We let $\mathtt{S4.t}$ denote the least normal tense logic. A \emph{(normal) tense rule system} is a rule system $\mathtt{M}$ over $\mathit{Frm}_{ten}$ satisfying the following requirements:
\begin{enumerate}
\item $\varphi, \varphi\to \psi/\psi\in \mathtt{M}$ (MP-R);
\item $\varphi/\square_F\varphi\in \mathtt{M}$ (NEC$_F$-R);
\item $\varphi/\square_P\varphi\in \mathtt{M}$ (NEC$_P$-R);
\item $/\varphi\in \mathtt{M}$ whenever $\varphi\in \mathtt{S4.t}$.
\end{enumerate}
\end{definition}
We note that, for convenience, we are using a somewhat non-standard notion of a tense deductive system by requiring that tense deductive systems contain $\mathtt{S4}$. It is more customary to require only that tense deductive systems contain $\mathtt{K}$.
If $\mathtt{M}$ is a tense logic let $\mathbf{NExt}(\mathtt{M})$ be the set of normal tense logics containing $\mathtt{M}$, and similarly for tense rule systems. Then $\mathbf{NExt}(\mathtt{S4.t})$ is the set of all tense logics. It is easily checked that $\mathbf{NExt}(\mathtt{S4.t})$ is a complete lattice, with $\oplus_{\mathbf{NExt}(\mathtt{S4.t})}$ as join and intersection as meet. Note that for every $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t})$ there is always a least tense rule system containing $/\varphi$ for each $\varphi\in \mathtt{M}$, which we denote by $\mathtt{M_R}$. Then $\mathtt{S4.t_R}$ is the least tense rule system and $\mathbf{NExt}(\mathtt{S4.t_R})$ is the set of all tense rule systems. Again, one can easily verify that $\mathbf{NExt}(\mathtt{S4.t_R})$ forms a complete lattice with $\oplus_{\mathbf{NExt}(\mathtt{S4.t_R})}$ as join and intersection as meet. As usual, we write both $\oplus_{\mathbf{NExt}(\mathtt{S4.t})}$ and $\oplus_{\mathbf{NExt}(\mathtt{S4.t_R})}$ simply as $\oplus$.
We have the following tense counterpart of \Cref{deductivesystemisomorphismbsi}.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{NExt}(\mathtt{S4.t})$ and the sublattice of $\mathbf{NExt}(\mathtt{S4.t_R})$ consisting of all tense rule systems $\mathtt{M}$ such that $\mathsf{Taut}(\mathtt{M})_\mathtt{R}=\mathtt{M}$.\label{deductivesystemisomorphismten}
\end{proposition}
A \emph{tense algebra} is a structure $\mathfrak{A}=(A, \land, \lor, \neg, \square_F, \lozenge_P, 0, 1)$, such that both the $\square_F$-free and the $\lozenge_P$-free reducts of $\mathfrak{A}$ are closure algebras, and $\square_F, \lozenge_P$ form a residual pair, that is, for all $a, b\in A$ we have the following equivalence:
\[\lozenge_P a \leq b\iff a\leq \square_F b.\]
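A concrete way to see this residuation condition (a standard observation, spelled out here as an illustration) is via complex algebras: over a preorder $(W, R)$ the tense operations act on subsets $U\subseteq W$ by
\begin{align*}
\square_F U&=\{x\in W: y\in U\text{ whenever } Rxy\},\\
\lozenge_P U&=R[U]=\{y\in W: Rxy\text{ for some } x\in U\},
\end{align*}
and the inclusions $\lozenge_P U\subseteq V$ and $U\subseteq \square_F V$ both unfold to the condition that $x\in U$ and $Rxy$ jointly imply $y\in V$.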
Tense algebras are extensively discussed in, e.g., \cite{Kowalski1998VoTA} and \cite[Section 8.1]{Venema2007AaC}. We let $\mathsf{Ten}$ denote the class of tense algebras. It is well known that $\mathsf{Ten}$ is equationally definable
(see, e.g., \cite[Proposition 8.5]{Venema2007AaC}), and hence is a variety by \Cref{syntacticvarietiesuniclasses}. We let $\mathbf{Var}(\mathsf{Ten})$ and $\mathbf{Uni}(\mathsf{Ten})$ be the lattice of subvarieties and of universal subclasses of $\mathsf{Ten}$ respectively. The following result can be obtained by similar techniques as \Cref{thm:algebraisationMA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisationtense}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{S4.t})\to \mathbf{Var}(\mathsf{Ten})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{Ten})\to \mathbf{NExt}(\mathtt{S4.t})$;\label{algebraisationtensevar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{S4.t_R})\to \mathbf{Uni}(\mathsf{Ten})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{Ten})\to \mathbf{NExt}(\mathtt{S4.t_R})$.\label{algebraisationtenseuni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every tense logic $($resp. tense rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of tense algebras. \label{completeness_ten}
\end{corollary}
A \emph{tense space} is an $\mathtt{S4}$-modal space $\mathfrak{X}=(X, R, \mathcal{O})$, satisfying the following additional conditions:
\begin{itemize}
\item $R^{-1}(x)$ is closed for every $x\in X$;
\item $R[U]\in \mathsf{Clop}(\mathfrak{X})$ whenever $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{itemize}
It should be clear from the above definition that tense spaces, like bi-Esakia spaces, also satisfy an order-duality principle.
\begin{proposition}
For every tense space $\mathfrak{X}=(X, R, \mathcal{O})$, its \emph{order dual} $\bar{\mathfrak{X}}=(X, \breve{R}, \mathcal{O})$, where $\breve{R}$ is the converse of $R$, is an $\mathtt{S4}$-modal space.
\end{proposition}
If $\mathfrak{X}, \mathfrak{Y}$ are tense spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$, if $Rxy$ then $Rf(x)f(y)$, and moreover for all $x\in X$ and $y\in Y$ the following conditions hold:
\begin{itemize}
\item If $Rf(x)y$ then there is $z\in X$ such that $Rxz$ and $f(z)=y$;
\item If $Ryf(x)$ then there is $z\in X$ such that $Rzx$ and $f(z)=y$.
\end{itemize}
A \emph{valuation} on a tense space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{Clop}(\mathfrak{X})$. The geometrical semantics of tense logics and rule systems over tense spaces is a routine generalisation of the semantics of modal logics and rule systems on modal spaces, using $R$ to interpret $\square_F$ and $\breve{R}$ to interpret $\square_P$.
We list some important properties of tense spaces, which are obtained straightforwardly from \Cref{props4} and the order-duality principle.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{S4.t})$ and $U\in \mathsf{Clop}(\mathfrak{X})$. Then the following conditions hold: \label{proptensespace}
\begin{enumerate}
\item The sets $\mathit{max}_R(U)$, $\mathit{min}_R(U)$ are closed;
\item If $x\in U$ then there is $y\in \mathit{qmax}_R(U)$ such that $Rxy$, and there is $z\in \mathit{qmin}_R(U)$ such that $Rzx$.
\end{enumerate}
\end{proposition}
As a straightforward extension of the duality between modal algebras and modal spaces, one can prove that the category of tense algebras with homomorphisms is dually equivalent to the category of tense spaces with continuous bounded morphisms. We denote the tense space dual to a tense algebra $\mathfrak{A}$ as $\mathfrak{A_*}$, and the tense algebra dual to a tense space $\mathfrak{X}$ as $\mathfrak{X}^*$.
We will pay particular attention to tense algebras and spaces validating the tense logic $\mathtt{GRZ.T}$ below.
\begin{align*}
\mathtt{GRZ.T}:=\mathtt{S4.t}&\oplus \square_F(\square_F( p\to\square_F p)\to p)\to p\\
&\oplus p\to \lozenge_P(p\land \neg \lozenge_P(\lozenge_P p\land \neg p)).
\end{align*}
We name this logic $\mathtt{GRZ.T}$ rather than $\mathtt{GRZ.t}$ to emphasise that the $\mathtt{GRZ}$-axiom is required for both operators rather than just for $\square_F$. We let $\mathsf{GRZ.T}:=\mathsf{Alg}(\mathtt{GRZ.T})$. Clearly, for any $\mathfrak{A}\in \mathsf{Ten}$ we have $\mathfrak{A}\in \mathsf{GRZ.T}$ iff every $a\in A$ satisfies both of the inequalities
\begin{gather*}
\square_F(\square_F( a\to\square_F a)\to a)\leq a,\\
a\leq \lozenge_P(a\land \neg \lozenge_P(\lozenge_P a\land \neg a)).
\end{gather*}
The following proposition is a counterpart to \Cref{propgrz1}, and is proved straightforwardly using the latter and the order-duality principle.
\begin{proposition}
For every $\mathtt{GRZ.T}$-space $\mathfrak{X}$ and ${U}\in \mathsf{Clop}(\mathfrak{X})$, the following hold: \label{propGrz.T}
\begin{enumerate}
\item $\mathit{qmax}_R(U)\subseteq \mathit{max}_R(U)$, and $\mathit{qmin}_R(U)\subseteq \mathit{min}_R(U)$;\label{propGrz.T:1}
\item The sets $\mathit{max}_R(U)$ and $\mathit{min}_R(U)$ are closed;\label{propGrz.T:2}
\item For every $x\in U$ there are $y\in \mathit{pas}_R(U)$ such that $Rxy$, and $z\in \mathit{pas}_{\breve{R}}(U)$ such that $Rzx$;\label{propGrz.T:3}
\item $\mathit{max}_R(U)\subseteq\mathit{pas}_R(U)$ and $\mathit{min}_R(U)\subseteq\mathit{pas}_{\breve{R}}(U)$. \label{propGrz.T:4}
\end{enumerate}
\end{proposition}
Recall that for $\mathfrak{X}$ a $\mathtt{GRZ.T}$-space, a set $U\subseteq X$ is said to \emph{cut} a cluster $C\subseteq X$ when both $U\cap C\neq\varnothing$ and $U\smallsetminus C\neq\varnothing$. As a consequence of \Cref{propGrz.T:4} in \Cref{propGrz.T} above, we obtain in particular that in any $\mathtt{GRZ.T}$-space $\mathfrak{X}$, no cluster $C\subseteq X$ can be cut by any of $\mathit{max}_R(U)$, $\mathit{pas}_R(U)$, $\mathit{min}_R(U)$, $\mathit{pas}_{\breve{R}}(U)$ for any $U\in \mathsf{Clop}(\mathfrak{X})$.
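A minimal example (ours) illustrates how the $\mathtt{GRZ.T}$ axioms constrain clusters: on the two-element cluster $W=\{u, v\}$ with $R=W\times W$, the clopen $U=\{u\}$ cuts the cluster and refutes the second $\mathtt{GRZ.T}$ axiom under $V(p)=U$, since
\[
\lozenge_P U=W,\qquad \lozenge_P U\cap -U=\{v\},\qquad \lozenge_P\{v\}=W,
\]
whence $U\cap -\lozenge_P(\lozenge_P U\cap -U)=\varnothing$, so that the right-hand side of the axiom evaluates to $\lozenge_P\varnothing=\varnothing\not\supseteq U$.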
\subsection{Stable Canonical Rules for Bi-superintuitionistic and Tense Rule Systems}\label{sec:scr2}
In this section we generalise the si and modal stable canonical rules from \Cref{sec:scr1} to the bsi and tense setting respectively. While bsi and tense stable canonical rules are not discussed in existing literature, the differences between their theory and that of si and modal stable canonical rules are few and inessential. In particular, all proofs of results in this section are straightforward adaptations of corresponding results in \Cref{sec:scr1}, which is why we omit most of them.
\subsubsection{Bi-superintuitionistic Case} We begin by defining bsi stable canonical rules.
\label{sec:bsirules}
\begin{definition}
Let $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ be finite and $D^\to, D^\from\subseteq H\times H$. For every $a\in H$ introduce a fresh propositional variable $p_a$. The \emph{bsi stable canonical rule} of $(\mathfrak{H}, D^\to, D^\from)$ is defined as the rule $\scrbsi{H}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma=&\{p_0\leftrightarrow \bot\}\cup\{p_1\leftrightarrow \top\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a,b\in H\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a,b\in H\}\cup\\
&\{p_{a\to b}\leftrightarrow p_a\to p_b:(a, b)\in D^\to\}\cup \{p_{a\from b}\leftrightarrow p_a\from p_b:(a, b)\in D^\from\}\\
\Delta&=\{p_a\leftrightarrow p_b:a, b\in H\text{ with } a\neq b\}.
\end{align*}
\end{definition}
The notion of a stable map between bi-Heyting algebras is defined exactly as in the Heyting case, i.e., stable maps are simply bounded lattice homomorphisms. We note that for any stable map $h:\mathfrak{H}\to \mathfrak{K}$ with $\mathfrak{H}, \mathfrak{K}\in \mathsf{bi\text{-}HA}$, for all $a, b\in H$ we also have
\[h(a\from b)\geq h(a)\from h(b).\]
Indeed, this is obvious in view of the order-duality principle. If $D\subseteq H\times H$ and $\heartsuit \in \{\to, \from\}$, we say that $h$ satisfies the \emph{$\heartsuit$-bounded domain condition} (BDC$^\heartsuit$) for $D$ if $h(a\heartsuit b)=h(a)\heartsuit h(b)$ for every $(a, b)\in D$.
If $D^\to, D^\from\subseteq H\times H$, for brevity we say that $h$ satisfies the BDC for $(D^\to, D^\from)$ to mean that $h$ satisfies the BDC$^\to$ for $D^\to$ and the BDC$^\from$ for $D^\from$.
The next two results characterise algebraic refutation conditions for bsi stable canonical rules.
\begin{proposition}
For all finite $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ and $D^\to, D^\from\subseteq H\times H$, we have $\mathfrak{H}\nvDash \scrbsi{H}{D}$.
\end{proposition}
\begin{proposition}
For every bsi stable canonical rule $\scrbsi{H}{D}$ and every $\mathfrak{K}\in \mathsf{bi\text{-}HA}$, we have $\mathfrak{K}\nvDash \scrbsi{H}{D}$ iff there is a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $(D^\to, D^\from)$.
\end{proposition}
We now characterise geometric refutation conditions of bsi stable canonical rules on bi-Esakia spaces. Since bi-Esakia spaces are Esakia spaces, the notion of a stable map applies. Let $\mathfrak{X}, \mathfrak{Y}\in \mathsf{bi\text{-}Esa}$ and $\mathfrak{d}\subseteq {Y}$. A stable map $f:\mathfrak{X}\to \mathfrak{Y}$ is said to satisfy
\begin{itemize}
\item The BDC$^\to$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[{\uparrow} f(x)\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{\uparrow} x]\cap \mathfrak{d}\neq\varnothing;\]
\item The BDC$^\from$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[{\downarrow} f(x)\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{\downarrow} x]\cap \mathfrak{d}\neq\varnothing.\]
\end{itemize}
If $\mathfrak{D}\subseteq \wp(Y)$, we say that $f$ satisfies the BDC$^\heartsuit$ for $\mathfrak{D}$ when it does for each $\mathfrak{d}\in \mathfrak{D}$, where $\heartsuit\in \{\to, \from\}$. Given $\mathfrak{D}^\to, \mathfrak{D}^\from \subseteq \wp(Y)$ we write that $f$ satisfies the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\from)$ if $f$ satisfies the BDC$^\to$ for $\mathfrak{D}^\to$ and the BDC$^\from$ for $\mathfrak{D}^\from$. Finally, if $\scrbsi{H}{D}$ is a bsi stable canonical rule consider $\mathfrak{X}:=\mathfrak{H}_*$ and let
\[\mathfrak{D}^\heartsuit:=\{\mathfrak{d}^\heartsuit_{(a, b)}:(a, b)\in D^\heartsuit\}\]
where
\[\mathfrak{d}^\heartsuit_{(a, b)}:= \beta (a)\smallsetminus \beta (b)\]
for $\heartsuit\in \{\to, \from\}$.
\begin{proposition}
For any bi-Esakia space $\mathfrak{X}$ and any bsi stable canonical rule $\scrbsi{H}{D}$, we have $\mathfrak{X}\nvDash\scrbsi{H}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\from)$ defined as above.\label{refutspace2}
\end{proposition}
In view of \Cref{refutspace2}, in geometric settings we prefer to write a bsi stable canonical rule $\scrbsi{H}{D}$ as $\scrbsi{H_*}{\mathfrak{D}}$.
We now elucidate the notion of filtration for bi-Heyting algebras presupposed by our bsi stable canonical rules.
\begin{definition}
Let $\mathfrak{H}$ be a bi-Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{K}', V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{H}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{K}'=(\mathfrak{K}, \to, \from)$, where $\mathfrak{K}$ is the bounded sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a stable embedding satisfying the BDC for $(D^\to, D^\from)$, where
\[D^\heartsuit :=\{(\bar V(\varphi), \bar V(\psi)):\varphi\heartsuit \psi\in \Theta\}\]
for $\heartsuit\in \{\to, \from\}$.
\end{enumerate}
\end{definition}
\begin{theorem}[Filtration theorem for bi-Heyting algebras]
Let $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ be a bi-Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{K}', V')$ is a filtration of $(\mathfrak{H}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every bsi rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{H}, V\models \Gamma/\Delta\iff \mathfrak{K}', V'\models \Gamma/\Delta.\]
\end{theorem}
The next lemma is a counterpart to \Cref{rewritesi}.
\begin{lemma}
For every bsi rule $\Gamma/\Delta$ there is a finite set $\Xi$ of bsi stable canonical rules such that for any $\mathfrak{K}\in \mathsf{bi\text{-}HA}$ we have that $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\scrbsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrbsi{H}{D}$.\label{rewritebsi}
\end{lemma}
\begin{proof}
The proof is a straightforward generalisation of the proof of \Cref{rewritesi}, using the fact that every finite bounded distributive lattice $\mathfrak{J}$ may be expanded to a bi-Heyting algebra $\mathfrak{J}'=(\mathfrak{J}, \rightsquigarrow, \leftsquigarrow)$ by setting:
\begin{align*}
a\rightsquigarrow b&:=\bigvee\{c\in J: a\land c\leq b\}\\
a\leftsquigarrow b&:=\bigwedge\{c\in J: a\leq b\lor c\}.
\end{align*}
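These operations are indeed residuals because $\mathfrak{J}$ is finite and distributive, a routine verification we spell out: finiteness guarantees that the displayed join and meet exist, and distributivity gives
\[
a\land \bigvee\{c\in J: a\land c\leq b\}=\bigvee\{a\land c: c\in J \text{ and } a\land c\leq b\}\leq b,
\]
so $a\rightsquigarrow b$ is the largest $c$ with $a\land c\leq b$; the order-dual computation shows that $a\leftsquigarrow b$ is the least $c$ with $a\leq b\lor c$.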
\end{proof}
Reasoning as in the proof of \Cref{axiomatisationsi} we obtain the following axiomatisation result.
\begin{theorem}
Every bsi-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ is axiomatisable over $\mathtt{bi\text{-}IPC_R}$ by some set of bsi stable canonical rules. \label{axiomatisationbsi}
\end{theorem}
\subsubsection{Tense Case} We now turn to tense stable canonical rules. \label{sec:tenrules}
\begin{definition}
Let $\mathfrak{A}\in \mathsf{Ten}$ be finite and $D^{\square_F}, D^{\lozenge_P}\subseteq A$. For every $a\in A$ introduce a fresh propositional variable $p_a$. The \emph{tense stable canonical rule} of $(\mathfrak{A}, D^{\square_F}, D^{\lozenge_P})$ is defined as the rule $\scrten{A}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma=&\{p_{a\land b}\leftrightarrow p_a\land p_b:a,b\in A\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a,b\in A\}\cup \\
&\{p_{\neg a}\leftrightarrow \neg p_a:a\in A\}\cup\\
&\{p_{\square_F a}\to \square_Fp_a: a\in A\}\cup \{\lozenge_P p_a\to p_{\lozenge_P a}: a\in A\}\cup\\
&\{\square_Fp_a\to p_{\square_F a}: a\in D^{\square_F}\}\cup \{p_{\lozenge_P a}\to \lozenge_Pp_a: a\in D^{\lozenge_P}\}\\
\Delta=&\{p_a:a\in A\smallsetminus \{1\}\}.
\end{align*}
\end{definition}
If $\mathfrak{A}, \mathfrak{B}\in \mathsf{Ten}$ are tense algebras, a map $h:\mathfrak{A}\to \mathfrak{B}$ is called \emph{stable} if for every $a\in A$ the following conditions hold:
\[h(\square_F a)\leq \square_F h(a)\qquad \lozenge_P h(a)\leq h(\lozenge_P a).\]
If $D\subseteq A$ and $\heartsuit \in \{\square_F, \lozenge_P\}$, we say that $h$ satisfies the \emph{$\heartsuit$-bounded domain condition} (BDC$^\heartsuit$) for $D$ if $h(\heartsuit a)= \heartsuit h(a)$ for every $a\in D$. If $D^{\square_F}, D^{\lozenge_P}\subseteq A$, for brevity we say that $h$ satisfies the BDC for $(D^{\square_F}, D^{\lozenge_P})$ to mean that $h$ satisfies the BDC$^{\square_F}$ for $D^{\square_F}$ and the BDC$^{\lozenge_P}$ for $D^{\lozenge_P}$.
We outline algebraic refutation conditions for tense stable canonical rules.
\begin{proposition}
For all finite $\mathfrak{A}\in \mathsf{Ten}$ and $D^{\square_F}, D^{\lozenge_P}\subseteq A$, we have $\mathfrak{A}\nvDash \scrten{A}{D}$. \label{refutationten1}
\end{proposition}
\begin{proposition}
For every tense stable canonical rule $\scrten{A}{D}$ and any $\mathfrak{B}\in \mathsf{Ten}$, we have $\mathfrak{B}\nvDash \scrten{A}{D}$ iff there is a stable embedding $h:\mathfrak{A}\to \mathfrak{B}$ satisfying the BDC for $(D^{\square_F}, D^{\lozenge_P})$.\label{refutationten2}
\end{proposition}
Tense spaces are modal spaces, therefore the notion of a stable map applies. Let $\mathfrak{X}, \mathfrak{Y}$ be tense spaces and $\mathfrak{d}\subseteq {Y}$. A stable map $f:\mathfrak{X}\to \mathfrak{Y}$ is said to satisfy
\begin{itemize}
\item The BDC$^{\square_F}$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[R[f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[R[x]]\cap \mathfrak{d}\neq\varnothing;\]
\item The BDC$^{\lozenge_P}$ for $\mathfrak{d}$ if for all $x\in X$ we have
\[\breve{R}[f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[\breve{R}[x]]\cap \mathfrak{d}\neq\varnothing.\]
\end{itemize}
If $\mathfrak{D}\subseteq \wp(Y)$, we say that $f$ satisfies the BDC$^\heartsuit$ for $\mathfrak{D}$ when it does for each $\mathfrak{d}\in \mathfrak{D}$, where $\heartsuit\in \{\square_F, \lozenge_P\}$. Given $\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P} \subseteq \wp(Y)$ we write that $f$ satisfies the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$ if $f$ satisfies the BDC$^{\square_F}$ for $\mathfrak{D}^{\square_F}$ and the BDC$^{\lozenge_P}$ for $\mathfrak{D}^{\lozenge_P}$. Finally, if $\scrten{A}{D}$ is a tense stable canonical rule consider $\mathfrak{X}:=\mathfrak{A}_*$ and for $\heartsuit\in \{\square_F, \lozenge_P\}$ let
\[\mathfrak{D}^{\heartsuit}:=\{\mathfrak{d}^\heartsuit_{a}:a\in D^\heartsuit\}\]
where for each $a\in A$ we have
\begin{align*}
\mathfrak{d}^{\square_F}_{a}&:= -\beta (a)\\
\mathfrak{d}^{\lozenge_P}_a&:= \beta (a)
\end{align*}
\begin{proposition}
For any tense space $\mathfrak{X}$ and any tense stable canonical rule $\scrten{A}{D}$, we have $\mathfrak{X}\nvDash\scrten{A}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{A}_*$ satisfying the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$ defined as above.\label{refutspaceten}
\end{proposition}
In view of \Cref{refutspaceten}, in geometric settings we prefer to write a tense stable canonical rule $\scrten{A}{D}$ as $\scrten{A_*}{\mathfrak{D}}$.
We now introduce the notion of filtration implicit in tense stable canonical rules. Filtration for tense logics was considered, e.g., in \cite{Wolter1997CaDoTLCRtLaK} from a frame-theoretic perspective. Here we prefer an algebraic approach in line with \Cref{ch:1}.
\begin{definition}
Let $\mathfrak{A}$ be a tense algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{B}', V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{A}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{B}'=(\mathfrak{B}, \square_F, \lozenge_P)$, where $\mathfrak{B}$ is the Boolean subalgebra of $\mathfrak{A}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{B}'\to \mathfrak{A}$ is a stable embedding satisfying the BDC for $(D^{\square_F}, D^{\lozenge_P})$, where
\[D^{\heartsuit}:=\{\bar V(\varphi):\heartsuit\varphi\in \Theta\}\]
for $\heartsuit\in \{\square_F, \lozenge_P\}$.
\end{enumerate}
\end{definition}
\begin{theorem}[Filtration theorem for tense algebras]
Let $\mathfrak{A}\in \mathsf{Ten}$ be a tense algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{B}', V')$ is a filtration of $(\mathfrak{A}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every tense rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{A}, V\models \Gamma/\Delta\iff \mathfrak{B}', V'\models \Gamma/\Delta.\]
\end{theorem}
Just as in the $\mathtt{S4}$ case, not every filtration of a model based on a tense algebra is itself based on a tense algebra, because the $\mathtt{S4}$-axiom for either $\square_F$ or $\lozenge_P$ may not be preserved. However, given any model based on a tense algebra, there is always a method for filtrating it through any finite set of formulae which yields a model based on a tense algebra.
\begin{definition}
Let $\mathfrak{A}\in \mathsf{Ten}$, $V$ a valuation on $\mathfrak{A}$ and $\Theta$ a finite, subformula closed set of formulae. The (least) \emph{transitive filtration} of $(\mathfrak{A}, V)$ is the pair $(\mathfrak{B}', V')$ with $\mathfrak{B}'=(\mathfrak{B}, \blacksquare_F,\blacklozenge_P)$, where $\mathfrak{B}$ and $V'$ are as per \Cref{filtrmod}, and for all $b\in B$ we have
\begin{align*}
\blacksquare_F b&:=\bigvee\{\square_F a: \square_F a\leq \square_F b\text{ and }a, \square_F a\in B\}\\
\blacklozenge_P b&:=\bigwedge\{\lozenge_P a:\ \lozenge_P b\leq\lozenge_P a \text{ and }a, \lozenge_P a\in B\}
\end{align*}
\end{definition}
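Observe (a one-line verification we include for the reader) that the inclusion $\mathfrak{B}'\to \mathfrak{A}$ is automatically stable, since by construction
\[
\blacksquare_F b\leq \square_F b\qquad\text{and}\qquad \lozenge_P b\leq \blacklozenge_P b\qquad\text{for all } b\in B,
\]
and it satisfies the BDC for $(D^{\square_F}, D^{\lozenge_P})$: if $b, \square_F b\in B$ then $\square_F b$ itself occurs in the join defining $\blacksquare_F b$, whence $\blacksquare_F b=\square_F b$, and dually for $\blacklozenge_P$.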
Via duality, it is not difficult to see that the least transitive filtration of any model based on a tense algebra is again based on a tense algebra.
At this stage, reasoning as in the proof of \Cref{rewritemod} using transitive filtrations we obtain the following results.
\begin{lemma}
For every tense rule $\Gamma/\Delta$ there is a finite set $\Xi$ of tense stable canonical rules such that for any $\mathfrak{K}\in \mathsf{Ten}$ we have that $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\scrten{A}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrten{A}{D}$.\label{rewritetense}
\end{lemma}
\begin{theorem}
Every tense rule system is axiomatisable over $\mathtt{S4.t_R}$ by some set of tense stable canonical rules. \label{axiomatisationten}
\end{theorem}
\subsubsection{Comparison with Je\v{r}ábek-style Canonical Rules}\label{sec:comparison}
Our bsi and tense stable canonical rules generalise si and modal stable canonical rules in a way that mirrors the simple and intimate connection existing between Heyting and bi-Heyting algebras on the one hand, and modal and tense algebras on the other, explicated by the order-duality principles. Just as a bi-Heyting algebra is a Heyting algebra whose order dual is also a Heyting algebra, so every bsi stable canonical rule is a sort of ``independent fusion'' of two si stable canonical rules, whose associated Heyting algebras are order-dual to each other. Similarly for the tense case.
Je\v{r}ábek-style si and modal canonical rules (like \z-style si and modal canonical formulae), by contrast, do not generalise as smoothly to the bsi and tense case. Algebraically, a Je\v{r}ábek-style si canonical rule may be defined as follows (cf. \cite{BezhanishviliBezhanishvili2009AAAtCFIC,BezhanishviliEtAl2016CSL}).
\begin{definition}
Let $\mathfrak{H}\in \mathsf{HA}$ be finite and let $D\subseteq H\times H$. The \emph{si canonical rule} of $(\mathfrak{H}, D)$ is the rule $\zeta(\mathfrak{H}, D)=\Gamma/\Delta$, where
\begin{align*}
\Gamma:=&\{p_0\leftrightarrow \bot\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in H\}\cup\{p_{a\to b}\leftrightarrow p_a\to p_b:a, b\in H\}\cup\\
&\{p_{a\lor b}\leftrightarrow p_a\lor p_b:(a, b)\in D\}\\
\Delta:=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with }a\neq b\}.
\end{align*}
\end{definition}
Generalising the proof of \cite[Corollary 5.10]{BezhanishviliEtAl2016CSL}, one can show that every si rule is equivalent to finitely many si canonical rules. The key ingredient in this proof is a characterisation of the refutation conditions for si canonical rules: $\zeta(\mathfrak{H}, D)$ is refuted by a Heyting algebra $\mathfrak{K}$ iff there is a $(\land, \to, 0)$-embedding $h:\mathfrak{H}\to \mathfrak{K}$ preserving $\lor$ on elements from $D$. Because $(\land, \to, 0)$-algebras are locally finite (a result known as \emph{Diego's theorem}), one can then reason as in the proof of, e.g., \Cref{rewritesi} to reach the desired result.
It should be clear that if one defined the bsi canonical rule $\zeta_B(\mathfrak{H}, D, D')$ by combining the rules $\zeta(\mathfrak{H}, D)$ and $\zeta(\bar{\mathfrak{H}}, D')$ the same way bsi stable canonical rules combine si stable canonical rules, then $\zeta_B(\mathfrak{H}, D, D')$ would be refuted by a bi-Heyting algebra $\mathfrak{K}$ iff there is a bi-Heyting algebra embedding $h:\mathfrak{H}\to \mathfrak{K}$.
Since the variety of bi-Heyting algebras is not locally finite, this refutation condition is clearly too strong to deliver a result to the effect that every bsi rule is equivalent to a set of bsi canonical rules. Without such a result, in turn there is no hope of axiomatising every rule system over $\mathtt{bi\text{-}IPC}$ by means of bsi canonical rules.
Similar remarks hold in the tense case, although in this case the details are too complex to do them justice in the limited space we have at our disposal. We limit ourselves to a rough sketch of the tense case. \citet{BezhanishviliEtAl2011AAAtSLMC} show that the proof of the fact that every modal formula is equivalent, over $\mathtt{S4}$, to finitely many modal \z-style canonical formulae of closure algebras rests on an application of Diego's theorem \cite[cf.][Main Lemma]{BezhanishviliEtAl2011AAAtSLMC}. This has to do with how selective filtrations of closure algebras are constructed. Given a closure algebra $\mathfrak{A}$ refuting a rule $\Gamma/\Delta$, a key step in constructing a finite selective filtration of $\mathfrak{A}$ through $\mathit{Sfor}(\Gamma/\Delta)$ consists in generating a $(\land, \to, 0)$-subalgebra of $\rho \mathfrak{A}$ from a finite subset of $O(A)$. This structure is guaranteed to be finite by Diego's theorem. On the most obvious ways of generalising this construction to tense algebras, we would need to replace this step with one of the following:
\begin{enumerate}
\item Generate both a $(\land, \to, 0)$-subalgebra of $\rho \mathfrak{A}$ and a $(\lor, \from, 1)$-subalgebra of $\rho \mathfrak{A}$ from a finite subset of $O(A)$;
\item Generate a bi-Heyting subalgebra of $\rho \mathfrak{A}$ from a finite subset of $O(A)$.
\end{enumerate}
On option 1, Diego's theorem and its order dual would guarantee that both the $(\land, \to, 0)$-subalgebra of $\rho \mathfrak{A}$ and the $(\lor, \from, 1)$-subalgebra of $\rho \mathfrak{A}$ are finite. However, it is not clear how one could then combine the two subalgebras into a bi-Heyting algebra, which is required to obtain a selective filtration based on a tense algebra. On option 2, on the other hand, we would indeed obtain a bi-Heyting subalgebra of $\rho \mathfrak{A}$, but not necessarily a finite one, since bi-Heyting algebras are not locally finite.
We realise that the argument sketches just presented are far from conclusive, so we do not go as far as ruling out the possibility that Je\v{r}ábek-style bsi and tense canonical rules could somehow be developed in such a way as to be suitable tools for developing the theory of tense companions of bsi-rule systems. What such rules would look like, and in what sense they would constitute genuine generalisations of Je\v{r}ábek's canonical rules and \z's canonical formulae are interesting questions, but this paper is not the appropriate space to pursue them. At this stage we merely wish to stress that answering these sorts of questions is a non-trivial matter, whereas generalising stable canonical rules to the bsi and tense setting and applying them to develop the theory of tense companions is a completely routine task. On our approach, exactly the same methods used in the si and modal case work equally well in the bsi-tense case.
\subsection{Tense Companions of Bi-superintuitionistic Rule Systems}\label{sec:tensecompanions}
We turn to the main topic of this section, generalising the results of \Cref{sec:modalcompanions} to the bsi-tense setting. As anticipated, this is done using exactly the same techniques seen in the si and modal case, which is one of the main advantages of our method.
\subsubsection{Semantic Mappings}\label{sec:mappingstc}
We begin by generalising the semantic transformations for turning Heyting algebras into corresponding closure algebras and vice versa, seen in \Cref{sec:modalcompanions}, to transformations between bi-Heyting and tense algebras. The results in this section are well known, and the reader may consult \cite[Section 7]{Wolter1998OLwC} for a more detailed overview.
\begin{definition}
The mapping $\sigma: \mathsf{bi\text{-}HA}\to \mathsf{Ten}$ assigns to every $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ the algebra $\sigma \mathfrak{H}:=(B(\mathfrak{H}),\square_F, \lozenge_P)$, where $B(\mathfrak{H})$ is the free Boolean extension of $\mathfrak{H}$ and
\begin{align*}
\square_F a&:=\bigvee \{b\in H: b\leq a\}\\
\lozenge_P a&:=\bigwedge \{b\in H: a\leq b\}
\end{align*}
\end{definition}
That $\square_F, \lozenge_P$ are well-defined operations on $B(\mathfrak{H})$ follows from the order-duality principle and the results in the previous section. It is easy to verify that $\sigma \mathfrak{H}$ validates the $\mathtt{S4}$ axioms for both $\square_F$ and $\lozenge_P$. Moreover, for any $a\in B(H)$ clearly $\lozenge_P a\in H$, so $\square_F\lozenge_P a=\lozenge_P a$. This implies $a\leq \square_F\lozenge_P a$. Therefore indeed $\sigma \mathfrak{H}\in \mathsf{Ten}$.
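For a concrete instance (our example), let $\mathfrak{H}$ be the three-element chain $0<x<1$. Then $B(\mathfrak{H})$ is the four-element Boolean algebra with atoms $x$ and $\neg x$, and
\[
\square_F \neg x=\bigvee\{b\in H: b\leq \neg x\}=0,\qquad \lozenge_P \neg x=\bigwedge\{b\in H: \neg x\leq b\}=1,
\]
while both operations fix $0$, $x$ and $1$. In particular the fixed points of $\square_F$ are exactly the elements of $\mathfrak{H}$, in accordance with the representation result below (\Cref{representation2haGrz.T}).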
\begin{definition}
The mapping $\rho:\mathsf{Ten}\to \mathsf{bi\text{-}HA}$ assigns to every $\mathfrak{A}\in \mathsf{Ten}$ the algebra $\rho \mathfrak{A}:=(O(A), \land, \lor, \to, \from, 0,1)$, where
\begin{align*}
O(A)&:=\{a\in A:\square_F a=a\}=\{a\in A:\lozenge_P a=a\}\\
a\to b&:=\square_F (\neg a\lor b)\\
a\from b&:=\lozenge_P ( a\land \neg b).
\end{align*}
\end{definition}
Using the order-duality principle, it is easy to verify that for every $\mathfrak{A}\in \mathsf{Ten}$, the algebra $\rho \mathfrak{A}$ is indeed a bi-Heyting algebra.
Recall the geometric mappings $\sigma :\mathsf{Esa}\to \mathsf{Spa}(\mathtt{GRZ})$ and $\rho:\mathsf{Spa}(\mathtt{S4})\to \mathsf{Esa}$. Since bi-Esakia spaces are Esakia spaces, and tense spaces are $\mathtt{S4}$-spaces, we may restrict these mappings to $\sigma :\mathsf{bi\text{-}Esa}\to \mathsf{Spa}(\mathtt{GRZ.T})$ and $\rho:\mathsf{Spa}(\mathtt{GRZ.T})\to \mathsf{bi\text{-}Esa}$ and obtain geometric counterparts to the algebraic mappings between bi-Heyting and tense algebras defined in the present subsection. Reasoning as in the proof of \Cref{prop:mcmapsdual} we find that the algebraic and geometric versions of the maps $\sigma, \rho$ are indeed dual to each other.
\begin{proposition}
The following hold.\label{tcmapsdual}
\begin{enumerate}
\item Let $\mathfrak{H}\in \mathsf{bi\text{-}HA}$. Then $(\sigma \mathfrak{H})_*\cong\sigma (\mathfrak{H}_*)$. Consequently, if $\mathfrak{X}$ is a bi-Esakia space then $(\sigma \mathfrak{X})^*\cong\sigma (\mathfrak{X}^*)$.
\item Let $\mathfrak{X}$ be a tense space. Then $(\rho\mathfrak{X})^*\cong\rho(\mathfrak{X}^*)$. Consequently, if $\mathfrak{A}\in \mathsf{Alg}(\mathtt{S4.t})$, then $(\rho\mathfrak{A})_*\cong\rho(\mathfrak{A}_*)$.
\end{enumerate}
\end{proposition}
As an easy corollary, we obtain the following analogue of \Cref{cor:representationHAS4}.
\begin{proposition}
For every $\mathfrak{H}\in \mathsf{bi\text{-}HA}$ we have $\mathfrak{H}\cong \rho\sigma \mathfrak{H}$. Moreover, for every $\mathfrak{A}\in \mathsf{Ten}$ we have $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$.\label{representation2haGrz.T}
\end{proposition}
\subsubsection{A Gödelian Translation}
We extend the Gödel translation of the previous section to a translation from bsi formulae to tense ones.
\begin{definition}[Gödelian translation - bsi to tense]
The \emph{Gödelian translation} is a mapping $T:\mathit{Frm_{bsi}}\to \mathit{Frm_{ten}}$ defined recursively as follows.
\begin{align*}
T(\bot)&:=\bot\\
T(\top)&:=\top\\
T(p)&:=\square_F p\\
T(\varphi\land \psi)&:=T(\varphi)\land T(\psi)\\
T(\varphi\lor \psi)&:=T(\varphi)\lor T(\psi)\\
T(\varphi\to \psi)&:=\square_F (\neg T(\varphi)\lor T(\psi))\\
T(\varphi\from \psi)&:=\lozenge_P ( T(\varphi)\land \neg T(\psi))
\end{align*}
\end{definition}
An essentially equivalent translation was considered in \cite{Wolter1998OLwC}, though using $\square_P$ instead of $\lozenge_P$ to interpret $\from$.
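Two instructive special cases (computed by us from the clauses above, modulo trivial Boolean simplifications) are the two negations: for $\neg\varphi=\varphi\to\bot$ and $\gen\varphi=\top\from\varphi$ we obtain
\begin{align*}
T(\neg\varphi)&=\square_F(\neg T(\varphi)\lor\bot)=\square_F\neg T(\varphi),\\
T(\gen\varphi)&=\lozenge_P(\top\land\neg T(\varphi))=\lozenge_P\neg T(\varphi),
\end{align*}
so intuitionistic negation is rendered as ``false throughout the future'' and co-negation as ``false somewhere in the past''.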
The following analogue of \Cref{lem:gtskeleton} is proved the same way as the latter.
\begin{lemma}
For every $\mathfrak{A}\in \mathsf{Ten}$ and bsi rule $\Gamma/\Delta$, \label{lem:gtskeleton2}
\[\mathfrak{A}\models T(\Gamma/\Delta)\iff \rho\mathfrak{A}\models \Gamma/\Delta.\]
\end{lemma}
We note that \Cref{lem:gtskeleton2} does not appear in the literature, which only mentions similar results concerning formulae rather than rules.
\subsubsection{Structure of Tense Companions}\label{sec:semantictensecompanions}
We are now ready to generalise \Cref{mcinterval} and \Cref{blokesakia} to the bsi-tense setting. We do so in this section. All the results of this section are new inasmuch as they involve rule systems. Their restrictions to logics were established by \citet{Wolter1998OLwC}, although our proofs differ from Wolter's Blok-style algebraic approach.
We begin by formally defining the notion of a \emph{tense companion}.
\begin{definition}
Let $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ be a bsi-rule system and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$ a tense rule system. We say that $\mathtt{M}$ is a \emph{tense companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the bsi fragment of $\mathtt{M}$) whenever $\Gamma/\Delta\in \mathtt{L}$ iff $T(\Gamma/\Delta)\in \mathtt{M}$ for every bsi rule $\Gamma/\Delta$. Moreover, let $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ be a bsi-logic and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t})$ a tense logic. We say that $\mathtt{M}$ is a \emph{tense companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the bsi fragment of $\mathtt{M}$) whenever $\varphi\in \mathtt{L}$ iff $T(\varphi)\in \mathtt{M}$.
\end{definition}
Clearly, $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$ is a tense companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ iff $\mathsf{Taut}(\mathtt{M})$ is a tense companion of $\mathsf{Taut}(\mathtt{L})$, and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t})$ is a tense companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ iff $\mathtt{M_R}$ is a tense companion of $\mathtt{L_R}$.
Define the following three maps between $\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ and $\mathbf{NExt}(\mathtt{S4.t_R})$.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{NExt}(\mathtt{S4.t_R}) & \sigma&:\mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{NExt}(\mathtt{S4.t_R}) \\
\mathtt{L}&\mapsto \mathtt{S4.t_R}\oplus \{T(\Gamma/\Delta):\Gamma/\Delta\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathtt{GRZ.T_R}\oplus \tau \mathtt{L}
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4.t_R}) \to \mathbf{Ext}(\mathtt{bi\text{-}IPC_R}) \\
\mathtt{M}&\mapsto\{\Gamma/\Delta:T(\Gamma/\Delta)\in \mathtt{M}\}
\end{align*}
These mappings are readily extended to lattices of logics.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{NExt}(\mathtt{S4.t}) & \sigma&:\mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{NExt}(\mathtt{S4.t}) \\
\mathtt{L}&\mapsto \mathsf{Taut}(\tau\mathtt{L_R})=\mathtt{S4.t}\oplus \{T(\varphi):\varphi\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathsf{Taut}(\sigma\mathtt{L_R})=\mathtt{GRZ.T}\oplus\{T(\varphi):\varphi\in \mathtt{L}\}
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4.t}) \to \mathbf{Ext}(\mathtt{bi\text{-}IPC}) \\
\mathtt{M}&\mapsto \mathsf{Taut}(\rho\mathtt{M_R})=\{\varphi:T(\varphi)\in \mathtt{M}\}
\end{align*}
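As an instance of these maps (standard facts which follow from the results of \cite{Wolter1998OLwC} and the theorems below, recorded here for orientation), taking $\mathtt{L}=\mathtt{bi\text{-}IPC}$ yields
\[
\tau\,\mathtt{bi\text{-}IPC}=\mathtt{S4.t},\qquad \sigma\,\mathtt{bi\text{-}IPC}=\mathtt{GRZ.T},\qquad \rho\,\mathtt{S4.t}=\rho\,\mathtt{GRZ.T}=\mathtt{bi\text{-}IPC},
\]
so $\mathtt{S4.t}$ and $\mathtt{GRZ.T}$ are, respectively, the least and greatest tense companions of $\mathtt{bi\text{-}IPC}$.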
Furthermore, extend the mappings $\sigma:\mathsf{bi\text{-}HA}\to \mathsf{Ten}$ and $\rho:\mathsf{Ten}\to \mathsf{bi\text{-}HA}$ to universal classes by setting
\begin{align*}
\sigma&:\mathbf{Uni}(\mathsf{bi\text{-}HA})\to \mathbf{Uni}(\mathsf{Ten}) & \rho&:\mathbf{Uni}(\mathsf{Ten})\to \mathbf{Uni}(\mathsf{bi\text{-}HA}) \\
\mathcal{U}&\mapsto \mathsf{Uni}\{\sigma \mathfrak{H}:\mathfrak{H}\in \mathcal{U}\} & \mathcal{W}&\mapsto \{\rho\mathfrak{A}:\mathfrak{A}\in \mathcal{W}\}.
\end{align*}
Finally, introduce a semantic counterpart to $\tau$ as follows.
\begin{align*}
\tau&: \mathbf{Uni}(\mathsf{bi\text{-}HA})\to \mathbf{Uni}(\mathsf{Ten}) \\
\mathcal{U}&\mapsto \{\mathfrak{A}\in \mathsf{Ten}:\rho\mathfrak{A}\in \mathcal{U}\}
\end{align*}
The following lemma is a counterpart to \Cref{mainlemma-simod}. It is proved via essentially the same argument which establishes the latter, though some adaptations are necessary which may be less than completely obvious. For this reason, as well as for the central place this lemma occupies in our strategy, we spell out the proof in some detail.
\begin{lemma}
Let $\mathfrak{A}\in \mathsf{GRZ.T}$. Then for every tense rule $\Gamma/\Delta$, we have $\mathfrak{A}\models\Gamma/\Delta$ iff $\sigma\rho\mathfrak{A}\models \Gamma/\Delta$.\label{mainlemma-bsimod}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This direction follows from the fact that $\sigma\rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ (\Cref{representation2haGrz.T}).
$(\Leftarrow)$ We prove the dual statement that $\mathfrak{A}_*\nvDash \Gamma/\Delta$ implies $\sigma\rho\mathfrak{A}_*\nvDash \Gamma/\Delta$. Let $\mathfrak{X}:=\mathfrak{A}_*$. In view of \Cref{axiomatisationten} it is enough to consider the case $\Gamma/\Delta=\scrten{B}{D}$, for $\mathfrak{B}\in \mathsf{Ten}$ finite. So suppose $\mathfrak{X}\nvDash \scrten{B}{D}$ and let $\mathfrak{F}:=\mathfrak{B}_*$. Then there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$. We construct a stable map $g:\sigma\rho\mathfrak{X}\to \mathfrak{F}$ which satisfies the BDC for $(\mathfrak{D}^{\square_F}, \mathfrak{D}^{\lozenge_P})$.
Let $C:=\{x_1, \ldots, x_n\}\subseteq F$ be some cluster and let $Z_C:=f^{-1}(C)$. Reasoning as in the proof of \Cref{mainlemma-simod}, we obtain that $\rho[Z_C]$ is clopen, and so is $f^{-1}(x_i)$ for each $x_i\in C$. Now for each $x_i\in C$ let
\begin{align*}
M_i&:=\mathit{max}_R(f^{-1}(x_i))\\
N_i&:=\mathit{min}_R(f^{-1}(x_i)).
\end{align*}
By \Cref{propGrz.T}, both $M_i, N_i$ are closed, and moreover neither cuts any cluster. Since $\sigma\rho\mathfrak{X}$ has the quotient topology, it follows that both $\rho[M_i], \rho[N_i]$ are closed as well.
For each $x_i\in C$ let $O_i:=M_i\cup N_i$. Clearly, $O_i\cap O_j=\varnothing$ whenever $i\neq j$. Therefore, using the separation properties of Stone spaces to reason as in the proof of \Cref{mainlemma-simod}, there are disjoint clopens $U_1, \ldots, U_n\in \mathsf{Clop}(\sigma\rho\mathfrak{X})$ with $\rho[O_i]\subseteq U_i$ and $\bigcup_{i\leq n} U_i=\rho[Z_C]$.
We can now define a map
\begin{align*}
g_C&: \rho[Z_C]\to C\\
z&\mapsto x_i\iff z\in U_i.
\end{align*}
Clearly, $g_C$ is relation preserving and continuous. Finally, define $g: \sigma\rho\mathfrak{X}\to \mathfrak{F}$ by setting
\[
g(\rho(z)):=\begin{cases}
f(z)&\text{ if } f(z)\text{ does not belong to any proper cluster }\\
g_C(\rho(z))&\text{ if }f(z)\in C\text{ for some proper cluster }C\subseteq F.
\end{cases}
\]
Now, $g$ is evidently relation preserving. Moreover, it is continuous because both $f$ and each $g_C$ are. Reasoning as in the proof of \Cref{mainlemma-simod}, we obtain that $g$ satisfies the BDC$^{\square_F}$ for $\mathfrak{D}^{\square_F}$. The proof of the fact that $g$ satisfies the BDC$^{\lozenge_P}$ for $\mathfrak{D}^{\lozenge_P}$ is a straightforward adaptation of the latter, using that for all $U\in \mathsf{Clop}(\mathfrak{X})$, if $x\in U$ there is $y\in \mathit{min}_R(U)$ such that $Ryx$ (\Cref{propGrz.T}).
\end{proof}
\begin{theorem}
Every $\mathcal{U}\in \mathbf{Uni}(\mathsf{GRZ.T})$ is generated by its skeletal elements, i.e. $\mathcal{U}=\sigma \rho\mathcal{U}$. \label{uniGrz.Tensegeneratedskel}
\end{theorem}
\begin{proof}
Follows easily from \Cref{mainlemma-bsimod}, reasoning as in the proof of \Cref{unigrzgeneratedskel}.
\end{proof}
As in the previous section, the next step is to apply \Cref{mainlemma-bsimod} to prove that the syntactic tense companion maps $\tau, \rho, \sigma$ commute with $\mathsf{Alg}(\cdot)$, which leads to a purely semantic characterisation of tense companions.
\begin{lemma}
For each $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$, the following hold:\label{prop:tenmapscommute}
\begin{align}
\mathsf{Alg}(\tau\mathtt{L})&=\tau \mathsf{Alg}(\mathtt{L}) \label{prop:tenmapscommute1}\\
\mathsf{Alg}(\sigma\mathtt{L})&=\sigma \mathsf{Alg}(\mathtt{L})\label{prop:tenmapscommute2}\\
\mathsf{Alg}(\rho\mathtt{M})&=\rho \mathsf{Alg}(\mathtt{M})\label{prop:tenmapscommute3}
\end{align}
\end{lemma}
\begin{proof}
The proof of \Cref{prop:tenmapscommute1} is trivial. To prove \Cref{prop:tenmapscommute2}, in view of \Cref{uniGrz.Tensegeneratedskel} it is enough to show that $\mathsf{Alg}(\sigma\mathtt{L})$ and $\sigma \mathsf{Alg}(\mathtt{L})$ have the same skeletal elements. This is proved the same way as \Cref{prop:mcmapscommute2} in \Cref{prop:mcmapscommute}. Finally, \Cref{prop:tenmapscommute3} is proved analogously to \Cref{prop:mcmapscommute3} in \Cref{prop:mcmapscommute}, applying \Cref{lem:gtskeleton2} instead of \Cref{lem:gtskeleton}.
\end{proof}
\begin{lemma}
$\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R})$ is a tense companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ iff $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$.\label{tcsemantic}
\end{lemma}
\begin{proof}
Analogous to \Cref{mcsemantic}.
\end{proof}
The main results of this section can now be proved.
\begin{theorem}
The following conditions hold: \label{tcinterval}
\begin{enumerate}
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$, the tense companions of $\mathtt{L}$ form an interval \label{tcinterval1} \[\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t_R}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\};\]
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC})$, the tense companions of $\mathtt{L}$ form an interval \label{tcinterval2} \[\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4.t}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\}.\]
\end{enumerate}
\end{theorem}
\begin{proof}
\Cref{tcinterval1} is proved the same way as \Cref{mcinterval1} in \Cref{mcinterval}. \Cref{tcinterval2} is immediate from \Cref{tcinterval1}.
\end{proof}
\begin{theorem}[Blok-Esakia theorem for bsi- and tense deductive systems]
The following conditions hold: \label{blokesakia2}
\begin{enumerate}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})\to \mathbf{NExt}(\mathtt{GRZ.T_R})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ.T_R})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$ are complete lattice isomorphisms and mutual inverses. \label{blokesakia2:1}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{bi\text{-}IPC})\to \mathbf{NExt}(\mathtt{GRZ.T})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ.T})\to \mathbf{Ext}(\mathtt{bi\text{-}IPC})$ are complete lattice isomorphisms and mutual inverses. \label{blokesakia2:2}
\end{enumerate}
\end{theorem}
\begin{proof}
\Cref{blokesakia2:1} is proved the same way as \Cref{blokesakia:1} in \Cref{blokesakia}. \Cref{blokesakia2:2} follows straightforwardly from \Cref{blokesakia2:1} and \Cref{deductivesystemisomorphismbsi,deductivesystemisomorphismten}.
\end{proof}
\subsubsection{The Dummett-Lemmon Conjecture for Bsi-Rule Systems} \label{sec:additionaltc}
The construction used to prove the Dummett-Lemmon conjecture for rule systems straightforwardly generalises to a proof of a variant of the conjecture applying to bsi-rule systems and their weakest tense companions. To establish this result, we first extend the notion of a collapsed rule and the rule collapse lemma to the bsi and tense setting.
\begin{definition}
Let $\scrten{F}{\mathfrak{D}}$ be a tense stable canonical rule. The \emph{collapsed tense stable canonical rule} $\scrbsi{\rho F}{\rho \mathfrak{D}}$ is defined by setting
\begin{align*}
\rho\mathfrak{D}^\to&:= \{\rho[\mathfrak{d}]:\mathfrak{d}\in \mathfrak{D}^{\square_F}\},\\
\rho\mathfrak{D}^\from&:= \{\rho[\mathfrak{d}]:\mathfrak{d}\in \mathfrak{D}^{\lozenge_P}\}.
\end{align*}
\end{definition}
\begin{lemma}[Rule collapse lemma - bsi-tense]
For every tense space $\mathfrak{X}$ and every tense stable canonical rule $\scrten{F}{\mathfrak{D}}$, we have that $\mathfrak{X}\nvDash \scrten{F}{\mathfrak{D}}$ implies $\rho\mathfrak{X}\nvDash \scrbsi{\rho F}{\rho\mathfrak{D}}$.\label{rulecollapse2}
\end{lemma}
\begin{proof}
Analogous to the proof of \Cref{rulecollapse}.
\end{proof}
At this point, we can establish the desired result via a straightforward adaptation of our proof of \Cref{dummettlemmon}.
\begin{theorem}[Dummett-Lemmon conjecture for bsi-rule systems]
For every bsi-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{bi\text{-}IPC_R})$, $\mathtt{L}$ is Kripke complete iff $\tau \mathtt{L}$ is.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $\mathtt{L}$ be Kripke complete. Suppose that $\Gamma/\Delta\notin \tau\mathtt{L}$. Then there is $\mathfrak{X}\in \mathsf{Spa}(\tau\mathtt{L})$ such that $\mathfrak{X}\nvDash \Gamma/\Delta$. By \Cref{axiomatisationten}, we may assume that $\Gamma/\Delta=\scrten{F}{\mathfrak{D}}$ for $\mathfrak{F}$ a preorder. By \Cref{rulecollapse2} we have $\rho\mathfrak{X}\nvDash \scrbsi{\rho F}{\rho\mathfrak{D}}$, while by \Cref{lem:gtskeleton2} it follows that $\rho\mathfrak{X}\models \mathtt{L}$, and so $\scrbsi{\rho F}{\rho \mathfrak{D}}\notin \mathtt{L}$. Since $\mathtt{L}$ is Kripke complete, there is a bsi Kripke frame $\mathfrak{Y}$ such that $\mathfrak{Y}\nvDash \scrbsi{\rho F}{\rho \mathfrak{D}}$. Take a stable map $f:\mathfrak{Y}\to \rho \mathfrak{F}$ satisfying the BDC for $\rho \mathfrak{D}$. Proceed as in the proof of \Cref{dummettlemmon} to construct a Kripke frame $\mathfrak{Z}$ with $\mathfrak{Z}\models\tau \mathtt{L}$ by expanding clusters in $\mathfrak{Y}$. We identify $\rho\mathfrak{Z}=\mathfrak{Y}$, and define a map $g:\mathfrak{Z}\to \mathfrak{F}$ via the same construction used in the proof of \Cref{dummettlemmon}. Clearly, $g$ is well defined, surjective, and relation preserving. We know that $g$ satisfies the BDC for $\mathfrak{D}^\to$ from the proof of \Cref{dummettlemmon}, and symmetric reasoning shows that $g$ also satisfies the BDC for $\mathfrak{D}^\from$.
$(\Leftarrow)$ Analogous to the si and modal case.
\end{proof}
\subsection{Modal and Superintuitionistic Deductive Systems}\label{sec:preliminaries1}
We begin with a brief overview of the semantic and syntactic structures discussed throughout the present section.
\subsubsection{Superintuitionistic Deductive Systems, Heyting Algebras, and Esakia Spaces}\label{sec:int}
We work with the \emph{superintuitionistic signature}, \[si:=\{\land, \lor, \to, \bot, \top\}.\]
The set $\mathit{Frm_{si}}$ of superintuitionistic (si) formulae is defined recursively as follows.
\[\varphi::= p\sep \bot\sep\top\sep \varphi\land \varphi\sep\varphi\lor \varphi\sep\varphi\to \varphi.\]
We abbreviate $\varphi\leftrightarrow \psi:=(\varphi\to \psi)\land (\psi \to \varphi)$. We let $\mathtt{IPC}$ denote the \emph{intuitionistic propositional calculus}, and point the reader to \cite[Ch. 2]{ChagrovZakharyaschev1997ML} for an axiomatisation.
\begin{definition}
A \emph{superintuitionistic logic}, or si-logic for short, is a logic $\mathtt{L}$ over $\mathit{Frm}_{si}$ satisfying the following additional conditions:
\begin{enumerate}
\item $\mathtt{IPC}\subseteq \mathtt{L}$;
\item $\varphi\to \psi, \varphi\in \mathtt{L}$ implies $\psi\in \mathtt{L}$ (MP).
\end{enumerate}
A \emph{superintuitionistic rule system}, or si-rule system for short, is a rule system $\mathtt{L}$ over $\mathit{Frm}_{si}$ satisfying the following additional requirements.
\begin{enumerate}
\item $/\varphi\in \mathtt{L}$ whenever $\varphi\in \mathtt{IPC}$.
\item $\varphi,\varphi\to \psi /\psi\in \mathtt{L}$ (MP-R).
\end{enumerate}
\end{definition}
For every si-logic $\mathtt{L}$ write $\mathbf{Ext}(\mathtt{L})$ for the set of si-logics extending $\mathtt{L}$, and similarly for si-rule systems. Then $\mathbf{Ext}(\mathtt{IPC})$ is the set of all si-logics. It is well known that $\mathbf{Ext}(\mathtt{IPC})$ admits the structure of a complete lattice, with $\oplus_{\mathbf{Ext}(\mathtt{IPC})}$ serving as join and intersection as meet. Clearly, for every $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$ there exists a least si-rule system $\mathtt{L_R}$ containing $/\varphi$ for each $\varphi\in \mathtt{L}$. Hence $\mathtt{IPC_R}$ is the least si-rule system. The set $\mathbf{Ext}(\mathtt{IPC_R})$ is also a lattice when endowed with $\oplus_{\mathbf{Ext}(\mathtt{IPC_R})}$ as join and intersection as meet. Slightly abusing notation, we refer to these lattices as we refer to their underlying sets, i.e., $\mathbf{Ext}(\mathtt{IPC})$ and $\mathbf{Ext}(\mathtt{IPC_R})$ respectively. Additionally, we make use of systematic ambiguity and write both $\oplus_{\mathbf{Ext}(\mathtt{IPC})}$ and $\oplus_{\mathbf{Ext}(\mathtt{IPC_R})}$ simply as $\oplus$, leaving context to clarify which operation is meant.
The following proposition is central for transferring results about si-rule systems to si-logics. Its proof is routine.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{Ext}(\mathtt{IPC})$ and the sublattice of $\mathbf{Ext}(\mathtt{IPC_R})$ consisting of all si-rule systems $\mathtt{L}$ such that $\mathsf{Taut}(\mathtt{L})_\mathtt{R}=\mathtt{L}$.\label{deductivesystemisomorphismsi}
\end{proposition}
A \emph{Heyting algebra} is a tuple $\mathfrak{H}=(H, \land, \lor, \to, 0, 1)$ such that $(H, \land, \lor, 0, 1)$ is a bounded distributive lattice and for every $a, b, c\in H$ we have
\[c\leq a\to b\iff a\land c\leq b.\]
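By way of illustration, if a Heyting algebra $\mathfrak{H}$ is a chain, the residuation law forces
\[
a\to b=\begin{cases}
1&\text{ if } a\leq b\\
b&\text{ otherwise,}
\end{cases}
\]
since $a\land c\leq b$ holds for every $c$ when $a\leq b$, and otherwise holds exactly when $c\leq b$.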
We let $\mathsf{HA}$ denote the class of all Heyting algebras. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{HA}$ is a variety. If $\mathcal{V}\subseteq \mathsf{HA}$ is a variety (resp. universal class) we write $\mathbf{Var}(\mathcal{V})$ and $\mathbf{Uni}(\mathcal{V})$ respectively for the lattice of subvarieties (resp. of universal subclasses) of $\mathcal{V}$.
The connections between $\mathbf{Ext}(\mathtt{IPC})$ and $\mathbf{Var}(\mathsf{HA})$ on the one hand, and between $\mathbf{Ext}(\mathtt{IPC_R})$ and $\mathbf{Uni}(\mathsf{HA})$ on the other, are as intimate as they come.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{IPC})\to \mathbf{Var}(\mathsf{HA})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{HA})\to \mathbf{Ext}(\mathtt{IPC})$;\label{algebraisationHAvar}
\item $\mathsf{Alg}:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{Uni}(\mathsf{HA})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{HA})\to \mathbf{Ext}(\mathtt{IPC_R})$.\label{algebraisationHAuni}
\end{enumerate} \label{thm:algebraisationHA}
\end{theorem}
\Cref{algebraisationHAvar} is proved in \cite[Theorem 7.56]{ChagrovZakharyaschev1997ML}, whereas \Cref{algebraisationHAuni} follows from \cite[Theorem 2.2]{Jerabek2009CR} by standard techniques.
\begin{corollary}
Every si-logic $($resp. si-rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of Heyting algebras. \label{completeness_si}
\end{corollary}
An \emph{Esakia space} is a tuple $\mathfrak{X}=(X, \leq, \mathcal{O})$, such that $(X, \mathcal{O})$ is a Stone space, $\leq$ is a partial order on $X$, and
\begin{enumerate}
\item ${\uparrow} x:=\{y\in X: x\leq y\}$ is closed for every $x\in X$;
\item ${\downarrow} U:=\{x\in X:{\uparrow} x\cap U\neq\varnothing\}\in \mathsf{Clop}(\mathfrak{X})$ for every $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{enumerate}
We let $\mathsf{Esa}$ denote the class of all Esakia spaces. If $\mathfrak{X}, \mathfrak{Y}$ are Esakia spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if $x\leq y$ implies $f(x)\leq f(y)$ for all $x, y\in X$, and whenever $f(x)\leq y$ for some $x\in X$ and $y\in Y$, there is $z\in X$ with $x\leq z$ and $f(z)=y$.
If $\mathfrak{X}$ is an Esakia space and $U\subseteq X$, we say that $U$ is an \emph{upset} if ${\uparrow}[U]=U$. We let $\mathsf{ClopUp}(\mathfrak{X})$ denote the set of clopen upsets in $\mathfrak{X}$. A \emph{valuation} on an Esakia space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{ClopUp}(\mathfrak{X})$. A valuation $V$ on $\mathfrak{X}$ extends to a truth-set assignment $\bar V:\mathit{Frm}_{si}\to \mathsf{ClopUp}(\mathfrak{X})$ in the standard way, with
\[\bar V(\varphi\to \psi):=-{\downarrow}(\bar V(\varphi)\smallsetminus\bar V(\psi)).\]
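As a minimal illustration, consider the two-element Esakia space with $X=\{x, y\}$, $x<y$, and the discrete topology, and let $V(p)=\{y\}$. Then
\[\bar V(p\to \bot)=-{\downarrow}(\{y\}\smallsetminus\varnothing)=-\{x, y\}=\varnothing,\]
since ${\uparrow}x\cap \{y\}\neq\varnothing$ and ${\uparrow}y\cap \{y\}\neq\varnothing$.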
The following result recalls some important properties of Esakia spaces, used throughout the paper. For proofs the reader may consult \cite[Lemma 3.1.5, Theorem 3.2.1]{Esakia2019HADT}.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{Esa}$. Then for all $x, y\in X$ we have:\label{propesa}
\begin{enumerate}
\item If $x\not\leq y$ then there is $U\in \mathsf{ClopUp}(\mathfrak{X})$ such that $x\in U$ and $y\notin U$;\label{propesa1}
\item For all $U\in \mathsf{Clop}(\mathfrak{X})$ and $x\in U$, there is $y\in \mathit{max}_\leq (U)$ such that $x\leq y$.\label{propesa1b}
\end{enumerate}
\end{proposition}
\citet{Esakia1974TKM} proved that the category of Heyting algebras with corresponding homomorphisms is dually equivalent to the category of Esakia spaces with continuous bounded morphisms. The reader may consult \cite[\S 3.4]{Esakia2019HADT} for a detailed proof of this result. We denote the Esakia space dual to a Heyting algebra $\mathfrak{H}$ as $\mathfrak{H_*}$, and the Heyting algebra dual to an Esakia space $\mathfrak{X}$ as $\mathfrak{X}^*$.
\subsubsection{Modal Deductive Systems, Modal Algebras, and Modal Spaces}
We shall now work in the \emph{modal signature}, \[md:=\{\land, \lor, \neg, \square, \bot, \top\}.\]
The set $\mathit{Frm_{md}}$ of modal formulae is defined recursively as follows.
\[\varphi::= p\sep \bot\sep\top\sep \varphi\land \varphi\sep\varphi\lor \varphi\sep\neg \varphi\sep \square\varphi.\]
As usual we abbreviate $\lozenge\varphi:=\neg\square\neg \varphi$. Further, we let $\varphi\to \psi:=\neg\varphi\lor \psi$ and $\varphi\leftrightarrow\psi:=(\varphi\to\psi)\land (\psi\to \varphi)$.
\begin{definition}
A \emph{normal modal logic}, henceforth simply \emph{modal logic}, is a logic $\mathtt{M}$ over $\mathit{Frm}_{md}$ satisfying the following conditions:
\begin{enumerate}
\item $\mathtt{CPC}\subseteq \mathtt{M}$, where $\mathtt{CPC}$ is the classical propositional calculus;
\item $\square (\varphi\to \psi)\to (\square \varphi\to \square \psi)\in \mathtt{M}$;
\item $\varphi\to \psi, \varphi\in \mathtt{M}$ implies $\psi\in \mathtt{M}$ (MP);
\item $\varphi\in \mathtt{M}$ implies $\square \varphi\in \mathtt{M}$ (NEC).
\end{enumerate}
We denote the least modal logic as $\mathtt{K}$. A \emph{normal modal rule system}, henceforth simply \emph{modal rule system}, is a rule system $\mathtt{M}$ over $\mathit{Frm}_{md}$, satisfying the following additional requirements:
\begin{enumerate}
\item $/\varphi\in \mathtt{M}$ whenever $\varphi\in \mathtt{K}$;
\item $\varphi\to \psi,\varphi /\psi\in \mathtt{M}$ (MP-R);
\item $\varphi/\square\varphi\in \mathtt{M}$ (NEC-R).\label{nec}
\end{enumerate}
\end{definition}
If $\mathtt{M}$ is a modal logic let $\mathbf{NExt}(\mathtt{M})$ be the set of modal logics extending $\mathtt{M}$, and similarly for modal rule systems.
Obviously, the set of modal logics coincides with $\mathbf{NExt}(\mathtt{K})$. It is well known that $\mathbf{NExt}(\mathtt{K})$ forms a lattice under the operations $\oplus_{\mathbf{NExt}(\mathtt{K})}$ as join and intersection as meet. Clearly, for each $\mathtt{M}\in \mathbf{NExt}(\mathtt{K})$ there is always a least modal rule system $\mathtt{M_R}$ containing $/\varphi$ for each $\varphi\in \mathtt{M}$. Therefore, $\mathtt{K_R}$ is the least modal rule system. The set $\mathbf{NExt}(\mathtt{K_R})$ is also a lattice when endowed with $\oplus_{\mathbf{NExt}(\mathtt{K_R})}$ as join and intersection as meet. With slight abuse of notation, we refer to these lattices as we refer to their underlying sets, i.e., $\mathbf{NExt}(\mathtt{K})$ and $\mathbf{NExt}(\mathtt{K_R})$ respectively. Additionally, we make use of systematic ambiguity and write both $\oplus_{\mathbf{NExt}(\mathtt{K})}$ and $\oplus_{\mathbf{NExt}(\mathtt{K_R})}$ simply as $\oplus$, leaving context to clarify which operation is meant.
We have a modal counterpart of \Cref{deductivesystemisomorphismsi}.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{NExt}(\mathtt{K})$ and the sublattice of $\mathbf{NExt}(\mathtt{K_R})$ consisting of all modal rule systems $\mathtt{M}$ such that $\mathsf{Taut}(\mathtt{M})_\mathtt{R}=\mathtt{M}$.\label{deductivesystemisomorphismmodal}
\end{proposition}
A \emph{modal algebra} is a tuple $\mathfrak{A}=(A, \land, \lor, \neg, \square, 0, 1)$ such that $(A, \land, \lor, \neg, 0, 1)$ is a Boolean algebra and the following equations hold:
\begin{align}
\square 1&=1,\\
\square(a\land b)&=\square a\land \square b.
\end{align}
We let $\mathsf{MA}$ denote the class of all modal algebras. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{MA}$ is a variety. We let $\mathbf{Var}(\mathsf{MA})$ and $\mathbf{Uni}(\mathsf{MA})$ be the lattice of subvarieties and the lattice of universal subclasses of $\mathsf{MA}$ respectively. We have the following analogue of \Cref{thm:algebraisationHA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms: \label{thm:algebraisationMA}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{K})\to \mathbf{Var}(\mathsf{MA})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{MA})\to \mathbf{NExt}(\mathtt{K})$;\label{algebraisationMAvar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{K_R})\to \mathbf{Uni}(\mathsf{MA})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{MA})\to \mathbf{NExt}(\mathtt{K_R})$.\label{algebraisationMAuni}
\end{enumerate}
\end{theorem}
\Cref{algebraisationMAvar} is proved in \cite[Theorem 7.56]{ChagrovZakharyaschev1997ML}, whereas \Cref{algebraisationMAuni} follows from \cite[Theorem 2.5]{BezhanishviliGhilardi2014MCRHSaSF}.
\begin{corollary}
Every modal logic $($resp. modal rule system$)$ is complete with respect to some variety $($resp. universal class$)$ of modal algebras. \label{completeness_md}
\end{corollary}
A \emph{modal space} is a tuple $\mathfrak{X}=(X, R, \mathcal{O})$, such that $(X, \mathcal{O})$ is a Stone space, $R\subseteq X\times X$ is a binary relation, and
\begin{enumerate}
\item $R[x]:=\{y\in X: Rxy\}$ is closed for every $x\in X$;
\item $R^{-1} (U):=\{x\in X:R[x]\cap U\neq\varnothing\}\in \mathsf{Clop}(\mathfrak{X})$ for every $U\in \mathsf{Clop}(\mathfrak{X})$.
\end{enumerate}
We let $\mathsf{Mod}$ denote the class of all modal spaces. If $\mathfrak{X}, \mathfrak{Y}$ are modal spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} when $Rxy$ implies $Rf(x)f(y)$ for all $x, y\in X$, and whenever $Rf(x)y$ for some $x\in X$ and $y\in Y$, there is $z\in X$ with $Rxz$ and $f(z)=y$. A \emph{valuation} on a modal space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{Clop}(\mathfrak{X})$. A valuation extends to a full truth-set assignment $\bar V:\mathit{Frm}_{md}\to \mathsf{Clop}(\mathfrak{X})$ in the usual way.
By a generalisation of Stone duality, the category of modal algebras with corresponding homomorphisms is dually equivalent to the category of modal spaces with continuous bounded morphisms. A proof of this result can be found, e.g., in \cite[Sections 3, 4]{SambinVaccaro1988TaDiML}. We denote the modal space dual to a modal algebra $\mathfrak{A}$ as $\mathfrak{A_*}$, and the modal algebra dual to a modal space $\mathfrak{X}$ as $\mathfrak{X}^*$.
In this paper we are mostly concerned with modal algebras and modal spaces validating one of the following modal logics.
\begin{align*}
\mathtt{K4}&:=\mathtt{K}\oplus \square p\to \square \square p\\
\mathtt{S4}&:=\mathtt{K4}\oplus \square p\to p
\end{align*}
We let $\mathsf{K4}:=\mathsf{Alg}(\mathtt{K4})$ and $\mathsf{S4}:=\mathsf{Alg}(\mathtt{S4})$. We call algebras in $\mathsf{K4}$ \emph{transitive algebras}, and algebras in $\mathsf{S4}$ \emph{closure algebras}. It is obvious that for every $\mathfrak{A}\in \mathsf{MA}$, $\mathfrak{A}\in \mathsf{K4}$ iff $\square a\leq \square \square a$ for every $a\in A$, and $\mathfrak{A}\in \mathsf{S4}$ iff $\mathfrak{A}\in \mathsf{K4}$ and additionally $\square a\leq a$ for every $a\in A$. Moreover, it is easy to see that a modal space validates $\mathtt{K4}$ iff it has a transitive relation, and that it validates $\mathtt{S4}$ iff it has a reflexive and transitive relation (see, e.g., \citealt[Section 3.8]{ChagrovZakharyaschev1997ML}).
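A standard source of examples: for any topological space $(Y, \tau)$, the powerset Boolean algebra $\wp(Y)$ equipped with $\square:=\mathit{int}$, the interior operator of $\tau$, is a closure algebra, since
\[\mathit{int}(Y)=Y,\qquad \mathit{int}(A\cap B)=\mathit{int}(A)\cap \mathit{int}(B),\qquad \mathit{int}(A)\subseteq A,\qquad \mathit{int}(A)=\mathit{int}(\mathit{int}(A)).\]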
Let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{K4})$. A subset $C\subseteq X$ is called a \emph{cluster} if it is an equivalence class under the relation $\sim$ defined by $x\sim y$ iff $x=y$, or both $Rxy$ and $Ryx$. A cluster is called \emph{improper} if it is a singleton, otherwise we call it \emph{proper}.
We recall some basic properties of $\mathtt{K4}$- and $\mathtt{S4}$-spaces.
\begin{proposition}
Let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{S4})$ and $U\in \mathsf{Clop}(\mathfrak{X})$. Then the following conditions hold: \label{props4}
\begin{enumerate}
\item The set $\mathit{qmax}_R(U)$ is closed;
\item If $x\in U$ then there is $y\in \mathit{qmax}_R(U)$ such that $Rxy$.
\end{enumerate}
Moreover, let $\mathfrak{X}\in \mathsf{Spa}(\mathtt{K4})$ and $U\in \mathsf{Clop}(\mathfrak{X})$. Then the following conditions hold:
\begin{enumerate}[resume]
\item The structure $(X, R^+)$, with the same topology as $\mathfrak{X}$, is an $\mathtt{S4}$-space, where for all $x, y\in X$ we have $R^+xy$ iff $Rxy$ or $x=y$;
\item The set $\mathit{qmax}_R(U)$ is closed;
\item If $x\in U$ then there is $y\in \mathit{qmax}_R(U)$ such that $Rxy$.
\end{enumerate}
\end{proposition}
Properties 1, 2 are proved in \cite[Theorems 3.2.1, 3.2.3]{Esakia2019HADT}. Property 3 is straightforward to check, and properties 4, 5 are immediate consequences of 1, 2, and 3.
Among extensions of $\mathtt{S4}$, the modal logic $\mathtt{GRZ}$ plays a particularly central role in this paper.
\begin{align*}
\mathtt{GRZ}:&=\mathtt{K}\oplus\square (\square( p\to\square p)\to p)\to p\\
&=\mathtt{S4}\oplus\square (\square( p\to\square p)\to p)\to p
\end{align*}
We let $\mathsf{GRZ}:=\mathsf{Alg}(\mathtt{GRZ})$. It is not difficult to see that $\mathsf{GRZ}$ coincides with the class of all closure algebras $\mathfrak{A}$ such that for every $a\in A$ we have
\[\square(\square( a\to\square a)\to a)\leq a\]
or equivalently,
\[a\leq \lozenge(a\land \neg \lozenge (\lozenge a \land \neg a)).\]
A poset $(X, R)$ is called \emph{Noetherian} if it contains no infinite $R$-ascending chain of pairwise distinct points. It is well known that $\mathtt{GRZ}$ is complete with respect to the class of Noetherian partially ordered Kripke frames \cite[Corollary 5.52]{ChagrovZakharyaschev1997ML}. In general, $\mathtt{GRZ}$-spaces may fail to be partially ordered.
Still, clusters cannot occur just anywhere in a $\mathtt{GRZ}$-space, as the following result clarifies.
\begin{proposition}
For every $\mathtt{GRZ}$-space $\mathfrak{X}$ and ${U}\in \mathsf{Clop}(\mathfrak{X})$, the following hold: \label{propgrz1}
\begin{enumerate}
\item $\mathit{qmax}_R(U)\subseteq \mathit{max}_R(U)$;\label{propgrz1:1}
\item The set $\mathit{max}_R(U)$ is closed;\label{propgrz1:2}
\item For every $x\in U$ there is $y\in \mathit{pas}_R(U)$ such that $Rxy$;\label{propgrz1:3}
\item $\mathit{max}_R(U)\subseteq\mathit{pas}_R(U)$. \label{propgrz1:4}
\end{enumerate}
\end{proposition}
\Cref{propgrz1:1} is proved in \cite[Theorem 3.5.6]{Esakia2019HADT}. \Cref{propgrz1:2} follows from \Cref{propgrz1:1} and \Cref{props4}. \Cref{propgrz1:3} is immediate from the $\mathtt{GRZ}$-axiom. \Cref{propgrz1:4} then follows from \Cref{props4}, \Cref{propgrz1:1}, and \Cref{propgrz1:3}.
Let us say that $U\subseteq X$ \emph{cuts} a cluster $C\subseteq X$ if both $U\cap C\neq\varnothing$ and $U\smallsetminus C\neq\varnothing$. As an immediate consequence of \Cref{propgrz1:4} in \Cref{propgrz1} we obtain that for any $U\in \mathsf{Clop}(\mathfrak{X})$, neither $\mathit{max}_R(U)$ nor $\mathit{pas}_R(U)$ cuts any clusters in $\mathfrak{X}$.
\subsection{Stable Canonical Rules for Superintuitionistic and Modal Rule Systems}\label{sec:scr1}
In both the si and the modal cases, the \emph{filtration} technique can be used to construct finite countermodels to a non-valid rule $\Gamma/\Delta$. Roughly, this construction consists in expanding finitely generated subreducts, in a locally finite signature, of arbitrary countermodels to $\Gamma/\Delta$, in such a way that the new operation added to the subreduct agrees with the original one on selected elements. Si and modal \emph{stable canonical rules} are essentially syntactic devices for encoding finite filtrations. The present section briefly reviews this method in both the si and modal case. We point the reader to \cite{BezhanishviliEtAl2016SCR,BezhanishviliEtAl2016CSL,BezhanishviliBezhanishvili2017LFRoHAaCF,BezhanishviliBezhanishvili2020JFaATfIL} and \cite[Ch. 5]{Ilin2018FRLoSNCL} for more in-depth discussion.
\subsubsection{Superintuitionistic Case} We begin by defining si stable canonical rules.
\begin{definition}
Let $\mathfrak{H}\in \mathsf{HA}$ be finite and $D\subseteq H\times H$. For every $a\in H$ introduce a fresh propositional variable $p_a$. The \emph{si stable canonical rule} of $(\mathfrak{H}, D)$ is defined as the rule $\scrsi{H}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma=&\{p_0\leftrightarrow 0\}\cup\{p_1\leftrightarrow 1\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in H\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in H\}\cup\\
& \{p_{a\to b}\leftrightarrow p_a\to p_b:(a, b)\in D\}\\
\Delta=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with } a\neq b\}.
\end{align*}
\end{definition}
We write si stable canonical rules of the form $\scrsi{H}{\varnothing}$ simply as $\srsi{H}$, and call them \emph{stable rules}.
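To give a minimal example, let $\mathbf{2}$ be the two-element Heyting algebra. Unwinding the definition and discarding redundant clauses, the stable rule of $\mathbf{2}$ amounts to
\[p_0\leftrightarrow\bot,\ p_1\leftrightarrow \top\ /\ p_0\leftrightarrow p_1,\]
which, by \Cref{prop:siscr-refutation-2} below, is refuted exactly on the nontrivial Heyting algebras, via the stable embedding $0\mapsto 0$, $1\mapsto 1$.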
If $\mathfrak{H}, \mathfrak{K}\in \mathsf{HA}$, let us call a map $h:\mathfrak{H}\to \mathfrak{K}$ \emph{stable} if $h$ is a bounded lattice homomorphism, i.e., if it preserves $0, 1, \land$, and $\lor$. If $D\subseteq H\times H$, we say that $h$ satisfies the \emph{bounded domain condition} (BDC) for $D$ if \[h(a\to b)=h(a)\to h(b)\] for every $(a, b)\in D$. It is not difficult to check that every stable map $h:\mathfrak{H}\to \mathfrak{K}$ satisfies $h(a\to b)\leq h(a)\to h(b)$ for all $a, b\in H$.
\begin{remark}
The BDC was originally called \emph{closed domain condition} in, e.g., \cite{BezhanishviliEtAl2016SCR,BezhanishviliBezhanishvili2017LFRoHAaCF}, following \z's terminology for a similar notion in the theory of his canonical formulae. The name \emph{stable domain condition} was later used in \cite{BezhanishviliBezhanishvili2020JFaATfIL} to stress the difference with \z's notion. However, this choice may create confusion between the BDC and the property of being a stable map. The terminology used in this paper is meant to avoid this, while concurrently highlighting the similarity between the geometric version of the BDC, to be presented in a few paragraphs, and the definition of a bounded morphism.
\end{remark}
The next two results characterise refutation conditions for si stable canonical rules. For detailed proofs the reader may consult \cite[Proposition 3.2]{BezhanishviliEtAl2016CSL}.
\begin{proposition}
For every finite $\mathfrak{H}\in \mathsf{HA}$ and $D\subseteq H\times H$, we have $\mathfrak{H}\nvDash \scrsi{H}{D}$. \label{prop:siscr-refutation-1}
\end{proposition}
\begin{proof}[Proof sketch]
Use the valuation $V(p_a)=a$.
\end{proof}
\begin{proposition}
For every $\mathfrak{H}, \mathfrak{K}\in \mathsf{HA}$ with $\mathfrak{H}$ finite, and every $D\subseteq H\times H$, we have $\mathfrak{K}\nvDash \scrsi{H}{D}$ iff there is a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $D$.\label{prop:siscr-refutation-2}
\end{proposition}
\begin{proof}[Proof sketch]
$(\Rightarrow)$ Assume $\mathfrak{K}\nvDash \scrsi{H}{D}$, and take a valuation $V$ on $\mathfrak{K}$ such that $\mathfrak{K}, V\nvDash \scrsi{H}{D}$. Define a map $h:\mathfrak{H}\to \mathfrak{K}$ by setting $h(a)=V(p_a)$. Then $h$ is the desired stable embedding satisfying the BDC for $D$.
$(\Leftarrow)$ Assume we have a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $D$, and define a valuation $V$ on $\mathfrak{K}$ by putting $V(p_a):=h(a)$. Reasoning as in the proof of \Cref{prop:siscr-refutation-1}, and using that $h$ is a stable embedding satisfying the BDC for $D$, one checks that $V$ witnesses $\mathfrak{K}\nvDash \scrsi{H}{D}$.
\end{proof}
Si stable canonical rules also have uniform refutation conditions on Esakia spaces. If $\mathfrak{X}, \mathfrak{Y}$ are Esakia spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{stable} if $x\leq y$ implies $f(x)\leq f(y)$, for all $x, y\in X$. If $\mathfrak{d}\subseteq Y$ we say that $f$ satisfies the BDC for $\mathfrak{d}$ if for all $x\in X$,
\[{{\uparrow}}f(x)\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{{\uparrow}}x]\cap \mathfrak{d}\neq\varnothing.\]
If $\mathfrak{D}\subseteq \wp(Y)$ then we say that $f$ satisfies the BDC for $\mathfrak{D}$ if it does for each $\mathfrak{d}\in \mathfrak{D}$. If $\mathfrak{H}$ is a finite Heyting algebra and $D\subseteq H\times H$, for every $(a, b)\in D$ set $\mathfrak{d}_{(a, b)}:=\beta (a)\smallsetminus\beta (b)$. Finally, put \[\mathfrak{D}:=\{\mathfrak{d}_{(a, b)}:(a, b)\in D\}.\] The following result follows straightforwardly from \cite[Lemma 4.3]{BezhanishviliBezhanishvili2017LFRoHAaCF}.
\begin{proposition}
For every Esakia space $\mathfrak{X}$ and any si stable canonical rule $\scrsi{H}{D}$, we have $\mathfrak{X}\nvDash\scrsi{H}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for the family $\mathfrak{D}:=\{\mathfrak{d}_{(a, b)}:(a, b)\in D\}$.\label{refutspace}
\end{proposition}
In view of \Cref{refutspace}, when working with Esakia spaces we shall often write a si stable canonical rule $\scrsi{H}{D}$ as $\scrsi{H_*}{\mathfrak{D}}$.
Stable maps and the BDC are closely related to the filtration construction. We recall its definition in an algebraic setting, and state the fundamental theorem used in most of its applications.
\begin{definition}
Let $\mathfrak{H}$ be a Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{K}', V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{H}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{K}'=(\mathfrak{K}, \to)$, where $\mathfrak{K}$ is the bounded sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a stable embedding satisfying the BDC for the set \[\{(\bar V'(\varphi), \bar V'(\psi)):\varphi\to \psi\in \Theta\}.\]
\end{enumerate}
\end{definition}
\begin{theorem}[Filtration theorem for Heyting algebras]
Let $\mathfrak{H}\in \mathsf{HA}$ be a Heyting algebra, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{K}', V')$ is a filtration of $(\mathfrak{H}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{H}, V\models \Gamma/\Delta\iff \mathfrak{K}', V'\models \Gamma/\Delta.\]
\end{theorem}
A proof of the filtration theorem above follows from, e.g., the proof of \cite[Lemma 3.6]{BezhanishviliBezhanishvili2017LFRoHAaCF}.
The next result establishes that every si rule is equivalent to finitely many si stable canonical rules. This lemma was proved in \cite[Proposition 3.3]{BezhanishviliEtAl2016CSL}, but we rehearse the proof here to illustrate the exact role of filtration in the machinery of stable canonical rules.
\begin{lemma}
For every si rule $\Gamma/\Delta$ there is a finite set $\Xi$ of si stable canonical rules such that for any $\mathfrak{K}\in \mathsf{HA}$ we have $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\scrsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrsi{H}{D}$.\label{rewritesi}
\end{lemma}
\begin{proof}
Since bounded distributive lattices are locally finite there are, up to isomorphism, only finitely many pairs $(\mathfrak{H}, D)$ such that
\begin{itemize}
\item $\mathfrak{H}$ is at most $k$-generated as a bounded distributive lattice, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item $D=\{(\bar V(\varphi), \bar V(\psi)):\varphi\to \psi\in \mathit{Sfor}(\Gamma/\Delta)\}$, where $V$ is a valuation on $\mathfrak{H}$ refuting $\Gamma/\Delta$.
\end{itemize}
Let $\Xi$ be the set of all rules $\scrsi{H}{D}$ for all such pairs $(\mathfrak{H}, D)$, identified up to isomorphism.
$(\Rightarrow)$ Assume $\mathfrak{K}\nvDash \Gamma/\Delta$ and take a valuation $V$ on $\mathfrak{K}$ refuting $\Gamma/\Delta$. Set $\Theta:=\mathit{Sfor}(\Gamma/\Delta)$ and consider the bounded distributive sublattice $\mathfrak{J}$ of $\mathfrak{K}$ generated by $\bar V[\Theta]$. Since bounded distributive lattices are locally finite, $\mathfrak{J}$ is finite. Define a binary operation $\rightsquigarrow$ on $\mathfrak{J}$ by setting, for all $a, b\in J$,
\[a\rightsquigarrow b:=\bigvee\{c\in J: a\land c\leq b\}.\]
Clearly, $\mathfrak{J}':=(\mathfrak{J}, \rightsquigarrow)$ is a Heyting algebra. Define a valuation $V'$ on $\mathfrak{J}'$ with $V'(p)=V(p)$ if $p\in \Theta$, $V'(p)$ arbitrary otherwise.
Since $\mathfrak{J}'$ is a sublattice of $\mathfrak{K}$, the inclusion $\subseteq$ is a stable embedding. Now let $\varphi\to \psi\in \Theta$. Then $\bar V'(\varphi)\to \bar V'(\psi)\in J$. From the fact that $\subseteq$ is a stable embedding it follows that $\bar V'(\varphi)\rightsquigarrow \bar V'(\psi)\leq \bar V'(\varphi)\to \bar V'(\psi)$. Conversely, by definition of $\rightsquigarrow$ we find $(\bar V'(\varphi)\rightsquigarrow \bar V'(\psi))\land \bar V'(\varphi)\leq \bar V'(\psi)$. But then by the properties of Heyting algebras it follows that $\bar V'(\varphi)\to \bar V'(\psi)\leq \bar V'(\varphi)\rightsquigarrow \bar V'(\psi)$. Thus $\bar V'(\varphi)\rightsquigarrow \bar V'(\psi)= \bar V'(\varphi)\to \bar V'(\psi)$ as desired. We have shown that the model $(\mathfrak{J}', V')$ is a filtration of the model $(\mathfrak{K}, V)$ through $\Theta$, which implies $\mathfrak{J}', V'\nvDash \Gamma/\Delta$ by the filtration theorem. Hence $\scrsi{J'}{D}\in \Xi$ up to isomorphism, where $D:=\{(\bar V'(\varphi), \bar V'(\psi)):\varphi\to \psi\in \Theta\}$, and since the inclusion $\subseteq:\mathfrak{J}'\to \mathfrak{K}$ is a stable embedding satisfying the BDC for $D$, \Cref{prop:siscr-refutation-2} yields $\mathfrak{K}\nvDash \scrsi{J'}{D}$.
$(\Leftarrow)$ Assume that there is $\scrsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \scrsi{H}{D}$, and let $V$ be the valuation on $\mathfrak{H}$ associated with $D$ in the sense spelled out above, so that $\mathfrak{H}, V\nvDash \Gamma/\Delta$. By \Cref{prop:siscr-refutation-2} there is a stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $D$. Define a valuation $W$ on $\mathfrak{K}$ by $W(p):=h(V(p))$. Then $(\mathfrak{H}, V)$ is, up to the identification along $h$, a filtration of the model $(\mathfrak{K}, W)$ through $\mathit{Sfor}(\Gamma/\Delta)$, so by the filtration theorem it follows that $\mathfrak{K}, W\nvDash \Gamma/\Delta$.
\end{proof}
As an immediate consequence we obtain a uniform axiomatisation of all si-rule systems by means of si stable canonical rules.
\begin{theorem}[{\cite[Proposition 3.4]{BezhanishviliEtAl2016CSL}}]
Any si-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ is axiom\-atisable over $\mathtt{IPC_R}$ by some set of si stable canonical rules. \label{axiomatisationsi}
\end{theorem}
\begin{proof}
Let $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$, and take a set of rules $\Xi$ such that $\mathtt{L}=\mathtt{IPC_R}\oplus \Xi$. By \Cref{rewritesi} and the completeness of $\mathtt{IPC_R}$ (\Cref{completeness_si}), for every $\Gamma/\Delta\in \Xi$ there is a finite set $\Pi_{\Gamma/\Delta}$ of si stable canonical rules that is jointly equivalent to $\Gamma/\Delta$. But then $\mathtt{L}=\mathtt{IPC_R}\oplus \bigcup_{\Gamma/\Delta\in \Xi}\Pi_{\Gamma/\Delta}$.
\end{proof}
\subsubsection{Modal Case} \label{sec:scrmod} We now turn to modal stable canonical rules.
\begin{definition}
Let $\mathfrak{A}\in \mathsf{MA}$ be finite and $D\subseteq A$. For every $a\in A$ introduce a fresh propositional variable $p_a$. The \emph{modal stable canonical rule} of $(\mathfrak{A}, D)$ is defined as the rule $\scrmod{A}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma&=\{p_{\neg a}\leftrightarrow \neg p_a:a\in A\}\cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in A\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in A\}\cup\\
& \{p_{\square a}\to \square p_a:a\in A\}\cup \{\square p_a\to p_{\square a}:a\in D\}\\
\Delta&=\{p_a:a\in A\smallsetminus \{1\}\}.
\end{align*}
\end{definition}
As in the si case, a modal stable canonical rule of the form $\scrmod{A}{\varnothing}$ is written simply as $\srmod{A}$ and called a \emph{stable rule}.
If $\mathfrak{A}, \mathfrak{B}\in \mathsf{MA}$ are modal algebras, let us call a map $h:\mathfrak{A}\to \mathfrak{B}$ \emph{stable} if $h$ is a Boolean homomorphism such that $h(\square a)\leq \square h(a)$ for every $a\in A$. If $D\subseteq A$, we say that $h$ satisfies the \emph{bounded domain condition} (BDC) for $D$ if $h(\square a)= \square h(a)$ for every $a\in D$.
The following two propositions are modal counterparts to \Cref{prop:siscr-refutation-1,prop:siscr-refutation-2}. Their proofs are similar to the latter's, and can be found in \cite[Lemma 5.3, Theorem 5.4]{BezhanishviliEtAl2016SCR}.
\begin{proposition}
For every finite $\mathfrak{A}\in \mathsf{MA}$ and $D\subseteq A$, we have $\mathfrak{A}\nvDash \scrmod{A}{D}$. \label{prop:modscr-refutation-1}
\end{proposition}
\begin{proposition}
For every $\mathfrak{A}, \mathfrak{B}\in \mathsf{MA}$ with $\mathfrak{A}$ finite, and every $D\subseteq A$, we have $\mathfrak{B}\nvDash \scrmod{A}{D}$ iff there is a stable embedding $h:\mathfrak{A}\to \mathfrak{B}$ satisfying the BDC for $D$.\label{prop:modscr-refutation-2}
\end{proposition}
Refutation conditions for modal stable canonical rules on modal spaces are obtained in analogous fashion to the si case. If $\mathfrak{X}, \mathfrak{Y}$ are modal spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{stable} if for all $x, y\in X$, we have that $Rx y$ implies $Rf(x) f(y)$. If $\mathfrak{d}\subseteq Y$ we say that $f$ satisfies the BDC for $\mathfrak{d}$ if for all $x\in X$,
\[R[f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[R[x]]\cap \mathfrak{d}\neq\varnothing.\]
If $\mathfrak{D}\subseteq \wp(Y)$ then we say that $f$ satisfies the BDC for $\mathfrak{D}$ if it does for each $\mathfrak{d}\in \mathfrak{D}$. If $\mathfrak{A}$ is a finite modal algebra and $D\subseteq A$, for every $a\in D$ set $\mathfrak{d}_{a}:=-\beta (a)$. Finally, put $\mathfrak{D}:=\{\mathfrak{d}_{a}:a\in D\}$. The following result is proved in \cite[Theorem 3.6]{BezhanishviliEtAl2016SCR}.
\begin{proposition}
For every modal space $\mathfrak{X}$ and any modal stable canonical rule $\scrmod{A}{D}$, $\mathfrak{X}\nvDash\scrmod{A}{D}$ iff there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{A}_*$ satisfying the BDC for $\mathfrak{D}$.\label{refutspacemod}
\end{proposition}
In view of \Cref{refutspacemod}, when working with modal spaces we may write a modal stable canonical rule $\scrmod{A}{D}$ as $\scrmod{A_*}{\mathfrak{D}}$.
As in the si case, stable maps and the BDC are closely related to the filtration technique.
\begin{definition}
Let $\mathfrak{A}$ be a modal algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{B}, V')$ is called a (\emph{finite}) \emph{filtration of $(\mathfrak{A}, V)$ through $\Theta$} if the following conditions hold:\label{filtrmod}
\begin{enumerate}
\item $\mathfrak{B}=(\mathfrak{B}', \square)$, where $\mathfrak{B}'$ is the Boolean subalgebra of $\mathfrak{A}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{B}\to \mathfrak{A}$ is a stable embedding satisfying the BDC for the set \[\{\bar V(\varphi):\square \varphi\in \Theta\}.\]
\end{enumerate}
\end{definition}
The following result is proved, e.g., in \cite[Lemma 4.4]{BezhanishviliEtAl2016SCR}.
\begin{theorem}[Filtration theorem for modal algebras]
Let $\mathfrak{A}\in \mathsf{MA}$ be a modal algebra, $V$ a valuation on $\mathfrak{A}$, and $\Theta$ a finite, subformula closed set of formulae. If $(\mathfrak{B}, V')$ is a filtration of $(\mathfrak{A}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{A}, V\models \Gamma/\Delta\iff \mathfrak{B}, V'\models \Gamma/\Delta.\]
\end{theorem}
Unlike the si case, filtrations of a given model through a given set of formulae are not necessarily unique when they exist. Depending on which construction is preferred, different properties of the original model may or may not be preserved. In this section we mainly deal with closure algebras, whence we are particularly interested in filtrations preserving reflexivity and transitivity. It is easy to see that any filtration preserves reflexivity. Whilst, in general, the filtration of a transitive model may fail to be transitive, transitive filtrations of transitive models can be constructed in multiple ways. Here we restrict attention to one particular construction.
\begin{definition}
Let $\mathfrak{A}\in \mathsf{S4}$, $V$ a valuation on $\mathfrak{A}$ and $\Theta$ a finite, subformula closed set of formulae. The \emph{$($least$)$ transitive filtration} of $(\mathfrak{A}, V)$ through $\Theta$ is a pair $(\mathfrak{B}', V')$ with $\mathfrak{B}'=(\mathfrak{B}, \blacksquare)$, where $\mathfrak{B}$ and $V'$ are as per \Cref{filtrmod}, and for all $b\in B$ we have
\[\blacksquare b:=\bigvee\{\square a: \square a\leq \square b\text{ and }a, \square a\in B\}.\]
\end{definition}
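As a sanity check on this definition, note that $\blacksquare b\leq \square b$ always, since each disjunct is below $\square b$; and if both $b$ and $\square b$ happen to lie in $B$, then taking $a=b$ among the disjuncts yields
\[\blacksquare b=\square b.\]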
It is easy to see that transitive filtrations of transitive models are indeed based on closure algebras (cf., e.g., \cite[Lemma 6.2]{BezhanishviliEtAl2016SCR}).
Transitive filtrations provide the necessary countermodels to rewrite modal rules into (conjunctions of) modal stable canonical rules. The following lemma, which is a modal counterpart to \Cref{rewritesi}, explains how.
\begin{lemma}[{\cite[Theorem 5.5]{BezhanishviliEtAl2016SCR}}]
For every modal rule $\Gamma/\Delta$ there is a finite set $\Xi$ of modal stable canonical rules of the form $\scrmod{A}{D}$ with $\mathfrak{A}\in \mathsf{S4}$, such that for any $\mathfrak{B}\in \mathsf{S4}$ we have that $\mathfrak{B}\nvDash \Gamma/\Delta$ iff there is $\scrmod{A}{D}\in \Xi$ such that $\mathfrak{B}\nvDash \scrmod{A}{D}$.\label{rewritemod}
\end{lemma}
\begin{proof}
Since Boolean algebras are locally finite there are, up to isomorphism, only finitely many pairs $(\mathfrak{A}, D)$ such that
\begin{itemize}
\item $\mathfrak{A}$ is at most $k$-generated as a Boolean algebra, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item $D=\{\bar V(\varphi):\square\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}$, where $V$ is a valuation on $\mathfrak{A}$ refuting $\Gamma/\Delta$.
\end{itemize}
Let $\Xi$ be the set of all rules $\scrmod{A}{D}$ for all such pairs $(\mathfrak{A}, D)$, identified up to isomorphism. Then we reason as in the proof of \Cref{rewritesi}, using the well-known fact that every model $(\mathfrak{B}, V)$ with $\mathfrak{B}\in \mathsf{S4}$ has a transitive filtration through $\mathit{Sfor}(\Gamma/\Delta)$ to establish the $(\Rightarrow)$ direction.
\end{proof}
Exactly mirroring the si case, we apply \Cref{rewritemod} to obtain the following uniform axiomatisation of modal rule systems extending $\mathtt{S4_R}$.
\begin{theorem}
Every modal rule system $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is axiom\-atisable over $\mathtt{S4_R}$ by some set of modal stable canonical rules of the form $\scrmod{A}{D}$, for $\mathfrak{A}\in \mathsf{S4}$.\label{thm:axiomatisationS4scr}
\end{theorem}
\subsection{Modal Companions of Superintuitionistic Deductive Systems via Stable Canonical Rules}\label{sec:modalcompanions}
We now turn to the main topic of this section. \Cref{sec:mappings1} reviews the basic ingredients of the theory of modal companions. \Cref{sec:structure1} shows how to apply stable canonical rules to give a novel proof of the Blok-Esakia theorem. Lastly, \Cref{sec:additionalresults} applies our methods to obtain an analogue of the Dummett-Lemmon conjecture for rule systems.
\subsubsection{Semantic Mappings} \label{sec:mappings1}
We begin by defining semantic transformations between Heyting and closure algebras. For more details, consult \cite[Section 3.5]{Esakia2019HADT}.
\begin{definition}
The mapping $\sigma: \mathsf{HA}\to \mathsf{S4}$ maps every $\mathfrak{H}\in \mathsf{HA}$ to the algebra $\sigma \mathfrak{H}:=(B(\mathfrak{H}),\square)$, where $B(\mathfrak{H})$ is the free Boolean extension of $\mathfrak{H}$ and
\[\square a:=\bigvee \{b\in H: b\leq a\}.\]
\end{definition}
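For a concrete example, let $\mathfrak{H}$ be the three-element chain $0<a<1$. Then $B(\mathfrak{H})$ is the four-element Boolean algebra with atoms $a$ and $\neg a$, and the definition of $\square$ gives
\[\square 0=0,\quad \square a=a,\quad \square \neg a=0,\quad \square 1=1,\]
since $0$ is the only element of $H$ below $\neg a$.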
It can be shown that for each $\mathfrak{H}\in \mathsf{HA}$ we have that $\sigma \mathfrak{H}$ is in fact a $\mathtt{GRZ}$-algebra \cite[Corollary 3.5.7]{Esakia2019HADT}.
\begin{definition}
The mapping $\rho:\mathsf{S4}\to \mathsf{HA}$ maps every $\mathfrak{A}\in \mathsf{S4}$ to the algebra $\rho \mathfrak{A}:=(O(A), \land, \lor, \to, 0,1)$, where
\begin{align*}
O(A)&:=\{a\in A:\square a=a\}\\
a\to b&:=\square (\neg a\lor b).
\end{align*}
\end{definition}
The algebra $\rho(\mathfrak{A})$ is called the \emph{Heyting algebra of open elements} associated with $\mathfrak{A}$.
It is easy to verify that $\rho(\mathfrak{A})$ is indeed a Heyting algebra for any closure algebra $\mathfrak{A}$.
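Continuing the example above, the open elements of $\sigma\mathfrak{H}$ for $\mathfrak{H}$ the three-element chain are exactly $\{0, a, 1\}$, and, e.g.,
\[a\to 0=\square(\neg a\lor 0)=\square\neg a=0,\]
recovering the Heyting implication of the chain. This illustrates the first half of \Cref{cor:representationHAS4} below.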
We now give a dual description of the maps $\sigma$, $\rho$ on modal and Esakia spaces.
\begin{definition}
If $\mathfrak{X}=(X, \leq, \mathcal{O})$ is an Esakia space we set $\sigma \mathfrak{X}:=(X, R, \mathcal{O})$ with $R:=\leq$. Let $\mathfrak{Y}:=(Y, R, \mathcal{O})$ be an $\mathtt{S4}$-space. For $x, y\in Y$ write $x\sim y$ iff $Rxy$ and $Ryx$. Define a map $\rho:Y\to \wp (Y)$ by setting $\rho(x)=\{y\in Y:x\sim y\}$. We define $\rho\mathfrak{Y}:=(\rho[Y], \leq, \rho[\mathcal{O}])$ where $\rho(x)\leq \rho(y)$ iff $Rxy$.
\end{definition}
Note that $\sigma$ here is effectively the identity map, though we find it useful to distinguish an Esakia space $\mathfrak{X}$ from $\sigma \mathfrak{X}$ notationally, in order to signal whether we are treating the space as a model for si or modal deductive systems. On the other hand, the map $\rho$ acts on a modal space $\mathfrak{Y}$ by collapsing its $R$-clusters and endowing the result with the quotient topology. We shall refer to $\rho\mathfrak{Y}$ as the \emph{Esakia skeleton} of $\mathfrak{Y}$, and to $\sigma\rho\mathfrak{Y}$ as the \emph{modal skeleton} of $\mathfrak{Y}$. It is easy to see that the map $\rho:\mathfrak{Y}\to \rho \mathfrak{Y}$ is a surjective bounded morphism which moreover reflects $\leq$.
Routine arguments show that the algebraic and topological versions of the maps $\sigma, \rho$ are indeed dual to each other, as stated in the following proposition.
\begin{proposition}
The following hold.\label{prop:mcmapsdual}
\begin{enumerate}
\item Let $\mathfrak{H}\in \mathsf{HA}$. Then $(\sigma \mathfrak{H})_*\cong\sigma (\mathfrak{H}_*)$. Consequently, if $\mathfrak{X}$ is an Esakia space then $(\sigma \mathfrak{X})^*\cong\sigma (\mathfrak{X}^*)$.\label{prop:mcmapsdual1}
\item Let $\mathfrak{X}$ be an $\mathtt{S4}$ modal space. Then $(\rho\mathfrak{X})^*\cong\rho(\mathfrak{X}^*)$. Consequently, if $\mathfrak{A}\in \mathsf{S4}$, then $(\rho\mathfrak{A})_*\cong\rho(\mathfrak{A}_*)$.\label{prop:mcmapsdual2}
\end{enumerate}
\end{proposition}
The dual description of $\rho, \sigma$ makes the following result evident.
\begin{proposition}
For every $\mathfrak{H}\in \mathsf{HA}$ we have $\mathfrak{H}\cong \rho\sigma \mathfrak{H}$. Moreover, for every $\mathfrak{A}\in \mathsf{S4}$ we have $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$.\label{cor:representationHAS4}
\end{proposition}
\subsubsection{The Gödel Translation}
The close connection between Heyting and closure algebras just outlined manifests syntactically as the existence of a well-behaved translation of si formulae into modal ones, called the \emph{Gödel translation} after \citet{Goedel1933EIDIA}.
\begin{definition}[Gödel translation]
The \emph{Gödel translation} is a mapping $T:\mathit{Frm}_{si}\to \mathit{Frm}_{md}$ defined recursively as follows.
\begin{align*}
T(\bot)&:=\bot\\
T(\top)&:=\top\\
T(p)&:=\square p\\
T(\varphi\land \psi)&:=T(\varphi)\land T(\psi)\\
T(\varphi\lor \psi)&:=T(\varphi)\lor T(\psi)\\
T(\varphi\to \psi)&:=\square (\neg T(\varphi)\lor T(\psi))
\end{align*}
\end{definition}
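For instance,
\[T(p\lor (p\to \bot))=\square p\lor \square(\neg \square p\lor \bot),\]
which is equivalent over $\mathtt{K}$ to $\square p\lor \square\neg\square p$; this formula is not a theorem of $\mathtt{S4}$, mirroring the failure of excluded middle in $\mathtt{IPC}$.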
We extend the Gödel translation from formulae to rules by setting
\[T(\Gamma/\Delta):=T[\Gamma]/T[\Delta].\]
We close this subsection by recalling the following key lemma due to \citet{Jerabek2009CR}.
\begin{lemma}[{\cite[Lemma 3.13]{Jerabek2009CR}}]
For every $\mathfrak{A}\in \mathsf{S4}$ and si rule $\Gamma/\Delta$, \label{lem:gtskeleton}
\[\mathfrak{A}\models T(\Gamma/\Delta)\iff \rho\mathfrak{A}\models \Gamma/\Delta\]
\end{lemma}
\subsubsection{Structure of Modal Companions}\label{sec:structure1}
We now have all the material needed to develop the theory of modal companions via the machinery of stable canonical rules.
\begin{definition}
Let $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ be a si-rule system and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ a modal rule system. We say that $\mathtt{M}$ is a \emph{modal companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the si fragment of $\mathtt{M}$) whenever $\Gamma/\Delta\in \mathtt{L}$ iff $T(\Gamma/\Delta)\in \mathtt{M}$. Moreover, let $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$ be a si-logic and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4})$ a modal logic. We say that $\mathtt{M}$ is a \emph{modal companion} of $\mathtt{L}$ (or that $\mathtt{L}$ is the si fragment of $\mathtt{M}$) whenever $\varphi\in \mathtt{L}$ iff $T(\varphi)\in \mathtt{M}$.
\end{definition}
Obviously, $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ iff $\mathsf{Taut}(\mathtt{M})$ is a modal companion of $\mathsf{Taut}(\mathtt{L})$, and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$ iff $\mathtt{M_R}$ is a modal companion of $\mathtt{L_R}$.
Define the following three maps between the lattices $\mathbf{Ext}(\mathtt{IPC_R})$ and $\mathbf{NExt}(\mathtt{S4_R})$.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{S4_R}) & \sigma&:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{S4_R}) \\
\mathtt{L}&\mapsto \mathtt{S4_R}\oplus \{T(\Gamma/\Delta):\Gamma/\Delta\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathtt{GRZ_R}\oplus \tau \mathtt{L}\\
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4_R}) \to \mathbf{Ext}(\mathtt{IPC_R}) \\
\mathtt{M}&\mapsto\{\Gamma/\Delta:T(\Gamma/\Delta)\in \mathtt{M}\}
\end{align*}
These mappings are readily extended to lattices of logics.
\begin{align*}
\tau&:\mathbf{Ext}(\mathtt{IPC})\to \mathbf{NExt}(\mathtt{S4}) & \sigma&:\mathbf{Ext}(\mathtt{IPC})\to \mathbf{NExt}(\mathtt{S4}) \\
\mathtt{L}&\mapsto \mathsf{Taut}(\tau\mathtt{L_R})=\mathtt{S4}\oplus \{T(\varphi):\varphi\in \mathtt{L}\} & \mathtt{L}&\mapsto \mathsf{Taut}(\sigma\mathtt{L_R})=\mathtt{GRZ}\oplus\{T(\varphi):\varphi\in \mathtt{L}\} \\
\end{align*}
\begin{align*}
\rho &:\mathbf{NExt}(\mathtt{S4}) \to \mathbf{Ext}(\mathtt{IPC}) \\
\mathtt{M}&\mapsto \mathsf{Taut}(\rho\mathtt{M_R})=\{\varphi:T(\varphi)\in \mathtt{M}\}
\end{align*}
Furthermore, extend the mappings $\sigma:\mathsf{HA}\to \mathsf{S4}$ and $\rho:\mathsf{S4}\to \mathsf{HA}$ to universal classes by setting
\begin{align*}
\sigma&:\mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{S4}) & \rho&:\mathbf{Uni}(\mathsf{S4})\to \mathbf{Uni}(\mathsf{HA}) \\
\mathcal{U}&\mapsto \mathsf{Uni}\{\sigma \mathfrak{H}:\mathfrak{H}\in \mathcal{U}\} & \mathcal{W}&\mapsto \{\rho\mathfrak{A}:\mathfrak{A}\in \mathcal{W}\}.\\
\end{align*}
Finally, introduce a semantic counterpart to $\tau$ as follows.
\begin{align*}
\tau&: \mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{S4}) \\
\mathcal{U}&\mapsto \{\mathfrak{A}\in \mathsf{S4}:\rho\mathfrak{A}\in \mathcal{U}\}
\end{align*}
The goal of this subsection is to give alternative proofs of the following two classic results in the theory of modal companions. Firstly, that for every si-deductive system $\mathtt{L}$, the modal companions of $\mathtt{L}$ are exactly the elements of the interval $\rho^{-1}(\mathtt{L})$ (\Cref{mcinterval}). Secondly, that the syntactic mappings $\sigma, \rho$ are mutually inverse isomorphisms (\Cref{blokesakia}). This last result (restricted to logics) is widely known as the \emph{Blok-Esakia theorem}.
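By way of illustration, since $T(\varphi)\in \mathtt{S4}$ for every $\varphi\in \mathtt{IPC}$, we have
\[\tau\mathtt{IPC}=\mathtt{S4}\qquad\text{and}\qquad \sigma\mathtt{IPC}=\mathtt{GRZ},\]
so \Cref{mcinterval} specialises to the classical fact that the modal companions of $\mathtt{IPC}$ are exactly the $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4})$ with $\mathtt{S4}\leq \mathtt{M}\leq \mathtt{GRZ}$.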
The main difficulty in proving the results just mentioned lies in showing that the mapping $\sigma:\mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{GRZ_R})$ is surjective. We solve this problem by first applying stable canonical rules to show that the semantic mapping $\sigma:\mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{GRZ})$ is surjective, and subsequently establishing that the syntactic and semantic versions of $\sigma$ capture essentially the same transformation. Our key tool is the following technical lemma.
\begin{lemma}
Let $\mathfrak{A}\in \mathsf{GRZ}$. Then for every modal rule $\Gamma/\Delta$, $\mathfrak{A}\models\Gamma/\Delta$ iff $\sigma\rho\mathfrak{A}\models \Gamma/\Delta$.\label{mainlemma-simod}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This direction follows from the fact that $\sigma\rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ (\Cref{cor:representationHAS4}).
$(\Leftarrow)$ We prove the dual statement that $\mathfrak{A}_*\nvDash \Gamma/\Delta$ implies $\sigma\rho\mathfrak{A}_*\nvDash \Gamma/\Delta$. Let $\mathfrak{X}:=\mathfrak{A}_*$. By \Cref{rewritemod}, $\Gamma/\Delta$ is equivalent to a finite set of modal stable canonical rules of finite closure algebras, so without loss of generality we may assume $\Gamma/\Delta=\scrmod{B}{D}$, for $\mathfrak{B}\in \mathsf{S4}$ finite. So suppose $\mathfrak{X}\nvDash \scrmod{B}{D}$ and let $\mathfrak{F}:=\mathfrak{B}_*$. By \Cref{refutspacemod}, there is a continuous stable surjection $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $\mathfrak{D}:=\{\mathfrak{d}_a:a\in D\}$. We construct a continuous stable surjection $g:\sigma\rho\mathfrak{X}\to \mathfrak{F}$ which also satisfies the BDC for $\mathfrak{D}$. By \Cref{refutspacemod} again, this will show that $\sigma \rho\mathfrak{X}\nvDash\scrmod{B}{D}$ and thereby conclude the proof.
Let $C\subseteq F$ be some cluster. Consider $Z_C:=f^{-1}(C)$. As $f$ is continuous, $Z_C\in \mathsf{Clop}(\mathfrak{X})$. Moreover, since $f$ is stable, $Z_C$ does not cut any cluster. It follows that $\rho[Z_C]$ is clopen in $\sigma\rho\mathfrak{X}$, because $\sigma\rho \mathfrak{X}$ has the quotient topology. Enumerate $C:=\{x_1, \ldots, x_n\}$. Then $f^{-1}(x_i)\subseteq Z_C$ is clopen. By \Cref{propgrz1} we find that $M_i:=\mathit{max}_R(f^{-1}(x_i))$ is closed. Furthermore, as $\mathfrak{X}$ is a $\mathtt{GRZ}$-space, by \Cref{propgrz1} again $M_i$ does not cut any cluster (cf. the remark following \Cref{propgrz1}). Therefore $\rho[M_i]$ is closed, again because $\sigma\rho\mathfrak{X}$ has the quotient topology. Clearly, $\rho[M_i]\cap \rho[M_j]=\varnothing$ for each $i\neq j$.
We shall now separate the closed sets $\rho[M_1], \ldots, \rho[M_n]$ by disjoint clopens. That is, we shall find disjoint clopens $U_1, \ldots, U_n\in \mathsf{Clop}(\sigma\rho\mathfrak{X})$ with $\rho[M_i]\subseteq U_i$ and $\bigcup_i U_i=\rho[Z_C]$.
Let $k\leq n$ and assume that $U_i$ has been defined for all $i<k$. If $k=n$ put $U_n=\rho[Z_C]\smallsetminus\left(\bigcup_{i< k} U_i\right)$ and we are done. Otherwise set $V_k:=\rho[Z_C]\smallsetminus\left(\bigcup_{i< k} U_i\right)$ and observe that it contains each $\rho[M_i]$ for $k\leq i\leq n$. By the separation properties of Stone spaces, for each $i$ with $k<i\leq n$ there is some $U_{k_i}\in \mathsf{Clop}(\sigma\rho\mathfrak{X})$ with $\rho[M_k]\subseteq U_{k_i}$ and $\rho[M_i]\cap U_{k_i}=\varnothing$. Then set $U_k:=\bigcap_{k<i\leq n} U_{k_i}\cap V_k$.
Now define a map
\begin{align*}
g_C&: \rho[Z_C]\to C\\
z&\mapsto x_i\iff z\in U_i.
\end{align*}
Note that $g_C$ is evidently relation preserving, and continuous because $g_C^{-1}(x_i)=U_i$. Finally, define $g: \sigma\rho\mathfrak{X}\to F$ by setting
\[
g(\rho(z)):=\begin{cases}
f(z)&\text{ if } f(z)\text{ does not belong to any proper cluster }\\
g_C(\rho(z))&\text{ if }f(z)\in C\text{ for some proper cluster }C\subseteq F.
\end{cases}
\]
Now, $g$ is evidently relation preserving. Moreover, it is continuous because both $f$ and each $g_C$ are. Suppose $Rg(\rho(x)) y$ and $y\in \mathfrak{d}$ for some $\mathfrak{d}\in \mathfrak{D}$. By construction, $f(x)$ belongs to the same cluster as $g(\rho(x))$, so also $Rf(x)y$. Since $f$ satisfies the BDC for $\mathfrak{D}$, there must be some $z\in X$ such that $Rxz$ and $f(z)\in \mathfrak{d}$. Since $f^{-1}(f(z))\in \mathsf{Clop}(\mathfrak{X})$, by \Cref{props4} and \Cref{propgrz1} there is $z'\in \mathit{max}_R(f^{-1}(f(z)))$ with $Rzz'$.
Then also $Rxz'$ and $f(z')\in \mathfrak{d}$.
But from $z'\in \mathit{max}_R(f^{-1}(f(z)))$ it follows that $f(z')=g(\rho(z'))$ by construction, so we have $g(\rho(z'))\in \mathfrak{d}$. As clearly $R\rho(x)\rho(z')$, we have shown that $g$ satisfies the BDC for $\mathfrak{D}$. By \Cref{refutspacemod} this implies $\sigma\rho\mathfrak{X}\not\models \scrmod{B}{D}$.
\end{proof}
\begin{theorem}
Every $\mathcal{U}\in \mathbf{Uni}(\mathsf{GRZ})$ is generated by its skeletal elements, i.e., $\mathcal{U}=\sigma \rho\mathcal{U}$. \label{unigrzgeneratedskel}
\end{theorem}
\begin{proof}
Since $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ for every $\mathfrak{A}\in \mathsf{GRZ}$ (\Cref{cor:representationHAS4}), we have $\sigma\rho\mathcal{U}\subseteq \mathcal{U}$. Conversely, suppose $\mathcal{U}\nvDash \Gamma/\Delta$. Then there is $\mathfrak{A}\in \mathcal{U}$ with $\mathfrak{A}\nvDash \Gamma/\Delta$. By \Cref{mainlemma-simod} it follows that $\sigma\rho\mathfrak{A}\nvDash\Gamma/\Delta$. This shows $\mathsf{ThR}(\sigma\rho\mathcal{U})\subseteq \mathsf{ThR}(\mathcal{U})$, which is equivalent to $\mathcal{U}\subseteq \sigma\rho\mathcal{U}$. Hence indeed $\mathcal{U}=\sigma \rho\mathcal{U}$.
\end{proof}
\begin{remark}
The restriction of \Cref{unigrzgeneratedskel} to varieties plays an important role in the algebraic proof of the Blok-Esakia theorem given by \citet{Blok1976VoIA}. The unrestricted version is explicitly stated and proved in \cite[Lemma 4.4]{Stronkowski2018OtBETfUC} using a generalisation of Blok's approach, although it also follows from \cite[Theorem 5.5]{Jerabek2009CR}. Blok establishes the restricted version of \Cref{unigrzgeneratedskel} as a consequence of what is now known as the \emph{Blok lemma}. The proof of the Blok lemma is notoriously involved. By contrast, our techniques afford a direct and, we believe, semantically transparent proof of \Cref{unigrzgeneratedskel}. \label{remark:blok}
\end{remark}
Given \Cref{unigrzgeneratedskel}, the main result of this section can be obtained via known routine arguments. First, we show that the syntactic modal companion maps $\tau, \rho, \sigma$ commute with $\mathsf{Alg}(\cdot)$.
\begin{lemma}[{\cite[Theorem 5.9]{Jerabek2009CR}}]
For each $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ and $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$, the following hold:\label{prop:mcmapscommute}
\begin{align}
\mathsf{Alg}(\tau\mathtt{L})&=\tau \mathsf{Alg}(\mathtt{L}) \label{prop:mcmapscommute1}\\
\mathsf{Alg}(\sigma\mathtt{L})&=\sigma \mathsf{Alg}(\mathtt{L})\label{prop:mcmapscommute2}\\
\mathsf{Alg}(\rho\mathtt{M})&=\rho \mathsf{Alg}(\mathtt{M})\label{prop:mcmapscommute3}
\end{align}
\end{lemma}
\begin{proof}
(\ref{prop:mcmapscommute1}) For every $\mathfrak{A}\in \mathsf{S4}$ we have $\mathfrak{A}\in \mathsf{Alg}(\tau\mathtt{L})$ iff $\mathfrak{A}\models T(\Gamma/\Delta)$ for all $\Gamma/\Delta\in \mathtt{L}$ iff $\rho\mathfrak{A}\models \Gamma/\Delta$ for all $\Gamma/\Delta\in \mathtt{L}$ iff $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$ iff $\mathfrak{A}\in \tau \mathsf{Alg}(\mathtt{L})$.
(\ref{prop:mcmapscommute2}) In view of \Cref{unigrzgeneratedskel} it suffices to show that $\mathsf{Alg}(\sigma\mathtt{L})$ and $\sigma \mathsf{Alg}(\mathtt{L})$ have the same skeletal elements. So let $\mathfrak{A}=\sigma\rho\mathfrak{A}\in \mathsf{GRZ}$. Assume $\mathfrak{A}\in\sigma \mathsf{Alg}(\mathtt{L})$. Since $\sigma \mathsf{Alg}(\mathtt{L})$ is generated by $\{\sigma\mathfrak{B}:\mathfrak{B}\in \mathsf{Alg}(\mathtt{L})\}$ as a universal class, by \Cref{cor:representationHAS4} and \Cref{lem:gtskeleton} we have $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. But then $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Conversely, assume $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Then $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. By \Cref{lem:gtskeleton} this is equivalent to $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$, therefore $\sigma\rho\mathfrak{A}=\mathfrak{A}\in \sigma\mathsf{Alg}(\mathtt{L})$.
(\ref{prop:mcmapscommute3}) Let $\mathfrak{H}\in \mathsf{HA}$. If $\mathfrak{H}\in \rho \mathsf{Alg}(\mathtt{M})$ then $\mathfrak{H}=\rho \mathfrak{A}$ for some $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$. It follows that for every si rule $T(\Gamma/\Delta)\in \mathtt{M}$ we have $\mathfrak{A}\models T(\Gamma/\Delta)$, and so by \Cref{lem:gtskeleton} in turn $\mathfrak{H}\models\Gamma/\Delta$. Therefore indeed $\mathfrak{H}\in \mathsf{Alg}(\rho\mathtt{M})$. Conversely, for all si rules $\Gamma/\Delta$, if $\rho\mathsf{Alg}(\mathtt{M})\models \Gamma/\Delta$ then by \Cref{lem:gtskeleton} $\mathsf{Alg}(\mathtt{M})\models T(\Gamma/\Delta)$, hence $\Gamma/\Delta\in \rho\mathtt{M}$. Thus $\mathsf{ThR}(\rho\mathsf{Alg}(\mathtt{M}))\subseteq \rho\mathtt{M}$, and so $\mathsf{Alg}(\rho\mathtt{M})\subseteq \rho\mathsf{Alg}(\mathtt{M})$.
\end{proof}
The result just proved leads straightforwardly to the following, purely semantic characterisation of modal companions.
\begin{lemma}
$\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ iff $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$.\label{mcsemantic}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ Assume $\mathtt{M}$ is a modal companion of $\mathtt{L}$. Then we have $\mathtt{L}=\rho\mathtt{M}$. By \Cref{prop:mcmapscommute} $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$.
$(\Leftarrow)$ Assume that $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$. Therefore, by \Cref{cor:representationHAS4}, $\mathfrak{H}\in \mathsf{Alg}(\mathtt{L})$ implies $\sigma \mathfrak{H}\in \mathsf{Alg}(\mathtt{M})$. This implies that for every si rule $\Gamma/\Delta$, $\Gamma/\Delta\in \mathtt{L}$ iff $T(\Gamma/\Delta)\in \mathtt{M}$.
\end{proof}
We can now prove the main two results of this section.
\begin{theorem}[{\cite[Theorem 5.5]{Jerabek2009CR}}, {\cite[Theorem 3]{Zakharyashchev1991MCoSLSSaPT}}]
The following conditions hold: \label{mcinterval}
\begin{enumerate}
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$, the modal companions of $\mathtt{L}$ form an interval $\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\}$.\label{mcinterval1}
\item For every $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC})$, the modal companions of $\mathtt{L}$ form an interval $\{\mathtt{M}\in \mathbf{NExt}(\mathtt{S4}):\tau\mathtt{L}\leq \mathtt{M}\leq \sigma\mathtt{L}\}$.\label{mcinterval2}
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{mcinterval1}) In view of \Cref{prop:mcmapscommute} it suffices to prove that $\mathtt{M}\in \mathbf{NExt}(\mathtt{S4_R})$ is a modal companion of $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$ iff $\sigma\mathsf{Alg}(\mathtt{L})\subseteq \mathsf{Alg}(\mathtt{M})\subseteq\tau\mathsf{Alg}(\mathtt{L})$.
($\Rightarrow$) Assume $\mathtt{M}$ is a modal companion of $\mathtt{L}$. Then by \Cref{mcsemantic} we have $\mathsf{Alg}(\mathtt{L})=\rho\mathsf{Alg}(\mathtt{M})$, therefore it is clear that $\mathsf{Alg}(\mathtt{M})\subseteq \tau \mathsf{Alg}(\mathtt{L})$. To see that $\sigma\mathsf{Alg}(\mathtt{L})\subseteq \mathsf{Alg}(\mathtt{M})$ it suffices to show that every skeletal algebra in $\sigma\mathsf{Alg}(\mathtt{L})$ belongs to $\mathsf{Alg}(\mathtt{M})$. So let $\mathfrak{A}\cong\sigma\rho\mathfrak{A}\in \sigma\mathsf{Alg}(\mathtt{L})$. Then $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$ by \Cref{lem:gtskeleton}, so there must be $\mathfrak{B}\in \mathsf{Alg}(\mathtt{M})$ such that $\rho\mathfrak{B}\cong \rho\mathfrak{A}$. But this implies $\sigma \rho\mathfrak{B}\cong \sigma \rho\mathfrak{A}\cong \mathfrak{A}$, and as universal classes are closed under subalgebras, by \Cref{cor:representationHAS4} we conclude $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$.
($\Leftarrow$) Assume $\sigma\mathsf{Alg}(\mathtt{L})\subseteq \mathsf{Alg}(\mathtt{M})\subseteq\tau\mathsf{Alg}(\mathtt{L})$. It is an immediate consequence of \Cref{cor:representationHAS4} that $\rho\sigma \mathsf{Alg}(\mathtt{L})=\mathsf{Alg}(\mathtt{L})$, which gives us $\rho\mathsf{Alg}(\mathtt{M})\supseteq\mathsf{Alg}(\mathtt{L})$. But $\mathsf{Alg}(\mathtt{M})\subseteq \tau\mathsf{Alg}(\mathtt{L})$ gives $\rho\mathsf{Alg}(\mathtt{M})\subseteq\rho\tau \mathsf{Alg}(\mathtt{L})\subseteq\mathsf{Alg}(\mathtt{L})$. Therefore indeed $\rho\mathsf{Alg}(\mathtt{M})=\mathsf{Alg}(\mathtt{L})$, so by \Cref{mcsemantic} we conclude that $\mathtt{M}$ is a modal companion of $\mathtt{L}$.
(\ref{mcinterval2}) Immediate from \Cref{mcinterval1} and \Cref{deductivesystemisomorphismsi,deductivesystemisomorphismmodal}.
\end{proof}
\begin{theorem}[Blok-Esakia theorem]
The following conditions hold: \label{blokesakia}
\begin{enumerate}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{IPC_R})\to \mathbf{NExt}(\mathtt{GRZ_R})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ_R})\to \mathbf{Ext}(\mathtt{IPC_R})$ are complete lattice isomorphisms and mutual inverses.\label{blokesakia:1}
\item The mappings $\sigma: \mathbf{Ext}(\mathtt{IPC})\to \mathbf{NExt}(\mathtt{GRZ})$ and $\rho:\mathbf{NExt}(\mathtt{GRZ})\to \mathbf{Ext}(\mathtt{IPC})$ are complete lattice isomorphisms and mutual inverses. \label{blokesakia:2}
\end{enumerate}
\end{theorem}
\begin{proof}
(\ref{blokesakia:1}) It is enough to show that the mappings $\sigma: \mathbf{Uni}(\mathsf{HA})\to \mathbf{Uni}(\mathsf{GRZ})$ and $\rho:\mathbf{Uni}(\mathsf{GRZ})\to \mathbf{Uni}(\mathsf{HA})$ are complete lattice isomorphisms and mutual inverses. Both maps are evidently order preserving, and preservation of infinite joins is an easy consequence of \Cref{lem:gtskeleton}. Let $\mathcal{U}\in \mathbf{Uni}(\mathsf{GRZ})$. Then $\mathcal{U}=\sigma\rho\mathcal{U}$ by \Cref{unigrzgeneratedskel}, so $\sigma$ is surjective and a left inverse of $\rho$. Now let $\mathcal{U}\in \mathbf{Uni}(\mathsf{HA})$. It is an immediate consequence of \Cref{cor:representationHAS4} that $\rho\sigma \mathcal{U}=\mathcal{U}$. Hence $\rho$ is surjective and a left inverse of $\sigma$. Thus $\sigma$ and $\rho$ are mutual inverses, and therefore must both be bijections.
(\ref{blokesakia:2}) Immediate from \Cref{blokesakia:1} and \Cref{deductivesystemisomorphismsi,deductivesystemisomorphismmodal}.
\end{proof}
As noted earlier, the arguments given in the proofs of \Cref{mcinterval,blokesakia} are standard. The novelty of our strategy consists in establishing the key fact on which these standard arguments depend, namely \Cref{unigrzgeneratedskel}, in a novel way using stable canonical rules.
\subsubsection{The Dummett-Lemmon Conjecture} \label{sec:additionalresults}
We call a modal or si-rule system \emph{Kripke complete} if it is of the form $\mathtt{L}=\{\Gamma/\Delta:\mathcal{K}\models \Gamma/\Delta\}$ for some class of Kripke frames $\mathcal{K}$. \citet[Corollary 2]{Zakharyashchev1991MCoSLSSaPT} applied his canonical formulae to prove the \emph{Dummett-Lemmon conjecture} \cite{DummettLemmon1959MLbS4aS5}, which states that a si-logic is Kripke complete iff its weakest modal companion is. To our knowledge, a proof that the Dummett-Lemmon conjecture generalises to rule systems has not been published, although perhaps one could be given by applying Je\v{r}ábek-style canonical rules to adapt \z's argument. Here we give a proof that the Dummett-Lemmon conjecture does indeed generalise to rule systems using stable canonical rules.
It is easy to see that refutation conditions for stable canonical rules work essentially the same way for Kripke frames as they do for Esakia and modal spaces: for every Kripke frame $\mathfrak{X}$ and si stable canonical rule $\scrsi{F}{\mathfrak{D}}$, we have that $\mathfrak{X}\nvDash \scrsi{F}{\mathfrak{D}}$ iff there is a surjective stable homomorphism $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $\mathfrak{D}$, and analogously for the modal case. For details the reader may consult, e.g., \cite{BezhanishviliEtAl2016SCR}. The mappings $\sigma, \tau, \rho$ also extend to classes of Kripke frames in an obvious way. Finally, \Cref{lem:gtskeleton} holds for Kripke frames as well, once appropriately reformulated to incorporate the refutation conditions for stable canonical rules just stated.
We now introduce the notion of a \emph{collapsed} stable canonical rule. We prefer to do so in a geometric setting, so as to emphasize the main intuition behind this concept.
\begin{definition}
Let $\scrmod{F}{\mathfrak{D}}$ be some modal stable canonical rule, with $\mathfrak{F}\in \mathsf{Spa}(\mathtt{S4})$. The \emph{collapsed stable canonical rule} $\scrsi{\rho F}{\rho \mathfrak{D}}$ is obtained by setting
\[\rho \mathfrak{D}:=\{\rho [\mathfrak{d}]: \mathfrak{d}\in \mathfrak{D}\}.\]
\end{definition}
Intuitively, $\scrsi{\rho F}{\rho \mathfrak{D}}$ is obtained from $\scrmod{F}{\mathfrak{D}}$ by collapsing all clusters, both in $\mathfrak{F}$ and in the domains in $\mathfrak{D}$.
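For example, if $\mathfrak{F}$ consists of a single proper cluster $C=\{x_1, \ldots, x_n\}$ and $\mathfrak{D}=\{C\}$, then $\rho\mathfrak{F}$ is a single point and $\rho\mathfrak{D}$ consists of the corresponding singleton $\rho[C]$.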
Collapsed rules obey the following refutation condition.
\begin{lemma}[Rule collapse lemma]
For every $\mathfrak{X}\in \mathsf{Spa}(\mathtt{S4})$ and every modal stable canonical rule $\scrmod{F}{\mathfrak{D}}$ with $\mathfrak{F}\in \mathsf{Spa}(\mathtt{S4})$, if $\mathfrak{X}\nvDash \scrmod{F}{\mathfrak{D}}$ then $\rho \mathfrak{X}\nvDash \scrsi{\rho F}{\rho \mathfrak{D}}$.\label{rulecollapse}
\end{lemma}
\begin{proof}
Assume $\mathfrak{X}\nvDash\scrmod{F}{\mathfrak{D}}$. Then there is a continuous, relation preserving map $f:\mathfrak{X}\to \mathfrak{F}$ that satisfies the BDC for $\mathfrak{D}$. Consider the map $g:\rho\mathfrak{X}\to \rho\mathfrak{F}$ given by
\[g(\rho(x))=\rho(f(x)).\]
Now $\rho(x)\leq\rho(y)$ implies $Rxy$, and since $f$ is relation preserving also $Rf(x)f(y)$, which implies $\rho(f(x))\leq \rho(f(y))$. So $g$ is relation preserving. Furthermore, for any $U\subseteq \rho[F]$ the preimage $\rho^{-1}(g^{-1}(U))=f^{-1}(\rho^{-1}(U))$ is clopen, and so $g^{-1}(U)$ is clopen because $\rho\mathfrak{X}$ carries the quotient topology. Thus $g$ is continuous. Let us check that $g$ satisfies the BDC for $\rho\mathfrak{D}$. Assume that ${\uparrow} g(\rho(x))\cap \rho[\mathfrak{d}]\neq \varnothing$ for $\mathfrak{d}\in \mathfrak{D}$. Then there is some $\rho(y)\in \rho[F]$ with $\rho(f(x))\leq\rho(y)$ and $\rho(y)\in \rho[\mathfrak{d}]$. By construction, wlog we may assume that $y\in \mathfrak{d}$. As $\rho$ is relation reflecting it follows that $Rf(x) y$, and so we have that $R[ f(x)]\cap \mathfrak{d}\neq\varnothing$. Since $f$ satisfies the BDC for $\mathfrak{D}$ we conclude that $f[R[x]]\cap \mathfrak{d}\neq\varnothing$. So there is some $z\in X$ with $Rxz$ and $f(z)\in \mathfrak{d}$. By definition, $\rho(f(z))\in \rho[\mathfrak{d}]$. Hence we have shown that $\rho[f[R[x]]]\cap \rho[\mathfrak{d}]\neq\varnothing$, and so $g$ indeed satisfies the BDC for $\rho\mathfrak{D}$.
\end{proof}
We are now ready to prove the Dummett-Lemmon conjecture for rule systems.
\begin{theorem}[Dummett-Lemmon conjecture for si-rule systems]
For every si-rule system $\mathtt{L}\in \mathbf{Ext}(\mathtt{IPC_R})$, we have that $\mathtt{L}$ is Kripke complete iff $\tau\mathtt{L}$ is.\label{dummettlemmon}
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $\mathtt{L}$ be Kripke complete. Suppose that $\Gamma/\Delta\notin \tau\mathtt{L}$. Then there is $\mathfrak{X}\in \mathsf{Spa}(\tau\mathtt{L})$ such that $\mathfrak{X}\nvDash \Gamma/\Delta$. By \Cref{thm:axiomatisationS4scr}, we may assume that $\Gamma/\Delta=\scrmod{F}{\mathfrak{D}}$ for $\mathfrak{F}$ a preorder. By the rule collapse lemma
it follows that $\rho\mathfrak{X}\nvDash \scrsi{\rho F}{\rho \mathfrak{D}}$. Moreover, by \Cref{lem:gtskeleton} it follows that $\rho\mathfrak{X}\models \mathtt{L}$, and so
we conclude $\scrsi{\rho F}{\rho \mathfrak{D}}\notin \mathtt{L}$. Since $\mathtt{L}$ is Kripke complete, there is a si Kripke frame $\mathfrak{Y}$ such that $\mathfrak{Y}\nvDash \scrsi{\rho F}{\rho \mathfrak{D}}$. By \Cref{refutspace}, there is a stable map $f:\mathfrak{Y}\to \rho \mathfrak{F}$ satisfying the BDC for $\rho \mathfrak{D}$. Work in $\rho \mathfrak{F}$. For every $x\in \rho [F]$ look at $\rho^{-1}(x)$, let $k=|\rho^{-1}(x)|$ and enumerate $\rho^{-1}(x)=\{x_1, \ldots, x_k\}$. Now work in $\mathfrak{Y}$. For every $y\in f^{-1}(x)$ replace $y$ with a $k$-cluster $y_1, \ldots, y_k$ and extend the relation $R$ clusterwise: $Ry_iz_j$ iff either $y=z$ or $Ryz$. Call the result $\mathfrak{Z}$. Clearly $\mathfrak{Z}$ is a Kripke frame, and moreover $\mathfrak{Z}\models\tau \mathtt{L}$, because $\rho \mathfrak{Z}\cong \mathfrak{Y}$. For convenience, identify $\rho\mathfrak{Z}=\mathfrak{Y}$. For every $x\in \rho [F]$ define a map $g_x:f^{-1}(x)\to \rho^{-1}(x)$ by setting $g_x(y_i)=x_i$ ($i\leq k$). Finally, define $g:\mathfrak{Z}\to \mathfrak{F}$ by putting $g=\bigcup_{x\in \rho[F]} g_x$.
The map $g$ is evidently well defined, surjective, and relation preserving. We claim that moreover, it satisfies the BDC for $\mathfrak{D}$. To see this, suppose that $R [g(y_i)] \cap \mathfrak{d}\neq\varnothing$ for some $\mathfrak{d}\in \mathfrak{D}$. Then there is $x_j\in F$ with $x_j\in \mathfrak{d}$ and $Rg(y_i)x_j$. By construction also $\rho(x_j)\in \rho [\mathfrak{d}]$ and $R f(\rho (y_i))\rho(x_j)$. As $f$ satisfies the BDC for $\rho\mathfrak{D}$ it follows that there is some $z\in Y$ such that $R\rho(y_i)z$ and $f(z)\in \rho[\mathfrak{d}]$. We may write $z=\rho(z_n)$ for some $n\leq k$, where $k=|\rho^{-1}(f(z))|$. Surely $Ry_iz_n$. Furthermore, since $f(z)\in \rho[\mathfrak{d}]$ there must be some $m\leq k$ such that $g(z_m)\in \mathfrak{d}$. By construction $Rz_nz_m$ and so in turn $Ry_iz_m$. This establishes that $g$ indeed satisfies the BDC for $\mathfrak{D}$. Thus we have shown $\mathfrak{Z}\nvDash \scrmod{F}{\mathfrak{D}}$. It follows that $\tau \mathtt{L}$ is Kripke complete.
$(\Leftarrow)$ Assume that $\tau\mathtt{L}$ is Kripke complete. Suppose that $\Gamma/\Delta\notin \mathtt{L}$. Then there is an Esakia space $\mathfrak{X}$ with $\mathfrak{X}\models \mathtt{L}$ and $\mathfrak{X}\nvDash \Gamma/\Delta$. Therefore $\sigma\mathfrak{X}\nvDash T(\Gamma/\Delta)$. Surely $\sigma \mathfrak{X}\models \tau \mathtt{L}$, so $T(\Gamma/\Delta)\notin\tau \mathtt{L}$ and thus there is a Kripke frame $\mathfrak{Y}$ such that $\mathfrak{Y}\models \tau \mathtt{L}$ and $\mathfrak{Y}\nvDash T(\Gamma/\Delta)$. But then $\rho\mathfrak{Y}\nvDash \Gamma/\Delta$. Now $\rho\mathfrak{Y}$ is a Kripke frame, and it validates $\mathtt{L}$ by \Cref{lem:gtskeleton}. Therefore we have shown that $\mathtt{L}$ is indeed Kripke complete.
\end{proof}
\subsection{Relations}
\label{sec:functionsrelations}
We begin by fixing some notation concerning binary relations. Let $X$ be a set, $R$ a transitive binary relation on $X$, and $U\subseteq X$. We define:
\begin{align}
\mathit{qmax}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Rxy\text{ then }Ryx\}\\
\mathit{max}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Rxy\text{ then }x=y\}\\
\mathit{qmin}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Ryx\text{ then }Rxy\}\\
\mathit{min}_R(U)&:=\{x\in U: \text{ for all }y\in U\text{, if } Ryx\text{ then }x=y\}.
\end{align}
The elements of $\mathit{qmax}_R(U)$ and $\mathit{max}_R(U)$ are called \emph{$R$-quasi-maximal} and \emph{$R$-maximal} elements of $U$ respectively, and similarly the elements of $\mathit{qmin}_R(U)$ and $\mathit{min}_R(U)$ are called \emph{$R$-quasi-minimal} and \emph{$R$-minimal} elements of $U$ respectively. Note that if $R$ is a partial order then $\mathit{qmax}_R(U)=\mathit{max}_R(U)$ and $\mathit{qmin}_R(U)=\mathit{min}_R(U)$. Lastly, we say that an element $x\in U$ is \emph{$R$-passive} in $U$ if for all $y\in X\smallsetminus U$ with $Rxy$, there is no $z\in U$ such that $Ryz$. Intuitively, an $R$-passive element of $U$ is an $x\in U$ such that one cannot ``leave'' and ``re-enter'' $U$ starting from $x$ and ``moving through'' $R$. The set of all $R$-passive elements of $U$ is denoted by $\mathit{pas}_R(U)$.
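To see how the quasi-notions and the plain notions come apart, let $X=\{x, y, z\}$ and let $R$ be the transitive relation consisting of the pairs $(x,x), (y,y), (x,y), (y,x), (x,z)$ and $(y,z)$, so that $\{x, y\}$ forms a proper cluster below $z$. Then
\[
\mathit{qmax}_R(\{x, y\})=\{x, y\},\qquad \mathit{max}_R(\{x, y\})=\varnothing,\qquad \mathit{qmax}_R(X)=\mathit{max}_R(X)=\{z\},
\]
and every element of $\{x, y\}$ is $R$-passive in $\{x, y\}$, since $z$ is the only point outside $\{x, y\}$ and $R[z]=\varnothing$.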
\subsection{Deductive Systems} We now review \emph{deductive systems}, which span both propositional logics and rule systems.
\label{sec:rulsys} The set $\mathit{Frm}_\nu(X)$ of \emph{formulae} in signature $\nu$ over a set of variables $X$ is the least set containing $X$ and such that for every $f\in \nu$ and $\varphi_1, \ldots, \varphi_n\in \mathit{Frm}_\nu(X)$ we have $f(\varphi_1, \ldots, \varphi_n)\in \mathit{Frm}_\nu(X)$, where $n$ is the arity of $f$. Henceforth we will take $\mathit{Prop}$ to be a fixed arbitrary countably infinite set of variables and write simply $\mathit{Frm}_\nu$ for $\mathit{Frm}_\nu(\mathit{Prop})$. We occasionally write formulae in the form $\varphi(p_1, \ldots, p_n)$ to indicate that the variables occurring in $\varphi$ are among $p_1, \ldots, p_n$. A \emph{substitution} is a map $s:\mathit{Prop}\to \mathit{Frm}_\nu(\mathit{Prop})$. Every substitution may be extended to a map $\bar s:\mathit{Frm}_\nu(\mathit{Prop})\to \mathit{Frm}_\nu(\mathit{Prop})$ recursively, by setting $\bar s(p)=s(p)$ if $p\in \mathit{Prop}$, and $\bar s(f(\varphi_1, \ldots, \varphi_n))=f(\bar s(\varphi_1), \ldots, \bar s(\varphi_n))$.
\begin{definition}
A \emph{logic} over $\mathit{Frm}_\nu$ is a set $\mathtt{L}\subseteq \mathit{Frm}_\nu$, such that
\begin{equation}
\varphi\in \mathtt{L}\Rightarrow \bar s(\varphi)\in \mathtt{L} \text{ for every substitution $s$.}\tag{structurality}
\end{equation}
\end{definition}
Interesting examples of logics, including those discussed in this paper, are normally closed under conditions other than structurality. If $\Gamma, \Delta$ are sets of formulae and $\mathcal{S}$ is a set of logics, we write $\Gamma\oplus_{\mathcal{S}}\Delta$ for the least logic in $\mathcal{S}$ extending both $\Gamma, \Delta$.
For any sets $X, Y$, write $X\subseteq_\omega Y$ to mean that $X\subseteq Y$ and $|X|$ is finite. A \emph{$($multi-conclusion$)$ rule} in signature $\nu$ over a set of variables $X$ is a pair $(\Gamma, \Delta)$ such that $\Gamma, \Delta\subseteq_\omega \mathit{Frm}_\nu(X)$. In case $\Delta=\{\varphi\}$ we write $\Gamma/\Delta$ simply as $\Gamma/\varphi$, and analogously if $\Gamma=\{\psi\}$. We use $;$ to denote union between finite sets of formulae, so that $\Gamma; \Delta=\Gamma\cup \Delta$ and $\Gamma; \varphi=\Gamma\cup \{\varphi\}$. We write $\mathit{Rul}_\nu(X)$ for the set of all rules in $\nu$ over $X$, and simply $\mathit{Rul}_\nu$ when $X=\mathit{Prop}$.
\begin{definition}
A \emph{rule system} is a set $\mathtt{S}\subseteq \mathit{Rul}_\nu(X)$ satisfying the following conditions.
\begin{enumerate}
\item If $\Gamma/\Delta\in \mathtt{S}$ then $\bar s[\Gamma]/\bar s[\Delta]\in \mathtt{S}$ for all substitutions $s$ (structurality).
\item $\varphi/\varphi\in \mathtt{S}$ for every formula $\varphi$ (reflexivity).
\item If $\Gamma/\Delta\in \mathtt{S}$ then $\Gamma;\Gamma'/\Delta;\Delta'\in \mathtt{S}$ for any finite sets of formulae $\Gamma',\Delta'$ (monotonicity).
\item If $\Gamma/\Delta;\varphi\in\mathtt{S}$ and $\Gamma;\varphi/\Delta\in \mathtt{S}$ then $\Gamma/\Delta\in \mathtt{S}$ (cut).
\end{enumerate}
\end{definition}
\begin{remark}
Rule systems are also called \emph{multiple-conclusion consequence relations} (e.g., in \cite{BezhanishviliEtAl2016SCR,Iemhoff2016CRaAR}). We prefer the terminology of rule systems (used in \cite{Jerabek2009CR}) for brevity.
\end{remark}
If $\mathcal{S}$ is a set of rule systems and $\Sigma, \Xi$ are sets of rules, we write $\Xi\oplus_{\mathcal{S}}\Sigma$ for the least rule system in $\mathcal{S}$ extending both $\Xi$ and $\Sigma$. A set of rules $\Sigma$ is said to \emph{axiomatise} a rule system $\mathtt{S}\in \mathcal{S}$ \emph{over} some rule system $\mathtt{S}'\in \mathcal{S}$ if $\mathtt{S}'\oplus_{\mathcal{S}}\Sigma=\mathtt{S}$.
If $\mathtt{S}$ is a rule system we let the set of \emph{tautologies} of $\mathtt{S}$ be the set
\[\mathsf{Taut}(\mathtt{S}):=\{\varphi\in \mathit{Frm}_\nu:/\varphi\in \mathtt{S}\}.\]
By the structurality condition for rule systems, it follows that $\mathsf{Taut}(\mathtt{S})$ is a logic for every rule system $\mathtt{S}$.
We interpret deductive systems over algebras in the same signature. If $\mathfrak{A}$ is a $\nu$-algebra we denote its carrier by $A$. A \emph{valuation} on a $\nu$-algebra $\mathfrak{A}$ is a map $V:\mathit{Prop}\to A$. Every valuation $V$ on $\mathfrak{A}$ may be recursively extended to a map $\bar V:\mathit{Frm}_\nu\to A$, by setting
\begin{align*}
\bar V(p)&:= V(p)\\
\bar V(f(\varphi_1, \ldots, \varphi_n))&:=f^{\mathfrak{A}}(\bar V(\varphi_1), \ldots, \bar V(\varphi_n)).
\end{align*}
A pair $(\mathfrak{A}, V)$ where $\mathfrak{A}$ is a $\nu$-algebra and $V$ a valuation on $\mathfrak{A}$ is called a \emph{model}. A rule $\Gamma/\Delta$ is \emph{valid} on a $\nu$-algebra $\mathfrak{A}$ if the following holds: for any valuation $V$ on $\mathfrak{A}$, if $\bar V(\gamma)=1$ for all $\gamma\in \Gamma$, then $\bar V(\delta)=1$ for some $\delta\in \Delta$. When this holds we write $\mathfrak{A}\models \Gamma/\Delta$, otherwise we write $\mathfrak{A}\nvDash \Gamma/\Delta$ and say that $\mathfrak{A}$ \emph{refutes} $\Gamma/\Delta$. As a special case, a formula $\varphi$ is valid on a $\nu$-algebra $\mathfrak{A}$ if the rule $/\varphi$ is. We write $\mathfrak{A}\models \varphi$ when this holds, $\mathfrak{A}\nvDash \varphi$ otherwise. The notion of validity extends to classes of $\nu$-algebras: $\mathcal{K}\models \Gamma/\Delta$ means that $\mathfrak{A}\models \Gamma/\Delta$ for every $\mathfrak{A}\in \mathcal{K}$, and $\mathcal{K}\nvDash \Gamma/\Delta$ means that $\mathfrak{A}\nvDash \Gamma/\Delta$ for some $\mathfrak{A}\in \mathcal{K}$. Analogous notation is used for formulae. Finally, if $\Xi$ is a set of formulae or rules and $\mathfrak{A}$ a $\nu$-algebra, $\mathfrak{A}\models \Xi$ means that every formula or rule in $\Xi$ is valid on $\mathfrak{A}$, $\mathfrak{A}\nvDash \Xi$ means that some formula or rule in $\Xi$ is not valid on $\mathfrak{A}$, and similarly for classes of $\nu$-algebras.
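For example, a Heyting algebra $\mathfrak{A}$ validates the rule $p\lor q/p;q$ iff $a\lor b=1$ implies $a=1$ or $b=1$ for all $a, b\in A$, a condition usually called \emph{well-connectedness}. Since $\mathfrak{A}$ and $\mathfrak{A}\times \mathfrak{A}$ validate the same formulae while $\mathfrak{A}\times \mathfrak{A}$ is never well-connected for non-trivial $\mathfrak{A}$, validity of rules is in general strictly more discriminating than validity of formulae.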
Write $\mathcal{A}_\nu$ for the class of all $\nu$-algebras. For every deductive system $\mathtt{S}$ we define
\[\mathsf{Alg}(\mathtt{S}):=\{\mathfrak{A}\in \mathcal{A}_\nu:\mathfrak{A}\models \mathtt{S}\}.\]
Conversely, if $\mathcal{K}$ is a class of $\nu$-algebras we set
\begin{align*}
\mathsf{ThR}(\mathcal{K})&:=\{\Gamma/\Delta\in \mathit{Rul}_\nu:\mathcal{K}\models \Gamma/\Delta\}\\
\mathsf{Th}(\mathcal{K})&:=\{\varphi\in \mathit{Frm}_\nu:\mathcal{K}\models \varphi\}
\end{align*}
We also interpret deductive systems over $\nu$-formulae on expansions of Stone spaces dual to $\nu$-algebras, which for the moment we refer to as \emph{$\nu$-spaces}. Precise definitions of these topological structures and of valuations over them are given in each subsequent section. If $\mathfrak{X}$ is a $\nu$-space we denote its underlying domain as $X$, its family of open sets as $\mathcal{O}$, and its family of clopen sets as $\mathsf{Clop}(\mathfrak{X})$. Moreover, if $U\subseteq X$ we write $-U$ for $X\smallsetminus U$. Given a valuation $V$ on a $\nu$-space $\mathfrak{X}$, we call $(\mathfrak{X}, V)$ a (global) \emph{model}. A formula $\varphi$ is \emph{satisfied} on a model $(\mathfrak{X}, V)$ at a point $x$ if $x\in \bar V(\varphi)$. In this case we write $\mathfrak{X}, V, x\models \varphi$, otherwise we write $\mathfrak{X}, V, x\nvDash \varphi$ and say that the model $(\mathfrak{X}, V)$ \emph{refutes} $\varphi$ at the point $x$. A rule $\Gamma/\Delta$ is \emph{valid} on a model $(\mathfrak{X}, V)$ if the following holds: if for every $x\in X$ we have $\mathfrak{X}, V, x\models \gamma$ for each $\gamma\in \Gamma$, then there is some $\delta\in \Delta$ such that $\mathfrak{X}, V, x\models \delta$ for every $x\in X$. In this case we write $\mathfrak{X}, V\models \Gamma/\Delta$, otherwise we write $\mathfrak{X}, V\nvDash \Gamma/\Delta$ and say that the model $(\mathfrak{X}, V)$ \emph{refutes} $\Gamma/\Delta$. A rule $\Gamma/\Delta$ is \emph{valid} on a $\nu$-space $\mathfrak{X}$ if it is valid on the model $(\mathfrak{X}, V)$ for every valuation $V$ on $\mathfrak{X}$, otherwise $\mathfrak{X}$ \emph{refutes} $\Gamma/\Delta$. We write $\mathfrak{X}\models \Gamma/\Delta$ to mean that $\Gamma/\Delta$ is valid on $\mathfrak{X}$, and $\mathfrak{X}\nvDash\Gamma/\Delta$ to mean that $\mathfrak{X}$ refutes $\Gamma/\Delta$. As in the case of algebras we define validity on models and $\nu$-spaces for a formula $\varphi$ as validity of the rule $/\varphi$, and write $\mathfrak{X}\models \varphi$ if $\varphi$ is valid in $\mathfrak{X}$, otherwise $\mathfrak{X}\nvDash\varphi$. The notion of validity generalises to classes of $\nu$-spaces, so that if $\mathcal{K}$ is a class of $\nu$-spaces then $\mathcal{K}\models \Gamma/\Delta$ means $\mathfrak{X}\models \Gamma/\Delta$ for every $\mathfrak{X}\in \mathcal{K}$, and $\mathcal{K}\nvDash \Gamma/\Delta$ means $\mathfrak{X}\nvDash \Gamma/\Delta$ for some $\mathfrak{X}\in \mathcal{K}$. We extend the present notation for validity to sets of formulae or rules the same way as for algebras.
Write $\mathcal{S}_\nu$ for the class of all $\nu$-spaces. For every deductive system $\mathtt{S}$ we define
\[\mathsf{Spa}(\mathtt{S}):=\{\mathfrak{X}\in \mathcal{S}_\nu:\mathfrak{X}\models \mathtt{S}\}.\]
Conversely, if $\mathcal{K}$ is a class of $\nu$-spaces we set
\begin{align*}
\mathsf{ThR}(\mathcal{K})&:=\{\Gamma/\Delta\in \mathit{Rul}_\nu:\mathcal{K}\models \Gamma/\Delta\}\\
\mathsf{Th}(\mathcal{K})&:=\{\varphi\in \mathit{Frm}_\nu:\mathcal{K}\models \varphi\}
\end{align*}
Throughout the paper we study the structure of lattices of deductive systems via semantic methods. This is made possible by the following fundamental result, connecting the syntactic types of deductive systems to closure conditions on the classes of algebras validating them. \Cref{birkhoff} is widely known as \emph{Birkhoff's theorem}, after \cite{Birkhoff1935OtSoAA}.
\begin{theorem}[{\cite[Theorems II.11.9 and V.2.20]{BurrisSankappanavar1981ACiUA}}]
For every class $\mathcal{K}$ of $\nu$-algebras, the following conditions hold:\label{syntacticvarietiesuniclasses}
\begin{enumerate}
\item $\mathcal{K}$ is a variety iff $\mathcal{K}=\mathsf{Alg}(\mathtt{S})$ for some set of $\nu$-formulae $\mathtt{S}$. \label{birkhoff}
\item $\mathcal{K}$ is a universal class iff $\mathcal{K}=\mathsf{Alg}(\mathtt{S})$ for some set of $\nu$-rules $\mathtt{S}$.
\end{enumerate}
\end{theorem}
In this sense, $\nu$-logics correspond to varieties of $\nu$-algebras, whereas $\nu$-rule systems correspond to universal classes of $\nu$-algebras.
This concludes our general preliminaries. We now begin the study of modal companions via stable canonical rules.
\subsection{Deductive Systems for Provability}
We begin by briefly reviewing definitions and basic properties of the structures under discussion.
\subsubsection{Intuitionistic Provability, Frontons, and $\mathtt{KM}$-spaces}
In this subsection we shall work with the \emph{modal superintuitionistic signature}, \[msi:=\{\land, \lor, \to,\boxtimes, \bot, \top\}.\] The set $\mathit{Frm_{msi}}$ of \emph{modal superintuitionistic (msi) formulae} is defined recursively as follows.
\[\varphi::= p \sep \bot \sep \top \sep \varphi \land \varphi \sep \varphi\lor \varphi \sep \varphi\to \varphi \sep \boxtimes\varphi \]
where $p\in \mathit{Prop}$.
The logic $\mathtt{IPCK}$ is obtained by extending $\mathtt{IPC}$ by the $\mathtt{K}$-axiom \[\boxtimes(p\to q)\to (\boxtimes p \to \boxtimes q)\] and closing under necessitation, that is, requiring that whenever $\varphi\in \mathtt{IPCK}$ then $\boxtimes\varphi\in \mathtt{IPCK}$ as well.
\begin{definition}
A \emph{normal modal superintuitionistic logic}, or msi-logic for short, is a logic $\mathtt{L}$ over $\mathit{Frm}_{msi}$ satisfying the following additional conditions:
\begin{enumerate}
\item $\mathtt{IPCK}\subseteq \mathtt{L}$;
\item If $\varphi\to \psi, \varphi\in \mathtt{L}$ then $\psi\in \mathtt{L}$ (MP);
\item If $\varphi\in \mathtt{L}$ then $\boxtimes \varphi\in \mathtt{L}$ (NEC).
\end{enumerate}
A \emph{modal superintuitionistic rule system}, or msi-rule system for short, is a rule system $\mathtt{L}$ over $\mathit{Frm}_{msi}$ satisfying the following additional requirements.
\begin{enumerate}
\item $/\varphi\in \mathtt{L}$ whenever $\varphi\in \mathtt{IPCK}$;
\item $\varphi, \varphi\to \psi/\psi\in \mathtt{L}$ (MP-R);
\item $\varphi/\boxtimes\varphi\in \mathtt{L}$ (NEC-R).
\end{enumerate}
\end{definition}
If $\mathtt{L}$ is an msi-logic (resp. msi-rule system) we write $\mathbf{NExt}(\mathtt{L})$ for the set of msi-logics (resp. rule systems) extending $\mathtt{L}$. Surely, the set of msi-logics coincides with $\mathbf{NExt}(\mathtt{IPCK})$. It is easy to check that $\mathbf{NExt}(\mathtt{IPCK})$ forms a lattice under $\oplus_{\mathbf{NExt}(\mathtt{IPCK})}$ as join and intersection as meet. If $\mathtt{L}\in \mathbf{NExt}(\mathtt{IPCK})$, let $\mathtt{L_R}$ be the least msi-rule system containing $/\varphi$ for each $\varphi\in \mathtt{L}$. Then $\mathtt{IPCK_R}$ is the least msi-rule system. The set $\mathbf{NExt}(\mathtt{IPCK_R})$ of msi-rule systems is also a lattice when endowed with $\oplus_{\mathbf{NExt}(\mathtt{IPCK_R})}$ as join and intersection as meet. As usual, we refer to these lattices as we refer to their underlying sets, i.e. $\mathbf{NExt}(\mathtt{IPCK})$ and $\mathbf{NExt}(\mathtt{IPCK_R})$ respectively. We also write both $\oplus_{\mathbf{NExt}(\mathtt{IPCK})}$ and $\oplus_{\mathbf{NExt}(\mathtt{IPCK_R})}$ simply as $\oplus$, leaving context to resolve ambiguities. Clearly, for every $\mathtt{L}\in \mathbf{NExt}(\mathtt{IPCK})$ we have that $\mathsf{Taut}(\mathtt{L_R})=\mathtt{L}$, which establishes the following result.
\begin{proposition}
The mappings $(\cdot)_{\mathtt{R}}$ and $\mathsf{Taut}(\cdot)$ are mutually inverse complete lattice isomorphisms between $\mathbf{NExt}(\mathtt{IPCK})$ and the sublattice of $\mathbf{NExt}(\mathtt{IPCK_R})$ consisting of all msi-rule systems $\mathtt{L}$ such that $\mathsf{Taut}(\mathtt{L})_\mathtt{R}=\mathtt{L}$.\label{deductivesystemisomorphismmsi}
\end{proposition}
Rather than studying $\mathbf{NExt}(\mathtt{IPCK_R})$ in its entirety, we shall focus on the sublattice of $\mathbf{NExt}(\mathtt{IPCK_R})$ consisting of all normal extensions of the rule system $\mathtt{KM_R}$, where $\mathtt{KM}$ is the msi-logic axiomatised as follows.
\[\mathtt{KM}:=\mathtt{IPCK}\oplus p\to \boxtimes p \oplus (\boxtimes p\to p)\to p\oplus \boxtimes p\to (q\lor (q\to p)).\]
The logic $\mathtt{KM}$ was introduced by \citet{Kuznetsov1978PIL} (see also \cite{KuznetsovMuravitsky1986OSLAFoPLE}) and later studied by \citet{Esakia2006TMHCaCMEotIL}. Its main motivation lies in its close connection with the Gödel-Löb provability logic, to be discussed in the next section. An extensive overview of both the history and theory of $\mathtt{KM}$ may be found in \cite{Muravitsky2014LKaB}.
A \emph{fronton} is a tuple $\mathfrak{H}=(H, \land, \lor, \to,\boxtimes, 0, 1)$ such that $(H, \land, \lor,\to , 0, 1)$ is a Heyting algebra and for every $a, b\in H$, $\boxtimes$ satisfies
\begin{align}
\boxtimes 1&=1\\
\boxtimes (a\land b)&=\boxtimes a\land \boxtimes b\\
a&\leq \boxtimes a\\
\boxtimes a\to a&=a\\
\boxtimes a &\leq b\lor (b\to a)
\end{align}
Frontons are discussed in detail, e.g., in \cite{Esakia2006TMHCaCMEotIL,Litak2014CMwPS}. We let $\mathsf{Frt}$ denote the class of all frontons. By \Cref{syntacticvarietiesuniclasses}, $\mathsf{Frt}$ is a variety. We write $\mathbf{Var}(\mathsf{Frt})$ and $\mathbf{Uni}(\mathsf{Frt})$ respectively for the lattices of subvarieties and of universal subclasses of $\mathsf{Frt}$. \Cref{algebraisationfrtvar} in the following result follows from, e.g., \cite[Proposition 7]{Muravitsky2014LKaB}, whereas \Cref{algebraisationfrtuni} can be obtained via the techniques used in the proofs of \Cref{thm:algebraisationHA,thm:algebraisationMA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisationfrt}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{KM})\to \mathbf{Var}(\mathsf{Frt})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{Frt})\to \mathbf{NExt}(\mathtt{KM})$;\label{algebraisationfrtvar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{Uni}(\mathsf{Frt})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{Frt})\to \mathbf{NExt}(\mathtt{KM_R})$.\label{algebraisationfrtuni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every msi-logic $($resp. msi-rule system$)$ extending $\mathtt{KM}$ is complete with respect to some variety $($resp. universal class$)$ of frontons. \label{completeness_msi}
\end{corollary}
We mention a simple yet important property of frontons, which plays a key role in the development of algebra-based rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$.
\begin{proposition}[cf. {\cite[Proposition 5]{Esakia2006TMHCaCMEotIL}}]
Every fronton $\mathfrak{H}$ satisfies the identity
\[\boxtimes a=\bigwedge\{b\lor (b\to a):b\in H\}\]
for every $a\in H$. \label{frontonsquare}
\end{proposition}
It follows that for every Heyting algebra $\mathfrak{H}$, there is at most one way of expanding $\mathfrak{H}$ to a fronton, namely by setting
\[\boxtimes a:=\bigwedge\{b\lor (b\to a):b\in H\}.\]
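For example, on the three-element Heyting chain $0<c<1$ this yields
\begin{align*}
\boxtimes 0&=(0\lor (0\to 0))\land (c\lor (c\to 0))\land (1\lor (1\to 0))=1\land c\land 1=c,\\
\boxtimes c&=\boxtimes 1=1,
\end{align*}
so $\boxtimes$ sends each element to its immediate successor; on a Boolean algebra, by contrast, $b\lor (b\to a)=b\lor \neg b\lor a=1$ for every $b$, so the expansion is the trivial one with $\boxtimes a=1$ for all $a$.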
A \emph{$\mathtt{KM}$-space} is a tuple $\mathfrak{X}=(X, \leq, \sqsubseteq, \mathcal{O})$, such that $(X,\leq, \mathcal{O})$ is an Esakia space,
and $\sqsubseteq$ is a binary relation on $X$ satisfying the following conditions, where ${\Uparrow} x:=\{y\in X:x\sqsubseteq y\}$ and ${\Downarrow} x:=\{y\in X:y\sqsubseteq x\}$, and $x<y$ iff $x\leq y$ and $x\neq y$:\label{def:kmspace}
\begin{enumerate}
\item $x<y$ implies $x\sqsubseteq y$;
\item $x\sqsubseteq y$ implies $x\leq y$;
\item ${\Uparrow} x$ is closed for all $x\in X$;
\item ${\Downarrow} [U]\in \mathsf{Clop}(\mathfrak{X})$ for every $U\in \mathsf{ClopUp}(\mathfrak{X})$;
\item For every $U\in \mathsf{ClopUp}(\mathfrak{X})$ and $x\in X$, if $x\notin U$ then there is $y\in -U$ such that $x\leq y$ and ${{\Uparrow}} y\subseteq U$.
\end{enumerate}
$\mathtt{KM}$-spaces are discussed in \cite{Esakia2006TMHCaCMEotIL}, and more at length in \cite{CastiglioniEtAl2010OFHA}.
A \emph{valuation} on a $\mathtt{KM}$-space $\mathfrak{X}$ is a map $V:\mathit{Prop}\to \mathsf{ClopUp}(\mathfrak{X})$. The geometrical semantics for msi-rule systems extending $\mathtt{KM_R}$ over $\mathtt{KM}$-spaces is obtained straightforwardly by combining the geometrical semantics of si-rule systems and that of modal rule systems. The relation $\leq$ is used to interpret the implication connective $\to$, and the relation $\sqsubseteq$ is used to interpret the modal operator $\boxtimes$.
If $\mathfrak{X}, \mathfrak{Y}$ are $\mathtt{KM}$-spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called a \emph{bounded morphism} if for all $x, y\in X$ we have:
\begin{itemize}
\item $x\leq y$ implies $f(x)\leq f(y)$;
\item $x\sqsubseteq y$ implies $f(x)\sqsubseteq f(y)$;
\item $f(x)\leq y$ implies that there is $z\in X$ with $x\leq z$ and $f(z)=y$;
\item $f(x)\sqsubseteq y$ implies that there is $z\in X$ with $x\sqsubseteq z$ and $f(z)=y$
\end{itemize}
We recall some useful properties of $\mathtt{KM}$-spaces, which are proved in \cite[Proposition 4.8]{CastiglioniEtAl2010OFHA}.
\begin{proposition}
For every $\mathtt{KM}$-space $\mathfrak{X}$, the following conditions hold:\label{propKMspa}
\begin{enumerate}
\item For every $U\in \mathsf{ClopUp}(\mathfrak{X})$ we have $\{x\in X: {\Uparrow} x\subseteq U\}= U\cup \mathit{max}_\leq (-U)$;
\item If $\mathfrak{X}$ is finite, then for all $x, y\in X$ we have $x\sqsubseteq y$ iff $x<y$.
\end{enumerate}
\end{proposition}
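In particular, the finite case leaves no freedom in the choice of $\sqsubseteq$: one readily checks that every finite poset $(X, \leq)$, taken with the discrete topology, becomes a $\mathtt{KM}$-space by setting $\sqsubseteq\,:=\,<$ (for condition (5) of the definition, given $x\notin U$ take $y$ to be any $\leq$-maximal element of $-U$ above $x$), and by \Cref{propKMspa}(2) this is the only possible choice.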
It is known that the category of frontons with corresponding homomorphisms is dually equivalent to the category of $\mathtt{KM}$-spaces with continuous bounded morphisms. This result was announced in \cite[354--5]{Esakia2006TMHCaCMEotIL}, and proved in detail in \cite[Theorem 4.4]{CastiglioniEtAl2010OFHA}. We denote the $\mathtt{KM}$-space dual to a fronton $\mathfrak{H}$ as $\mathfrak{H_*}$, and the fronton dual to a $\mathtt{KM}$-space $\mathfrak{X}$ as $\mathfrak{X}^*$.
\subsubsection{Classical Provability, Magari Algebras, and $\mathtt{GL}$-spaces}
We now work in the modal signature $md$ already discussed in \Cref{ch:1}. The modal logic $\mathtt{GL}$ is axiomatised by extending $\mathtt{K}$ with the well-known \emph{Löb formula}.
\begin{align*}
\mathtt{GL}:=&\mathtt{K}\oplus \square (\square p\to p)\to \square p\\
=&\mathtt{K4}\oplus \square (\square p\to p)\to \square p
\end{align*}
The logic $\mathtt{GL}$ was independently discovered by Boolos and the Siena logic group led by Magari (cf. \cite{Sambin1974UDTDL,Sambin1976AEFPTiIDA,Magari1975TDA,SambinValentini1982TMLoPtSA,Boolos1980OSoMLwPI}) as a formalisation of the provability predicate of Peano arithmetic. The arithmetical completeness of $\mathtt{GL}$ was proved by \citet{Solovay1976PIoML} (see also \cite{JonghMontagna1988PFP}). The reader may consult \cite{Boolos1993TLoP} (as well as the more recent if less comprehensive \cite{Muravitsky2014LKaB}) for an overview of known results concerning $\mathtt{GL}$.
A modal algebra $\mathfrak{A}$ is called a \emph{Magari algebra} (after \cite{Magari1975TDA}) if it satisfies the identity
\[\square (\square a\to a)=\square a\]
for all $a\in A$.
Magari algebras are also called $\mathtt{GL}$-algebras, e.g. in \cite{Litak2014CMwPS}. We let $\mathsf{Mag}$ denote the variety of all Magari algebras. Clearly, every Magari algebra is a transitive modal algebra, and moreover $\mathsf{Mag}$ coincides with the class of all modal algebras satisfying the equation
\[\lozenge a =\lozenge (\square \neg a\land a).\]
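One half of this claim is a short equational computation: instantiating the identity $\square(\square a\to a)=\square a$ at $\neg a$ gives $\square(\square \neg a\to \neg a)=\square \neg a$, whence
\[\lozenge a=\neg\square\neg a=\neg\square(\square \neg a\to \neg a)=\lozenge\neg(\square \neg a\to \neg a)=\lozenge(\square \neg a\land a).\]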
The following result is a straightforward consequence of \Cref{thm:algebraisationMA}.
\begin{theorem}
The following maps are pairs of mutually inverse dual isomorphisms:\label{algebraisationgl}
\begin{enumerate}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{GL})\to \mathbf{Var}(\mathsf{Mag})$ and $\mathsf{Th}:\mathbf{Var}(\mathsf{Mag})\to \mathbf{NExt}(\mathtt{GL})$;\label{algebraisationglvar}
\item $\mathsf{Alg}:\mathbf{NExt}(\mathtt{GL_R})\to \mathbf{Uni}(\mathsf{Mag})$ and $\mathsf{ThR}:\mathbf{Uni}(\mathsf{Mag})\to \mathbf{NExt}(\mathtt{GL_R})$.\label{algebraisationgluni}
\end{enumerate}
\end{theorem}
\begin{corollary}
Every modal logic $($resp. modal rule system$)$ extending $\mathtt{GL}$ is complete with respect to some variety $($resp. universal class$)$ of Magari algebras. \label{completeness_gl}
\end{corollary}
Modal spaces dual to Magari algebras are called \emph{$\mathtt{GL}$-spaces}. $\mathtt{GL}$-spaces display various similarities with $\mathtt{GRZ}$-spaces, as the reader can appreciate by comparing the following result with \Cref{propgrz1}.
\begin{proposition}[{cf. \cite{Magari1975RaDTfDA}}]
For every $\mathtt{GL}$-space $\mathfrak{X}$ and ${U}\in \mathsf{Clop}(\mathfrak{X})$, the following conditions hold: \label{propgl1}
\begin{enumerate}
\item If $x\in \mathit{max}_R(U)$ then $R[x]\cap U=\varnothing$; \label{propgl1:1}
\item $\mathit{max}_R(U)\in \mathsf{Clop}(\mathfrak{X})$; \label{propgl1:2}
\item If $x\in U$ then either $x\in \mathit{max}_R(U)$ or there is $y\in \mathit{max}_R(U)$ such that $Rxy$;\label{propgl1:3}
\item If $\mathfrak{X}$ is finite then $R$ is irreflexive. \label{propgl1:4}
\end{enumerate}
\end{proposition}
$\mathtt{GL}$ is well known to be complete with respect to the class of irreflexive and transitive Kripke frames containing no infinite ascending chains. However, like $\mathtt{GRZ}$-spaces, $\mathtt{GL}$-spaces may contain clusters, and a fortiori reflexive points.
\subsection{Pre-stable Canonical Rules for Normal Extensions of $\mathtt{KM_R}$ and $\mathtt{GL_R}$}\label{sec:scrgl}
In this section we develop a new kind of algebra-based rules, serving as analogues of stable canonical rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$ and $\mathbf{NExt}(\mathtt{GL_R})$. These rules encode a notion of filtration weaker than standard filtration, and are better suited than the latter to the rule systems under discussion. We call them \emph{pre-stable canonical rules}.
\subsubsection{The $\mathtt{KM_R}$ Case}
We have seen notions of filtration for both Heyting and modal algebras. One would hope that combining the latter would yield a suitable notion of filtration for frontons, which could then be used to develop stable canonical rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$. This is in principle possible, but suboptimal. The reason is that with filtrations understood this way, rule systems in $\mathbf{NExt}(\mathtt{KM_R})$ would turn out to admit very few filtrations. To see this, recall (\Cref{propKMspa}) that in every finite $\mathtt{KM}$-space $\mathfrak{X}$ we have that $x\sqsubseteq y$ iff $x<y$ for all $x, y\in X$. Now let $\mathfrak{X}$ be any $\mathtt{KM}$-space such that there are $x, y\in X$ with $x\neq y$ and $x\sqsubseteq y$. Then any finite image of $\mathfrak{X}$ under a $\sqsubseteq$-preserving map $h$ with $h(x)=h(y)$ would contain a reflexive point, hence would fail to be a $\mathtt{KM}$-space.
We know that every finite distributive lattice has a unique Heyting algebra expansion, and moreover that every finite Heyting algebra has a unique fronton expansion. These constructions lead to a natural method for extracting finite countermodels based on frontons to non-valid msi rules, which we illustrate in the proof of \Cref{filtrationfronton}. This result, in a somewhat different formulation, was first proved by \citet{Muravitskii1981FAotICatEoaEHNM} via frame-theoretic methods.
\begin{lemma}
For any msi rule $\Gamma/\Delta$, if $\mathsf{Frt}\nvDash\Gamma/\Delta$ then there is a finite fronton $\mathfrak{H}\in \mathsf{Frt}$ such that $\mathfrak{H}\nvDash \Gamma/\Delta$.\label{filtrationfronton}
\end{lemma}
\begin{proof}
Assume $\mathsf{Frt}\nvDash\Gamma/\Delta$ and let $\mathfrak{H}\in \mathsf{Frt}$ be a fronton with $\mathfrak{H}\nvDash \Gamma/\Delta$. Take a valuation $V$ with $\mathfrak{H}, V\nvDash \Gamma/\Delta$. Put $\Theta=\mathit{Sfor}(\Gamma/\Delta)$, let $\mathfrak{K}$ be the bounded distributive sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$, and set
\begin{align*}
D^\boxtimes&:=\{\bar V(\varphi)\in H:\boxtimes \varphi\in \Theta\}\\
D^\to&:= \{(\bar V(\varphi), \bar V(\psi))\in H\times H: \varphi\to \psi\in \Theta\}\cup \{(b, a): a\in D^\boxtimes\text{ and } b\in K\}.
\end{align*}
For all $a, b\in K$ define
\begin{align*}
a\rightsquigarrow b&:=\bigvee\{c\in K: a\land c\leq b\}\\
\boxtimes' a&:= \bigwedge_{b\in K} b\lor (b\rightsquigarrow a)
\end{align*}
Obviously $(\mathfrak{K}, \rightsquigarrow)$ is a Heyting algebra, and by \Cref{frontonsquare} it follows that $\mathfrak{K}':=(\mathfrak{K}, \rightsquigarrow, \boxtimes')$ is a fronton. Moreover, the inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a bounded lattice embedding satisfying
\begin{align*}
a\rightsquigarrow b&\leq a\to b & &\text{for all }(a,b)\in K\times K\\
a\rightsquigarrow b&= a\to b & &\text{for all }(a,b)\in D^\to\\
\boxtimes' a&=\boxtimes a & &\text{for all }a\in D^\boxtimes.
\end{align*}
The first two claims are proved the same way as in the proof of \Cref{rewritesi}. For the third claim we reason as follows. Suppose $a\in D^\boxtimes$. Then $(b, a)\in D^\to$ for every $b\in K$ by construction. Therefore,
\[\boxtimes' a=\bigwedge_{b\in K} b\lor (b\rightsquigarrow a)= \bigwedge_{b\in K} b\lor (b\to a).\]
By the axioms of frontons we have $\boxtimes a\leq b\lor (b\to a)$ for all $b\in H$, hence for all $b\in K$ in particular. Therefore $\boxtimes a\leq \boxtimes' a$. Conversely, if $a\in D^\boxtimes$ then $\boxtimes a=\bar V(\boxtimes\varphi)\in K$ for some $\boxtimes\varphi\in \Theta$, so we may instantiate $b:=\boxtimes a$ and compute
\begin{align*}
\boxtimes' a&\leq \boxtimes a\lor (\boxtimes a \rightsquigarrow a) \\
&\leq\boxtimes a \lor (\boxtimes a\to a) \tag{by $\boxtimes a\rightsquigarrow a\leq \boxtimes a\to a$}\\
&=\boxtimes a. \tag{by $\boxtimes a\to a= a\leq \boxtimes a$}
\end{align*}
Let $V'$ be an arbitrary valuation on $\mathfrak{K}'$ with $V'(p)=V(p)$ whenever $p\in \mathit{Sfor}(\Gamma/\Delta)\cap \mathit{Prop}$. Then for every $\varphi\in \Theta$ we have $\bar V(\varphi)=\bar V'(\varphi)$. This is shown easily by induction on the structure of $\varphi$. Therefore, $\mathfrak{K}', V'\nvDash \Gamma/\Delta$.
\end{proof}
The proof of \Cref{filtrationfronton} motivates an alternative notion of filtration for frontons. Let $\mathfrak{H}, \mathfrak{K}\in \mathsf{Frt}$. A map $h:\mathfrak{H}\to \mathfrak{K}$ is called \emph{pre-stable} if for every $a, b\in H$ we have $h(a\to b)\leq h(a)\to h(b)$. For $a, b\in H$, we say that $h$ satisfies the $\to$-\emph{bounded domain condition} (BDC$^\to$) for $(a, b)$ if $h(a\to b)=h(a)\to h(b)$. For $D\subseteq H$, we say that $h$ satisfies the $\boxtimes$-\emph{bounded domain condition} (BDC$^\boxtimes$) for $D$ if $h(\boxtimes a)=\boxtimes h(a)$ for every $a\in D$. If $D\subseteq H\times H$, we say that $h$ satisfies the BDC$^\to$ for $D$ if it does for each $(a, b)\in D$, and analogously for the BDC$^\boxtimes$. Lastly, if $D^\to\subseteq H\times H$ and $D^\boxtimes\subseteq H$, we say that $h$ satisfies the BDC for $(D^\to, D^\boxtimes)$ if $h$ satisfies the BDC$^\to$ for $D^\to$ and the BDC$^\boxtimes$ for $D^\boxtimes$.
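Observe that pre-stability is automatic for maps preserving the bounded lattice structure: if $h:\mathfrak{H}\to \mathfrak{K}$ is a bounded lattice homomorphism, then $h(a\to b)\land h(a)=h((a\to b)\land a)=h(a\land b)\leq h(b)$, and therefore $h(a\to b)\leq h(a)\to h(b)$. This is why the inclusion constructed in the proof of \Cref{filtrationfronton} is pre-stable.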
\begin{definition}
Let $\mathfrak{H}$ be a fronton, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{K}', V')$, with $\mathfrak{K}'\in \mathsf{Frt}$, is called a (\emph{finite}) \emph{weak filtration of $(\mathfrak{H}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{K}'=(\mathfrak{K},\to , \boxtimes)$, where $\mathfrak{K}$ is the bounded sublattice of $\mathfrak{H}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{K}'\to \mathfrak{H}$ is a pre-stable embedding satisfying the BDC$^\to$ for the set $\{(\bar V(\varphi), \bar V(\psi)):\varphi\to \psi\in \Theta\}$, and satisfying the BDC$^\boxtimes$ for the set $\{\bar V(\varphi):\boxtimes \varphi\in \Theta\}$.
\end{enumerate}
\end{definition}
A straightforward induction on structure establishes the following filtration theorem.
\begin{theorem}[Filtration theorem for frontons]
Let $\mathfrak{H}$ be a fronton, $V$ a valuation on $\mathfrak{H}$, and $\Theta$ a finite, subformula-closed set of formulae. If $(\mathfrak{K}', V')$ is a weak filtration of $(\mathfrak{H}, V)$ through $\Theta$ then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
Consequently, for every rule $\Gamma/\Delta$ such that $\gamma, \delta\in \Theta$ for each $\gamma\in \Gamma$ and $\delta\in \Delta$ we have
\[\mathfrak{H}, V\models \Gamma/\Delta\iff \mathfrak{K}', V'\models \Gamma/\Delta.\]
\end{theorem}
We now introduce algebra-based rules for rule systems in $\mathbf{NExt}(\mathtt{KM_R})$ by syntactically encoding weak filtrations as just defined. We call these \emph{pre-stable canonical rules} to emphasize the role of pre-stable maps as opposed to stable maps in their refutation conditions.
\begin{definition}
Let $\mathfrak{H}\in \mathsf{Frt}$ be a finite fronton, and let $D^\to\subseteq H\times H$, $D^\boxtimes\subseteq H$ be such that $a\in D^\boxtimes$ implies $(b, a)\in D^\to$ for every $b\in H$. The \emph{pre-stable canonical rule} of $(\mathfrak{H}, D^\to, D^\boxtimes)$ is defined as $\pscrmsi{H}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma:=&\{p_0\leftrightarrow 0\}\cup\{p_1\leftrightarrow 1\} \cup\\
&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in H\}\cup
\{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in H\}\cup\\
& \{p_{a\to b}\leftrightarrow p_a\to p_b:(a, b)\in D^\to\}\cup \{p_{\boxtimes a}\leftrightarrow \boxtimes p_a:a\in D^\boxtimes\}\\
\Delta:=&\{p_a\leftrightarrow p_b:a, b\in H\text{ with } a\neq b\}.
\end{align*}
\end{definition}
The next two results outline algebraic refutation conditions for msi pre-stable canonical rules. They may be proved with straightforward adaptations of the proofs of similar results seen in earlier sections.
\begin{proposition}
For every finite fronton $\mathfrak{H}$ and $D^\to\subseteq H\times H$, $D^\boxtimes\subseteq H$, we have $\mathfrak{H}\nvDash \pscrmsi{H}{D}$. \label{msiscr-refutation1}
\end{proposition}
\begin{proposition}
For every msi pre-stable canonical rule $\pscrmsi{H}{D}$ and any $\mathfrak{K}\in \mathsf{Frt}$, we have $\mathfrak{K}\nvDash \pscrmsi{H}{D}$ iff there is a pre-stable embedding $h:\mathfrak{H}\to \mathfrak{K}$ satisfying the BDC for $(D^\to,D^\boxtimes)$.\label{msiscr-refutation2}
\end{proposition}
We now give refutation conditions for msi pre-stable canonical rules on $\mathtt{KM}$-spaces. If $\mathfrak{X}, \mathfrak{Y}$ are $\mathtt{KM}$-spaces, a map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{pre-stable} if for all $x, y\in X$, $x\leq y$ implies $f(x)\leq f(y)$. Clearly, if $f$ is pre-stable then for all $x, y\in X$, $x\sqsubseteq y$ implies $f(x)\leq f(y)$. Now let $\mathfrak{d}\subseteq Y$. We say that $f$ \emph{satisfies the BDC$^\to$ for $\mathfrak{d}$} if for all $x\in X$,
\[{{\uparrow}}[ f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[{{\uparrow}} x]\cap \mathfrak{d}\neq\varnothing.\]
We say that $f$ \emph{satisfies the BDC$^\boxtimes$ for $\mathfrak{d}$} if for all $x\in X$ the following two conditions hold.
\begin{align*}
{{\Uparrow}}[f(x)]\cap \mathfrak{d}\neq\varnothing \Rightarrow f[{{\Uparrow}} x]\cap \mathfrak{d}\neq\varnothing\tag{BDC$^\boxtimes$-back}\\
f[{{\Uparrow}} x]\cap \mathfrak{d}\neq\varnothing \Rightarrow {{\Uparrow}}[f(x)]\cap \mathfrak{d}\neq\varnothing\tag{BDC$^\boxtimes$-forth}
\end{align*}
If $\mathfrak{D}\subseteq \wp(Y)$, then we say that $f$ satisfies the BDC$^\to$ for $\mathfrak{D}$ if it does for every $\mathfrak{d}\in \mathfrak{D}$, and similarly for the BDC$^\boxtimes$. Finally, if $\mathfrak{D}^\to, \mathfrak{D}^\boxtimes\subseteq \wp(Y)$, then we say that $f$ satisfies the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\boxtimes)$ if $f$ satisfies the BDC$^\to$ for $\mathfrak{D}^\to$ and the BDC$^\boxtimes$ for $\mathfrak{D}^\boxtimes$.
Let $\mathfrak{H}$ be a finite fronton. If $D^\to\subseteq H\times H$, for every $(a, b)\in D^\to$ set $\mathfrak{d}^\to_{(a, b)}:=\beta (a)\smallsetminus\beta (b)$. If $D^\boxtimes\subseteq H$, for every $a\in D^\boxtimes$ set $\mathfrak{d}^\boxtimes_{a}:=-\beta (a)$. Finally, put $\mathfrak{D}^\to:=\{\mathfrak{d}^\to_{(a, b)}:(a, b)\in D^\to\}$, $\mathfrak{D}^\boxtimes:=\{\mathfrak{d}^\boxtimes_{a}:a \in D^\boxtimes\}$.
\begin{proposition}
For every msi pre-stable canonical rule $\pscrmsi{H}{D}$ and any $\mathtt{KM}$-space $\mathfrak{X}$, we have $\mathfrak{X}\nvDash\pscrmsi{H}{D}$ iff there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for $(\mathfrak{D}^\to, \mathfrak{D}^\boxtimes)$.\label{refutspacemsi}
\end{proposition}
\begin{proof}
$(\Rightarrow)$ Assume $\mathfrak{X}\nvDash\pscrmsi{H}{D}$. Then there is a pre-stable embedding $h:\mathfrak{H}\to\mathfrak{X}^*$ satisfying the BDC for $(D^\to,D^\boxtimes)$. Reasoning as in the proofs of \Cref{refutspace} and \Cref{refutspacemod} it follows that there is a pre-stable map $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC$^\to$ for $\mathfrak{D}^\to$ and satisfying the BDC$^\boxtimes$-back for $\mathfrak{D}^\boxtimes$, namely the map $f=h^{-1}$.
Let us check that $f$ satisfies the BDC$^\boxtimes$-forth for $\mathfrak{D}^\boxtimes$. Let $\mathfrak{d}^\boxtimes_a\in \mathfrak{D}^\boxtimes$. Assume $f[{{\Uparrow}} x]\cap \mathfrak{d}^\boxtimes_a\neq\varnothing$, i.e., that there is $y\in {{\Uparrow}} x$ with $f(y)\in \mathfrak{d}^\boxtimes_a$. So $x\notin \boxtimes_\sqsubseteq h(U)$, where $U:=-\mathfrak{d}^\boxtimes_a$. Since $h$ satisfies the BDC$^\boxtimes$ for $a$ we have $\boxtimes_\sqsubseteq h(U)=h(\boxtimes_\sqsubseteq U)$, and so $x\notin h(\boxtimes_\sqsubseteq U)$. This implies $f(x)\notin \boxtimes_\sqsubseteq(U)$, therefore there must be some $z\in \mathfrak{d}^\boxtimes_a$ such that $f(x)\sqsubseteq z$, i.e. ${{\Uparrow}} [f(x)]\cap \mathfrak{d}^\boxtimes_a\neq\varnothing$.
$(\Leftarrow)$ Assume that there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{H}_*$ satisfying the BDC for $(\mathfrak{D}^\to,\mathfrak{D}^\boxtimes)$. By the proof of \Cref{refutspace}, $f^{-1}:\mathfrak{H}\to \mathfrak{X}^*$ is a pre-stable embedding satisfying the BDC$^\to$ for $D^\to$. Let us check that $f^{-1}$ satisfies the BDC$^\boxtimes$ for $D^\boxtimes$. Let $U\subseteq X$ be such that $U=\beta(a)$ for some $a\in D^\boxtimes$, and reason as follows.
\begin{align*}
x\notin f^{-1}(\boxtimes_\sqsubseteq U)&\iff {{\Uparrow}} [f(x)]\cap \mathfrak{d}^\boxtimes_a\neq\varnothing\\
&\iff {{\Uparrow}} x\cap f^{-1}(\mathfrak{d}^\boxtimes_a)\neq\varnothing \tag{$f$ satisfies the BDC$^\boxtimes$ for $\mathfrak{d}^\boxtimes_a$}\\
&\iff x\notin \boxtimes_\sqsubseteq f^{-1}(U).
\end{align*}
\end{proof}
In view of \Cref{refutspacemsi}, when working with $\mathtt{KM}$-spaces we may write an msi pre-stable canonical rule $\pscrmsi{H}{D}$ as $\pscrmsi{H_*}{\mathfrak{D}}$.
We close this subsection by proving that our msi pre-stable canonical rules are expressive enough to axiomatise every rule system in $\mathbf{NExt}(\mathtt{KM_R})$.
\begin{lemma}
For every msi rule $\Gamma/\Delta$ there is a finite set $\Xi$ of msi pre-stable canonical rules such that for any $\mathfrak{K}\in \mathsf{Frt}$ we have $\mathfrak{K}\nvDash \Gamma/\Delta$ iff there is $\pscrmsi{H}{D}\in \Xi$ such that $\mathfrak{K}\nvDash \pscrmsi{H}{D}$.\label{rewritemsi}
\end{lemma}
\begin{proof}
Since the variety of bounded distributive lattices is locally finite there are, up to isomorphism, only finitely many triples $(\mathfrak{H}, D^\to, D^\boxtimes)$ such that
\begin{itemize}
\item $\mathfrak{H}\in \mathsf{Frt}$ and $\mathfrak{H}$ is at most $k$-generated as a bounded distributive lattice, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item There is a valuation $V$ on $\mathfrak{H}$ refuting $\Gamma/\Delta$, such that
\begin{align*}
D^\to=&\{(\bar V(\varphi), \bar V(\psi)):\varphi\to \psi\in \mathit{Sfor}(\Gamma/\Delta)\}\cup \\
& \{(b, \bar V(\varphi)):\boxtimes\varphi\in \mathit{Sfor}(\Gamma/\Delta)\text{ and }b\in H\}\\
D^\boxtimes=&\{\bar V(\varphi): \boxtimes\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}.
\end{align*}
\end{itemize}
Let $\Xi$ be the set of all msi pre-stable canonical rules $\pscrmsi{H}{D}$ for all such triples $(\mathfrak{H}, D^\to, D^\boxtimes)$, identified up to isomorphism.
$(\Rightarrow)$ Let $\mathfrak{K}\in \mathsf{Frt}$ and suppose $\mathfrak{K}\nvDash \Gamma/\Delta$. Take a valuation $V$ on $\mathfrak{K}$ such that $\mathfrak{K}, V\nvDash \Gamma/\Delta$. Then by the proof of \Cref{filtrationfronton} there is a weak filtration $(\mathfrak{H}', V')$ of $(\mathfrak{K}, V)$ through $\mathit{Sfor}(\Gamma/\Delta)$, which by the filtration theorem for frontons is such that $\mathfrak{H}', V'\nvDash \Gamma/\Delta$. This implies that there is a pre-stable embedding $h:\mathfrak{H}'\to \mathfrak{K}$, which again by the proof of \Cref{filtrationfronton} satisfies the BDC for the pair $(D^\to, D^\boxtimes)$ defined as above.
Therefore $\pscrmsi{H'}{D}\in \Xi$ and $\mathfrak{K}\nvDash\pscrmsi{H'}{D}$.
$(\Leftarrow)$ Analogous to the same direction in, e.g., \Cref{rewritesi}.
\end{proof}
\begin{theorem}
Every msi-rule system $\mathtt{L}\in \mathbf{NExt}(\mathtt{KM_R})$ is axiomatisable over $\mathtt{KM_R}$ by some set of msi pre-stable canonical rules of the form $\pscrmsi{H}{D}$, where $\mathfrak{H}\in \mathsf{KM}$. \label{axiomKMscr}
\end{theorem}
\begin{proof}
Analogous to \Cref{axiomatisationsi}.
\end{proof}
\subsubsection{The $\mathtt{GL_R}$ Case}
Modal stable canonical rules as developed in \Cref{sec:scrmod} can axiomatise every rule system in $\mathbf{NExt}(\mathtt{GL_R})$ \cite[Theorem 5.6]{BezhanishviliEtAl2016SCR}. However, modal stable canonical rules differ significantly from msi pre-stable canonical rules: they are based on a different notion of filtration, which is stated in terms of stable rather than pre-stable maps.
Moreover, $\mathtt{GL_R}$ admits very few filtrations. The situation is similar to the case of $\mathbf{NExt}(\mathtt{KM_R})$. For recall (\Cref{propgl1}) that finite $\mathtt{GL}$-spaces are strict partial orders. If $\mathfrak{X}$ is a $\mathtt{GL}$-space and $f:\mathfrak{X}\to \mathfrak{Y}$ is a stable map from $\mathfrak{X}$ onto some finite modal space $\mathfrak{Y}$ such that $f(x)=f(y)$ for some $x, y\in X$ with $Rxy$, then $\mathfrak{Y}$ contains a reflexive point, hence cannot be a $\mathtt{GL}$-space.
In response to this problem, an alternative notion of filtration was introduced in \cite{BenthemBezhanishviliforthcomingMFoF}, whose authors note that the same technique was already used in \cite{Boolos1993TLoP}. We call it \emph{weak filtration}. As usual, we prefer an algebraic definition. If $\mathfrak{A}, \mathfrak{B}$ are modal algebras and $D\subseteq A$, let us say that a map $h:\mathfrak{A}\to \mathfrak{B}$ satisfies the $\square$-\emph{bounded domain condition} (BDC$^\square$) for $D$ if $h(\square a)= \square h(a)$ for every $a\in D$.
\begin{definition}
Let $\mathfrak{B}\in \mathsf{Mag}$ be a Magari algebra, $V$ a valuation on $\mathfrak{B}$, and $\Theta$ a finite, subformula closed set of formulae. A (finite) model $(\mathfrak{A}', V')$, with $\mathfrak{A}'\in \mathsf{Mag}$, is called a (\emph{finite}) \emph{weak filtration of $(\mathfrak{B}, V)$ through $\Theta$} if the following hold:
\begin{enumerate}
\item $\mathfrak{A}'=(\mathfrak{A}, \square)$, where $\mathfrak{A}$ is the Boolean subalgebra of $\mathfrak{B}$ generated by $\bar V[\Theta]$;
\item $V(p)=V'(p)$ for every propositional variable $p\in \Theta$;
\item The inclusion $\subseteq:\mathfrak{A}'\to \mathfrak{B}$ satisfies the BDC$^\square$ for $D:=\{\bar V'(\varphi):\square\varphi\in \Theta\}$.
\end{enumerate}
\end{definition}
\begin{theorem}
Let $\mathfrak{B}\in \mathsf{Mag}$ be a Magari algebra, $V$ a valuation on $\mathfrak{B}$, and $\Theta$ a finite, subformula closed set of formulae. Let $(\mathfrak{A}', V')$ be a weak filtration of $(\mathfrak{B}, V)$. Then for every $\varphi\in \Theta$ we have
\[\bar V(\varphi)=\bar V'(\varphi).\]
\end{theorem}
\begin{proof}
Straightforward induction on the structure of $\varphi$.
\end{proof}
Unlike weak filtrations in the msi setting, modal weak filtrations are not in general unique. We will be particularly interested in weak filtrations satisfying an extra condition, which we will construe as a modal counterpart to pre-stability in the msi setting. For any modal algebra $\mathfrak{A}$ and $a\in A$ we write $\square^+(a):=\square a\land a$. Let $\mathfrak{A}, \mathfrak{B}\in \mathsf{Mag}$ be Magari algebras. A Boolean homomorphism $h:\mathfrak{A}\to \mathfrak{B}$ is called \emph{pre-stable} if for every $a\in A$ we have $h(\square^+ a)\leq \square^+h(a)$. Clearly, every stable Boolean homomorphism $h:\mathfrak{A}\to \mathfrak{B}$ is pre-stable, since $h(\square a)\leq \square h(a)$ implies $h(\square a\land a)=h(\square a)\land h(a)\leq \square h(a)\land h(a)$. A weak filtration $(\mathfrak{A}', V')$ of some model $(\mathfrak{B}, V)$ through some finite, subformula closed set of formulae $\Theta$ is called \emph{pre-stable} if the embedding $\subseteq :\mathfrak{A}'\to \mathfrak{B}$ is pre-stable.
If $\mathfrak{A}, \mathfrak{B}$ are modal algebras and $D\subseteq A$, a map $h:\mathfrak{A}\to \mathfrak{B}$ satisfies the $\square^+$-\emph{bounded domain condition} (BDC$^{\square^+}$) for $D$ if $h(\square^+ a)= \square^+h(a)$ for every $a\in D$. Note that if $(\mathfrak{A}', V')$ is a weak filtration of $(\mathfrak{B}, V)$ through some $\Theta$, then for every $D\subseteq A'$ the inclusion $\subseteq:\mathfrak{A}'\to \mathfrak{B}$ satisfies the BDC$^{\square^+}$ for $D$ iff it satisfies the BDC$^\square$ for $D$. Indeed, since $\Theta$ is subformula-closed we have that $\square^+ \varphi\in \Theta$ implies $\square \varphi\in \Theta$, which gives the ``only if'' direction, whereas the converse follows from the fact that $\subseteq$ is a Boolean embedding.
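Explicitly, writing $h$ for the inclusion, the converse direction amounts to the computation
\[h(\square^+ a)=h(\square a\land a)=h(\square a)\land h(a)=\square h(a)\land h(a)=\square^+ h(a),\]
where the third equality is precisely the BDC$^\square$ for $a$.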
Our algebra-based rules encode pre-stable weak filtrations as defined above, and explicitly include a parameter $D^{\square^+}$, linked to the BDC$^{\square^+}$, intended as a counterpart to the parameter $D^\to$ of msi pre-stable canonical rules. We call these rules \emph{modal pre-stable canonical rules.}
\begin{definition}
Let $\mathfrak{A}\in \mathsf{MA}$ be a finite modal algebra, and let $D^{\square^+}, D^\square\subseteq A$. Let $\square^+\varphi:=\square \varphi\land\varphi$. The \emph{pre-stable canonical rule} of $(\mathfrak{A}, D^{\square^+}, D^\square)$, is defined as $\pscrmod{A}{D}=\Gamma/\Delta$, where
\begin{align*}
\Gamma:=&\{p_{a\land b}\leftrightarrow p_a\land p_b:a, b\in A\}\cup \{p_{a\lor b}\leftrightarrow p_a\lor p_b:a, b\in A\}\cup \\
&\{p_{\neg a}\leftrightarrow \neg p_a:a\in A\}\cup \{p_{\square^+a}\to \square^+p_a:a\in A\}\cup\\
& \{ \square^+ p_a\to p_{\square^+ a}:a\in D^{\square^+}\}\cup \{p_{\square a}\leftrightarrow \square p_a:a\in D^\square\}\\
\Delta:=&\{p_a:a\in A\smallsetminus \{1\}\}.
\end{align*}
\end{definition}
It is helpful to conceptualise modal pre-stable canonical rules as algebra-based rules for bi-modal rule systems in the signature $\{\land, \lor, \neg,\square, \square^+,0,1\}$ (so that $\square^+$ is an independent operator rather than defined from $\square$) and containing $\square^+ p\leftrightarrow \square p\land p$ as an axiom.\footnote{This view of $\mathtt{GL}$ as a bimodal logic is the main insight informing Litak's \cite{Litak2014CMwPS} strategy for deriving \Cref{KMisovar} of \Cref{KMiso} from the theory of polymodal companions of msi-logics as developed by \citet{WolterZakharyaschev1997IMLAFoCBL,WolterZakharyaschev1997OtRbIaCML}. In that setting, msi formulae are translated into formulae in a bimodal signature, but the two modalities of the latter can be regarded as implicitly interdefinable in logics where one satisfies the Löb formula.} From this perspective, modal pre-stable canonical rules are rather similar to msi pre-stable canonical rules.
Using by now familiar reasoning, it is easy to verify that modal pre-stable canonical rules display the intended refutation conditions. For brevity, let us say that a pre-stable map $h$ satisfies the BDC for $(D^{\square^+}, D^\square)$ if $h$ satisfies the BDC$^{\square^+}$ for $D^{\square^+}$ and the BDC$^\square$ for $D^\square$.
\begin{proposition}
For every finite modal algebra $\mathfrak{A}\in \mathsf{MA}$ and $D^{\square^+}, D^\square\subseteq A$, we have $\mathfrak{A}\nvDash \pscrmod{A}{D}$. \label{md+scr-refutation1}
\end{proposition}
\begin{proposition}
For every modal algebra $\mathfrak{B}\in \mathsf{MA}$ and any modal pre-stable canonical rule $\pscrmod{A}{D}$, we have $\mathfrak{B}\nvDash \pscrmod{A}{D}$ iff there is a pre-stable embedding $h:\mathfrak{A}\to \mathfrak{B}$ satisfying the BDC for $(D^{\square^+}, D^\square)$.\label{md+scr-refutation2}
\end{proposition}
If $\mathfrak{X}$ is any modal space, for any $x, y\in X$ define $R^+xy$ iff $Rxy$ or $x=y$. Let $\mathfrak{X}, \mathfrak{Y}$ be $\mathtt{GL}$-spaces. A map $f:\mathfrak{X}\to \mathfrak{Y}$ is called \emph{pre-stable} if for all $x, y\in X$ we have that $R^+x y$ implies $R^+f(x) f(y)$. If $\mathfrak{d}\subseteq Y$, we say that $f$ satisfies the BDC$^{\square^+}$ for $\mathfrak{d}$ if for all $x\in X$,
\[R^+[ f(x)]\cap \mathfrak{d}\neq\varnothing\Rightarrow f[R^+[x]]\cap \mathfrak{d}\neq\varnothing.\]
Furthermore, we say that $f$ satisfies the BDC$^\square$ for $\mathfrak{d}$ if for all $x\in X$ the following two conditions hold.
\begin{align*}
R[f(x)]\cap \mathfrak{d}\neq\varnothing &\Rightarrow f[R[x]]\cap \mathfrak{d}\neq\varnothing\tag {BDC$^\square$-back}\\
f[R[x]]\cap \mathfrak{d}\neq\varnothing &\Rightarrow R[f(x)]\cap \mathfrak{d}\neq\varnothing.\tag {BDC$^\square$-forth}
\end{align*}
Finally, if $\mathfrak{D}\subseteq\wp(Y)$ we say that $f$ satisfies the BDC$^{\square^+}$ (resp. BDC$^{\square}$) for $\mathfrak{D}$ if it does for every $\mathfrak{d}\in \mathfrak{D}$, and if $\mathfrak{D}^{\square^+}, \mathfrak{D}^\square\subseteq\wp(Y)$ we write that $f$ satisfies the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$ if $f$ satisfies the BDC$^{\square^+}$ for $\mathfrak{D}^{\square^+}$ and the BDC$^{\square}$ for $\mathfrak{D}^\square$. Let $\mathfrak{A}$ be a finite Magari algebra. If $D^{\square^+}\subseteq A$, for every $a\in D^{\square^+}$ set $\mathfrak{d}^{\square^+}_{a}:=-\beta (a)$. If $D^\square\subseteq A$, for every $a\in D^\square$ set $\mathfrak{d}^\square_{a}:=-\beta (a)$. Finally, put $\mathfrak{D}^{\square^+}:=\{\mathfrak{d}^{\square^+}_{a}:a\in D^{\square^+}\}$, $\mathfrak{D}^\square:=\{\mathfrak{d}^\square_{a}:a \in D^\square\}$.
\begin{proposition}
For all $\mathtt{GL}$-spaces $\mathfrak{X}$ and any modal pre-stable canonical rule $\pscrmod{A}{D}$, we have $\mathfrak{X}\nvDash\pscrmod{A}{D}$ iff there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{A}_*$ satisfying the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$.\label{refutspacemd+}
\end{proposition}
As usual, in view of \Cref{refutspacemd+} we write a modal pre-stable canonical rule $\pscrmod{A}{D}$ as $\pscrmod{A_*}{\mathfrak{D}}$ in geometric settings.
We close this section by proving that pre-stable canonical rules axiomatise any rule system in $\mathbf{NExt}(\mathtt{GL_R})$.
\begin{lemma}
For every modal rule $\Gamma/\Delta$ there is a finite set $\Xi$ of modal pre-stable canonical rules of the form $\pscrmod{A}{D}$ with $\mathfrak{A}\in \mathsf{K4}$, such that for any $\mathfrak{B}\in \mathsf{Mag}$ we have $\mathfrak{B}\nvDash \Gamma/\Delta$ iff there is $\pscrmod{A}{D}\in \Xi$ such that $\mathfrak{B}\nvDash \pscrmod{A}{D}$.\label{rewritemd+}
\end{lemma}
\begin{proof}
Since Boolean algebras are locally finite there are, up to isomorphism, only finitely many triples $(\mathfrak{A}, D^{\square^+}, D^\square)$ such that
\begin{itemize}
\item $\mathfrak{A}\in \mathsf{K4}$ and $\mathfrak{A}$ is at most $k$-generated as a Boolean algebra, where $k=|\mathit{Sfor}(\Gamma/\Delta)|$;
\item There is a valuation $V$ on $\mathfrak{A}$ refuting $\Gamma/\Delta$, such that
\begin{align*}
D^{\square^+}&=\{\bar V(\varphi): \square^+\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}\\
D^{\square}&=\{\bar V(\varphi): \square\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}
\end{align*}
\end{itemize}
Let $\Xi$ be the set of all modal pre-stable canonical rules $\pscrmod{A}{D}$ for all such triples $(\mathfrak{A}, D^{\square^+}, D^\square)$, identified up to isomorphism.
$(\Rightarrow)$ Let $\mathfrak{B}\in \mathsf{Mag}$ and suppose $\mathfrak{B}\nvDash \Gamma/\Delta$. Take a valuation $V$ on $\mathfrak{B}$ such that $\mathfrak{B}, V\nvDash \Gamma/\Delta$. As is well-known, there is a transitive filtration $(\mathfrak{A}', V')$ of $(\mathfrak{B}, V)$ through $\mathit{Sfor}(\Gamma/\Delta)$. Then $\mathfrak{A}'\in \mathsf{K4}$. Moreover, clearly every filtration is a weak filtration, hence so is $(\mathfrak{A}', V')$. Therefore there is a Boolean embedding $h:\mathfrak{A}'\to \mathfrak{B}$ satisfying the BDC for $(D^{\square^+}, D^\square)$, where $D^{\square^+}:=\{\bar V'(\varphi): \square^+\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}$ and $D^{\square}:=\{\bar V'(\varphi): \square\varphi\in \mathit{Sfor}(\Gamma/\Delta)\}$. Indeed, it is obvious that $h$ is a Boolean embedding which satisfies the BDC$^{\square}$ for $D^{\square}$. The fact that $h$ satisfies the BDC$^{\square^+}$ follows by noting that, additionally, $\square\varphi\in \mathit{Sfor}(\square^+\varphi)$ for every modal formula $\varphi$. Lastly, since $(\mathfrak{A}', V')$ is actually a filtration, $h$ is stable, a fortiori pre-stable. Hence we have shown that $\pscrmod{A'}{D}\in \Xi$ and $\mathfrak{B}\nvDash \pscrmod{A'}{D}$.
$(\Leftarrow)$ Routine.
\end{proof}
\begin{theorem}
Every modal rule system $\mathtt{M}\in \mathbf{NExt}(\mathtt{GL_R})$ is axiomatisable over $\mathtt{GL_R}$ by some set of modal pre-stable canonical rules of the form $\pscrmod{A}{D}$, where $\mathfrak{A}\in \mathsf{K4}$. \label{axiomGLscr}
\end{theorem}
\subsection{The Kuznetsov-Muravitsky Isomorphism via Stable Canonical Rules}\label{sec:isomorphismgl}
We are ready for the main topic of this section, the Kuznetsov-Muravitsky isomorphism and its extension to rule systems. We apply pre-stable canonical rules to prove this and some related results, using essentially the same techniques seen in \Cref{sec:modalcompanions,sec:tensecompanions}.
\subsubsection{Semantic Mappings}\label{sec:mappingsmsi}
We begin by reviewing the constructions for transforming frontons into corresponding Magari algebras and vice versa. The results in this paragraph are known, and recent proofs can be found in, e.g., \cite{Esakia2006TMHCaCMEotIL}.
\begin{definition}
The mapping $\sigma: \mathsf{Frt}\to \mathsf{Mag}$ assigns to every $\mathfrak{H}\in \mathsf{Frt}$ the algebra $\sigma \mathfrak{H}:=(B(\mathfrak{H}), \square)$, where $B(\mathfrak{H})$ is the free Boolean extension of $\mathfrak{H}$ and for every $a\in B(H)$ we have
\begin{align*}
Ia&:=\bigvee\{b\in H:b\leq a\}\\
\square a&:=\boxtimes Ia.
\end{align*}
\end{definition}
Observe that if $a\in H$ then $Ia=a$, and so $\square a=\boxtimes a$. Consequently, if $a\in H$ also $\square^+ a=\boxtimes^+ a$.
\begin{definition}
The mapping $\rho:\mathsf{Mag}\to \mathsf{Frt}$ assigns to every Magari algebra $\mathfrak{A}\in \mathsf{Mag}$ the algebra $\rho \mathfrak{A}:=(O(A), \land, \lor, \to, \boxtimes, 0, 1)$, where
\begin{align*}
O(A)&:=\{a\in A:\square^+ a=a\}\\
a\to b&:=\square^+ (\neg a\lor b)\\
\boxtimes a&:= \square a
\end{align*}
\end{definition}
By unpacking the definitions just presented it is not difficult to verify that the following Proposition holds.
\begin{proposition}
For every $\mathfrak{H}\in \mathsf{Frt}$ we have $\mathfrak{H}\cong \rho\sigma \mathfrak{H}$. Moreover, for every $\mathfrak{A}\in \mathsf{Mag}$ we have $\sigma \rho\mathfrak{A}\rightarrowtail\mathfrak{A}$.\label{representationFrtMag}
\end{proposition}
We call a Magari algebra $\mathfrak{A}$ \emph{skeletal} if $\sigma \rho\mathfrak{A}\cong\mathfrak{A}$ holds.
We now give more suggestive dual descriptions of the maps $\sigma, \rho$ on $\mathtt{KM}$- and $\mathtt{GL}$-spaces, which also make it easier to verify that $\sigma, \rho$ have the intended ranges.
\begin{definition}
If $\mathfrak{X}=(X, \leq,\sqsubseteq, \mathcal{O})$ is a $\mathtt{KM}$-space we set $\sigma \mathfrak{X}:=(X, R, \mathcal{O})$, where $R=\sqsubseteq$. Let $\mathfrak{Y}:=(Y, R, \mathcal{O})$ be a $\mathtt{GL}$-space. For $x, y\in Y$ write $x\sim y$ iff $Rxy$ and $Ryx$. Define a map $\rho:Y\to \wp (Y)$ by setting $\rho(x)=\{y\in Y:x\sim y\}$. We define $\rho\mathfrak{Y}:=(\rho[Y], \leq_\rho, \sqsubseteq_\rho, \rho[\mathcal{O}])$, where $\rho(x)\sqsubseteq_\rho \rho(y)$ iff $Rxy$, and $\rho(x)\leq_\rho \rho(y)$ iff $\rho(x)\sqsubseteq_\rho \rho(y)$ or $\rho(x)=\rho(y)$.
\end{definition}
\begin{proposition}
The following conditions hold.\label{msimapsdual}
\begin{enumerate}
\item Let $\mathfrak{H}\in \mathsf{Frt}$. Then $(\sigma \mathfrak{H})_*\cong\sigma (\mathfrak{H}_*)$. Consequently, if $\mathfrak{X}$ is a $\mathtt{KM}$-space then $(\sigma \mathfrak{X})^*\cong\sigma (\mathfrak{X}^*)$. \label{msimapsdual1}
\item Let $\mathfrak{X}$ be a $\mathtt{GL}$-space. Then $(\rho\mathfrak{X})^*\cong\rho(\mathfrak{X}^*)$. Consequently, if $\mathfrak{A}\in \mathsf{Mag}$, then $(\rho\mathfrak{A})_*\cong\rho(\mathfrak{A}_*)$. \label{msimapsdual2}
\end{enumerate}
\end{proposition}
\begin{proposition}
For every fronton $\mathfrak{H}\in \mathsf{Frt}$ we have that $\sigma \mathfrak{H}$ is a Magari algebra, and for every Magari algebra $\mathfrak{A}\in \mathsf{Mag}$ we have that $\rho\mathfrak{A}$ is a fronton.
\end{proposition}
\subsubsection{A Gödelian Translation} We now show how to translate msi formulae into modal formulae in a way which suits our current goals. The main idea, already anticipated when developing msi pre-stable canonical rules, is to conceptualise rule systems in $\mathbf{NExt}(\mathtt{GL_R})$ as stated in a signature containing two modal operators $\square, \square^+$, so as to use $\square$ to translate $\boxtimes$ and $\square^+$ to translate $\to$. This leads to the following Gödelian translation function.
\begin{definition}
The Gödelian translation $T:\mathit{Tm}_{msi}\to \mathit{Tm}_{md}$ is defined recursively as follows.
\begin{align*}
T(\bot)&:=\bot\\
T(\top)&:=\top\\
T(p)&:=\square p\\
T(\varphi\land \psi)&:=T(\varphi)\land T(\psi)\\
T(\varphi\lor \psi)&:=T(\varphi)\lor T(\psi)\\
T(\varphi\to \psi)&:=\square^+ (\neg T(\varphi)\lor T(\psi))\\
T(\boxtimes \varphi)&:=\square T(\varphi)
\end{align*}
\end{definition}
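Operationally, $T$ is a plain structural recursion. As an illustration, a minimal Python sketch over a tuple-based encoding of formulae (the encoding is ours, purely for exposition) reads:
\begin{verbatim}
def T(phi):
    # Goedelian translation of msi formulae into modal formulae.
    # Formulae are tuples: ('var', p), ('bot',), ('top',),
    # (op, l, r) for op in {'and', 'or', 'imp'}, and ('boxx', f).
    if phi == ('bot',) or phi == ('top',):
        return phi
    tag = phi[0]
    if tag == 'var':
        return ('box', phi)
    if tag in ('and', 'or'):
        return (tag, T(phi[1]), T(phi[2]))
    if tag == 'imp':
        return ('boxplus', ('or', ('not', T(phi[1])), T(phi[2])))
    if tag == 'boxx':
        return ('box', T(phi[1]))
    raise ValueError('unknown connective')
\end{verbatim}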
The translation $T$ above was originally proposed by \citet{KuznetsovMuravitsky1986OSLAFoPLE}, and is systematically studied in \cite{WolterZakharyaschev1997IMLAFoCBL,WolterZakharyaschev1997OtRbIaCML}. Our presentation contains a revised clause for the case of $T(\boxtimes \varphi)$, which was originally defined as
\[T(\boxtimes \varphi):=\square^+\square T(\varphi).\]
However, it is not difficult to verify that $\mathsf{Mag}\models \square p\leftrightarrow \square^+\square p$, which justifies our revised clause.
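Explicitly, since every Magari algebra validates the transitivity inequality $\square a\leq \square\square a$, for every $a$ we have
\[\square^+\square a=\square\square a\land \square a=\square a,\]
which is the algebraic content of this equivalence.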
As usual, we extend the translation $T$ from terms to rules by setting
\[T(\Gamma/\Delta):=T[\Gamma]/T[\Delta].\]
The following key lemma describes the semantic behaviour of $T(\cdot)$ in terms of the map $\rho$.
\begin{lemma}
For every $\mathfrak{A}\in \mathsf{Mag}$ and msi rule $\Gamma/\Delta$, \label{translationmsiskeleton}
\[\mathfrak{A}\models T(\Gamma/\Delta)\iff \rho\mathfrak{A}\models \Gamma/\Delta\]
\end{lemma}
\begin{proof}
A simple induction on structure shows that for every msi term $\varphi$, every $\mathtt{GL}$-space $\mathfrak{X}$, every valuation $V$ on $\mathfrak{X}$ and every point $x\in X$ we have
\[\mathfrak{X}, V, x\models T(\varphi)\iff \rho\mathfrak{X}, \rho[V], \rho(x)\models\varphi.\]
Using this equivalence and noting that every valuation $V$ on some $\mathtt{KM}$-space $\rho\mathfrak{X}$ can be seen as $\rho[V']$ for some valuation $V'$ on $\mathfrak{X}$, the rest of the proof is easy.
\end{proof}
\subsubsection{The Kuznetsov-Muravitsky Theorem}\label{sec:kmiso}
We are now ready to state and prove the main result of the present section.
Extend the mappings $\sigma:\mathsf{Frt}\to \mathsf{Mag}$ and $\rho:\mathsf{Mag}\to \mathsf{Frt}$ by setting
\begin{align*}
\sigma&:\mathbf{Uni}(\mathsf{Frt})\to \mathbf{Uni}(\mathsf{Mag}) & \rho&:\mathbf{Uni}(\mathsf{Mag})\to \mathbf{Uni}(\mathsf{Frt}) \\
\mathcal{U}&\mapsto \mathsf{Uni}\{\sigma \mathfrak{H}:\mathfrak{H}\in \mathcal{U}\} & \mathcal{W}&\mapsto \{\rho\mathfrak{A}:\mathfrak{A}\in \mathcal{W}\}.
\end{align*}
Now define the following two syntactic counterparts to $\sigma, \rho$ between $\mathbf{NExt}(\mathtt{KM_R})$ and $\mathbf{NExt}(\mathtt{GL_R})$.
\begin{align*}
\sigma&:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R}) & \rho &:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R}) \\
\mathtt{L}&\mapsto\mathtt{GL_R}\oplus\{T(\Gamma/\Delta):\Gamma/\Delta\in \mathtt{L}\} & \mathtt{M}&\mapsto \{\Gamma/\Delta:T(\Gamma/\Delta)\in \mathtt{M}\}
\end{align*}
These maps easily extend to lattices of logics, by setting:
\begin{align*}
\sigma&:\mathbf{NExt}(\mathtt{KM})\to \mathbf{NExt}(\mathtt{GL}) & \rho &:\mathbf{NExt}(\mathtt{GL}) \to \mathbf{NExt}(\mathtt{KM}) \\
\mathtt{L}&\mapsto \mathsf{Taut}(\sigma\mathtt{L_R})=\mathtt{GL}\oplus\{T(\varphi):\varphi\in \mathtt{L}\} & \mathtt{M}&\mapsto \mathsf{Taut}(\rho\mathtt{M_R})=\{\varphi:T(\varphi)\in \mathtt{M}\}
\end{align*}
The goal of this subsection is to establish the following result using pre-stable canonical rules.
\begin{theorem}[Kuznetsov-Muravitsky theorem]
The following conditions hold: \label{KMiso}
\begin{enumerate}
\item $\sigma:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ and $\rho:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R})$ are mutually inverse complete lattice isomorphisms.\label{KMisouni}
\item $\sigma:\mathbf{NExt}(\mathtt{KM})\to \mathbf{NExt}(\mathtt{GL})$ and $\rho:\mathbf{NExt}(\mathtt{GL}) \to \mathbf{NExt}(\mathtt{KM})$ are mutually inverse complete lattice isomorphisms.\label{KMisovar}
\end{enumerate}
\end{theorem}
Similarly to the previous sections, the main difficulty to overcome here consists in showing that $\sigma: \mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ is surjective. We approach this problem by applying our pre-stable canonical rules, following a similar blueprint as that used in the previous sections. The following lemma is a counterpart of \Cref{mainlemma-simod}. Its proof is similar to the latter's, thanks to the similarities existing between $\mathtt{GRZ}$- and $\mathtt{GL}$-spaces.
\begin{lemma}
Let $\mathfrak{A}\in \mathsf{Mag}$. Then for every modal rule $\Gamma/\Delta$ we have $\mathfrak{A}\models\Gamma/\Delta$ iff $\sigma\rho\mathfrak{A}\models \Gamma/\Delta$.\label{mainlemma-msimod}
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This direction follows from the fact that $\sigma\rho\mathfrak{A}\rightarrowtail\mathfrak{A}$ (\Cref{representationFrtMag}).
$(\Leftarrow)$ We prove the dual statement that $\mathfrak{A}_*\nvDash \Gamma/\Delta$ implies $\sigma\rho\mathfrak{A}_*\nvDash \Gamma/\Delta$. Let $\mathfrak{X}:=\mathfrak{A}_*$. In view of \Cref{axiomGLscr} it suffices to consider the case $\Gamma/\Delta=\pscrmod{B}{D}$, for $\mathfrak{B}\in \mathsf{K4}$ finite. So suppose $\mathfrak{X}\nvDash \pscrmod{B}{D}$ and let $\mathfrak{F}:=\mathfrak{B}_*$. Then there is a continuous pre-stable surjection $f:\mathfrak{X}\to \mathfrak{F}$ satisfying the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$. We construct a pre-stable map $g:\sigma\rho\mathfrak{X}\to \mathfrak{F}$ which also satisfies the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$.
Let $C$ be a cluster in $\mathfrak{F}$. Consider $Z_C:=f^{-1}(C)$. As $f$ is continuous, $Z_C$ is clopen. Moreover, since $f$ is pre-stable $Z_C$ does not cut any cluster. It follows that $\rho[Z_C]$ is clopen in $\rho\mathfrak{X}$, because $\rho \mathfrak{X}$ has the quotient topology.
Enumerate $C:=\{x_1, \ldots, x_n\}$. Then $f^{-1}(x_i)\subseteq Z_C$ is clopen. By \Cref{propgl1}, we have that $M_i:=\mathit{max}_R(f^{-1}(x_i))$ is clopen. Furthermore, as every element of $M_i$ is maximal in $M_i$, by \Cref{propgl1} again we have that $M_i$ does not cut any cluster. Therefore $\rho[M_i]$ is clopen, because $\rho\mathfrak{X}$ has the quotient topology. Clearly, $\rho[M_i]\cap \rho[M_j]=\varnothing$ for each $i\neq j$. Therefore there are disjoint clopens $U_1, \ldots, U_n$ with $\rho[M_i]\subseteq U_i$ and $\bigcup_i U_i=\rho[Z_C]$. Just take $U_i:=\rho[M_i]$ if $i\neq n$, and \[U_n:=\rho[Z_C]\smallsetminus\left( \bigcup_{i<n} U_i\right).\]
Now define
\[g_C: \rho[Z_C]\to C\]
\[g_C(z)=x_i\iff z\in U_i\]
Note that $g_C$ is relation preserving, evidently, and continuous by construction. Finally, define $g: \sigma\rho\mathfrak{X}\to \mathfrak{F}$ by setting
\[g(\rho(z)):=\begin{cases}
f(z)&\text{ if } f(z)\text{ does not belong to any proper cluster }\\
g_C(\rho(z))&\text{ if }f(z)\in C\text{ for some proper cluster }C\subseteq F
\end{cases}\]
Now, $g$ is evidently pre-stable. Moreover, it is continuous because both $f$ and each $g_C$ are. Let us check that $g$ satisfies the BDC for $(\mathfrak{D}^{\square^+}, \mathfrak{D}^\square)$.
\begin{itemize}
\item (BDC$^{\square^+}$) This may be shown reasoning the same way as in the proof of \Cref{mainlemma-simod}.
\item (BDC$^\square$-back) Let $\mathfrak{d}\in \mathfrak{D}^\square$ and $\rho(x)\in \rho[X]$. Suppose that $R[g(\rho(x))]\cap \mathfrak{d} \neq\varnothing$. Let $U:=f^{-1}(f(x))$. Then $x\in U$, so by \Cref{propgl1} either $x\in \mathit{max}_{R}(U)$ or there exists $x'\in \mathit{max}_R(U)$ such that $R xx'$. We consider the former case only, the latter is analogous. Since $x\in \mathit{max}_{R}(U)$, by construction we have
$g(\rho(x))=f(x)$. Thus $R[f(x)]\cap \mathfrak{d}\neq\varnothing$. Since $f$ satisfies the BDC for $\mathfrak{d}$, it follows that there is $y\in X$ such that $Rxy$ and $f(y)\in \mathfrak{d}$. As $x\in \mathit{max}_R(U)$ we must have $f(x)\neq f(y)$. Now let $V:=f^{-1}(f(y))$. As $y\in V$, by \Cref{propgl1} either $y\in \mathit{max}_R(V)$ or there exists some $y'\in \mathit{max}_R(V)$ such that $Ryy'$. Wlog, suppose the former. Consequently, $f(y)=g(\rho(y))$. But then we have shown that $R \rho(x)\rho(y)$ and $g(\rho(y))\in \mathfrak{d}$, i.e. $g[R[\rho(x)]]\cap \mathfrak{d}\neq\varnothing$.
\item (BDC$^\square$-forth) Let $\mathfrak{d}\in \mathfrak{D}^\square$ and $\rho(x)\in \rho[X]$. Suppose that $g[R[\rho(x)]]\cap \mathfrak{d}\neq\varnothing$. Observe that $g[R[\rho(x)]]\cap \mathfrak{d}\neq\varnothing$ is equivalent to $R[\rho(x)]\cap g^{-1}(\mathfrak{d})\neq\varnothing$. Therefore there is some $y\in \mathfrak{d}$ such that $R[\rho(x)]\cap g^{-1}(y)\neq\varnothing$. By \Cref{propgl1} there is $z\in \mathit{max}_{R}(g^{-1}(y))$ with $R_\rho \rho(x)\rho(z)$. Observe that since $g$ is pre-stable, $R^+g(\rho(x)) g(\rho(z))$, whence if $g(\rho(x))\neq g(\rho(z))$ in turn $Rg(\rho(x)) g(\rho(z))$ and we are done. So suppose otherwise that $g(\rho(x))= g(\rho(z))$. Distinguish two cases
\begin{itemize}
\item\emph{Case 1}: $y\notin R[y]$. Then $y$ cannot belong to a proper cluster, so by construction $f(x)=g(\rho(x))$ and $f(z)=g(\rho(z))$. From $R \rho(x)\rho(z)$ it follows that $Rxz$, whence $R[x]\cap f^{-1}(\mathfrak{d})\neq\varnothing$. Since $f$ satisfies the BDC$^\square$-forth for $\mathfrak{d}$, there must be some $u\in \mathfrak{d}$ with $R f(x)u$. Then also $Rg(\rho(x)) u$, i.e. $R[g(\rho(x))]\cap \mathfrak{d}\neq\varnothing$ as desired.
\item \emph{Case 2}: $y\in R[y]$. But then $R g(\rho(x)) y$. This shows $R[g(\rho(x))]\cap \mathfrak{d}\neq\varnothing$ as desired.
\end{itemize}
\end{itemize}
\end{proof}
\begin{proposition}
Every universal class $\mathcal{U}\in \mathbf{Uni}(\mathsf{Mag})$ is generated by its skeletal elements, i.e., $\mathcal{U}=\sigma \rho\mathcal{U}$.\label{uniglgeneratedskel}
\end{proposition}
\begin{proof}
Analogous to \Cref{unigrzgeneratedskel}, but applying \Cref{mainlemma-msimod} instead of \Cref{mainlemma-simod}.
\end{proof}
We now apply \Cref{mainlemma-msimod} to characterise the maps $\sigma:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ and $\rho:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R})$ in terms of their semantic counterparts.
\begin{lemma}
For each $\mathtt{L}\in \mathbf{NExt}(\mathtt{KM_R})$ and $\mathtt{M}\in \mathbf{NExt}(\mathtt{GL_R})$, the following hold:\label{msimapscommute}
\begin{align}
\mathsf{Alg}(\sigma\mathtt{L})&=\sigma \mathsf{Alg}(\mathtt{L})\label{msimapscommute1}\\
\mathsf{Alg}(\rho\mathtt{M})&=\rho \mathsf{Alg}(\mathtt{M})\label{msimapscommute2}
\end{align}
\end{lemma}
\begin{proof}
(\ref{msimapscommute1}) By \Cref{uniglgeneratedskel} it suffices to show that $\mathsf{Alg}(\sigma\mathtt{L})$ and $\sigma \mathsf{Alg}(\mathtt{L})$ have the same skeletal elements. So let $\mathfrak{A}=\sigma\rho\mathfrak{A}\in \mathsf{Mag}$. Assume $\mathfrak{A}\in\sigma \mathsf{Alg}(\mathtt{L})$. Since $\sigma \mathsf{Alg}(\mathtt{L})$ is generated by $\{\sigma\mathfrak{B}:\mathfrak{B}\in \mathsf{Alg}(\mathtt{L})\}$ as a universal class, by \Cref{representationFrtMag} and \Cref{translationmsiskeleton} we have $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. But then $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Conversely, assume $\mathfrak{A}\in \mathsf{Alg}(\sigma\mathtt{L})$. Then $\mathfrak{A}\models T(\Gamma/\Delta)$ for every $\Gamma/\Delta\in \mathtt{L}$. By \Cref{translationmsiskeleton} this is equivalent to $\rho\mathfrak{A}\in \mathsf{Alg}(\mathtt{L})$, therefore $\sigma\rho\mathfrak{A}=\mathfrak{A}\in \sigma\mathsf{Alg}(\mathtt{L})$.
(\ref{msimapscommute2}) Let $\mathfrak{H}\in \mathsf{Frt}$. If $\mathfrak{H}\in \rho \mathsf{Alg}(\mathtt{M})$ then $\mathfrak{H}=\rho \mathfrak{A}$ for some $\mathfrak{A}\in \mathsf{Alg}(\mathtt{M})$. It follows that for every rule $T(\Gamma/\Delta)\in \mathtt{M}$ we have $\mathfrak{A}\models T(\Gamma/\Delta)$, and so by \Cref{translationmsiskeleton} in turn $\mathfrak{H}\models\Gamma/\Delta$. Therefore indeed $\mathfrak{H}\in \mathsf{Alg}(\rho\mathtt{M})$. Conversely, for all rules $\Gamma/\Delta$, if $\rho\mathsf{Alg}(\mathtt{M})\models \Gamma/\Delta$ then by \Cref{translationmsiskeleton} $\mathsf{Alg}(\mathtt{M})\models T(\Gamma/\Delta)$, hence $\Gamma/\Delta\in \rho\mathtt{M}$. Thus $\mathsf{ThR}(\rho\mathsf{Alg}(\mathtt{M}))\subseteq \rho\mathtt{M}$, and so $\mathsf{Alg}(\rho\mathtt{M})\subseteq \rho\mathsf{Alg}(\mathtt{M})$.
\end{proof}
We are now ready to prove the main result of this section.
\begingroup
\def\ref{KMiso}{\ref{KMiso}}
\begin{theorem}[Kuznetsov-Muravitsky theorem]
The following conditions hold:
\begin{enumerate}
\item $\sigma:\mathbf{NExt}(\mathtt{KM_R})\to \mathbf{NExt}(\mathtt{GL_R})$ and $\rho:\mathbf{NExt}(\mathtt{GL_R}) \to \mathbf{NExt}(\mathtt{KM_R})$ are mutually inverse complete lattice isomorphisms.
\item $\sigma:\mathbf{NExt}(\mathtt{KM})\to \mathbf{NExt}(\mathtt{GL})$ and $\rho:\mathbf{NExt}(\mathtt{GL}) \to \mathbf{NExt}(\mathtt{KM})$ are mutually inverse complete lattice isomorphisms.
\end{enumerate}
\end{theorem}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}
(\ref{KMisouni}) It suffices to show that the two mappings $\sigma: \mathbf{Uni}(\mathsf{Frt})\to \mathbf{Uni}(\mathsf{Mag})$ and $\rho:\mathbf{Uni}(\mathsf{Mag})\to \mathbf{Uni}(\mathsf{Frt})$ are complete lattice isomorphisms and mutual inverses. Both maps are evidently order preserving, and preservation of infinite joins is an easy consequence of \Cref{translationmsiskeleton}.
Let $\mathcal{U}\in \mathbf{Uni}(\mathsf{Mag})$. Then $\mathcal{U}=\sigma\rho\mathcal{U}$ by \Cref{uniglgeneratedskel}, so $\sigma$ is surjective and a left inverse of $\rho$. Now let $\mathcal{U}\in \mathbf{Uni}(\mathsf{Frt})$. It follows immediately from \Cref{representationFrtMag} that $\rho\sigma \mathcal{U}=\mathcal{U}$. Therefore $\rho$ is surjective and a left inverse of $\sigma$. But then $\sigma$ and $\rho$ are mutual inverses, whence both bijections.
(\ref{KMisovar}) Follows immediately from \Cref{KMisouni} and \Cref{deductivesystemisomorphismmsi}.
\end{proof}
\section{Introduction}\addcontentsline{toc}{section}{Introduction}
\input{introduction}
\section{General Preliminaries}
\input{preliminaries}
\section{Modal Companions of Superintuitionistic Deductive Systems}
\input{intuitionistic_modal}
\section{Tense Companions of Super Bi-intuitionistic Deductive Systems}
\input{biintuitionistic_tense}
\section{The Kuznetsov-Muravitsky Isomorphism for Logics and Rule Systems}
\input{provability.tex}
\addcontentsline{toc}{section}{Conclusions and Future Work}
\section{Conclusions and Future Work}
\input{conclusions.tex}
\addcontentsline{toc}{section}{Bibliography}
\bibliographystyle{default}
The discovery of more than two hundred supermassive black holes (SMBHs) with masses ${\rm \gtrsim\, 10^9\,\mathrm{M_{\odot}}}$ within the first $\sim$ Gyr after the Big Bang~\citep{Fan01,Will10,Mor11,Wu15,Ban18,Mat18,Wan19,She19,Mat19} has challenged our general understanding of black hole growth and formation. How these massive objects formed and grew over cosmic time is one of the biggest puzzles in astrophysics~\citep{Smi19,Ina19,Lat19}. These SMBHs are created from `seed' black holes that grow via gas accretion and mergers. The `seed' black holes fall into two categories: (i) low mass seeds ($\lesssim 10^2\,\mathrm{M_{\odot}}$) and (ii) high mass seeds ($\sim 10^{4-6}\,\mathrm{M_{\odot}}$). These seeds were formed at $z \sim 20-30$~\citep{Ren01}, and then rapidly grew to their final masses by gas accretion and mergers~\citep{Day19,Pac20,Day21}. Low mass seeds are formed from Pop III stellar remnants. However, it is extremely challenging to grow a SMBH of mass $\sim 10^9\,\mathrm{M_{\odot}}$ from a $10^2\,\mathrm{M_{\odot}}$ seed~\citep{Hai01,Hai04,Vol12,Woo19}. A potential solution could be super-Eddington accretion \citep{Vol05,Ale14,Mad14,Vol15,Pac15,Pac17,Beg17,Toy19,Tak20}. However, radiation feedback from the seed black hole itself~\citep{Mil09,Sug18,Reg19} and inefficient gas angular momentum transfer~\citep{Ina18,Sug18} could reduce the accretion flow onto the black hole.
Another solution is to start the growth from a high mass seed~\citep{Bro03}. A possible scenario for the formation of these high mass seeds is the formation of massive black holes via direct collapse \citep{Oh02,Bro03,Beg06,Aga12,Lat13,Dji14,Fer14,Bas19}. A key requirement for this scenario is large inflow rate of $\sim 0.1\,\mathrm{M_{\odot}}\text{yr}^{-1}$ which can be obtained easily in metal free halos~\citep{Aga12,Lat13,Shl16,Reg18,Bec18,Cho18,Aga19,Lat20}. In this scenario supermassive stars (SMSs) of masses $\sim 10^{4-5}\,\,\mathrm{M_{\odot}}$ are formed, which are massive enough to grow to $10^9\,\mathrm{M_{\odot}}$ by $z\sim 7$. These SMSs collapse into seed BHs with minimal mass loss at the end of their lifetime~\citep{Ume16}. These seed BHs are massive enough to grow up to $\sim 10^{9-10}\,\mathrm{M_{\odot}}$ SMBHs observed at $z\sim 7$ via Eddington accretion. The SMSs are often assumed to be formed in primordial halos with virial temperatures $T_{\mathrm{vir}}\sim10^4$ K where the formation of molecular hydrogen is fully suppressed by strong external radiation from nearby galaxies~\citep{Omu01,Sha10,Reg14,Sug14}. The accretion flow into the cloud remains very high ($0.1-1\MSun\mathrm{yr}^{-1}$) due to the high gas temperature~\citep{Lat13,Ina14,Bec15}. The surface of the protostars is substantially inflated due to the high accretion inflow and the effective temperature drops down to a few times $10^3$ K~\citep{Hos12,Hos13,Sch13,Woo17,Hae18}. The accretion flow can then continue without being significantly affected by the radiative feedback, which allows the protostars to grow into SMSs of masses $10^{4-5}\,\mathrm{M_{\odot}}$ within about $\sim 1$ Myr. In order to prevent the molecular ${\rm H_2}$ formation, a very high background UV radiation flux is required \citep{Lat15,Wol17}, which is very rare and optimistic but not impossible~\citep{Vis14,Dji14,Reg17,Reg20}. The formation of ${\rm H_2}$ may lead to fragmentation that would suppress this process, as would the presence of metals via metal line cooling~\citep{Omu08, Dop11, Lat16, Mor18, Cho20}.
However, there are some scenarios in which the high velocity of the baryon gas can yield high accretion rates even in the presence of ${\rm H_2}$ and/or low metallicity. High velocities due to collisions of protogalaxies \citep{Ina15} or due to supersonic streaming motions of baryons relative to dark matter \citep{Hir17} can lead to the required high mass infall rates. Massive nuclear inflows in gas-rich galaxy mergers~\citep{May15} have also been invoked.
\citet{Cho20} have further shown that even in just slightly metal enriched halos ($Z < 10^{-3}\,\,\mathrm{Z_{\odot}}$), where fragmentation takes place, the central massive stars could be fed by the accreting gas and grow into SMSs. \citet{Reg20b} also have shown that SMSs could still be formed in atomic cooling haloes with higher metal enrichment ($Z > 10^{-3}\,\mathrm{Z_{\odot}}$) in the early universe due to inhomogeneous metal distribution. The high infall rate could be obtained by dynamical heating during rapid mass growth of low-mass halos in over-dense regions at high redshifts~\citep{Wis19}. There would still be an angular momentum barrier in all scenarios, though \citet{Sak16} have shown that gravitational instability in a circumstellar disk can solve the angular momentum problem, leads to an episodic accretion scenario \citep{Vor13,Vor15} and is consistent with the maintenance of the required low surface temperature of the accreting SMS. Nevertheless, the DCBH scenario is optimistic in that it relies on low fragmentation \citep{Lat13} and lack of disruptive feedback, e.g. x-ray feedback that could reduce the final mass of the collapsed object~\citep{Ayk14}.
Other scenarios for the formation of SMSs are based on runaway collisions of stars in dense stellar clusters~\citep{Zwa02,Zwa04,Fre07,Fre08,Gle09,Moe11,Lup14,Kat15,Sak17,Boe18,Rei18,Sch19,Gie20,Ali20,Das20,Ver21,Riz21,Ver21}. Both Pop III star clusters~\citep[e.g.][]{Boe18,Rei18,Sch19,Ali20} and nuclear star clusters~\citep[e.g.][]{Kat15,Das20,Nat21} are possible birthplaces of such SMSs. Many galaxies host massive NSCs \citep{Car97,Bok02,leigh12,leigh15,Geo16} in their centre with masses of $\sim 10^{4-8}\,\mathrm{M_{\odot}}$. In many galaxies (typically with masses $10^9\,\mathrm{M_{\odot}} < M < 10^{11}\,\mathrm{M_{\odot}}$), NSCs and SMBHs co-exist \citep{Set08,Gra09,Cor13,Geo16,Ngu19} including galaxies like our own \citep{Sco14}, as well as M31 \citep{Ben05} and M32 \citep{Ngu18}. Studies have also found correlations between both the SMBH mass and the NSC mass with the galaxy mass \citep{Fer06,Ros06,leigh12,sco13,Set20}. In lower-mass galaxies ($M\lesssim 10^{11}\,\mathrm{M_{\odot}}$) the NSC masses are proportional to the stellar mass of the spheroidal component. The most massive galaxies do not
contain NSCs and their galactic nuclei are inhabited by SMBHs. It is therefore natural to explore NSCs as possible birthplaces of SMSs. There are several proposed scenarios for the formation of black hole seeds of masses $\sim 10^{3-5}\,\mathrm{M_{\odot}}$ in NSCs. \citet{Sto17} have proposed that above a critical threshold stellar mass, NSCs can serve as possible sites for the formation of intermediate mass black holes (IMBHs) and/or a SMBH from stellar collisions, which could eventually end up as central BHs via runaway tidal encounters. They have shown that both tidal capture and tidal disruption will favour the growth
of the remnant stellar mass black holes in the NSCs. In their study, they have argued that the stellar mass black holes can grow into an intermediate mass black hole (IMBH) or SMBH via three stages of runaway growth processes. At an early stage, the mass growth is driven by the unbound stars leading to supra-exponential growth. Once the BH reaches a mass
$\sim 100\,\mathrm{M_{\odot}}$, the growth is driven by the feeding of bound stars. In this second stage, the growth of the black hole could be extremely rapid as well. At later times, the growth slows down once the seed IMBH/SMBH consumes the core of its host NSC. This type of runaway growth happens in dense nuclear stellar clusters which have been observed at lower redshifts \citep[e.g.][]{Geo16}. The growth of the BH through tidal
captures/disruption of stars has also been proposed \citep{Ale17,Boe18,Arc19}. Another possible pathway for the formation and growth of massive BH seeds in NSCs is via stellar collisions and gas accretion~\citep{Dev09,Dev10,Dav11,Das20,Nat21}. A seed BH of mass $\sim 10^{4-5}\,\mathrm{M_{\odot}}$ could be formed in this case and grow to a $10^9\,\mathrm{M_{\odot}}$ SMBH at $z\sim 6-7$. \citet{Kin06} and \citet{Kin08} have shown that rapid BH growth is favoured by low values of the spin. Several studies have also proposed that NSCs are likely formed by the mergers of smaller clusters and these merging clusters may already host an IMBH which could be brought to the NSC during the merging event~\citep{Ebi01,Kim04,Zwa06,Dev12,Dav20}. It is also likely that multiple IMBHs are being fed to the NSCs~\citep{Ebi01,Mas14}, which will form binary IMBHs that could merge and emit gravitational waves (GW)~\citep{Tam18,Rass20,Arc19,Wir20}. However, the
GW recoil kick from the merging of the two IMBHs has to be less than the escape speed of the NSC in order to retain the merged IMBH within the NSC~\citep{Ama06,Gur06,Ama07,Arc19}. A recent study by~\citet{Ask20} has shown that the SMBH will be ejected from the NSC by the GW recoil kick if the mass ratio $\gtrsim 0.15$. This might explain why some massive galaxies contain an NSC but not an SMBH, e.g. M33.
\citet{Das20} have shown that SMSs of masses $10^{3-5}\,\mathrm{M_{\odot}}$ could be formed in dense NSCs in low metallicity environments via runaway stellar collisions and gas accretion adopting different accretion scenarios. However, in high metallicity environments the mass loss due to stellar winds will play an important role in the formation and growth of the SMSs in the NSCs. \citet{Gle13} and \citet{Kaa19} have shown that these could significantly change the final mass of the SMS formed via collisions. In this paper, we explore the effect of mass loss due to stellar winds on the final mass of the SMSs produced in nuclear clusters via gas accretion and runaway collisions. We use the same idealised N-body setups as in~\citet{Das20} and include the mass loss due to stellar winds. We adopt the theoretical mass loss recipe given by~\citet{Vin00,Vin01}. This work is an extension of the model presented in~\citet{Das20}.
\section{Methodology}
\subsection{Initial conditions}
To model collisions and accretion in the nuclear clusters consisting of main sequence (MS) stars, we use the Astrophysical MUlti-purpose Software Environment (AMUSE) \citep{Por09,Por13,Pel13,Por18}. It is a simulation framework with component libraries that can be downloaded for free from \url{amusecode.org}. In the AMUSE framework
different codes, e.g. stellar dynamics, stellar evolution, hydrodynamics and
radiative transfer, can be easily coupled. We have modified the code and introduced the mass-radius relation for MS stars, gravitational N-body dynamics, gravitational coupling between the stars and the gas described via an analytic potential, accretion physics, stellar collisions, and mass growth due to accretion and collisions \citep{Das20}.
The cluster is embedded in a stationary natal gas cloud. Initially both the cluster and gas follow a Plummer density distribution
\begin{equation}
\rho(r)=\frac{3M_{cl}}{4\pi b^3}\left( 1 + \frac{r^2}{b^2}\right)^{-\frac{5}{2}}
\end{equation}
\citep{Plu11}, where $M_{\rm cl}$ is the mass of the cluster and $b$ is the Plummer length scale (or Plummer radius) that sets the size of the cluster core. We further assume that both the gas mass ($M_{\rm g}$) and gas radius ($R_{\rm g}$) are equal to the mass ($M_{\rm cl}$) and radius ($R_{\rm cl}$) of the stellar cluster. We introduce a cut-off radius, which is equal to five times the Plummer radius, after which the density is set to zero so that
the cluster remains stable. We consider a Salpeter initial mass function (IMF) \citep{Sal55} for the stars:
\begin{equation}
\xi(m)\Delta m = \xi_0 \left( \frac{m}{\,\mathrm{M_{\odot}}}\right)^{-\alpha}\left( \frac{\Delta m}{\,\mathrm{M_{\odot}}}\right),
\end{equation}
where $\alpha = 2.35$ is the power-law slope of the mass function. We considered a top heavy IMF with mass range $10\, \,\mathrm{M_{\odot}}-100\, \,\mathrm{M_{\odot}}$. The main parameters for our simulations are the cluster mass $M_{\rm cl}$, cluster radius $R_{\rm cl}$, the gas mass $M_{\rm g}$, gas radius $R_{\rm g}$, and the number of stars $N$.
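For concreteness, both distributions can be realised with standard inverse-transform sampling; the following minimal Python sketch (outside the AMUSE framework, with illustrative parameter values) is one way to set up such initial conditions.
\begin{verbatim}
import numpy as np

def sample_salpeter(n, m_min=10.0, m_max=100.0, alpha=2.35, rng=None):
    # Draw n masses [Msun] from a truncated Salpeter IMF, xi(m) ~ m**-alpha,
    # by inverting the cumulative distribution of the power law.
    rng = rng or np.random.default_rng()
    g = 1.0 - alpha
    u = rng.random(n)
    return (m_min**g + u * (m_max**g - m_min**g))**(1.0 / g)

def sample_plummer_radii(n, b=1.0, rng=None):
    # Draw n radii [pc] from a Plummer sphere with scale length b,
    # truncated at the 5*b cut-off, by inverting the enclosed-mass
    # fraction X(r) = r^3 / (r^2 + b^2)^(3/2).
    rng = rng or np.random.default_rng()
    r_cut = 5.0 * b
    x_max = r_cut**3 / (r_cut**2 + b**2)**1.5
    x = rng.random(n) * x_max
    return b / np.sqrt(x**(-2.0 / 3.0) - 1.0)

masses = sample_salpeter(5000)        # N = 5000 stars
radii  = sample_plummer_radii(5000)   # radial positions within 5 b
\end{verbatim}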
In principle, pure N-body codes solve Newton's equations of motion with no free physical parameters. However, they have the capacity to flag special events, e.g. close encounters or binary dynamics. The time-stepping criterion used to integrate the equations of
motion is the only adjustable quantity. We used ph4 \citep[e.g.][Sec. 3.2]{Mcm96}, which is based on a fourth-order Hermite algorithm~\citep{Mak92}, to model the gravitational interactions between the stars. We modeled the gravitational effect of the gas cloud via an analytical background potential which is coupled to the $N$-body code using the BRIDGE method \citep{Fuj07}. This allows us to determine the motions of the stars from the total combined potential of the gas and stars.
\subsection{Stellar properties}
Another key ingredient in our simulations is the mass-radius relation of the MS stars as the size of the stars will play an important role in determining the number of collisions via the collisional cross section. The mass-radius ($M_\ast-R_\ast$) relation of the stars is given by
\begin{eqnarray}
\label{less50}
\frac{R_\ast}{\,\mathrm{R_{\odot}}}&=&1.60\times\left(\frac{M_\ast}{\,\mathrm{M_{\odot}}} \right)^{0.47}\,\,\,\, \mathrm{for}\,\, 10\,\mathrm{M_{\odot}}\lesssim M_\ast<50 \,\mathrm{M_{\odot}},\\
\label{great50}
\frac{R_\ast}{\,\mathrm{R_{\odot}}}&=&0.85\times\left(\frac{M_\ast}{\,\mathrm{M_{\odot}}} \right)^{0.67}\,\,\,\, \mathrm{for}\,\, 50\,\mathrm{M_{\odot}}\lesssim M_\ast,
\end{eqnarray}
where Eq. \ref{less50} is adopted from \citet{Bon84} and Eq. \ref{great50} is adopted from \citet{Dem91}. However, it is important to note that the mass-radius relation of the SMSs is poorly understood. Moreover, stars produced via collisions could have larger radii than stars of similar mass~\citep[e.g.][]{Lom03}. Using smoothed particle hydrodynamics (SPH),~\citet{Suz07} have shown that the collision product of massive stars ($\gtrsim 100\,\mathrm{M_{\odot}}$) could be $10-100$ times larger than the equilibrium radius and hence the collision rate could be sufficiently high to have the next collision before the star settles down to the equilibrium radius.
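In code, the piecewise relation above amounts to a simple helper (a Python sketch; masses in $\,\mathrm{M_{\odot}}$, radii in $\,\mathrm{R_{\odot}}$):
\begin{verbatim}
def stellar_radius(m):
    # Piecewise main-sequence mass-radius relation (see text).
    # Collision products may be substantially inflated relative to this.
    if m < 50.0:
        return 1.60 * m**0.47
    return 0.85 * m**0.67
\end{verbatim}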
The luminosity of the stars is given by
\begin{eqnarray}
\label{less120}
L_\ast&=&1.03\times M_\ast^{3.42}\,\mathrm{L_{\odot}}\,\,\,\, \mathrm{for}\,\, 10\,\mathrm{M_{\odot}}\lesssim M_\ast<120 \,\mathrm{M_{\odot}},\\
\label{great120}
L_\ast&=&f_{\mathrm{Edd}} \times L_{\mathrm{Edd}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \mathrm{for}\,\, 120\,\mathrm{M_{\odot}}\lesssim M_\ast,
\end{eqnarray}
where
\begin{equation}
L_{\mathrm{Edd}}=3.2\times10^4\left(\frac{M_\ast}{\,\mathrm{M_{\odot}}} \right)\,\mathrm{L_{\odot}}\, .
\label{lumEdd}
\end{equation}
Eq. \ref{less120} is adopted from \citet{Dem91}. As we are considering stars that are accreting, one might consider that the accretion luminosity
\begin{equation}
L_{\mathrm{acc}}=\frac{GM_\ast\dot{M}}{R_\ast}
\end{equation}
also contributes to the total luminosity of the stars.
\begin{figure}
\includegraphics[width=\columnwidth]{/lum.pdf}
\caption{Luminosity of stars as a function of mass. The solid lines represent accretion luminosities whereas the dashed lines represent the Eddington luminosity and the luminosity assumed in~\citet{Dem91}, respectively.}
\label{lum}
\end{figure}
In Fig.~\ref{lum} we have plotted the luminosities for the different accretion scenarios (see below), showing that the accretion luminosity is almost always subdominant, except for the cases where we reach the largest stellar masses. The atmospheric temperature of the star is given by the Stefan-Boltzmann law via
\begin{equation}
T_{\mathrm{eff}}^4=\frac{L_\ast}{4 \pi R_\ast^2 \sigma_{\mathrm{SB}}},
\end{equation}
where $\sigma_{SB}$ is the Stefan-Boltzmann constant.
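A compact rendering of these stellar properties (a Python sketch with rounded cgs constants; units as noted in the comments) is:
\begin{verbatim}
import numpy as np

G_CGS    = 6.674e-8                  # gravitational constant [cgs]
SIGMA_SB = 5.670e-5                  # Stefan-Boltzmann constant [cgs]
LSUN, MSUN, RSUN = 3.828e33, 1.989e33, 6.957e10
YR = 3.156e7                         # one year [s]

def luminosity(m, f_edd=0.7):
    # Stellar luminosity [Lsun]: power-law fit below 120 Msun,
    # a fixed fraction of the Eddington luminosity above.
    if m < 120.0:
        return 1.03 * m**3.42
    return f_edd * 3.2e4 * m

def accretion_luminosity(m, mdot, r):
    # L_acc = G * M * Mdot / R, returned in Lsun;
    # m [Msun], mdot [Msun/yr], r [Rsun].
    return G_CGS * (m * MSUN) * (mdot * MSUN / YR) / (r * RSUN) / LSUN

def t_eff(l, r):
    # Effective temperature [K] from the Stefan-Boltzmann law;
    # l [Lsun], r [Rsun].
    return (l * LSUN / (4.0 * np.pi * (r * RSUN)**2 * SIGMA_SB))**0.25
\end{verbatim}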
\subsection{Gas accretion}
The next key ingredient in our simulation is the gas accretion. The protostars formed in the cluster will grow in mass by gas accretion~\citep{Bon98,Kru09,Har16}.~\citet{Das20} have found that gas accretion plays a crucial role in determining the number of collisions and hence the final mass of the most massive object (MMO). In our current accretion prescription, the gas is assumed to be initially at rest; hence, due to momentum conservation, the stars slow down as they accrete gas and gain mass, and fall deeper into the potential well of the cluster. We assume that no new stars are formed and hence the gas is fueled into the cluster with $100\%$ accretion efficiency. The efficiency might be reduced due to protostellar outflows~\citep{Fed15,Off17}, which are not considered here. At each time step the accreted gas mass is subtracted from the total gas mass, and the density keeps being distributed according to the Plummer profile. Hence, the gas depletion in our simulation is uniform. We consider different accretion scenarios in our work, including constant accretion rates of $10^{-4},\,10^{-5}$ and $10^{-6} \,\mathrm{M_{\odot}}\mathrm{yr^{-1}}$, and Eddington accretion given by:
\begin{equation}
\dot{M}_{\mathrm{Edd}}=2.2\times 10^{-8}\left(\frac{M_\ast}{\,\mathrm{M_{\odot}}} \right)\,\,\mathrm{\,\mathrm{M_{\odot}}\, yr^{-1}}\, ,
\label{eddacc}
\end{equation}
as well as Bondi-Hoyle-Lyttleton (hereafter BHL or Bondi-Hoyle) accretion given by Eq. 2 of \citet{Macc12}:
\begin{equation}
\label{mainbondi}
\dot{M}_{\mathrm{BHL}} = 7\times 10^{-9} \left(\frac{M_\ast}{\,\mathrm{M_{\odot}}}\right)^2 \left(\frac{n}{10^{6}\, \mathrm{cm}^{-3}} \right)\left(\frac{\sqrt{c_{\mathrm{s}}^2+v_\infty^2}}{10^6\,\mathrm{cm\ s}^{-1}} \right)^{-3}\mathrm{\,\mathrm{M_{\odot}}\, yr^{-1}}.
\end{equation}
A recent study by \citet{Kaa19} has shown that the average BHL accretion rate of an individual star is given by
\begin{equation}
\label{kaaz}
\langle \dot{M}_{\mathrm{BHL}} \rangle=\left\{
\begin{array}{@{}ll@{}}
\dot{M}_{\mathrm{BHL}}, & \text{when}\ R_{\bot} \gg R_{\mathrm{acc}}, \\
N\times\dot{M}_{\mathrm{BHL}}, & \text{when}\ R_{\bot} \leq R_{\mathrm{acc}},
\end{array}\right.
\end{equation}
where $R_{\bot}=R_{\mathrm{cl}} N^{-1/3}$ is the mean separation between stars and $R_{\mathrm{acc}}=\frac{2GM_\ast}{v_\infty^2}$ is the characteristic accretion radius of a star, i.e. the impact parameter in the BHL theory for which gas can be gravitationally-focused and overcome its angular momentum barrier to reach the star. Here, $v_\infty$ is the relative velocity of the star with respect to the gas. Our adopted BHL accretion rate is given by Eq.~\ref{mainbondi}. We multiply the BHL rate of a single star by $N$ if $R_{\bot} \leq R_{\mathrm{acc}}$ according to Eq.~\ref{kaaz}. We compute the density of the gas and hence the $\dot{M}_{\mathrm{BHL}}$ locally.
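The local BHL rate and the $N$-boost of Eq.~\ref{kaaz} can then be sketched as follows (Python, reusing \texttt{numpy} and the constants from the previous sketch; the local gas density, sound speed and relative velocity are assumed to be evaluated at the position of the star):
\begin{verbatim}
PC = 3.086e18   # parsec [cm]

def bhl_rate(m, n_gas, c_s, v_inf, n_stars, r_cl_pc):
    # Bondi-Hoyle-Lyttleton rate [Msun/yr] for one star of mass m [Msun],
    # with local gas number density n_gas [cm^-3] and velocities in cm/s.
    mdot = (7.0e-9 * m**2 * (n_gas / 1.0e6)
            * (np.sqrt(c_s**2 + v_inf**2) / 1.0e6)**-3)
    r_perp = r_cl_pc * PC * n_stars**(-1.0 / 3.0)  # mean stellar separation
    r_acc  = 2.0 * G_CGS * (m * MSUN) / v_inf**2   # BHL accretion radius
    # Boost by N when the accretion radius exceeds the mean separation:
    return n_stars * mdot if r_perp <= r_acc else mdot
\end{verbatim}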
\subsection{Stellar Collisions}
We adopt the sticky-sphere approximation to model collisions between the main sequence stars \citep{leigh12b, leigh17}, where the two stars are assumed to merge if the distance between their centres is less than the sum of their radii. The two stars are replaced by a single star whose mass is equal to the sum of the masses of the colliding stars and whose radius is determined by the ($M_\ast-R_\ast$) relation described in Eqs. \ref{less50} and \ref{great50}. Conservation of linear momentum is imposed during the collision. However, the mass is not necessarily conserved due to the possible ejection of mass~\citep{Sil02,Dal06,Tra07}. The final mass of the colliding objects could change considerably depending on the fraction of the mass that is lost during the merger~\citep{Ali20,Das20}. This fraction depends on the type of stars that are colliding~\citep{Gle13}.
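In the sticky-sphere approximation a merger then reduces to the following bookkeeping (a sketch; the remnant moves with the centre-of-mass velocity, and the ejected mass fraction \texttt{f\_loss} is an input assumption, set to zero by default):
\begin{verbatim}
def merge_stars(m1, v1, m2, v2, f_loss=0.0):
    # Sticky-sphere merger: remnant mass is the sum of the progenitor
    # masses minus an optional ejected fraction; v1, v2 are 3-vectors,
    # and the remnant radius follows from the mass-radius relation above.
    m_new = (1.0 - f_loss) * (m1 + m2)
    v_new = (m1 * v1 + m2 * v2) / (m1 + m2)
    return m_new, v_new, stellar_radius(m_new)
\end{verbatim}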
\subsection{Mass loss due to stellar winds}
Since the massive stars and the collision products in our simulations become very massive and luminous, mass loss driven by stellar winds plays a key role in their evolution. However, the mass loss of very massive stars is poorly understood both observationally and theoretically. In this work we adopt the theoretical mass loss recipe given by~\citet{Vin00,Vin01}. The mass loss rate is a function of the stellar luminosity $L_\ast$, the stellar mass $M_\ast$, the ratio of the wind terminal velocity to the escape velocity $v_\infty/v_{\mathrm{esc}}$, the effective temperature $T_\mathrm{eff}$, and the metallicity of the stars $Z_\ast$.
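As an illustration, the hot-side fit of this recipe (Eq.~24 of \citealt{Vin01}, valid roughly for $27.5\,\mathrm{kK}<T_\mathrm{eff}<50\,\mathrm{kK}$ with $v_\infty/v_{\mathrm{esc}}\simeq 2.6$) can be coded as follows; the full recipe, which also covers the cool side of the bi-stability jump, is more involved and the reader is referred to the original papers.
\begin{verbatim}
def vink_mass_loss(l, m, t_eff_star, z, v_ratio=2.6):
    # Wind mass-loss rate [Msun/yr], hot side of the bi-stability jump
    # (Vink et al. 2001, Eq. 24); l [Lsun], m [Msun], t_eff_star [K],
    # z in units of Zsun.
    log_mdot = (-6.697
                + 2.194 * np.log10(l / 1.0e5)
                - 1.313 * np.log10(m / 30.0)
                - 1.226 * np.log10(v_ratio / 2.0)
                + 0.933 * np.log10(t_eff_star / 4.0e4)
                - 10.92 * np.log10(t_eff_star / 4.0e4)**2
                + 0.85  * np.log10(z))
    return 10.0**log_mdot
\end{verbatim}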
\begin{figure*}
\includegraphics[width=0.65\columnwidth]{rates/fedd0.5.pdf}
\includegraphics[width=0.65\columnwidth]{rates/fedd0.7.pdf}
\includegraphics[width=0.65\columnwidth]{rates/fedd0.9.pdf}
\caption{Mass loss rate $\dot{m}_\mathrm{loss}$ as a function of stellar mass for different metallicities (solid lines), compared with the accretion rates (dashed lines).}
\label{rates}
\end{figure*}
In Fig.~\ref{rates} we have plotted the mass loss rates for different metallicities. We have also plotted the Eddington and Bondi-Hoyle accretion rates to compare with the mass loss rates (the constant accretion rates are not plotted, as they would simply be horizontal lines). It is interesting to note that for Bondi-Hoyle accretion, the mass loss rate could be comparable to or higher than the accretion rate for masses $\lesssim 200\,\mathrm{M_{\odot}}$ and metallicities $Z=(0.5-1)\,\mathrm{Z_{\odot}}$, whereas the Eddington accretion rate is always lower than the mass loss rate in the same metallicity range for any mass. The Eddington accretion rate could be comparable to or higher than the mass loss rate for $Z\lesssim 0.1\,\mathrm{Z_{\odot}}$. Another key parameter in the mass loss rate is the Eddington factor $f_{\mathrm{Edd}}$, defined through Eqs.~\ref{great120} and \ref{lumEdd}. \citet{Nad05} have shown that $0.54\lesssim f_{\mathrm{Edd}}\lesssim 0.94$ for stellar masses in the range $3\times 10^2\,\mathrm{M_{\odot}}\lesssim M_\ast\lesssim 10^4\,\mathrm{M_{\odot}}$. We adopt a typical value of $f_{\mathrm{Edd}}=0.7$ for the rest of our models. In Fig.~\ref{rates} we show the mass loss rates for different values of $f_{\mathrm{Edd}}$. An important point to note here is that the mass loss recipe in~\citet{Vin01} was computed for $f_{\mathrm{Edd}}<0.5$. \citet{Vin11} have shown that mass loss rates could be significantly higher for stars close to the Eddington limit. In other words, when extrapolating the results to higher Eddington fractions, it is important to note that we might be underestimating the mass loss that actually occurs.
\section{Results}
The main results of our simulations are presented in this section. We adopted the initial conditions with $N=5000$, $M_{\mathrm{cl}}=M_{\mathrm{g}}=1.12\times 10^5 \,\mathrm{M_{\odot}}$, $R_{\mathrm{cl}}=R_{\mathrm{g}}=1$ pc, assuming a Salpeter IMF within a stellar mass interval of $10-100 \,\mathrm{M_{\odot}}$. We have assumed three different values of $f_{\rm Edd}=0.5,0.7,0.9$ and for each $f_{\rm Edd}$, we have studied six different metallicities $Z_\ast=\,\mathrm{Z_{\odot}},\,0.5\,\mathrm{Z_{\odot}},\,0.1\,\mathrm{Z_{\odot}},\,0.05\,\mathrm{Z_{\odot}},\,0.01\,\mathrm{Z_{\odot}},\,0.001\,\mathrm{Z_{\odot}}$.
The evolution of the cluster is similar to the results in~\citet{Das20}. In the initial phase the stars accrete gas and due to momentum conservation the stellar velocity decreases and the stars fall deeper into the potential well of the cluster. During the accretion phase the accretion dominates the mass growth. Once the gas is fully depleted, the stellar collisions take place and drive the mass growth of the SMS. However, some initial collisions might occur which will boost the accretion process, especially for
the Eddington and Bondi-Hoyle accretion scenarios. The evolution of the Lagrangian radii is similar to our previous results in~\citet{Das20}. The $10\%$ Lagrangian radius always decreases, eventually leading to core collapse, while the $50\%$ and $90\%$ Lagrangian radii first decrease and later increase after the core collapse. The timing of the transition depends on the accretion recipe. A similar trend has been seen in previous simulations in the absence of accretion \citep[e.g.][]{leigh14}.
The evolution of the mass of the MMO is shown in Fig.~\ref{metal} for constant accretion rates of $10^{-5}\,\mathrm{M_{\odot}yr^{-1}}$ and $10^{-4}\,\mathrm{M_{\odot}yr^{-1}}$, as well as for the physically-motivated Eddington and Bondi-Hoyle accretion rates. For a constant accretion rate of $10^{-4}\,\mathrm{M_{\odot}yr^{-1}}$, the growth of the MMO is quite rapid due to mergers of collision products. The mass of the MMO reaches $\sim10^4 \,\mathrm{M_{\odot}}$ already after $0.8$ Myr except for $Z_\ast=\,\mathrm{Z_{\odot}}$ for $f_{\rm Edd}=0.7$ and 0.9. The final mass of the MMO depends on $Z_\ast$ and $f_{\rm Edd}$. SMSs of mass $\sim 10^4\,\mathrm{M_{\odot}}$ are formed in all the cases except for $Z_\ast=\,\mathrm{Z_{\odot}}$ for $f_{\rm Edd}=0.7$ and 0.9, where the final mass of the MMO is $\sim 5\times 10^3\,\mathrm{M_{\odot}}$. For the case of a constant accretion rate of $10^{-5}\,\mathrm{M_{\odot}yr^{-1}}$, the growth of the MMO is more gradual. The MMO reaches a mass of $\sim 10^4\,\mathrm{M_{\odot}}$ for $Z_\ast< 0.5\,\mathrm{Z_{\odot}}$ after $\sim 2.5$ Myr, and a final mass of $\sim 2\times 10^4\,\mathrm{M_{\odot}}$ after 5 Myr. For the case of $Z_\ast=0.5\,\mathrm{Z_{\odot}}$, the evolution of the MMO depends on the adopted $f_{\rm Edd}$: the MMO reaches a mass of $\sim 8\times10^3\,\mathrm{M_{\odot}}$, $6\times10^3\,\mathrm{M_{\odot}}$ and $2\times10^3\,\mathrm{M_{\odot}}$ after $\sim 2.5$ Myr for $f_{\rm Edd}=0.5, 0.7$ and 0.9, respectively. The final mass after 5 Myr varies between $\sim 3\times 10^3-10^4 \,\mathrm{M_{\odot}}$. For the case of $Z_\ast=\,\mathrm{Z_{\odot}}$, no SMS could be formed for $f_{\rm Edd}=0.9$. However, for $f_{\rm Edd}=0.5$ and 0.7, significant growth occurs after 3 Myr, and the MMO reaches a mass of $\sim 5\times 10^2\,\mathrm{M_{\odot}}$ and $4\times 10^3\,\mathrm{M_{\odot}}$ for $f_{\rm Edd}=0.7$ and 0.5, respectively, at 5 Myr. The mass loss by winds is stronger at higher metallicity, and so its effect on the final mass of the MMO is more prominent for the lower accretion rate of $10^{-5}\,\mathrm{M_{\odot}yr^{-1}}$ than for $10^{-4}\,\mathrm{M_{\odot}yr^{-1}}$.
Next, we explore the Eddington accretion scenario given by Eq.~\ref{eddacc}. For the case of Eddington accretion, the growth of the MMO occurs after about 3.5 Myr. However, it is important to note that with this recipe the MMO does not grow for $Z_\ast=\mathrm{Z_{\odot}}$ and $0.5\,\mathrm{Z_{\odot}}$. The final mass of the MMO depends strongly on the values of $Z_\ast$ and $f_{\rm Edd}$. For $f_{\rm Edd}=0.5$, an MMO of mass $\sim 10^3\,\mathrm{M_{\odot}}$ is formed for $Z_\ast \lesssim 0.01\,\mathrm{Z_{\odot}}$. For $f_{\rm Edd}=0.7$ and 0.9, an MMO of mass $\sim 10^3\,\mathrm{M_{\odot}}$ is formed for $Z_\ast \lesssim 0.001\,\mathrm{Z_{\odot}}$. Finally, we explore the more extreme case of Bondi-Hoyle accretion given by Eq.~\ref{mainbondi}. The growth of the MMO is very slow during an initial period (whose duration depends on $Z_\ast$ and $f_{\rm Edd}$), after which the growth proceeds in a runaway fashion because $\dot{M}_\mathrm{BH}\propto M_\ast^2$. The timing of the runaway growth depends on when the first collision happens. Similar to the Eddington accretion scenario, there is no growth of the MMO for $Z_\ast=\mathrm{Z_{\odot}}$ and $0.5\,\mathrm{Z_{\odot}}$. The MMO reaches a final mass of $\sim 10^5\,\mathrm{M_{\odot}}$ for $Z_\ast=0.1\,\mathrm{Z_{\odot}}$ or lower. These results can also be understood from the comparison of accretion and mass loss rates in Fig.~\ref{rates}. For a constant accretion rate of $10^{-4}\,\mathrm{M_{\odot}yr^{-1}}$, the accretion rate exceeds the mass loss rate for $M_\ast\lesssim 10^3\,\mathrm{M_{\odot}}$, so the stars have a net gain of mass regardless of the adopted metallicity; as a result, they slow down and move towards the core due to momentum conservation. This leads to a significant number of collisions and the formation of an SMS in the core. However, for the $10^{-5}\,\mathrm{M_{\odot}yr^{-1}}$, Eddington and Bondi-Hoyle scenarios, the accretion rate can be larger or smaller than the mass loss rate depending on the adopted values of $Z_\ast$ and $f_{\rm Edd}$.
\begin{figure*}
\includegraphics[width=0.5\columnwidth]{metal/constant1.e-4f0.5.pdf}
\includegraphics[width=0.5\columnwidth]{metal/constant1.e-5f0.5.pdf}
\includegraphics[width=0.5\columnwidth]{metal/eddingtonf0.5.pdf}
\includegraphics[width=0.5\columnwidth]{metal/bondif0.5.pdf}
\includegraphics[width=0.5\columnwidth]{metal/constantmetal.pdf}
\includegraphics[width=0.5\columnwidth]{metal/constant1.e-5f0.7.pdf}
\includegraphics[width=0.5\columnwidth]{metal/eddingtonf0.7.pdf}
\includegraphics[width=0.5\columnwidth]{metal/bondif0.7.pdf}
\includegraphics[width=0.5\columnwidth]{metal/constant1.e-4f0.9.pdf}
\includegraphics[width=0.5\columnwidth]{metal/constant1.e-5f0.9.pdf}
\includegraphics[width=0.5\columnwidth]{metal/eddingtonf0.9.pdf}
\includegraphics[width=0.5\columnwidth]{metal/bondif0.9.pdf}
\caption{Mass evolution of the MMO for different accretion rates and mass loss rates. Different colors represent different values of $Z_\ast$ as labeled.}
\label{metal}
\end{figure*}
\section{Neglected processes}\label{neglected}
In this paper, we considered the interplay of collisions, physically motivated accretion recipes, and mass loss due to stellar winds. However, there are important processes that were still neglected, and which could have a relevant influence on some of the results.
In the context of stellar winds, we considered only the mass loss, but the winds also deposit kinetic energy into the system. It is therefore important to at least approximately assess its effect.
\begin{figure}
\includegraphics[width=\columnwidth]{/wind.pdf}
\caption{Energy deposited by a single star as a function of mass. Different colors represent different values of $Z_\ast$. The binding energy of the cluster is shown as the black dashed line for comparison.}
\label{deposit}
\end{figure}
In Fig.~\ref{deposit}, we show the energy deposited by a single star for different metallicities as a function of mass. The energy deposition rate of a single star can be estimated as $\dot{E}_{\rm \ast, kin}\sim\dot{M}_{\rm loss}v^2$, where $\dot{M}_{\rm loss}$ is the mass loss rate of the star (computed for $f_{\rm Edd}=0.5$). To estimate the velocity of the winds we use the escape velocity from the stellar surface, $v_{\rm esc}=\sqrt{2GM_\ast/R_\ast}$; the wind velocity should correspond to this velocity within a factor of a few \citep{Vin00,Vin01}. We also show the gravitational binding energy of the cluster, $E_\mathrm{bin} \simeq GM^2/R\sim9\times10^{50}$~erg, as the black dashed line in the same figure for comparison. It is important to note that a single star with a mass of a few times $100\,\mathrm{M_{\odot}}$ will deposit enough energy within 1 Myr to unbind the cluster for metallicities $Z_\ast\gtrsim 0.5\,\mathrm{Z_{\odot}}$. Towards lower metallicities, the energy deposition rate is considerably lower. To avoid unbinding the cluster, one can therefore naively expect the stars to require low metallicities. For a constant accretion rate of $10^{-4}\,\mathrm{M_{\odot}yr^{-1}}$ the SMS forms within the first 1 Myr, so in principle the energy deposited by stellar winds will not prevent the formation of an SMS. However, for an accretion rate of $10^{-5}\,\mathrm{M_{\odot}yr^{-1}}$ the SMS forms at a much later stage, $\gtrsim 2$ Myr, and the energy deposited by the wind from the MMO is enough to unbind the cluster within $\sim 2$ Myr for $Z_\ast\gtrsim 0.1\,\mathrm{Z_{\odot}}$. For the Eddington accretion scenario, the SMS of $\sim 10^3\,\mathrm{M_{\odot}}$ forms at a much later stage, $\sim 4-5$ Myr, and only for $Z_\ast\lesssim 0.01\,\mathrm{Z_{\odot}}$, for which the binding energy of the cluster is greater than the kinetic energy deposited by the star; the deposition of kinetic energy from the star is therefore not expected to be a problem, at least over the run-time of the simulations. For the Bondi-Hoyle scenario, no SMS forms for $Z_\ast\gtrsim 0.5\,\mathrm{Z_{\odot}}$, where we expect the unbinding of the cluster to be much faster. In all other cases, an SMS could form if kinetic energy feedback were not relevant; in reality, however, it will unbind the cluster very quickly ($\lesssim 1$ Myr), since for Bondi-Hoyle accretion the mass of the SMS is $\sim 10^5\,\mathrm{M_{\odot}}$.
To compute the energy deposited by the whole cluster we can assume a simplified cluster with $N=5000$ stars with each star of mass $22\,\mathrm{M_{\odot}}$ (which is the average mass of a star in the cluster with the initial conditions we assumed in our simulations). The velocity of the wind can be estimated as for a single star, which yields a characteristic velocity $v\sim 1100$~km/s. The total kinetic energy deposition rate can then be evaluated as $\dot{E}_{\rm kin}\sim\dot{M}Nv^2\sim 1.2 \times10^{53}$~erg Myr$^{-1}$.
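These order-of-magnitude numbers are straightforward to reproduce. The short Python sketch below recomputes the wind velocity, the cluster binding energy and the cluster-wide deposition rate from the values quoted above; note that the stellar radius ($\sim 7\,\mathrm{R_{\odot}}$ for a $22\,\mathrm{M_{\odot}}$ star) and the per-star mass loss rate ($\sim 10^{-6}\,\mathrm{M_{\odot}yr^{-1}}$) are assumptions chosen here for illustration, consistent with the quoted numbers rather than taken from the simulations.
\begin{verbatim}
# Order-of-magnitude check of the wind kinetic energy budget.
# Assumed inputs (not from the simulations): stellar radius, per-star mdot.
import math

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10    # cgs units
pc, yr, Myr   = 3.086e18, 3.156e7, 3.156e13     # cm, s, s

N, m_star, r_star = 5000, 22.0 * Msun, 7.0 * Rsun
v_wind = math.sqrt(2.0 * G * m_star / r_star)   # ~1.1e8 cm/s = 1100 km/s

M_cl  = N * m_star                   # ~1.1e5 Msun, as in the initial setup
E_bin = G * M_cl**2 / pc             # ~1e51 erg (quoted: ~9e50 erg)

mdot  = 1.0e-6 * Msun / yr           # assumed mass loss rate per star
E_dot = mdot * N * v_wind**2 * Myr   # ~1.2e53 erg per Myr, as quoted

print(f"v_wind = {v_wind/1e5:.0f} km/s")
print(f"E_bin  = {E_bin:.1e} erg,  E_dot = {E_dot:.1e} erg/Myr")
print(f"gas formally unbound after ~{E_bin/E_dot:.3f} Myr")
\end{verbatim}
With these assumed inputs the sketch recovers the quoted deposition rate and makes explicit that it must drop by roughly two orders of magnitude for the gas to stay bound over $\sim 1$~Myr.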
To avoid unbinding the gas within a timescale of $1$~Myr, the energy deposition rate would need to decrease by at least two orders of magnitude. Equation~(21) of \citet{Vin01} suggests that the mass loss rate scales with metallicity as $Z^{0.85}$, implying that a decrease of the metallicity by a factor of $225$ should bring the energy deposition rate into the regime where the gas no longer becomes unbound. Since this estimate is very approximate, we expect the transition where kinetic energy deposition is no longer relevant to occur somewhere in the range $(10^{-2}-10^{-3})\,\mathrm{Z_{\odot}}$. In the regime in between, gas expulsion due to the winds is expected to limit the potential growth of the central massive object, with this effect becoming weaker at lower metallicities. We also note that the gravitational potential energy of the cluster will change with the cluster properties: for a more massive or more compact cluster the binding energy will be higher, and hence the formation of an SMS would be more favourable.
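The quoted factor of $225$ follows directly from this scaling; as a quick check,
\begin{equation*}
\dot{M}_{\rm loss}\propto Z^{0.85}
\quad\Longrightarrow\quad
\left(\tfrac{1}{225}\right)^{0.85}\simeq 10^{-2},
\end{equation*}
so lowering the metallicity by a factor of $225$ suppresses the mass loss rate, and hence (at fixed wind velocity) the kinetic energy deposition rate, by about two orders of magnitude.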
On the $5$~Myr timescale explored in our simulations, supernova feedback is also expected to become relevant. With the typical energy of $10^{51}$~erg for a core collapse supernova, it is clear that one such event will expel the gas and terminate the accretion, if it has not stopped already (either due to gas expulsion by stellar winds or as the accretion process may have depleted the gas).
In future work, it will be important to study detailed gas dynamics where the kinetic energy deposition of winds as well as the supernova feedback is taken into account.
\section{Summary and Discussion}\label{Discussion}
In this work, we explored the effect of mass loss due to stellar winds on the final mass of the SMSs that could be formed via runaway stellar collisions and gas accretion inside NSCs. We find that an SMS of mass $\gtrsim 10^3\,\mathrm{M_{\odot}}$ can be formed even in a high metallicity environment for high accretion rates of $10^{-4}\,\mathrm{M_{\odot}yr^{-1}}$. For an accretion rate of $10^{-5}\,\mathrm{M_{\odot}yr^{-1}}$, the final mass of the SMS is $\sim 10^4\,\mathrm{M_{\odot}}$ for $Z_\ast\lesssim 0.5\,\mathrm{Z_{\odot}}$, whereas for solar metallicity no SMS can be formed for $f_{\rm Edd}=0.9$, and SMSs of masses $\sim 10^{2-3}\,\mathrm{M_{\odot}}$ can be formed for $f_{\rm Edd}=0.7$ and 0.5, respectively. For the case of Eddington accretion it is not possible to form an SMS in the metallicity regime $\gtrsim 0.1\,\mathrm{Z_{\odot}}$. Finally, for the Bondi-Hoyle accretion scenario, we find that the formation of an SMS is not possible in the high metallicity regime of $Z_\ast\gtrsim 0.5\,\mathrm{Z_{\odot}}$.
The interaction of the stellar wind and the gas inside the cluster might play an important role in the evolution of the SMS. The winds from the SMSs have high velocities $\sim 10^3$ km\,$\mathrm{s^{-1}}$~\citep{Mui12}, which might exceed the escape velocity from the centre of our modelled star cluster. The SMS in the cluster is close to the centre which results in a high collision rate near the centre due to a shorter relaxation time in the core and an increased collisional cross section. If the SMS is displaced by collisions, it rapidly sinks back close to the centre via dynamical friction where it may eventually decouple from the remainder of the cluster. This is also known as the Spitzer instability~\citep{Spi69}. Interestingly,~\citet{Krau16} have found that for a Salpeter type mass function the stellar wind cannot remove the gas inside the cluster. Hence, we do not expect the stellar wind to remove gas from the cluster.
One of the main caveats of this work is the neglect of the kinetic energy deposition by stellar winds, which could contribute significantly to expelling the gas. The latter is likely to create a regime where the growth of a massive object is still inhibited even though the mass loss itself from the winds is no longer significant. Below a critical metallicity in the range $(10^{-2}-10^{-3})\,\mathrm{Z_{\odot}}$, this effect is no longer expected to be relevant; however, supernova feedback may still lead to the expulsion of the gas. Another relevant caveat is the extrapolation of the mass loss recipe of~\citet{Vin00,Vin01} beyond $1000\,\mathrm{M_{\odot}}$, where the mass loss is not well constrained. A further uncertainty concerns the mass loss rates of stars close to their Eddington limit: \citet{Vin11} have shown that the mass-loss rate increases strongly for such stars, so we might be underestimating the mass loss rate by assuming the \citet{Vin01} recipe, especially in the high mass regime. We point out that, similar to~\citet{Das20}, this work employs an idealized simulation setup. In real cosmological systems, the gas dynamics could be different and one needs to solve the hydrodynamics equations. In order to study the gas dynamics in detail, we need to incorporate the full hydrodynamics and hence the cooling, which also depends on the chemistry of the gas. Feedback processes due to the stars would also need to be modeled in more detail. Using cosmological zoom-in simulations,~\citet{Li17} have found that accretion might be regulated by stellar feedback processes. The main goal of this work was to build a simplified model that allows us to study the evolution over a large part of the parameter space for a long timescale of a few Myr. For future work, it will be important to explore more realistic accretion scenarios and their interaction with the mass loss process, as well as the mass loss in the high stellar mass range.
\section*{Acknowledgements}
We thank the anonymous referee for constructive comments on the manuscript. This work received funding from the Mitacs Globalink Research Award, the Western University Science International Engagement Fund, the Millennium Nucleus NCN19$\_$058 (TITANs) and BASAL Centro de Excelencia en Astrofisica y Tecnologias Afines (CATA) grant PFB-06/2007. This research was made possible by the computational facilities provided by the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca) and Compute Canada (www.computecanada.ca).
This project was supported by funds from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program under grant agreement No 638435 (GalNUC), and a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
Wealth is accumulated in various forms and factors. The continual exchange of wealth
(a value assigned) among the agents in an economy gives rise to interesting and many often
universal statistical distributions of individual wealth. Here the word `wealth' is used
in a general sense for the purpose and the spirit of this review (in spite of the
separate meanings attached to the terms `money', `wealth' and `income').
Econophysics of wealth distributions \cite{AKG:eco} is an emerging area where mainly the
ideas and techniques of statistical physics are used in interpreting real economic data of
wealth (available mostly in terms of income) of all kinds of people or other entities ({\em e.g.},
companies) for that matter, pertaining to different societies and nations.
Literature in this area is growing steadily (see an excellent website \cite{AKG:ecoforum}).
The prevalence of income data and apparent interconnections of many socio-economic problems
with physics have inspired a wide variety of statistical models, data analysis and other
works in econophysics \cite{AKG:stanley}, sociophysics and other emerging areas \cite{AKG:stauffer}
over the last decade or more (see an enjoyable article by Stauffer \cite{AKG:outside}).
The simple approach of {\it agent based models}
has been able to bring out all kinds of wealth distributions, opening up a whole new way of
understanding and interpreting empirical data.
One of the most important and controversial issues has been to understand the emergence of
{\em Pareto's law}:
\begin{equation}\label{akg:eqn:pareto}
P(w) \propto w^{-\alpha},
\end{equation}
\noindent
where $w\ge w_0$, $w_0$ being some value of wealth beyond which the power law
is observed (usually towards the tail of the distributions). Pareto's law has been observed
in income distributions among the people of almost all kinds of social systems across the
world in a robust way. This phenomenon has now been known for more than a century and has been
discussed at great length in innumerable works in economics, econophysics, sociophysics and
physics dealing with power law distributions.
In many places, while mentioning
{\it Pareto's law}, the power law is often written
in the form: $P(w) \propto w^{-(1+\nu)}$, where $\nu$ is referred to as `Pareto index'.
This index is usually found between 1 and 2 from empirical data fitting.
Power laws in distributions appear in many other cases \cite{AKG:lognormal, AKG:newman, AKG:reed} like that of
computer file sizes, the growth of sizes of business firms and cities {\em etc}.
Distributions are often referred to as `heavy tailed' or `fat tailed'
distributions \cite{AKG:mandelbrot}.
The smaller the value of $\alpha$, the fatter the tail of the distribution, as may easily be
understood (the distribution is more spread out).
Some early attempts \cite{AKG:early} have been made to understand income distributions
which follow Pareto's law at the tail. Some of them employ stochastic
logistic equations, or related generalized versions thereof, which are able to
generate power laws. However, the absence of direct interactions of one agent with any other
often makes such models less significant for interpreting real data.
Some part of this review is centered around the concept of emergence of Pareto's law
in the wealth distributions, especially in the context of the models that are discussed here.
However, a word of caution is in order.
In the current literature, as well as historically, the power law distribution has often
been disputed in favour of the closely related lognormal distribution \cite{AKG:lognormal}.
It is often not easy to distinguish between the two. Thus a brief discussion is made here on
this issue. Let us consider the probability density function of a lognormal distribution:
\begin{equation}\label{akg:eqn:lognormal-1}
p(w)={1\over \sqrt{2\pi}\sigma w}\exp[-(\ln w -\overline w)^2/{2\sigma^2}].
\end{equation}
\noindent
The logarithm of the above can be written as:
\begin{equation}\label{akg:eqn:lognormal-2}
\ln p(w)=-\ln w-\ln\sqrt{2\pi}\sigma-{(\ln w-\overline w)^2\over 2\sigma^2}.
\end{equation}
\noindent
If now the variance $\sigma^2$ in the lognormal distribution is large enough, the last term
on the right hand side can be very small so that the distribution may appear linear on a
log-log plot. Thus the cause of concern remains, particularly when one deals with real data.
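This ambiguity is easy to demonstrate numerically. The following minimal Python sketch (assuming NumPy and Matplotlib are available; the parameter values are purely illustrative) plots a broad lognormal density on logarithmic axes, where it runs nearly straight over several decades and can be mistaken for a power law:
\begin{verbatim}
# A broad lognormal can masquerade as a power law on a log-log plot.
import numpy as np
import matplotlib.pyplot as plt

w = np.logspace(-1, 4, 200)
wbar, sigma = 0.0, 5.0      # large sigma: the quadratic term stays small
p = np.exp(-(np.log(w) - wbar)**2 / (2 * sigma**2)) \
    / (np.sqrt(2 * np.pi) * sigma * w)

plt.loglog(w, p, label='lognormal, sigma = 5')
plt.loglog(w, 0.05 / w, '--', label='w^{-1} guide line')
plt.xlabel('w'); plt.ylabel('p(w)'); plt.legend(); plt.show()
\end{verbatim}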
In the literature, sometimes one calculates a {\it cumulative distribution} function
(to show the power law in a more convincing way) instead of plotting
ordinary distribution from simple histogram (probability density function).
The cumulative probability distribution function $P(\ge w)$ gives the probability that the
argument takes a value greater than or equal to $w$:
\begin{equation}\label{akg:eqn:discum-1}
P(\ge w) = \int_w^{\infty} P(w^{\prime})dw^{\prime}.
\end{equation}
\noindent
If the distribution of data follows a power law $P(w)=Cw^{-\alpha}$, then
\begin{equation}\label{akg:eqn:discum-2}
P(\ge w) = C\int_w^{\infty} {w^{\prime}}^{-\alpha}dw^{\prime}={C\over \alpha-1}w^{-(\alpha-1)}.
\end{equation}
\noindent
When the ordinary distribution (found from just histogram and binning) is a
power law, the cumulative distribution thus also follows a power law with the exponent 1 less:
$\alpha-1$, which can be seen from a log-log plot of data. An extensive discussion on power laws
and related matters can be found in \cite{AKG:newman}.
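In practice the cumulative distribution can be estimated directly from a sample with no binning at all; a minimal sketch (Python with NumPy; the Pareto sample is synthetic and serves only as a self-test):
\begin{verbatim}
# Empirical complementary CDF P(>= w) from a sample, without binning.
import numpy as np

def ccdf(samples):
    w = np.sort(samples)
    p = 1.0 - np.arange(len(w)) / len(w)   # fraction of data >= each w
    return w, p

# Self-test: P(w) ~ w^-2 for w >= 1, so P(>= w) should fall off as w^-1.
rng = np.random.default_rng(0)
w, p = ccdf(rng.pareto(1.0, 100000) + 1.0)
\end{verbatim}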
Besides power laws, a wide variety of wealth distributions from exponential to
something like Gamma distributions are all reported in recent literature in econophysics.
Exchange of wealth is considered to be a primary mechanism behind all such distributions.
In a class of wealth exchange models \cite{AKG:chak1, AKG:chak2, AKG:yako1, AKG:yako2}
that follow, the economic activities among agents have been assumed
to be analogous to random elastic collisions among molecules as considered in kinetic gas
theory in statistical physics.
Analogy is drawn between wealth ($w$) and energy ($E$), where
the average individual wealth ($\overline w$) at equilibrium is equivalent to temperature ($T$).
Wealth ($w$) is assumed to be exchanged between two randomly selected economic agents
like the exchange of energy between a pair of molecules in kinetic gas theory.
The interaction is such that one agent wins and the other loses the
same amount so that the sum of their wealth remains constant before and after an
interaction (trading): $w_i(t+1)+w_j(t+1)=w_i(t)+w_j(t)$; each trading increases time $t$ by one unit.
Therefore, it is basically a process of zero sum exchange between a pair
of agents; amount won by one agent is equal to the amount lost by another.
This way wealth is assumed to be redistributed among a fixed number
of agents ($N$) and the local conservation ensures the total wealth ($W = \sum w_i$) of all the
agents to remain conserved.
Random exchange of wealth between a randomly selected pair of agents may be viewed as
a {\it gambling process} (with zero sum exchange) which leads to a Boltzmann-Gibbs type
exponential distribution in individual wealth, $P(w) \propto \exp(-w/{\overline w})$.
However, a little variation in the mode of wealth exchange can lead to a distribution
distinctly different from exponential. A number of agent based conserved models
\cite{AKG:chak2,AKG:saving1,AKG:saving2,AKG:models,AKG:chak3,AKG:akg2,AKG:prefer,AKG:sita}, invoked in recent times, are essentially
variants of a gambling process. A wide variety of distributions evolve out of these models.
There has been a renewed interest in such two-agent exchange models
in the present scenario while dealing with various problems in social systems involving
complex interactions.
A good insight can be drawn by looking at the
$2\times 2$ transition matrices associated with the process of wealth
exchange \cite{AKG:akg1}.
In this review, the aim would be to arrive at some understanding of
how wealth exchange processes in a simple working way may lead to a variety of
distributions within the framework of the conserved models.
A fixed number of $N$ agents in a system are allowed to
interact (trade) stochastically and thus wealth is exchanged between them.
The basic steps of such a wealth exchange model are as follows:
\begin{equation} \label{akg:eqn:basic}
w_i(t+1)=w_i(t)+\Delta w,
\end{equation}
\begin{equation*}
w_j(t+1)=w_j(t)-\Delta w,
\end{equation*}
\noindent
where $w_i(t)$ and $w_j(t)$ are wealths of $i$-th and $j$-th agents at time $t$ and
$w_i(t+1)$ and $w_j(t+1)$ are that at the next time step ($t+1$).
The amount $\Delta w$ (to be won or to be lost by an agent) is determined by the nature of
interaction.
If the agents are allowed to interact for a long enough time, a steady state equilibrium
distribution for individual wealth is achieved.
The equilibrium distribution does not depend on the initial configuration (initial
distribution of wealth among the agents).
A single interaction between a randomly chosen pair of
agents is referred here as one `time step'. In some simulations, $N$ such interactions
are considered as one time step. This, however, does not matter as long as the system is
evolved through enough time steps to come to a steady state and then data is collected for
making equilibrium probability distributions.
For all the numerical results presented here, data have been produced following the available models,
conjectures and conclusions. Systems of $N=1000$ agents have been considered in each case. In each
numerical investigation, the system is allowed to equilibrate for a sufficient time
that ranges between $10^5$ and $10^8$ time steps.
Configuration averaging
has been done over $10^3$ to $10^5$ initial configurations in most cases.
The average wealth (averaged over the agents) is kept fixed at $\overline w=1$
(by taking total wealth, $W=N$) for all
the cases. The wealth distributions, that are dealt here in this review,
are ordinary distributions (probability density function) and not the cumulative ones.
\section{Pure gambling}\label{akg:sec:pure}
In a pure gambling process (as in usual kinetic gas theory), the entire sum of the wealths of two agents is up
for gambling. Some random fraction of this sum is shared by one agent and the rest goes to
the other. The randomness or stochasticity is introduced into the model through a
parameter $\epsilon$ which is a random number drawn from a uniform distribution in [0, 1].
(Note that $\epsilon$ does not depend on the pair of agents; {\em i.e.}, a pair of
agents is not likely to share the same fraction of their aggregate wealth in the same way when they
interact repeatedly.)
The interaction can be seen through:
\begin{equation}\label{akg:eqn:gamble}
w_i(t+1)=\epsilon[w_i(t)+w_j(t)],
\end{equation}
\begin{equation*}
w_j(t+1)=(1-\epsilon)[w_i(t)+w_j(t)],
\end{equation*}
\noindent
where the pair of agents (indicated by $i$ and $j$) are chosen randomly. The amount of wealth that is
exchanged is $\Delta w = \epsilon[w_i(t) + w_j(t)] - w_i(t)$.
The individual wealth distribution ($P(w)$ vs. $w$) at equilibrium emerges as a
Boltzmann-Gibbs like exponential distribution.
Exponential distribution of personal income has in fact been shown to appear in real data
\cite{AKG:yako1, AKG:yako2}. In the kinetic theory model, the exponential distribution is found by
standard formulation of master equation or by entropy maximization method, the latter has been
discussed later in brief in section \ref{akg:sec:ineq}.
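A minimal Monte Carlo realization of this pure gambling process is sketched below in Python (NumPy assumed; the parameters mirror those stated in the introduction). It is a sketch of the update rule only, not the production code behind the figures:
\begin{verbatim}
# Pure gambling: the pooled wealth of a random pair is randomly re-split.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
w = np.ones(N)              # total wealth W = N, so mean wealth wbar = 1

for _ in range(10**6):      # many pairwise trades to reach a steady state
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    eps = rng.random()      # fresh random fraction for every trade
    total = w[i] + w[j]
    w[i], w[j] = eps * total, (1.0 - eps) * total

# The histogram of w now follows P(w) ~ exp(-w/wbar) with wbar = 1.
\end{verbatim}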
A normalized exponential distribution obtained numerically out of this pure gambling process is
shown in Fig.~\ref{akg:fig:dist_expo} in semi logarithmic plot.
The high end of the distribution appears noisy due to sampling of data. The successive
bins on the right hand side of the graph contain less and less number of samples in them
so the fractional counts in them are seen to fluctuate more (a finite size effect). One way to get rid of this
sampling error to a great extent is to use logarithmic binning \cite{AKG:newman}.
Here it is not important to do so as the idea is to show the nature of the curve only.
(In the case of power law distribution, an even better way to demonstrate and extract the
power law exponent is to plot the cumulative distribution as discussed already.)
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig1.eps}
\caption{Distribution of wealth for the case of Pure Gambling: the linearity in the
semi-log plot indicates exponential distribution.\label{akg:fig:dist_expo}}
\end{figure}
If one takes the time average of the wealth of a single agent over a sufficiently long time, it
comes out to be equal for all the agents. Therefore, the distribution of
individual {\it time averaged wealth turns out to be a delta function}, which is checked
from numerical data. This is because the fluctuation of wealth of any agent over time
is statistically no different from that of any other. The same is true in case of the
distribution of wealth of a single agent over time. However, when the average of wealth of
any agent is
calculated over a short time period, the delta function broadens and its right end
decays exponentially. The distribution of individual wealth at a certain time turns out to
be purely exponential as mentioned earlier. This may be thought of as a `snap shot'
distribution.
\section{Uniform saving propensity}\label{akg:sec:fixlam}
Instead of random sharing of their aggregate wealth during each interaction, if the agents
decide to save (keep aside) a uniform (and fixed) fraction ($\lambda$) of their current
individual wealth, then the wealth exchange equations look like the following:
\begin{equation}\label{akg:eqn:eqsave}
w_i(t+1)=\lambda w_i(t)+\epsilon(1-\lambda)[w_i(t)+w_j(t)],
\end{equation}
\begin{equation*}
w_j(t+1)=\lambda w_j(t)+(1-\epsilon)(1-\lambda)[w_i(t)+w_j(t)],
\end{equation*}
\noindent
where the amount of wealth that is exchanged is
$\Delta w = (1-\lambda)[(\epsilon-1)w_i(t)+\epsilon w_j(t)]$.
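Relative to the pure gambling sketch of the previous section, only the trade step changes; a hedged one-function version in Python (the function name is ours, for illustration):
\begin{verbatim}
# One trade with uniform saving propensity lam: each agent first sets
# aside a fraction lam of current wealth; only the remainder is pooled.
def trade_uniform_saving(w, i, j, lam, rng):
    pool = (1.0 - lam) * (w[i] + w[j])   # the amount actually at stake
    eps = rng.random()
    w[i] = lam * w[i] + eps * pool
    w[j] = lam * w[j] + (1.0 - eps) * pool
\end{verbatim}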
The concept of saving, as introduced by Chakrabarti and coworkers \cite{AKG:chak2} into otherwise
gambling-like interactions, brings out distinctly different distributions.
A number of numerical works followed \cite{AKG:gamma, AKG:gamma-support, AKG:sudha}
in order to understand the emerging distributions to some extent.
Saving induces accumulation of wealth. Therefore, it is expected that the probability of finding
agents with zero wealth may be zero unlike in the previous case of pure gambling
where due to the unguarded nature of exchange many agents are likely to go nearly bankrupt!
(It is to be noted that for an exponential distribution, the peak is at zero.)
In this case the most probable value of the distribution (peak) is somewhere else than at
zero (the distribution is right skewed).
The right end, however, decays exponentially for large values of $w$.
It has been claimed through heuristic arguments (based on numerical results) that the distribution
is a close approximate form of the Gamma distribution \cite{AKG:gamma}:
\begin{equation}\label{akg:eqn:gamma-1}
P(w) = {n^n\over \Gamma(n)}w^{n-1}e^{-nw},
\end{equation}
\noindent
where the Gamma function $\Gamma(n)$ and the index $n$ are understood to be related to the
saving propensity parameter $\lambda$ through the following relation:
\begin{equation}\label{akg:eqn:gamma-2}
n = 1 + {3\lambda\over 1-\lambda}.
\end{equation}
\noindent
The emergence of probable Gamma distribution is also subsequently supported through numerical
results in \cite{AKG:gamma-support}. However, it has later been
shown in \cite{AKG:moments}, by considering the equations for the moments, that moments up to third order agree
with those obtained from the above form of the distribution, subject to the condition stated in
eqn.~(\ref{akg:eqn:gamma-2}). Discrepancies start showing up only from the fourth
order onwards. Therefore, the actual form of the distribution still remains an open question.
In Fig.~\ref{akg:fig:dist_fixlam}, two distributions are shown for two different values of
saving propensity factor: $\lambda=0.4$ and $\lambda=0.8$. The smaller the value of $\lambda$,
the less one is able to save. This in turn means more wealth is available in
the market for gambling. In the limit of zero saving ($\lambda=0$) the model reduces to that
of pure gambling. In the opposite limit of large saving, only a small amount of wealth
is up for gambling. Then the exchange of wealth will not be able to drastically change the
amount of individual wealth. This means the width of distribution of individual wealth
will be narrow. In the limit of $\lambda=1$, all the agents save all of their wealth and thus
the distribution never evolves.
The concept of `saving' here is of course a little different from that in real life where
people do save some amount to be kept in a bank or so and
the investment (or business or gambling) is done generally not with the entire
amount (or a fraction) of wealth that one holds at a time.
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig2.eps}
\caption{Wealth distribution for the model of uniform and fixed saving propensity.
Two distributions are shown with $\lambda=0.4$ and $\lambda=0.8$ where the stochasticity
parameter $\epsilon$ is drawn randomly and uniformly in [0, 1]. Another distribution is
plotted with $\lambda = 0.8$ but with fixed value of the stochasticity parameter,
$\epsilon = 1$.
\label{akg:fig:dist_fixlam}}
\end{figure}
Stochastic evolution of individual wealth is also examined without the inclusion of
the stochastic parameter $\epsilon$. The stochasticity seems to be automatically
introduced anyhow through the random selection of a pair of agents (and the random choice
of the winner or loser as well) at each time. Therefore, it is
interesting to see how the distributions evolve with a fixed value of $\epsilon$.
As an example, the equations in (\ref{akg:eqn:eqsave}) reduce to the following with
$\epsilon = 1$:
\begin{equation} \label{akg:eqn:eqsave_reduce}
w_i(t+1)=w_i(t)+(1-\lambda)w_j(t),
\end{equation}
\begin{equation*}
w_j(t+1)=\lambda w_j(t).
\end{equation*}
\noindent
The above equations indicate that the randomly selected agent $j$
keeps (saves) an amount $\lambda w_j(t)$ which is proportional to the wealth he currently has
and transfers the rest to the other agent $i$. This is indeed a stochastic process and
is able to produce Gamma type distributions in wealth as observed. However, a distribution with
random $\epsilon$ and that with a fixed $\epsilon$ are different. Numerically, it
has been observed that the distribution with $\lambda = 0.8$ and with
$\epsilon = 1$ is very close to that with $\lambda = 0.5$ and with random $\epsilon$.
In Fig.~\ref{akg:fig:dist_fixlam} the distribution with fixed $\lambda = 0.8$ and
fixed $\epsilon = 1$ is plotted along with other two distributions with random $\epsilon$.
It should also be noted that with fixed $\epsilon$ one does not get Gamma type
distributions
for all values of $\lambda$; especially for low values of $\lambda$ the observed distributions
become close to exponential. This is not clearly understood though.
It has recently been brought to notice in \cite{AKG:lux} that a very similar kind of agent
based model was proposed by Angle \cite{AKG:angle} (see other references cited in \cite{AKG:lux}) in
sociological journals quite some years ago. The pair of equations in Angle's model are as
follows:
\begin{equation}\label{akg:eqn:angle}
w_i(t+1)=w_i(t)+D_t\omega w_j(t)-(1-D_t)\omega w_i(t),
\end{equation}
\begin{equation*}
w_j(t+1)=w_j(t)+(1-D_t)\omega w_i(t)-D_t\omega w_j(t),
\end{equation*}
\noindent
where $\omega$ is a fixed fraction and the winner is decided through a random
toss $D_t$ which takes a value either 0 or 1.
Now, the above can be seen as the more formal way of writing the pair of equations
(\ref{akg:eqn:eqsave_reduce}) which can be arrived at by choosing $D_t=1$ and
identifying $\omega=(1-\lambda)$.
It can in general be said, within the framework of this kind of (conserved) models,
different ways of incorporating wealth exchange processes may lead to drastically different
distributions. If the gamble is played
in a {\it biased way}, then this may lead to a distinctly different situation than the
case when it is played in a normal unbiased manner.
Since in this class
of models negative wealth or debt is not allowed, it is desirable that in each wealth
exchange, the maximum that any agent may invest is the amount that he has at that time.
Suppose, the norm is set for an `equal amount invest' where the amount to be deposited
by an agent for gambling is decided by the amount the poorer agent can afford and consequently
the same amount is agreed to be deposited by the richer agent. Let us suppose $w_i > w_j$.
Now, the poorer agent ($j$) may invest a certain fraction of his wealth, an
amount $\lambda w_j$ and the rest $(1-\lambda)w_j$ is saved by him. Then the total
amount $2\lambda w_j$ is up for gambling and as usual a fraction of
this, $2\epsilon\lambda w_j$ may be shared by the richer agent $i$ where the
rest $2(1-\epsilon)\lambda w_j$ goes to the poorer agent $j$. This may appear fine; however,
it leads to the `rich gets richer and poor gets poorer' way.
The richer agent draws more and more wealth in his favour in the successive encounters
and the poorer agents are only able to save less and less and finally there is a
condensation of wealth at the hands of the richest person.
This is more apparent when one considers an agent with $\lambda = 1$ where it can be
easily checked that the richer agent automatically saves an amount equal to the difference
of their wealth ($w_i-w_j$) and the poorer agent ends up saving zero amount. Eventually,
the poorer agents go extinct. This is the `minimum exchange model' \cite{AKG:sita}.
\section{Distributed saving propensity}\label{akg:sec:ranlam}
The distributions turn out to be dramatically
different when the saving propensity factor ($\lambda$)
is drawn from a uniform and random distribution in [0,1] as introduced in a model proposed
by Chatterjee, Chakrabarti and Manna \cite{AKG:saving1}. Randomness in $\lambda$ is assumed to be
quenched ({\em i.e.}, remains unaltered in time). Agents are indeed heterogeneous.
They are likely to have different (characteristic) saving propensities.
The pair of wealth exchange equations are now written as:
\begin{equation}\label{akg:eqn:ransave}
w_i(t+1)=\lambda_i w_i(t)+\epsilon[(1-\lambda_i)w_i(t)+(1-\lambda_j)w_j(t)],
\end{equation}
\begin{equation*}
w_j(t+1)=\lambda_j w_j(t)+(1-\epsilon)[(1-\lambda_i)w_i(t)+(1-\lambda_j)w_j(t)].
\end{equation*}
\noindent
A power law with
exponent $\alpha =2$ (Pareto index $\nu=1$) is observed at the right end of the wealth
distribution for several decades. Such a distribution is
plotted in Fig.~\ref{akg:fig:dist_ranlam} where a straight line is drawn in the
log-log plot with slope = -2 to illustrate the power law and the exponent. Extensive
numerical results with different distributions in the saving propensity
parameter $\lambda$ are reported in \cite{AKG:chak3}. Power law (with exponent $\alpha = 2$) is found to be robust.
The value of Pareto index obtained here ($\nu =1$), however,
differs from what is generally extracted (1.5 or above) from most of the empirical data of
income distributions (see discussions and analysis on real data by various authors in
\cite{AKG:eco}). The present model is not able to resolve this discrepancy and it is not
expected at the outset either. Attempts have been made to justify a larger value of the
exponent $\nu$ by introducing random waiting times in the interactions of
agents \cite{AKG:richmond}.
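In simulation the model differs from the previous ones only in that each agent carries its own quenched $\lambda_i$, drawn once at the start and never redrawn; a minimal Python sketch (NumPy assumed):
\begin{verbatim}
# Distributed saving: quenched, per-agent saving propensities lam[i].
import numpy as np

rng = np.random.default_rng(2)
N = 1000
w = np.ones(N)
lam = rng.random(N)         # quenched: drawn once, fixed for the run

for _ in range(10**6):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    eps = rng.random()
    pool = (1.0 - lam[i]) * w[i] + (1.0 - lam[j]) * w[j]
    w[i] = lam[i] * w[i] + eps * pool
    w[j] = lam[j] * w[j] + (1.0 - eps) * pool

# The tail of the wealth histogram approaches P(w) ~ w^-2 (nu = 1).
\end{verbatim}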
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig3.eps}
\caption{Wealth distribution for the model of random saving propensity plotted in log-log
scale.
A straight line with slope = -2 is drawn to demonstrate that the power law exponent is
$\alpha=2$.
\label{akg:fig:dist_ranlam}}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig4.eps}
\caption{Bimodal distribution of wealth ($w$) with fixed values of saving propensities,
$\lambda_1$=0.2 and $\lambda_2$=0.8. Emergence of two economic classes are apparent.
\label{akg:fig:twopeak}}
\end{figure}
The distributed saving gives rise to an additional interesting feature when a special case
is considered where the saving parameter $\lambda$ is assumed to have only two fixed values,
$\lambda_1$ and $\lambda_2$ (preferably widely separated). A bimodal distribution
in individual wealth emerges \cite{AKG:akg1}. This can be seen in
Fig.~\ref{akg:fig:twopeak}.
The system evolves towards a robust and distinct two-peak distribution as the
difference in $\lambda_1$ and $\lambda_2$ is increased systematically. Later it is seen that
one still gets a two-peak distribution even when $\lambda_1$ and $\lambda_2$
are drawn from narrow distributions centered around two widely separated values (one large and one small).
Two economic classes seem to persist
until the distributions in $\lambda_1$ and $\lambda_2$ acquire sufficient widths.
A population can be imagined to have two distinctly different kinds of people: some
of them tend to save a very large fraction (fixed) of their wealth and the others tend to
save a relatively small fraction (fixed) of their wealth.
Bimodal distributions (and polymodal distributions, in general) have, in fact, been reported in real data
for the income distributions in Argentina \cite{AKG:polymodal}. The distributions were derived at
a time of political crisis and thus they may not be regarded as truly equilibrium distributions
though. However, it remains an interesting possibility out of a simple model of wealth exchange.
\subsection{Power law from mean field analysis}\label{akg:sec:meanfield}
One can have an estimate of ensemble averaged value of wealth \cite{AKG:akg3} using one of
the above equations (\ref{akg:eqn:ransave}) in section \ref{akg:sec:ranlam}. Emergence of a power law in the
wealth distribution can be established through a simple consideration as follows. Taking ensemble
average of all the terms on both sides of the first eqn.~(\ref{akg:eqn:ransave}), one may write:
\begin{equation}\label{akg:eqn:meanfield-1}
\langle w_i\rangle =\lambda_i \langle w_i\rangle +\langle\epsilon\rangle[(1-\lambda_i)\langle w_i\rangle+
\langle{1\over N}\sum_{j=1}^N(1-\lambda_j)w_j\rangle]
\end{equation}
\noindent
The last term on the right hand side is replaced by the average over agents
where it is assumed that any agent
(here the $i$-th agent), on an average, interacts with all other agents of the system,
allowing sufficient time to interact. This is basically a {\it mean field approach}.
If $\epsilon$ is assumed to be distributed randomly and uniformly between 0 and 1
then $\langle\epsilon\rangle = {1\over 2}$.
Wealth of each individual keeps on changing due to interactions (or wealth exchange processes that
take place in a society). No matter what the personal wealth one begins with, the time
evolution of wealth of an individual agent at the steady state is independent of that
initial value. This means the distribution of wealth of a single agent over time is stationary.
Therefore, the time averaged value of wealth of any agent remains unchanged whatever the
amount of wealth one starts with. In course of time, an agent interacts with all other agents
(presumably repeatedly) given sufficient time. One can thus think of a number of ensembles (configurations)
and focus attention on a particular tagged agent who eventually tends to
interact with all other agents in different ensembles. Thus the time averaged value of
wealth is equal to the ensemble averaged value in the steady state.
Now if one writes
\begin{equation}\label{akg:eqn:meanfield-2}
\langle\overline {(1-\lambda)w}\rangle \equiv \langle{1\over N}\sum_{j=1}^N(1-\lambda_j)w_j\rangle,
\end{equation}
the above equation (\ref{akg:eqn:meanfield-1}) reduces to:
\begin{equation}\label{akg:eqn:meanfield-3}
(1-\lambda_i)\langle w_i\rangle = \langle\overline{(1-\lambda)w}\rangle,
\end{equation}
\noindent
The right hand side of the above equation is independent of any agent index and the left hand side refers
to an arbitrarily chosen agent $i$. Thus, it can be argued that the above relation holds for any agent
(for any value of the index $i$) and so it can be equated to a constant. Let us now
recognize $C = \langle\overline {(1-\lambda)w}\rangle$, a constant which is found by
averaging over all the agents in the system and which is further
averaged over ensembles. Therefore, one arrives at a unique relation for this model:
\begin{equation}\label{akg:eqn:meanfield-4}
w = {C\over (1 - \lambda)},
\end{equation}
\noindent
where one can get rid of the index $i$ and may write $\langle w_i\rangle = w$ for brevity.
The above relation is also verified numerically which is obtained by many authors in their
numerical simulations and scaling of data \cite{AKG:chak3,AKG:gamma-support}.
One can now derive $dw = {w^2\over C}d\lambda$ from the above
relation (\ref{akg:eqn:meanfield-4}). An agent with a
(characteristic) saving propensity factor ($\lambda$) ends up with wealth ($w$)
such that one can in general relate the distributions of the two:
\begin{equation}
P(w)dw = g(\lambda)d\lambda.
\end{equation}
\noindent
If now the distribution in $\lambda$ is considered to be uniform then
$g(\lambda)$ = constant. Therefore, the distribution in $w$ is bound to be of the form:
\begin{equation}
P(w) \propto {1\over w^2}.
\end{equation}
\noindent
This may be regarded as Pareto's law with exponent $\alpha = 2$ (Pareto index $\nu = 1$), which has already been numerically
verified for the present model. The same result has also been obtained recently in
\cite{AKG:mohanty} where the treatment is argued to be exact.
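The final step of this argument is easy to check numerically in a few lines (Python with NumPy; $C=1$ is taken without loss of generality):
\begin{verbatim}
# Mean-field check: w = C/(1 - lambda), uniform lambda  =>  P(w) ~ w^-2.
import numpy as np

lam = np.random.default_rng(4).random(10**6)
w = 1.0 / (1.0 - lam)                    # take C = 1
hist, edges = np.histogram(w, bins=np.logspace(0, 3, 40), density=True)
# On a log-log plot, hist vs bin centre has slope close to -2.
\end{verbatim}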
\subsection{Power law from reduced situation}\label{akg:sec:reduce}
From numerical investigations, it seems that the stochasticity parameter
$\epsilon$ is irrelevant as long as the saving propensity
parameter $\lambda$ is made random. It has been tested that the model is still able to
produce power law (with the same exponent, $\alpha=2$) for any fixed value of $\epsilon$.
As an example, the case for $\epsilon = 1$ is considered. The pair of wealth exchange equations (\ref{akg:eqn:ransave})
now reduce to the following:
\begin{equation}\label{akg:eqn:ransave_reduce}
w_i(t+1) = w_i(t)+(1-\lambda_j)w_j(t) = w_i(t)+\eta_j w_j(t),
\end{equation}
\begin{equation*}
w_j(t+1) = w_j(t)-(1-\lambda_j)w_j(t) = (1-\eta_j)w_j(t).
\end{equation*}
\noindent
The exchange amount, $\Delta w = (1-\lambda_j)w_j(t)=\eta_jw_j(t)$ is now regulated by the
parameter $\eta = (1-\lambda)$ only. If $\lambda$ is drawn from a uniform and
random distribution in [0, 1], then $\eta$ is also uniformly and randomly distributed in
[0, 1].
{\it To achieve a power law in the wealth
distribution it seems essential that randomness in $\eta$ has to be quenched}. For `annealed'
type disorder ({\em i.e.}, when the distribution in $\eta$ varies with time) the power
law gets washed away (which is observed through numerical simulations).
It has also been observed that power law can be obtained when $\eta$ is uniformly
distributed between 0 and some value less than or equal to 1.
As an example, when $\eta$ is taken in the range between 0 and 0.5, a power
law is obtained with the exponent around $\alpha=2$. However, when $\eta$ is taken in the
range $0.5 < \eta < 1$, the distribution clearly deviates from power law which is evident
from the log-log plot in Fig.~\ref{akg:fig:dist_eta}.
{\it Thus there seems to be a crossover from power law to some distribution
with exponentially decaying tail as one tunes the range in the quenched parameter $\eta$}.
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig6.eps}
\caption{Wealth distributions (plotted in log-log scale) for two cases of
the `reduced situation': (i) $0< \eta < 0.5$ and (ii) $0.5< \eta <1$.
In one case, the distribution follows a power law (with exponent around $\alpha=2$) and in the other case,
it is seen to be clearly deviating from a power law.\label{akg:fig:dist_eta}}
\end{figure}
At this point, {\it two important criteria may be identified for achieving power law} within
this reduced situation:
\begin{itemize}
\item
The disorder in the controlling parameter $\eta$ has to be
quenched (fixed set of $\eta$'s for a configuration of agents),
\item
It is required that, when $\eta$ is
drawn from a uniform distribution, its lower bound should be 0.
\end{itemize}
\noindent
The above criteria may appear ad hoc; nevertheless they have been checked by extensive numerical
investigations. It is further checked that the power law exponent does not depend on the
width of the distribution in $\eta$ (as long as it is between 0 and something less than 1).
This claim is substantiated by taking various ranges of $\eta$ in which it is uniformly
distributed. Systematic investigations are made for the cases where $\eta$ is drawn
in [0, 0.2], [0, 0.4], $\ldots$ ,[0, 1].
Power laws result in all the cases, with the exponent around $\alpha = 2$.
\section{Understanding through transition matrix}\label{akg:sec:matrix}
The evolution of wealth in the kind of two-agent wealth exchange process can be described
through the following $2\times 2$ transition matrix ($T$) \cite{AKG:akg1}:
\[\left(\begin{array}{c}
w_i^{\prime} \\
w_j^{\prime}
\end{array}\right)=T\left(\begin{array}{c}
w_i \\
w_j
\end{array}\right),\]
\noindent
where it is written, $w_i^{\prime}\equiv w_i(t+1)$ and $w_i\equiv w_i(t)$ and so
on. The transition matrix ($T$) corresponding to {\it pure gambling} process (in
section \ref{akg:sec:pure}) can be written as:
\[T=\left(\begin{array}{cc}
\epsilon & \epsilon\\
1-\epsilon & 1-\epsilon
\end{array}\right).\]
\noindent
In this case the above matrix is {\it singular} (determinant, $|T|=0$) which means the
inverse of this matrix does not exist. This in turn indicates that an evolution through such
transition matrices is bound to be {\it irreversible}. This property is connected to the
emergence of exponential (Boltzmann-Gibbs) wealth distribution. The same may be perceived
in a different way too. When a product of such matrices (for successive interactions) are
taken, the left most matrix (of the product) itself returns:
\[\left(\begin{array}{cc}
\epsilon & \epsilon\\
1-\epsilon & 1-\epsilon
\end{array}\right)\left(\begin{array}{cc}
\epsilon_1 & \epsilon_1\\
1-\epsilon_1 & 1-\epsilon_1
\end{array}\right)=\left(\begin{array}{cc}
\epsilon & \epsilon\\
1-\epsilon & 1-\epsilon
\end{array}\right).\]
\noindent
The above signifies the fact that during the repeated interactions of the same two agents
(via this kind of transition matrices), the last of the interactions is what matters
(the last matrix of the product survives) [$T^{(n)}.T^{(n-1)}\ldots T^{(2)}.T^{(1)}=T^{(n)}$].
This `loss of memory' (random history
of collisions in case of molecules) may be attributed here to the path to irreversibility
in time.
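This `loss of memory' property is easy to verify with a short NumPy check (the particular values of $\epsilon$ are arbitrary):
\begin{verbatim}
# A product of singular pure-gambling matrices keeps only the left factor.
import numpy as np

def T(eps):                     # column sums are one; determinant is zero
    return np.array([[eps, eps], [1.0 - eps, 1.0 - eps]])

assert np.allclose(T(0.3) @ T(0.8), T(0.3))   # earlier history wiped out
\end{verbatim}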
The singularity can be avoided if one considers the following general form:
\[T_1=\left(\begin{array}{cc}
\epsilon_1 & \epsilon_2\\
1-\epsilon_1 & 1-\epsilon_2
\end{array}\right),\]
\noindent
where $\epsilon_1$ and $\epsilon_2$ are two different random numbers
drawn uniformly from [0, 1] (this ensures that the transition matrix is nonsingular).
The significance of this general form can be seen through the wealth exchange equations in the
following way: the $\epsilon_1$ fraction of wealth of the 1st agent ($i$) added
with $\epsilon_2$ fraction of wealth of the 2nd agent ($j$) is retained by the 1st agent
after the trade. The rest of their total wealth is shared by the 2nd agent. This may happen
in a number of ways which can be related to the detailed considerations of a model.
The general matrix $T_1$ is nonsingular as long as $\epsilon_1\neq\epsilon_2$ and then
the two-agent interaction process remains reversible in time. Therefore,
it is expected to have a steady state equilibrium distribution of wealth which may deviate from
exponential distribution (as in the case with pure gambling model).
When one considers $\epsilon_1 = \epsilon_2$, one again gets back the pure exponential
distribution.
A trivial case is obtained for $\epsilon_1=1$ and $\epsilon_2=0$. The transition matrix
then reduces to the identity matrix $I=\left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right)$
which trivially corresponds to no interaction and no evolution.
It may be emphasized that any transition matrix
$\left(\begin{array}{cc}
t_{11} & t_{12}\\
t_{21} & t_{22}
\end{array}\right)$,
for such conserved models is bound to be of the form such that the sum of two elements of
either of the columns has to be {\it unity by design}: $t_{11}+t_{21}=1$, $t_{12}+t_{22}=1$.
It is important to note that whatever extra parameter one incorporates within the framework of
the conserved model, the transition matrix has to retain this property.
In Fig.~\ref{akg:fig:dist_gen_e1_e2} three distributions
(with $\epsilon_1 \neq \epsilon_2$) are plotted where
$\epsilon_1$ and $\epsilon_2$ are drawn randomly from uniform distributions with different
ranges. It is demonstrated that qualitatively different distributions are possible as the
parameter ranges are tuned appropriately.
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig5.eps}
\caption{Three normalized wealth distributions are shown corresponding to the general
matrix $T_1$ (in general with $\epsilon_1\neq\epsilon_2$) as discussed in the text.
Curves are marked by numbers (1, 2 and 3) and the ranges of $\epsilon_1$ and $\epsilon_2$
are indicated within which they are drawn uniformly and randomly.\label{akg:fig:dist_gen_e1_e2}
}
\end{figure}
Now let us compare the above situation with the model of equal saving propensity as discussed
in section \ref{akg:sec:fixlam}.
With the incorporation of saving propensity factor $\lambda$, the transition matrix
now looks like:
\[\left(\begin{array}{cc}
\lambda+\epsilon(1-\lambda) & \epsilon(1-\lambda)\\
(1-\epsilon)(1-\lambda) & \lambda+(1-\epsilon)(1-\lambda)
\end{array}\right).\]
\noindent
The matrix elements can now be rescaled by assuming
$\tilde\epsilon_1=\lambda+\epsilon(1-\lambda)$ and $\tilde\epsilon_2=\epsilon(1-\lambda)$ in
the above matrix. Therefore, the above transition matrix reduces to
\[T_2=\left(\begin{array}{cc}
\tilde\epsilon_1 & \tilde\epsilon_2\\
1-\tilde\epsilon_1 & 1-\tilde\epsilon_2
\end{array}\right).\]
\noindent
Thus the matrix $T_2$ is of the same form as $T_1$. The
distributions due to the above two matrices of the same general form can now be compared
if one can correctly identify the ranges of the rescaled elements.
In the model of uniform saving: $\lambda< \tilde\epsilon_1 <1$ and
$0< \tilde\epsilon_2 <(1-\lambda)$ as the stochasticity parameter $\epsilon$ is
is drawn from a uniform and random distribution in [0, 1].
As long as $\tilde\epsilon_1$ and $\tilde\epsilon_2$ are different,
the determinant of the matrix is nonzero ($|T_2|=\tilde\epsilon_1-\tilde\epsilon_2=\lambda$).
Therefore, the incorporation of the saving propensity factor $\lambda$ brings {\it two effects}:
\begin{itemize}
\item
The transition matrix becomes nonsingular,
\item
The matrix elements $t_{11}$ (= $\tilde\epsilon_1$) and $t_{12}$ (= $\tilde\epsilon_2$)
are now drawn from truncated domains (somewhere in [0, 1]).
\end{itemize}
Hence it is clear from the above discussion that the wealth distribution with uniform saving
is likely to be qualitatively no different from what can be achieved with
general transition matrices having different elements, $\epsilon_1\neq\epsilon_2$.
The distributions obtained with different $\lambda$ may correspond to that with appropriately
chosen $\epsilon_1$ and $\epsilon_2$ in $T_1$.
In the next stage, when the saving propensity factor $\lambda$ is distributed as in
section \ref{akg:sec:ranlam}, the transition
matrix between any two agents having different $\lambda$'s (say, $\lambda_1$ and $\lambda_2$)
now looks like:
\[\left(\begin{array}{cc}
\lambda_1+\epsilon(1-\lambda_1) & \epsilon(1-\lambda_2)\\
(1-\epsilon)(1-\lambda_1) & \lambda_2+(1-\epsilon)(1-\lambda_2)
\end{array}\right).\]
\noindent
Again the elements of the above matrix can be rescaled by
putting $\tilde\epsilon_1^{\prime}=\lambda_1+\epsilon(1-\lambda_1)$ and
$\tilde\epsilon_2^{\prime}=\epsilon(1-\lambda_2)$. Hence the transition matrix
can again be reduced to the same form as that of $T_1$ or $T_2$:
\[T_3=\left(\begin{array}{cc}
\tilde\epsilon_1^{\prime} & \tilde\epsilon_2^{\prime}\\
1-\tilde\epsilon_1^{\prime} & 1-\tilde\epsilon_2^{\prime}
\end{array}\right).\]
\noindent
The determinant here is
$|T_3|=\tilde\epsilon_1^{\prime}-\tilde\epsilon_2^{\prime}=\lambda_1(1-\epsilon)+
\epsilon\lambda_2$.
Here also the determinant is ensured to be nonzero as all the parameters
$\epsilon$, $\lambda_1$
and $\lambda_2$ are drawn from the same positive domain: [0, 1]. This means that each transition matrix
for two-agent wealth exchange remains nonsingular which ensures the interaction process
to be reversible in time.
Therefore, it is expected that {\it qualitatively different distributions are possible
when one appropriately tunes the two independent elements in the general
form of transition matrix} ($T_1$ or $T_2$ or $T_3$).
However, the emergence of a power law tail ({\it Pareto's law}) in the distribution
cannot be explained by this.
framework of present models, it is essential that the distribution in $\lambda$ has to be
quenched (frozen in time) which means the matrix elements in the general form of any
transition matrix have to be quenched. In the section \ref{akg:sec:reduce}, it has
been shown that the model of distributed saving (section \ref{akg:sec:ranlam}) is
equivalent to a reduced situation where one needs only one variable $\eta$.
In this case the corresponding transition matrix looks even simpler:
\[T_4=\left(\begin{array}{cc}
1 & \eta\\
0 & 1-\eta
\end{array}\right),\]
\noindent
where a nonzero determinant ($|T_4|=1-\eta\neq 0$) is ensured among other things.
\subsection{Distributions from generic situation}\label{akg:sec:generic}
From all the previous discussions, it is clear that
the transition matrix (for zero sum wealth exchange) is bound to be of the
following general form:
\[\left(\begin{array}{cc}
\epsilon_1 & \epsilon_2\\
1-\epsilon_1 & 1-\epsilon_2
\end{array}\right).\]
\noindent
The matrix elements, $\epsilon_1$ and $\epsilon_2$ can be appropriately associated with
the relevant parameters in a model. A generic situation arises where one can generate all
sorts of distributions by controlling $\epsilon_1$ and $\epsilon_2$.
As long as $\epsilon_1\neq\epsilon_2$, the
matrix remains nonsingular and one achieves Gamma type distributions.
In a special case, when $\epsilon_1=\epsilon_2$, the transition matrix becomes singular and
the Boltzmann-Gibbs type exponential distribution is recovered.
It has been numerically checked that
a power law with exponent $\alpha=2$ is obtained with the
general matrix when the elements $\epsilon_1$ and $\epsilon_2$ are of the same set of
quenched random numbers drawn
uniformly in [0, 1]. The matrix corresponding to the reduced situation in the section \ref{akg:sec:reduce}, as discussed,
is just a special
case with $\epsilon_1=1$ and $\epsilon_2=\eta$, drawn from a uniform and (quenched) random
distribution.
Incorporation of any parameter in an actual model (saving propensity, for example)
results in the adjustment or truncation of the full domain [0, 1] from which the
element $\epsilon_1$ or $\epsilon_2$ is drawn.
Incorporating distributed $\lambda$'s in section \ref{akg:sec:ranlam} is equivalent to
considering the following domains: $\lambda_1< \epsilon_1 <1$ and
$0< \epsilon_2 <(1-\lambda_2)$.
A more general situation arises when the matrix elements $\epsilon_1$ and $\epsilon_2$ are
two sets of random numbers drawn separately (one may identify them as $\epsilon_1^{(1)}$ and
$\epsilon_2^{(2)}$ to distinguish them) from two uniform random distributions in the domain [0, 1].
In this case a power law is obtained with the exponent $\alpha = 3$, which is distinctly
different from that obtained in the `distributed saving' model of section
\ref{akg:sec:ranlam}.
To test the robustness of the power law, the distributions in the matrix elements are taken in
the following truncated ranges: $0.5< \epsilon_1 <1$ and $0< \epsilon_2 <0.5$ (widths are
narrowed down). A power law is still obtained with the same exponent ($\alpha$ close to 3).
These results are plotted in Fig.~\ref{akg:fig:dist_gen_frozen_e1_e2}.
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig7.eps}
\caption{Distribution of individual wealth ($w$) for the most general case with
random and quenched $\epsilon_1$ and $\epsilon_2$: The elements are drawn from two separate
distributions where
$0< \epsilon_1 <1$ and $0< \epsilon_2 <1$ in one case and in the other
case, they are chosen from the ranges, $0.5< \epsilon_1 <1$ and $0< \epsilon_2 <0.5$.
Both show power laws with the same exponent around 3.0 (the two distributions almost
superpose). A straight line (with slope -3.0) is drawn to demonstrate the power law in
the log-log scale. \label{akg:fig:dist_gen_frozen_e1_e2}}
\end{figure}
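For readers who wish to reproduce such histograms, a minimal simulation sketch of the quenched generic case is given below. This is one plausible reading of the construction, not the original code: the quenched elements are attached to the agents, so that a trade between agents $i$ and $j$ uses $\epsilon_1=\epsilon_i^{(1)}$ and $\epsilon_2=\epsilon_j^{(2)}$; the population size, the number of steps and the binning are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 1000, 10**6

w = np.ones(N)         # total wealth is conserved by the update below
e1 = rng.random(N)     # quenched: drawn once and frozen in time
e2 = rng.random(N)     # second independent quenched set -> alpha close to 3
# (setting e2 = e1 reproduces the single-set case with alpha close to 2)

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    wi, wj = w[i], w[j]
    w[i] = e1[i]*wi + e2[j]*wj                # general transition matrix with
    w[j] = (1 - e1[i])*wi + (1 - e2[j])*wj    # epsilon_1 = e1[i], epsilon_2 = e2[j]

hist, edges = np.histogram(w, bins=np.logspace(-3, 2, 40), density=True)
print(np.column_stack([edges[:-1], hist]))    # log-log tail slope: about -3
\end{verbatim}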
It is possible to achieve distributions other than power laws as one draws
the matrix elements, $\epsilon_1$ and $\epsilon_2$ from different domains within the range
between 0 and 1. There is indeed a {\it crossover from power law to Gamma like distributions}
as one tunes the elements.
It appears from extensive numerical simulations that the power law disappears when both
the parameters are drawn from some ranges that do not include the lower limit 0. For example,
when it is considered, $0.8< \epsilon_1 < 1.0$ and $0.2< \epsilon_2 <0.4$, the wealth
distribution does not follow a power law. In contrast, when $\epsilon_1$ and $\epsilon_2$ are
drawn from the
ranges, $0.8< \epsilon_1 < 1.0$ and $0< \epsilon_2 <0.1$, the power law distribution is back again.
It now appears that {\it to achieve a power law
in such a generic situation, the following criteria are to be fulfilled}:
\begin{itemize}
\item
It is essential that the randomness or disorder in the
elements $\epsilon_1$ and $\epsilon_2$ be quenched;
\item
In the most general case,
$\epsilon_1$ should be drawn from a uniform distribution whose upper bound has to be 1,
and for $\epsilon_2$ the lower bound has to be 0; a power law with the higher exponent
$\alpha=3$ is then achieved. To have a power law with exponent $\alpha=2$, the matrix
elements are to be drawn from the same distribution.
(These choices automatically make the transition matrices nonsingular.)
\end{itemize}
\noindent
The above points are not supported analytically at this stage.
However, the observation seems to bear important implications in terms of generation of power
law distributions.
When the disorder or randomness in the elements $\epsilon_1$ and $\epsilon_2$ changes with
time ({\em i.e.}, is not quenched), unlike the situation just discussed above, the problem
is perhaps similar to the mass diffusion and aggregation model
of Majumdar, Krishnamurthy and Barma \cite{AKG:mass}.
The mass model is defined on a one dimensional lattice with periodic boundary conditions.
A fraction of mass from a randomly chosen site is assumed to be continually
transported to one of its neighbouring sites at random. The total mass of the two sites
is then unchanged (one site gains mass and the other loses the same amount) and thus the
total mass of the system remains conserved.
The mass of each site evolves as
\begin{equation}
m_i(t+1)=(1-\eta_i)m_i(t)+\eta_jm_j(t).
\end{equation}
\noindent
Here it is assumed that a fraction $\eta_i$ of the mass $m_i$ is dissociated from site $i$ and
joins either of its neighbouring sites $j=i\pm 1$. Thus a fraction $(1-\eta_i)$ of the mass $m_i$
remains at that site, whereas a fraction $\eta_j$ of the mass $m_j$ from the neighbouring site joins
the mass at site $i$. Now if we identify $\epsilon_1 = (1-\eta_i)$ and
$\epsilon_2 = \eta_j$ then this model is just the same as described by the general transition
matrix as discussed so far. If $\eta_i$'s are drawn from a random and uniform
distribution in [0, 1] then a mean field calculation (which turns out to be exact in the
thermodynamic limit), as shown in \cite{AKG:mass}, brings out the
stationary mass distribution $P(m)$ to be a Gamma distribution:
\begin{equation}\label{akg:eqn:massdist}
P(m) = {4m\over {\overline m}^2}e^{-2m/{\overline m}},
\end{equation}
\noindent
where ${\overline m}$ is the average mass of the system. It has been numerically checked that
there seems to be no appreciable change in the distribution even when the lattice is not
considered.
The lattice seems to play no significant role in
the case of kinetic-theory-like wealth distribution models either.
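A minimal sketch of this exchange, written as a pairwise update between a site and a random neighbour with annealed (freshly drawn) fractions $\eta$, together with a comparison to the stationary distribution above, could read as follows (system size and run length are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, STEPS = 1000, 10**6
m = np.ones(N)                            # average mass fixed to 1

for _ in range(STEPS):
    i = int(rng.integers(N))
    j = (i + rng.choice([-1, 1])) % N     # random neighbour on a ring
    eta_i, eta_j = rng.random(2)          # annealed: new fractions at each step
    mi, mj = m[i], m[j]
    m[i] = (1 - eta_i)*mi + eta_j*mj      # site i keeps (1 - eta_i) of its mass
    m[j] = eta_i*mi + (1 - eta_j)*mj      # total mass is conserved

hist, edges = np.histogram(m, bins=60, range=(0, 6), density=True)
x = 0.5*(edges[:-1] + edges[1:])
print(np.abs(hist - 4*x*np.exp(-2*x)).max())   # rough agreement with P(m)
\end{verbatim}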
Incidentally, this distribution with ${\overline m} = 1$ is exactly the same as the Gamma
distribution [eqn.~(\ref{akg:eqn:gamma-1})] mentioned in section
\ref{akg:sec:fixlam}, when one sets $n = 2$ there.
The index $n$ equals 2 when one puts $\lambda = {1\over 4}$
in the relation (\ref{akg:eqn:gamma-2}).
In the general situation ($\epsilon_1\neq\epsilon_2$), when both the parameters are
drawn from a random and uniform distribution in [0, 1], the emerging distribution
very nearly follows the above expression (\ref{akg:eqn:massdist}). Only when the
randomness in them is quenched (fixed in time) is there a possibility of
getting a power law, as already mentioned.
The Gamma distribution [eqn.~(\ref{akg:eqn:massdist})] and the numerically
obtained distributions for different cases (as discussed in the text) are plotted in
Fig.~\ref{akg:fig:gamma} in support of the above discussions.
\begin{figure}[h]
\includegraphics[width=.4\textwidth]{akgfig8.eps}
\caption{Normalized probability distribution functions obtained for three different
cases: (i) Wealth distribution with random and uniform $\epsilon_1$ and $\epsilon_2$
in [0, 1], (ii) Wealth distribution with uniform and fixed saving propensity,
$\lambda = {1\over 4}$, (iii) Mass distribution for the model \cite{AKG:mass} on a one dimensional
lattice (as discussed in the text). The theoretical Gamma distribution
[eqn.~(\ref{akg:eqn:massdist})] is also plotted as a line for comparison.
\label{akg:fig:gamma}}
\end{figure}
\section{Role of selective interaction}\label{akg:sec:select}
So far the models of wealth exchange processes have been considered where a pair of agents is
selected randomly. However, interactions or trade among agents in a society are often guided by
personal choice or some social norms or some other reasons. Agents may like to interact
selectively, and it would be interesting to see how the individual wealth distribution is
influenced by selection \cite{AKG:akg2}. The concept of selective
interaction is already there when one considers the formation of a family. The members of
the same family are unlikely to trade (or interact) with each other. It may be worthwhile to examine
the role played by the concept of `family' in the wealth distributions of families:
`family wealth distribution' for brevity. A family in a society usually consists of more than
one agent. In computer simulations, the agents belonging to the same family are coloured so as
to keep track of them. To find the wealth distributions of families, the contributions of the
members of the same family are added up.
In Fig.~\ref{akg:fig:family} family wealth distributions are plotted
for three cases: (i) families consist of 2 members each, (ii)
families consist of 4 members each and (iii) families of mixed sizes between 1 and 4.
The distributions are clearly not pure exponential, but modified exponential distributions
(Gamma type distributions) with different peaks and different widths. This is quite
expected as the probability of zero income of a family is zero. Modified exponential
distribution for family wealth is also supported by fitting real data \cite{AKG:yako2}.
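Such a measurement can be sketched as follows, assuming for illustration the pure gambling rule (the pooled wealth of the interacting pair is split randomly) with trades between members of the same family excluded; the family sizes and the run length are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, FAM, STEPS = 1200, 4, 10**6
family = np.repeat(np.arange(N // FAM), FAM)   # families of FAM members each
w = np.ones(N)

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    if family[i] == family[j]:
        continue                               # same-family members do not trade
    eps = rng.random()
    total = w[i] + w[j]
    w[i], w[j] = eps*total, (1 - eps)*total    # pure gambling split (assumed rule)

fam_wealth = np.bincount(family, weights=w)    # add up members' contributions
print(np.histogram(fam_wealth, bins=40, density=True)[0])
\end{verbatim}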
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig9.eps}
\caption{Family wealth distributions: two curves are for families consisting of all
equal sizes of 2 and 4. One curve is for a system of families consisting of various
sizes between 1 and 4. The distributions are not normalized.
\label{akg:fig:family}
}
\end{figure}
A special way of incorporating selective interaction is seen to have a drastic effect on the
individual wealth distribution.
To implement the idea of `selection', a `class' of an agent is defined through an index $\epsilon$.
The class may be understood in terms of some sort of efficiency of
accumulating wealth or some other closely related property. Therefore, $\epsilon$'s are
assumed to be quenched. It is assumed that during the interactions the agents may convert
an appropriate amount of wealth, proportional to their efficiency factor, in or against
their favour. Now, the model can be understood in terms of the general
form of equations:
\begin{equation}\label{akg:eqn:select}
w_i(t+1) = \epsilon_i w_i(t) + \epsilon_j w_j(t),
\end{equation}
\begin{equation*}
w_j(t+1) = (1-\epsilon_i)w_i(t) + (1-\epsilon_j)w_j(t),
\end{equation*}
\noindent
where $\epsilon_i$'s are quenched random numbers between 0 and 1 (randomly
assigned to the agents at the beginning).
Now the agents are supposed to make a choice of whom not to trade with.
This option, in fact, is not unnatural in the context of a real society where individual or
group opinions are important. There has been a lot of work on the processes and dynamics of
opinion formation \cite{AKG:stauffer,AKG:levy-solomon} in model social systems. In the present
model it may be imagined that the `choice' is simply guided by the
relative class index of the two agents. It is assumed that an interaction takes place when
the ratio of the two class factors remains within a certain upper limit. The requirement for
interaction (trade) to happen is then $1 < \epsilon_i/\epsilon_j <\tau$, where
$\epsilon_i > \epsilon_j$.
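A minimal simulation sketch of this selection rule is given below ($N$, $\tau$ and the run length are illustrative choices, not those of the original computation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, TAU, STEPS = 1000, 2.0, 10**6
eps = rng.random(N)                            # quenched class indices
w = np.ones(N)

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    hi, lo = max(eps[i], eps[j]), min(eps[i], eps[j])
    if lo == 0 or not (1.0 < hi/lo < TAU):
        continue                               # trade only within the class window
    wi, wj = w[i], w[j]
    w[i] = eps[i]*wi + eps[j]*wj               # eqn. (akg:eqn:select) above
    w[j] = (1 - eps[i])*wi + (1 - eps[j])*wj

print(np.histogram(w, bins=np.logspace(-3, 2, 40), density=True)[0])
\end{verbatim}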
Wealth distributions for various values of $\tau$ are numerically investigated. Power laws in
the tails of the distributions are obtained in all cases. In Fig.~\ref{akg:fig:dist_select} the
distributions for $\tau = 2$ and $\tau =4$ are shown.
Power laws are clearly seen with an exponent $\alpha = 3.5$ (a straight line with slope
around $-3.5$ is drawn), which means the Pareto index $\nu$
is close to 2.5. It is not investigated further whether the exponent ($\alpha$) actually
differs in a significant way for different choices of $\tau$.
It has been shown that preferential behaviour \cite{AKG:prefer}
generates a power law in the money distribution under some imposed conditions which allow the
rich a higher probability of getting richer. The rich are also favoured in a model with
asymmetric exchange rules proposed in \cite{AKG:sita}, where a power law results.
The dice seem to be loaded in favour of the rich; otherwise the rich could not remain rich!
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig10.eps}
\caption{Distribution of individual wealth with selective interaction.
Power law is evident in the log-log plot where a straight line is drawn with
slope = -3.5 for comparison.\label{akg:fig:dist_select}}
\end{figure}
\section{Measure of inequality}\label{akg:sec:ineq}
Emergence of Pareto's law signifies the existence of inequality in wealth in a population.
Inequality or disparity in wealth or that of income is known to exist in almost all societies.
To have a quantitative idea of inequality one generally
plots the Lorenz curve and then calculates the Gini coefficient. Here the entropy approach
\cite{AKG:kapur} is considered. The time evolution of an appropriate quantity, which
may be regarded as a measure of wealth-inequality, is examined.
Let $w_1,~w_2, \ldots,~w_N$ be the wealths of $N$ agents in a system, and let
$W = \sum_{i=1}^Nw_i$ be the total wealth of all the agents. Now $p_i=w_i/W$ can be
considered as the fraction of wealth that the $i$-th agent holds. Thus each
$p_i > 0$ and $\sum_{i=1}^Np_i=1$, so the set $p_1,~p_2,\ldots,~p_N$ may be
regarded as a probability distribution. The well known Shannon entropy
is defined as the following:
\begin{equation}
S = -\sum_{i=1}^Np_i\ln p_i.
\end{equation}
\noindent
From the maximization principle of entropy it can be easily shown that the entropy ($S$) is
maximum when
\begin{equation}
p_1=p_2=\cdots=p_N={1\over N},
\end{equation}
\noindent
giving the maximum value of $S$ to be $\ln N$, which corresponds to the limit of perfect
equality (everyone possesses the same wealth). A measure of inequality should be something
which measures a deviation from the above ideal situation. Thus one can have a measure of
wealth-inequality to be
\begin{equation}\label{akg:eqn:inequal}
H = \ln N-S = \ln N + \sum_{i=1}^N p_i \ln p_i = \sum_{i=1}^N p_i \ln(Np_i).
\end{equation}
\noindent
The greater the value of $H$, the greater the inequality is.
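In code, the measure $H$ is essentially a one-liner (a sketch; the convention $0\ln 0=0$ is adopted for agents with zero wealth):
\begin{verbatim}
import numpy as np

def inequality(w):
    """H = ln N - S of eqn. (akg:eqn:inequal); H = 0 iff wealth is equal."""
    w = np.asarray(w, dtype=float)
    p = w / w.sum()
    p = p[p > 0]                       # convention: 0 * ln 0 = 0
    return np.log(len(w)) + np.sum(p*np.log(p))

print(inequality([1, 1, 1, 1]))        # 0.0 : perfect equality
print(inequality([4, 0, 0, 0]))        # ln 4 ~ 1.386 : maximal inequality
\end{verbatim}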
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig11.eps}
\caption{Comparison of time evolution of the measure of inequality ($H$) in wealth for
different models. Each `time step' ($t$) is equal to a single interaction between a pair of agents.
Data is taken after every $10^4$ time steps to avoid clutter and each data point is
obtained by averaging over $10^3$ configurations.
$Y$-axis is shown in log-scale to have a fair comparison.
\label{akg:fig:Hav_comp}}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig12.eps}
\caption{Evolution of variance ($\sigma^2$) with `time' ($t$) for different models.
$Y$-axis is shown in log scale to accommodate three sets of data in a same graph.
Data is taken after every $10^4$ time steps to avoid clutter and each data point is
obtained by averaging over $10^3$ configurations.
\label{akg:fig:sigav_comp}}
\end{figure}
It is seen that the wealth exchange algorithms are so designed that the resulting
disparity or variance (the measure of inequality), in effect, increases with time.
Wherever a power law results, the distribution naturally
broadens, which indicates that the variance ($\sigma^2$) or the inequality measure
[$H$ in eqn.~(\ref{akg:eqn:inequal})] should increase.
In Fig.~\ref{akg:fig:Hav_comp} and Fig.~\ref{akg:fig:sigav_comp}
the time evolutions of the inequality measure $H$ and of the variance $\sigma^2$,
respectively, are plotted for three models for comparison. It is apparent that the measure
of inequality in the steady state attains different levels due to the different mechanisms
of the wealth exchange processes, giving rise to different power law exponents.
The growth of variance is seen to be different for
different models considered, which is responsible for power laws with different exponents
as discussed in the text. The power law exponents ($\alpha$) appear to be related to the magnitudes of
variance that are attained in equilibrium in the finite systems.
\section{Distribution by maximizing inequality}\label{akg:sec:max}
It is known that the probability distribution of wealth of the majority is different from that
of the handful of rich minority. Disparity is more or less a reality in all economies.
A wealth exchange process can be thought of within the present framework
where the interactions among agents eventually lead to increasing
variance. It is numerically examined \cite{AKG:akg2} whether the process of forcing the
system to have an ever increasing variance (a measure of disparity) leads to a power law,
as it is known that power laws are usually associated with infinite variance.
Evolution of the variance, $\sigma^2=\langle w^2\rangle-\langle w\rangle^2$, is calculated after
each interaction in the framework of the pure gambling model [the pair of equations (\ref{akg:eqn:gamble})]
and it is then forced to increase monotonically by comparing this to the previously calculated
value (the average value $\overline w$ is fixed by virtue of the model).
This results in a very large variance under this imposed condition.
The inequality factor $H$ likewise increases monotonically and attains a high value.
A power law distribution is obtained with the exponent $\alpha$ close to 1.
None of the available models brings out such a
low value of the exponent. The variance in any of the usual models generally settles at a level
much lower than that obtained in this way.
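The procedure can be sketched as a simple accept/reject rule on top of the pure gambling update (an illustrative reading of the description above: a move is undone whenever it would decrease the variance; sizes are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
N, STEPS = 1000, 10**6
w = np.ones(N)
var = w.var()                                  # the mean is fixed by the model

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    old = w[i], w[j]
    eps = rng.random()
    total = w[i] + w[j]
    w[i], w[j] = eps*total, (1 - eps)*total    # pure gambling move
    new_var = w.var()
    if new_var < var:
        w[i], w[j] = old                       # reject: variance must not decrease
    else:
        var = new_var

print(np.histogram(w, bins=np.logspace(-4, 3, 50))[0])   # tail slope close to -1
\end{verbatim}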
The resulting distribution of wealth is plotted in a log-log scale
in Fig.~\ref{akg:fig:dist_sigmax}.
\begin{figure}[htb]
\includegraphics[width=.4\textwidth]{akgfig13.eps}
\caption{Wealth distribution by maximizing the variance in the pure gambling model.
Power law is clearly seen (in the log-log plot) and a straight line is drawn
with slope = -1.1 to compare.\label{akg:fig:dist_sigmax}}
\end{figure}
Power law, however, could not be obtained in the
same way in the case of a non-conserved model like the following: $w_i(t+1)=w_i(t)\pm\delta$,
where the increase or decrease ($\delta$) in wealth ($w$) of any agent is
independent of any other.
It has also been noted, considering some of the available models, that the larger the variance,
the smaller the exponent one gets. For example, the variance is seen to attain higher values
(with time) in the model of distributed (random) saving propensities \cite{AKG:chak3} compared to
the model of selective interaction \cite{AKG:akg2}, and the resulting power law exponent $\alpha$
is close to 2.0 in the former case whereas it is close to 3.5 in the latter. In the present
situation the variance attains even higher values and the exponent $\alpha$ seems to be
close to 1, the lowest among all.
\section{Confusions and conclusions}\label{akg:sec:concl}
As it is seen, the exchange of wealth in various modes
generates a wide variety of distributions within the framework of simple wealth exchange
models as discussed.
In this review, some basic structures and ideas of interactions are looked at which seem to be
fundamental in bringing out the desired distributions. In this kind of agent-based model
(for some general discussions, see \cite{AKG:agent}) the division of
labour, demand and supply, and the human qualities (selfishness or altruism)
and efforts (like investments, business), which are
essential ingredients in classical economics, are not considered explicitly.
What causes the exchange of wealth of a specific kind among agents is not important in this discussion.
Models are considered to be conserved (no inflow or outflow of money/wealth into or from the system). It is
not essential at the outset to look at inflation, taxation, debt, investment returns etc.\
of an economic system for the kind of questions that are addressed here.
The essence of the complexity of interactions leading to distributions can be
understood in terms of the simple (microscopic) exchange rules, in much the same way as the simple
logistic equation went on to construct `roads to chaos' and opened up a new horizon in thinking
about a complex phenomenon like turbulence \cite{AKG:kadanoff}.
Some models of zero sum wealth exchange are examined here in this review. One may start
thinking in a fresh way how the distributions emerge out of the kind of algorithmic
exchange processes that are involved.
The exchange processes can be understood in a general way by looking at the structure of
associated $2\times 2$ transition matrices.
Wealths of individuals evolve to have a specific distribution in a steady state through the
kind of interactions which are basically stochastic in nature.
The distributions shift away from Boltzmann-Gibbs like exponential to Gamma type
distributions and in some cases distributions emerge with power law tails known as
Pareto's law ($P(w) \propto w^{-\alpha}$).
It is also seen that the wealth distributions seem to be influenced by personal choice.
In a real society, people usually do not interact arbitrarily; rather, they do
so with purpose and thinking. Some kind of personal preference is always there, which may be
incorporated in some way or other. A power law with a distinctly different
exponent ($\alpha=3.5$, Pareto exponent $\nu=2.5$) is achieved through a certain way of
selective interaction. The value of Pareto index $\nu$ does not correspond to what is
generally obtained empirically. However, the motivation is not to attach much importance to
the numerical value at the outset, but rather to focus on how power
laws emerge with distinctly different exponents governed by the simple rules of
wealth exchange.
The fat tailed distributions (power laws) are usually associated with large variance, which
can be a measure of disparity. Economic disparity usually exists among a population. The detailed
mechanism leading to disparity is not always clear, but it can be said to be associated
with the emergence of power law tails in wealth distributions.
Monotonically increasing variance (with time) can be associated with the emergence of
power law in individual wealth distributions.
The mean and variance of a power law distribution can be derived analytically \cite{AKG:newman}:
both are finite when the power law exponent $\alpha$ is greater than 3, while for
$2 < \alpha\le 3$ the variance diverges although the mean remains finite. In the case of the models
discussed in this review, the mean is kept fixed, but a large or enhanced variance
is observed in the different models whenever a power law results.
It remains a question of what can be the mechanisms (in the kind of discrete and
conserved models) that generate large variance and power law tails.
Large and increasing variance is also associated with lognormal distributions. A simple
multiplicative stochastic process like $w(t+1)=\epsilon(t) w(t)$ can be used to explain the
emergence of lognormal distribution and indefinite increase in variance. However, empirical
evidence shows that the Pareto index and some other appropriate indices
(the Gibrat index, for example) generally fluctuate within some range \cite{AKG:souma}, indicating that
the variance (or any other equivalent measure of inequality) does not increase forever.
It seems to attain a saturation, given sufficient time. This is indeed what the numerical
results suggest. Normally there occurs a simultaneous increase of variance and mean
in statistical systems (in fact, the relationship between mean and variance follows a
power law, $\sigma^2 \propto {\overline w}^b$, known as Taylor's power law \cite{AKG:taylor},
curiously observed in many natural systems).
In this conserved model the mean is not
allowed to vary, as it is fixed by virtue of the model. It may then be the case that $\sigma^2$
has to saturate.
The limit of $\sigma^2$ is probed through an artificial situation where the association of
power laws with large variance is examined in a reverse way.
Understanding the emergence of power laws \cite{AKG:newman, AKG:reed} has itself been of great
interest for decades. There
is no single accepted framework which may explain the origin and the wide variety of
their appearances. It is often argued that the dynamics which generates power laws is
dominated by multiplicative processes. It is true that in an economy the wealth (or money)
of an agent multiplies and that is coupled to the large number of interacting agents.
The generic stochastic Lotka-Volterra systems like
$w_i(t+1)=\epsilon w_i(t)+a{\overline w(t)}-bw_i(t){\overline w(t)}$ have been
studied \cite{AKG:levy-solomon, AKG:solomon-richmond} to achieve power law distributions in wealth.
However, these kinds of models are not discussed in this review, as the basic intention
has been to understand the ideas behind models of conserved wealth, which the above is not.
In a twist of thinking, let us imagine a distribution curve which can be stretched in any
direction as one wishes, keeping the area under it invariant. If the
curve is pulled up too high around the left, then the right hand side has to fall off
quickly; an exponential decay is then a possible option.
On the other hand, if its width is stretched too far to the right (the distribution becomes fat),
it should decay fairly slowly, giving rise to a
possible power law fall at the right end while the area under the curve
is preserved. What makes such a stretching possible?
This review has been an attempt to integrate some ideas regarding models of wealth
distributions and to reinvent things with a fresh outlook. Along the way, some confusions,
conjectures and conclusions emerged, where many questions have possibly been answered with
further questions and doubts. At the end of the day, the usefulness of this review may be
measured by whatever further curiosity and enhanced attention to the subject it may generate.
\section*{Acknowledgment}
The author is grateful to Dietrich Stauffer for many important comments and criticisms at
different stages of publications of the results that are incorporated in this review.
\section{Introduction}
A \defn{point\,--\,line configuration} is a set $P$ of \defn{points} and a set $L$ of \defn{lines} together with an \defn{incidence relation}, where two points of $P$ can be incident with at most one line of~$L$ and two lines of~$L$ can be incident with at most one point of $P$. Throughout the paper, we only consider \defn{connected} configurations, where any two elements of~$P \sqcup L$ are connected via a path of incident elements. An \defn{isomorphism} (resp.~a \defn{duality}) between two configurations $(P,L)$ and $(P',L')$ is an incidence-preserving map from~$P \sqcup L$ to $P' \sqcup L'$ which sends points to points and lines to lines (resp.~ which exchanges points and lines).
According to~the underlying structure, we distinguish three different levels of configurations, in increasing generality:
\begin{description}
\item[\rm\defn{Geometric configuration}] Points and lines are points and lines in the real projective plane~$\mathbb{P}$.
\item[\rm\defn{Topological configuration}] Points are points in~$\mathbb{P}$, but lines are \defn{pseudolines}, \ie non-separating simple closed curves of~$\mathbb{P}$.
\item[\rm\defn{Combinatorial configuration}] Just an abstract incidence structure $(P,L)$ as described above, with no additional geometric structure.
\end{description}
In this paper, we focus on \defn{regular} configurations, \ie whose incidence relation is regular. More precisely, an \defn{\conf{n}{k}}~$(P,L)$ is a set $P$ of $n$ points and a set~$L$ of $n$ lines such that each point of $P$ is contained in $k$ lines of $L$ and each line of $L$ contains $k$ points of $P$.
We have represented three famous $3$-regular configurations in Figure~\ref{fig:famousConfigurations} to illustrate the previous definitions.
\begin{figure}
\centerline{\includegraphics[width=1\textwidth]{famousConfigurations}}
\caption{(Left) Fano's configuration is a combinatorial \conf{7}{3} but is not realizable topologically or geometrically. (Center) Kantor's topological \conf{10}{3} is not realizable geometrically. (Right) Pappus' configuration is a geometric \conf{9}{3}.}
\label{fig:famousConfigurations}
\end{figure}
Point\,--\,line configurations have a long history in discrete $2$-dimensional geometry. We refer to Branko Gr\"unbaum's recent monograph~\cite{Grunbaum1} for a detailed treatment of the topic and for historical references. As underlined in this monograph, the current study of regular configurations focusses on the following two problems:
\begin{enumerate}[(i)]
\item For a given~$k$, determine for which values of~$n$ do geometric, topological, and combinatorial \conf{n}{k}s exist.
\item Enumerate and classify \conf{n}{k}s for given $k$ and $n$.
\end{enumerate}
In particular, it is challenging to determine the minimal value $n$ for which \conf{n}{k}s exist and to enumerate these minimal configurations.
For $k \in \{3,4\}$, the existence of \conf{n}{k}s is almost completely understood.
When $k=3$, combinatorial \conf{n}{3}s exist for every $n\ge7$, but topological and geometric \conf{n}{3}s exist only for $n\ge9$.
When $k=4$, combinatorial \conf{n}{4}s exist iff $n\ge13$, topological \conf{n}{4}s exist iff $n\ge17$~\cite{BokowskiSchewe1, BokowskiGrunbaumSchewe} and geometric \conf{n}{4}s exist iff $n\ge18$~\cite{Grunbaum4, BokowskiSchewe2}, with the possible exceptions of $19$, $22$, $23$, $26$, $37$ and~$43$. For~$k \ge 5$, the situation is more involved, and the existence of combinatorial, topological and geometric \conf{n}{k}s is not determined in general.
Concerning the enumeration, an important effort has been done on combinatorial $(n_3)$- and \conf{n}{4}s. Table~\ref{table:comb} provides the known values of the number $c_k(n)$ of combinatorial \conf{n}{k}s up to isomorphism.
The first row of this table ($k=3$) appeared in~\cite{BettenBrinkmannPisanski}, except $c_3(19)$ which was announced later on in~\cite[p.275]{PBMOG}.
The second row ($k=4$) appeared in~\cite[p.34]{BettenBetten}, except $c_4(19)$ which was only computed recently in~\cite{PaezOsunaSanAugustinChi}.
\begin{table}[b]
\centerline{
$
\begin{array}{c|cccccccccccccc}
n & \le 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\
\hline
c_3(n) & 0 & 1 & 1 & 3 & 10 & 31 & 229 & 2\,036 & 21\,399 & 245\,342 & 3\,004\,881 & 38\,904\,499 & 530\,452\,205 & 7\,640\,941\,062 \\
c_4(n) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 4 & 19 & 1\,972 & 971\,171 & 269\,224\,652
\end{array}
$}
\medskip
\caption{The number $c_k(n)$ of combinatorial \conf{n}{k}s up to isomorphism.}
\label{table:comb}
\end{table}
In this paper, we are interested in the numbers $t_k(n)$ and $g_k(n)$ of topological and geometric \conf{n}{k}s up to isomorphism. To obtain these numbers, one method is to select the topologically or geometrically realizable configurations among the list of all combinatorial \conf{n}{k}s. For example, the numbers $t_3(n)$ and $g_3(n)$ presented in Table~\ref{table:topo&geom} were derived from a careful study of the corresponding combinatorial configurations (see the historical remarks and references in~\cite{Grunbaum1}).
In~\cite{Schewe}, Lars Schewe provided a general method to study the topological realizability of a combinatorial configuration using satisfiability solvers, and obtained the numbers $t_4(17)=1$ and $t_4(18)=16$.
In~\cite{BokowskiSchewe2}, J\"urgen Bokowski and Lars Schewe studied the geometric realizability of a combinatorial configuration. This question is clearly an instance of the existential theory of the reals (ETR): it boils down to determining whether a set of polynomial equalities and inequalities admits a solution in the reals (indeed, the inclusion of a point in a line can be tested by a polynomial equation). Using the construction sequences presented in~\cite{BokowskiSchewe2}, the complexity of this instance of ETR can be decreased significantly. With this method, J\"urgen Bokowski and Lars Schewe showed that the only combinatorial \conf{17}{4} which is topologically realizable is not geometrically realizable and they exhibited a geometric \conf{18}{4}.
Table~\ref{table:topo&geom} summarizes the values of $t_3(n), g_3(n), t_4(n)$ and $g_4(n)$ known up-to-date (we have additionally included our results in bold letters; see below). This table indicates a clear difference of behavior between $3$- and $4$-regular configurations. On the one hand, when $k=3$, most of the combinatorial \conf{n}{3}s are topologically and geometrically realizable for small values of~$n$. For~$n \le 12$, the only counter-examples are the Fano \conf{7}{3}, the M\"obius-Kantor \conf{8}{3}, and Kantor's \conf{10}{3} ---~see Figure~\ref{fig:famousConfigurations} (left \& center). On the other hand, when $k=4$, it is not reasonable to look for geometric \conf{n}{4}s among all combinatorial \conf{n}{4}s. To further extend our knowledge on geometric configurations, it thus seems crucial to limit our research to those combinatorial configurations which are already topologically realizable.
\begin{table}
$$
\begin{array}{c|ccc}
n & c_3(n) & t_3(n) & g_3(n) \\
\hline
\le 6 & 0 & 0 & 0 \\
7 & 1 & 0 & 0 \\
8 & 1 & 0 & 0 \\
9 & 3 & 3 & 3 \\
10 & 10 & 10 & 9 \\
11 & 31 & 31 & 31 \\
12 & 229 & 229 & 229 \\
13 & 2\,036 & ? & ?
\end{array}
\qquad\qquad
\begin{array}{c|ccc}
n & c_4(n) & t_4(n) & g_4(n) \\
\hline
\le 12 & 0 & 0 & 0 \\
13 & 1 & 0 & 0 \\
14 & 1 & 0 & 0 \\
15 & 4 & 0 & 0 \\
16 & 19 & 0 & 0 \\
17 & 1\,972 & 1 & 0 \\
18 & 971\,171 & 16 & \bb{2} \\
19 & 269\,224\,652 & \bb{4\,028} & ?
\end{array}
$$
\caption{The numbers $t_k(n)$ of topological \conf{n}{k}s and $g_k(n)$ of geometric \conf{n}{k}s up to isomorphism.}
\label{table:topo&geom}
\end{table}
Motivated by this observation, we present an algorithm for generating, for given $n$ and $k$, all topological \conf{n}{k}s up to isomorphism, without enumerating first all combinatorial \conf{n}{k}s. The algorithm sweeps the projective plane to construct a topological \conf{n}{k}~$(P,L)$, but only considers as relevant the events corresponding to the sweep of points of~$P$. This strategy enables us to identify along the way some isomorphic topological configurations, and thus to maintain a reasonable computation space and time.
We have developed two different implementations of this algorithm. The first one was written in \textsc{haskell}{} by the first author to develop the strategy of the enumeration process. Once the general idea of the algorithm was settled, the second author wrote another implementation in \textsc{java}{}, focusing on the optimization of computation space and time of the process.
We outline three applications of our algorithm.
First, the algorithm is interesting in its own right. Before describing some special methods for constructing topological configurations, Branko~Gr\"unbaum writes in \mbox{\cite[p.\,165]{Grunbaum1}} that \emph{``the examples of topological configurations presented so far have been ad hoc, obtained essentially through (lots of) trial and error''}. Our algorithm can reduce considerably the \emph{``trial and error''} method.
Second, our algorithm enables us to check and confirm all values of $t_4(n)$, for $n\le 18$, obtained in earlier papers. We can use for that a single method and reduce considerably the computation time (\eg the computation of the \conf{18}{4}s needed several months of CPU-time in~\cite{Schewe}, and only one hour with our \textsc{java}{} implementation).
Finally, this algorithm enables us to compute all $t_4(19) = 4028$ isomorphism classes of topological \conf{19}{4}s.
As an application of our enumeration results, we studied in detail the possible geometric realizations of the topological \conf{18}{4}s. Using a \textsc{maple}{} implementation of the construction sequence method of J\"urgen Bokowski and Lars Schewe~\cite{BokowskiSchewe2}, we obtain that there are precisely $2$ geometric \conf{18}{4}s: the first \conf{18}{4} constructed in~\cite{BokowskiSchewe2}, plus an additional one which appears for the first time in this paper. In contrast, deriving the list of geometric \conf{19}{4}s from the list of topological \conf{19}{4}s still requires some computational effort and is left to a subsequent paper.
The first section of this paper is devoted to the enumeration algorithm for isomorphism classes of topological configurations. The second section presents the application to the enumeration of geometric \conf{18}{4}s.
Topological configurations are pseudoline arrangements, or rank~$3$ oriented matroids. We assume the reader to have some basic knowledge on these topics. We refer to~\cite{Bokowski,BVSWZ,Knuth} for introductions.
\section{Topological configurations}
\label{sec:topoconf}
In this section, we present our algorithm to generate all isomorphism classes of topological \conf{n}{k}s, for given $n$ and $k$. Let us insist again on the crucial fact that we do not need to enumerate first all combinatorial \conf{n}{k}s. The main idea of the algorithm is to sweep the projective plane to construct a topological \conf{n}{k}~$(P,L)$, only focussing on the ``relative positions of the points of $P$'' and ignoring at first the ``relative positions of the other crossings of the pseudolines of $L$'' (precise definitions are given in Section~\ref{subsec:equivrelations}). This strategy enables us to identify along the way some isomorphic topological configurations, and thus to maintain a reasonable computation space and time.
\subsection{Three equivalence relations}
\label{subsec:equivrelations}
There are three distinct notions of equivalence on topological configurations.
The finest notion is the usual notion of topological equivalence between pseudoline arrangements in the projective plane: two configurations are \defn{topologically equivalent} if there is an homeomorphism of their underlying projective planes that sends one arrangement onto the other.
The coarsest notion is that of combinatorial equivalence: two \conf{n}{k}s are \defn{combinatorially equivalent} if they are isomorphic as combinatorial \conf{n}{k}s.
The intermediate notion is based on the graph of admissible mutations. Remember that a \defn{mutation} in a pseudoline arrangement is a local transformation of the arrangement where only one pseudoline $\ell$ moves, sweeping a single vertex $v$ of the remaining arrangement. It only changes the position of the crossings of~$\ell$ with the pseudolines incident to $v$. If those crossings are all \cross{2}s, the mutation does not perturb the \cross{k}s of the arrangement, and thus produces another topological \conf{n}{k}. We say that such a mutation is \defn{admissible}. Two configurations are \defn{mutation equivalent} if one can be obtained from the other by a (possibly empty) sequence of admissible mutations followed by an homeomorphism of the underlying projective space.
\begin{figure}[h]
\centerline{\includegraphics[scale=.75]{mutation}}
\caption{An admissible mutation.}
\label{fig:mutation}
\end{figure}
Obviously, topological equivalence implies mutation equivalence, which in turn implies combinatorial equivalence. The reciprocal implications are wrong.
As an illustration, the two \conf{18}{4}s depicted in Figure~\ref{fig:configurations_18_4} are combinatorially equivalent (the labels on the pseudolines provide a combinatorial isomorphism) but not topologically equivalent (the left one has $22$ quadrangles and $2$ pentagons, while the right one has $23$ quadrangles). In fact, one can even check that they are not mutation equivalent.
\begin{figure}[h]
\centerline{\includegraphics[width=1.3\textwidth]{configurations_18_4}}
\caption{Two \conf{18}{4}s which are combinatorially equivalent but neither mutation nor topologically equivalent.}
\label{fig:configurations_18_4}
\end{figure}
\subsection{Representation of arrangements}
\label{subsec:representation}
In this section, we state certain properties of configurations that we can assume without loss of generality. In particular, we choose a suitable representation of our pseudoline arrangements that we will use for the description of the algorithm in Section~\ref{subsec:description}.
\paragraph{Simple configurations}
A topological configuration $(P,L)$ is \defn{simple} if no three pseudolines of $L$ meet at a common point unless it is a point of~$P$. Since any topological \conf{n}{k} can be arbitrarily perturbed to become simple, we only consider simple topological \conf{n}{k}s. Once we obtain all simple topological \conf{n}{k}s, it is usual to obtain all (not necessarily simple) topological \conf{n}{k}s up to topological equivalence by exploring the mutation graph, and we do not report on this aspect.
In a simple \conf{n}{k} $(P,L)$, there are two kinds of intersection points among pseudolines of $L$: the points of $P$, which we also call \defn{\cross{k}s}, and the other intersection points, which we call \defn{\cross{2}s}. Each pseudoline of~$L$ contains $k$ \cross{k}s and $n-1-k(k-1)$ \cross{2}s. In total, a simple \conf{n}{k} has $n$ \cross{k}s and ${n \choose 2} - n({k \choose 2}-1)$ \cross{2}s.
\paragraph{Segment length distributions}
A \defn{segment} of a topological configuration~$(P,L)$ is the portion of a pseudoline of~$L$ located between two consecutive points of~$P$. If~$(P,L)$ is simple, a segment contains no \cross{k} except its endpoints, but may contain some \cross{2}s. The \defn{length} of a segment is the number of \cross{2}s it contains.
The circular sequence of the segment lengths on a pseudoline of $L$ forms a \mbox{$k$-partition} of ${n-1-k(k-1)}$.
We call a \defn{maximal representative} of a $k$-tuple the lexicographic maximum of its orbit under the action of the dihedral group (\ie rotations and reflections of the $k$-tuple). We denote by $\Pi$ the list of all distinct maximal representatives of the $k$-partitions of $n-1-k(k-1)$, ordered lexicographically. For example, when $k=4$ and $n=17$, we have $\Pi = [4,0,0,0]$, $[3,1,0,0]$, $[3,0,1,0]$, $[2,2,0,0]$, $[2,0,2,0]$, $[2,1,1,0]$, $[2,1,0,1]$, $[1,1,1,1]$.
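The list $\Pi$ is easily generated by brute force; the following sketch reproduces the example above (the ordering is inferred from that example: by underlying partition first, then lexicographically within each partition class):
\begin{verbatim}
from itertools import product

def max_rep(t):
    """Lexicographic maximum of the orbit of t under the dihedral group."""
    rots = [t[i:] + t[:i] for i in range(len(t))]
    return max(rots + [r[::-1] for r in rots])

def Pi(n, k):
    s = n - 1 - k*(k - 1)     # number of 2-crossings on each pseudoline
    reps = {max_rep(t) for t in product(range(s + 1), repeat=k) if sum(t) == s}
    return sorted(reps, key=lambda t: (sorted(t, reverse=True), t), reverse=True)

print(Pi(17, 4))   # [(4,0,0,0), (3,1,0,0), (3,0,1,0), (2,2,0,0),
                   #  (2,0,2,0), (2,1,1,0), (2,1,0,1), (1,1,1,1)]
\end{verbatim}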
\paragraph{A suitable representation}
We represent the projective plane as a disk where we identify antipodal boundary points. Given a simple topological \conf{n}{k}~$(P,L)$, we fix a representation of its underlying projective plane which satisfies the following properties (see Figure~\ref{fig:configuration} left).
The leftmost point of the disk (which is identified with the rightmost point of the disk) is a point of~$P$, which we call the \defn{base point}.
The $k$ pseudolines of~$L$ passing through the base point are called the \defn{frame pseudolines}, while the other $n-k$ pseudolines of $L$ are called \defn{working pseudolines}.
The frame pseudolines decompose the projective plane into $k$ connected regions which we call \defn{frame regions}.
A crossing is a \defn{frame} crossing if it involves a frame pseudoline and a \defn{working} crossing if it involves only working pseudolines.
The boundary of the disk is a frame pseudoline, which we call the \defn{base line}. We furthermore assume that the segment length distribution $\Lambda$ on the top half-circle appears in $\Pi$ (\ie is its own maximal representative), and that no maximal representative of the segment length distribution of a pseudoline of $L$ appears before $\Lambda$ in $\Pi$. In particular, the leftmost segment of the base line is a longest segment of the configuration.
\begin{figure}
\centerline{\includegraphics[width=1.45\textwidth]{configuration}}
\caption{Suitable representation of a \conf{17}{4}, and the corresponding wiring diagram.}
\label{fig:configuration}
\end{figure}
\paragraph{Wiring diagram and allowable sequence}
Another interesting representation of our \conf{n}{k} is the \defn{wiring diagram}~\cite{GoodmanPollack} of its working pseudolines (see Figure~\ref{fig:configuration} right). It is obtained by sending the base point to infinity in the horizontal direction. The frame pseudolines are $k$ horizontal lines, and the $n-k$ working pseudolines are vertical wires. The orders of the working pseudolines on a horizontal line sweeping the wiring diagram from top to bottom form the so-called \defn{allowable sequence} of the working arrangement, as defined in~\cite{GoodmanPollack}.
\subsection{Description of the algorithm}
\label{subsec:description}
Our algorithm can enumerate all topological \conf{n}{k}s up to either topological or combinatorial equivalence. In order to maintain a reasonable computation space and time, the main idea is to focus on the relative positions of the points of the configurations and to ignore at first the relative positions of the other crossings among the pseudolines. In other words, to work modulo mutation equivalence as defined in Section~\ref{subsec:equivrelations}.
More precisely, we first enumerate at least one representative of each mutation equivalence class of topological \conf{n}{k}s. From these representatives, we can derive:
\begin{enumerate}
\item all topological \conf{n}{k}s up to topological equivalence: we explore each connected component of the mutation graph with our representatives as starting nodes.
\item all combinatorial \conf{n}{k}s that are topologically realizable: we reduce the result modulo combinatorial equivalence.
\end{enumerate}
Since our motivation is to study geometric \conf{n}{k}s, we are only interested by point (2). We discuss a relatively efficient approach to test combinatorial equivalence in Section~\ref{subsec:reduction}. In this section, we give details on the different steps in our algorithm.
\paragraph{Sweeping process}
Our algorithm sweeps the projective plane to construct a topological \conf{n}{k}. The \defn{sweep line} sweeps the configuration from the base line on the top of the disk to the base line on the bottom of the disk. Inside each frame region, it always passes through the base point and always completes the configuration into an arrangement of $n+1$ pseudolines. When it switches from one frame region to the next one, it coincides with the separating frame pseudoline. Along the way, it sweeps completely all the working pseudolines. Except those located on the frame pseudolines, we assume that the crossings of the configuration are reached one after the other by the sweep line. After the sweep line swept a crossing, we remember the order of its intersections with the working pseudolines. In other words, the sweeping process provides us with the allowable sequence of the working pseudolines of our configuration.
Since admissible mutations are irrelevant for us, we only focus on the moments when our sweep line sweeps a \cross{k}. Thus, two different events can occur:
\begin{itemize}
\item when the sweep line sweeps a working \cross{k}, and
\item when the sweep line sweeps a frame pseudoline.
\end{itemize}
In the latter case, we sweep simultaneously $k-1$ frame \cross{k}s (each involving the frame pseudoline and $k-1$ working pseudolines), and $n-1-k(k-1)$ frame \cross{2}s (each involving the frame pseudoline and a working pseudoline). Between two such events, the sweep line may sweep working \cross{2}s which are only taken into account when we reach a new event. Let us repeat again that the precise positions of these working \cross{2}s are irrelevant in our enumeration.
To obtain all possible solutions, we maintain a stack with all subconfigurations which have been constructed so far, remembering for each one:
\begin{enumerate}[(i)]
\item the order of the working pseudolines on the current sweep line,
\item the number of frame and working \cross{k}s and \cross{2}s which have already been swept on each working pseudoline,
\item the length of the segment currently swept by the sweep line, and
\item the history of the sweeps performed to reach this subconfiguration.
\end{enumerate}
At each step, we remove the first subconfiguration from the stack, and insert all admissible subconfigurations which can arise after sweeping a new working \cross{k} or a new frame pseudoline. We finally accept a configuration once we have swept $k$ frame pseudolines and $n - k(k-1) - 1$ working \cross{k}s.
Any subconfiguration considered during the algorithm is a potential \conf{n}{k}. Throughout the process, we make sure that any pair of working pseudolines cross at most once, that the number of frame pseudolines (resp.~of working \cross{k}s) already swept never exceeds $k$ (resp.~$n-1-k(k-1)$), and that the total number of working \cross{2}s never exceeds $(n-2k)(n-1-k(k-1))/2$. Furthermore, on each pseudoline, the number of frame and working \cross{k}s (resp.~\cross{2}s) already swept never exceeds $k$ (resp.~${n-1-k(k-1)}$), the number of working $2$- and \cross{k}s already swept never exceeds ${n-1-k(k-1)}$, and the segment currently swept is not longer than the leftmost segment of the base~line.
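Schematically, each stack entry can be stored as a small record holding the data (i)--(iv) above. The sketch below shows the data layout only; the generator of admissible extensions is left as a stub, and the actual \textsc{haskell}{} and \textsc{java}{} implementations organize these data differently.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class SubConfiguration:
    order: tuple     # (i)   working pseudolines, ordered along the sweep line
    swept: tuple     # (ii)  per line: frame/working k- and 2-crossings swept
    seg_len: tuple   # (iii) length(s) of the segment(s) currently swept
    history: tuple   # (iv)  sweep events performed to reach this state

def admissible_extensions(conf):
    return []        # stub: sweep a working k-crossing or a frame pseudoline

stack = [SubConfiguration(order=(), swept=(), seg_len=(), history=())]
while stack:
    conf = stack.pop(0)                   # remove the first subconfiguration
    stack.extend(admissible_extensions(conf))
\end{verbatim}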
We now detail individually each step of the algorithm.
\paragraph{Initialization}
We initialize our algorithm by sweeping the base line. We only have to choose the distribution of the lengths of the segments on the base line. The possibilities are given by the list $\Pi$ of maximal representatives of $k$-partitions of $n-1-k(k-1)$.
\paragraph{Sweep a working \cross{k}}
If we decide to sweep a working \cross{k}, we have to choose the $k$ working pseudolines which intersect at this \cross{k}, and the direction of the other working pseudolines.
Since we are allowed to perform any admissible mutation, we can assume that all the pseudolines located to the left of the leftmost pseudoline of the working \cross{k}, and all those located to the right of the rightmost pseudoline of the working \cross{k} do not move.
We say that the pseudolines located between the leftmost and the rightmost pseudolines of the working \cross{k} form the \defn{kernel} of the working \cross{k}. We have to choose the positions of the pseudolines of the kernel after the flip: each pseudoline of the kernel either belongs to the working \cross{k}, or goes to its left, or goes to its right (see Figure~\ref{fig:workingkCrossing}).
\begin{figure}
\centerline{\includegraphics[scale=.7]{workingkCrossing}}
\caption{Sweeping a working \cross{k}.}
\label{fig:workingkCrossing}
\end{figure}
A choice of directions for the kernel is admissible provided that
\begin{enumerate}[(i)]
\item each pseudoline involved in the \cross{k} can still accept a working \cross{k};
\item each pseudoline of the kernel can still accept as many working \cross{2}s as implied by the choice of directions for the kernel;
\item no segment becomes longer than the leftmost segment of the base line; and
\item any two pseudolines which are forced to cross by the choice of directions for the kernel did not cross earlier (\ie they still form an inversion on the sweep line before we sweep the working \cross{k}).
\end{enumerate}
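One way to organize the enumeration of these choices is sketched below (an illustration only: since the two extreme pseudolines of the \cross{k} bound the kernel, exactly $k-2$ kernel pseudolines must join the crossing; the admissibility conditions (i)--(iv) above would be tested inside the loop):
\begin{verbatim}
from itertools import product

def kernel_choices(kernel_size, k):
    """Raw direction assignments for the kernel of a working k-crossing:
    each kernel pseudoline joins the crossing ('C'), or leaves to the
    left ('L') or to the right ('R')."""
    for choice in product('CLR', repeat=kernel_size):
        if choice.count('C') == k - 2:
            yield choice              # admissibility checks (i)-(iv) go here

print(sum(1 for _ in kernel_choices(5, 4)))   # C(5,2) * 2**3 = 80 candidates
\end{verbatim}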
\paragraph{Sweep a frame pseudoline}
If we decide to sweep a frame pseudoline, we have to choose the $(k-1)^2$ working pseudolines involved in one of the $k-1$ frame \cross{k}s, and the direction of the other working pseudolines.
As before, we can assume that a pseudoline does not move if it is located to the left of the leftmost pseudoline involved in one of the $k-1$ frame \cross{k}s, or to the right of the rightmost pseudoline involved in one of the $k-1$ frame \cross{k}s. Otherwise, we can perform admissible mutations to ensure this situation.
The other pseudolines form again the \defn{kernel} of the frame sweep, and we have to choose their positions after the flip. Each pseudoline of the kernel either belongs to one of the $k-1$ frame \cross{k}s, or can choose among $k$ possible directions: before the first frame \cross{k}, or between two consecutive frame \cross{k}s, or after the last frame \cross{k} (see Figure~\ref{fig:frameSweep}).
As before, a choice of directions for the kernel is admissible if
\begin{enumerate}[(i)]
\item each pseudoline involved (resp. not involved) in one of the $k-1$ frame \cross{k}s can still accept a frame \cross{k} (resp. a frame \cross{2}); \item each pseudoline of the kernel can still accept as many working \cross{2}s as implied by the choice of directions for the kernel;
\item no segment becomes longer than the leftmost segment of the base line; and
\item any two pseudolines which are forced to cross by the choice of directions for the kernel did not cross earlier (\ie they still form an inversion on the sweep line before we sweep the frame pseudoline).
\end{enumerate}
\begin{figure}
\centerline{\includegraphics[scale=.7]{frameSweep}}
\caption{Sweeping a frame pseudoline (right).}
\label{fig:frameSweep}
\end{figure}
\paragraph{Sweep the last frame region}
Our sweeping process finishes once we have swept $n-1-k(k-1)$ working \cross{k}s and $k$ frame pseudolines.
Each resulting subconfiguration should still be completed into a topological \conf{n}{k} with some necessary remaining \cross{2}s.
More precisely, we need to add on each working pseudoline as many working \cross{2}s as its number of inversions in the permutation given by the working pseudolines on the final sweep line, without creating segments that are too long.
After this last selection, all the constructed configurations are finally guaranteed to be valid topological \conf{n}{k}s. To make sure that we indeed obtain the representation presented in Section~\ref{subsec:representation}, we remove each configuration~$(P,L)$ in which the maximal representative of the segment length distribution of a pseudoline of $L$ appears in the list $\Pi$ before the segment length distribution of its base line.
\subsection{Testing combinatorial equivalence}
\label{subsec:reduction}
In Section~\ref{subsec:equivrelations}, we have seen three equivalence relations between topological \conf{n}{k}s: combinatorial, mutation and topological equivalence. As explained in Section~\ref{subsec:description}, our algorithm outputs at least one representative per mutation equivalence class of topological \conf{n}{k}s. However, we can obtain more than one representative per~class, and two topological \conf{n}{k}s which are not mutation equivalent can still be combinatorially equivalent. We thus need to reduce the output of our algorithm.
\enlargethispage{.1cm}
Note that the topological equivalence between two \conf{n}{k}s~$(P_1,L_1)$ and~$(P_2,L_2)$ can be tested in $\Theta(n^3)$ time. Indeed, since the topological configurations are embedded on the projective plane, the matchings between~$P_1$ and~$P_2$ and between~$L_1$ and~$L_2$ induced by an homeomorphism mapping~$(P_1,L_1)$ to~$(P_2,L_2)$ are determined by the images of any two distinguished pseudolines~$\ell,\ell'$ of~$L_1$. Therefore, for each of the $\Theta(n^2)$ possible choices for the images of~$\ell,\ell'$, we can test in linear time whether or not this choice yields an homeomorphism between~$(P_1,L_1)$ and~$(P_2,L_2)$.
Both combinatorial and mutation equivalences are however harder to decide computationally. We focus here on methods and heuristics to quickly test combinatorial~equivalence.
In order to limit unnecessary computation, we make use of \defn{combinatorial invariants} associated to configurations. If two configurations have distinct invariants, they cannot be combinatorially equivalent. Reciprocally, if they share the same invariant, it provides us with information on the possible combinatorial isomorphisms between these two configurations. The invariants we have chosen are the \defn{clique} and \defn{coclique distributions}. We furthermore need a \defn{multiscale invariant} technique, based on the notion of \defn{derivation} of a combinatorial invariant. We introduce these notions and methods in the next paragraphs.
\paragraph{Cliques and cocliques}
Let $(P,L)$ be a combinatorial configuration.
For $j\ge 3$, define a \defn{$j$-clique} of~$(P,L)$ to be any set of $j$ points of~$P$ which are pairwise related by lines of~$L$. For any point $p$ of $P$, let $\gamma_j(p)$ be the number of $j$-cliques containing $p$, and let $\gamma(p) := (\gamma_j(p))_{j \ge 3}$. The \defn{clique distribution} of~$(P,L)$ is the multiset $\gamma(P) := \multiset{\gamma(p)}{p \in P}$.
Similarly, a \defn{$j$-coclique} of~$(P,L)$ is a set of $j$ lines of~$L$ which are pairwise intersecting at points of~$P$. For any line $\ell$ of $L$, let $\delta_j(\ell)$ be the number of $j$-cocliques containing $\ell$, and let $\delta(\ell) := (\delta_j(\ell))_{j \ge 3}$. The \defn{coclique distribution} of~$(P,L)$ is the multiset $\delta(L) := \multiset{\delta(\ell)}{\ell \in L}$. In other words, the coclique distribution of $(P,L)$ is the clique distribution of its dual configuration~$(L,P)$.
The pair $(\gamma(P), \delta(L))$ of clique and coclique distributions of the configuration~$(P,L)$ is a natural and powerful combinatorial invariant of~$(P,L)$.
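For small configurations, the clique distribution can be computed by brute force, and the coclique distribution is then obtained by applying the same function to the dual configuration~$(L,P)$. A sketch, truncated at $4$-cliques and using Fano's \conf{7}{3} as a toy input:
\begin{verbatim}
from itertools import combinations
from collections import Counter

def clique_distribution(points, lines, max_j=4):
    """Multiset of gamma(p), where gamma_j(p) counts the j-cliques through p."""
    collinear = {frozenset(q) for l in lines for q in combinations(l, 2)}
    gamma = {p: [] for p in points}
    for j in range(3, max_j + 1):
        cnt = Counter()
        for S in combinations(points, j):
            if all(frozenset(q) in collinear for q in combinations(S, 2)):
                cnt.update(S)
        for p in points:
            gamma[p].append(cnt[p])
    return Counter(tuple(v) for v in gamma.values())

fano = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
print(clique_distribution(range(1, 8), fano))   # Counter({(15, 20): 7})
\end{verbatim}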
\paragraph{Derivation of combinatorial invariants}
Let~$(P,L)$ be a \conf{n}{k}. Assume that $\gamma:P\to X$ and $\delta:L\to Y$ are two functions from the point set and the line set of~$(P,L)$ respectively to arbitrary sets $X$ and $Y$, such that the multisets $\gamma(P) := \multiset{\gamma(p)}{p \in P} \subset X$ and $\delta(L) := \multiset{\delta(\ell)}{\ell \in L} \subset Y$ are combinatorial invariants of~$(P,L)$. The clique and coclique distributions are typical examples of such functions~$\gamma$ and~$\delta$. Observe that we again abuse notation: the functions~$\gamma$ and~$\delta$ usually depend on the configuration~$(P,L)$, but we consider that this dependence is clear from the context. Note however that the target sets~$X$ and~$Y$ of $\gamma$ and $\delta$ do not depend upon~$(P,L)$.
While reducing a set of configurations up to combinatorial equivalence, such a pair of combinatorial invariants~$(\gamma(P),\delta(L))$ can be used in two different ways:
\begin{enumerate}[(i)]
\item either to separate classes of combinatorial isomorphism: two configurations with different invariants cannot be combinatorially equivalent;
\item or to guess combinatorial isomorphisms: an isomorphism between two configurations should respect the invariants~$\gamma$ and~$\delta$.
\end{enumerate}
It often happens, however, that the pair of combinatorial invariants~$(\gamma(P),\delta(L))$ is neither precise enough to distinguish two configurations, nor to guess a combinatorial isomorphism between them. This occurs when many points (resp. many lines) of a configuration~$(P,L)$ get the same image under~$\gamma$ (resp. under $\delta$). Two fundamentally different cases can lead to this situation. On the one hand, the configuration~$(P,L)$ can have a large automorphism group. In this case, points (resp. lines) in a common orbit under the automorphism group cannot be distinguished combinatorially, and thus no invariant can speed up the isomorphism test. On the other hand, it could also be that the combinatorial invariant~$(\gamma(P),\delta(L))$ is not precise enough to distinguish the neighborhood properties of the points with the same image under $\gamma$ (resp. the lines with the same image under $\delta$). In the latter case, we can construct a new pair of combinatorial invariants which refines~$(\gamma(P),\delta(L))$, taking into account the neighborhoods of points and lines in the configuration. We call these invariants the \defn{derivatives} of $\gamma$ and $\delta$ and denote them $\gamma'$ and $\delta'$.
The \defn{derivative} of the invariant $\gamma : P \to X$ is the function $\gamma':L\to X^k$ which associates to a line $\ell$ of~$L$ the multiset $\gamma'(\ell) := \multiset{\gamma(p)}{p\in P, p\in \ell}$. Intuitively, the image $\gamma'(\ell)$ of a line $\ell$ contains all the combinatorial information carried by~$\gamma$ concerning the points of~$P$ contained in $\ell$. Similarly, the \defn{derivative} of the invariant $\delta : L \to Y$ is the function $\delta':P\to Y^k$ which associates to a point $p$ of $P$ the multiset ${\delta'(p) := \multiset{\delta(\ell)}{\ell \in L, p\in \ell}}$. The pair $(\delta'(P), \gamma'(L))$ is a pair of combinatorial invariants as defined previously, and it refines the previous pair~$(\gamma(P),\delta(L))$.
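In code, the derivation is a one-line operation once the invariants are stored as dictionaries. The following hedged \textsc{python} sketch assumes that lines are represented by tuples of points and that the values of $\gamma$ and $\delta$ are mutually comparable objects (e.g. tuples of integers), so that a sorted tuple serves as a canonical encoding of a multiset.
\begin{verbatim}
def derive_point_invariant(gamma, lines):
    # gamma': each line is mapped to the multiset of the
    # gamma-values of the k points it contains
    return {line: tuple(sorted(gamma[p] for p in line))
            for line in lines}

def derive_line_invariant(delta, points, lines):
    # delta': each point is mapped to the multiset of the
    # delta-values of the k lines passing through it
    return {p: tuple(sorted(delta[line]
                            for line in lines if p in line))
            for p in points}
\end{verbatim}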
If this new invariant is still not precise enough, we can consider higher order derivatives $\gamma^{(u)} := (\gamma^{(u-1)})'$ and $\delta^{(u)} := (\delta^{(u-1)})'$ of the initial invariants. We obtain this way a family of refinements of $(\gamma(P),\delta(L))$. Of course, these invariants ultimately carry the same combinatorial information. We use this family in the following multiscale technique.
\paragraph{Multiscale invariants}
The main idea of our reduction process is to use derivative invariants in a multiscale process. Consider a set~$\mathcal{C}$ of configurations that we want to reduce up to combinatorial equivalence. Assume that $\gamma:P\to X$ and $\delta:L\to Y$ are two functions defining a pair of combinatorial invariants~$(\gamma(P),\delta(L))$ of a configuration~$(P,L)$. We separate the configurations of~$\mathcal{C}$ into classes with distinct invariants, which we can consider independently. We now compute the derivative invariants $(\delta'(P), \gamma'(L))$ for each configuration~$(P,L)$. For a given class, we then have three possible situations:
\begin{enumerate}
\item If the derivative invariants $(\delta'(P), \gamma'(L))$ are not the same for all configurations $(P,L)$ of the class, we split the class into refined subclasses and reiterate the refinement (computing one more derivative).
\item If the derivative invariants $(\delta'(P), \gamma'(L))$ are the same for all configurations $(P,L)$ of the class but determine more information on the possible isomorphisms between configurations of the class than the original invariants~$(\gamma(P), \delta(L))$, then we reiterate the refinement.
\item Otherwise, the derivative invariants $(\delta'(P), \gamma'(L))$, as well as any further derivative, provide the same combinatorial information as the original invariants $(\gamma(P), \delta(L))$. Thus, we stop the refinement process and start a brute-force search for possible isomorphisms between the remaining configurations in the class. The efficiency of this brute-force search depends on the quality of the combinatorial information provided by the invariants~$(\gamma(P), \delta(L))$.
\end{enumerate}
This process can be seen as a multiscale process: typically, some invariants provide sufficient information to deal with certain classes of~$\mathcal{C}$, while other classes require far more precision (obtained by derivatives) to be reduced; see the sketch below.
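The following \textsc{python} sketch summarizes the loop under simplifying assumptions: \texttt{invariants} is a list of increasingly refined invariant functions (the clique/coclique pair followed by its successive derivatives), each returning a hashable value, and \texttt{isomorphic} is the brute-force test guided by the last invariants. For brevity, the sketch stops refining as soon as no class splits, whereas the process described above also reiterates when the derivatives sharpen the candidate isomorphisms.
\begin{verbatim}
def multiscale_reduce(configs, invariants, isomorphic):
    classes = [list(configs)]
    for inv in invariants:
        refined = []
        for cls in classes:
            buckets = {}
            for c in cls:
                buckets.setdefault(inv(c), []).append(c)
            refined.extend(buckets.values())
        if len(refined) == len(classes):  # no class was split
            break
        classes = refined
    representatives = []
    for cls in classes:  # brute-force search inside each class
        reps = []
        for c in cls:
            if not any(isomorphic(c, r) for r in reps):
                reps.append(c)
        representatives.extend(reps)
    return representatives
\end{verbatim}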
Using this multiscale technique, starting from the clique and coclique distributions of configurations, we managed to reduce the $69\,991$ topological \conf{19}{4}s produced by our sweeping algorithm into $4\,028$ classes of combinatorial equivalence in about one hour\footnote{Computation times on a 2.4 GHz Intel Core 2 Duo processor with 4 GB of RAM.}.
\subsection{Results}
\label{subsec:resultsTopo}
We present in this section the results of our algorithm. First, it enables us to check efficiently all former enumerations of topological \conf{n}{k}s. The \textsc{java}{} implementation developed by the second author finds all \conf{n}{k}s in less than a minute\textsuperscript{1} when $k=3$ and $n\le11$, or when $k=4$ and~${n\le 17}$. In particular, we checked that there is no topological \conf{n}{4} when $n\le 16$~\cite{BokowskiSchewe1}, and that there is a single topological \conf{17}{4} up to combinatorial isomorphism. This configuration is represented in Figure~\ref{fig:configuration_17_4}, and labeled in such a way that:
\begin{itemize}
\item the quarter-turn rotation which generates the symmetry group of the picture is the permutation (A)(B,C,D,E)(F,G,H,I)(J,K,L,M)(N,O)(P,Q); and
\item the permutation (A,a)(B,b)\,\dots\,(P,p)(Q,q) is a self-polarity of the topological configuration.
\end{itemize}
\begin{figure}[h]
\centerline{\includegraphics[scale=.55]{configuration_17_4}}
\medskip
\centerline{
\begin{tabular}{c|ccccccccccccccccc}
lines & a & b & c & d & e & f & g & h & i & j & k & l & m & n & o & p & q \\
\hline
& N & F & I & H & G & B & E & D & C & B & E & D & C & A & A & A & A \\
points & P & J & M & L & K & I & H & G & F & E & D & C & B & C & B & F & G \\
in lines & O & O & N & O & N & P & Q & P & Q & L & K & J & M & P & Q & N & O \\
& Q & M & L & K & J & J & M & L & K & F & I & H & G & E & D & H & I
\end{tabular}
}
\caption{The topological \conf{17}{4}~\cite{BokowskiGrunbaumSchewe}.}
\label{fig:configuration_17_4}
\end{figure}
When $k=4$ and $n=18$, we reconstructed the $16$ combinatorial equivalence classes of topological \conf{18}{4}s obtained in~\cite{Schewe} with satisfiability solvers. See~\cite[Figure~6]{BokowskiSchewe2} for a description of these configurations. To obtain this result, our implementation needed about one hour\footnote[1]{Computation times on a 2.4 GHz Intel Core 2 Duo processor with 4 GB of RAM.}, compared to several months of CPU-time required in~\cite{Schewe}. The two \conf{18}{4}s presented in Figure~\ref{fig:configurations_18_4}, which are combinatorially equivalent but not mutation equivalent, occurred while we were reducing the list of \conf{18}{4}s up to combinatorial equivalence, using as a first reduction a certain invariant of mutation equivalence defined in~\cite{BokowskiStrauszSantiago}. In the next section, we present two combinatorially distinct geometric \conf{18}{4}s obtained from the list of topological \conf{18}{4}s.
Finally, we want to report on preliminary results concerning the enumeration of topological \conf{19}{4}s, which initially motivated our work. In about $15$ days of computation time\textsuperscript{1}, we obtained the complete list of topological \conf{19}{4}s:
\begin{resultat}
There are precisely $4\,028$ topological \conf{19}{4}s up to combinatorial equivalence. Among them, $222$ are self-dual.
\end{resultat}
From this list, we can immediately extract examples of topological \conf{19}{4}s with non-trivial symmetry groups, closing along the way an open question of Branko Gr\"unbaum~\cite[p.~169, Question~5]{Grunbaum1}. The next step is naturally to study the possible geometric realizations of all these topological \conf{19}{4}s. This work in progress still requires an important computational effort and will be reported in a subsequent paper.
\section{Application to geometric \conf{18}{4}s}
\label{sec:geomconf}
As an application of the enumeration of topological configurations, we derive all isomorphism classes of geometric \conf{18}{4}s. To obtain it, we implemented in \textsc{maple}{} the construction sequence method of J\"urgen Bokowski and Lars Schewe~\cite{BokowskiSchewe2}. Among the $16$ topological \conf{18}{4}s (first generated by Lars Schewe~\cite{Schewe} and now confirmed by our \textsc{java}{} program), only $8$ are compatible with Pappus' and Desargues' Theorem. Starting from these remaining configurations, we run our \textsc{maple}{} code and obtain the following result:
\begin{resultat}
There are precisely two geometric \conf{18}{4}s up to combinatorial isomorphism.
\end{resultat}
The first geometric \conf{18}{4} was obtained in~\cite[Section~4]{BokowskiSchewe1}.
\begin{figure}[b]
\centerline{\includegraphics[scale=.31]{first_18_4}\qquad\includegraphics[scale=.54]{first_18_4_spherical}}
\bigskip
\begin{tabular}{c|cccccccccccccccccc}
lines & a & b & c & d & e & f & g & h & i & j & k & l & m & n & o & p & q & r \\
\hline
& D & F & E & A & C & B & A & C & B & G & I & H & J & L & K & A & C & B \\
points & G & I & H & E & D & F & G & I & H & M & O & N & M & O & N & F & E & D \\
in lines & E & D & F & B & A & C & J & L & K & N & M & O & K & J & L & L & K & J \\
& P & R & Q & R & Q & P & I & H & G & R & Q & P & P & R & Q & M & O & N
\end{tabular}
\caption{Bokowski and Schewe's geometric \conf{18}{4}~\cite{BokowskiSchewe1}.}
\label{fig:firstConfiguration184}
\end{figure}
In Figure~\ref{fig:firstConfiguration184}, we have labeled its points A,\,\dots,\,R and lines a,\,\dots,\,r in such a way that the permutation (A,a)(B,b)\,\dots\,(Q,q)(R,r) is a self-duality of the configuration.
The automorphism group of the combinatorial configuration is generated by the permutations:
\begin{center}
(A,B,C)(D,E,F)(G,H,I)(J,K,L)(M,N,O)(P,Q,R) \\
(A)(K)(B,C,L,J)(D,F,I,R)(E,Q,M,G)(H,O,N,P) \\
(A)(K)(B,L)(C,J)(D,I)(E,M)(F,R)(G,Q)(H,N)(O,P) \\
\end{center}
and is isomorphic to the symmetric group on $4$ elements. Together with the self-duality (A,a)(B,b)\,\dots\,(Q,q)(R,r), the automorphism group of the Levi graph of the configuration is thus isomorphic to $\mathfrak{S}_4\times\mathbb{Z}_2$. Observe that only the first permutation (A,B,C)\,\dots\,(P,Q,R) and the duality (A,a)(B,b)\,\dots\,(Q,q)(R,r) are geometrically visible, while the other generators of the automorphism group of the combinatorial configuration are not isometries of the geometric configuration of Figure~\ref{fig:firstConfiguration184}.
In Figure~\ref{fig:firstConfiguration184new}, we have performed a projective transformation of the configuration of Figure~\ref{fig:firstConfiguration184} (sending the four 3-valent points in Figure~\ref{fig:firstConfiguration184} to a square).
The last generator (A)(K)(B,L)\,\dots\,(O,P) then becomes a central symmetry in the new geometric \conf{18}{4}.
The realization space of this configuration consists of two points, both expressed with coordinates in $\mathbb{Q}\left[1+\sqrt{5}\right]$.
\begin{figure}[h]
\centerline{\includegraphics[scale=.31]{first_18_4_new}\qquad\includegraphics[scale=.54]{first_18_4_spherical_new}}
\caption{Another geometric realization of Bokowski and Schewe's geometric \conf{18}{4}~\cite{BokowskiSchewe1} of Figure~\ref{fig:firstConfiguration184}.}
\label{fig:firstConfiguration184new}
\end{figure}
The second geometric \conf{18}{4} is a result of our \textsc{maple}{} code and appears for the first time in this paper.
\begin{figure}
\centerline{\includegraphics[scale=.31]{second_18_4}\qquad\includegraphics[scale=.54]{second_18_4_spherical}}
\bigskip
\centerline{
\begin{tabular}{c|cccccccccccccccccc}
lines & a & b & c & d & e & f & g & h & i & j & k & l & m & n & o & p & q & r \\
\hline
& M & I & D & C & D & D & C & C & N & N & M & M & N & M & H & D & B & F \\
points & N & L & H & F & O & L & K & H & I & F & E & B & K & I & E & C & E & G \\
in lines & O & P & G & E & K & J & I & J & G & H & G & F & L & J & A & B & L & J \\
& P & Q & P & P & Q & R & R & O & B & R & R & Q & A & A & Q & A & O & K
\end{tabular}
}
\caption{The new geometric \conf{18}{4}.}
\label{fig:secondConfiguration184}
\end{figure}
\enlargethispage{.1cm}
In Figure~\ref{fig:secondConfiguration184}, we have labeled its points A,\,\dots,\,R and lines a,\,\dots,\,r in such a way that the permutation (A,a)(B,b)\,\dots\,(Q,q)(R,r) is a self-polarity of the configuration.
The automorphism group of the combinatorial configuration is generated by the permutation (Q)(R)(A,P)(B,O)(C,N)(D,M)(E,L)(F,K)(G,J)(H,I). Together with the self-polarity (A,a)(B,b)\,\dots\,(Q,q)(R,r), the automorphism group of the Levi graph of the configuration is thus isomorphic to $\mathbb{Z}_2\times\mathbb{Z}_2$. This group is completely realized in the geometric representation of Figure~\ref{fig:secondConfiguration184}.
The realization space of this configuration consists of two points, both expressed with coordinates in $\mathbb{Q}\left[\sqrt[3]{108+12\sqrt{93}}\right]$.
To conclude, we want to emphasize that the discovery of the first \conf{18}{4} of Figure~\ref{fig:firstConfiguration184} inspired Branko Gr\"unbaum to find a new family of \conf{6m}{4}s, for any $m \ge 3$ (see~\cite[Chapter~3, p.~171]{Grunbaum1} and Figure~\ref{fig:6mfamily}). This raises the following appealing open question:
\begin{probleme}
Generalize our second geometric \conf{18}{4} of Figure~\ref{fig:secondConfiguration184} to obtain another new infinite family of geometric \conf{n}{4}s.
\end{probleme}
\begin{figure}[b]
\centerline{\includegraphics[height=5cm]{18_4}\quad\includegraphics[height=5cm]{30_4}\quad\includegraphics[height=5cm]{42_4}}
\caption{The $(6m)$-family inspired by the geometric \conf{18}{4} of Figure~\ref{fig:firstConfiguration184}.}
\label{fig:6mfamily}
\end{figure}
For example, we have been able to derive from the second geometric \conf{18}{4} of Figure~\ref{fig:secondConfiguration184} a family of \conf{(18+17m)}{4}s. Unfortunately, the set~$18+17\mathbb{N}$ does not intersect the set~$\{19,22,23,26,37,43\}$ of values~$n$ for which no \conf{n}{4} is known.
\section*{Acknowledgements}
The first author thanks three colleagues from the Universidad Nacional Aut\'onoma de M\'exico, namely Ricardo Strausz Santiago, Rodolfo San Augustin Chi, and Octavio Paez Osuna, for many stimulating discussions about various earlier versions of the presented algorithm during his one year sabbatical stay (2008/2009) in M\'exico City. We also thank Leah Berman from the University of Alaska Fairbanks for valuable discussions and comments about the subject. We are grateful to Branko Gr\"unbaum, Toma\v{z} Pisanski, and Gunnar Brinkmann for encouragements and helpful communications. As frequent users, we are indebted to the development team of the geometric software \textsc{cinderella}{}, in particular J\"urgen Richter-Gebert and Ulrich Kortenkamp. Finally, we thank two anonymous referees for their comments and suggestions on the presentation.
\bibliographystyle{alpha}
The baby Skyrme model first appeared as a planar analogue of the Skyrme model in three-dimensional space. Since the target space of the Skyrme model is $SU(2)$, \cite{Skyrme1961}, \cite{Skyrme1962}, \cite{Skyrme1971}, the target space of the baby Skyrme model is $S^{2}$. In both models, static field configurations can be classified topologically by their winding numbers.
In analogy with the Skyrme model, the baby Skyrme model includes: the quadratic term, i.e. the term of the nonlinear $O(3)$ sigma model; the quartic term, the analogue of the Skyrme term; and the potential. The presence of the potential in the baby Skyrme model is necessary for the existence of static solutions with finite energy. However, the admissible class of potentials is wide, and different forms of them have been investigated recently, e.g. in \cite{Karlineretal2008}, \cite{Adametal2009}, \cite{Adametal2010}, \cite{JMSpeight2010}.
The Bogomolny bound and Bogomolny equations for the gauged sigma model were derived in \cite{Schroers}. In \cite{Schroersetal}, the Bogomolny bound and Bogomolny equations for the gauged full baby Skyrme model were obtained. In both papers \cite{Schroers} and \cite{Schroersetal}, some topologically non-trivial soliton solutions of the derived Bogomolny equations were found. The Lagrangian of the gauged full baby Skyrme model in (2+1) dimensions has the form \cite{Schroersetal}
\begin{equation}
\mathcal{L}=D_{\mu} \vec{S} \cdot D^{\mu} \vec{S} + \frac{\lambda^{2}}{4} ( D^{\mu} \vec{S} \times D^{\nu} \vec{S})^{2} + (1 - \vec{n} \cdot \vec{S}) + F^{2}_{\mu \nu} , \label{baby_Skyrme}
\end{equation}
where $\vec{S}$ is a three-component vector field such that $\mid \vec{S} \mid^{2} = 1$, $\lambda > 0$ is a coupling constant, $D_{\mu}\vec{S}=\partial_{\mu}\vec{S} + A_{\mu} (\vec{n} \times \vec{S})$ is the covariant derivative of the vector field $\vec{S}$, $F_{\mu\nu}$ is the field strength (also called the curvature), $\vec{n}=[0,0,1]$ is a unit vector, and $\mu, \nu = 0, 1, 2$.
The baby Skyrme model has a simpler structure than the three-dimensional Skyrme model, and so it can offer a better understanding of the solutions of the Skyrme model in (3+1) dimensions. On the other hand, even in the ungauged version, it is still a complicated, non-integrable, topologically non-trivial nonlinear field theory. For this reason, analytical studies of this model are difficult, and investigations of baby Skyrmions very often have a numerical character. Therefore, simplifications that keep us in the class of Skyrme-like models while simultaneously giving an opportunity for analytical calculations are important. One may, for example, try to determine which features of the solutions of the baby Skyrme model are fixed by which part of the model: one could neglect some particular part of the Lagrangian and investigate the simplified model. One may also simplify the problem of solving the field equations by deriving Bogomolny equations (sometimes called Bogomol'nyi equations) for the models mentioned above.
All solutions of the Bogomolny equations satisfy the Euler-Lagrange equations, whose order is higher than the order of the Bogomolny equations.
Bogomolny equations for the ungauged restricted baby Skyrme model with the special form $V=V(S^{3})$ were derived in \cite{Adametal2010}.\\
In \cite{Stepien2012}, Bogomolny decompositions for both ungauged models, the restricted and the full baby Skyrme one, were derived. It was also shown that in the case of the ungauged restricted baby Skyrme model, the Bogomolny decomposition exists for an arbitrary potential (in \cite{JMSpeight2010},
Bogomolny equations had been obtained for a potential being the square of some non-negative function with isolated zeroes, but by a method different from the one used in \cite{Stepien2012}). Next, it was also shown in \cite{Stepien2012} that in the case of the ungauged full baby Skyrme model, the set of solutions of the corresponding Bogomolny equations is a subset of the set of solutions of the Bogomolny equations for the ungauged restricted baby Skyrme model.\\
The Bogomolny equations for the gauged full baby Skyrme model in (2+0) dimensions, for some special form of the potential, were derived in \cite{Schroersetal} by using the technique first applied by Bogomolny in \cite{Bogomolny1976}, among others, to the nonabelian gauge theory. This method is based on a proper separation of the terms in the energy functional. The solutions of the Bogomolny equations found in this way minimize the energy functional and saturate the Bogomolny bound, i.e. an inequality connecting the energy functional and the topological charge.
In this paper we derive the Bogomolny equations (we call them the Bogomolny decomposition) for the gauged restricted baby Skyrme model in (2+0) dimensions. This model is characterized by the absence of the $O(3)$ term in (\ref{baby_Skyrme}). In the present paper we investigate the case of potentials of the form $V(1 - \vec{n} \cdot \vec{S})$.\\
In \cite{Adam_etal}, the Bogomolny equations for the gauged restricted baby Skyrme model in (2+0) dimensions, but for potentials of the form $V(S^{3})$, have been derived, and some non-trivial solutions of these equations have been obtained. However, in contrast to \cite{Adam_etal}, we derive the Bogomolny equations (the Bogomolny decomposition) by applying the so-called concept of strong necessary conditions, first presented in \cite{Sokalski1979} and extended in \cite{Sokalskietal2001}, \cite{Sokalskietal22001}, \cite{Sokalskietal2002}. We also derive the condition that must be satisfied by the potentials of the form $V(1 - \vec{n} \cdot \vec{S})$ for which the Bogomolny decomposition exists.
The procedure of deriving the Bogomolny decomposition from the extended concept of strong necessary conditions has been presented in \cite{Sokalskietal2002}, \cite{Stepien2003} and developed in \cite{Stepienetal2009}.\\
This paper is organized as follows. In the next subsections of this section we briefly describe the gauged restricted baby Skyrme model and the concept of strong necessary conditions. In Section 2, we derive the Bogomolny decomposition for the gauged restricted baby Skyrme model by using the concept of strong necessary conditions. Section 3 contains a summary.
\subsection{Gauged restricted baby Skyrme model}
The Lagrangian of the gauged restricted baby Skyrme model with the potential $V=V(1 - \vec{n} \cdot \vec{S})$ follows from the Lagrange density of the gauged full baby Skyrme model (\ref{baby_Skyrme}) when the $O(3)$-like term is absent
\begin{equation}
\mathcal{L} = \frac{\lambda^{2}}{4}(D_{\mu} \vec{S} \times D_{\nu} \vec{S})^{2} + F^{2}_{\mu \nu} + (1 - \vec{n} \cdot \vec{S}), \label{lagr_restr}
\end{equation}
where $\vec{S}$ is a three-component vector field such that $\mid \vec{S} \mid^{2} = 1$ and $D_{\mu}
\vec{S} = \partial_{\mu} \vec{S} + A_{\mu} (\vec{n} \times \vec{S})$ is the covariant derivative of the vector field $\vec{S}$. \\
In this paper we consider gauged restricted baby Skyrme model in (2+0) dimensions, with the energy functional of the following form
\begin{equation}
H = \frac{1}{2} \int d^{2} x \hspace{0.05 in} \mathcal{H} = \frac{1}{2} \int d^{2} x \bigg( \frac{\lambda^{2}}{4} (\epsilon_{ij} D_{i} \vec{S} \times D_{j} \vec{S})^{2} + F^{2}_{\mu \nu} + \gamma^{2} V(1 - \vec{n} \cdot \vec{S}) \bigg), \label{energy}
\end{equation}
where $x_{1}=x, \hspace{0.05 in} x_{2}=y$ and $i, j = 1, 2$. We make the stereographic projection
\begin{equation}
\vec{S} = \bigg[\frac{\omega+\omega^{\ast}}{1+\omega \omega^{\ast}}, \frac{-i(\omega-\omega^{\ast})}{1+\omega \omega^{\ast}}, \frac{1-\omega
\omega^{\ast}}{1+\omega \omega^{\ast}}\bigg],
\label{stereograf}
\end{equation}
where $\omega=\omega(x,y) \in \mathbb{C}$ and $x, y \in \mathbb{R}$.\\
\vspace{1 in}
Then, the density of energy functional (\ref{energy}) has the form
\begin{equation}
\begin{gathered}
\mathcal{H}=4\lambda_{1} \frac{[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x}) - A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]^{2}}{(1+\omega \omega^{\ast})^{4}} + \\
\lambda_{2}(A_{2,x} - A_{1,y})^{2} + V\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg),
\end{gathered}
\end{equation}
where, after rescaling, the constants $\lambda_{1}, \lambda_{2}$ appear instead of $\lambda$, the constant $\gamma$ has been absorbed into $V(1 - \vec{n} \cdot \vec{S})=V\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg)$, and $\omega_{,x} \equiv \frac{\partial \omega}{\partial x}$, etc.\\
The Euler-Lagrange equations for this model are as follows
\begin{equation}
\begin{gathered}
\frac{d}{dx}[N_{1}(i\omega^{\ast}_{,y}+A_{2}\omega^{\ast})]+\frac{d}{dy}[N_{1}(-i\omega^{\ast}_{,x}-A_{1}\omega^{\ast})]+
\frac{1}{4\lambda_{1}}N^{2}_{1}\omega^{\ast}(1+\omega\omega^{\ast})^{3} -\\
N_{1}(-A_{1}\omega^{\ast}_{,y}+A_{2}\omega^{\ast}_{,x}) -
V'\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) \frac{2\omega^{\ast}}{(1+\omega\omega^{\ast})^{2}} = 0,\\
c.c.\\
-2\lambda_{2}\frac{d}{dy}(A_{2,x}-A_{1,y})+N_{1} \cdot (\omega_{,y} \omega^{\ast} + \omega \omega^{\ast}_{,y}) =0 \\
2\lambda_{2}\frac{d}{dx}(A_{2,x}-A_{1,y})-N_{1} \cdot (\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}) =0
\end{gathered}
\end{equation}
where $N_{1}=\frac{8\lambda_{1}}{(1+\omega\omega^{\ast})^{4}} [i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x}) -
A_{1}(\omega_{,y} \omega^{\ast} + \omega \omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]$.
\subsection{The concept of strong necessary conditions}
The idea of the concept of strong necessary conditions is such that instead of considering of the Euler-Lagrange
equations,
\begin{equation}
F_{,u} - \frac{d}{dx}F_{,u_{,x}} - \frac{d}{dt}F_{,u_{,t}}=0, \label{el}
\end{equation}
following from the extremum principle, applied to the functional
\begin{equation}
\Phi[u]=\int_{E^{2}} F(u,u_{,x},u_{,t}) \hspace{0.05 in} dxdt, \label{functional}
\end{equation}
we consider the strong necessary conditions, \cite{Sokalski1979}, \cite{Sokalskietal2001}, \cite{Sokalskietal22001}, \cite{Sokalskietal2002}
\begin{gather}
F_{,u}=0, \label{silne1} \\
F_{,u_{,t}}=0, \label{silne2} \\
F_{,u_{,x}}=0, \label{silne3}
\end{gather}
where $F_{,u} \equiv \frac{\partial F}{\partial u}$, etc.
Obviously, all solutions of the system of equations (\ref{silne1}) - (\ref{silne3}) satisfy the Euler-Lagrange equation (\ref{el}). However, these solutions, if they exist, are very often trivial. So, in order to avoid such a situation, we make a gauge transformation of the functional (\ref{functional})
\begin{equation}
\Phi \rightarrow \Phi + Inv, \label{gauge_transf}
\end{equation}
where $Inv$ is a functional such that its local variation with respect to $u(x,t)$ vanishes:
$\delta Inv \equiv 0$.\\
Owing to this feature, the Euler-Lagrange equations (\ref{el}) and the Euler-Lagrange equations resulting from requiring the extremum of $\Phi + Inv$ are equivalent.
On the other hand, the strong necessary conditions (\ref{silne1}) - (\ref{silne3}) are not invariant with respect to the gauge transformation (\ref{gauge_transf}), and so we may expect to obtain non-trivial solutions. Let us note that the strong necessary conditions (\ref{silne1}) - (\ref{silne3}) constitute a system of partial differential equations of order lower than the order of the Euler-Lagrange equations (\ref{el}).
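As a toy illustration of the whole procedure, consider the one-dimensional density $F=\frac{1}{2}u_{,x}^{2}+V(u)$, gauged by the invariant $Inv=D_{x}W(u)$. The \textsc{sympy} sketch below (the computations for the full model in the next section were, of course, carried out by hand) reproduces the mechanism: the strong necessary conditions yield the Bogomolny equation $u_{,x}=-g(u)$, with $g \equiv W'$, together with the consistency condition $V=\frac{1}{2}g^{2}$.
\begin{verbatim}
import sympy as sp

u, ux = sp.symbols('u u_x')  # the field and its derivative are
                             # treated as independent variables
g = sp.Function('g')         # g = W', so Inv = D_x W = g(u)*u_x
V = sp.Function('V')

F = sp.Rational(1, 2)*ux**2 + V(u) + g(u)*ux  # gauged density

cond_ux = sp.diff(F, ux)  # u_x + g(u): the Bogomolny equation
cond_u = sp.diff(F, u)    # V'(u) + g'(u)*u_x

# consistency: substitute u_x = -g(u) into the second condition
print(sp.simplify(cond_u.subs(ux, -g(u))))
# -> Derivative(V(u), u) - g(u)*Derivative(g(u), u),
#    vanishing iff V = g**2/2 (up to an additive constant)
\end{verbatim}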
\section{Bogomolny decomposition of gauged restricted baby Skyrme model}
Now, we apply the concept of strong necessary conditions to the functional (\ref{energy}), in order to find Bogomolny decomposition.
An important step is to construct a topological invariant appropriate for the topology of this model. The construction of this topological invariant was given in \cite{Schroers}, \cite{Yang}
\begin{equation}
I_{1} = \vec{S} \cdot D_{1} \vec{S} \times D_{2} \vec{S} + F_{12}(1 - \vec{n} \cdot \vec{S}),
\end{equation}
where $D_{i}\vec{S}=\partial_{i}\vec{S} + A_{i} \vec{n} \times \vec{S}$, ($i=1,2$), is the covariant derivative of the vector field $\vec{S}$ and $F_{12}=\partial_{1} A_{2} - \partial_{2} A_{1}$ is the magnetic field.
After making the stereographic projection (\ref{stereograf}), we have:
\begin{equation}
\begin{gathered}
I_{1} = \frac{1}{(1+\omega\omega^{\ast})^{2}}[2(i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x}) -
A_{1}(\omega_{,y} \omega^{\ast} + \omega \omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}))] + \\
\frac{2 \omega \omega^{\ast}}{1+\omega\omega^{\ast}} (A_{2,x} - A_{1,y}).
\end{gathered}
\end{equation}
As it will turn out, it is useful to generalize the above expression so that the term $A_{2,x} - A_{1,y}$ is multiplied by some function of the argument $\frac{2 \omega \omega^{\ast}}{1+\omega\omega^{\ast}}$
\begin{equation}
\begin{gathered}
I_{1} =\lambda_{4}\bigg\{\frac{1}{(1+\omega\omega^{\ast})^{2}}[2G'_{1} \cdot (i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})
- A_{1}(\omega_{,y} \omega^{\ast} + \omega \omega^{\ast}_{,y}) +\\
A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}))] + G_{1} \cdot (A_{2,x} - A_{1,y})\bigg\}, \label{niezmiennik1}
\end{gathered}
\end{equation}
\vspace{0.2 in}
where $\lambda_{4}=const, G_{1}=G_{1}(\frac{2 \omega \omega^{\ast}}{1+\omega\omega^{\ast}})$ and $G'_{1}$ denotes the derivative of the function $G_{1}$ with respect to its argument: $\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}$.\\
\vspace{0.2 in}
We make the following gauge transformation
\begin{equation}
\begin{gathered}
\mathcal{H} \longrightarrow \tilde{\mathcal{H}}=4\lambda_{1} \frac{[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x}) -
A_{1}(\omega_{,y} \omega^{\ast} + \omega \omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]^{2}}{(1+\omega
\omega^{\ast})^{4}} + \\
\lambda_{2}(A_{2,x} - A_{1,y})^{2} + V\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) + \sum^{3}_{k=1} I_{k},
\label{przecech}
\end{gathered}
\end{equation}
\vspace{0.4 in}
where $I_{1}$ is given by (\ref{niezmiennik1}), $I_{2}= D_{x} G_{2}(\omega,\omega^{\ast}), I_{3}=D_{y} G_{3}(\omega
,\omega^{\ast}), D_{x} \equiv \frac{d}{dx}, D_{y} \equiv \frac{d}{dy}$ and $G_{k} \in \mathcal{C}^{2}$, ($k=1,2,3$), are some functions to be determined.\\
\vspace{0.5 in}
After applying the concept of strong necessary conditions to (\ref{przecech}), we obtain the so-called dual
equations
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,\omega} =
-16 \lambda_{1} \frac{[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]^{2}\omega^{\ast}}{(1+\omega
\omega^{\ast})^{5}} + \\
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(-A_{1}\omega^{\ast}_{,y}+A_{2}\omega^{\ast}_{,x}) + \\
V'\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) \frac{2\omega^{\ast}}{(1+\omega\omega^{\ast})^{2}} + \\
\lambda_{4}\bigg\{\frac{1}{(1+\omega\omega^{\ast})^{2}}[G''_{1} \frac{4\omega^{\ast}}{(1+\omega\omega^{\ast})^{2}}
(i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) +\\
A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}))] +
\frac{2G'_{1}(-A_{1}\omega^{\ast}_{,y}+A_{2}\omega^{\ast}_{,x})}{(1+\omega\omega^{\ast})^{2}} - \label{gorne1} \\
\frac{1}{(1+\omega\omega^{\ast})^{3}}[4G'_{1}(i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y}
\omega^{\ast} +\\
\omega\omega^{\ast}_{,y}) +
A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}))\omega^{\ast}] +
G'_{1}\frac{2\omega^{\ast}}{(1+\omega\omega^{\ast})^{2}}(A_{2,x}-A_{1,y})\bigg\} + \\
D_{x}G_{2,\omega}+D_{y}G_{3,\omega}=0,
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,\omega^{\ast}} =
-16 \lambda_{1} \frac{[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]^{2}\omega}{(1+\omega
\omega^{\ast})^{5}} + \\
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(-A_{1}\omega_{,y}+A_{2}\omega_{,x}) + \\
V'\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) \frac{2\omega}{(1+\omega\omega^{\ast})^{2}} + \\
\lambda_{4}\bigg\{\frac{1}{(1+\omega\omega^{\ast})^{2}}[G''_{1} \frac{4\omega}{(1+\omega\omega^{\ast})^{2}}
(i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) +\\
A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}))] +
\frac{2G'_{1}(-A_{1}\omega_{,y}+A_{2}\omega_{,x})}{(1+\omega\omega^{\ast})^{2}} - \label{gorne2} \\
\frac{1}{(1+\omega\omega^{\ast})^{3}}[4G'_{1}(i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y}
\omega^{\ast} +\\
\omega\omega^{\ast}_{,y}) +
A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x}))\omega] +
G'_{1}\frac{2\omega}{(1+\omega\omega^{\ast})^{2}}(A_{2,x}-A_{1,y})\bigg\} + \\
D_{x}G_{2,\omega^{\ast}}+D_{y}G_{3,\omega^{\ast}}=0
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,\omega_{,x}} =
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(i\omega^{\ast}_{,y}+A_{2}\omega^{\ast}) + \\
\frac{2\lambda_{4}G'_{1}(i\omega^{\ast}_{,y}+A_{2}\omega^{\ast})}{(1+\omega\omega^{\ast})^{2}} + G_{2,\omega} = 0, \label{dolne1}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,\omega_{,y}} =
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(-i\omega^{\ast}_{,x}-A_{1}\omega^{\ast}) + \\
\frac{2\lambda_{4}G'_{1}(-i\omega^{\ast}_{,x}-A_{1}\omega^{\ast})}{(1+\omega\omega^{\ast})^{2}} + G_{3,\omega} = 0, \label{dolne2}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,\omega^{\ast}_{,x}} =
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(-i\omega_{,y}+A_{2}\omega) + \\
\frac{2\lambda_{4}G'_{1}(-i\omega_{,y}+A_{2}\omega)}{(1+\omega\omega^{\ast})^{2}} + G_{2,\omega^{\ast}} = 0, \label{dolne3}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,\omega^{\ast}_{,y}} =
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(i\omega_{,x}-A_{1}\omega) + \\
\frac{2\lambda_{4}G'_{1}(i\omega_{,x}-A_{1}\omega)}{(1+\omega\omega^{\ast})^{2}} + G_{3,\omega^{\ast}} = 0, \label{dolne4}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,A_{1}} =
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(-\omega_{,y}\omega^{\ast}-\omega\omega^{\ast}_{,y}) + \\
\frac{2\lambda_{4}G'_{1}(-\omega_{,y}\omega^{\ast}-\omega\omega^{\ast}_{,y})}{(1+\omega\omega^{\ast})^{2}} = 0,
\label{dolne5}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,A_{2}} =
\frac{8\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega
\omega^{\ast}_{,x})]}{(1+\omega\omega^{\ast})^{4}}(\omega_{,x}\omega^{\ast}+\omega\omega^{\ast}_{,x}) + \\
\frac{2\lambda_{4}G'_{1}(\omega_{,x}\omega^{\ast}+\omega\omega^{\ast}_{,x})}{(1+\omega\omega^{\ast})^{2}} = 0,
\label{dolne55}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,A_{1,y}} =
-2\lambda_{2} (A_{2,x} - A_{1,y}) - \lambda_{4} G_{1}\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) = 0,
\label{dolne555}
\end{gathered}
\end{equation}
\begin{equation}
\begin{gathered}
\tilde{\mathcal{H}}_{,A_{2,x}} =
2\lambda_{2} (A_{2,x} - A_{1,y}) + \lambda_{4} G_{1}\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) = 0,
\label{dolne5555}
\end{gathered}
\end{equation}
where $G'_{1}, G''_{1}$ denote the derivatives of the function $G_{1}$ with respect to its argument:
$\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}$.
\vspace{0.1 in}
Now, we must make the equations (\ref{gorne1}) - (\ref{dolne5555}) self-consistent.\\
To this end, we need to reduce the number of independent equations by a proper choice of the functions $G_{k}, (k =1, 2, 3)$.
Usually, such ansatzes exist only for some special $V$, and in most cases of $V$ for many
nonlinear field models it is impossible to reduce the system of corresponding dual equations to Bogomolny equations. However, even then, such a system can be used to derive at least some particular set of solutions of the Euler-Lagrange equations. \\
Now, we consider $\omega, \omega^{\ast}, A_{i}, (i=1,2), G_{k}$, ($k=1, 2, 3$), as equivalent dependent variables, governed by the system of equations
(\ref{gorne1}) - (\ref{dolne5555}). We make two operations (similar operations were first made in \cite{Stepien2012} for the ungauged baby Skyrme models: the full and the restricted one).
Namely, as we see, after putting:
\begin{gather}
G'_{1} = -\frac{4\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]}{\lambda_{4}(1+\omega\omega^{\ast})^{2}}, \label{warG1}
\\
A_{2,x} - A_{1,y} = -\frac{\lambda_{4}}{2\lambda_{2}}G_{1}\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg),\\
G_{2}=const, \hspace{0.08 in} G_{3}=const, \label{warG2G3}
\end{gather}
the equations (\ref{dolne1})-(\ref{dolne5555}) become tautologies and we obtain the candidate for the Bogomolny decomposition:
\begin{gather}
\frac{4\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]}{\lambda_{4}(1+\omega\omega^{\ast})^{2}} = -G'_{1},
\label{rownBogomolny1}\\
2\lambda_{2} (A_{2,x} - A_{1,y}) + \lambda_{4} G_{1}\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) = 0.
\label{rownBogomolny2}
\end{gather}
Now, we must check when the equations (\ref{gorne1})-(\ref{gorne2}) are satisfied, provided that (\ref{rownBogomolny1})-(\ref{rownBogomolny2}) hold. Thus, we insert (\ref{warG1})-(\ref{warG2G3}) into (\ref{gorne1})-(\ref{gorne2}). We obtain a system of ordinary differential equations for $V$, whose solution is:
\begin{gather}
V=\frac{\lambda^{2}_{4}}{4}\bigg(\frac{1}{\lambda_{1}}(G'_{1})^{2} + \frac{1}{\lambda_{2}} G^{2}_{1}\bigg).
\end{gather}
So, we obtain the Bogomolny decomposition for the gauged restricted baby Skyrme model in (2+0) dimensions
\begin{equation}
\begin{gathered}
\frac{4\lambda_{1}[i(\omega_{,x}\omega^{\ast}_{,y}-\omega_{,y}\omega^{\ast}_{,x})-A_{1}(\omega_{,y} \omega^{\ast} + \omega
\omega^{\ast}_{,y}) + A_{2}(\omega_{,x} \omega^{\ast} + \omega \omega^{\ast}_{,x})]}{\lambda_{4}(1+\omega\omega^{\ast})^{2}} = -G'_{1},\\
A_{2,x} - A_{1,y} = -\frac{\lambda_{4}}{2\lambda_{2}}G_{1}\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg).
\label{bogomolny_decomp}
\end{gathered}
\end{equation}
for the potential $V(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}})$, satisfying
\begin{gather}
V=\frac{\lambda^{2}_{4}}{4}\bigg(\frac{1}{\lambda_{1}}(G'_{1})^{2} + \frac{1}{\lambda_{2}} G^{2}_{1}\bigg), \label{war_potencjal}
\end{gather}
where $G_{1}=G_{1}\bigg(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}}\bigg) \in \mathcal{C}^{2}$.
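The condition (\ref{war_potencjal}) can be used constructively: any function $G_{1}\in\mathcal{C}^{2}$ generates an admissible potential. As a hedged illustration (the particular choice of $G_{1}$ below is purely hypothetical), the potential can be computed symbolically:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')  # t stands for 2*w*w^*/(1 + w*w^*)
l1, l2, l4 = sp.symbols('lambda_1 lambda_2 lambda_4',
                        positive=True)

G1 = t*(2 - t)  # a sample choice of G_1, vanishing at t = 0

V = l4**2/4*(sp.diff(G1, t)**2/l1 + G1**2/l2)
print(sp.expand(V))
\end{verbatim}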
\section{Summary}
We applied the concept of strong necessary conditions to the gauged restricted baby Skyrme model in (2+0) dimensions. As a result, we obtained the Bogomolny decomposition (\ref{bogomolny_decomp}), i.e. the Bogomolny equations, for this model, for the class of potentials $V(\frac{2\omega\omega^{\ast}}{1+\omega\omega^{\ast}})$. We also derived the condition for the existence of this Bogomolny decomposition; it has the form (\ref{war_potencjal}).
\section{Acknowledgements}
The author thanks Dr Hab. A. Wereszczy\'{n}ski for interesting discussions about the gauged restricted baby Skyrme model, carried out in 2010. The author also thanks Dr Z. Lisowski for some interesting remarks.
\section{Computational resources}
The computations were carried out by using WATERLOO MAPLE Software on the computer ``mars'' at ACK-CYFRONET AGH in Krak\'{o}w.
This research was also supported in part by the PL-Grid Infrastructure.
In 1998, two independent studies of distant supernovae discovered the accelerated expansion of the universe \cite{Riess:1998cb,Perlmutter:1998np}, which is one of the most surprising astronomical discoveries in history. One can explain the accelerated expansion by introducing a component with negative pressure, usually called dark energy, and study its properties by cosmological observations. The cosmic microwave background (CMB) data measured by the \emph{Planck} satellite \cite{Aghanim:2018eyx} provided precise constraints on the cosmological parameters in the $\Lambda$ cold dark matter ($\Lambda$CDM) model, which is usually regarded as the standard model of cosmology \cite{Bahcall:1999xn} with the origin of dark energy being explained as the cosmological constant $\Lambda$.
The $\Lambda$CDM model fits the current cosmological observations quite well, but it suffers from two theoretical puzzles: the fine-tuning problem and the coincidence problem \cite{Weinberg:1988cp}. Some dynamical dark energy models may relieve these problems \cite{Joyce:2014kja,Zhao:2018fjj,Feng:2019mym,Li:2018ydj}.
However, the extra dark-energy parameters are difficult to constrain precisely with the CMB data alone, due to the strong parameter degeneracies \cite{Akrami:2018vks}.
The baryon acoustic oscillation (BAO) measurements from galaxy redshift surveys \cite{Beutler:2011hx,Ross:2014qpa,Alam:2016hwk}, as a representative low-redshift cosmological probe, are usually combined with the CMB data to break the cosmological parameter degeneracies. In the future, some other low-redshift cosmological probes are expected to yield large amounts of data, such as fast radio bursts (FRBs) and gravitational waves (GWs). Therefore, to what precision these two non-optical probes could measure cosmological parameters, and how many events are needed for precise cosmological parameter estimation in different dark energy models, are important questions at present.
FRBs are extremely bright, short-duration radio
signals. One of the important characteristics of FRBs is the high dispersion measure (DM), which contains valuable information on the cosmological distance they have traveled. In 2007, the first FRB event, FRB010724, was observed by the 64-m Parkes Radio Telescope in Australia \cite{Lorimer:2007qn}. Although FRB010724 has a high Galactic latitude, it was previously thought to be caused by artificial interference due to its low signal-to-noise ratio (SNR). The second FRB event, FRB010621, was reported in ref.~\cite{Keane:2012yh} in 2012. Later in 2013, Thornton et al. reported four new FRB samples \cite{Thornton:2013iua}, which made the study of FRBs an important new direction in radio astronomy. Recently, the Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB Project has released its first catalog of 535 FRBs, including 61 bursts from 18 previously reported repeating sources \cite{Amiri:2021tmj}. To date, the host galaxies and cosmological redshifts of 14 FRBs have been identified \cite{Spitler:2016dmz,Chatterjee:2017dqg,Tendulkar:2017vuq,Kokubo:2017kkg,Bassa:2017tke,Prochaska231,Ravi:2019alc,Bannister:2019iju,CHIMEFRB:2020bcn,chittidi2020dissecting,Bhandari:2020oyb,mannings2020high,Marcote:2020ljw,Macquart:2020lln,James:2021oep,Bhardwaj:2021xaa,Law:2020cnm,heintz2020host,simha2020disentangling}. These abundant data greatly exceed the amount of current GW data with electromagnetic (EM) counterparts and further suggest that FRBs may become a promising cosmological probe in the future.
The data of localized FRBs could be used to constrain cosmological parameters through the
Macquart relation, which provides a relationship between $\rm DM_{\rm IGM}$ and $z$ \cite{Macquart:2020lln}. There have been a series of works using FRBs as a cosmological probe to study the expansion history of the universe, such as estimating the cosmological parameters \cite{Deng:2013aga} and conducting cosmography \cite{Gao:2014iva} by using the FRB and Gamma-Ray Burst association, using FRBs to measure the Hubble parameter $H(z)$ \cite{Wu:2020jmx} and the Hubble constant \cite{Hagstotz:2021jzu,Wu:2021jyk}, using the FRB data combined with the BAO data or the type Ia supernovae (SN) data to break the cosmological parameter degeneracies \cite{Zhou:2014yta,Jaroszynski:2018vgh}, using FRB dispersion measures as distance measures \cite{Kumar:2019qhc}, probing compact dark matter \cite{Munoz:2016tmg,Wang:2018ydd}, testing Einstein's weak equivalence principle \cite{Wei:2015hwd,Yu:2018slt,Xing:2019geq}, and many other works \cite{Yang:2016zbm,Li:2017mek,Liu:2019jka,Yu:2017beg,Walters:2017afr}. {For a recent review, see ref.~\cite{Bhandari:2021thi}.}
In addition, there is also an idea of combining the FRB data with the GW data \cite{Wei:2018cgd}. Wei et al. noticed the fact that DM is proportional to the Hubble constant $H_{0}$ while luminosity distance $d_{\rm L}$ is inversely proportional to $H_{0}$, and they found that the product of DM from FRB and $d_{\rm L}$ from GW can yield a quantity free of $H_0$, which may be useful in cosmological parameter estimation. Inspired by this work, ref.~\cite{Li:2019klc} introduced a cosmology-independent estimate of the fraction of baryon mass in the intergalactic medium (IGM). In ref.~\cite{Zhao:2020ole}, the authors (including three of the authors in the present paper) also showed that combining the FRB data with the GW data could be helpful in cosmological parameter estimation. GWs provide a new method to measure the cosmic distance \cite{Schutz:1986gp}, dubbed as ``standard sirens" \cite{Holz:2005df}, which successfully avoids the possible systematics in the cosmic distance ladder method. With
the third-generation ground-based GW detectors, such as the Einstein Telescope (ET), large numbers of GW-EM events from the mergers of binary neutron stars (BNSs) are expected to be detected \cite{Zhao:2010sz}. Thus, GWs may become a new precise cosmological probe to determine various cosmological parameters \cite{Zhang:2019loq,Zhang:2018byx,Zhang:2019ylr,Li:2019ajo,Cai:2017aea,Cai:2016sby,Aasi:2013wya,Zhao:2010sz}. In particular, the GW data could be very helpful in breaking the cosmological parameter degeneracies when combined with other cosmological probes \cite{Wang:2018lun,Wang:2019tto,Wang:2021srv,Jin:2020hmc,Jin:2021pcv,Zhao:2019gyk}.
The capability of future FRB data in improving the cosmological parameter estimation in the $w\rm{CDM}$ and Chevallier-Polarski-Linder (CPL) models was studied in ref.~\cite{Zhao:2020ole}. It has been shown that the FRB data could break the parameter degeneracies in the CMB and GW data. But these two models, like most of the models considered in the literature \cite{Deng:2013aga,Gao:2014iva,Wu:2020jmx,Hagstotz:2021jzu,Wu:2021jyk,Zhou:2014yta,Jaroszynski:2018vgh}, are parametrized, phenomenological models, and thus it is important to consider some dark energy models with deep and solid theoretical foundations and see whether the FRB data are still useful in cosmological parameter estimation in these theoretical dark energy models. In this work, we investigate two dark energy models, i.e., the holographic dark energy (HDE) model and the Ricci dark energy (RDE) model. The HDE model is viewed as having a quantum gravity origin, since it is constructed by combining the holographic principle of quantum gravity with the effective quantum field theory \cite{Li:2004rb,Wang:2016och}. The HDE model not only can naturally explain the fine-tuning and coincidence problems \cite{Li:2004rb}, but also can fit the observational data well \cite{Zhang:2005hs,Zhang:2007sh,Chang:2005ph,Li:2013dha,Xu:2016grp}. Its theoretical variant, the RDE model, uses the average radius of the Ricci scalar curvature rather than the future event horizon of the universe as the infrared (IR) cutoff within the theoretical framework of holographic dark energy \cite{Gao:2007ep,Zhang:2009un}. Although the RDE model is not favored by the current observations \cite{Xu:2016grp}, we still study it as a demonstration of forecasting the capability of FRB data in cosmological parameter estimation.
In this paper, we shall first combine the future FRB data with the current CMB data from \emph{Planck}, and then we will study the combination of two future big-data low-redshift measurements, the FRB data and the GW standard siren data, in cosmological parameter estimation in the HDE and RDE models.
This paper is organized as follows. A brief description of the HDE and RDE models, the methods for simulating the FRB data and the GW data, and the current cosmological data are introduced in Section \ref{sec:Method}. The constraint results and relevant discussions are given in Section \ref{sec:Result}. We present our conclusions in Section \ref{sec:con}.
Throughout this paper, we adopt the units in which the speed of light equals 1.
\section{Methods and data}\label{sec:Method}
\subsection{Brief description of the HDE and RDE models}
According to the Bekenstein entropy bound, an effective field theory considered in a box of size $L$ with ultraviolet (UV) cutoff ${\varLambda}_{\rm uv}$ gives the total entropy $S=L^{3}\varLambda_{\rm uv}^{3}\le S_{\rm{BH}}\equiv \pi M^{2}_{\rm{Pl}}L^{2}$, where $S_{\rm{BH}}$ is the entropy of a black hole with the same size $L$ and $M_{\rm{Pl}}=1/\sqrt{8\pi G}$ is the reduced Planck mass. However, Cohen et al. pointed out that in quantum field theory a short distance (i.e., UV) cutoff is related to a long distance (i.e., IR) cutoff due to the limit set by forming a black hole, and proposed a more restrictive bound, i.e., the energy bound \cite{Cohen:1998zx}. If the quantum zero-point energy density $\rho_{\rm{vac}}$ is relevant to a UV cutoff, the total energy of a system with size $L$ would not exceed the mass of a black hole of the same size, namely, $L^{3}\rho_{\rm{vac}}\le LM^{2}_{\rm{Pl}}$. Obviously, the IR cutoff size of this effective quantum field theory is taken to be the largest length size compatible with this bound.
If we take the whole universe into account, the vacuum energy related to this holographic principle is viewed as dark energy, called ``holographic dark energy''. The dark energy density can be expressed as \cite{Li:2004rb}
\begin{align}
\rho_{\rm{de}}=3c^{2}M^{2}_{\rm{Pl}}L^{-2},
\end{align}
where $c$ is a dimensionless model parameter, which is used to characterize all the theoretical uncertainties in the effective quantum field theory, and this parameter is extremely important in phenomenologically determining the evolution of holographic dark energy.
If $L$ is chosen to be the Hubble scale of the universe, then the dark energy density will be close to the observational result. However, Hsu pointed out that the equation of state (EoS) of dark energy in this case is not correct \cite{Hsu:2004ri}. Li subsequently suggested that $L$ should be chosen to be the size of the future event horizon \cite{Li:2004rb},
\begin{align}
R_{\mathrm{eh}}(a)=a \int_{t}^{\infty} \frac{d t^{\prime}}{a}=a \int_{a}^{\infty} \frac{d a^{\prime}}{H(a') a^{\prime 2}},
\end{align}
where $a$ is the scale factor of the universe and $H(a)$ is the Hubble parameter as a function of $a$. In this case, the EoS of dark energy can realize the cosmic acceleration, and the model with such a setting is usually called the HDE model.
In the HDE model, the dynamical evolution of dark energy is governed by the following differential equations,
\begin{align}
\frac{1}{E(z)} \frac{d E(z)}{dz}&=-\frac{\Omega_{\mathrm{de}}(z)}{1+z}\left(\frac{1}{2}+\frac{\sqrt{\Omega_{\mathrm{de}}(z)}}{c}-\frac{3}{2 \Omega_{\mathrm{de}}(z)}\right),\\
\frac{d \Omega_{\mathrm{de}}(z)}{d z}&=-\frac{2 \Omega_{\mathrm{de}}(z)\left(1-\Omega_{\mathrm{de}}(z)\right)}{1+z}\left(\frac{1}{2}+\frac{\sqrt{\Omega_{\mathrm{de}}(z)}}{c}\right),
\end{align}
where $E(z)\equiv H(z)/H_{0}$ is the dimensionless Hubble parameter and $\Omega_{\rm{de}}(z)$ is the fractional density of dark energy. Solving these two differential equations with the initial conditions $\Omega_{\rm de}=1-\Omega_{\rm m0}$ and $E(0)=1$ will obtain the evolutions of $\Omega_{\rm{de}}(z)$ and $E(z)$.
Then from the energy conservation equations,
\begin{align}
\dot{\rho}_{\rm{m}}+3 H\rho_{\rm{m}}&=0,\nonumber\\
\dot{\rho}_{\rm{d e}}+3H(1+w) \rho_{\rm{d e}}&=0 ,
\end{align}
where a dot represents the derivative with respect to the cosmic time $t$ and $\rho_{\rm{m}}$ is the matter density, one can get the EoS of dark energy in the HDE model,
\begin{align}
w=-\frac{1}{3}-\frac{2\sqrt{\Omega_{\rm{de}}}}{3c} .
\end{align}
The RDE model is defined by choosing the average radius of the Ricci scalar curvature as the IR cutoff length scale in the theory.
In FRW cosmology, the Ricci scalar takes the form
\begin{align}
R=-6\left(\dot{H}+2 H^{2}+\frac{k}{a^{2}}\right) ,
\end{align}
where $k = 1$, $0$, and $-1$ stands for closed, flat, and open geometry, respectively, and we take $k = 0$ in the rest of this work. The dark energy density in the RDE model can be expressed as \cite{Gao:2007ep,Zhang:2009un}
\begin{align}
\rho_{\mathrm{de}}=3 \gamma M_{\mathrm{pl}}^{2}\left(\dot{H}+2 H^{2}\right),
\end{align}
where $\gamma$ is a positive constant redefined in terms of $c$. The evolution of the Hubble parameter is determined by the following differential equation,
\begin{align}
E^{2}=\Omega_{\mathrm{m}} e^{-3 x}+\gamma\left(\frac{1}{2} \frac{d E^{2}}{d x}+2 E^{2}\right),
\end{align}
with $x\equiv \ln a$. The solution to this differential equation is found to be
\begin{align}
E(z)=\left(\frac{2 \Omega_{\mathrm{m}}}{2-\gamma}(1+z)^{3}+\left(1-\frac{2 \Omega_{\mathrm{m}}}{2-\gamma}\right)(1+z)^{\left(4-\frac{2}{\gamma}\right)}\right)^{1 / 2}.
\end{align}
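In contrast to the HDE case, no numerical integration is needed here; a one-line sketch (with illustrative parameter values) suffices:
\begin{verbatim}
import numpy as np

def rde_E(z, omega_m0=0.3, gamma=0.35):
    a = 2*omega_m0/(2 - gamma)
    return np.sqrt(a*(1 + z)**3
                   + (1 - a)*(1 + z)**(4 - 2/gamma))
\end{verbatim}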
\subsection{Simulation of FRBs}\label{21}
When electromagnetic waves propagate in plasma, they interact with free electrons and undergo dispersion. The group velocities of EM waves vary with frequency, resulting in the lower-frequency signal being delayed. By measuring the time delay $(\Delta t)$ of the pulse signal between the highest frequency $(\nu_{\rm h})$ and the lowest frequency $(\nu_{\rm l})$, we can obtain the dispersion measure of an FRB,
\begin{align}
\mathrm{D M}=\Delta t \frac{2 \pi m_{\mathrm{e}} }{e^{2}} \frac{\left(\nu_{\rm l} \nu_{\mathrm{h}}\right)^{2}}{\nu_{\mathrm{h}}^{2}-\nu_{\rm l}^{2}} ,
\end{align}
where $m_{\rm e}$ is the electron mass and $e$ is the unit charge. The physical interpretation of DM is the integral of the electron number density along the line-of-sight, which is expressed as
\begin{align}
\mathrm{D M}=\int_{0}^{D} n_{\mathrm{e}}(l) \mathrm{d} l,
\end{align}
where $n_{\mathrm{e}}$ is the electron number density, $l$ is the path length, and $D$ is the distance to the FRB. The DMs of current FRB observations are mainly in the range of $100 \sim 3000$ $\mathrm{pc}~\mathrm{cm}^{-3}$ \cite{Amiri:2021tmj}, exceeding the dispersion contributed by the Milky Way by a factor of 10 to 20.
Until now, the progenitors of FRBs have not been definitively identified (except for
a Galactic magnetar), so the real redshift distribution of FRBs is still an open issue. We assume that the comoving number density distribution of FRBs is proportional to the cosmic star formation history (SFH) \cite{Hopkins:2006bw,Caleb:2015uuk}, and we thus obtain the SFH-based redshift distribution of FRBs \cite{Munoz:2016tmg},
\begin{align}
N_{\rm{SFH}}(z)=\mathcal{N}_{\rm{SFH}}\frac{\dot{\rho}_{*}{d^2_{\rm C}}(z)}{H(z)(1+z)}e^{-{d^2_{\rm{L}}}(z)/[2{d^2_{\rm{L}}}(z_{\rm cut})]},
\end{align}
where $\mathcal{N}_{\rm{SFH}}$ is a normalization factor, $d_{\rm C}$ is the comoving distance at redshift $z$, and $z_{\rm cut}=1$ is a Gaussian cutoff. The number of detected FRBs decreases at $z>z_{\rm cut}$ due to the instrumental signal-to-noise ratio threshold effect. The parameterized density evolution $\dot{\rho}_{*}(z)$ reads
\begin{align}
\dot{\rho}_{*}(z)=l \frac{a+b z}{1+(z / n)^{d}} ,
\end{align}
with $l=0.7$, $a=0.0170$, $b=0.13$, $n=3.3$, and $d=5.3$ \citep{Hopkins:2006bw,Cole:2000ea}.
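For mock-catalogue generation, one can draw FRB redshifts from $N_{\rm SFH}(z)$ by inverse-transform sampling on a grid; the sketch below (our illustration) assumes a flat $\Lambda$CDM background purely to evaluate $d_{\rm C}$, $d_{\rm L}$ and $H(z)$, with placeholder parameter values:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

H0, Om, c = 67.27, 0.3166, 2.998e5        # km/s/Mpc, -, km/s
z = np.linspace(1e-3, 3.0, 2000)
Hz = H0 * np.sqrt(Om*(1 + z)**3 + 1 - Om)
dC = cumulative_trapezoid(c/Hz, z, initial=0.0)     # Mpc
dL = (1 + z) * dC
rho = 0.7 * (0.0170 + 0.13*z) / (1 + (z/3.3)**5.3)  # SFH fit
dL_cut = np.interp(1.0, z, dL)                      # z_cut = 1
N = rho * dC**2 / (Hz*(1 + z)) * np.exp(-dL**2/(2*dL_cut**2))
cdf = np.cumsum(N); cdf /= cdf[-1]
z_frb = np.interp(np.random.rand(1000), cdf, z)     # 1000 mock z's
\end{verbatim}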
The total observed DM of an FRB consists of the contributions from the FRB's host galaxy, IGM, and the Milky Way \cite{Thornton:2013iua,Deng:2013aga},
\begin{align}\label{eq2}
\rm{DM}_{\rm{obs}}=\rm{DM}_{\rm{host}}+\rm{DM}_{\rm{IGM}}+\rm{DM}_{\rm{MW}}.
\end{align}
The second term on the right hand side, $\rm{DM}_{\rm{IGM}}$, relates to cosmology, and its average value is expressed as
\begin{align}\label{eq3}
\langle\mathrm{DM}_{\mathrm{IGM}}\rangle=\frac{3H_0\Omega_{\rm b}f_{\mathrm{IGM}}}{8\pi G m_{\mathrm{p}}}\int_0^z\frac{\chi(z')(1+z')dz'}{E(z')},
\end{align}
with
\begin{align}
\chi(z)=Y_{\rm H}\chi_\mathrm{{e,H}}(z)+\frac{1}{2}Y_{\rm He}\chi_\mathrm{{e,He}}(z),
\end{align}
where the parameter $\Omega_\mathrm{b}$ is the present-day baryon fractional density, $f_{\mathrm{IGM}}\simeq 0.83$ is the fraction of baryon mass in IGM \cite{Shull:2011aa}, $G$ is Newton's constant, $m_{\mathrm{p}}$ is the proton mass, $Y_{\rm H}=3/4$ is the hydrogen mass fraction, $Y_{\rm He}=1/4$ is the helium mass fraction, and the terms $\chi_\mathrm{{e,H}}$ and $\chi_\mathrm{{e,He}}$ are the ionization fractions for H and He, respectively. We take $\chi_\mathrm{{e,H}}=\chi_\mathrm{{e,He}}=1$ \cite{Fan:2006dp}, since both H and He are fully ionized when $z<3$.
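A direct numerical evaluation of eq.~(\ref{eq3}) can be written as below (our illustration; factors of $c$ are restored explicitly for unit bookkeeping, and the cosmological parameter values are placeholders):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c, G, m_p = 2.998e8, 6.674e-11, 1.673e-27   # SI units
Mpc = 3.086e22
H0 = 67.27e3 / Mpc                          # s^-1
Ob, f_igm, chi = 0.0493, 0.83, 7.0/8.0      # chi for full ionization
Om = 0.3166

def E(z):
    return np.sqrt(Om*(1 + z)**3 + 1 - Om)

def dm_igm(z):
    pref = 3*c*H0*Ob*f_igm / (8*np.pi*G*m_p)          # m^-2
    I, _ = quad(lambda zp: chi*(1 + zp)/E(zp), 0.0, z)
    return pref * I / 3.086e22                        # -> pc cm^-3

print(dm_igm(1.0))   # ~ 0.9e3 pc cm^-3 at z = 1
\end{verbatim}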
From eq.~(\ref{eq2}), if we can determine $\rm{DM}_{\rm{obs}}$, $\rm{DM}_{\rm{host}}$, and $\rm{DM}_{\rm{MW}}$, the remaining term $\rm{DM}_{\rm{IGM}}$ can be extracted. The total uncertainty of $\rm{DM}_{\rm{IGM}}$ is expressed as
\begin{align}\label{eq6}
\sigma_{\rm{DM}_{\rm{IGM}}}=\left[\sigma_{\rm obs}^{2}+\sigma_{\rm MW}^{2}+\sigma_{\rm IGM}^{2}
+\left(\frac{\sigma_{\rm host}}{1+z}\right)^{2} \right]^{1/2}.
\end{align}
We take the observational uncertainty $\sigma_{\rm {obs}}=1.5~{\rm {pc~cm^{-3}}}$ \cite{Petroff:2016tcr}. From the Australia Telescope National Facility pulsar catalog \cite{Manchester:2004bp}, the average $\sigma_{\rm MW}$ is about $10~{\rm {pc~cm^{-3}}}$ for sources at high Galactic latitudes. Due to the inhomogeneity of the baryonic matter in the IGM, the deviation of an individual event from the mean $\rm{DM}_{\rm{IGM}}$ is described by $\sigma_{\rm {IGM}}$. Here we use the following fitting formula \cite{Li:2019klc},
\begin{align}\label{sigmaigm}
\sigma_{\rm IGM}=\begin{cases}\frac{52-45z-263z^2+21z^3+582z^4}{1-4z+7z^2-7z^3+5z^4}, &z\leq 1.03,\\
-416+270z+480z^2+23z^3-162z^4, &1.03<z\leq 1.13,\\
38\arctan [0.6z+1]+17, &z>1.13,
\end{cases}
\end{align}
which is fitted from the simulations in refs.~\citep{FaucherGiguere:2011cy,Conroy:2013iaa}. $\sigma_{\rm{host}}$ is hard to estimate because it depends on the individual properties of an FRB, for example, the type of the host galaxy, the location of the FRB within the host galaxy, and the near-source plasma. We take $\sigma_{\rm{host}} = 30 ~{\rm {pc~cm^{-3}}}$ as the uncertainty of ${\rm DM_{host}}$.
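The total uncertainty of eq.~(\ref{eq6}), with the piecewise fit of eq.~(\ref{sigmaigm}), can be encoded as follows (our illustration, in ${\rm pc~cm^{-3}}$):
\begin{verbatim}
import numpy as np

def sigma_igm(z):
    # piecewise fit of sigma_IGM(z), in pc cm^-3
    if z <= 1.03:
        return ((52 - 45*z - 263*z**2 + 21*z**3 + 582*z**4) /
                (1 - 4*z + 7*z**2 - 7*z**3 + 5*z**4))
    if z <= 1.13:
        return -416 + 270*z + 480*z**2 + 23*z**3 - 162*z**4
    return 38*np.arctan(0.6*z + 1) + 17

def sigma_dm_igm(z, s_obs=1.5, s_mw=10.0, s_host=30.0):
    return np.sqrt(s_obs**2 + s_mw**2 + sigma_igm(z)**2
                   + (s_host/(1 + z))**2)
\end{verbatim}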
The first CHIME/FRB catalog suggests a sky rate of $818 \pm 64~({\rm stat.})^{+220}_{-200}~({\rm sys.}) $ sky$^{-1}$ day$^{-1}$ above a fluence of 5 Jy ms at 600 MHz. Currently, among hundreds of FRB sources, 14 FRBs have been localized to host galaxies. Therefore, we expect that $\sim 10$ FRBs with redshifts can be detected per day \cite{Walters:2017afr}. Thus, we take $N_{\rm FRB}=1000$ as a normal expectation and $N_{\rm FRB}=10000$ as an optimistic expectation for a few years of observation, with $N_{\rm FRB}$ being the number of FRB events.
\subsection{Simulation of standard sirens}
To simulate the GW standard siren data generated by BNS mergers from ET, we first need to assume the redshift distribution of GWs \cite{Zhao:2010sz,Cai:2016sby},
\begin{align}
P(z) \propto \frac{4 \pi d_{\mathrm{C}}^{2}(z) R(z)}{H(z)(1+z)},
\end{align}
where $R(z)$ is the time evolution of the burst rate with the form \cite{Schneider:2000sg,Cutler:2009qv,Cai:2016sby}
\begin{align}
R(z)=\left\{\begin{array}{rcl}
1+2 z, & z \leq 1, \\
\frac{3}{4}(5-z), & 1<z<5, \\
0, & z \geq 5.
\end{array}\right.
\end{align}
By observing the GW waveform of a compact binary merger, one could independently determine the luminosity distance $d_{\rm L}$ to the GW source. If the redshift of the GW source is obtained through the observation of the EM counterpart, then the $d_{\rm L}$--$z$ relation can be established to study the expansion history of the universe \cite{Holz:2005df}.
An incoming GW signal $h(t)$ could be written as a linear combination of two wave polarizations in the transverse traceless gauge,
\begin{align}
h(t)=F_+(\theta, \phi, \psi)h_+(t)+F_\times(\theta, \phi, \psi)h_\times(t),\label{ht}
\end{align}
where $\psi$ is the polarization angle, ($\theta$, $\phi$) are the location angles of the GW source, and $+$ and $\times$ denote the plus and cross polarizations, respectively. The antenna pattern functions of one Michelson-type interferometer of ET are \cite{Zhao:2010sz}
\begin{align}
F_+^{(1)}(\theta, \phi, \psi)=&~~\frac{{\sqrt 3 }}{2}\Big[\frac{1}{2}(1 + {\cos ^2}\theta )\cos (2\phi )\cos (2\psi ) - \cos \theta \sin (2\phi )\sin (2\psi )\Big],\label{equa:F1}\\
F_\times^{(1)}(\theta, \phi, \psi)=&~~\frac{{\sqrt 3 }}{2}\Big[\frac{1}{2}(1 + {\cos ^2}\theta )\cos (2\phi )\sin (2\psi ) + \cos \theta \sin (2\phi )\cos (2\psi )\Big].\label{equa:F2}
\end{align}
Since the three interferometers of ET are inclined at $60^\circ$ with respect to each other, the other two pattern functions are $F_{+,\times}^{(2)}(\theta, \phi, \psi)=F_{+,\times}^{(1)}(\theta, \phi+2\pi/3, \psi)$ and $F_{+,\times}^{(3)}(\theta, \phi, \psi)=F_{+,\times}^{(1)}(\theta, \phi+4\pi/3, \psi)$.
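These pattern functions translate directly into code; the sketch below (our illustration) returns $(F_+, F_\times)$ for the three ET interferometers:
\begin{verbatim}
import numpy as np

def F_plus_cross(theta, phi, psi):
    # F_+ and F_x of one Michelson interferometer (see text)
    a = 0.5*(1 + np.cos(theta)**2) * np.cos(2*phi)
    b = np.cos(theta) * np.sin(2*phi)
    Fp = (np.sqrt(3)/2) * (a*np.cos(2*psi) - b*np.sin(2*psi))
    Fx = (np.sqrt(3)/2) * (a*np.sin(2*psi) + b*np.cos(2*psi))
    return Fp, Fx

def et_patterns(theta, phi, psi):
    # the three interferometers differ by phi -> phi + 2k*pi/3
    return [F_plus_cross(theta, phi + 2*k*np.pi/3, psi)
            for k in range(3)]
\end{verbatim}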
Applying the stationary phase approximation to the time domain signal $h(t)$, we can get its Fourier transform $\tilde{h}(f)$ \cite{Zhao:2010sz},
\begin{align}
\tilde{h}(f)=\mathcal{A} f^{-7 / 6} \exp \left\{i\left(2 \pi f t_{c}-\pi / 4+2 \psi(f / 2)-\varphi_{I,(2,0)}\right)\right\},
\end{align}
where ``$\sim$" above a function denotes the Fourier transform and $\mathcal{A}$ is the amplitude in the Fourier space,
\begin{align}
\mathcal{A}=&~~\frac{1}{d_{\rm L}}\sqrt{F_+^2(1+\cos^2\iota)^2+4F_\times^2\cos^2\iota} \sqrt{5\pi/96}\pi^{-7/6}\mathcal{M}_{\rm c}^{5/6},
\label{equa:A}
\end{align}
where $\iota$ is the inclination angle between the direction of the binary's orbital angular momentum and the line of sight. The observed chirp mass is defined as $\mathcal{M}_{\rm c}=(1+z)M \eta^{3/5}$, with $M=m_1+ m_2$ being the total mass of the coalescing binary system with component masses $m_{1}$ and $m_{2}$, and $\eta=m_1 m_2/M^2$ being the symmetric mass ratio. $\psi(f)$ and $\varphi_{I,(2,0)}$ are given by \cite{Sathyaprakash:2009xs,Blanchet:2004bb}
\begin{align}
\psi(f)&=-\psi_{c}+\frac{3}{256 \eta} \sum_{i=0}^{7} \psi_{i}(2 \pi M f)^{(i-5) / 3},\nonumber\\
\varphi_{I,(2,0)}&=\tan ^{-1}\left(-\frac{2 \cos (\iota) F_{\times}}{\left(1+\cos ^{2}(\iota)\right) F_{+}}\right),
\end{align}
where $\psi_{c}$ is the coalescence phase, and the detailed expressions of the post-Newtonian (PN) coefficients $\psi_{i}$ are given in ref.~\cite{Sathyaprakash:2009xs}. Notice that here the $\psi_i$'s are the PN coefficients, different from the $\psi$ in eqs.~(\ref{ht})--(\ref{equa:F2}), which is the polarization angle.
The total SNR of ET is
\begin{equation}
\rho=\sqrt{\sum\limits_{i=1}^{3}(\rho^{\left(i\right)})^2},
\label{euqa:sum}
\end{equation}
where $\rho^{(i)}=\sqrt{\left\langle \tilde{h}^{(i)},\tilde{h}^{(i)}\right\rangle}$,
with the inner product being defined as
\begin{equation}
\left\langle{a,b}\right\rangle=4\int_{f_{\rm lower}}^{f_{\rm upper}}\frac{\tilde a(f)\tilde b^\ast(f)+\tilde a^\ast(f)\tilde b(f)}{2}\frac{df}{S_{\rm n}(f)},
\label{euqa:product}
\end{equation}
where $S_{\rm n}(f)$ is the one-sided noise power spectral density. We interpolate the ET sensitivity data \cite{ETcurve-web} to obtain the fitting function of $S_{\rm n}(f)$. We choose $f_{\rm lower} = 1$ Hz as the lower cutoff frequency and $f_{\rm upper} = 2/(6^{3/2}2\pi M_{\rm obs})$ as the upper cutoff frequency, with $M_{\rm obs}=(m_{1}+m_{2})(1+z)$ \cite{Zhao:2010sz}.
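On a discrete frequency grid, the inner product and total SNR can be sketched as follows (our illustration; the $\tilde h^{(i)}$ arrays and $S_{\rm n}$ are assumed to be sampled on a common grid of spacing ${\rm d}f$):
\begin{verbatim}
import numpy as np

def inner(a, b, Sn, df):
    # discrete version of <a,b> = 4 Int Re[a b*]/S_n df
    return 4.0 * np.sum((a*np.conj(b)).real / Sn) * df

def total_snr(h_list, Sn, df):
    # quadrature sum over the three ET interferometers
    return np.sqrt(sum(inner(h, h, Sn, df) for h in h_list))
\end{verbatim}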
The luminosity distance $d_{\rm L}$ from GW observation is directly used to constrain cosmological parameters. The total error of $d_{\rm L}$ is expressed as
\begin{align}\label{eq11}
\sigma_{d_{\mathrm{L}}}=\sqrt{\left(\sigma_{d_{\mathrm{L}}}^{\text {inst }}\right)^{2}+\left(\sigma_{d_{\mathrm{L}}}^{\text {lens }}\right)^{2}+\left(\sigma_{d_{\mathrm{L}}}^{\mathrm{pv}}\right)^{2}},
\end{align}
where $\sigma^{\rm inst}_{d_{\rm L}}$, $\sigma^{\rm lens}_{d_{\rm L}}$, and $\sigma^{\rm pv}_{d_{\rm L}}$ are the instrumental error, the weak-lensing error, and the peculiar-velocity error of the luminosity distance, respectively. In our previous work \cite{Zhao:2020ole}, we adopted an approximation to the instrumental error, $\sigma_{d_{\rm L}}^{\rm inst}\simeq 2d_{\rm L}/\rho$, in which the factor of 2 accounts for the maximal effect of the correlation between the luminosity distance and the inclination angle \cite{li2015extracting}. However, this approximation does not properly capture the covariance between the luminosity distance and the other parameters and may lead to biased results. Hence, in this work we employ a full Fisher matrix analysis to correctly account for the parameter correlations. The Fisher information matrix of the GW signal can be expressed as
\begin{align}
\boldsymbol{F}_{i j}=\left\langle\frac{\partial \boldsymbol{h}(f)}{\partial \theta_{i}}, \frac{\partial \boldsymbol{h}(f)}{\partial \theta_{j}}\right\rangle,
\end{align}
where $\boldsymbol{h}$ is given by three interferometers of ET,
\begin{align}
\boldsymbol{h}(f)=\left[\frac{\tilde{h}_{1}(f)}{\sqrt{S_{\mathrm{n}}(f)}}, \frac{\tilde{h}_{2}(f)}{\sqrt{S_{\mathrm{n}}(f)}}, \frac{\tilde{h}_{3}(f)}{\sqrt{S_{\mathrm{n}}(f)}}\right]^{\mathrm{T}},
\end{align}
and $\theta_{i}$ denotes the set of nine parameters $(\mathcal{M}_{\rm c},\eta,d_{\rm L},\theta,\phi,\psi,\iota,t_{c},\psi_{c})$ for a GW event. We can then estimate $\sigma^{\rm inst}_{d_{\rm L}}$ as
\begin{align}
\sigma^{\rm inst}_{d_{\rm L}}=\Delta\theta_{d_{\rm L}}=\sqrt{\left(F^{-1}\right)_{d_{\rm L}d_{\rm L}}},
\end{align}
where $\boldsymbol{F}_{ij}$ is the total Fisher information matrix for ET with three interferometers.
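In practice, the derivatives $\partial\boldsymbol{h}/\partial\theta_i$ can be taken by central finite differences; a minimal sketch (our illustration; \texttt{h\_of} is a user-supplied function returning the stacked, noise-weighted signal of the three interferometers on a common frequency grid) reads:
\begin{verbatim}
import numpy as np

def fisher(h_of, params, df, eps=1e-6):
    # h_of(params) -> complex array: [h1, h2, h3]/sqrt(S_n) stacked
    n = len(params)
    dh = []
    for i in range(n):
        up, dn = params.copy(), params.copy()
        step = eps * max(abs(params[i]), 1.0)
        up[i] += step; dn[i] -= step
        dh.append((h_of(up) - h_of(dn)) / (2*step))
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = 4.0 * np.sum((dh[i]*np.conj(dh[j])).real) * df
    return F

# sigma_dL^inst = sqrt(inv(F)[i_dL, i_dL]) for the d_L index i_dL
\end{verbatim}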
The GW data used in this work are ``bright sirens" and thus require redshift information. The redshifts of GW sources are measured from their EM counterparts. In this work, we consider short gamma-ray bursts (SGRBs) as the EM counterparts. Since SGRBs are believed to be strongly beamed, we restrict the inclination angle to be within $20^\circ$ for events with observed SGRBs. Therefore, in the Fisher matrix for each GW event, the sky location $(\theta , \phi)$, the inclination angle $\iota$, the mass of each NS ($m_1$, $m_2$), the coalescence phase $\psi_{c}$, and the polarization angle $\psi$ are evenly sampled in the ranges [0, $\pi$], [0, $2\pi$], [0, $\pi/9$], $[1, 2]\ M_{\odot}$, $[1, 2]\ M_{\odot}$, [0, $2\pi$], and [0, $2\pi$], respectively, where $M_{\odot}$ is the solar mass. To reduce the scatter of the mock data, these parameters are randomly sampled 100 times in the Fisher matrix analysis, and we adopt the average results.
We then turn to the other two errors. The weak-lensing error is given by \cite{Hirata:2010ba,Tamanini:2016zlh}
\begin{align}
\sigma_{d_{\mathrm{L}}}^{\text {lens }}(z)=d_{\mathrm{L}}(z) \times 0.066\left[\frac{1-(1+z)^{-0.25}}{0.25}\right]^{1.8}.
\end{align}
Here we introduce a delensing factor, i.e., the use of dedicated matter surveys along the line of sight of the GW event to estimate the lensing magnification distribution and thus remove part of the weak-lensing uncertainty.
Following refs.~\cite{Speri:2020hwc,Yang:2021qge}, a delensing of 30\% could be achieved at $z=2$, and the delensing factor can be expressed as
\begin{align}
F_{\text {delens }}(z)=1-\frac{0.3}{\pi / 2} \arctan \left(z / z_{*}\right),
\end{align}
with $z_{*}=0.073$. The final lensing uncertainty on $d_{\rm L}$ is
\begin{align}
\sigma_{d_{\mathrm{L}}}^{\text {wl}}(z)=F_{\text {delens }}(z) \sigma_{d_{\mathrm{L}}}^{\text {lens }}(z).
\end{align}
Here we use $\sigma_{d_{\mathrm{L}}}^{\text {wl}}$ to replace $\sigma_{d_{\mathrm{L}}}^{\text {lens }}$ in eq.~(\ref{eq11}).
The peculiar velocity of the source relative to the Hubble flow introduces an additional source of error. For the peculiar velocity error, we adopt the form given in ref.~\cite{Kocsis:2005vv},
\begin{align}
\sigma_{d_{\mathrm{L}}}^{\mathrm{pv}}(z)=d_{\mathrm{L}}(z) \times\left[1+\frac{(1+z)^{2}}{H(z) d_{\mathrm{L}}(z)}\right] \sqrt{\left\langle v^{2}\right\rangle},
\end{align}
where $H(z)$ is the Hubble parameter and $\sqrt{\left\langle v^{2}\right\rangle}$ is the peculiar velocity of the GW source, set to be $500\, {\rm km\ s^{-1}}$.
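Combining the three terms of eq.~(\ref{eq11}), with the delensed weak-lensing error, is then straightforward (our illustration; $d_{\rm L}$ in Mpc and $H(z)$ in ${\rm km~s^{-1}~Mpc^{-1}}$):
\begin{verbatim}
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def sigma_dl(z, dL, Hz, F_inv_dLdL, v_rms=500.0):
    s_inst = np.sqrt(F_inv_dLdL)            # from the Fisher matrix
    s_lens = dL * 0.066 * ((1 - (1+z)**-0.25)/0.25)**1.8
    s_wl = (1 - 0.3/(np.pi/2)*np.arctan(z/0.073)) * s_lens
    s_pv = dL * (1 + C_KMS*(1+z)**2/(Hz*dL)) * (v_rms/C_KMS)
    return np.sqrt(s_inst**2 + s_wl**2 + s_pv**2)
\end{verbatim}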
ET is expected to detect about $10^{5}$ BNS mergers per year, but only a fraction of $\sim 10^{-3}$ of these events will have $\gamma$-ray bursts beamed toward us \cite{Yu:2021nvx}. Hence, a few $\times 10^{2}$ GW events per year could be treated as ``bright sirens".
Recently, Chen et al. forecasted that 910 GW standard siren events could be detected by CE and Swift++ within a 10-year observation \cite{Chen:2020zoq}.
In this work, we simulate 1000 standard siren events generated by BNS mergers within a 10-year observation of ET. In figure \ref{FRB+ET}, we make a comparison between the FRB mock data in the normal expectation scenario and the GW mock data. We can see that in our simulations, the FRB data are mainly distributed below $z\approx 1.5$, but the GW data could extend to $z\approx 3$.
\begin{figure*}[htbp]
\subfigure[]{\includegraphics[width=7.4cm,height=5.4cm]{zdm.pdf}
}
\subfigure[]{\includegraphics[width=7.4cm,height=5.4cm]{zdl.pdf}
}
\centering
\caption{The mock FRB data (panel a) and the mock GW data from ET (panel b). The FRB data are simulated from the normal expectation scenario with 1000 FRB events, and the GW data are simulated from the 10-year observation of ET with 1000 BNS merger events.
}\label{FRB+ET}
\end{figure*}
\subsection{Cosmological data}
For the current mainstream cosmological data, we use the \emph{Planck} CMB ``distance priors" derived from the \emph{Planck} 2018 data release \cite{Chen:2018dbv}, and the BAO measurements from 6dFGS at $z_{\rm eff} = 0.106$ \cite{Beutler:2011hx}, SDSS-MGS at $z_{\rm eff} = 0.15$ \cite{Ross:2014qpa}, and BOSS-DR12 at $z_{\rm eff} = 0.38$, 0.51, and 0.61 \cite{Alam:2016hwk}. In the generation of the FRB and GW mock data, the fiducial values of cosmological parameters are taken to be the best-fit values of CMB+BAO+SN from ref.~\cite{Zhang:2019loq}. We use the Markov-chain Monte Carlo analysis \cite{Lewis:2002ah} to obtain the posterior probability distribution of the cosmological parameters.
\section{Results and discussion} \label{sec:Result}
\subsection{CMB+FRB}\label{CF}
\begin{table*}[htbp]
\setlength\tabcolsep{5.0pt}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{cccccccc}
\hline
Model & Error & CMB & CMB+BAO & FRB1 & CMB+FRB1& FRB2& CMB+FRB2 \\ \hline
\multirow{4}{*}{HDE} & $\sigma(\Omega_{\rm m})$ &
$0.032$ & $0.012$ & $0.062$ & $0.022$ & $0.026$& $0.012$\\%\cline{2-7}
& $\sigma(h)$ &
$0.035$ & $0.013$ & $-$ & $0.022$&$-$&$0.012$\\%\cline{2-7}
& $\sigma(c)$ &
$0.18$ & $0.087$ & $0.55$ & $0.11$&$0.44$&$0.057$\\%\cline{2-7}
& $10^2\sigma(\Omega_{\rm b} h^2)$ &
$0.015$ & $0.015$ & $0.56$ & $0.014$&$0.53$&$0.0097$\\ \hline
\multirow{4}{*}{RDE} & $\sigma(\Omega_{\rm m})$ &
$0.060$ & $-$ & $0.087$ & $0.043$&$0.032$&$0.020$\\%\cline{2-7}
& $\sigma(h)$ &
$0.057$ & $-$ & $-$ & $0.045$&$-$&$0.021$\\%\cline{2-7}
& $\sigma(\gamma)$ &
$0.019$ & $-$ & $0.24$ & $0.014$&$0.058$&$0.0081$\\%\cline{2-7}
& $10^2\sigma(\Omega_{\rm b} h^2)$ &
$0.015$ & $-$ & $0.67$ & $0.013$&$0.51$&$0.0076$\\ \hline
\end{tabular}
\caption{Absolute errors ($1\sigma$) of the cosmological parameters in the HDE and RDE models by using the CMB, CMB+BAO, FRB1, CMB+FRB1, FRB2, and CMB+FRB2 data. FRB1 and FRB2 denote the FRB data in the normal expectation scenario (i.e., $N_{\rm FRB}=1000$) and optimistic expectation scenario (i.e., $N_{\rm FRB}=10000$), respectively.}
\label{tab:full}
\end{table*}
\begin{figure*}[htbp]
\subfigure[]{\includegraphics[width=7.2cm,height=5.4cm]{omc.pdf}}
\subfigure[]{\includegraphics[width=7.2cm,height=5.4cm]{hh0.pdf}}
\centering
\caption{\label{HDE}Two-dimensional marginalized contours (68.3\% and 95.4\% confidence levels) in the $\Omega_{\rm m}$--$c$ plane (panel a) and the $H_{0}$--$\Omega_{\rm b}h^2$ plane (panel b) for the HDE model, by using the FRB, CMB, and CMB+FRB data. Here, the FRB data are simulated based on the optimistic expectation scenario.}
\end{figure*}
\begin{figure*}[htbp]
\subfigure[]{\includegraphics[width=7.2cm,height=5.4cm]{omr.pdf}}
\subfigure[]{\includegraphics[width=7.2cm,height=5.4cm]{rh0.pdf}}
\centering
\caption{ \label{RDE}Two-dimensional marginalized contours (68.3\% and 95.4\% confidence levels) in the $\Omega_{\rm m}$--$\gamma$ plane (panel a) and the $H_{0}$--$\Omega_{\rm b}h^2$ plane (panel b) for the RDE model, by using the FRB, CMB, and CMB+FRB data. Here, the FRB data are simulated based on the optimistic expectation scenario.}
\end{figure*}
In this subsection, we investigate the capability of the FRB data to break the parameter degeneracies inherent in the CMB data. We first discuss the improvement in the precision of cosmological parameters with the addition of the FRB data. Then we compare the capabilities of the FRB and BAO data in breaking the parameter degeneracies. The absolute errors (1$\sigma$) of the cosmological parameters in the HDE and RDE models are listed in table~\ref{tab:full}. We use FRB1 and FRB2 to denote the FRB data in the normal expectation (i.e., $N_{\rm FRB}=1000$) and the optimistic expectation (i.e., $N_{\rm FRB}=10000$) scenarios, respectively.
Here, for a cosmological parameter $\xi$, we use $\sigma(\xi)$ to denote its absolute error, and we also use the relative error $\varepsilon(\xi)=\sigma(\xi)/\xi$ in the following discussions.
From table~\ref{tab:full} we see that the cosmological constraints from the FRB1 and FRB2 data are looser than those from the CMB data (except for $\Omega_{\rm m}$ from FRB2). However, when combining the CMB and FRB data, the constraints from the CMB+FRB1 and CMB+FRB2 data are both evidently improved compared with those from the CMB data alone. As shown in figure~\ref{HDE}, in the HDE model, $\Omega_{\rm m}$ and $c$ are positively correlated in the CMB data, but they are anti-correlated in the FRB data. Combining the CMB and FRB data breaks the parameter degeneracies, and thus improves the constraints on the EoS parameter of dark energy and the matter density parameter. For the HDE model, the CMB data combined with the FRB1 and FRB2 data give the results $\varepsilon(c)=11.7\%$ and $\varepsilon(c)=6.3\%$, respectively. The absolute errors of $c$ are reduced by about 40.0\% and 67.7\% by combining the FRB1 and FRB2 data with the CMB data, respectively. It is worth emphasizing that the constraint $\varepsilon(c)=6.3\%$ is close to the constraint precision given by the CMB+BAO+SN data \cite{Zhang:2019ple}. That is to say, the future FRB data alone, combined with the CMB data, could provide precise cosmological constraints comparable with the current mainstream data. For the RDE model, figure~\ref{RDE} shows that the FRB data also break the parameter degeneracies inherent in the CMB data. The CMB data combined with the FRB1 and FRB2 data give $\varepsilon(\gamma)=5.9\%$ and $\varepsilon(\gamma)=3.5\%$, respectively. The absolute errors of $\gamma$ are reduced by about 24.3\% and 56.2\% by adding the FRB1 and FRB2 data to the CMB data, respectively.
Then we turn our attention to the constraints on the baryon density $\Omega_{\rm b}$ and the Hubble constant $H_0$. Big bang nucleosynthesis (BBN) and CMB observations can precisely determine the value of $\Omega_{\rm b}h^2$; however, in the nearby universe, the observed baryons in stars, the cold interstellar medium, residual Ly$\alpha$ forest gas, \textsc{O\,vi}, broad \textsc{H\,i} Ly$\alpha$ absorbers, and hot gas in clusters of galaxies account for only $\sim 50\%$ of the baryons \cite{Bhandari:2021thi}. This is called the missing baryon problem.
On the other hand, the measurements of the Hubble constant have in recent years given rise to the puzzle of the ``Hubble tension''.
The values of the Hubble constant derived from different observations show strong tension, which actually reflects the inconsistency of measurements between the early universe and the late universe \cite{Riess:2020sih,Verde:2019ivm,Guo:2018ans,cai:2020,Liu:2019awo,Zhang:2019cww,Ding:2019mmw,Guo:2019dui,Guo:2018uic,Feng:2019jqa,Gao:2021xnk}.
One may expect to precisely measure $\Omega_{\rm b}h^2$ and $H_0$ with powerful low-redshift probes.
In figure~\ref{HDE}(b), we show the marginalized posterior probability distribution contours in the $H_0$--$\Omega_{\rm b}h^2$ plane for the HDE model. We see that $H_0$ and $\Omega_{\rm b}h^2$ are strongly positively correlated when using the FRB data alone, because $\mathrm{DM}_{\mathrm{IGM}}$ is proportional to $H_0 \Omega_{\rm b}$ [see eq.~(\ref{eq3})]. It is found that solely using the FRB2 data cannot constrain $H_0$, but can effectively constrain $\Omega_{\rm b}h^2$, giving the result $\sigma(\Omega_{\rm b}h^2)\approx 0.005$, i.e., an approximately 22.7\% constraint.
Although the precision is still low, the FRB observation has thus been shown to provide a potential way to solve the missing baryon problem.
The CMB data can place a tight constraint on $\Omega_{\rm b}h^2$, and thus the data combination FRB+CMB can give a precise measurement of $H_0$. For the HDE model, the simulated FRB1 data combined with the current CMB data give $\varepsilon(h)=3.3\%$, which is reduced by 37.1\% compared to using the CMB data alone. The data combination CMB+FRB2 gives $\varepsilon(h)=1.8\%$, which roughly equals the measurement precision of the SH0ES value \cite{Riess:2019cxk}. For the RDE model, the FRB2 data combined with the CMB data give $\varepsilon(h)=2.9\%$, and the error of $h$ is reduced by about 64\% by adding the FRB2 data to the CMB data.
\begin{figure*}[htbp]
\includegraphics[width=8cm,height=6cm]{bao.pdf}
\centering
{\caption{ \label{BAO}Two-dimensional marginalized contours (68.3\% and 95.4\% confidence levels) in the $\Omega_{\rm m}$--$c$ plane for the HDE model, by using the CMB+FRB1, CMB+BAO, and CMB+FRB2 data.}}
\end{figure*}
Moreover, we compare the capabilities of the CMB+FRB and CMB+BAO data in constraining cosmological parameters. From figure~\ref{BAO} we can see that the constraints from the CMB+FRB2 data are comparable to those from the CMB+BAO data, and even slightly better. In the HDE model, both the CMB+FRB2 and CMB+BAO data give a constraint of $\sigma(\Omega_{\rm m})=0.012$.
For the parameter $c$, the CMB+FRB2 and CMB+BAO data give the results $\sigma(c)=0.057$ and $\sigma(c)=0.087$, respectively. This indicates that the FRB2 data have a similar capability of breaking parameter degeneracies as the BAO data.
\subsection{GW+FRB}
\begin{table*}[!htb]
\setlength\tabcolsep{9.0pt}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{cccccc}
\hline
& Error & GW & GW+FRB & CMB+GW& CMB+GW+FRB \\ \hline
\multirow{4}{*}{HDE} & $\sigma(\Omega_{\rm m})$ &
0.024 & 0.016 & 0.0059& 0.0052\\
& $\sigma(h)$ & 0.010& 0.0095 & 0.0064& 0.0054\\
& $\sigma(c)$ & 0.21 & 0.16 & 0.045 & 0.035 \\
& $10^2\sigma(\Omega_{\rm b} h^2)$ & $-$ & 0.011 & 0.015 &0.0074\\
\hline
\multirow{4}{*}{RDE} & $\sigma(\Omega_{\rm m})$ &
0.012 & 0.011 & 0.0090& 0.0082\\
& $\sigma(h)$ & 0.015& 0.014 & 0.0093& 0.0086\\
& $\sigma(\gamma)$ & 0.028 & 0.020 & 0.0067 & 0.0059 \\
& $10^2\sigma(\Omega_{\rm b} h^2)$ & $-$ & 0.011 & 0.014 &0.0069\\
\hline
\end{tabular}
\caption{Absolute errors ($1\sigma$) of the cosmological parameters in the HDE and RDE models by using the GW, GW+FRB, CMB+GW, and CMB+GW+FRB data. Here, the FRB data are simulated based on the optimistic expectation scenario.}
\label{tab:GW}
\end{table*}
\begin{figure*}[htbp]
\subfigure[]{\includegraphics[width=7.2cm,height=5.4cm]{omh0.pdf}}
\subfigure[]{\includegraphics[width=7.2cm,height=5.4cm]{ch0.pdf}}
\centering
\caption{ \label{GWFRB}Two-dimensional marginalized contours (68.3\% and 95.4\% confidence levels) in the $\Omega_{\rm m}$--$H_{0}$ plane (panel a) and the $c$--$H_{0}$ plane (panel b) for the HDE model, by using the FRB, GW, and GW+FRB data. Here, the FRB data are simulated based on the optimistic expectation scenario.}
\end{figure*}
In addition to the FRB observation, the GW observation is another powerful cosmological probe. As demonstrated in ref.~\cite{Zhao:2020ole}, the FRB and GW observations could help each other to break the parameter degeneracies in the $w$CDM and CPL models. In this subsection, we further study these two cosmological probes in the HDE and RDE models, with the FRB data simulated based on the optimistic expectation scenario.
From table~\ref{tab:GW}, we see that the constraints from the data combination GW+FRB are slightly improved compared with those from the GW data alone. But from figure~\ref{GWFRB} we find that the parameter degeneracy orientations formed by the GW data and by the FRB data are nearly orthogonal. The inclusion of the FRB data reduces the relative error on $\Omega_{\rm m}$ from $8.4\%$ to $5.2\%$ in the HDE model. The constraints on $\Omega_{\rm m}$ and $\Omega_{\rm b}h^2$ from the GW+FRB data are comparable to those from the CMB+BAO data. However, neither the GW data nor the FRB data can give a tight constraint on the parameter $c$. The data combination GW+FRB gives $\varepsilon(c)=18.0\%$, which is only slightly better than the constraint from CMB alone. For the Hubble constant $H_0$, the FRB data actually contribute little to the joint constraint of GW+FRB, since the GW data alone can precisely constrain $H_0$ whereas the FRB data alone cannot. Compared with the results in the $w\rm{CDM}$ model \cite{Zhao:2020ole}, the improvement of the parameter constraints from adding 10000 FRB data to the GW data is weaker in the HDE model. For example, the improvements of the constraints on $\Omega_{\rm m}$ are $53\%$ and $33\%$ in the $w\rm{CDM}$ and HDE models, respectively.
Therefore, we find that, for constraining the HDE model, many more FRB data need to be combined with the GW data.
The results from the data combination CMB+GW+FRB further confirm this statement. The inclusion of the FRB data in the combination CMB+GW+FRB only marginally improves the constraints on $\Omega_{\rm m}$ and $H_0$, and the contributions to the constraints mainly come from CMB+GW. Nevertheless, the CMB+GW data can only constrain $c$ to a precision of around 5\%, which is still far from the standard of precision cosmology. Compared to the GW data, the FRB sample is more likely to be enlarged substantially. Recently, Hashimoto et al. pointed out that large numbers of FRBs could be detected by the Square Kilometre Array (SKA), at least 2--3 orders of magnitude more than the sample size we have considered \cite{Hashimoto:2020dud}. With the accumulation of more abundant and precise data, FRBs would provide a tight constraint on the EoS of dark energy, so we expect that in the future the large numbers of FRBs observed by SKA may have the potential to help the joint constraint on $c$ reach a precision of around 1\%.
\subsection{Effect of $\rm DM_{host}$ uncertainty}
In the above analyses, we have assumed that the $\rm DM_{host}$ uncertainty is $30\, {\rm pc\,cm^{-3}}$. The progenitors of FRBs are actually not clear and some factors are still open issues. The treatment of $\sigma_{\rm host} = 30\, {\rm pc\,cm^{-3}}$ could thus be regarded as an optimistic scenario. In this subsection, we perform an analysis based on a more conservative scenario with $\sigma_{\rm host} = 150\, {\rm pc\,cm^{-3}}$. The FRB data of this conservative scenario are denoted by FRB3. The constraint results for the cosmological parameters are shown in table~\ref{tab:full2}.
\begin{table*}[!htb]
\setlength\tabcolsep{8.0pt}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{cccc}
\hline
Model & Error & FRB3 & CMB+FRB3 \\ \hline
\multirow{4}{*}{HDE} & $\sigma(\Omega_{\rm m})$ &
$0.025$ & $0.013$ \\%\cline{2-7}
& $\sigma(h)$ &
$-$ & $0.013$ \\%\cline{2-7}
& $\sigma(c)$ &
$0.44$ & $0.058$ \\%\cline{2-7}
& $10^2\sigma(\Omega_{\rm b} h^2)$ &
$0.51$ & $0.010$ \\ \hline
\multirow{4}{*}{RDE} & $\sigma(\Omega_{\rm m})$ &
$0.040$ & $0.024$ \\%\cline{2-7}
& $\sigma(h)$ &
$-$ & $0.023$ \\%\cline{2-7}
& $\sigma(\gamma)$ &
$0.076$ & $0.0089$ \\%\cline{2-7}
& $10^2\sigma(\Omega_{\rm b} h^2)$ &
$0.54$ & $0.0080$ \\ \hline
\end{tabular}
\caption{Absolute errors ($1\sigma$) of the cosmological parameters in the HDE and RDE models by using the FRB3 and CMB+FRB3 data. Here, FRB3 denotes the FRB data with the ${\rm DM_{host}}$ uncertainty $\sigma_{\rm host} = 150\, {\rm pc\,cm^{-3}}$.}
\label{tab:full2}
\end{table*}
From table~\ref{tab:full2}, we see that the constraints from FRB3 are slightly looser than those from FRB2. Taking the HDE model as an example, FRB3 and CMB+FRB3 give the results $\varepsilon(c)=37.3\%$ and $\varepsilon(c)=6.7\%$, respectively, which are 2.8\% and 6.0\% larger than those from FRB2 and CMB+FRB2, respectively. The influence on the constraints in the RDE model is slightly larger. The relative errors in the cases of FRB3 and CMB+FRB3 are $\varepsilon(\gamma)=27.9\%$ and $\varepsilon(\gamma)=3.8\%$, which are 20.4\% and 8.7\% larger than those from FRB2 and CMB+FRB2, respectively. From eq.~(\ref{eq6}), we see that $\sigma_{\rm host}$ contributes to the total uncertainty of $\rm{DM}_{\rm{IGM}}$ with a factor $1/(1+z)$; thus, even though $\sigma_{\rm host}$ varies considerably, its impact on the capability of FRBs to constrain cosmological parameters and break parameter degeneracies is slight. There may be other factors affecting the constraints, but the main conclusions will still hold. This confirms the discussion in ref.~\cite{Zhao:2020ole}.
\section{Conclusion} \label{sec:con}
In this paper, we investigate the capability of future FRB data to improve cosmological parameter estimation, and how many FRB data are required to effectively constrain the cosmological parameters in the HDE and RDE models. We consider two FRB scenarios: the normal expectation scenario with 1000 localized FRB data and the optimistic expectation scenario with 10000 localized FRB data.
We find that, in the HDE model, combining the FRB data and the CMB data could break the parameter degeneracies, and the joint constraints on $H_0$ and dark-energy parameters are quite tight.
To achieve a constraint precision of $H_0$ comparable with the SH0ES result \cite{Riess:2019cxk}, around 10000 FRB data need to be combined with the CMB data. For the EoS of dark energy, 10000 FRB data combined with the CMB data would give a $\sim 6\%$ constraint, which is close to the precision given by the CMB+BAO+SN data. The data combination FRB+CMB also gives a tight constraint on $\Omega_{\rm b}h^2$. The results in the RDE model also support the above conclusions. These results confirm the conclusion in ref.~\cite{Zhao:2020ole} that the future $10^4$ FRB data can become useful in cosmological parameter estimation when combined with CMB.
We also consider another powerful low-redshift cosmological probe, the GW standard siren observation, and investigate the capability of constraining cosmological parameters when combining the FRB data with the GW data, which is independent of CMB. We use a full Fisher matrix analysis to account for the parameter correlations and avoid biased results. The inclusion of the FRB data in the data combination GW+FRB only improves the constraints slightly, but we show that the orientations of the parameter degeneracies formed by the FRB and GW data are nearly orthogonal. We show that in some dark energy models, such as the HDE model, 10000 FRB events are possibly not enough to constrain cosmological parameters when combined with the GW data. However, we can expect that, in the near future, large numbers of localized FRBs will be observed by SKA. If we could observe many more FRBs with higher precision in the future, the combination with the GW data could significantly improve the constraints. It can be expected that FRBs observed by SKA may have the potential to help the joint constraint on $c$ reach a precision of around 1\%, meeting the standard of precision cosmology.
Finally, we investigate the effect of the ${\rm DM_{host}}$ uncertainty on the constraint results. A larger ${\rm DM_{host}}$ uncertainty certainly loosens the constraints, but the main conclusions still hold.
\acknowledgments
We thank Zheng-Xiang Li, He Gao, Jing-Zhao Qi, Shang-Jie Jin, Li-Yang Gao, Peng-Ju Wu, Hai-Li Li, Yi-Chao Li, and Di Li for helpful discussions. This work was supported by the National Natural Science Foundation of China (Grants Nos. 11975072, 11835009, 11875102, and 11690021), the Liaoning Revitalization Talents Program (Grant No. XLYC1905011), the Fundamental Research Funds for the Central Universities (Grant No. N2005030), and the National Program for Support of Top-Notch Young Professionals (Grant No. W02070050).
\section{The Neutrino Oscillation Status}
\noindent
Today's neutrino oscillation experimental evidence is consistent with a 3{$\nu$}\ framework~\cite{Ref_3nuGlobalAnalysis}.
This is in agreement with the observed three families of charged fermions making part of the \emph{Standard Model of Particle Physics} ({\bf SM}).
While a few inconclusive indications of possible discrepancies with the 3{$\nu$}\ framework have been reported, intense exploration has cornered the remaining solution phase space to marginal region(s)~\cite{Ref_ReviewSterile} -- still not fully ruled out.
Thus, unambiguous manifestation of physics beyond the Standard Model ({\bf BSM}) remains elusive.
Since {$\nu$}\ oscillation is the macroscopic manifestation of the quantum interference of neutrino mass states during their propagation and the mixing among mass ($\nu_1$, $\nu_2$, $\nu_3$) and weak-flavour ($\nu_e$, $\nu_\mu$, $\nu_\tau$) eigenstates, the entire phenomenon is characterised in terms of
two \emph{mass squared difference} ($ \delta m^2 $\ and $ \Delta m^2 $)\footnote{$ \delta m^2 $\ and $ \Delta m^2 $\ provide notation for the so called ``solar'' ($\Delta m_{12}^2$) and ``atmospheric'' ($\Delta m_{23}^2$ or $\Delta m_{13}^2$) cases.}
and
three \emph{mixing angles} ($\theta _{13} $, $\theta _{12} $, $\theta _{23} $), embedded in the 3$\times$3 {\bf PMNS} mixing matrix.
This is the counterpart of the {\bf CKM} matrix in the quark sector.
This simplified parametrisation relies on a critical assumption: the PMNS matrix is \emph{unitary} (labelled $U$).
This same condition allows for a complex phase leading to CP violation\footnote{CPV implies a different manifestation for matter and anti-matter, as first observed in the 1960s with quarks.} ({\bf CPV}) during mixing.
There is no a priori prediction for any of these six parameters, so each must be measured to allow the phenomenological 3{$\nu$}\ model characterisation of today's observations, as well as possible searches for significant deviations between data and model, where discoveries may lie.
It is worth noticing that the
{experimental test of unitarity is as important as the measurement of the derived parameters}.
However, testing unitarity implies addressing a larger system of equations, where the $\theta _{13} $, $\theta _{12} $, $\theta _{23} $\ parametrisation no longer stands.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& \multicolumn{2}{c|}{\bf knowledge as of 2020} & \multicolumn{3}{c|}{\bf expected knowledge beyond 2020}\\
& dominant & precision (\%) & precision (\%) & dominant & technique\\
\hline
$\theta _{12} $ & SNO & 2.3 & $\leq$1.0 & JUNO & \emph{reactor}\\
$\theta _{23} $ & NOvA & 2.0 & $\sim$1.0 & DUNE+HK & \emph{beam}\\
$\theta _{13} $ & DYB & 1.5 & 1.5 & DC+DYB+RENO & \emph{reactor}\\
$ \delta m^2 $ & KL & 2.3 & $\leq$1.0 & JUNO & \emph{reactor}\\
$|$$ \Delta m^2 $$|$ & DYB+T2K & 1.3 & $\leq$1.0 & JUNO+DUNE+HK & \emph{reactor+beam}\\
\hline
$\pm$$ \Delta m^2 $ & SK & unknown & -- & JUNO+DUNE+HK & \emph{reactor+beam}\\
$\delta_{\mbox{\tiny CP}}$ & T2K & unknown & -- & DUNE+HK & \emph{beam}\\\hline
\end{tabular}
\end{center}
\caption{\small{\bf Neutrino Oscillation Knowledge.}
As of 2019, the current and predicted knowledge of the 3\,{$\nu$}\ oscillation model is summarised in terms of the precision per parameter.
The columns show
the dominant experiment
and
today's precision (from the NuFit\,4.0 global analysis),
followed by
the predicted precision,
the leading experiment(s)
and the dominant technique used.
The entire neutrino oscillation sector will be characterised using reactors and beams.
This is not surprising since such man-made {$\nu$}'s are best controlled in terms of baseline and systematics, as compared to atmospheric and solar {$\nu$}'s.
$\theta _{12} $\ and $\theta _{23} $\ will be largely improved by JUNO and DUNE+HK, respectively.
JUNO will pioneer the sub-percent precision in the field.
Interestingly, there is no foreseen capability to improve today's DC+DYB+RENO precision on $\theta _{13} $, whose knowledge will go from today's best to future worst, unless a dedicated experiment is proposed.
$ \delta m^2 $\ will be dominated by JUNO while $ \Delta m^2 $\ will be constraints by both JUNO and DUNE+HK.
The unknown mass ordering will be addressed mainly by JUNO and DUNE using vacuum oscillations and matter effects, respectively.
Global data analysis suggests a possibly favoured normal ordering solution at $\sim$3$\sigma$'s, dominated by Super-Kamiokande~\cite{Ref_SK} (SK) data.
Any deviation between JUNO and DUNE would be of great interest.
The unknown $\delta_{\mbox{\tiny CP}}$ depends on DUNE+HK.
Global data, dominated by T2K~\cite{Ref_T2K}, disfavours CP conservation (0 or $\pi$ solutions) at $\sim$2$\sigma$'s.
Despite a key role in the intermediate time scale, atmospheric neutrino experiments such as IceCube~\cite{Ref_IceCube} and ORCA~\cite{Ref_ORCA} are not expected today to lead the ultimate precision by 2030.
\vspace{-1.0cm}
}
\label{Tab_NONow}
\end{table*}
For about 50 years, the experimental community has been devoted to the measurement of each of the neutrino oscillation parameters.
The key realisation was that behind the historically called \emph{solar} and \emph{atmospheric anomalies}, there is one single phenomenon: \emph{neutrino oscillation}.
However, the 2015 Nobel prize~\cite{Ref_NobelPrize} discovery acknowledgement awaited the observation of the predicted new oscillation, driven by $\theta _{13} $.
This was only significantly observed in 2011-2012 by Daya Bay ({\bf DYB})~\cite{Ref_DYB-I}, Double Chooz ({\bf DC})~\cite{Ref_DC-I} and {\bf RENO}~\cite{Ref_RENO-I} experiments.
Now we know neutrinos are massive, even though we have not been able to measure their masses directly~\cite{Ref_PDG}.
Today's knowledge can be effectively characterised by the precision of each parameters, since no significant deviations have been found, as summarised in Table~\ref{Tab_NONow}.
While $\theta _{12} $\ and $\theta _{23} $\ are large, $\theta _{13} $\ is very small.
As of mid-2019, all parameters are known to the few-percent level (\textless2.5\%) upon combining the data of all experiments.
Two major unknowns remain:
\emph{atmospheric mass ordering}\footnote{This stands as the sign of $ \Delta m^2 $\ unknown since mainly vacuum oscillation has been used to measure it. The sign of $ \delta m^2 $\ is known due to matter dominated enhanced oscillations in the core of the sun.}
and
the \emph{CPV phase}.
There is preliminary evidence~\cite{Ref_3nuGlobalAnalysis} suggesting that
a) normal mass ordering may be favoured at $\sim$3$\sigma$'s
and
b) CP conservation may be disfavoured at $\sim$2$\sigma$'s.
Despite this major success, today's precision is still insufficient to address the PMNS unitarity competitively, i.e. at the sub-percent level.
In the first half of the 2020 decade, the sub-percent precision regime is expected to be reached.
This will start with measurements of $\theta _{12} $\ and $ \delta m^2 $\ by {\bf JUNO}~\cite{Ref_JUNO}, based in China.
{\bf DUNE}~\cite{Ref_DUNE} and {\bf HK}~\cite{Ref_HK}, based in the USA and Japan, respectively, are expected to provide the ultimate knowledge of $\theta _{23} $\ during the second half of the decade.
The knowledge of $ \Delta m^2 $, including the mass ordering resolution, is expected to be led by both JUNO and DUNE using complementary vacuum and matter effects approaches, respectively.
Surprisingly, no experiment is able to significantly improve today's $\theta _{13} $\ precision (1.5\%), while all experiments depend strategically on it for both CPV and mass ordering measurements.
Our $\theta _{13} $\ knowledge remains dominated by the aforementioned reactor experiments' data~\cite{Ref_LastDYB, Ref_DC-IV, Ref_LastRENO}.
By 2030, only experiments relying on artificially produced neutrinos (i.e. reactors and beams) will dominate the ultimate neutrino oscillation knowledge, as described in Table~\ref{Tab_NONow}.
Thus, beyond 2020, the field is expected to be shaped by a few large (or huge) experiments with the highest budgets and the largest collaborations (\textgreater500 scientists) per experiment in the history of neutrino research.
It is worth noticing that no major neutrino oscillation experiment is envisaged to be based in Europe in the next decade.
In summary, over the decade 2020--2030, the field will reach an overall sub-percent precision on all mixing angles, except for $\theta _{13} $.
The unknown mass ordering and CPV are expected to be measured by 2030, with today's data already hinting at some solutions at the few-$\sigma$ level.
Hence, we will have all six parameters known by 2030.
Since, arguably, it is difficult to imagine going larger than the JUNO+DUNE+HK experiments,
we must ensure that we are not missing anything compromising our ability to challenge the SM, thus maximising our sensitivity to possible BSM manifestation(s), where the discovery potential may be.
This reflection should be addressed in a timely manner, since each step in the field currently implies decades (of preparation and data-taking) and the corresponding resources.
Indeed, this reflection is the main motivation of this document.
\section*{The PMNS Structure \& Unitarity}
\noindent
This is one of the most critical questions in the field -- arguably as important as any derived parameter.
Let us consider the CPV phase, for instance.
It is interesting to note that CPV is expected within the neutrino oscillation framework (i.e. the PMNS matrix can be complex), while there is no established model behind a violation of unitarity.
Conversely, there is no SM prediction for the CPV phase value, while unitarity enjoys an accurate prediction.
Hence, the exploration of unitarity benefits from direct discovery potential in a model-less framework, exploiting an accurate prediction to identify deviations.
Moreover, addressing unitarity
is complementary to today's measurements of each parameter, regardless of the overall PMNS structure.
We shall below summarise, within the limitations of today's uncertainties, the main features of the PMNS matrix, as illustrated in Fig.\ref{Fig_PMNS}.
Its structure offers some interesting features worth some intriguing questions:
\begin{description}
\item[Why is PMNS non-diagonal?]
Unlike the CKM, almost diagonal, thus leading to minimal mixing in quarks, the PMNS is largely non-diagonal.
This means its ``off-diagonal'' terms are large, as shown in Fig~\ref{Fig_PMNS}.
This implies that whatever BSM theory may stand behind the SM effective manifestation, the predicted flavour sector may be largely different for leptons and quarks.
It is striking to note that $\theta _{13} $\ is the most peculiar, as it is very small and drives the value of $U_{e3}$.
Again, a possible hint from Nature suggesting that we ought to measure $\theta _{13} $\ with the highest possible precision, as it might be key to understand the leptonic flavour sector.
Ironically, no experiment today can improve on those results.
Worse, up to now no experimental method is known to be able to challenge them.
A new approach is highlighted below.
\begin{figure}[h]
\centering
\includegraphics[scale=0.14]{./PMNS.jpg}
\caption{ \small
{\bf The PMNS Neutrino Mixing Matrix.}
The non-diagonal structure and the smallness of the $U_{e3}$ term (circled in red) are among the main features of the PMNS matrix, as illustrated.
$U_{e3}$ corresponds to $\theta _{13} $, if unitary.
The overall PMNS unitarity test could be reduced to test the unitarity of the rows, where the most sensitive test today arises from the \emph{electron row} (circled in blue).
}
\label{Fig_PMNS}
\end{figure}
\item[Why is the PMNS' $J$ so large?]
The PMNS Jarlskog invariant (factorising out the CPV phase $\sin{(\delta)}$ term) is of order $\sim$10$^{-2}$, which is much larger than that of the CKM counterpart ($\sim$10$^{-5}$).
This suggests that if CP were violated ($\sin{\delta}\neq0$), the expected CPV amplitude could be large.
This is an appealing scenario, as much CPV is needed to explain the observed matter to anti-matter asymmetry in the universe
-- orders of magnitude more than the CPV embedded in the CKM.
\item[Is PMNS unitary?]
As highlighted above, this is likely to be the ultimate and most challenging question that the neutrino oscillation framework might allow us to explore.
We shall address the possible implications and today's knowledge status below.
\end{description}
\noindent
The PMNS structure largely differs from the CKM.
Hence, their nature may likely be different, although unknown.
For some, the PMNS bizarreness (compared to the CKM) might indicate that its most precise exploration and scrutiny is one of the best ways to challenge the SM.
It would not be the first time neutrinos proved to be our best probe of BSM phenomenology.
One of the latest modifications in the SM was the introduction of the phenomenology of massive neutrinos, as inferred from neutrino oscillations, although the absolute scale of their lightness remains a challenging mystery.
To address the PMNS unitarity, we need an overall sub-percent mixing precision.
The results from JUNO, DUNE and HK are therefore critical.
However, while necessary, those are not sufficient conditions to yield the deepest insight.
Testing the PMNS unitarity implies abandoning the three-mixing-angle ($\theta _{13} $, $\theta _{12} $, $\theta _{23} $) approximation.
Hence, the equations must be expressed in terms of the $U_{ij}$ terms upon imposing the unitarity condition (i.e. $U U^\dagger = I$).
Experimentally, this translates into constraining more equations.
So, {testing unitarity to the percent level implies the need for the above-described increase in precision, but also additional measurements}.
This is described below.
Indeed, only within the 2020 decade is the field nearing a competitive level of precision.
The reward of addressing this question is remarkable:
{any significant evidence for unitarity violation implies the manifestation, and thus discovery, of non-standard neutrino states and\,/\,or interactions}~\cite{Ref_Hiroshi-U}.
Non-standard interactions ({\bf NSI})~\cite{Ref_NSI-Review} stand for deviations during interaction and/or propagation of neutrinos.
This implies direct sensitivity to a BSM physics manifestation, despite lacking an underlying model.
Given the stunning predictive power demonstrated by the SM for all observables tested so far, such as those probed at the LHC, in cosmology, etc., there is a diminishing phase space for direct access to discoveries in particle physics with today's technology.
Hence, testing the PMNS unitarity is indeed a compelling and unique opportunity.
\section*{The PMNS Unitarity Test Strategy}
\noindent
Solving the unitarity condition ($U U^\dagger = I$) leads to many equations~\cite{Ref_Vogle-U}.
Some are equivalent to testing the ``closure of triangles'', as practised in the CKM case, should the CP violation be known.
Since the neutrino CPV phase is still unknown, the PMNS unitarity condition can be tested today via the derived condition $|U_{l1}|^2+|U_{l2}|^2+|U_{l3}|^2=1$, with $l = e,\mu,\tau$.
These equations test the unitarity of each matrix row.
Only the $e$ and $\mu$ are considered since $\tau$ related oscillations are less constrained.
In fact, the most stringent constraint arises from the \emph{electron-row unitarity} ({\bf ERU})\footnote{The \emph{$\mu$-row} case precision is limited by experimental uncertainties such as the absolute flux (typically \textless10\% for beams), the unresolved atmospheric mass ordering and the ``octant'' ambiguity due to the almost maximal value of $\theta _{23} $.} (or top row), leading to
the $|U_{e1}|^2+|U_{e2}|^2+|U_{e3}|^2=1$ accurate condition.
If unitarity held, this row depends only on $\theta _{13} $\ and $\theta _{12} $.
Hence, any experiment with the ability to constrain $\theta _{13} $\ and/or $\theta _{12} $\ is likely to have a direct impact.
ERU is today the only direct and most precise access to unitarity~\cite{Ref_Hiroshi-U,Ref_Vogle-U}.
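To illustrate the logic of the test, the following sketch (our illustration; the angles and phase are indicative values, and in a genuine unitarity test the $|U_{ei}|^2$ would instead come from independent oscillation measurements) builds a unitary PMNS matrix in the standard parametrisation and checks the electron-row closure and the Jarlskog invariant:
\begin{verbatim}
import numpy as np

th12, th13, th23, dcp = np.radians([33.4, 8.6, 49.0, 195.0])
s12, c12 = np.sin(th12), np.cos(th12)
s13, c13 = np.sin(th13), np.cos(th13)
s23, c23 = np.sin(th23), np.cos(th23)
ph = np.exp(1j*dcp)
U = np.array([
  [ c12*c13,                  s12*c13,                   s13/ph ],
  [-s12*c23 - c12*s23*s13*ph, c12*c23 - s12*s23*s13*ph, s23*c13],
  [ s12*s23 - c12*c23*s13*ph, -c12*s23 - s12*c23*s13*ph, c23*c13],
])
print(np.sum(np.abs(U[0])**2))  # electron row: = 1 if unitary
J = np.imag(U[0,0]*U[1,1]*np.conj(U[0,1])*np.conj(U[1,0]))
print(J)  # Jarlskog invariant, O(1e-2) x sin(delta)
\end{verbatim}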
This ERU test is excellent news for JUNO, whose unprecedented sensitivity to $\theta _{12} $\ (and also $ \delta m^2 $)
grants some of the necessary sub-percent precision to test the ERU.
Indeed, JUNO is one of the most important experiments in the unitarity quest~\cite{Ref_JUNO}.
However, that is not good enough.
Since it is difficult to foresee any improvement on JUNO -- even in the far future -- we need other high-precision measurements elsewhere.
Unfortunately, the sensitivity on $\theta _{13} $\ appears not improvable in the foreseeable future.
Only DUNE, at best, might reach a precision similar to today's.
As discussed in~\cite{Ref_Hiroshi-U,Ref_Vogle-U}, testing for ERU implies several experimental constraints, here highlighted:
\begin{description}
\item[Via $ \delta m^2 $\ Oscillations (i.e. $\theta _{12} $, if unitary):]
JUNO measures $P(\bar\nu_e \to \bar\nu_e)$ with reactor neutrinos over a $\sim$50\,km baseline.
Also, solar neutrinos have key information by probing $P(\nu_e \to \nu_e)$ in the core of the sun via matter effects.
Today's best constraints come from SNO and SK.
There is no dedicated solar experiment foreseen, although JUNO has some sensitivity.
\item[Via $ \Delta m^2 $\ Oscillations (i.e. $\theta _{13} $, if unitary):]
again reactor experiments, like DC and DYB, have measured $P(\bar\nu_e \to \bar\nu_e)$ at baselines of order 1\,km.
There is, however, no copious known $\nu_e$ source
capable of addressing $P(\nu_e \to \nu_e)$ precisely enough with a compatible L/E ratio.
\end{description}
\noindent
Although not listed explicitly above, the {\bf \emph{absolute flux} knowledge} is of critical impact for testing the ERU~\cite{Ref_Hiroshi-U,Ref_Vogle-U}.
However, the control of the absolute flux uncertainties is experimentally very challenging.
This is indeed why many neutrino oscillation experiments use multi-detectors to bypass absolute systematics, as opposed to the simpler relative-systematics basis.
This way, for example, reactor experiment systematics can be controlled to the few per mille level, while the absolute flux is controlled to order a few \% at best.
Worse, reactor neutrinos have evidenced a non-understood deficit~\cite{Ref_RAA} (2011) and spectral distortion~\cite{Ref_DC-III} (2014) relative to ILL-data based predictions.
A hypothetical manifestation of non-standard neutrinos with $ \Delta m^2 $\ at $\sim$1\,$eV^2$ has been considered.
Today, however, such a hypothesis has lost much ground thanks to new data addressing this issue directly, i.e. weakening the hypothetical phase-space~\cite{Ref_ReviewSterile}, and/or indirectly, i.e. demonstrating that the reactor prediction uncertainties are likely larger~\cite{Ref_DC-IV}.
Considering all those effects, today's studies~\cite{Ref_Hiroshi-U,Ref_Vogle-U} quantify that the ERU test reaches a few percent (\textgreater2\%) precision, including a prospected JUNO outcome.
Hence, a dedicated experimental effort addressing the maximal sensitivity to unitarity is needed
if a sub-percent precision level is to become possible -- our goal for discussion here.
\section*{Super Chooz Project Exploration}
\noindent
Improving the ERU test precision beyond JUNO requires
(a) a significantly better measurement of $\theta _{13} $\ (ideally sub-percent precision),
(b) a much better control of absolute flux
and, possibly,
(c) a better measurement of solar neutrinos.
Unfortunately, all those items are considered today either impractical -- or even impossible -- with today's technology.
However, a new neutrino detection technology called {\bf LiquidO}~\cite{Ref_LOZ} might allow to address some of those tough challenges.
A hypothetical {\bf Super Chooz} ({\bf SC}) project has been raised for the first time at this \emph{HEP-EPS conference}.
The project would rely on a $\sim$10\,kton LiquidO detector located in one of the existing caverns, upon the final deconstruction of the former Chooz-A reactor site.
These caverns are to become available by \textgreater2025, implying minimal civil construction for reuse.
This expansion implies that the existing LNCA laboratory (Chooz) would become one of the largest underground laboratories in Europe, with two of the most powerful Areva N4 reactors as a source, despite a low overburden (300\,m water equivalent).
While the physics potential is still under study, the performance depends on the LiquidO detection technique.
The first experimental proof of principle~\cite{Ref_LOZ}, using its first opaque scintillator articulation~\cite{Ref_NOWASH}, has been successfully demonstrated.
\begin{figure}[h]
\centering
\includegraphics[scale=0.142]{SC}
\caption{ \small
{\bf The Super Chooz Site.}
The SC experiment relies on two very-near detectors (of order 1\,ton each)
and
one large far detector with a mass of order 10\,kton.
The multi-purpose far detector provides most of the physics programme (details in text).
The site relies on the unique opportunity to use the former EDF Chooz-A reactor caverns for scientific purposes, thus expanding the existing LNCA laboratory up to 50\,m$^3$ volume.
}
\label{Fig_SC}
\end{figure}
Super Chooz addresses the need to yield a $\theta _{13} $\ measurement with \textless1\% precision -- publication soon.
This is a possible breakthrough, since no technique known so far is able to reach such a precision.
This precision could further enhance the sensitivities of DUNE and HK to CPV, similarly to how today's reactor experiments have aided the T2K sensitivity,
and
possibly also JUNO's mass-ordering determination.
A novel technique called \emph{reactor flux decomposition} is needed to yield total reactor flux error cancellation, as the near-detector technique, \`a la DC or DYB, has proved insufficient.
Our preliminary studies suggest that the world's best precision on both $\theta _{13} $\ and $ \Delta m^2 $\ via shape extraction may be possible.
Hence, two (out of six) of the parameters listed in Table~\ref{Tab_NONow} can be improved by Super Chooz, whose experimental configuration is shown in Fig.~\ref{Fig_SC}.
SC might also be able to help with the two other measurements needed for better unitarity precision: the absolute reactor flux and solar neutrinos; this is still under intense exploration.
The solar neutrino measurement might benefit from indium loading~\cite{Ref_LOZ}, enabling an unprecedented measurement via CC interactions, unlike the more challenging electron elastic scattering.
The main challenge here, however, is the control of cosmogenic backgrounds due to the lower overburden, while precise (mm-scale) $\mu$ tracking might open unprecedented tagging of the correlation between the primary $\mu$ and the spallation products.
Super Chooz could also become one of the best supernova neutrino (burst \& remnant) and proton decay detectors, while remaining complementary to other foreseen detectors.
On the supernova side, the ability of LiquidO to detect and identify $\bar\nu_e$ and $\nu_e$ upon CC interactions allows unique capabilities for supernova neutrinos (\textless50\,MeV), including major background reduction and event-wise directionality.
Flavour-independent NC interaction detection is also possible upon loading, as highlighted in~\cite{Ref_LOZ}.
The supernova potential remains under active study.
On the proton decay side, LiquidO's event-wise imaging, again, allows the event-wise identification of K$^+$, $\pi^0$, $\pi^\pm$, $\mu^\pm$, etc. via their main decay mode(s).
All of those particles play a role in different proton decay modes.
LiquidO is thus expected to be one of the best technologies for proton decay searches in terms of
its high free-proton density (typical of scintillators),
its high detection efficiency,
and
its possible multi-decay-mode sensitivity, boosted by its expected large background rejection.
This was preliminarily highlighted in~\cite{Ref_LO@CERN}; further studies are ongoing.
The feasibility and vast physics programme potential of a hypothetical Super Chooz is under study within the LiquidO collaboration, supported by several other cooperating institutions.
The Super Chooz project could have a unique and major impact on the field, should the LiquidO technology demonstrate the expected performance.
The programme is complementary to JUNO, DUNE, and HK.
The main aspiration remains to articulate a comprehensive programme to tackle all the measurements needed to yield maximal PMNS unitarity precision.
The success of this ambitious goal is under exploration, but it also relies on the precision that will become available in the next decade.
\newpage
\section{Introduction}
\subsection{Time Synchronization in Large Distributed Systems}
The problem of time synchronization in large distributed systems
consists of giving all the physically disjoint elements of the
system a common time scale on which to operate. This common time
scale is usually achieved by periodically synchronizing the clock
at each element to a reference time source, so that the local time
seen by each element of the system is approximately the same. Time
synchronization plays an important role in many systems in that it
allows the entire system to cooperate and function as a cohesive
group.
Time synchronization is an old problem~\cite{Lamport:78}, but the
question of scalability is not. Recent advances in sensor
networks show a clear trend towards the development of large scale
networks with high node density. For example, a hardware
simulation-and-deployment platform for wireless sensor networks
capable of simulating networks with on the order of 100,000 nodes
was recently developed~\cite{KellyEM:03}. As well, for many years
the Smart Dust project sought to build cubic-millimeter motes for
a wide range of applications~\cite{WarnekLLP:01}. Also, there is
work in progress on the drastic miniaturization of power
sources~\cite{LiLBH:02}. These developments (and many others)
indicate that large scale, high density networks are on the
horizon.
Large scale, high density networks have applications in a variety
of situations. Consider, for example, the military application of
sniper localization. Large numbers of wireless nodes can be
deployed to find the shooter location as well as the trajectory of
the projectile~\cite{LedecziVMSBNK:05}. Since the effective range
of a long-range sniper rifle can be nearly $2$km, in order to
fully track the trajectory of the projectile it may be essential
that our deployed network be tightly synchronized over distances
of a few kilometers. Another example might be the implementation
of a distributed radio for communication. In extracting
information from a deployed sensor network, it may be beneficial
for the nodes to cooperatively transmit information to a far away
receiver~\cite{BarriacMM:04,OchiaiMPT:05,HuS:03c}. Such an
application would require that nodes across the network be well
synchronized. As a result, a need for the synchronization of large
distributed systems is very real and one that requires careful
study to understand the fundamental performance limits on
synchronization.
\subsection{Approaches to Synchronization and the Limitations}
The synchronization of large networks has been studied in fields
ranging from biology to electrical engineering. The study of
synchronous behavior has generally taken one of two approaches.
The first approach is to consider synchronization as an emergent
behavior in complex networks of oscillators. In that work, models
are developed to describe natural phenomena and synchronization
emerges from these models. The second approach is to develop and
analyze algorithms that synchronize engineering networks. Nodes
are programmed with algorithms that estimate clock skew and clock
offset to achieve network synchronization. However, both of these
approaches have significant limitations.
\subsubsection{The Emergence of Synchronous Behavior}
Emergent synchronization properties in large populations has been
the object of intense study in the applied
mathematics~(\cite{MatharM:96,VanVreeswijkA:93}),
physics~(\cite{Chen:94,CorralPDA:95,DiazGuerraPA:98,ErnstPG:98,Gerstner:96,
GuardiolaDLP:00,HerzH:95,Kuramoto:91}), and neural
networks~(\cite{Izhikevich:99,SmithCN:94}) literature. These
studies were motivated by a number of examples observed in nature:
\begin{itemize}
\item In certain parts of south-east Asia, thousands of male fireflies
congregate in trees and flash in synchrony at night~\cite{BuckB:76}.
\item Pacemaker cells of the heart, which on average cause 80
contractions a minute during a person's lifetime~\cite{Jalife:84}.
\item The insulin-secreting cells of the
pancreas~\cite{ShermanRK:88}.
\end{itemize}
For further information and examples, see~\cite{MirolloS:90,
Strogatz:03,McClintock:71,Walker:69}, and the references therein.
A number of models have been proposed to explain the emergence of
synchrony, but perhaps one of the most successful and well known
is the model of {\em pulse-coupled oscillators} by Mirollo and
Strogatz~\cite{MirolloS:90}, based on dynamical systems theory.
Consider a function $f:[0,1]\to[0,1]$ that is smooth, monotone
increasing, concave down (i.e., $f' > 0$ and $f'' < 0$), and is
such that $f(0)=0$ and $f(1)=1$. Consider also a phase variable
$\phi$ such that $\partial\phi/\partial t = \frac 1 T$, where $T$
is the period of a cycle. Then, each element in a group of $N$
oscillators is described by a state variable $x_i\in[0,1]$ and a
phase variable $\phi_i\in[0,1]$ as follows:
\begin{itemize}
\item In isolation, $x_i(t)=f(\phi_i(t))$.
\item If $\phi_i(t) = 0$ then $x_i(t) = 0$, and if $\phi_i(t) = 1$ then
$x_i(t) = 1$.
\item When $x_i(t_0) = 1$ for any of the $i$'s and some time
$t_0$,
then for all other $1\leq j\leq N$, $j\ne i$
\[ \phi_j(t_0^+) = \left\{\begin{array}{rl}
f^{-1}(x_j(t_0)+\varepsilon_{i}),
& x_j(t_0) + \varepsilon_{i} \leq 1 \\
1, & x_j(t_0) + \varepsilon_{i} > 1,
\end{array}\right.
\]
where $t_0^+$ denotes an infinitesimal amount of time after $t_0$.
That is, oscillator $i$ reaching the end of a cycle causes the state
of all other oscillators to increase by the amount $\varepsilon_{i}$,
and the phase variable to change accordingly.
\end{itemize}
The state variable $x_i$ can be thought of as a voltage. Charge is
accumulated over time according to the nonlinearity $f$ and it
discharges once it reaches full charge, resetting the charging
process. Upon discharging, it causes all other charges to increase
by a fixed amount of $\varepsilon_{i}$, up to the discharge point.
For this model, it is proved in~\cite{MirolloS:90} that for all
$N$ and for almost all initial conditions, the system eventually
becomes synchronized.
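To make these dynamics concrete, the following minimal Python sketch (our own illustration, not taken from~\cite{MirolloS:90}) simulates the model event by event, assuming the standard concave charging curve $f(\phi)=(1-e^{-b\phi})/(1-e^{-b})$ and a common coupling strength $\varepsilon_i=\varepsilon$; oscillators kicked past threshold are absorbed into the firing group.
\begin{verbatim}
import numpy as np

def simulate_ms(N=20, eps=0.02, b=3.0, firings=300, seed=0):
    """Event-driven simulation of N identical pulse-coupled oscillators."""
    f    = lambda phi: (1.0 - np.exp(-b * phi)) / (1.0 - np.exp(-b))
    finv = lambda x: -np.log(1.0 - x * (1.0 - np.exp(-b))) / b
    phi = np.random.default_rng(seed).uniform(0.0, 1.0, N)  # arbitrary start
    for _ in range(firings):
        phi += 1.0 - phi.max()            # advance time to the next firing
        fired = phi >= 1.0 - 1e-12
        kick = eps * fired.sum()          # one kick of eps per firing node
        phi[~fired] = finv(np.minimum(f(phi[~fired]) + kick, 1.0))
        absorbed = phi >= 1.0 - 1e-12     # pushed to threshold: fires too
        phi[fired | absorbed] = 0.0       # the firing group resets together
    return phi

phi = simulate_ms()
print("phase spread after 300 firings:", float(phi.max() - phi.min()))
\end{verbatim}
For small $\varepsilon$ the printed phase spread collapses to zero, reflecting the almost-sure convergence result stated above.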
For the network to converge into a synchronous state, one key
assumption is that the behavior of every single oscillator is
governed by the same function $f(\cdot)$. This means that all
oscillators must have the same frequency. From the literature, it
appears that this requirement is nearly always needed. As far as
we are aware, for a fully synchronous behavior to emerge, the
oscillators need to have the same, or nearly the same, oscillation
frequencies.
The need for nearly identical oscillators presents a significant
limitation for emergent synchronization. This emergence of
synchrony is clearly desirable and it has been considered for
communication and sensor networks
in~\cite{HongS:04,Werner-AllenTPWN:05, LucarelliW:04}. However,
whether or not these techniques can be adapted to synchronize
networks with nodes that have arbitrary oscillator frequencies
(clock skew) is still unclear. Thus, in order to overcome this
limitation and find techniques capable of synchronizing a more
general class of networks, we turn to algorithms designed to
estimate certain unknown parameters such as clock skew.
\subsubsection{Estimation of Synchronization Parameters and the Scalability Problem}
\label{sec:estimate-params}
There have been many synchronization techniques proposed for use
in sensor networks. These algorithms generally allow each node to
estimate its clock skew and clock offset relative to the reference
clock. Reference Broadcast Synchronization
(RBS)~\cite{ElsonGE:02} eliminates transmitter side uncertainties
by having a transmitter broadcast reference packets to the
surrounding nodes. The receiving nodes then synchronize to each
other using the arrival of the reference packets as
synchronization events. Tiny-Sync/Mini-Sync~\cite{SichitiuV:03}
and the Timing-sync Protocol for Sensor Networks
(TPSN)~\cite{GaneriwalKW:03} organize the network into a
hierarchial structure and the nodes are synchronized using
pair-wise synchronization. In lightweight tree-based
synchronization (LTS)~\cite{GreunenR:03}, pair-wise
synchronization is also employed but the goal of LTS is to reduce
communication and computation requirements by taking advantage of
relaxed accuracy constraints. The Flooding Time Synchronization
Protocol (FTSP)~\cite{MarotiKSL:04} achieves one-hop
synchronization by having a root node broadcast timing information
to surrounding nodes. These surrounding nodes then proceed to
broadcast their synchronized timing information to nodes beyond
the broadcast domain of the root node. This process can continue
for multi-hop networks.
The problem with each of these traditional synchronization
techniques is that synchronization error will increase with each
hop. Since each node is estimating certain synchronization
parameters, i.e. clock skew, there will be inherent errors in the
estimate. As a result, a node multiple hops away from the node
with the reference clock will be estimating its parameters from
intermediate nodes that already have estimation errors. Therefore,
this introduces a fundamental \emph{scalability problem}: as the
number of hops across the network grows, the synchronization error
across the network will also grow.
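As a rough illustration of this effect (a sketch with assumed numbers, not a model of any particular protocol), suppose each hop contributes an independent, zero-mean Gaussian offset-estimation error; the end-to-end error then performs a random walk whose standard deviation grows like the square root of the hop count:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
hops, trials = 20, 10000
sigma_hop = 1.0   # assumed per-hop estimation error (arbitrary time units)

# each hop re-estimates its offset from an already-noisy upstream node,
# so the independent per-hop errors accumulate along the path
err = rng.normal(0.0, sigma_hop, size=(trials, hops)).cumsum(axis=1)
for h in (1, 5, 10, 20):
    print(f"hop {h:2d}: std of end-to-end sync error = {err[:, h-1].std():.2f}")
\end{verbatim}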
Current trends in network technology are clearly moving us in the
direction of large, multi-hop networks. First, sensors are
decreasing in size and this size decrease will most likely be
accompanied by a decrease in communication range. Thus, more hops
will be required to traverse a network deployed over a given area.
Second, as we deploy networks over larger and larger areas, then
for a given communication range, the number of hops across the
network will also increase. In either case, the increased number
of hops required to communicate across the network will increase
synchronization error. Therefore, it is essential that we develop
techniques that can mitigate the accumulation of synchronization
error over multiple hops.
\subsection{Spatial Averaging and Synchronization}
\subsubsection{Cooperation through Spatial Averaging}
To decrease the error increase in each hop, we need to decrease
the estimation error. There are two primary ways of achieving
this. First, each node can increase the amount of timing
information it obtains from neighboring nodes. For example, from
a received timing packet, the node may be able to construct a data
point telling it the approximate time at the reference clock and
the corresponding time at its local clock. Using a collection of
these data points, the node can estimate clock skew and clock
offset. So instead of using, say, five packets with timing
information, a node can wait for ten packets. More data points
will generally give better estimates. The drawback to such an
approach is the increase in the number of packet exchanges.
The second way in which to reduce estimation error is to increase
the quality of each data point obtained by the nodes. This can be
achieved through improving packet exchange algorithms and time
stamping techniques. However, we believe that there is one
fundamentally new approach to improving data point quality that
has not been carefully studied. This is to use spatial averaging to
improve the quality of each data point.
The motivation for this approach is very simple. Assume that each
node has many neighbors. If all nodes in the network are to be
synchronized, then the neighbors of any given node will also have
synchronization information. Is it possible to simultaneously use
information from all the neighbors to improve the quality of a
timing observation made by a node? Furthermore, it would seem to
make sense that with more neighbors, hence more available timing
information, the quality of the constructed data point should
improve. If this is indeed the case, then achieving
synchronization through the use of spatial averaging will provide
a fundamentally new trade-off in improving synchronization
performance. Network designers would simply be able to increase
the number and density of nodes to obtain better network
synchronization. The study of cooperative time synchronization
using spatial averaging is the focus of this work.
\subsubsection{Model and Technique} \label{sec:introModel}
To obtain a model for developing cooperative synchronization in
large wireless networks, we begin by looking at the signals
observed by a node in a network with $N$ nodes uniformly deployed
over a fixed finite area. To start, we assume propagation delay to be
negligible (the general case is considered in
Section~\ref{sec:timeDelay}). All nodes transmit a pulse $p(t)$
and a node $j$ will see a signal $A_{j,N}(t)$ which is the
superposition of all these pulses,
\[
A_{j,N}(t)
= \sum_{i=1}^N \frac{A_{max}K_{j,i}}{N}
p(t-\tau_{0}-T_{i}).
\]
In this expression, $p(t)$ is the basic pulse transmitted by each
node (assumed to be the same for all nodes). $\tau_{0}$ is the
ideal pulse transmit time, but since we assume imperfect time
synchronization among the nodes we have $T_{i}$ modelling random
errors in the pulse transmission time. $K_{j,i}$ models the
amplitude loss in the signal transmitted by the $i$th node.
$A_{max}$ is the maximum magnitude transmitted by a node. We scale
each node's transmission by $1/N$ so that, as the network density
grows, the total power radiated does not grow unbounded. This
model thus describes the received signal seen at a node $j$ for a
network with $N$ nodes and this holds for any $N$. Increasing $N$
will have two effects: (a) node density will increase since the
network area is fixed and (b) node signal transmission magnitude
will decrease due to the $1/N$ scaling. Therefore, by increasing
$N$ this model allows us to study the scalability of networks as
node density grows and node size decreases.
Given that these are the signals observed at each node, we ask: is
it possible for $A_{j,N}(t)$ to encode a time synchronization
signal that will enable all nodes in the network to synchronize
their clocks with bounded error, as $N\to\infty$? The answer is
yes, and the key to proving all our results is the law of large
numbers.
Our key idea is the following. If all nodes were able to determine
when time $\tau_{0}$ (in the reference time) arrives, then by
transmitting $p(t)$ at time $\tau_{0}$, the signal observed at any
node $j$ would be $p(t-\tau_{0})\sum_{i=1}^N
\frac{A_{max}K_{j,i}}{N}$, which is a suitably scaled version of
$p(t)$ centered at $\tau_{0}$. In reality however, there will be
some error in the determination of $\tau_{0}$, which we account
for by allowing for a node-dependent random error $T_i$. But, if
the distribution of $T_i$ satisfies certain conditions, then the
effects of that timing error can be averaged out. A pictorial
representation of why this should be the case is shown in
Fig.~\ref{fig:why-sync-holds}.
\begin{figure}[!ht]
\centerline{\psfig{file=pulse.eps,height=4cm,width=7cm}
\psfig{file=rinfinity.eps,height=4cm,width=7cm}}
\caption{\small Assume $N$ square waves are transmitted (one by
each node) at random times. These times have the properties that
they all have the same mean, a small variance compared to the
duration of the wave, and their distribution is symmetric. Then,
under the assumption of $N$ large, it follows from the Law of
Large Numbers that the observed signal is going to be a smoothed
version of the square wave, in which the center {\em
zero-crossing} will correspond to the location of the mean of the
random times.}
\label{fig:why-sync-holds}
\end{figure}
Therefore, intuitively we can see how the technique of
\emph{cooperative time synchronization} using spatial averaging
can average out the inherent timing errors in each node. Even
though there is randomness and uncertainty in each node's
estimates, by using cooperation among a large number of nodes it
is possible to recover {\em deterministic} parameters from the
resulting aggregate waveform (such as the location of certain
zero-crossings) in the limit as node density grows unbounded. Thus
more nodes will give us better estimates. This is because the
random waveform converges to a deterministic one as more and more
nodes cooperatively generate an aggregate waveform. At the same
time, the average power required by each node will decrease since
smaller nodes send smaller signals. Therefore, by programming
suitable dynamics into the nodes, in this paper we show how it is
possible to generate an aggregate output signal with {\em
equispaced} zero-crossings in the limit of asymptotically dense
networks. Thus, the detection of these zero-crossings plays the
same role as that of an externally generated time reference signal
based on which all nodes can synchronize.
We develop this synchronization technique in three main steps.
One, we set up the model for $A_{j,N}(t)$. Two, we specify
characteristics of the model (i.e. the distribution of $T_{i}$)
that allow us to prove desirable properties of the aggregate
waveform (such as a center zero-crossing at $\tau_{0}$). Three,
we develop the estimators needed for our synchronization technique
and show that the estimators give us the desired characteristics.
\subsection{Main Contributions and Organization of the Paper}
The main contributions presented in this paper are the following:
\begin{itemize}
\item The definition of a probabilistic model for the study of the
time synchronization problem in wireless networks. This model
does contain the classical Mirollo-Strogatz model as a special
case, but its formulation and the tools used to prove convergence
results are of a completely different nature (purely
probabilistic, instead of based on the theory of dynamical
systems).
\item Using this model, we provide a rigorous analysis
of a new cooperative time synchronization technique that employs
spatial averaging and has favorable scaling properties. As the
density of nodes increases, synchronization performance improves.
In particular, in the limit of infinite density, deterministic
parameters for synchronization can be recovered.
\item We show
that cooperative time synchronization works perfectly for
negligible propagation delay. When propagation delay is
considered, we find that asymmetries at the boundaries reveal some
limitations that need to be carefully considered in designing
algorithms that take advantage of spatial averaging.
\end{itemize}
In analyzing the
proposed cooperative time synchronization technique, our goal is
to show that the proposed technique can average out all random
error and provide deterministic parameters for synchronization as
node density grows unbounded. This asymptotic result can be viewed
as a \emph{convergence in scale} to synchrony. The result serves
as a theoretical foundation for allowing a new trade-off between
node density and synchronization performance. In particular,
higher node density can yield better synchronization.
The rest of this paper is organized as follows. The general model
is presented in Section~\ref{sec:systemModel}. Of particular
interest here is Section~\ref{sec:specialcase}, where we show how
our model contains the model of Mirollo and Strogatz for
pulse-coupled oscillators as a special case~\cite{MirolloS:90}. In
Section~\ref{sec:timesync-setup} we specialize the general model
for our synchronization setup and develop waveform properties that
will be used in time synchronization. In
Section~\ref{sec:timeSynchronization} we develop the cooperative
time synchronization technique for no propagation delay. We
extend the cooperative synchronization ideas to the case of
propagation delay in Section~\ref{sec:timeDelay}. The paper
concludes in Section~\ref{sec:conclusion} with a detailed
discussion on the scalability issue and how the technique proposed
in this work lays the theoretical foundation for a general class
of cooperative time synchronization techniques that use spatial
averaging.
\section{System Model}
\label{sec:systemModel}
\subsection{Clock Model}
We consider a network with $N$ nodes uniformly distributed over a
fixed finite area. The behavior of each node $i$ is governed by a
clock $c_{i}$ that counts up from $0$. The introduction of $c_{i}$
is important since it provides a consistent timescale for node
$i$. By maintaining a table of pulse-arrival times, node $i$ can
utilize the arrival times of many pulses over an extended period
of time.
The clock of one particular node in the network will serve as the
reference time and to this clock we wish to synchronize all other
nodes. We will call the node with the reference clock node $1$ and
the clocks of other nodes are defined relative to the clock of
node $1$. We never adjust the frequency or offset of the local
clock $c_{i}$ because we wish to maintain a consistent time scale
for node $i$.
The clock of node $1$, $c_{1}$, will be defined as $c_{1}(t)=t$
where $t \in [0,\infty)$. Taking $c_{1}$ to be the reference
clock, we now define the clock of any other arbitrary node $i$,
$c_{i}$. We define $c_{i}$ as
\begin{equation} \label{eq:clock}
c_{i}(t)=\alpha_{i}(t-\bar{\Delta}_{i})+\Psi_{i}(t),
\end{equation}
where
\begin{itemize}
\item $\bar{\Delta}_{i}$ is an unknown offset between the start
times of $c_{i}$ and $c_{1}$.
\item $\alpha_{i}>0$ is a constant and for each $i$,
$\alpha_{i} \in [\alpha_{low},\alpha_{up}]$ where
$\alpha_{up},\alpha_{low}>0$ are finite. This bound on $\alpha_{i}$
means that the frequency offsets between any two nodes can not be
arbitrarily large.
\item $\Psi_{i}(t)$ is a stochastic process modeling random timing
jitter.
\end{itemize}
Thus, this model assumes that there is a bounded constant
frequency offset between the oscillators of any two nodes as well
as some random clock jitter.
It is important to note that node $1$ does not have to be special
in any way; its clock is simply a reference time on which to
define the clocks of the other nodes. This means that our clock
model actually describes the relative relationship of all the
clocks in the network by using an arbitrary node's clock as a
reference.
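For concreteness, a minimal Python sketch of this clock model follows; the numerical ranges chosen for $\alpha_i$, $\bar{\Delta}_i$, and the jitter standard deviation are illustrative assumptions, not values prescribed by the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_clock(alpha, delta_bar, sigma):
    """c_i(t) = alpha*(t - delta_bar) + Psi_i(t), i.i.d. Gaussian jitter."""
    return lambda t: (alpha * (np.asarray(t, dtype=float) - delta_bar)
                      + rng.normal(0.0, sigma, np.shape(t)))

c1 = make_clock(1.0, 0.0, 0.0)                  # reference clock: c_1(t) = t
ci = make_clock(rng.uniform(0.98, 1.02),        # skew within assumed bounds
                rng.uniform(0.0, 5.0),          # unknown start-time offset
                0.01)                           # jitter std (assumed)

t = np.linspace(0.0, 10.0, 5)
print("c_1(t):", c1(t))
print("c_i(t):", ci(t))
\end{verbatim}
Note that each evaluation of the clock draws fresh jitter samples, matching the assumption of independent samples at any set of times.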
\subsection{Pathloss Only Model} \label{sec:prop-model}
\subsubsection{A Random Model for Pathloss}
From Section~\ref{sec:introModel}, we see that we are interested
in studying the aggregate waveform observed at a node $j$. As a
result, we are only concerned with the aggregate signal magnitude
and do not care about the particular signal contribution from each
surrounding node. With this in mind, we can develop a random
model for pathloss that, for dense networks, gives the appropriate
aggregate signal magnitude at node $j$. Such a model is ideal for
our situation since we are studying asymptotically dense networks.
We start with a general pathloss model $K(d)$, where $0\leq
K(d)\leq 1$ for all distances $d\geq 0$, is non-increasing and
continuous. $K(d)$ is a fraction of the transmitted magnitude seen
at distance $d$ from the transmitter. For example, if the receiver
node $j$ is at distance $d$ from node $i$, and node $i$ transmits
a signal of magnitude $A$, then node $j$ will hear a signal of
magnitude $AK(d)$. We derive $K(d)$ from a power pathloss model
since any pathloss model captures the \emph{average} received
power at a given distance from the transmitter. This average
received power is perfect for modelling received signal magnitudes
in our problem setup since we are considering asymptotically dense
networks. Due to the large number of nodes at any given distance
$d$ from the receiver, using the average received magnitude at
distance $d$ as the contribution from each node at that distance
will give a good modelling of the amplitude of the aggregate
waveform.
The random pathloss variable $K_{j}$ will be derived from $K(d)$.
To understand how $K_{j}$ and $K(d)$ are related, we give an
intuitive explanation of the meaning of $K_{j}$ as follows:
$\textrm{Pr}[K_{j}\in(k,k+\Delta)]$ is the fraction of nodes at
distances $d$ from node $j$ such that $K(d)\in(k,k+\Delta)$, where
$\Delta$ is a small constant. This means that, roughly speaking,
for any given scaling factor $K_{j}=k$, $f_{K_{j}}(k)\Delta$ is
the fraction of received signals with magnitude scaled by
approximately $k$, where $f_{K_{j}}(k)$ is the probability density
function of $K_{j}$. Thus, if we scale the transmit magnitude $A$
from every node $i$ by an independent $K_{j}$, then as the number
of nodes, $N$, gets large, node $j$ will see $Nf_{K_{j}}(k)\Delta$
signals of approximate magnitude $Ak$, and this holds for all $k$
in the range of $K_{j}$. This is because taking a large number of
independent samples from a distribution results in a good
approximation of the distribution.
Thus, this intuition tells us that scaling the magnitude of the
signal transmitted from every node $i$ by an independent sample of
the random variable $K_{j}$ gives an aggregate signal at node $j$
that is the same magnitude as if we generated the signal using
$K(d)$ directly. Even though the signals from two nodes at the
same distance from a receiver have correlated magnitudes, we do
not care about the signal magnitude from any particular node but
only that an appropriate number of all possible received signal
magnitudes contribute to the aggregate waveform. For a receiving
node $j$, we choose therefore to work with the random variable
$K_{j}$ instead of directly with $K(d)$ because, for the goals of
this paper, doing so has two major advantages: (a) we can obtain
desirable limit results by placing very minimal restrictions on
the distribution of the $K_{j}$'s (and hence on $K(d)$) and (b) we
can apply tools from probability theory (basically, the strong law
of large numbers) to carry out our analysis.
\subsubsection{Definition of $K_{j}$}
From the above intuition we can define the cumulative distribution
function of $K_{j}$ as
\begin{equation} \label{eq:prop_cdf}
F_{K_{j}}(k) \;\;=\;\; \textrm{Pr}(K_{j}\leq k)
\;\;=\;\; \left\{ \begin{array}{ll}
0 & k\in(-\infty,0) \\
\frac{A_{T}-A(j,\bar{r})}{A_{T}}
\;\;=\;\; 1 - \frac{A(j,\bar{r})}{A_{T}} & k\in[0,1] \\
1 & k\in(1,\infty)
\end{array} \right.
\end{equation}
where
\begin{itemize}
\item $A_{T}$ is the total area of the network,
\item $A(j,a)$ is the area of the network contained in a circle of radius $a$ centered at node $j$,
\item $\bar{r} = \textrm{sup}\{d:K(d) > k\}$.
\end{itemize}
From the above discussion we see that the distribution of $K_{j}$
is only a function of node $j$, the receiving node. We illustrate
the relationship among node $j$, $K(d)$, $\bar{r}$, and
$F_{K_{j}}(k)$ in Fig~\ref{fig:propModel-illus}. We sometimes
write $K_{j,i}$ with $i$ used to index each node surrounding node
$j$. $i$ is thus indexing a sequence of independent random
variables $K_{j,i}$ for fixed $j$. Therefore, for a given $j$,
$K_{j,i}$'s are independent and identically distributed (i.i.d.)
with a cumulative distribution function given by
(\ref{eq:prop_cdf}) for all $i$.
We assume that $K_{j}$ has the following properties:
\begin{itemize}
\item $K_{j}$ is independent from $\Psi_{l}(t)$ for all $j$, $l$, and $t$.
\item $0 \leq K_{j} \leq 1$, $0<E(K_{j})\leq 1$, and $\textrm{Var}(K_{j})\leq 1$.
\end{itemize}
The requirements on the random variable $K_{j}$ place
restrictions on the model $K(d)$. Any function $K(d)$ that yields
a $K_{j}$ with the above requirements can be used to model
pathloss.
\begin{figure}[!h]
\centerline{\psfig{file=propModel_illus.eps,height=8.5cm}}
\caption[An illustration of node $j$, $K(d)$, $\bar{r}$, and
$F_{K_{j}}(k)$.]{\small An illustration of the cumulative
distribution function $F_{K_{j}}(k)$ is shown in the bottom-right
figure. For a given scaling value $k$, $F_{K_{j}}(k)$ is defined
to be $1-(A(j,\bar{r})/A_{T})$, where the relationship between
$\bar{r}$ and $k$ is shown in the top-right figure. The area
$A(j,\bar{r})$ and its relation to node $j$ is shown in the
top-left figure.} \label{fig:propModel-illus}
\end{figure}
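This intuition can be checked numerically. In the following minimal sketch we assume a disc-shaped network and the illustrative pathloss law $K(d)=\max(0,\,1-d/R)$ (continuous, non-increasing, and zero beyond $R$); drawing node positions uniformly over the area and applying $K(\cdot)$ to their distances from node $j$ yields i.i.d. samples of $K_{j}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A_radius = 100.0                     # network: a disc of this radius (assumed)
node_j = np.array([20.0, -10.0])     # receiver position (assumed)

def K(d, R=60.0):
    """Illustrative pathloss: continuous, non-increasing, zero beyond R."""
    return np.maximum(0.0, 1.0 - d / R)

# uniform node positions in the disc, via rejection sampling
pts = rng.uniform(-A_radius, A_radius, size=(400000, 2))
pts = pts[(pts**2).sum(axis=1) <= A_radius**2][:100000]
d = np.linalg.norm(pts - node_j, axis=1)
Kj = K(d)                            # i.i.d. samples of K_j

print("Pr[K_j = 0] (nodes out of range):", np.mean(Kj == 0.0))
print("E[K_j] =", Kj.mean(), " Var[K_j] =", Kj.var())
\end{verbatim}
The empirical distribution of these samples reproduces $F_{K_{j}}$ in (\ref{eq:prop_cdf}), including the point mass at $k=0$ for multi-hop networks.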
\subsection{Delay and Pathloss Model} \label{sec:delay-model}
In this section we develop a more complex model to simultaneously
model propagation delay and pathloss. This leads to the joint
development of the delay random variable $D_{j}$ and a
corresponding pathloss random variable $K_{j}$.
\subsubsection{Correlation Between Delay and Pathloss}
Since we want to develop a model for both pathloss and time delay,
we start by keeping the pathloss function $K(d)$ defined in
Section~\ref{sec:prop-model}. The general delay model assumes a
function $\delta(d)$ that models the time delay as a function of
distance. $\delta(d)$ describes the time in terms of $c_{1}$ that
it takes for a signal to propagate a distance $d$. For example, if
node $i$ and node $j$ are distance $d_{0}$ apart, then a pulse
sent by node $i$ at time $c_{1}=0$ will be seen at node $j$ at
time $c_{1}=\delta(d_{0})$. We make the reasonable assumption that
$\delta(d)$ is continuous and strictly monotonically increasing
for $d\geq 0$.
As with the pathloss only model, we want to define a delay random
variable $D_{j}$ for each receiving node $j$. Recall that this
means that for every node $j$ there is a random variable $D_{j}$
associated with it since, in general, each node $j$ will see
different delays. There is a correlation between the delay random
variable $D_{j}$ and the pathloss random variable $K_{j}$. This
correlation arises for two main reasons. First, since in
Section~\ref{sec:prop-model} we define $K(d)$ to be monotonically
decreasing and continuous, it is possible for $K(d)=0$ for
$d\in[R,\infty)$, $R>0$. This might be the case for a multi-hop
network. In this situation, there will be a set of nodes whose
transmissions will never reach node $j$ (i.e. infinite delay) even
though according to $\delta(d)$ these nodes should contribute a
signal with finite delay. Second, a small $K_{j}$ value would
represent a signal from a far away node. As a result, the
corresponding $D_{j}$ value should be large to reflect large
delay. Therefore, keeping these two points in mind, we proceed to
develop a model for both pathloss and propagation delay.
\subsubsection{Definition of $D_{j}$ and $K_{j}$}
We define the cumulative distribution function of $D_{j}$ as
\begin{equation} \label{eq:delay_cdf}
F_{D_{j}}(x) \;\;=\;\; \textrm{Pr}(D_{j}\leq x)
\;\;=\;\; \left\{ \begin{array}{ll}
0 & x\in(-\infty,0) \\
\frac{A(j,r')}{A_T} & x\in[0,\delta(R)] \\
a(x-\delta(R))+\frac{A(j,R)}{A_{T}} & x\in(\delta(R),\delta(R+\Delta
R)] \\
1 & x\in(\delta(R+\Delta R),\infty)
\end{array} \right.
\end{equation}
where $r' = \textrm{sup}\{r:\delta(r)\leq x\}$, $\Delta R>0$ is a
constant, $R=\sup\{d:K(d)>0\}$, and
\begin{displaymath}
a = \frac{1-\frac{A(j,R)}{A_{T}}}{\delta(R+\Delta R)-\delta(R)}.
\end{displaymath}
Recall that $A(j,a)$, defined in Section~\ref{sec:prop-model}, is
the area of the network contained in a circle of radius $a$
centered at node $j$ and $A_{T}$ is the total area of the network.
Note that $R$ can be infinite.
Using the delay random variable $D_{j}$ with the cumulative
distribution function in (\ref{eq:delay_cdf}), we define $K_{j}$
as
\begin{equation} \label{eq:DjKjconnection}
K_{j} = K(\delta^{-1}(D_{j})),
\end{equation}
where $K(\cdot)$ is the deterministic pathloss function from
Section~\ref{sec:prop-model} and
$\delta^{-1}:[0,\infty)\to[0,\infty)$ is the inverse function of
the deterministic delay function $\delta(\cdot)$. Note that
$\delta^{-1}(\cdot)$ exists since $\delta(\cdot)$ is continuous
and strictly monotonically increasing on $[0,\infty)$.
\subsubsection{Intuition Behind $D_{j}$ and $K_{j}$}
To understand the distribution of $D_{j}$, we need to consider the
definition of $K_{j}$ as well. Recall that a signal arriving with
delay $D_{j}$ is scaled by the pathloss random variable $K_{j}$.
Let us consider the cumulative distribution in two pieces,
$x\in[0,\delta(R)]$ and $x\in(\delta(R),\infty)$. The case for
$x\in(-\infty,0)$ is trivial. First, for $x\in[0,\delta(R)]$, the
probability that $D_{j}$ takes a value less than or equal to $x$
is simply the fraction of the network area around node $j$ such
that the nodes are at distances $d$ with $\delta(d)\leq x$. The
intuition is the same as that for the development of $K_{j}$ in
Section~\ref{sec:prop-model}. Second, for
$x\in(\delta(R),\infty)$, the situation is more complex. Note that
a transmitted signal from a node at distance $d\in(R,\infty)$ from
$j$ will arrive at node $j$ with infinite delay since $K(d)=0$ for
$d\in(R,\infty)$. Since any delay values in
$x\in(\delta(R),\infty)$ correspond to distances
$d=\delta^{-1}(x)\in(R,\infty)$, the corresponding scaling value
will be zero because $K_{j}$ and $D_{j}$ are related by
(\ref{eq:DjKjconnection}). As a result, it does not matter what
delay values we assign to the fraction of the network area outside
a circle of radius $R$ centered at node $j$ as long as their delay
value $x$ is such that $\delta^{-1}(x)\in(R,\infty)$. Thus, we
can arbitrarily choose a constant $\Delta R$ value and construct a
piecewise linear portion of the cumulative distribution function
of $D_{j}$ on $x\in(\delta(R),\infty)$. The probability that
$D_{j}\in(\delta(R),\infty)$ will be the fraction of the network
area outside a circle of radius $R$ around node $j$. And since
$D_{j}\in(\delta(R),\infty)$ will have a corresponding $K_{j}$
value that is zero, this fraction of nodes will not contribute to
the aggregate waveform at node $j$. It is clear that the
correlated $D_{j}$ and $K_{j}$ random variables work together to
accurately model a signal arriving with both pathloss and
propagation delay. An illustration of how $K(d)$, $\delta(d)$,
node $j$, and $F_{D_{j}}(x)$ are related can be found in
Fig.~\ref{fig:delayModel-illus}.
\begin{figure}[!h]
\centerline{\psfig{file=delayModel_illus.eps,height=9.5cm}}
\caption[An illustration $K(d)$, $\delta(d)$, node $j$, and
$F_{D_{j}}(x)$.]{\small From the top-left and bottom-left figures,
we can see how $K(d)$ determines the set of nodes surrounding node
$j$ that will contribute to the aggregate waveform at node $j$.
This contributing set of nodes is related to $F_{D_{j}}(x)$
through $\delta(d)$ and this is illustrated in the top-right and
bottom-right figures.} \label{fig:delayModel-illus}
\end{figure}
We require that $D_{j}$ is bounded, has finite expectation, and
has finite variance for all $j$. Note that $D_{j}\geq 0$ by the
requirement that $\delta(d)\geq 0$. As well, since the cumulative
distribution in (\ref{eq:delay_cdf}) is continuous, and often
absolutely continuous, we assume that $D_{j}$ has a probability
density function $f_{D_{j}}(x)$. When we write $D_{j,i}$, the $i$
indexes each node surrounding node $j$. Thus, the $D_{j,i}$'s are
independent and identically distributed in $i$ for a given $j$ and
have a cumulative distribution given by (\ref{eq:delay_cdf}).
Using the $K_{j}$ and $D_{j}$ developed in this section to
simultaneously model pathloss and propagation delay, respectively,
we will be able to closely approximate the received aggregate
waveform at any node $j$ as $N\to\infty$.
To summarize, we see that our choice of the pathloss and delay
random variables will depend on what we want to model. If we only
consider pathloss and not propagation delay, then we will use the
random variable $K_{j}$ defined in Section~\ref{sec:prop-model}.
If we account for both pathloss and delay, then we will use the
delay random variable $D_{j}$ in this section
(Section~\ref{sec:delay-model}) and the pathloss random variable
$K_{j}$ defined by (\ref{eq:DjKjconnection}).
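Continuing the previous sketch with an assumed propagation law $\delta(d)=d/c$, the joint model can be sampled as follows: in-range nodes receive $D_{j}=\delta(d)$ and the correlated $K_{j}=K(\delta^{-1}(D_{j}))=K(d)$, while out-of-range nodes are assigned an (arbitrary) delay drawn uniformly on $(\delta(R),\delta(R+\Delta R)]$, for which the scaling is zero.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
c, R, dR, A_radius = 10.0, 60.0, 5.0, 100.0    # assumed constants
node_j = np.array([20.0, -10.0])

delta     = lambda d: d / c                    # strictly increasing delay law
delta_inv = lambda x: c * x
K         = lambda d: np.maximum(0.0, 1.0 - d / R)

pts = rng.uniform(-A_radius, A_radius, size=(400000, 2))
pts = pts[(pts**2).sum(axis=1) <= A_radius**2][:100000]
d = np.linalg.norm(pts - node_j, axis=1)

# eq. (delay_cdf): the linear piece is realized as a uniform draw for d > R
Dj = np.where(d <= R, delta(d),
              rng.uniform(delta(R), delta(R + dR), size=d.shape))
Kj = K(delta_inv(Dj))                          # eq. (DjKjconnection)

print("fraction with K_j = 0:", np.mean(Kj == 0.0))
print("largest delay with a nonzero contribution:", Dj[Kj > 0].max())
\end{verbatim}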
\subsection{Synchronization Pulses and the Pulse-Connection Function}
The exchange of pulses is the method through which the network
will maintain time synchronization. Each node $i$ will
periodically transmit a scaled pulse $A_{i} p(t)$, where $A_{i}$
is a constant and $p(t)$, in general, can be any pulse. We call
the interval of time during which a synchronization pulse is
transmitted a \emph{synchronization phase}.
What each node does with a set of pulse arrival observations is
determined by the pulse-connection function $X_{n,i}^{c_{i}}$ for
node $i$. The pulse-connection function is a function that
determines the time, in the time scale of $c_{i}$, when node $i$
will send its $n$th pulse. It can be a function of the current
value of $c_{i}(t)$ and past pulse arrival times. This function
basically determines how any node $i$ reacts to the arrival of a
pulse.
\subsection{An Example: Pulse-Coupled Oscillators}
\label{sec:specialcase}
The system model that we presented thus far is powerful because it
is very general. In this section we show that it is a
generalization of the pulse-coupled oscillator model proposed by
Mirollo and Strogatz~\cite{MirolloS:90}. As a result, the results
presented in that paper will hold under the simplified version of
our model.
\subsubsection{Model Parameters for Pulse-Coupled Oscillators}
In setting up the system model, Mirollo and Strogatz make four key
assumptions:
\begin{itemize}
\item Pathloss Model: The first assumption that is made is that
there is all-to-all coupling among all $N$ oscillators. This means
that each oscillator's transmission can be heard by all other
oscillators. Thus, for our model we ignore pathloss, i.e. $K(d) =
1$, to allow any node's transmission to be heard by each of the
other $N-1$ nodes.
\item Delay Model: The second assumption is
that there is instantaneous coupling. This assumption is the same
as setting $\delta(d) = 0$. In such a situation we would use our
pathloss only model.
\item Synchronization Pulses: The third key
assumption made in~\cite{MirolloS:90} is that there is non-uniform
coupling, meaning that each of the $N$ oscillators fire with
strengths $\epsilon_{1},\dots,\epsilon_{N}$. We modify the
parameters in our model by making node $i$ transmit with magnitude
$A_{i} = \epsilon_{i}$. They also assume that any two pulses
transmitted at different times will be seen by an oscillator as
two separate pulses. In our model, we may choose any pulse $p(t)$
that has an arbitrarily short duration and each node will detect
the pulse arrival time and pulse magnitude.
\item Clock Model: The
fourth important assumption made by Mirollo and Strogatz is that
the oscillators are identical but they start in arbitrary initial
conditions. We simplify our clock model in~(\ref{eq:clock}) by
eliminating any timing jitter, i.e. $\Psi_{i}(t) = 0$, and making
the clocks identical by setting $\alpha_{i} = 1$ for
$i=1,\dots,N$. We leave $\bar{\Delta}_{i}$ in the model to account
for the arbitrary initial conditions. We also assume that the
phase variable in the pulse-coupled oscillator model increases at
the same rate as our clock. That is, the time it takes the phase
variable to go from zero to one and the time it takes our clock to
count from one integer value to the next are the same.
\end{itemize}
Now that we have identical system models, what remains is to
modify our model to mimic the coupling action detailed
in~\cite{MirolloS:90}. This is accomplished by defining a proper
pulse-connection function $X_{n,i}^{c_{i}}$.
\subsubsection{Choice of Pulse-Connection Function}
To match the coupling action in~\cite{MirolloS:90}, we choose a
pulse transmit time function $X_{n,i}^{c_{i}}(z_{k,i}^{c_{i}},
z_{k-1,i}^{c_{i}},\dots,z_{1,i}^{c_{i}}, x_{n-1,i}^{c_{i}})$ that
is a function of pulse receive times and also the time of node
$i$'s $(n-1)$th pulse transmission time. $z_{k,i}^{c_{i}}$ is the
time in terms of $c_{i}$ that node $i$ receives its $k$th pulse
since its last pulse transmission at $x_{n-1,i}^{c_{i}}$. In this
case, $X_{n,i}^{c_{i}}$ will be a function that updates node $i$'s
$n$th pulse transmission time each time node $i$ receives a pulse.
Let $X_{n,i}^{c_{i}}(k) \stackrel{\Delta}{=}
X_{n,i}^{c_{i}}(z_{k,i}^{c_{i}},
z_{k-1,i}^{c_{i}},\dots,z_{1,i}^{c_{i}}, x_{n-1,i}^{c_{i}})$ where
it is node $i$'s $n$th pulse transmission time after observing $k$
pulses since its last pulse transmission. Node $i$ will transmit
its pulse as soon as $X_{n,i}^{c_{i}}\leq c_{i}(t)$ where
$c_{i}(t)$ is node $i$'s current time. As soon as the node
transmits a pulse at $X_{n,i}^{c_{i}}$ the function will reset and
become $X_{n+1,i}^{c_{i}}(0) = x_{n,i}^{c_{i}}+1$. The node is now
ready to receive pulses and at its first received pulse, the next
transmission time will become $X_{n+1,i}^{c_{i}}(1)$.
$X_{n,i}^{c_{i}}$ will thus be defined as
\begin{eqnarray} \label{eq:pco_estimator}
X_{n,i}^{c_{i}}(k) &=& X_{n,i}^{c_{i}}(k-1) -
[f^{-1}(\epsilon_{j}+f(z_{k,i}^{c_{i}}-x_{n-1,i}^{c_{i}}))-(z_{k,i}^{c_{i}}-x_{n-1,i}^{c_{i}})],
\qquad k>0\\
X_{n,i}^{c_{i}}(0) &=& x_{n-1,i}^{c_{i}}+1
\label{eq:pco_estimator2}
\end{eqnarray}
where the pulse received at $z_{k,i}^{c_{i}}$ is a pulse of
magnitude $\epsilon_{j}$ and the function $f:[0,1]\to [0,1]$ is
the smooth, monotonic increasing, and concave down function
defined in~\cite{MirolloS:90}.
Equations~(\ref{eq:pco_estimator}) and (\ref{eq:pco_estimator2})
fundamentally say that each time node $i$ receives a pulse, node
$i$'s next transmission time will be adjusted. This is in line
with the behavior of the coupling model described by Mirollo and
Strogatz since each time an oscillator receives a pulse, its state
variable is pulled up by $\epsilon$ thus adjusting the time at
which the oscillator will next fire. To see how
equations~(\ref{eq:pco_estimator}) and (\ref{eq:pco_estimator2})
relate to the coupling model in~\cite{MirolloS:90}, let us
consider an example with two pulse coupled oscillators. Consider
two oscillators $A$ and $B$ illustrated in
Fig.~\ref{fig:strogatzModel}. In Fig.~\ref{fig:strogatzModel}(a),
\begin{figure}[!h]
\centerline{\psfig{file=strogatzModel.eps,height=6cm,width=12cm}}
\vspace{-5mm} \caption[The connection with pulse-coupled
oscillators.]{\small We illustrate the connection between the
pulse-coupled oscillator coupling model and our clock model. In
(a), oscillator $B$ is just about to fire and oscillator $A$ has
phase $q$. In (b), oscillator $B$ fires and increases the phase
of oscillator $A$ by $d$. This $d$ increase in phase effectively
decreases the time at which $A$ will next fire. We capture this
time decrease by decreasing the firing time of our node by an
amount $d$. Thus, oscillator $A$ and our node will fire at the
same time.} \label{fig:strogatzModel}
\end{figure}
we have that oscillator $A$ is at phase $q$ and oscillator $B$ is
just about to fire. Below the pulse-coupled oscillator model we
have a time axis for node $i$ corresponding to our clock model
going from time $x_{n-1,i}^{c_{i}}$ to $x_{n-1,i}^{c_{i}}+1$. Our
time axis for node $i$ models the behavior of oscillator $A$, that
is, we want node $i$ to behave in the same way as oscillator $A$
under the influence of oscillator $B$. If oscillator $B$ did not
exist, then the phase variable $q$ will match our clock in that
$q$ reaches $1$ at the same time our clock reaches
$X_{n,i}^{c_{i}}(0) = x_{n-1,i}^{c_{i}}+1$ and oscillator $A$ will
fire at the same time our model fires. In
Fig.~\ref{fig:strogatzModel}(b), oscillator $B$ has fired and has
pulled the state variable of oscillator $A$ up by $\epsilon$. This
coupling has effectively pushed the phase of oscillator $A$ to
$q+d$ and decreased the time before $A$ fires. In fact, the time
until oscillator $A$ fires again is decreased by $d$. We can
capture this coupling in our model since we can calculate the lost
time $d$. The time at which oscillator $B$ fires is
$z_{1,i}^{c_{i}}$ and it is clear that $d =
f^{-1}(\epsilon+f(z_{1,i}^{c_{i}}-x_{n-1,i}^{c_{i}}))-(z_{1,i}^{c_{i}}-x_{n-1,i}^{c_{i}})$.
Thus, if the time that oscillator $A$ will fire again is decreased
by time $d$ due to the pulse of $B$, then we adjust our node
firing time by decreasing the firing time to $X_{n,i}^{c_{i}}(1) =
x_{n-1,i}^{c_{i}}+1-d$. This is exactly the expression
in~(\ref{eq:pco_estimator}) for $k=1$. This relationship between
our model for calculating the node firing time and the
pulse-coupled oscillator coupling model can be easily extended to
$N$ oscillators.
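A direct Python transcription of the update rule in equations~(\ref{eq:pco_estimator}) and (\ref{eq:pco_estimator2}), reusing the concave $f$ from the earlier pulse-coupled oscillator sketch; the cap at $1$ mirrors the saturation of the state variable in the coupling model.
\begin{verbatim}
import numpy as np

b    = 3.0
f    = lambda phi: (1.0 - np.exp(-b * phi)) / (1.0 - np.exp(-b))
finv = lambda x: -np.log(1.0 - x * (1.0 - np.exp(-b))) / b

def next_fire_time(x_prev, arrivals):
    """X_{n,i}(k) from x_{n-1,i} and the pulses heard since the last firing.

    x_prev   -- x_{n-1,i}, in node i's own time scale c_i
    arrivals -- list of (z_k, eps_j): receive time and pulse magnitude
    """
    X = x_prev + 1.0                                # X_{n,i}(0)
    for z, eps in arrivals:
        phase = z - x_prev                          # z_k - x_{n-1,i}
        d = finv(min(eps + f(phase), 1.0)) - phase  # lost time d
        X -= d                                      # eq. (pco_estimator)
    return X

print(next_fire_time(0.0, []))              # no pulses heard: fires at x_prev + 1
print(next_fire_time(0.0, [(0.6, 0.05)]))   # one pulse pulls the firing forward
\end{verbatim}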
We can see then that the pulse-coupled oscillator model
proposed by Mirollo and Strogatz in~\cite{MirolloS:90} is a
special case of our model. Our model generalizes this
pulse-coupled oscillator model by considering timing jitter,
pulses of finite width, propagation delay, non-identical clocks,
and an ability to accommodate arbitrary coupling functions.
\section{Cooperative Time Synchronization Setup}
\label{sec:timesync-setup}
Just as we could specialize our model to the pulse-coupled
oscillator model of Mirollo and Strogatz, we now specialize the
model for our proposed synchronization technique. We start under
the assumption of no propagation delay and develop the
synchronization technique for this case. Propagation delay is
considered in Section~\ref{sec:timeDelay}. We proceed in three
steps. In Section~\ref{sec:signal-reception}, we specify the model
for $A^{c_{1}}_{j,N}(t)$, the received waveform at any node $j$.
Second, in Section~\ref{sec:pulseProperties}, we prove that given
certain characteristics of the model, $A^{c_{1}}_{j,N}(t)$ has
very useful limiting properties. Third, we show in
Section~\ref{sec:timeSynchronization} that estimators (i.e., the
pulse connection function) developed for our synchronization
technique give $A^{c_{1}}_{j,N}(t)$ the desired properties.
\subsection{System Parameters} \label{sec:systemParameters}
For our synchronization technique, we specialize the general model
by making the following assumptions on $\alpha_{i}$ and
$\Psi_{i}(t)$ for $i=1\dots N$:
\begin{itemize}
\item A characterization of the $\{\alpha_{i}\}$ is given by a
known function $f_{\alpha}(s)$ with
$s\in[\alpha_{low},\alpha_{up}]$ that gives the percentage of
nodes with any given $\alpha$ value. Thus, the fraction of nodes
with $\alpha$ values in the range $s_{0}$ to $s_{1}$ can be found
by integrating $f_{\alpha}(s)$ from $s_{0}$ to $s_{1}$. We assume
that $|f_{\alpha}(s)|<G_{\alpha}$, for some constant $G_{\alpha}$.
We keep this function constant as we increase the number of nodes
in the network ($N\to\infty$). Given any circle of radius $R$ that
intersects the network, the nodes within that circle will have
$\alpha_{i}$'s that are characterized by $f_{\alpha}(s)$. $R$ is
the maximum $d$ such that $K(d)>0$. This means that the set of
nodes that any node $j$ will hear from will have its
$\alpha_{i}$'s characterized by a known function. Note that $R$
can be infinite, and in that case, any node $j$ hears from all
nodes in the network. Fundamentally, $f_{\alpha}(s)$ means that as
we increase node density, the new nodes have $\alpha$ parameters
that are well distributed in a predictable manner. \item
$\Psi_{i}(t)$ is a zero mean Gaussian process with samples
$\Psi_{i}(t_{0})\sim {\mathcal N}(0,\sigma^{2})$, for any $t_{0}$,
and independent and identically distributed samples for any set of
times $[t_{0},\ldots,t_{k}]$, $k$ a positive integer. We assume
$\sigma^{2}<\infty$ and note that $\sigma^{2}$ is defined in terms
of the clock of node $i$. We assume that $\Psi_{i}(t)$ is Gaussian
since the RMS (root mean square) jitter is characterized by the
Gaussian distribution~\cite{Roberts:03}.
\end{itemize}
We maintain the full generality of the pathloss model from
Section~\ref{sec:prop-model}. Note that throughout this work we
assume no transmission delay or time-stamping error. This means
that a pulse is transmitted at exactly the time the node intends
to transmit it. We make this assumption since there will be no
delay in message construction or access time~\cite{ElsonGE:02}
because our nodes broadcast the same simple pulse without worrying
about collisions. Also, when a node receives a pulse it can
determine its clock reading without delay since any time stamping
error is small and can be absorbed into the random jitter.
\subsection{Signal Reception Model} \label{sec:signal-reception}
For our proposed synchronization technique, the aggregate waveform
seen by node $j$ at any time $t$ is
\begin{eqnarray} \label{eq:timesync-aggwaveform}
A^{c_{1}}_{j,N}(t) = \sum_{i=1}^{N} \frac{ A_{max}K_{j,i}}{N}
p(t-\tau_{o}-T_{i}),
\end{eqnarray}
where $A^{c_{1}}_{j,N}(t)$ is the waveform seen at node $j$
written in the time scale of $c_{1}$ and $A_{i} = A_{max}/N$ for
all $i$. $A_{max}$ is the maximum transmit magnitude of a node.
$T_{i}$ is the random timing offset suffered by the $i$th node,
which encompasses the random clock jitter and estimation error.
This model says that each node $i$'s pulse transmission occurs at
the ideal transmit time $\tau_{0}$ plus some random error $T_{i}$.
In the next section, Section~\ref{sec:pulseProperties}, we find
properties for $T_{i}$ that will give us desirable properties in
$A^{c_{1}}_{j,N}(t)$. Then, in
Section~\ref{sec:timeSynchronization}, we show that our proposed
steady-state synchronization technique and its associated
pulse-connection function will give us the desired properties.
There are two comments about (\ref{eq:timesync-aggwaveform}) that
we want to make. First, note that even though we sum the
transmissions from all $N$ nodes in
(\ref{eq:timesync-aggwaveform}), we do not assume that node $j$
can hear all nodes in the network. Recall from the pathloss model
that if we have a multi-hop network, then there will be a nonzero
probability that $K_{j,i}=0$. Thus, node $j$ will not hear from
the nodes whose transmissions have zero magnitude. Second, it may
be possible that the nodes are told that there are $\bar{N}=vN$
nodes in the network while the actual number of functioning nodes
is $N$. In that case, each node will transmit with signal
magnitude $A_{i} = A_{max}/(vN)$ and
(\ref{eq:timesync-aggwaveform}) will have a factor of $1/v$. Other
than for this factor, however, the theoretical results that follow
are not affected.
To model the quality of the reception of $A^{c_{1}}_{j,N}(t)$ by
node $j$, we model the reception of a signal by defining a
threshold $\gamma$. $\gamma$ is the received signal threshold
required for nodes to perfectly resolve the pulse arrival time. If
the maximum received signal magnitude is less than $\gamma$ then
the node does not make any observations and ignores the received
signal waveform. We assume that $\gamma\ll A_{max}$.
In our work we will assume that $p(t)$ takes on the shape
\begin{eqnarray} \label{eq:poft}
p(t) = \left \{ \begin{array}{ll}
q(t) & \textrm{ $-\tau_{nz} < t < 0$} \\
0 & \textrm{ $t = 0, t \leq -\tau_{nz}, t \geq \tau_{nz}$} \\
-q(-t) & \textrm{ $0 < t < \tau_{nz}$}
\end{array} \right.
\end{eqnarray}
where $\tau_{nz}>0$ is expressed in terms of $c_{1}$. We assume
$q(t)> 0$ for $t\in (-\tau_{nz}, 0)$, $q(t)\ne 0$ only on $t\in
(-\tau_{nz}, 0)$, $\textrm{sup}_{t} |q(t)| = 1$, and $q(t)$ is
uniformly continuous on $(-\tau_{nz}, 0)$. Thus, we see that
$p(t)$ has at most three jump discontinuities (at $t =
0,-\tau_{nz},\tau_{nz}$). $\tau_{nz}$ should be chosen large
compared to $\max_{i}\sigma_{i}^{2}$, i.e.
$\sigma_{i}^{2}\ll\tau_{nz}$, where $\sigma_{i}^{2}$ is the value
of $\sigma^{2}$ translated from the time scale of $c_{i}$ to
$c_{1}$. This way, over each synchronization phase, with high
probability a zero-crossing will occur. For each node, the
duration in terms of $c_{1}$ of a synchronization phase will be
$2\tau_{nz}$. Note that we assume $\tau_{nz}$ is a value that is
constant in any consistent time scale. This means that even though
nodes have different clocks, identical pulses are transmitted by
all nodes. We define a pulse to be transmitted at time $t$ if the
pulse makes a zero-crossing at time $t$. Similarly, we define the
\emph{pulse receive (arrival) time} for a node as the time when
the observed waveform first makes a zero-crossing. A
\emph{zero-crossing} is defined for signals that have a positive
amplitude and then transition to a negative amplitude. It is the
time that the signal first reaches zero.
For the exchange of synchronization pulses, we assume that nodes
can transmit pulses and receive signals at the same time. This
simplifying assumption is not required for the ideas presented
here to hold, but simplifies the presentation. We mention a way
to relax this assumption in Section~\ref{sec:noSimultTxRx}.
In (\ref{eq:timesync-aggwaveform}) and in the discussions above,
we have focused on characterizing the aggregate waveform for any
one synchronization phase. That is,
(\ref{eq:timesync-aggwaveform}) is the waveform seen by any node
$j$ for the synchronization phase centered around node $1$'s
transmission at $t=\tau_{0}$, $\tau_{0}$ a positive integer. We
can, however, describe a synchronization pulse train in the
following form,
\begin{eqnarray}
\bar{A}^{c_{1}}_{j,N}(t) = \sum_{u=1}^{\infty}\sum_{i=1}^{N}
\frac{A_{max}K_{j,i}}{N} p(t-\tau_{u}-T_{i,u}),
\end{eqnarray}
where $\tau_{u}$ is the integer value of $t$ at the $u$th
synchronization phase, and $T_{i,u}$ is the error suffered by the
$i$th node in the $u$th synchronization phase. We seek to create
this pulse train with equispaced zero-crossings and use each
zero-crossing as a synchronization event. An illustration of such
a pulse train is shown in Fig.~\ref{fig:pulsetrain}. For
simplicity, however, most of the theoretical work is carried out
on one synchronization phase.
\begin{figure}[!h]
\centerline{\psfig{file=pulsetrain.eps,width=15cm,height=1.5cm}}
\vspace{-3mm}
\caption[A pulse train with equispaced zero-crossings.]{\small An
illustration of a pulse train with equispaced zero-crossings.
The pulse at each integer value of $t$ is an instance of
$A_{j,\infty}(t)=\lim_{N\to\infty}A^{c_{1}}_{j,N}(t)$ so we see
three instances of $A_{j,\infty}(t)$ in the above figure with
zero-crossings at $t=1,2,3$. We can control the zero-crossings of
$A_{j,\infty}(t)$ and choose to place it on an integer value of
$t$. As a result, we can use these zero-crossings as
synchronization events since they can be detected simultaneously
by all nodes in the network.}
\label{fig:pulsetrain}
\end{figure}
\subsection{Desired Structural Properties of the Received Signal}\label{sec:pulseProperties}
In this section, we characterize the properties of $T_{i}$ that
give us desirable properties in the aggregate waveform. From
(\ref{eq:timesync-aggwaveform}), the aggregate waveform seen at
each node $j$ in the network has the form
\begin{eqnarray} \label{eq:waveformGeneralForm}
A_N(t) = \frac 1 N \sum_{i=1}^N A_{max}K_i p(t-\tau_0-T_{i})
\end{eqnarray}
We have dropped the $j$ and $c_{1}$ for notational simplicity
since in this section we deal solely with the received waveform at
a node $j$ in the time scale of $c_{1}$. As we let the number of
nodes grow unbounded ($N\to\infty$), the properties of this limit
waveform can be characterized by Theorem~\ref{theorem:main}. These
properties will be essential for asymptotic cooperative time
synchronization. As a note, in Theorem~\ref{theorem:main} we
present the case for Gaussian distributed $T_{i}$ but similar
results hold for arbitrary zero-mean, symmetrically distributed
$T_{i}$ with finite variance.
\begin{theorem}
\label{theorem:main} Let $p(t)$ be as defined in
equation~(\ref{eq:poft}) and $T_i\sim {\mathcal N}(0,
\frac{\bar{\sigma}^{2}}{\alpha_{i}^{2}})$ with
$\bar{\sigma}^{2}>0$ a constant and
$\frac{\bar{\sigma}^2}{\alpha_{i}^{2}} < B <\infty$ for all $i$,
$B$ a constant. Also, let $K_i$ be defined as in
Section~\ref{sec:prop-model} and be independent from $T_{i}$ for
all $i$. Then, $\lim_{N\to\infty}A_{N}(t) = A_{\infty}(t)$ has the
properties
\begin{itemize}
\item $A_\infty(\tau_0) = 0$,
\item $A_\infty(t)>0$ for $t\in(\tau_{0}-\tau,\tau_{0})$, and $A_\infty(t)<0$ for $t\in(\tau_{0},\tau_{0}+\tau)$ for some $\tau < \tau_{nz}$,
\item $A_{\infty}(t)$ is odd around $t=\tau_{0}$, i.e. $A_{\infty}(\tau_{0}+\xi) = -A_{\infty}(\tau_{0}-\xi)$ for $\xi\geq 0$,
\item $A_\infty(t)$ is continuous. \qquad $\bigtriangleup$
\end{itemize}
\end{theorem}
The properties outlined in Theorem~\ref{theorem:main} will be key
to the synchronization mechanism we describe. The specific value
of $\bar{\sigma}^{2}$ will be determined by our choice of the
pulse-connection function. Before we prove
Theorem~\ref{theorem:main} in Section~\ref{sec:theoremProof} we
develop and motivate a few important related lemmas.
\subsubsection{Polarity and Continuity of $A_\infty(t)$}
At time $t = \tau_1 \ne \tau_0$, we have that
\[ A_N(\tau_1) \;\; = \;\; \sum_{i=1}^N \frac{A_{max}K_i}{N} p(\tau_1-\tau_0-T_{i})
\;\; = \;\; \sum_{i=1}^N \frac 1 N \bar{M}_i(\tau_1),
\]
where $\bar{M}_i(\tau_1) \stackrel{\Delta}{=} A_{max}K_i
p(\tau_1-\tau_0-T_{i})$. We have the mean of
$\bar{M}_{i}(\tau_{1})$ being
\begin{equation} \label{eqn:MbarMean}
E(\bar{M}_{i}(\tau_{1})) = A_{max}E(K_{i})\int
p(\tau_1-\tau_0-\psi) f_{T_{i}}(\psi) d\psi,
\end{equation}
where $f_{T_{i}}(\psi)$ is the Gaussian pdf
\begin{displaymath}
f_{T_{i}}(\psi) =
\frac{1}{\frac{\bar{\sigma}}{\alpha_{i}}\sqrt{2\pi}}\textrm{exp}
\bigg\{-\frac{\psi^2}{2\frac{\bar{\sigma}^{2}}{\alpha_{i}^2}}\bigg\}.
\end{displaymath}
It is clear that the $\bar{M}_{i}(\tau_{1})$'s, for different
$i$'s, do not have the same mean and do not have the same variance
since the two quantities depend on the $\alpha_{i}$ value. Since
the $\alpha_{i}$'s are characterized by $f_{\alpha}(s)$ (defined
in Section~\ref{sec:systemParameters}), we write the Gaussian
distribution for $T$ as
\begin{displaymath}
f_{T}(\psi,s) =
\frac{1}{\frac{\bar{\sigma}}{s}\sqrt{2\pi}}\textrm{exp}
\bigg\{-\frac{\psi^2}{2\frac{\bar{\sigma}^{2}}{s^2}}\bigg\},
\end{displaymath}
and $\bar{M}_{i}(\tau_{1})$ is in fact a function of $s$ as well,
denoted $\bar{M}_{i}(\tau_{1},s)$. Using $f_{T}(\psi,s)$ and
$\bar{M}_{i}(\tau_{1},s)$, the notation makes it clear that we can
average over the $\alpha_{i}$'s that are characterized by
$f_{\alpha}(s)$. We use the results of
Lemmas~\ref{lemma:polarity_positive} and
\ref{lemma:polarity_negative} to prove the polarity result for
$A_{\infty}(t)$ in Section~\ref{sec:theoremProof}.
\begin{lemma} \label{lemma:polarity_positive}
Let $\bar{M}_{i}(\tau_{1})$ be the sequence of independent random
variables defined above, with $\tau_{1}<\tau_{0}$,
$E(\bar{M}_{i}(\tau_{1})) = \mu_{i}$, and
$\textrm{Var}(\bar{M}_{i}(\tau_{1})) = \sigma_{i}^{2}$. Then, for
all $i$,
\begin{equation} \label{eqn:cond3a}
\gamma_{2}>\mu_{i} > \gamma_{1} > 0
\end{equation}
and
\begin{equation} \label{eqn:cond3b}
\sigma_{i}^{2}<\gamma_{3}<\infty
\end{equation}
for some constants $\gamma_{1}$, $\gamma_{2}$, and $\gamma_{3}$,
and
\begin{displaymath}
\lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \bar{M}_{i}(\tau_{1}) = \eta(\tau_{1}) > 0
\end{displaymath}
almost surely, where
\begin{eqnarray*}
\eta(\tau_{1}) &=& \int_{\alpha_{low}}^{\alpha_{up}}
E(\bar{M}_{i}(\tau_{1},s)) f_{\alpha}(s) ds \\ &=&
A_{max}E(K_{i})\int_{\alpha_{low}}^{\alpha_{up}}
\int_{-\infty}^{\infty} p(\tau_1-\tau_0-\psi) f_{T}(\psi,s) d\psi
f_{\alpha}(s) ds. \qquad \bigtriangleup
\end{eqnarray*}
\medskip
\end{lemma}
\begin{lemma} \label{lemma:polarity_negative}
Let $\bar{M}_{i}(\tau_{1})$ be the sequence of independent random
variables defined above, with $\tau_{1}>\tau_{0}$,
$E(\bar{M}_{i}(\tau_{1})) = \mu_{i}$, and
$\textrm{Var}(\bar{M}_{i}(\tau_{1})) = \sigma_{i}^{2}$. Then, for
all $i$,
\begin{equation} \label{eqn:cond4a}
\gamma_{2}<\mu_{i} < \gamma_{1} < 0
\end{equation}
and
\begin{equation} \label{eqn:cond4b}
\sigma_{i}^{2}<\gamma_{3}<\infty
\end{equation}
for some constants $\gamma_{1}$, $\gamma_{2}$, and $\gamma_{3}$,
and
\begin{displaymath}
\lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \bar{M}_{i}(\tau_{1}) = \eta(\tau_{1}) < 0
\end{displaymath}
almost surely, where
\begin{displaymath}
\eta(\tau_{1}) = \int_{\alpha_{low}}^{\alpha_{up}}
E(\bar{M}_{i}(\tau_{1},s)) f_{\alpha}(s) ds. \qquad \bigtriangleup
\end{displaymath}
\medskip
\end{lemma}
The results of Lemma~\ref{lemma:polarity_positive} and
Lemma~\ref{lemma:polarity_negative} are intuitive since given that
$p(t)$ is odd and the Gaussian noise distribution is symmetric, it
makes sense for $A_{\infty}(t)$ to have properties similar to an
odd waveform. Since the proofs of the two lemmas are very similar,
we only prove Lemma~\ref{lemma:polarity_positive}. The proof can
be found in the appendix.
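Before moving on, it is instructive to evaluate $\eta(\tau_{1})$
numerically. The following Python sketch is illustrative only: the
odd pulse $p(t)$, the uniform density $f_{\alpha}(s)$ on
$[\alpha_{low},\alpha_{up}]$, and the numerical values of
$\bar{\sigma}$, $\tau_{nz}$, $A_{max}$, and $E(K_{i})$ are
stand-ins, not the quantities fixed elsewhere in the paper.
\begin{verbatim}
import numpy as np

# Stand-in parameters (not the paper's): |p| <= 1, odd, support
# [-tau_nz, tau_nz]; f_alpha uniform on [0.9, 1.1]; A_max = E(K_i) = 1.
tau_nz, sigma_bar = 1.0, 0.1
alpha_low, alpha_up = 0.9, 1.1

def p(t):
    return np.where(np.abs(t) <= tau_nz, -np.sin(np.pi * t / tau_nz), 0.0)

def eta(tau1, tau0=0.0):
    """Evaluate the double integral defining eta(tau_1) on a grid."""
    psi = np.linspace(-5.0, 5.0, 4001)          # T-integration grid
    s = np.linspace(alpha_low, alpha_up, 201)   # alpha-integration grid
    inner = np.empty_like(s)
    for k, sk in enumerate(s):                  # E(p(tau1 - tau0 - T) | s)
        sd = sigma_bar / sk
        f_T = np.exp(-psi**2 / (2*sd**2)) / (sd * np.sqrt(2*np.pi))
        inner[k] = np.trapz(p(tau1 - tau0 - psi) * f_T, psi)
    return np.trapz(inner / (alpha_up - alpha_low), s)

print(eta(-0.3))   # > 0 for tau_1 < tau_0 (Lemma polarity_positive)
print(eta(+0.3))   # < 0 for tau_1 > tau_0 (Lemma polarity_negative)
print(eta(0.0))    # ~ 0 by odd symmetry
\end{verbatim}
The three printed values exhibit exactly the polarity pattern
asserted by the two lemmas.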
Knowing only the polarity of $A_{\infty}(t)$ is not entirely
satisfying since we would also expect that the limiting waveform
be continuous. The proof of Lemma~\ref{lemma:continuity1} is once
again left for the appendix.
\begin{lemma}
\label{lemma:continuity1}
Using $p(t)$ in (\ref{eq:poft}),
\[ A_\infty(t) \;\; = \;\; \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} A_{max}K_i
p(t-\tau_0-T_i) \;\; = \;\; \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}
\bar{M}_{i}(t) \;\; = \;\; \eta(t)
\]
is a continuous function of $t$, where
\begin{eqnarray*}
\eta(t) &=& \int_{\alpha_{low}}^{\alpha_{up}} E(\bar{M}_{i}(t,s))
f_{\alpha}(s) ds \\
&=& A_{max}E(K_{i})\int_{\alpha_{low}}^{\alpha_{up}}
\int_{-\infty}^{\infty} p(t-\tau_0-\psi) f_{T}(\psi,s) d\psi
f_{\alpha}(s) ds. \qquad \bigtriangleup
\end{eqnarray*}
\medskip
\end{lemma}
\subsubsection{Proof of Theorem~\ref{theorem:main}}
\label{sec:theoremProof}
We can proceed in a
straightforward manner to show that $A_\infty(\tau_0)=0$. For
$t=\tau_{0}$,
\begin{eqnarray*}
A_N(\tau_0) \;\; = \;\; \sum_{i=1}^N
\frac{A_{max}K_i}{N}p(\tau_0-\tau_0-T_{i}) \;\; = \;\; \frac 1 N
\sum_{i=1}^N A_{max}K_i p(-T_{i})
\;\; = \;\; \frac 1 N \sum_{i=1}^N M_i,
\end{eqnarray*}
where $M_i \triangleq -A_{max}K_ip(T_i)$.
Since our goal is to apply some form of the strong law of large
numbers, we first examine the mean of $M_i$. We have that
$E(M_i) = -A_{max}E(K_i)E(p(T_i))$. Furthermore,
\[
E(p(T_i)) = \int_{-\infty}^\infty p(\psi)f_{T_i}(\psi)d\psi = 0,
\]
since $p(\psi)$ is odd and $f_{T_{i}}(\psi)$ is
even because it is zero-mean Gaussian. Thus, $E(M_{i})=0$.
We next consider the variance of $M_i$:
\begin{eqnarray*}
\textrm{Var}(M_i) & = & E(M_i^{2})-E^2(M_i)
= A_{max}^2 E(K_i^2p^2(T_i)) \\
& = & A_{max}^2 E(K_i^2)E(p^2(T_i))
\leq A_{max}^2
< \infty,
\end{eqnarray*}
where we have used the fact that $E(K_{i}^{2})\leq 1$ and
$|p(t)|\leq 1$.
From the preceding discussion we see that the $M_i$'s are a sequence of
zero mean, finite (but possibly different) variance random variables.
From Stark and Woods~\cite{stark-woods}, we know that if
$\sum_{i=1}^{\infty} \textrm{Var}(M_i)/i^2 < \infty$,
then we have strong convergence of the $M_i$'s:
\[
\frac 1 N \sum_{i=1}^N M_i \to E(M_i),
\]
with probability-1 as $N\to\infty$. But it is easy to see that
\[
\sum_{i=1}^\infty \frac{\textrm{Var}(M_i)}{i^2} \leq
\sum_{i=1}^\infty \frac{A_{max}^2}{i^2} = A_{max}^{2}
\frac{\pi^{2}}{6} < \infty,
\]
so the condition is satisfied. As a result,
\[
A_N(\tau_0) = \frac 1 N \sum_{i=1}^N M_i \to 0,
\]
as $N \to \infty$.
We have that $A_{\infty}(t)$ is continuous from
Lemma~\ref{lemma:continuity1}. Thus, next we need to show that
$A_\infty(t)>0$ for $t\in (\tau_{0}-\tau,\tau_{0})$, and
$A_\infty(t)<0$ for $t\in (\tau_{0},\tau_{0}+\tau)$ for some
$\tau<\tau_{nz}$. We show the case for
$t=\tau_{1}\in(\tau_{0}-\tau,\tau_{0})$ by simply applying
Lemma~\ref{lemma:polarity_positive}. Since
Lemma~\ref{lemma:polarity_positive} holds for all
$\tau_{1}<\tau_{0}$, there clearly exists a $\tau$ such that
$A_{\infty}(t)>0$ for $t\in (\tau_{0}-\tau,\tau_{0})$. The case
for $t\in(\tau_{0},\tau_{0}+\tau)$ comes similarly from
Lemma~\ref{lemma:polarity_negative}.
Lastly, it remains to be shown that $A_{\infty}(t)$ is odd around
$t=\tau_{0}$. This, however, is evident from the form of $\eta(t)$.
Since $f_{T}(\psi,s)$ is even in $\psi$ about $0$ and $p(\psi)$
is odd about $0$, it is clear that
$\int_{-\infty}^{\infty} p(t-\tau_0-\psi) f_{T}(\psi,s) d\psi$
as a function of $t$ is odd about $\tau_{0}$. Thus, $\eta(t)$ is odd around
$\tau_{0}$. This then completes the proof
for Theorem~\ref{theorem:main}. \qquad $\bigtriangleup$
\section{Asymptotic Time Synchronization}
\label{sec:timeSynchronization}
\subsection{The Use of Estimators in Time Synchronization}
\label{assump-theorems}
In this work we want to show that, as we let $N\to\infty$, we
can recover deterministic parameters that allow for time
synchronization. Such a result would provide rigorous theoretical
support for a new trade-off between network density and
synchronization performance. To simplify the study, we focus on
the steady-state time synchronization properties of asymptotically
dense networks. In particular, we develop a cooperative technique
that constructs a sequence of equispaced zero-crossings seen by
all nodes which allows the network to maintain time
synchronization indefinitely given that the nodes start with a
collection of equispaced zero-crossings. Starting with a few
equispaced zero-crossings allows us to avoid the complexities of
starting up the synchronization process but still allows us to
show that spatial averaging can be used to average out timing
errors. If we are able to maintain indefinitely a sequence of
equispaced zero-crossings using cooperative time synchronization,
then it means that spatial averaging can average out all
uncertainties in the system as we let node density grow unbounded.
This recovery of deterministic parameters is our desired result.
Here, we overview the estimators needed for cooperative time
synchronization.
Let $t_{n,i}^{c_{k}}$ be the time, with respect to clock $c_{k}$,
that the $i$th node sees its $n$th pulse. In dealing with the
steady-state properties, we start by assuming that each node $i$
in the network has observed a sequence of $m$ pulse arrival times,
$t_{n-1,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}}$, that occur at integer
values of $t$, where $m$ is an integer. Recall that
$t_{n-1,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}}$ is defined as a set of
$m$ pulse arrival times in the time scale of $c_{i}$. Therefore,
even though $t_{n-1,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}}$ occur at
integer values of $t$ (the time scale of $c_{1}$), these values
are not necessarily integers since they are in the time scale of
$c_{i}$. Note also that in our model the pulse arrival time is a
zero-crossing location. Using these $m$ pulse arrival times, each
node $i$ has two distinct, yet closely related tasks. The first
task is time synchronization. To achieve time synchronization,
node $i$ wants to use these $m$ pulse arrival times to make an
estimate of when the next zero-crossing will occur. If it can
estimate this next zero-crossing time, then it can effectively
estimate the next integer value of $t$. This estimator can then be
extended to estimate arbitrary times in the future, which gives
node $i$ the ability to synchronize to node $1$. The second task
is that node $i$ needs to transmit a pulse so that the sum of all
pulses from the $N$ nodes in the network will create an aggregate
waveform that, in the limit as $N\to\infty$, will give a
zero-crossing at the next integer value of $t$. This second task
is very significant because if the aggregate waveform gives the
exact location of the next integer value of $t$, then each node
$i$ in the network can use this new zero-crossing along with
$t_{n-1,i}^{c_{i}},\dots,t_{n-m+1,i}^{c_{i}}$ to form a set of $m$
zero-crossing locations. This new set can then be used to predict
the next zero-crossing location as well as node $i$'s next pulse
transmission time. Recall that determining the pulse transmission
time is the job of the pulse-connection function
$X_{n,i}^{c_{i}}$. With such a setup, synchronization would be
maintained indefinitely. The zero-crossings that always occur at
integer values of $t$ would provide node $i$ a sequence of
synchronization events and also illustrate how cooperation is
averaging out all random errors.
The waveform properties detailed in Theorem~\ref{theorem:main}
play a central role in accomplishing the nodes' task of cooperatively
generating an aggregate waveform with a zero-crossing at the next
integer value of $t$. From~(\ref{eq:waveformGeneralForm}), if the
arrival time of any pulse at a node $j$ is a random variable of the
form $\tau_0+T_{i}$, where $\tau_{0}$ is the next integer value of
$t$ and $T_{i}$ is zero-mean Gaussian (or in general any symmetric
random variable with zero-mean and finite variance), then
Theorem~\ref{theorem:main} tells us that the aggregate waveform
will make a zero-crossing at the next integer value of $t$. This
idea is illustrated in Fig.~\ref{fig:example}.
\begin{figure}[!h]
\centerline{\psfig{file=pulse.eps,height=5cm}
\psfig{file=rinfinity.eps,height=5cm}}
\caption[An illustration of the main idea of
Theorem~\ref{theorem:main}.]{\small Theorem~\ref{theorem:main} is key in
explaining the intuition first illustrated in
Fig.~\ref{fig:why-sync-holds}. The pulse $p(t)$ is shown on the
left figure, with
$\tau_0=1$ and $A_{max}=1$. On the right we have a realization of $A_N(t)$ ($N=400$),
and we assume that $K_{j,i}=1$ (no path loss) and $T_{i}\sim {\mathcal N}(0,0.01)$ for all $i$. As expected from
Theorem~\ref{theorem:main}, we notice that
the zero-crossing of the simulated waveform is almost exactly at $t=1$.} \label{fig:example}
\end{figure}
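The figure can be reproduced with a few lines of Python. Since the
pulse definition (\ref{eq:poft}) appears earlier in the paper, the
sketch below substitutes a hypothetical odd pulse with
$|p(t)|\leq 1$ and support $[-\tau_{nz},\tau_{nz}]$; the remaining
values follow the caption ($N=400$, $K_{j,i}=1$,
$T_{i}\sim{\mathcal N}(0,0.01)$, $\tau_{0}=1$, $A_{max}=1$).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
tau_nz, tau0, N = 1.0, 1.0, 400

def p(t):  # hypothetical odd pulse standing in for eq. (poft)
    return np.where(np.abs(t) <= tau_nz, -np.sin(np.pi * t / tau_nz), 0.0)

T = rng.normal(0.0, 0.1, N)                      # T_i ~ N(0, 0.01)
t = np.linspace(tau0 - 0.5, tau0 + 0.5, 1001)
A_N = np.mean(p(t[:, None] - tau0 - T[None, :]), axis=1)

k = np.flatnonzero(np.diff(np.sign(A_N)) != 0)   # sign change of A_N(t)
print(t[k])                                      # close to tau0 = 1
\end{verbatim}
As in the figure, the simulated zero-crossing lands almost exactly
at $t=1$.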
Thus, for achieving time synchronization in an asymptotically
dense network we need to address two issues. First, we need to
develop an estimator for the next integer value of $t$ given a
sequence of $m$ pulse arrival times that occur at integer values
of $t$. We will call this the {\em time synchronization estimator}
and let us write $V_{n,i}^{c_{i}}$ as the time synchronization
estimator that determines the time, in the time scale of $c_{i}$,
when node $i$ predicts it will see its $n$th zero-crossing. Second,
we need to develop the pulse-connection function $X_{n,i}^{c_{i}}$
such that node $i$'s transmitted pulse will arrive at a node $j$
with the random properties described in
Theorem~\ref{theorem:main}.
\subsection{Time Synchronization Estimator Performance Measure} \label{sec:opt-cond}
Here we establish the conditions for estimating the next pulse
arrival time, or equivalently the next integer value of $t$, given
$m$ pulse arrival times. These conditions apply most directly to
the time synchronization estimator $V_{n,i}^{c_{i}}$ since we want
to synchronize in some desired manner. The problem of
synchronization is the challenge of having the $i$th node
accurately and precisely predict when the next integer value of
$t$ will occur. In our setup, the reception of a pulse by node $i$
tells it of such an event.
Let us explicitly model the time at an integer value of $t$ in
terms of the clock of node $i$. Assume $\tau_{0}$ is an integer
value of $t$ and at this time, node $i$ will observe its $n$th
pulse. Thus, from (\ref{eq:clock}) we have that
\begin{equation} \label{eq:state-eqns}
t_{n,i}^{c_{i}} =
\alpha_{i}(\tau_{0}-\bar{\Delta}_{i})+\Psi_{i}(\tau_{0}).
\end{equation}
The equation makes use of the clock model of node $i$
(\ref{eq:clock}) to tell us the time at clock $c_{i}$ when node
$1$ is at $\tau_{0}$, where $\tau_{0}$ is an integer in the time
scale of $c_{1}$. We are also starting with the assumption that
the zero-crossing that occurs at an integer value of $t$ is
observed by node $i$ at this time.
From (\ref{eq:state-eqns}) we see that the pulse receive time at
node $i$, $t_{n,i}^{c_{i}}$, is a Gaussian random variable whose
mean is parameterized by the unknown vector $\vartheta =
[\alpha_i, \tau_{0}, \bar{\Delta}_{i}]$. Thus, to achieve
synchronization node $i$ will try to estimate the random variable
$t_{n,i}^{c_{i}}$ using a series of $m$ pulse receive times as
observations (recall that $m$ is known). Note that the
observations are also random variables with distributions
parameterized by $\vartheta$. We want the time synchronization
estimator of node $i$ to make an estimate of $t_{n,i}^{c_{i}}$,
denoted $\hat{t}_{n,i}^{c_{i}}(t_{n-1,i}^{c_{i}},
t_{n-2,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}})$, which is a function
of the past observations $t_{n-1,i}^{c_{i}},
t_{n-2,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}}$ and meets the
following criteria:
\begin{equation}
E_{\vartheta}\big[\hat{t}_{n,i}^{c_{i}}(t_{n-1,i}^{c_{i}},
t_{n-2,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}})\big] =
E_{\vartheta}(t_{n,i}^{c_{i}}) \label{eq:opt1}
\end{equation}
\vspace{-5mm}
\begin{equation}
\textrm{argmin}_{\hat{t}_{n,i}^{c_{i}}}
E_{\vartheta}\big[(\hat{t}_{n,i}^{c_{i}}(t_{n-1,i}^{c_{i}},
t_{n-2,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}})-t_{n,i}^{c_{i}})^{2}\big]
\label{eq:opt2}
\end{equation}
for all $\vartheta$. The subscript $\vartheta$ means that the
expectation is taken over the distributions involved given any
possible $\vartheta$. The first condition comes from the fact
that given a finite $m$, it is reasonable to want the expected
value of the estimate to be the expected value of the random
variable being estimated for all $\vartheta$. As in the
justification for unbiased estimators, this condition eliminates
unreasonable estimators so that the chosen estimator will perform
well, on average, for all values of $\vartheta$~\cite{Poor:94}.
The second condition is the result of seeking to minimize the mean
squared error between the estimate and the random variable being
estimated for all $\vartheta$.
\subsection{Time Synchronization Estimator} \label{sec:timeSyncEstimator}
For the time synchronization estimator, node $i$ will seek to
estimate $t_{n,i}^{c_{i}}$ given
$t_{n-1,i}^{c_{i}},\dots,t_{n-m,i}^{c_{i}}$. From
(\ref{eq:state-eqns}), we see that ${\mathbf
T}=[t_{n-m,i}^{c_{i}},\dots,t_{n-1,i}^{c_{i}}]^{T}$ is a jointly
Gaussian random vector parameterized by $\vartheta$. Recall that
we assume $\Psi_{i}(t)$ is a zero mean Gaussian process with
independent and identically distributed samples $\Psi_{i}(t)\sim
{\mathcal N}(0,\sigma^{2})$, for any $t$. Also, since we are
assuming that the zero-crossings at node $i$ occur at consecutive
integer values of $t$, the random variable $t_{n-m,i}^{c_{i}}$ is
Gaussian with $t_{n-m,i}^{c_{i}}\sim {\mathcal
N}(\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i}), \sigma^{2})$ for some
$\vartheta = [\alpha_i, \tau_{0}-m, \bar{\Delta}_{i}]$. We also
notice that
\begin{displaymath}
E_{\vartheta}(t_{n-m+1,i}^{c_{i}})=\alpha_{i}(\tau_{0}-m
+1-\bar{\Delta}_{i})=\alpha_{i}(\tau_{0}-m-
\bar{\Delta}_{i})+\alpha_{i}.
\end{displaymath}
Since each noise sample is independent, we see that the
distribution of $\mathbf{T}$ parameterized by $\vartheta$ can be
written as ${\mathbf T} \sim {\mathcal N}({\mathbf M},\Sigma)$
where
\begin{displaymath}
{\mathbf M} = \left[ \begin{array}{c}
\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i})\\
\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i})+\alpha_{i}\\
\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i})+2\alpha_{i}\\
\vdots\\
\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i})+(m-1)\alpha_{i}
\end{array} \right]
\end{displaymath}
and $\Sigma = \sigma^{2}{\mathbf I}$.
As a result, for any $m$ consecutive observations, we can simplify
notation by using the model
\begin{equation} \label{eq:simple-obs}
{\mathbf Y} = {\mathbf H}{\mathbf \theta} + {\mathbf W},
\end{equation}
where ${\mathbf Y} = [Y_{1} \quad Y_{2} \dots Y_{m}]^{T} =
[t_{n-m,i}^{c_{i}} \quad t_{n-m+1,i}^{c_{i}} \dots
t_{n-1,i}^{c_{i}}]^{T}$ and
\begin{displaymath}
{\mathbf \theta} = \left[ \begin{array}{c}
\theta_{1}\\
\theta_{2}
\end{array} \right] =
\left[ \begin{array}{c}
\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i})\\
\alpha_{i}
\end{array} \right]
\end{displaymath}
with
\begin{displaymath}
{\mathbf H} = \left[ \begin{array}{ccccc}
1 & 1 & 1 & \ldots & 1\\
0 & 1 & 2 & \ldots & m-1
\end{array} \right]^T
\end{displaymath}
and ${\mathbf W} = [W_{1}\dots W_{m}]^{T}$. Since $\Psi_{i}(t)$ is
a Gaussian noise process, ${\mathbf W} \sim {\mathcal
N}(0,\Sigma)$ with $\Sigma = \sigma^{2}{\mathbf I}$.
Using the simplified notation in (\ref{eq:simple-obs}), we want to
estimate $Y_{m+1}$, where $Y_{m+1}$ is jointly distributed with
${\mathbf Y}$ as
\begin{displaymath}
\left[ \begin{array}{c}
{\mathbf Y}\\
Y_{m+1}
\end{array} \right] \sim {\mathcal N}(
\left[ \begin{array}{c}
{\mathbf M}\\
\theta_{1}+m\theta_{2}
\end{array} \right],
\left[ \begin{array}{cc}
\Sigma & 0\\
0 & \sigma^{2}
\end{array} \right]).
\end{displaymath}
Using this notation, we can rewrite the synchronization criteria
as:
\begin{equation} \label{eq:opt1-simple}
E_{\theta}\big[\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots,
Y_{m})\big]=E_{\theta}(Y_{m+1})
\end{equation}
\vspace{-5mm}
\begin{equation} \label{eq:opt2-simple}
\textrm{argmin}_{\hat{Y}_{m+1}}
E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m}) -Y_{m+1}
)^{2}\big],
\end{equation}
where $\hat{Y}_{m+1}$ is the estimator for $Y_{m+1}$.
Condition (\ref{eq:opt1-simple}) implies that our estimate must be
unbiased. Condition (\ref{eq:opt2-simple}) is equivalent to
\begin{displaymath}
\textrm{argmin}_{\hat{Y}_{m+1}}
E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m})
-(\theta_{1}+m\theta_{2}) )^{2}\big].
\end{displaymath}
To see this equivalence, note that
\begin{eqnarray}
\lefteqn{E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m})-Y_{m+1} )^{2}\big]} \nonumber \\
& = & E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m})-(\theta_{1}+m\theta_{2})-W_{m+1} )^{2}\big] \nonumber \\
& = & E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m})-
(\theta_{1}+m\theta_{2}) )^{2}\big]+E\big[ W_{m+1}^{2}\big],
\end{eqnarray}
where the last equality follows from the independence of
$W_{m+1}$ from the other noise samples and from its zero mean.
Since the distribution of $W_{m+1}$ is independent of $\theta$,
\begin{eqnarray}
\lefteqn{\textrm{argmin}_{\hat{Y}_{m+1}} E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m})-Y_{m+1})^{2}\big]} \nonumber \\
&=& \textrm{argmin}_{\hat{Y}_{m+1}}
E_{\theta}\big[(\hat{Y}_{m+1}(Y_{1}, Y_{2}, \dots, Y_{m})-
(\theta_{1}+m\theta_{2}) )^{2}\big] \nonumber.
\end{eqnarray}
With these two conditions, from~\cite{Poor:94} we see that the
desired estimate for $Y_{m+1}$ will be the uniformly minimum
variance unbiased (UMVU) estimator for
$E_{\theta}(Y_{m+1})=\theta_{1}+m\theta_{2}$.
Using the above linear model, from~\cite{Kay:93} we know the
maximum likelihood (ML) estimate of $\theta$, $\hat{\theta}_{ML}$,
is given by
\begin{equation} \label{eq:thetaHat}
\hat{\theta}_{ML}=({\mathbf H}^{T}\Sigma^{-1} {\mathbf
H})^{-1}{\mathbf H}^{T}\Sigma^{-1}{\mathbf Y}=
(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}{\mathbf Y}.
\end{equation}
This estimate achieves the Cram\'er-Rao lower bound, hence is
efficient. The Fisher information matrix is
$I(\theta)=\frac{{\mathbf H}^{T}{\mathbf H}}{\sigma^{2}}$ and
$\hat{\theta}_{ML} \sim {\mathcal N}(\theta, \sigma^{2}({\mathbf
H}^{T}{\mathbf H})^{-1})$. This means that $\hat{\theta}_{ML}$ is
UMVU.
Again from~\cite{Kay:93}, the invariance of the ML estimate tells
us that the ML estimate for
$\phi=g(\theta)=\theta_{1}+m\theta_{2}$ is
$\hat{\phi}_{ML}=\hat{\theta_{1}}_{ML}+m\hat{\theta_{2}}_{ML}$.
First, it is clear that $\hat{\phi}_{ML} = {\mathbf
C}\hat{\theta}_{ML}$, where ${\mathbf C}=[1\quad m]$. As a
result, we see that $E_{\theta}(\hat{\phi}_{ML})={\mathbf
C}E_{\theta}(\hat{\theta}_{ML})=\theta_{1}+m\theta_{2}$, so
$\hat{\phi}_{ML}$ is unbiased. Next, to see that $\hat{\phi}_{ML}$
is also minimum variance we compare its variance to the lower
bound.
\begin{displaymath}
\textrm{Var}_{\theta}(\hat{\phi}_{ML})={\mathbf
C}\sigma^{2}({\mathbf H}^{T}{\mathbf H})^{-1}{\mathbf C}^{T} =
\frac{2\sigma^{2}(2m+1)}{m(m-1)}.
\end{displaymath}
The extension of the Cram\'er-Rao lower bound in~\cite{Kay:93} to a
function of parameters tells us that
\begin{displaymath}
E_{\theta}(\|\hat{g}-g(\theta)\|^{2})\geq {\mathbf
G}(\theta){\mathbf I}^{-1}(\theta){\mathbf G}^{T}(\theta)
\end{displaymath}
with ${\mathbf G}(\theta) = (\nabla_{\theta}g(\theta))^{T}$. In
this case, ${\mathbf G}(\theta)=[1 \quad m]$ so the lower bound to
the mean squared error is
\begin{displaymath}
{\mathbf G}(\theta){\mathbf I}^{-1}(\theta){\mathbf G}^{T}(\theta)
= \frac{2\sigma^{2}(2m+1)}{m(m-1)}.
\end{displaymath}
As a result, we see that $\hat{\phi}_{ML}$ is UMVU. Since
$\hat{\phi}_{ML}$ is the desired estimate of where the next pulse
arrival time will be, it is the time synchronization estimator.
Thus,
\begin{equation} \label{eq:timesync-estimator}
V_{n,i}^{c_{i}}({\mathbf Y}) = {\mathbf
C}(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}{\mathbf Y}.
\end{equation}
Note that
\begin{equation} \label{eq:timesync-estimator-distribution}
V_{n,i}^{c_{i}}({\mathbf Y})=\hat{\phi}_{ML} \sim {\mathcal
N}\Big(\phi,\frac{2\sigma^{2}(2m+1)}{m(m-1)}\Big)
\end{equation}
has a variance that goes to zero as $m\to\infty$.
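As a sanity check, both the form of the estimator and the variance
in (\ref{eq:timesync-estimator-distribution}) can be verified by
Monte Carlo. The parameter values in the Python sketch below ($m$,
$\sigma$, $\alpha_{i}$, $\bar{\Delta}_{i}$, $\tau_{0}$) are
illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, sigma, alpha, Delta, tau0 = 5, 0.05, 1.02, 0.3, 100.0

H = np.column_stack([np.ones(m), np.arange(m)])
P = np.array([1.0, m]) @ np.linalg.inv(H.T @ H) @ H.T  # C (H^T H)^{-1} H^T

theta = np.array([alpha * (tau0 - m - Delta), alpha])
W = rng.normal(0.0, sigma, (100000, m))                # noise Psi_i
est = (H @ theta + W) @ P                              # V_{n,i}^{c_i}(Y)

print(est.var())                                       # empirical variance
print(2 * sigma**2 * (2*m + 1) / (m * (m - 1)))        # closed form
\end{verbatim}
The two printed numbers agree to within Monte-Carlo error, and the
empirical mean of the estimates matches $\phi=\theta_{1}+m\theta_{2}$.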
\subsection{Time Synchronization with No Propagation Delay}
\label{sec:one-hop}
We now need to develop the pulse-connection function so that the
conditions for $T_{i}$ in Theorem~\ref{theorem:main} are
satisfied. Recall we are developing the synchronization technique
under the assumption of no propagation delay, i.e. $\delta(d)=0$.
Given a sequence of $m$ pulse arrival times, the time
synchronization estimator $V_{n,i}^{c_{i}}$ given in
(\ref{eq:timesync-estimator}) gives each node the ability to
predict the next integer value of $t$. What remains to be
considered is the second part of the synchronization process:
developing a pulse-connection function $X_{n,i}^{c_{i}}$ such that
the aggregate waveform seen by a node $j$ will have the properties
described in Theorem~\ref{theorem:main}.
Let us first consider the distribution of $V_{n,i}^{c_{i}}$. From
(\ref{eq:timesync-estimator-distribution}), we have that
\begin{displaymath}
V_{n,i}^{c_{i}}({\mathbf Y}) \sim {\mathcal N}
\bigg(\alpha_{i}(\tau_{0}-m-\bar{\Delta}_{i})+m\alpha_{i},\frac{2\sigma^{2}(2m+1)}{m(m-1)}\bigg).
\end{displaymath}
Using (\ref{eq:clock}), we can translate $V_{n,i}^{c_{i}}({\mathbf
Y})$ into the time scale of $c_{1}$ as
\begin{displaymath}
V_{n,i}^{c_{i}}({\mathbf Y}) = \alpha_{i}(V_{n,i}^{c_{1}}({\mathbf
Y})-\bar{\Delta}_{i}) + \Psi_{i}
\end{displaymath}
which gives
\begin{displaymath}
V_{n,i}^{c_{1}}({\mathbf Y}) = \frac{(V_{n,i}^{c_{i}}({\mathbf Y})
- \Psi_{i})}{\alpha_{i}} + \bar{\Delta}_{i}.
\end{displaymath}
This means that
\begin{equation} \label{eq:compToNoSimult}
V_{n,i}^{c_{1}}({\mathbf Y}) \sim {\mathcal
N}\bigg(\tau_{0},\frac{\sigma^{2}}{\alpha_{i}^{2}}\bigg(1+\frac{2(2m+1)}{m(m-1)}\bigg)\bigg).
\end{equation}
Under our assumption of $\delta(d)=0$, any transmission by node
$i$ will be instantaneously seen by any node $j$. As a result,
the random variable $V_{n,i}^{c_{1}}({\mathbf Y})$ will be seen as
the pulse arrival time at node $j$, in the time scale of $c_{1}$.
Due to the assumption of no propagation delay, defining
$X_{n,i}^{c_{1}}({\mathbf
Y})\stackrel{\Delta}{=}V_{n,i}^{c_{1}}({\mathbf Y})$ will give us
the desired properties in the aggregate waveform. To see this,
let us compare the distribution of $X_{n,i}^{c_{1}}({\mathbf Y})$
to the assumptions of Theorem~\ref{theorem:main}. Since $\tau_{0}$
is the ideal crossing time in the time scale of $c_{1}$, we have
\begin{displaymath}
X_{n,i}^{c_{1}}({\mathbf Y}) = \tau_{0} + T_{i}.
\end{displaymath}
Therefore, we see that
\begin{equation} \label{eq:errorVariance}
\textrm{Var}(T_{i}) =
\frac{\sigma^{2}}{\alpha_{i}^{2}}\bigg(1+\frac{2(2m+1)}{m(m-1)}\bigg)
= \frac{\bar{\sigma}^{2}}{\alpha_{i}^{2}},
\end{equation}
where $\bar{\sigma}^{2}$ from Theorem~\ref{theorem:main} is
\begin{displaymath} \bar{\sigma}^{2} =
\sigma^{2}\bigg(1+\frac{2(2m+1)}{m(m-1)}\bigg).
\end{displaymath}
We have shown that using the pulse connection function
$X_{n,i}^{c_{1}}({\mathbf
Y})\stackrel{\Delta}{=}V_{n,i}^{c_{1}}({\mathbf Y})$ satisfies the
conditions of Theorem~\ref{theorem:main}. Thus, all the results
of the theorem apply.
As a result, we have established a time synchronization estimator
$V_{n,i}^{c_{1}}({\mathbf Y})$ and a pulse-connection function
$X_{n,i}^{c_{1}}({\mathbf Y})$. In the case of $\delta(d)=0$, we
have that $X_{n,i}^{c_{1}}({\mathbf
Y})\stackrel{\Delta}{=}V_{n,i}^{c_{1}}({\mathbf Y})$, or in the
time scale of $c_{i}$, $X_{n,i}^{c_{i}}({\mathbf
Y})\stackrel{\Delta}{=}V_{n,i}^{c_{i}}({\mathbf Y})$. When each
node in the network uses the pulse-connection function
$X_{n,i}^{c_{i}}({\mathbf Y})$ we have a resulting aggregate
waveform that has a zero-crossing at the next integer value of $t$
as $N\to\infty$. This fact follows from applying
Theorem~\ref{theorem:main}. Thus, we have an asymptotic
steady-state time synchronization method that can maintain a
sequence of equispaced zero-crossings occurring at integer values
of $t$. An interesting feature of this synchronization technique
is that no node needs to know any information about its location
or its surrounding neighbors.
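A toy end-to-end simulation of this steady-state loop is sketched
below in Python. For finite $N$ we approximate the aggregate
zero-crossing by the average of the nodes' transmit times, which is
reasonable for small jitter because $p$ is odd and nearly linear
around its central zero; all parameter values and distributions are
illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, m, sigma = 500, 5, 0.02
alpha = rng.uniform(0.9, 1.1, N)     # unknown skews
Delta = rng.uniform(0.0, 1.0, N)     # unknown offsets

H = np.column_stack([np.ones(m), np.arange(m)])
P = np.array([1.0, m]) @ np.linalg.inv(H.T @ H) @ H.T

def c_i(t, i):   # clock reading of node i at reference time t
    return alpha[i] * (t - Delta[i]) + rng.normal(0.0, sigma)

crossings = [float(k) for k in range(1, m + 1)]   # initial equispaced set
for n in range(12):
    tx = np.empty(N)
    for i in range(N):
        Y = np.array([c_i(s, i) for s in crossings[-m:]])
        V = P @ Y                      # prediction in the c_i time scale
        # transmit at clock reading V; in c_1 this is (V - Psi)/alpha + Delta
        tx[i] = (V - rng.normal(0.0, sigma)) / alpha[i] + Delta[i]
    crossings.append(tx.mean())        # stand-in for the aggregate crossing
print(np.round(crossings, 3))          # stays close to the integers
\end{verbatim}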
\subsubsection{Cooperation without Simultaneous Transmission and
Reception} \label{sec:noSimultTxRx}
Before ending this section, let us comment on the assumption of
simultaneous transmission and reception. One way to relax this
assumption is to divide the network into two disjoint sets of
nodes, say the odd numbered nodes and the even numbered nodes,
where each set is still uniformly distributed over the area. Then,
the odd nodes and the even nodes will take turns transmitting and
receiving. For example, the odd numbered nodes can transmit
pulses at odd values of $t$ and the even numbered nodes will
listen. The even numbered nodes will then transmit pulses at the
even values of $t$ and the odd numbered nodes will listen. With
such a scheme, nodes do not transmit and receive pulses
simultaneously, but can still take advantage of spatial averaging.
The odd numbered nodes will see an aggregate waveform generated by
a subset of the even numbered nodes and the even numbered nodes
will receive a waveform cooperatively generated by the odd
numbered nodes. Let us take a more detailed look at this scheme.
\begin{figure}[!h]
\centerline{\psfig{file=noSimultTxRx.eps,width=15cm}}
\caption{\small In the above figure, we assume $\tau_{0}$ is an
even integer value of $t$ and $m=3$. Therefore, each even
numbered node will turn on its receiver to receive the aggregate
signal arriving at times $\tau_{0}-5$, $\tau_{0}-3$, and
$\tau_{0}-1$. Using these three received times, it can then
estimate the time of $\tau_{0}$. Thus, the aggregate signal
occurring at $\tau_{0}$ is cooperatively generated by the even
numbered nodes and is received by the odd numbered nodes.}
\label{fig:noSimultTxRx}
\end{figure}
In Fig.~\ref{fig:noSimultTxRx} we assume that $\tau_{0}$ is an
even integer value of $t$ and use $m=3$. Each even numbered node
will use the aggregate signals occurring at $\tau_{0}-5$,
$\tau_{0}-3$, and $\tau_{0}-1$ to estimate $\tau_{0}$ and
cooperatively the even nodes will generate the aggregate signal at
$\tau_{0}$. The odd numbered nodes will then use the aggregate
signals occurring at $\tau_{0}-4$, $\tau_{0}-2$, and $\tau_{0}$ to
generate the aggregate signal at $\tau_{0}+1$. Therefore, the odd
and even numbered nodes can take turns transmitting and receiving
signals and nodes never need to simultaneously transmit and
receive.
Of course, such a setup would require a modification of the
estimators used by the nodes. Nodes will receive a vector of $m$
observations $\mathbf{Y}$ with $\mathbf{Y}[l+1] =
\alpha_{i}(\tau_{0}+1-2(m-l)-\bar{\Delta}_{i})+\Psi_{i}$ for $l =
0,1,\ldots,m-1$. With such a mechanism, the ${\mathbf H}$ matrix
in equation (\ref{eq:simple-obs}) would change to
\begin{displaymath}
{\mathbf H} = \left[ \begin{array}{ccccc}
1 & 1 & 1 & \ldots & 1\\
0 & 2 & 4 & \ldots & 2(m-1)
\end{array} \right]^T
\end{displaymath} and $\theta$ becomes
\begin{displaymath}
{\mathbf \theta} = \left[ \begin{array}{c}
\theta_{1}\\
\theta_{2}
\end{array} \right] =
\left[ \begin{array}{c}
\alpha_{i}(\tau_{0}+1-2m-\bar{\Delta}_{i})\\
\alpha_{i}
\end{array} \right].
\end{displaymath}
To estimate the time $\tau_{0}$ in the time scale of $c_{i}$,
we can proceed as in Section~\ref{sec:timeSyncEstimator}:
\begin{displaymath}
\hat{\theta}_{ML}=({\mathbf H}^{T}\Sigma^{-1} {\mathbf
H})^{-1}{\mathbf H}^{T}\Sigma^{-1}{\mathbf Y}=
(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}Y
\end{displaymath}
will be distributed $\hat{\theta}_{ML} \sim {\mathcal N}(\theta,
\sigma^{2}({\mathbf H}^{T}{\mathbf H})^{-1})$ and
$\hat{\theta}_{ML}$ is UMVU. This leads to the UMVU estimate
$\hat{\phi}_{ML} = {\mathbf C}\hat{\theta}_{ML}$, where ${\mathbf
C}=[1\quad 2m-1]$, and
$E(\hat{\phi}_{ML})={\mathbf
C}E(\hat{\theta}_{ML})=\theta_{1}+(2m-1)\theta_{2}$.
In this case, the variance of $\hat{\phi}_{ML}$ will be
$\textrm{Var}_{\theta}(\hat{\phi}_{ML})={\mathbf
C}\sigma^{2}({\mathbf H}^{T}{\mathbf H})^{-1}{\mathbf C}^{T}$, and
thus we have that
\begin{displaymath}
V_{n,i}^{c_{i}}(\mathbf{Y}) = \hat{\phi}_{ML} \sim
\mathcal{N}\bigg(\alpha_{i}(\tau_{0}+1-2m-\bar{\Delta}_{i})+(2m-1)\alpha_{i},
\frac{\sigma^{2}(2m+1)(2m-1)}{m(m-1)(m+1)}\bigg).
\end{displaymath}
Converted to the time scale of $c_{1}$ we have
\begin{equation} \label{eq:noSimult}
V_{n,i}^{c_{1}}(\mathbf{Y}) \sim \mathcal{N}\bigg(\tau_{0},
\frac{\sigma^{2}}{\alpha_{i}^{2}}\bigg(1+\frac{(2m+1)(2m-1)}{m(m-1)(m+1)}\bigg)\bigg).
\end{equation}
Comparing equations (\ref{eq:compToNoSimult}) and
(\ref{eq:noSimult}), we see that they have the same form. As a
result, we can again set $X_{n,i}^{c_{i}}({\mathbf
Y})\stackrel{\Delta}{=}V_{n,i}^{c_{i}}({\mathbf Y})$ and achieve
cooperative time synchronization.
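The two schemes are easy to compare numerically. The Python sketch
below recomputes the estimator variance of the odd/even scheme
directly from the modified ${\mathbf H}$, checks it against the
closed form, and places it next to the simultaneous-transmission
value (all in units of $\sigma^{2}$).
\begin{verbatim}
import numpy as np

for m in (3, 5, 10, 20):
    H = np.column_stack([np.ones(m), 2.0 * np.arange(m)])  # spacing 2
    C = np.array([1.0, 2*m - 1])
    v_num = C @ np.linalg.inv(H.T @ H) @ C        # direct computation
    v_form = (2*m + 1) * (2*m - 1) / (m*(m - 1)*(m + 1))
    v_simult = 2 * (2*m + 1) / (m * (m - 1))      # eq. (compToNoSimult)
    print(m, v_num, v_form, v_simult)
\end{verbatim}
For the same $m$, the odd/even estimator is in fact slightly
better, since its observations span twice the time interval.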
\section{Time Synchronization with Propagation Delay}
\label{sec:timeDelay}
We now extend the ideas of cooperative time synchronization to the
situation where signals suffer not only from pathloss but also
propagation delay. It turns out that the effect of propagation
delay can also be addressed using the concept we have been using
throughout this paper --- averaging out errors using the large
number of nodes in the network.
In this section, we use the pathloss and propagation delay model
detailed in Section~\ref{sec:delay-model}. We introduce a time
delay function $\delta(d)$. For generality, we explicitly model a
multi-hop network where we have a $K(d)$ function that is zero for
$d$ greater than some distance $R$, i.e. $K(d) = 0$ for $d>R$.
Such a model implies that the aggregate signal seen at any node
$j$ is influenced only by the set of nodes inside a circle of
radius $R$ centered at node $j$. With this we can effectively
divide the network into two disjoint sets, a set of {\em interior
nodes} and a set of {\em boundary nodes}. An interior node $j$ is
defined to be a node whose distance from the nearest network
boundary is greater than or equal to $R$. A boundary node is thus
defined to be a node that is a distance less than $R$ away from
the nearest network boundary.
We make this distinction since the synchronization technique for
each set of nodes is different. Please note that if a pathloss
function with $K(d) = 0$ for $d>R$ is unreasonable, then we
simply choose $R$ to be infinite and consider all nodes in the
network to be boundary nodes.
Using the propagation delay model, $D_{j,i}$ will obviously modify
the general received aggregate waveform seen at any node $j$. In
fact, equation (\ref{eq:timesync-aggwaveform}) will now be written
as
\begin{eqnarray} \label{eq:timesync-aggwaveform-withDelay}
A^{c_{1}}_{j,N}(t) & = & \sum_{i=1}^{N} \frac{A_{max}K_{j,i}}{N}
p(t-\tau_{0}-T_{i}-D_{j,i}).
\end{eqnarray}
For $N$ large, this model will give an accurate characterization
of the aggregate waveform seen at node $j$.
\subsection{Conceptual Motivation}
\label{sec:propDelay_motivation}
From equation (\ref{eq:timesync-aggwaveform-withDelay}), it is
clear that the aggregate waveform will not have a zero-crossing at
$\tau_{0}$ for every node $j$ because of the presence of the
$D_{j,i}$ random variables. Therefore, to average out propagation
delay, the idea we employ is to have each node introduce a
\emph{random} artificial time shift that counteracts the effect of
the time delay random variable. More precisely, we want to
introduce another random variable $D_{fix}$ such that
$D_{fix}+D_{j}$ will have zero mean and a symmetric distribution.
At the same time, we assume each node knows $K(\cdot)$ and
$\delta(\cdot)$ and will also introduce an artificial scaling
factor $K_{fix}=K(\delta^{-1}(-D_{fix}))$ to simplify the analysis
of the aggregate waveform. This means that instead of using the
scaling factor $A_{i}=A_{max}/N$, each node $i$ will scale its
transmitted pulse by $A_{i}=A_{max}K_{fix}/N$. For the motivation
in this section, let us assume that node $j$ is an interior node.
To find the distribution of $D_{fix}$, we consider the following.
$D_{j}$ has density $f_{D_{j}}(x)$ and let $f_{D_{fix}}(x)$ be the
density of $D_{fix}$. Since $D_{j}$ and $D_{fix}$ are
independent, we know that the density of $D_{T}=D_{fix}+D_{j,i}$,
$f_{D_{T}}(x)$, will be the convolution of $f_{D_{j}}(x)$ and
$f_{D_{fix}}(x)$. Therefore, by the properties of the convolution
function, if we set
$f_{D_{fix}}(x)\stackrel{\Delta}{=}f_{D_{j}}(-x)$, then we have
that $f_{D_{T}}(x)$ is symmetric, i.e.
$f_{D_{T}}(x)=f_{D_{T}}(-x)$. As well, since $D_{j}$ has finite
expectation, it is easy to see that $E(D_{T}) = 0$.
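This construction can be checked numerically. As an illustrative
stand-in (not the paper's model), take $\delta(d)=d/c$ with nodes
uniformly distributed in a disc of radius $R$ around an interior
node $j$, so that $D_{j}$ has density proportional to $x$ on
$[0,R/c]$; $D_{fix}$ is then drawn from the mirrored density
$f_{D_{j}}(-x)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
R, c, n = 100.0, 3e8, 10**6

D_j = R * np.sqrt(rng.uniform(size=n)) / c        # delay of a uniform node
D_fix = -R * np.sqrt(rng.uniform(size=n)) / c     # density f_{D_j}(-x)
D_T = D_fix + D_j

print(D_T.mean())                                 # ~ 0
q = np.quantile(D_T, [0.1, 0.9])
print(q[0] + q[1])                                # ~ 0: symmetric density
\end{verbatim}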
Given a sequence of $m$ zero-crossings that we know to be
occurring at integers of $t$, we can still use
$V_{n,i}^{c_{1}}({\mathbf Y})$ (from (\ref{eq:timesync-estimator})
in the time scale of node $1$) as the time synchronization
estimator. However, with propagation delay, the pulse-connection
function will now be $X_{n,i}^{c_{1}}({\mathbf
Y})=V_{n,i}^{c_{1}}({\mathbf Y})+D_{fix} =
\tau_{0}+T_{i}+D_{fix}$. With $D_{fix}$ and $K_{fix}$ included, we
can rewrite equation (\ref{eq:timesync-aggwaveform-withDelay}) as
\begin{eqnarray} \label{eq:timesync-aggwaveform-withDelayAndFix}
A^{c_{1}}_{j,N}(t) = \sum_{i=1}^{N}
\frac{A_{max}K_{fix}K_{j,i}}{N}
p(t-\tau_{0}-T_{i}-D_{fix}-D_{j,i}).
\end{eqnarray}
It is important to see that since $D_{j}$ has the same
distribution for \emph{all} interior nodes $j$, equation
(\ref{eq:timesync-aggwaveform-withDelayAndFix}) holds for every
node $j$ that is an interior node. This means that for the
network to cooperatively generate the waveform in
(\ref{eq:timesync-aggwaveform-withDelayAndFix}) each transmit node
$i$ needs to have the following additional knowledge: (1) the
distribution of $D_{fix}$ whose density is
$f_{D_{fix}}(x)\stackrel{\Delta}{=}f_{D_{j}}(-x)$, where $j$ is an
interior node, and (2) the functions $K(\cdot)$ and
$\delta(\cdot)$ to generate $K_{fix}$. With this knowledge, we can
use equation (\ref{eq:timesync-aggwaveform-withDelayAndFix}) to
study the aggregate waveform seen at any interior node $j$. In
fact, we find that the aggregate waveform has limiting properties
that are similar to those outlined in Theorem~\ref{theorem:main}.
These properties are described in
Theorem~\ref{theorem:main-delay}.
\begin{theorem}
\label{theorem:main-delay} Let $p(t)$ be as defined in
equation~(\ref{eq:poft}) and $T_i\sim {\mathcal N}(0,
\frac{\bar{\sigma}^{2}}{\alpha_{i}^{2}})$ with
$\bar{\sigma}^{2}>0$ a constant and
$\frac{\bar{\sigma}^2}{\alpha_{i}^{2}} < B <\infty$ for all $i$,
$B$ a constant. $K_{j,i}$ and $D_{j,i}$ are defined as in
Section~\ref{sec:delay-model} and $D_{fix}$ with density
$f_{D_{fix}}(x)\stackrel{\Delta}{=}f_{D_{j}}(-x)$ is independent
from $D_{j,i}$. $K_{fix}=K(\delta^{-1}(-D_{fix}))$ and let
$D_{j,i}$, $D_{fix}$, and $T_{i}$ be mutually independent for all
$i$. Then, for any interior node $j$ with $A_{j,N}^{c_{1}}(t)$ as
defined in (\ref{eq:timesync-aggwaveform-withDelayAndFix}),
$\lim_{N\to\infty}A_{j,N}^{c_{1}}(t) = A_{j,\infty}^{c_{1}}(t)$
has the properties
\begin{itemize}
\item $A_{j,\infty}^{c_{1}}(\tau_0) = 0$,
\item $A_{j,\infty}^{c_{1}}(t)$ is odd around $t=\tau_{0}$, i.e.\
$A_{j,\infty}^{c_{1}}(\tau_{0}+\xi) =
-A_{j,\infty}^{c_{1}}(\tau_{0}-\xi)$ for $\xi\geq 0$. \qquad
$\bigtriangleup$
\end{itemize}
\end{theorem}
The proof of Theorem~\ref{theorem:main-delay} is left for the
appendix.
From the arguments so far, it seems that time synchronization with
delay, at least for interior nodes, can be solved simply by
modifying the pulse-connection function $X_{n,i}^{c_{1}}({\mathbf
Y})$ and changing the scaling factor to $A_{i}=A_{max}K_{fix}/N$.
Theorem~\ref{theorem:main-delay} tells us that the limiting
aggregate waveform makes a zero-crossing at the next integer value
of $t$ and the waveform is odd. Thus, we can use this
zero-crossing as a synchronization event and maintain
synchronization in a manner identical to the technique used in the
situation without propagation delay. Unfortunately, this is not
the case. In order to implement the above concept, we need
to find the random variable, $D_{fix}^{c_{i}}$, in the time scale
of $c_{i}$, that corresponds to $D_{fix}$ such that
\begin{eqnarray*}
(V_{n,i}^{c_{i}}({\mathbf Y})+D_{fix}^{c_{i}})^{c_{1}} & = &
\frac{V_{n,i}^{c_{i}}({\mathbf
Y})+D_{fix}^{c_{i}}-\Psi_{i}}{\alpha_{i}}+\bar{\Delta}_{i} \\
& = & V_{n,i}^{c_{1}}({\mathbf Y}) +
\frac{D_{fix}^{c_{i}}}{\alpha_{i}} \\
& = & V_{n,i}^{c_{1}}({\mathbf Y})+D_{fix}.
\end{eqnarray*}
This means that we need $D_{fix}^{c_{i}}/ \alpha_{i} = D_{fix}$.
However, each node $i$ cannot find $D_{fix}^{c_{i}}$ that
satisfies this since it does not know its $\alpha_{i}$.
\subsection{Time Synchronization of Interior Nodes}
\label{sec:timeSync-Interior}
Since the $i$th node does not know its own value of $\alpha_{i}$,
to do time synchronization with propagation delay we can have each
node estimate its $\alpha_{i}$ value. However, this estimate will
not be perfect and we may no longer have the symmetric limiting
aggregate waveform described by Theorem~\ref{theorem:main-delay}.
This means that the center zero-crossing might occur some
$\epsilon$ away from $\tau_{0}$, $\tau_{0}$ an integer value of
$t$. However, steady-state time synchronization can be maintained
if the network can use a sequence of $m$ equispaced zero-crossings
that occur at $t=\tau_{0}-m+\epsilon, \tau_{0}-m+1+\epsilon,
\tau_{0}-m+2+\epsilon,\ldots,\tau_{0}-1+\epsilon$, where
$\tau_{0}$ is an integer value of $t$, to cooperatively generate a
limiting aggregate waveform that has a zero-crossing at
$\tau_{0}+\epsilon$. In such a situation, the network will be able
to construct a sequence of equispaced zero-crossings and maintain
the occurrence of these zero-crossings indefinitely. The idea is
the same as in the case without propagation delay, but the only
difference here would be that the zero-crossings do not occur at
integer values of $t$. Let us give a more formal description of
this idea.
Using notation from Section~\ref{sec:timeSyncEstimator}, we start
with the assumption that each interior node $i$ has a sequence of
$m$ observations that has the form
\begin{equation} \label{eq:observation_form}
\alpha_{i}(\tau_{0}-m+l+\epsilon-\bar{\Delta}_{i})+\Psi_{i},
\end{equation}
where $l=0,1,\ldots,m-1$ and $\epsilon$ is known. To develop the
time synchronization estimator $V_{n,i}^{c_{i}}({\mathbf Y})$ and
the pulse-connection function $X_{n,i}^{c_{i}}({\mathbf Y})$, we
consider the observations made by each node. If we assume that
each node knows the value of $\epsilon$, the vector of
observations can be written as in (\ref{eq:simple-obs})
\begin{displaymath}
{\mathbf Y} = \bar{{\mathbf H}}{\mathbf \theta} + {\mathbf W},
\end{displaymath}
where the matrix $\bar{{\mathbf H}}$ in this case is
\begin{displaymath}
\bar{{\mathbf H}} = \left[ \begin{array}{ccccc}
1 & 1 & 1 & \ldots & 1\\
\epsilon & 1+\epsilon & 2+\epsilon & \ldots & m-1+\epsilon
\end{array} \right]^T.
\end{displaymath}
Using this model, we can follow the development in
Section~\ref{sec:timeSyncEstimator} to find the time
synchronization estimator
\begin{equation} \label{eq:timesync-estimator-propDelay}
V_{n,i}^{c_{i}}({\mathbf Y},\epsilon) = {\mathbf
C}(\bar{\mathbf{H}}^{T}\bar{\mathbf{H}})^{-1}\bar{\mathbf{H}}^{T}{\mathbf
Y},
\end{equation}
where ${\mathbf C} = [1\quad m]$. This estimator will give each
node the ability to optimally estimate the next integer value of
$t$. Note that the variance of the time synchronization estimator
is
\begin{equation}
\textrm{Var}_{\theta}(V_{n,i}^{c_{i}}({\mathbf Y},\epsilon)) =
{\mathbf C}\sigma^{2}(\bar{{\mathbf H}}^{T}\bar{{\mathbf
H}})^{-1}{\mathbf C}^{T} =
\sigma^{2}\bigg(\frac{2(2m+1)}{m(m-1)}+\frac{12\epsilon(\epsilon-1-m)}{(m-1)m(m+1)}\bigg).
\end{equation}
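The closed form above can be confirmed by evaluating
${\mathbf C}(\bar{{\mathbf H}}^{T}\bar{{\mathbf H}})^{-1}{\mathbf
C}^{T}$ directly; the values of $m$ and $\epsilon$ in the Python
sketch below are arbitrary.
\begin{verbatim}
import numpy as np

m = 6
C = np.array([1.0, m])
for eps in (-0.3, 0.0, 0.25, 0.5):
    Hb = np.column_stack([np.ones(m), np.arange(m) + eps])
    v_num = C @ np.linalg.inv(Hb.T @ Hb) @ C      # times sigma^2
    v_form = (2*(2*m + 1)/(m*(m - 1))
              + 12*eps*(eps - 1 - m)/((m - 1)*m*(m + 1)))
    print(eps, v_num, v_form)
\end{verbatim}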
Using the time synchronization estimator, we can choose the
pulse-connection function as
\begin{equation} \label{eq:pulse-connection-propDelay}
X_{n,i}^{c_{i}}({\mathbf Y})=V_{n,i}^{c_{i}}({\mathbf
Y},\epsilon)+\hat{\alpha}_{i}D_{fix}=V_{n,i}^{c_{i}}({\mathbf
Y},\epsilon)+D^{c_{i}}_{fix},
\end{equation}
where each time node $i$ makes the estimate
$V_{n,i}^{c_{i}}({\mathbf Y},\epsilon)$ it also estimates
$\hat{\alpha}_{i}$ as
\begin{displaymath}
\hat{\alpha}_{i} = \bar{{\mathbf
C}}(\bar{\mathbf{H}}^{T}\bar{\mathbf{H}})^{-1}\bar{\mathbf{H}}^{T}{\mathbf
Y},
\end{displaymath}
where $\bar{{\mathbf C}}=[0\quad 1]$. We find that $\hat{\alpha}_{i}
\sim {\mathcal N}(\alpha_{i}, 12\sigma^{2}/((m-1)m(m+1)))$. Since,
from Section~\ref{sec:propDelay_motivation}, we know we want
$D_{fix}^{c_{i}}/ \alpha_{i} = D_{fix}$, we have set
$D_{fix}^{c_{i}} \stackrel{\Delta}{=} \hat{\alpha}_{i}D_{fix}$.
Notice that since $D_{fix}^{c_{i}}$ is simply a realization of
$D_{fix}$ multiplied by node $i$'s estimate of $\alpha_{i}$, node
$i$ can use the realization of $D_{fix}$ and find
$K_{fix}=K(\delta^{-1}(-D_{fix}))$.
With our choice of $X_{n,i}^{c_{i}}({\mathbf Y})$ in
(\ref{eq:pulse-connection-propDelay}), we see that
\begin{displaymath}
(V_{n,i}^{c_{i}}({\mathbf Y},\epsilon)+D_{fix}^{c_{i}})^{c_{1}}
\;\;=\;\; V_{n,i}^{c_{1}}({\mathbf Y},\epsilon)+Z_{i}D_{fix}
\;\;=\;\; \tau_{0} +T_{i}+Z_{i}D_{fix},
\end{displaymath}
where $Z_{i}\sim{\mathcal
N}(1,12\sigma^{2}/(\alpha_{i}^{2}(m-1)m(m+1)))$, and
$\tau_{0}+T_{i}=V_{n,i}^{c_{1}}({\mathbf Y},\epsilon)$. Because of
the random factor $Z_{i}$, we see that
$D_{T}=Z_{i}D_{fix}+D_{j,i}$ no longer has a symmetric
distribution. As a result, the limiting aggregate waveform
\begin{equation} \label{eq:aggregate-waveform-timeDelay}
A_{j,\infty}^{c_{1}}(t)=\lim_{N\to\infty}A^{c_{1}}_{j,N}(t) =
\lim_{N\to\infty}\sum_{i=1}^{N} \frac{A_{max}K_{fix}K_{j,i}}{N}
p(t-\tau_{0}-T_{i}-Z_{i}D_{fix}-D_{j,i})
\end{equation}
may not have a zero-crossing at $t=\tau_{0}$.
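The loss of symmetry is easy to exhibit numerically. With
illustrative stand-in distributions (a uniform $D_{j}$ and
$Z_{i}\sim{\mathcal N}(1,0.04)$), the sum $Z_{i}D_{fix}+D_{j,i}$
keeps its zero mean but acquires a nonzero third central moment:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 10**6
D_j = rng.uniform(0.0, 1.0, n)        # stand-in delay density
D_fix = -rng.uniform(0.0, 1.0, n)     # f_{D_fix}(x) = f_{D_j}(-x)
Z = rng.normal(1.0, 0.2, n)           # stand-in for Z_i

D_T = Z * D_fix + D_j
print(D_T.mean())                             # ~ 0: the mean survives
print(((D_T - D_T.mean())**3).mean())         # != 0: symmetry is lost
\end{verbatim}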
Thus, if we can find an $\epsilon$ such that each node $i$ using a
set of observations of the form (\ref{eq:observation_form}) allows
the network to cooperatively generate the waveform in
(\ref{eq:aggregate-waveform-timeDelay}) that has its zero-crossing
occurring at $t=\tau_{0}+\epsilon$ (in the time scale of $c_{1}$),
then we have steady-state time synchronization. This is because
the network would be able to use a sequence of $m$ observations to
generate the next observation that gives the same information as
any of the previous observations. Thus, by always taking the $m$
most recent observations, the process can continue forever and
maintain synchronization. Each node $i$ would need to know the
distribution of $D_{fix}$, the value of $\epsilon$, and the
functions $K(\cdot)$ and $\delta(\cdot)$. Therefore, we find that
steady-state time synchronization of the interior nodes is
possible under certain conditions. As a note, no interior node
needs to know any location information.
\subsection{Time Synchronization of Boundary Nodes}
\label{sec:timeSync-Boundary}
Before we consider the synchronization of boundary nodes, we note
that the key requirement for each boundary node $i$ is to have a
pulse-connection function given in equation
(\ref{eq:pulse-connection-propDelay}). The reason that this must
be the pulse-connection function for every boundary node $i$ is
that the analysis for the interior nodes assumes that the
aggregate waveform seen by any interior node $j$ is created by
pulse transmissions occurring at a time determined by
(\ref{eq:pulse-connection-propDelay}). Since the aggregate
waveforms seen by some interior nodes are created by pulse
transmissions from boundary nodes, each boundary node must have
the appropriate pulse-connection function. This requirement,
however, proves to be extremely problematic and reveals a
limitation of the elegant technique of averaging out timing delay
when we come to the boundaries of the network.
The problem comes because $D_{fix}+D_{j,i}$ already does not have
a symmetric distribution if $j$ is a boundary node. Recall that
$f_{D_{fix}}(x) = f_{D_{j}}(-x)$ when $j$ is an interior node and
$f_{D_{j}}(x) = f_{D_{l}}(x)$ when $j$ and $l$ are both interior
nodes. However, $f_{D_{j}}(x) \ne f_{D_{l}}(x)$ when $j$ is an
interior node and $l$ is a boundary node. As a result,
$D_{fix}+D_{j,i}$ is no longer symmetric if $j$ is a boundary
node. In fact, it is clear that the distribution of
$D_{fix}+D_{j,i}$ is a function of node $j$'s location near the
boundary. Because of this additional asymmetry, let us assume for
a moment that the sequence of zero-crossings observed by boundary
node $i$ occur $\epsilon_{i}$ away from an integer value of $t$.
That is, if every node in the network, including the boundary
nodes, transmitted a sequence of pulses where each pulse was sent
according to (\ref{eq:pulse-connection-propDelay}), then boundary
node $i$ would observe the sequence of observations
\begin{equation} \label{eq:observation_form_boundary}
\alpha_{i}(\tau_{0}-m+l+\epsilon_{i}-\bar{\Delta}_{i})+\Psi_{i},
\end{equation}
where $l=0,1,\ldots,m-1$ and $\epsilon_{i}$ is known.
This boundary node $i$ could then use the time synchronization
estimator given by (\ref{eq:timesync-estimator-propDelay}) but
where the matrix $\bar{{\mathbf H}}$ is now replaced with
$\bar{{\mathbf H}}_{i}$
\begin{displaymath}
\bar{{\mathbf H}}_{i} = \left[ \begin{array}{ccccc}
1 & 1 & 1 & \ldots & 1\\
\epsilon_{i} & 1+\epsilon_{i} & 2+\epsilon_{i} & \ldots &
m-1+\epsilon_{i}
\end{array} \right]^T.
\end{displaymath}
Thus, for this boundary node $i$ we have
\begin{equation} \label{eq:timesync-estimator-propDelay-boundary}
V_{n,i}^{c_{i}}({\mathbf Y},\epsilon_{i}) = {\mathbf
C}(\bar{\mathbf{H}}_{i}^{T}\bar{\mathbf{H}}_{i})^{-1}\bar{\mathbf{H}}_{i}^{T}{\mathbf
Y}.
\end{equation}
In this case, however, the variance of the time synchronization
estimator depends on $\epsilon_{i}$:
\begin{equation} \label{eq:timesync-estimator-variance-boundary}
\textrm{Var}_{\theta}(V_{n,i}^{c_{i}}({\mathbf Y},\epsilon_{i})) =
\sigma^{2}\bigg(\frac{2(2m+1)}{m(m-1)}+\frac{12\epsilon_{i}(\epsilon_{i}-1-m)}{(m-1)m(m+1)}\bigg).
\end{equation}
The fact that the variance depends on $\epsilon_{i}$ is the root
of the problem. The pulse-connection function
\begin{equation} \label{eq:pulse-connection-propDelay-boundary}
X_{n,i}^{c_{i}}({\mathbf Y})=V_{n,i}^{c_{i}}({\mathbf
Y},\epsilon_{i})+\hat{\alpha}_{i}D_{fix},
\end{equation}
is \emph{not} the same as that given by
(\ref{eq:pulse-connection-propDelay}).
To correct for this, we can make the strong assumption that each
boundary node $i$ knows its own $\alpha_{i}$. We address the
reasoning behind this assumption in
Section~\ref{sec:propDelay_assumption}. If we use this
assumption, then each boundary node $i$ can get an observation
sequence of the form (\ref{eq:observation_form}) simply by adding
$\alpha_{i}(\epsilon-\epsilon_{i})$ to each of the $m$
observations of the form given in
(\ref{eq:observation_form_boundary}), where we assume that node
$i$ knows both $\epsilon$ and $\epsilon_{i}$. With such an
observation sequence, boundary node $i$ will have the time
synchronization estimator (\ref{eq:timesync-estimator-propDelay})
and, more importantly, the pulse-connection function
(\ref{eq:pulse-connection-propDelay}). Thus, maintaining time
synchronization for the case of propagation delay would be
possible.
What we have then is that boundary node synchronization would
require only the boundary nodes to know their $\alpha_{i}$
parameters. With this strong assumption only for the boundary
nodes, the network is effectively synchronized. Even though the
boundary nodes do not see the same zero-crossing as the interior
nodes, they can calculate this time and thus have all the required
synchronization information.
\subsection{The Boundary Node Assumption}
\label{sec:propDelay_assumption}
The assumption that each boundary node $i$ knows $\alpha_{i}$ is a
strong assumption. Even though the fraction of nodes that are
boundary nodes is small for multi-hop networks requiring many hops
to send information across the network, we believe that the
assumption is still very artificial. There are two reasons that we
make the assumption for the presentation of results on time
synchronization with propagation delay.
First, the assumption allows us to give an elegant presentation of
the main concept of this paper, which is to use high node density
to average out errors in the network. Throughout this work we
have used high node density to average out inherent errors present
in the nodes. We were able to average out random timing jitter
that is present in each node and provide the network with a
sequence of zero-crossings that can serve as synchronization
events. We then applied this technique to averaging out the
errors introduced by time delay. To this end we were partially
successful in that the interior nodes can average out these errors
assuming the boundary nodes have additional information. But this
is of interest since the goal of this paper is to understand the
theory of spatial averaging for synchronization and discover its
fundamental advantages and limitations.
Second, the problem encountered at the boundaries is one that
opens up an entirely new area of study which is the target of our
future work. The issue that we encounter is that the waveform seen
by some nodes in the network will have a zero-crossing that is
shifted from the ideal location. This implies that different nodes
will observe different zero-crossings. Furthermore, these
zero-crossings will now evolve in time since we do not have the
same observations over the entire network. This problem is similar
to what we encounter if we consider finite sized networks. For
finite $N$, the zero-crossing location will be random and thus
introduce another source of error. As well, different nodes will
see different zero-crossing locations. Therefore, we will turn our
attention to the case of finite $N$ and develop a different set of
tools that will be needed to understand what types of
synchronization are achievable under the situation where
zero-crossing locations evolve in time. Using this understanding,
we hope to return to the issue of propagation delay in
asymptotically dense networks and characterize the behavior of the
network.
\section{Conclusions}
\label{sec:conclusion}
To conclude, we revisit the scalability issue under the light of
work developed in this paper.
\subsection{The Scalability Problem Revisited}
In the Introduction (Section~\ref{sec:estimate-params}), we
mentioned that most existing proposals for time synchronization
suffer from an inherent scalability problem. The problem with
those existing proposals lies in the fact that synchronization
errors accumulate: if node 2 can synchronize to node 1 with some
small error, and node 3 can synchronize to node 2 with the same
small error, these errors accumulate, and the synchronization of
node 3 to node 1 is worse. Therefore, synchronization error
increases with the number of hops in the network, and this problem
is especially apparent in the regime of high densities. To make
these ideas precise, we first determine the maximum number of hops
over which synchronization information must travel and then study
how the error in a generic pairwise synchronization mechanism
depends on this number of hops.
\subsubsection{An Estimate of the Maximum Number of Hops}
To obtain an estimate for the maximum number of hops $\ell_N$ in a
network in the regime of high densities (fixed area,
$N\to\infty$), we approximate the transmission range of a node by
the minimum required transmission distance, $d_{N}$, to maintain a
fully connected network with high probability.
From~\cite{GuptaK:98}, we have that for $N$ nodes uniformly
distributed over a $[0,1]\times [0,1]$ square, the graph is
connected with probability-1 as $N\to\infty$ if and only if each
node's transmission distance $d_{N}$ is such that
\begin{displaymath}
\pi d_{N}^{2} = \frac{\log N + \epsilon_{N}}{N},
\end{displaymath}
for some $\epsilon_{N}\to\infty$. Let us, therefore, approximate
$d_{N}$ as
\begin{displaymath}
d_{N} \approx \sqrt{\frac{1}{\pi}\frac{\log N}{N}}.
\end{displaymath}
Thus, $\ell_N = 1/d_N = O\big(\sqrt{\frac{N}{\log N}}\big)$, so
$\ell_N\to\infty$ as $N\to\infty$.
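For concreteness, the following short computation tabulates $d_{N}$
and the resulting hop-count estimate $\ell_{N}=1/d_{N}$ for a few
values of $N$ (unit square, $\epsilon_{N}$ dropped):
\begin{verbatim}
import numpy as np

for N in (10**3, 10**4, 10**5, 10**6):
    d_N = np.sqrt(np.log(N) / (np.pi * N))
    print(N, round(d_N, 4), round(1.0 / d_N))   # ell_N grows with N
\end{verbatim}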
\subsubsection{Synchronization Error Over Multiple Hops}
Now, we assume there are $\ell_{N}$ nodes arranged in a linear
ordering, numbered $1$ to $\ell_{N}$. To synchronize, each node
$i$ forms an estimate of its own $\alpha_{i}$, based on $m$ pulses
transmitted from node $i-1$. As before, node $1$ will have the
reference clock $c_{1}(t)=t$.
Node $1$ starts by sending $m$ pulses at times $\tau_{1}+l$ for
$l=0,1,\ldots,m-1$. As a result, node $2$ will get a vector of
observations ${\mathbf Y}_{2}$, where ${\mathbf Y}_{2}[1] =
\alpha_{2}(\tau_{1}-\bar{\Delta}_{2})+\Psi_{2}$ and the $(l+1)$th
element of ${\mathbf Y}_{2}$ is ${\mathbf
Y}_{2}[l+1]=\alpha_{2}(\tau_{1}-\bar{\Delta}_{2})+l\alpha_{2}+\Psi_{2}$.
This is similar to the situation we had in (\ref{eq:simple-obs})
and we can therefore estimate $\alpha_{2}$ using
\begin{displaymath}
\hat{\alpha}_{2} = \bar{{\mathbf
C}}(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}{\mathbf Y}_{2},
\end{displaymath}
where $\bar{{\mathbf C}} = [0\quad 1]$. We find that
$\hat{\alpha}_{2} \sim {\mathcal N}(\alpha_{2},
12\sigma^{2}/((m-1)m(m+1)))$.
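This variance expression can be checked empirically; the following is
a minimal Python/NumPy sketch, assuming (as the expression requires)
independent jitter of variance $\sigma^{2}$ in each observation, with
all concrete values chosen arbitrarily:
\begin{verbatim}
# Sketch: Monte-Carlo check of the least-squares slope variance
# 12*sigma^2/((m-1)*m*(m+1)) for observations Y[l+1] = a + l*b + noise.
import numpy as np

m, sigma, alpha2, trials = 10, 0.1, 1.3, 20000
H = np.column_stack([np.ones(m), np.arange(m)])
P = np.linalg.inv(H.T @ H) @ H.T          # least-squares operator
rng = np.random.default_rng(0)

slopes = []
for _ in range(trials):
    Y = 5.0 + alpha2 * np.arange(m) + sigma * rng.standard_normal(m)
    slopes.append(np.array([0.0, 1.0]) @ (P @ Y))
print(np.var(slopes), 12 * sigma**2 / ((m - 1) * m * (m + 1)))
\end{verbatim}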
Node $2$ will now transmit $m$ pulses at times
$\bar{\tau}_{2}+l\hat{\alpha}_{2}$ (expressed in terms of $c_{2}$), for
$l=0,1,\ldots,m-1$. Note that $\hat{\alpha}_{2}$ is now a fixed
value since node $2$ has estimated $\alpha_{2}$. In terms of
$c_{1}$, these pulses occur at
\begin{displaymath}
(\bar{\tau}_{2}+l\hat{\alpha}_{2})^{c_{1}} =
\frac{\bar{\tau}_{2}+l\hat{\alpha}_{2}-\Psi_{2}}{\alpha_{2}}+\bar{\Delta}_{2}
= \tau_{2} + l\frac{\hat{\alpha}_{2}}{\alpha_{2}}-
\frac{\Psi_{2}}{\alpha_{2}},
\end{displaymath}
for $l=0,1,\ldots,m-1$, where $\tau_{2}=
(\bar{\tau}_{2}/\alpha_{2})+\bar{\Delta}_{2}$. Thus, if we
translate these times into the time scale of $c_{3}$, we will have
the vector of observations, ${\mathbf Y}_{3}$, made by node $3$.
We find that the $(l+1)$th element of ${\mathbf Y}_{3}$ is
\begin{displaymath}
{\mathbf Y}_{3}[l+1] = \alpha_{3}((\tau_{2} +
l\frac{\hat{\alpha}_{2}}{\alpha_{2}}-
\frac{\Psi_{2}}{\alpha_{2}})-\bar{\Delta}_{3})+\Psi_{3} \sim
{\mathcal
N}\bigg(\alpha_{3}(\tau_{2}-\bar{\Delta}_{3})+l\alpha_{3}\frac{\hat{\alpha}_{2}}{\alpha_{2}},
\sigma^{2}\big(\frac{\alpha_{3}^{2}}{\alpha_{2}^{2}}+1\big)\bigg).
\end{displaymath}
This vector of observations is of the form
\begin{equation}
{\mathbf Y}_{3} = {\mathbf H}\bar{\boldsymbol{\theta}} +
\bar{{\mathbf W}},
\end{equation}
where
\begin{displaymath}
\bar{\boldsymbol{\theta}} = \left[ \begin{array}{c}
\bar{\theta}_{1}\\
\bar{\theta}_{2}
\end{array} \right] =
\left[ \begin{array}{c}
\alpha_{3}(\tau_{2}-\bar{\Delta}_{3})\\
\alpha_{3}\frac{\hat{\alpha}_{2}}{\alpha_{2}}
\end{array} \right]
\end{displaymath}
with
\begin{displaymath}
{\mathbf H} = \left[ \begin{array}{ccccc}
1 & 1 & 1 & \ldots & 1\\
0 & 1 & 2 & \ldots & m-1
\end{array} \right]^T
\end{displaymath}
and $\bar{{\mathbf W}} = [W_{1}\dots W_{m}]^{T}$, where $\bar{{\mathbf W}} \sim
{\mathcal N}(0,\Sigma)$ with $\Sigma =
\sigma^{2}\big(\frac{\alpha_{3}^{2}}{\alpha_{2}^{2}}+1\big){\mathbf
I}$.
With this vector of observations, we can use the estimator
\begin{displaymath}
\hat{\alpha}_{3} = \bar{{\mathbf
C}}(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}{\mathbf Y}_{3},
\end{displaymath}
where $\bar{{\mathbf C}} = [0\quad 1]$. We find that
\begin{displaymath}
\hat{\alpha}_{3} \sim {\mathcal
N}\bigg(\alpha_{3}\frac{\hat{\alpha}_{2}}{\alpha_{2}},
\frac{12\sigma^{2}}{((m-1)m(m+1))}\big(\frac{\alpha_{3}^{2}}{\alpha_{2}^{2}}+1\big)\bigg).
\end{displaymath}
If we continue this reasoning, we find that
\begin{displaymath}
\hat{\alpha}_{\ell_{N}} \sim {\mathcal
N}\bigg(\alpha_{\ell_{N}}\frac{\hat{\alpha}_{\ell_{N}-1}}{\alpha_{\ell_{N}-1}},
\frac{12\sigma^{2}}{((m-1)m(m+1))}\big(\frac{\alpha_{\ell_{N}}^{2}}{\alpha_{\ell_{N}-1}^{2}}+1\big)\bigg)
\end{displaymath}
will be the estimate of node $\ell_{N}$.
From the above analysis, we see that each node $i$'s estimate
suffers from jitter variance of the same form. However, there is
an accumulation of error because node $i$'s estimate has a mean
that is dependent on node $i-1$'s estimate. As a result, if node
$i-1$ has some small error, then that error will propagate to the
estimate of node $i$. A good way to see this is to consider
the special case where $\alpha_{2} = \alpha_{3} = \cdots =
\alpha_{\ell_{N}} = 1$. This is the case where the clock
frequencies are the same, but nodes do not know this. In this
case, we find that node $\ell_{N}$'s estimate can be written as
\begin{displaymath}
\hat{\alpha}_{\ell_{N}} = \hat{\alpha}_{2} +
\sum_{i=3}^{\ell_{N}}W_{i}, \qquad \ell_{N}\geq 2
\end{displaymath}
where $W_{i}\sim{\mathcal N}(0,24\sigma^{2}/((m-1)m(m+1)))$. This
is intuitive because node $i$'s estimate $\hat{\alpha}_{i}$
is the mean of the Gaussian random variable
$\hat{\alpha}_{i+1}$. Therefore, the error variance grows
linearly with the number of hops. In fact, this behavior is
observed in experimental work on Reference Broadcast
Synchronization (RBS): the authors of~\cite{ElsonGE:02} find
that the synchronization error variance of an $\ell_{N}$-hop
path is approximately $\sigma^{2}\ell_{N}$, where $\sigma^{2}$
is the one-hop error variance. Therefore, the synchronization
error variance between node $1$ and node $\ell_{N}$ grows
linearly in $\ell_{N}=1/d_{N}$, which is strictly monotonically
increasing in $N$. As a result, as $N\to\infty$, the
synchronization error grows unbounded.
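A minimal simulation sketch of this accumulation (Python/NumPy; the
hop count and noise level are arbitrary choices) confirms the linear
growth:
\begin{verbatim}
# Sketch: error accumulation across a chain of hops in the special
# case alpha_2 = ... = alpha_L = 1; the variance of the last node's
# estimate grows linearly with the hop count.
import numpy as np

m, sigma, hops, trials = 10, 0.1, 40, 5000
var1 = 12 * sigma**2 / ((m - 1) * m * (m + 1))   # one-hop variance
rng = np.random.default_rng(1)

final = 1.0 + rng.normal(0.0, np.sqrt(var1), trials)     # alpha_2-hat
for _ in range(3, hops + 1):
    final += rng.normal(0.0, np.sqrt(2 * var1), trials)  # W_i terms
print(np.var(final), var1 + (hops - 2) * 2 * var1)
\end{verbatim}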
This scalability problem, however, can potentially be avoided
using cooperative time synchronization as $N\to\infty$. This is
because in the limit of infinite density, the cooperative time
synchronization technique allows every node in the network to see
a set of identical equispaced zero-crossings. As a result, in
steady-state the synchronization error does not grow across the
network. This comes about by using the high node density to
average out random timing errors. Thus, we find that cooperative
time synchronization has very favorable scalability properties in
the limit as $N\to\infty$.
\subsection{Network Density and Synchronization Performance Trade-Off}
The cooperative synchronization technique described in this paper
provides us with deterministic parameters that we can use for time
synchronization in the limit as the node density grows unbounded. In
fact, as the node density grows, the observations that can be used
for synchronization improve. This means that our cooperative
synchronization technique provides an effective trade-off between
network density and synchronization performance. Such a trade-off
has not been available before and provides network designers with an
additional dimension over which to improve network synchronization
performance.
The fundamental idea behind cooperative time synchronization is
that by using spatial averaging, the errors inherent in each node
can be averaged out. By using observations that are an ``average''
of the information from a large number of surrounding nodes,
synchronization performance can be improved due to the higher
quality observations.
From this point of view, it is clear that the particular technique
described in this paper is but one example of using spatial
averaging to improve synchronization. Other techniques can also
be developed using spatial averaging. For example, nodes do not
necessarily have to send odd-shaped pulses and use zero-crossing
observations. Even though that setup takes advantage of the
superposition of pulses, it has its drawbacks: to keep the
signals in phase, the jitter variance limits the maximum
frequency at which signals can be sent. Instead, nodes may
transmit ultra wideband pulses. If the nodes surrounding a
particular node $j$ each transmit an impulse at their estimate of
an integer value of $t$, then due to timing errors in the
surrounding nodes, node $j$ will see a cluster of pulse arrivals
around this integer value of $t$. Node $j$ can then take the
sample mean of this cluster of pulses and use that as an
observation, just like we used the zero-crossing as an observation
in this paper. This idea is illustrated in
Fig.~\ref{fig:pulsetraincluster}. Such a technique based on ultra
wideband pulses will also provide similar scalability properties.
As a result, cooperative time synchronization really describes a
class of techniques that can take advantage of spatial averaging
to improve synchronization performance.
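As a toy illustration of the sample-mean observation (a minimal
Python/NumPy sketch; the jitter level and cluster size are arbitrary):
\begin{verbatim}
# Sketch: using the sample mean of a cluster of pulse arrivals
# (around an integer tick) as a synchronization observation.
import numpy as np

rng = np.random.default_rng(3)
tick = 7.0                                         # true integer tick
arrivals = tick + 0.01 * rng.standard_normal(500)  # neighbours' pulses
print(arrivals.mean())  # approaches the tick as the cluster grows
\end{verbatim}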
\begin{figure}[!h]
\centerline{\psfig{file=pulsetraincluster.eps,width=12cm}}
\vspace{-5mm} \caption[Clusters of ultra wideband pulses for
cooperative time synchronization.]{\small Clusters of ultra wideband pulses can be used
for cooperative time synchronization. In the top figure, we
illustrate the clusters of pulses around integer values of $t$. As
the number of nodes increases, the sample mean will converge to the
integer value of the reference time. This idea is parallel to the
use of zero-crossings shown in the bottom figure.}
\label{fig:pulsetraincluster}
\end{figure}
\subsection{Future Work}
With the goal of developing practical cooperative synchronization
mechanisms, two key areas of interest are cooperative
synchronization in finite-sized networks and algorithm
development. First, the analysis of performance for finite-sized
networks is very important. Determining when the asymptotic
properties presented in this work are good predictors of
performance in networks that may be large but still finite in size
is important in terms of bridging the gap between our proposed
ideas and practical systems. Preliminary, simulation-based work
along these lines can be found in~\cite{HuS:03b}. Second,
developing practical techniques for cooperative time
synchronization is essential for implementing spatial averaging in
real networks. Along these lines, one area of interest is
determining what types of pulses should be used, e.g. odd-shaped
pulses or ultra wideband pulses.
Furthermore, the ideas in this paper suggest a few other areas of
interest for future work. One is the issue of distributed
modulation methods. If we have the ability to generate an
aggregate waveform with equispaced zero-crossings, by controlling
the location of these crossings we can modulate information onto
this waveform and use it to communicate with a far receiver.
Preliminary work along these lines can be found in~\cite{HuS:03c}.
Another issue is to study how the idea of spatial averaging that
is so prevalent in this work contributes to synchronization that
is observed in nature.
\pagebreak
\section{Introduction}
Due to their practical importance, multi-agent collision avoidance and control have been extensively studied across different communities including AI, robotics and control. Considering continuous stochastic trajectories, reflecting each agent's uncertainty about its neighbours' time-indexed locations in an environment space, we exploit a distribution-independent bound on collision probabilities to develop a conservative collision-prediction module. It avoids temporal discretisation by stating collision-prediction as a one-dimensional optimization problem. If mean and standard deviation are computable Lipschitz functions of time, one can derive Lipschitz constants that allow us to guarantee collision prediction success with low computational effort. This is often the case, for instance, when dynamic knowledge of the involved trajectories is available
(e.g. maximum velocities or even the stochastic differential equations).
To avoid collisions detected by the prediction module, we let an agent re-plan repeatedly until no more collisions occur with a definable probability. Here, re-planning refers to modifying a control signal (influencing the basin of attraction and equilibrium point of the agent's stochastic dynamics) so as to bound the collision probability while seeking low plan execution cost in expectation. To keep the exposition concrete, we focus our descriptions on an example scenario where the plans correspond to sequences of setpoints of a feedback controller regulating an agent's noisy state trajectory. However, one can apply our method in the context of more general policy search problems.
In order to foster low social cost across the entire agent collective, we compare two different coordination mechanisms. Firstly, we consider a simple fixed-priority scheme \cite{erdmann_movingobj:87}, and secondly, we modify an auction-based coordination protocol \cite{ArmsTR:2011} to work in our continuous setting. In contrast to pre-existing work in auction-style multi-agent planning (e.g. \cite{ArmsTR:2011,koenig_auction_guarantees}) and multi-agent collision avoidance (e.g. \cite{kostic2010collision,ayanian2010decentralized, vandenberg:12}), we avoid {\it a priori} discretizations of space and time. Instead, we recast the coordination problem as one of incremental open-loop policy search. That is, as a succession of continuous optimisation or root-finding problems that can be efficiently and reliably solved by modern optimisation and root-finding techniques (e.g. \cite{Shubert:72,direct:93}).
While our current experiments were conducted with linear stochastic differential equation (SDE) models with state-independent noise (yielding Gaussian processes), our method is also applicable to any situation where mean and covariances can be evaluated. This encompasses non-linear, non-Gaussian cases that may have state-dependent uncertainties (cf. \cite{Gardiner2009}).
This preprint is an extended and improved version of a conference paper that appeared in \textit{Proc. of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014)} \cite{CalliessAAMAS2014}.
\subsection{Related Work}
Multi-agent trajectory planning and task allocation methods have been related to auction mechanisms by identifying locations in state space with atomic goods to be auctioned in a sequence of repeated coordination rounds (e.g. \cite{ArmsTR:2011,koenig_auction_guarantees,koenig_biding_rules_gen}). Unfortunately, even in finite domains the coordination is known to be intractable -- for instance the sequential allocation problem is known to be NP-hard in the number of goods and agents \cite{sandholmcombauc:2002,Koenig2007}. Furthermore, collision avoidance corresponds to non-convex interactions.
This renders standard optimization techniques that rely on convexity of the joint state space inapplicable to the coordination problem. In recent years, several works have investigated the use of mixed-integer programming techniques
for single- and multi-agent model-predictive control with collision avoidance both in deterministic and stochastic settings \cite{ArmsTR:2011,LyonsACC2012}.
To connect the problem to pre-existing mixed-integer optimization tools these works had to limit the models to dynamics governed by linear, time-discrete difference equations with state-independent state noise. The resulting plans were finite sequences of control inputs that could be chosen freely from a convex set. The controls gained from optimization are open-loop -- to obtain closed-loop policies the optimization problems have to be successively re-solved on-line in a receding horizon fashion. However, computational effort may prohibit such an approach in multi-agent systems with rapidly evolving states.
Furthermore, prior time-discretisation comes with a natural trade-off. On the one hand, one would desire a high temporal resolution in order to limit the chance of missing a collision predictably occurring between consecutive time steps. On the other hand, communication restrictions, as well as poor scalability of mixed-integer programming techniques in the dimensionality of the input vectors, impose severe restrictions on this resolution. To address this trade-off,
\cite{Earl2005} proposed to interpolate between the optimized time steps in order to detect collisions occurring between the discrete time-steps. Whenever a collision was detected they proposed to augment the temporal resolution by the time-step of the detected collision thereby growing the state-vectors incrementally as needed. A detected conflict, at time $t$, is then resolved by solving a new mixed-integer linear programme over an augmented state space, now including the state at $t$. This approach can result in a succession of solution attempts of optimization problems of increasing complexity, but can nonetheless prove relatively computationally efficient. Unfortunately, their method is limited to linear, deterministic state-dynamics.
Another thread of works relies on dividing space into polytopes \cite{li2007motion,ayanian2010decentralized}, while still
others \cite{chang2003collision,dimarogonas2006feedback,mastellone2008formation,kostic2010collision} adopt a potential field. In not accommodating uncertainty and stochasticity, these approaches are forced to be overly conservative in order to prevent collisions in real systems.
In contrast to all these works, we will consider a different scenario. Our exposition focuses on the assumption that each agent is regulated by influencing its continuous stochastic dynamics. For instance, we might have a given feedback controller with which one can interact by providing a sequence of setpoints constituting the agent's plan. While this restricts the choice of control action, it also simplifies computation as the feedback law is fixed. The controller can generate a continuous, state-dependent control signal based on a discrete number of control decisions, embodied by the setpoints. Moreover, it renders our method applicable in settings where the agents' plants are controlled by standard off-the-shelf controllers (such as the omnipresent PID-controllers) rather than by more sophisticated customized ones.
Instead of imposing discreteness, we make the often more realistic assumption that agents follow continuous time-state trajectories within a given continuous time interval. Unlike most work \cite{stipanovic2007cooperative,van2008reciprocal,mastellone2008formation,ayanian2010decentralized} in this field, we allow for stochastic dynamics, where each agent cannot be certain about the location of its team-members. This is crucial for many real-world multi-agent systems. The uncertainties are modelled as state-noise which can reflect physical disturbances or merely model inaccuracies.
While our exposition's focus is on stochastic differential equations, our approach is generally applicable in all contexts where the first two moments of the predicted trajectories can be evaluated for all time-steps.
As noted above, this paper is an extended version of work that has been published in the proceedings of AAMAS'14 \cite{CalliessAAMAS2014} and an earlier stage of this work was presented at an ICML \cite{CalliessICML2012} workshop.
\section{Predictive Probabilistic Collision Detection with Criterion Functions} \label{sec:colldetection}
\textbf{Task}. Our aim is to design a collision-detection module that can decide whether a set of
(predictive) stochastic trajectories is collision-free (in the sense defined below). The module we will derive is guaranteed to make this decision correctly, based on knowledge of the first and second order moments of the trajectories alone. In particular, no assumptions are made about the family of stochastic processes the trajectories belong to. As the required collision probabilities will generally
have to be expressed as non-analytic integrals, we will content ourselves with a fast, \textit{conservative} approach. That is, we are willing to tolerate a non-zero false-alarm-rate as long as decisions can be made rapidly and with zero false-negative rate. Of course, for certain distributions and plant shapes, one may derive closed-form solutions for the collision probability that may be less conservative and hence, lead to faster termination and shorter paths. In such cases, our derivations can serve as a template for the construction of criterion functions on the basis of the tighter probabilistic bounds.
\textbf{Problem Formalization}. Formally, a collision between two objects (or agents) $\agi,\agii$ at time $t \in I := [t_0,t_f] \subset \mathbb R$ can be described by the event
$\mathfrak C^{\agi,\agii}(t) $ $ = \{ (\state^\agi(t),\state^\agii(t)) | \norm{\state^\agi(t)-\state^\agii(t)}_2 \leq \frac{\Lambda^\agi + \Lambda^\agii}{2} \}$. Here, $\Lambda^\agi,\Lambda^\agii$ denote the objects' diameters, and $x^\agi,x^\agii : I \to \mathbb R^D$ are two (possibly uncertain) trajectories in a common, $D$-dimensional interaction space.
In a stochastic setting, we desire to bound the collision probability below a threshold $\delta \in (0,1)$ at any given time in $I$. We loosely say that the trajectories are \textit{collision-free} if $\Pr[\mathfrak C^{\agi,\agii}(t)] < \delta, \forall t \in I$.
\textbf{Approach.} For conservative collision detection between two agents' stochastic trajectories $\state^\agi,\state^\agii$, we
construct a \textit{criterion function} $\gamma^{\agi,\agii} : I \to \mathbb R$ (e.g. as per Eq. \ref{eq:critfctgeneric} below). A conservative criterion function has the property $\gamma^{\agi,\agii}(t) >0 \Rightarrow \Pr [\mathfrak C^{\agi,\agii}(t)] < \delta$. That is, a collision between the
If one could evaluate the function $t \mapsto \Pr [\mathfrak C^{\agi,\agii}(t)]$, an ideal criterion function would be
\begin{equation}\label{eq:collcritideal}
\gamma^{\agi,\agii}_{\text{ideal}}(t) := \delta - \Pr [\mathfrak C^{\agi,\agii}(t)].
\end{equation}
It is ideal in the sense that $\gamma^{\agi,\agii}_{\text{ideal}}(t) >0 \Leftrightarrow \Pr [\mathfrak C^{\agi,\agii}(t)] < \delta$.
However, in most cases, evaluating the ideal criterion function in closed form will not be feasible. Therefore, we adopt a conservative approach: we determine a criterion function $\gamma^{\agi,\agii}$ such that, provably, $\gamma^{\agi,\agii}(t) \leq \gamma^{\agi,\agii}_{\text{ideal}}(t), \forall t$, accepting the possibility of false alarms. That is, it is possible that for some times $t$, $\gamma^{\agi,\agii}(t) \leq 0$, in spite of $\gamma^{\agi,\agii}_{\text{ideal}}(t) > 0$.
Utilising the conservative criterion functions for collision-prediction, we assume a collision occurs unless $\min_{t \in I} \gamma^{\agi,\agii}(t) >0,\forall \agii \neq \agi$. If the trajectories' means and standard deviations are Lipschitz functions of time then one can often show that $\gamma^{\agi,\agii}$ is Lipschitz as well. In such cases negative values of $\gamma^{\agi,\agii}$ can be found or ruled out rapidly, as will be discussed in Sec.
\ref{Sec:lipfctnegvalsfind}.
In situations where a Lipschitz constant is unavailable or hard to determine, we can base our detection on the output of a global minimization method such as DIRECT \cite{direct:93}.
\subsection{Finding negative function values of Lipschitz functions}
\label{Sec:lipfctnegvalsfind}
Let $t_0,t_f \in \mathbb R, t_0 \leq t_f, I := [t_0,t_f] \subset \mathbb R$. Assume we are given a
Lipschitz continuous \emph{target function} $f: I \to \mathbb R $ with Lipschitz constant
$L \geq 0$. That is, $\forall S
\subset I \,\exists L_S \leq L \,\forall x,x' \in S: \abs{f(x) -
f(x')} \leq L_S \, \abs{x-x'}$. Let $t_0 < t_1 < t_2 < \cdots < t_N < t_f$,
set $t_{N+1} := t_f$, and define $G_N = (t_0,\ldots,t_{N+1})$ to be the \emph{sample grid} of
size $N+2 \geq 2$ consisting of the inputs at which we choose to evaluate the
target $f$.
\emph{Our goal is to prove or disprove the existence of a non-positive function value of the target $f$}.
\subsubsection{A naive algorithm}
As a first, naive method, Alg. \ref{alg:negdetect_lipschitz} leverages Lipschitz continuity to answer the question of positivity correctly after a finite number of function
evaluations.
\begin{algorithm}
\SetKwData{flag}{flag} \SetKwData{negTime}{criticalTime} \SetKwData{minVal}{minVal}
\SetKwData{grid}{TimeGrid} \SetKwData{lipschitzconst}{L}
\SetKwFunction{OR}{OR}\SetKwFunction{FindCompress}{FindCompress} \SetKwFunction{Resolve}{Resolve}
\SetKwFunction{insert}{Insert} \SetKwFunction{Planner}{Planner} \SetKwFunction{Auction}{Auction}
\SetKwFunction{Receive}{Receive} \SetKwFunction{Avoid}{Avoid}
\SetKwFunction{DetectCollisions}{CollDetect}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Domain boundaries $t_0,t_f$ $\in \mathbb R$, function $\gamma: (t_0,t_f) \to \mathbb R$, Lipschitz constant $\lipschitzconst >0$.}
\Output{Flag \flag indicating presence of a non-positive function value (\flag = 1 indicates existence of a non-positive function value; \flag =0 indicates it has been ruled out that a non-positive function value can exist). Variable \negTime contains the time of a non-positive function value if such exists (\negTime $=t_0-1$,
iff $\gamma((t_0,t_f)) \subset \mathbb R_{+}$).}
\BlankLine
$\flag \leftarrow -1$;
$\negTime \leftarrow t_0-1$;
$\grid \leftarrow \{t_0,t_f\}$;
$r \leftarrow -1$;
\Repeat{$ \flag =1 $ \OR $\flag =0$ }{
$r \leftarrow r +1$;
$\Delta \leftarrow \frac{t_f-t_0}{2^r}$;
$ N \leftarrow (t_f-t_0)/\Delta$;
$\grid \leftarrow \cup_{i=0}^{N} \{t_0+i \Delta\}$;
$\minVal \leftarrow \min_{t \in \grid} \gamma(t);$ \\
\uIf{$\minVal \leq 0$}{
$\flag \leftarrow 1$;
$\negTime \leftarrow \arg\min_{t \in \grid} \gamma(t)$;
}
\uElseIf{\minVal $> \lipschitzconst \, \Delta$ }
{$\flag \leftarrow 0$;}
}
\caption{Naive algorithm deciding whether a Lipschitz continuous function $\gamma$ has a non-positive value on a compact domain.
Note, if minVal $>$ L $\, \Delta$ the function is guaranteed to map into the positive reals exclusively.}
\label{alg:negdetect_lipschitz}
\end{algorithm}
The algorithm evaluates the function values on a finite grid, assuming a uniform Lipschitz number $L$. The grid is iteratively refined until either a non-positive function value is found or the
Lipschitz continuity of the function $\gamma$ allows us to infer that no non-positive function values can exist. The latter is the case whenever $\min_{t \in G_N} \gamma(t) > L \, \Delta$, where
$G_N = (t_0,\ldots, t_{N+1})$ is the grid of function input (time) samples, $\Delta = |t_{i+1} - t_i| \; (i=0,\ldots,N)$ is the uniform grid spacing and $L >0$ is a Lipschitz number
of the function $\gamma: (t_0,t_f) \to \mathbb R$ under investigation.
The claim is established by the following Lemma:
\begin{lem}
Let $\gamma: [t_0,t_f]\subset \mathbb R \to \mathbb R$ be a Lipschitz function with Lipschitz number $L>0$. Furthermore, let $G_N = (t_0,t_1,\ldots, t_{N+1})$ be
an equidistant grid with $\Delta = |t_{i+1} - t_i| \; (i=0,\ldots,N)$.
We have, $\gamma(t) > 0, \forall t \in (t_0,t_f)$
if
$\forall t \in G_N: \gamma(t) > L \, \Delta $.
\begin{proof}
Since $L$ is a Lipschitz constant of $\gamma$ we have $|\gamma(t) - \gamma(t') | \leq L |t-t'|, \forall t,t' \in (t_0,t_f) $. Now, let $t^* \in (t_0,t_f)$ and
$t_i, t_{i+1} \in G_N$ such that $t^* \in [t_i,t_{i+1}]$. Consistent with the premise of the implication we aim to show, we assume
$\gamma(t_i), \gamma(t_{i+1}) > L \Delta $ and, without loss of generality, we assume $\gamma(t_i)\leq \gamma(t_{i+1})$. Let $\delta := |t_i -t^*|$.
Since $t_i \leq t^* \leq t_{i+1}$ we have $0 \leq \Delta - \delta $. Finally, $0 < L \Delta < |\gamma(t_i)|$ implies $ \gamma(t^*) \geq \gamma(t_i) -
|\gamma(t_i) - \gamma(t^*)| \geq \gamma(t_i) -
L |t_i - t^*| > L \Delta - L \delta = L (\Delta - \delta ) \geq 0$.
\end{proof}
\end{lem}
Apart from providing a termination criterion, the lemma establishes that larger Lipschitz numbers will generally cause longer run-times of the algorithm, as finer resolutions $\Delta$ will be required to certify positivity of the function under investigation.
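For concreteness, a minimal Python sketch of Alg. \ref{alg:negdetect_lipschitz} (the target function and its Lipschitz constant are assumed to be supplied by the caller):
\begin{verbatim}
# Sketch: naive grid-refinement check for a non-positive value of a
# Lipschitz function gamma on [t0, tf] with Lipschitz constant L.
def has_nonpositive_value(gamma, t0, tf, L):
    r = 0
    while True:
        delta = (tf - t0) / 2**r
        grid = [t0 + i * delta for i in range(2**r + 1)]
        vals = [gamma(t) for t in grid]
        if min(vals) <= 0:               # witness found
            return True, grid[vals.index(min(vals))]
        if min(vals) > L * delta:        # positivity certified (lemma)
            return False, None
        r += 1                           # refine the grid
\end{verbatim}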
\subsubsection{An improved adaptive algorithm} \label{sec:adaptiveLipshubertstyle}
Next, we will present an improved version of the algorithm provided above.
We can define two functions, \emph{ceiling} $\ensuremath{\mathfrak u}_N$ and \emph{floor} $ \ensuremath{\mathfrak l}_N$, such that (i) they bound the target $\forall t \in I: \ensuremath{\mathfrak l}_N(t) \leq \gamma(t) \leq \ensuremath{\mathfrak u}_N(t)$, and (ii) the bounds get tighter for denser grids. In particular, one can show that $\ensuremath{\mathfrak l}_N , \ensuremath{\mathfrak u}_N \stackrel{N \to \infty} {\longrightarrow} \gamma $ uniformly if $G_N$ converges to a dense subset of $[t_0,t_f]$.
Define $\xi^{\mathfrak l}_N := \arg \min_{x \in I} \ensuremath{\mathfrak l}_{N}(x)$.
It has been shown (see \cite{Shubert:72,direct:93}) that, with $i^* := \arg\min_{i} \bigl\{ \frac{\gamma(t_{i+1}) +\gamma(t_i)}{2} - L \frac{t_{i+1}-t_i}{2} \bigr\}$, we have $\xi^{\mathfrak l}_N = \frac{t_{i^*+1}+t_{i^*}}{2} - \frac{\gamma(t_{i^*+1}) - \gamma(t_{i^*})}{2 L}$ and
$\ensuremath{\mathfrak l}_N(\xi^{\mathfrak l}_N) = \min_i \bigl\{ \frac{\gamma(t_{i+1}) +\gamma(t_i)}{2} - L \frac{t_{i+1}-t_i}{2} \bigr\}$.
It is trivial to refine this to take localised Lipschitz constants into account:
$\ensuremath{\mathfrak l}_N(\xi^{\mathfrak l}_N) = \min_i \bigl\{ \frac{\gamma(t_{i+1}) +\gamma(t_i)}{2} - L_{J_i} \frac{t_{i+1}-t_i}{2} \bigr\}$, where $L_{J_i}$ is a Lipschitz number valid on interval $J_i = (t_i,t_{i+1})$.
This suggests the following algorithm: \textit{We refine the grid $G_N$ to grid $G_{N+1}$ by including $\xi^\ensuremath{\mathfrak l}_N, \gamma(\xi^\ensuremath{\mathfrak l}_N)$ as a new sample. This process is repeated until either of the following stopping conditions is met: (i) a non-positive function value of $\gamma$ is discovered ($\gamma(\xi^{\mathfrak l}_N) \leq 0$), or (ii) $\ensuremath{\mathfrak l}_N(\xi^{\mathfrak l}_N) > 0$ (in which case we are guaranteed that no non-positive function values can exist)}.
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[width = 3.9cm, clip, trim = 3.5cm 9.5cm 4.5cm 10cm]{shubertex1.pdf}
\end{subfigure}%
\begin{subfigure}
\centering
\includegraphics[width = 3.9cm, clip, trim = 3.5cm 9.5cm 4.5cm 10cm]{shubertex2.pdf}
\end{subfigure}
%
\begin{subfigure}
\centering
\includegraphics[width = 3.9cm, clip, trim = 3.5cm 9.5cm 4.5cm 10cm]{shubertex3.pdf}
\end{subfigure}
\caption{Proving the existence of a negative value of function
$x \mapsto \abs{\sin(x)} \cos(x) +\frac{1}{4}$. Left: Initial condition. Centre: First refinement. Right: The second refinement has revealed the existence of a negative value.}
\label{fig:shubert}
\end{figure*}
\begin{algorithm}
\SetKwData{flag}{flag} \SetKwData{negTime}{criticalTime} \SetKwData{minVal}{minVal}
\SetKwData{grid}{$G_N$} \SetKwData{lipschitzconst}{L}
\SetKwFunction{OR}{OR}\SetKwFunction{FindCompress}{FindCompress} \SetKwFunction{Resolve}{Resolve}
\SetKwFunction{insert}{Insert} \SetKwFunction{Planner}{Planner} \SetKwFunction{Auction}{Auction}
\SetKwFunction{Receive}{Receive} \SetKwFunction{Avoid}{Avoid}
\SetKwFunction{DetectCollisions}{CollDetect}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Domain boundaries $t_0,t_f$ $\in \mathbb R$, function $\gamma: (t_0,t_f) \to \mathbb R$, Lipschitz constant $\lipschitzconst >0$.}
\Output{Flag \flag indicating presence of a non-positive function value (\flag = 1 indicates existence of a non-positive function value; \flag =0 indicates it has been ruled out that a non-positive function value can exist). Variable \negTime contains the time of a non-positive function value if such exists (\negTime $=t_0-1$,
iff $\gamma((t_0,t_f)) \subset \mathbb R_{+}$).}
\BlankLine
$\flag \leftarrow -1$;
$\negTime \leftarrow t_0-1$;
$\grid \leftarrow \{t_0,t_f\}$;
$N=0$;
\Repeat{$ \flag =1 $ \OR $\flag = 0$ }{
$i^* \leftarrow \arg\min_{i=0,\ldots,N} \frac{\gamma(t_{i+1}) +\gamma(t_i)}{2} - L \frac{t_{i+1}-t_i}{2}$; \\
$\xi^{\mathfrak l} \leftarrow \frac{t_{i^*+1}+t_{i^*}}{2} - \frac{\gamma(t_{i^*+1}) - \gamma(t_{i^*})}{2 L}$; \\
$\ensuremath{\mathfrak l}_N(\xi^{\mathfrak l}_N) \leftarrow \frac{\gamma(t_{i^*+1}) +\gamma(t_{i^*})}{2} - L \frac{t_{i^*+1}-t_{i^*}}{2}$;
$\minVal \leftarrow \gamma(\xi^{\mathfrak l});$ \\
\uIf{$\minVal \leq 0$}{
$\flag \leftarrow 1$;
$\negTime \leftarrow \xi^\ensuremath{\mathfrak l}$;
}
\uElseIf{$\ensuremath{\mathfrak l}_N(\xi^{\mathfrak l}_N) > 0$}{
$\flag \leftarrow 0$;
}
\Else{
$N \leftarrow N +1$;
$\grid \leftarrow \grid \cup \{\xi^{\mathfrak l} \}$;
}
}
\caption{Adaptive algorithm based on Shubert's method to prove whether a Lipschitz continuous function $\gamma$ has a non-positive value on a compact domain.
Note, if $\ensuremath{\mathfrak l}_N(\xi^{\mathfrak l}_N) >0$ the function is guaranteed to map into the positive reals exclusively.}
\label{alg:negdetect_lipschitz_shubertstyle}
\end{algorithm}
For pseudo-code refer to Alg. \ref{alg:negdetect_lipschitz_shubertstyle}.
An example run is depicted in Fig. \ref{fig:shubert}. Note, without our stopping criteria, our algorithm degenerates to Shubert's minimization method \cite{Shubert:72}. The stopping criteria are important to save computation, especially in the absence of negative function values.
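For concreteness, a minimal Python sketch of Alg. \ref{alg:negdetect_lipschitz_shubertstyle} (again, the target function and its Lipschitz constant are assumed to be supplied by the caller):
\begin{verbatim}
# Sketch: adaptive (Shubert-style) check for a non-positive value of
# a Lipschitz function gamma on [t0, tf] with Lipschitz constant L.
def has_nonpositive_value_adaptive(gamma, t0, tf, L):
    xs, ys = [t0, tf], [gamma(t0), gamma(tf)]
    if min(ys) <= 0:                      # endpoint already a witness
        return True, xs[ys.index(min(ys))]
    while True:
        # floor minimum over each interval (t_i, t_{i+1})
        lows = [(ys[i] + ys[i+1]) / 2 - L * (xs[i+1] - xs[i]) / 2
                for i in range(len(xs) - 1)]
        i = lows.index(min(lows))
        xi = (xs[i] + xs[i+1]) / 2 - (ys[i+1] - ys[i]) / (2 * L)
        if gamma(xi) <= 0:
            return True, xi               # non-positive value found
        if min(lows) > 0:
            return False, None            # positivity certified
        xs.insert(i + 1, xi)              # refine grid at the minimiser
        ys.insert(i + 1, gamma(xi))
\end{verbatim}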
%
%
%
%
%
\subsection{Deriving collision criterion functions}
This subsection is dedicated to the derivation of a (Lipschitz) criterion function. In analogy to the approach of \cite{ArmsTR:2011,Lyons2011}, the idea is to define hyper-cuboids $H^\agi, H^\agii$ sufficiently large to contain a large enough proportion of each agent's probability mass to ensure that no collision occurs (with sufficient confidence) as long as the cuboids do not overlap. We then define the criterion function so as to attain non-positive values whenever the hyper-cuboids are insufficiently separated.
For ease of notation, we omit the time index $t$. For instance, in this subsection, $x^\agi$ now denotes random variable $x^\agi(t)$ rather than the stochastic trajectory.
Next, we derive sufficient conditions for the absence of collisions, i.e. for $\Pr[\ensuremath{\mathfrak C}^{\agi,\agii} ] < \delta$.
To this end, we make an intermediate step:
For each agent $\agiii \in \{\agi,\agii\}$ we define an open hyper-cuboid $H^\agiii$ centred around mean $\mu^\agiii = \expect{\state^\agiii(t)}$. As a $D$-dimensional
hyper-cuboid, $H^\agiii$ is completely determined by its centre point $\mu^\agiii$ and its edge lengths $l^\agiii_1,...,l^\agiii_D$.
Let $O^\agiii$ denote the event that $x^\agiii \notin H^\agiii$ and $P^\agiii := \Pr[O^\agiii]$. We derive a simple disjunctive constraint
on the component distances of the means under which we can guarantee that the collision probability is not greater than the probability
of at least one object being outside its hyper-cuboid. This is the case if the hypercuboids do not overlap. That is, their max-norm distance is at least
$\Lambda^{\agi,\agii} : = \frac{\Lambda^\agi + \Lambda^\agii}{2}$.
Before engaging in a formal discussion we need to establish a preparatory fact:
\begin{lem}\label{lem:star}
Let $\mu^\agiii_j$ denote the $j$th component of object $\agiii$'s mean and $r_j^\agiii = \frac 1 2 l_j^\agiii$.
Furthermore, let $\mathfrak F^{\agi,\agii} := \overline {\ensuremath{\mathfrak C}^{\agi,\agii}} $ be the event that no collision occurs and $\mathfrak B^{\agi,\agii}:= H^\agi \times H^\agii$ the event that
$x^\agi \in H^\agi$ and $x^\agii \in H^\agii$.
Assume the component-wise distance between the hyper-cuboids $H^\agi,H^\agii$ is at least $\Lambda^{\agi,\agii}$, which is expressed by the following disjunctive constraint:
\[\exists j \in \{1,...,D\}: \abs{\mu^\agi_j - \mu_j^\agii } > \Lambda^{\agi,\agii} + r^\agi_j + r^\agii_j.\]
Then, we have: $ \mathfrak B^{\agi,\agii} \subset \mathfrak F^{\agi,\agii}.$
\begin{proof}
Since $\norm{x}_\infty \leq \norm{x}_2, \forall x$ we have
$\mathfrak F_\infty := \{(x^\agi,x^\agii) \vert \norm{x^\agi - x^\agii}_\infty> \Lambda^{\agi,\agii} \}$ $\subset \{(x^\agi,x^\agii) \vert \norm{x^\agi - x^\agii}_2 > \Lambda^{\agi,\agii} \} = \mathfrak F^{\agi,\agii} $.
It remains to be shown that $\mathfrak B^{\agi,\agii} \subset \mathfrak F_\infty$:
Let $(x^\agi, x^\agii) \in \mathfrak B^{\agi,\agii} = H^\agi \times H^\agii$. Thus,
$\forall j \in \{1,...,D\}, \agiii \in \{\agi,\agii\}: \abs{x^\agiii_j - \mu^\agiii_j} \leq r^\agiii_j$.
For contradiction, assume $(x^\agi, x^\agii) \notin \mathfrak F_\infty$. Then, $\abs{x^\agi_i -x^\agii_i} \leq \Lambda^{\agi,\agii}$
for all $i \in \{1,...,D\}$.
Hence, $\abs{\mu^\agi_i - \mu^\agii_i} = \abs{\mu^\agi_i - x^\agi_i + x^\agi_i - x^\agii_i + x^\agii_i- \mu^\agii_i}$
$\leq \abs{\mu^\agi_i - x^\agi_i} + \abs{x^\agi_i - x^\agii_i} + \abs{x^\agii_i- \mu^\agii_i} \leq r^\agi_i + \Lambda^{\agi,\agii} + r^\agii_i, \forall i \in \{1,...,D\}$
which contradicts our disjunctive constraint in the premise of the lemma. q.e.d.
\end{proof}
\end{lem}
\begin{thm}
\label{thm:hypercubprobsconstr}
Let $\mu^\agiii_j$ denote the $j$th component of object $\agiii$'s mean and $r_j^\agiii = \frac 1 2 l_j^\agiii$. Assume $\state^\agi, \state^\agii$ are independent random variables with means $\mu^\agi = \expect{\state^\agi},\mu^\agii = \expect{\state^\agii}$, respectively, and that the max-norm distance between the hypercuboids $H^\agi,H^\agii$ is at least $\Lambda^{\agi,\agii} > 0$ (i.e. the hypercuboids do not overlap), which is expressed by the following disjunctive constraint:
\[\exists j \in \{1,...,D\}: \abs{\mu^\agi_j - \mu_j^\agii } > \Lambda^{\agi,\agii} + r^\agi_j + r^\agii_j.\]
Then, we have \[\Pr[\ensuremath{\mathfrak C}^{\agi,\agii}] \leq P^\agi + P^\agii - P^\agii \, P^\agi \leq P^\agi + P^\agii,\]
where $P^\agiii = \Pr[x^\agiii \notin H^\agiii] \; (\agiii \in \{\agi,\agii \})$. (Without the independence assumption, the union bound still yields the weaker conclusion $\Pr[\ensuremath{\mathfrak C}^{\agi,\agii}] \leq P^\agi + P^\agii$.)
\begin{proof} As in Lem. \ref{lem:star}, let $\mathfrak F^{\agi,\agii} := \overline {\ensuremath{\mathfrak C}^{\agi,\agii}} $ be the event that no collision occurs and
let $\mathfrak B^{\agi,\agii}:= H^\agi \times H^\agii$.
We have
$\Pr[\ensuremath{\mathfrak C}^{\agi,\agii}]$
$\leq 1 - \Pr[\overline {\ensuremath{\mathfrak C}^{\agi,\agii} }] = 1- \Pr[\mathfrak F^{\agi,\agii}]$.
By Lem. \ref{lem:star} we have $\mathfrak B^{\agi,\agii} \subset \mathfrak F^{\agi,\agii}$ and thus,
$ 1- \Pr[\mathfrak F^{\agi,\agii}] \leq 1- \Pr[\mathfrak B^{\agi,\agii}] = \Pr[\overline {\mathfrak B^{\agi,\agii}}]$. Now, $\Pr[\overline {\mathfrak B^{\agi,\agii}}] = \Pr[x^\agi \notin H^\agi \vee x^\agii \notin H^\agii ]$
$= P^\agi + P^\agii - \Pr[x^\agi \notin H^\agi \wedge x^\agii \notin H^\agii] = P^\agi + P^\agii - P^\agi \, P^\agii \leq P^\agi + P^\agii$, where the last equality uses the independence of $\state^\agi$ and $\state^\agii$. q.e.d.
\end{proof}
\end{thm}
One way to define a criterion function is as follows:
\begin{equation}
\label{eq:critfctgeneric}
\gamma^{\agi,\agii} (t; \varrho(t)) := \max_{i=1,\ldots,D} \{\abs{\mu^\agi_i(t) - \mu_i^\agii(t) } -\Lambda^{\agi,\agii} - r^\agi_i(t) - r^\agii_i(t)\}
\end{equation}
where $\varrho = (r_1^\agi,\ldots,r_D^\agi,r_1^\agii,\ldots,r_D^\agii)$ is the parameter vector of radii. (For notational convenience, we will often omit explicit mention of parameter $\varrho$ in the function argument.)
For more than two agents, agent $\agi's$ overall criterion function is
$\Gamma^\agi(t) := \min_{\agii \in \agset\backslash\{\agi\}} \gamma^{\agi,\agii} (t).$
Thm. \ref{thm:hypercubprobsconstr} tells us that the collision probability is bounded from above by the desired threshold $\delta$ if $\gamma^{\agi,\agii} (t) >0$, provided we choose the radii $r_j^\agi,r^\agii_j$ ($j=1,...,D$) such that
$P^\agi, P^\agii \leq \frac{\delta}{2}$.
Let $\agiii \in \{\agi,\agii\}$.
Probability theory provides several distribution-independent bounds relating the radii of a (possibly partly unbounded) hypercuboid to
the probability of not falling into it. That is, these are bounds of the form
\[P^\agiii \leq \beta(r^\agiii_1,...,r^\agiii_D; \Theta)\] where $\beta$ is a continuous function that decreases monotonically with increasing radii and $\Theta$ represents additional information.
In the case of Chebyshev-type bounds, information about the first two moments is folded in, i.e. $\Theta = (\mu^\agiii, C^\agiii)$ where $C^\agiii (t) \in \mathbb R^{D \times D}$ is the variance (-covariance) matrix.
We then solve for radii that fulfil the inequality $\frac{\delta}{2} \stackrel{}{ \geq} \beta(r^\agiii_1,...,r^\agiii_D; \Theta)$ while simultaneously ensuring collision avoidance with the desired probability.
Inspecting Eq. \ref{eq:critfctgeneric}, it becomes clear that, in order to maximally diminish the conservatism of the criterion function, it would be ideal to choose the radii in $\varrho$ such that
$\varrho = \text{argmax}_\varrho \gamma^{\agi,\agii}(t;\varrho) = \text{argmax}_{r_1^\agi,\ldots,r^\agi_D,r_1^\agii,\ldots,r^\agii_D} \max_{i=1,\ldots,D} \{\abs{\mu^\agi_i - \mu_i^\agii } -\Lambda^{\agi,\agii} - r^\agi_i - r^\agii_i\}$, subject to the constraints $\frac{\delta}{2} \geq \beta(r^\agiii_1,\ldots,r^\agiii_D; \Theta) \; (\agiii \in \{\agi,\agii\})$.
Solving this constrained optimisation problem can often be done in closed form.
In the context where $\beta$ is derived from a Chebyshev-type bound, we propose to set as many radii as possible to large values (in order to decrease $\beta$ so as to satisfy the constraints) while setting the radii $r_i^\agi, r_i^\agii$ as small as possible without violating the constraint (where $i$ is some dimension).
That is, we define the radii as follows: Set $r_j^\agiii := \infty, \forall j \neq i$. The remaining unknown variable, $r_i^\agiii$, is then defined as the solution to the equation $\frac{\delta}{2} = \beta(r^\agiii_1,\ldots,r^\agiii_D; \Theta)$.
The resulting criterion function, denoted by $\gamma^{\agi,\agii}_i$, of course depends on the arbitrary choice of dimension $i$.
Therefore, we obtain a less conservative criterion function by repeating this process for each dimension $i$ and then constructing a new criterion function as the point-wise maximum: $\gamma^{\agi,\agii} (t):= \max_i \gamma^{\agi,\agii}_i(t)$.
A concrete example of this procedure is provided below.
\subsubsection{Example constructions of distribution-independent criterion functions}
We can use the above derivation as a template for generating criterion functions.
Consider the following concrete example. Combining the union bound and the standard (one-dimensional) Chebyshev bound yields
$P^\agiii =\Pr [\state^\agiii \notin H^\agiii ] \leq \sum_{j=1}^D \frac{C_{jj}^\agiii}{ r_j^\agiii r_j^\agiii } =: \beta(r_1^\agiii,\ldots,r_D^\agiii ; C^\agiii)$.
Setting every radius, except $r_i^\agiii$, to infinitely large values and $\beta$ equal to $\frac{\delta}{2}$ yields
$\frac{\delta}{2} = \frac{C_{ii}^\agiii}{ r_i^\agiii r_i^\agiii}$, i.e. $r_i^\agiii = \sqrt{\frac{2 C_{ii}^\agiii} {\delta}}$.
(Note, this is a correction of the radius provided in the conference version of this paper.)
Finally, inserting these radii (for $\agiii = \agi,\agii$) into Eq. \ref{eq:critfctgeneric} yields our first collision criterion function:
$\gamma^{\agi,\agii} (t) := \abs{\mu^\agi_i(t) - \mu_i^\agii(t) } -\Lambda^{\agi,\agii} - \sqrt{\frac{2 C_{ii}^\agi(t)} {\delta}}-\sqrt{\frac{2 C_{ii}^\agii(t)} {\delta}}.$
Of course, this argument can be made for any choice of dimension $i$. Hence, a less conservative, yet valid, choice is
\begin{equation}
\label{eq:critfctChebyshev_1dim_infiniteradii_better}
\gamma^{\agi,\agii} (t) := \max_{i=1,...,D} \biggl\{ \abs{\mu^\agi_i(t) - \mu_i^\agii(t) } -\Lambda^{\agi,\agii} - \sqrt{\frac{2 C_{ii}^\agi(t)} {\delta}}-\sqrt{\frac{2 C_{ii}^\agii(t)} {\delta}} \biggr\}.
\end{equation}
Notice, this function has the desirable property of being Lipschitz continuous, provided the mean $\mu_i^\agiii : I \to \mathbb R$ and standard deviation functions $\sigma_{ii}^\agiii= \sqrt{C^\agiii_{ii}} : I \to \mathbb R_+$ are. In particular, it is easy to show $L(\gamma^{\agi,\agii}) \leq \max_{i=1,...,D} L( \mu_i^\agi ) + L(\mu_i^\agii) + \sqrt{\frac{2}{\delta}} \bigl(L(\sigma_{ii}^\agi) + L(\sigma_{ii}^\agii) \bigr)$
where, as before, $L(f)$ denotes a Lipschitz constant of function $f$.
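A minimal Python/NumPy sketch of evaluating Eq. \ref{eq:critfctChebyshev_1dim_infiniteradii_better} at a fixed time $t$ (all concrete numbers below are arbitrary):
\begin{verbatim}
# Sketch: distribution-independent criterion for two agents at a
# fixed time t; mu_a, mu_u are the means, Ca, Cu the covariance
# matrices and Lam the joint plant diameter Lambda^{a,u}.
import numpy as np

def gamma_cheby(mu_a, mu_u, Ca, Cu, Lam, delta):
    r_a = np.sqrt(2 * np.diag(Ca) / delta)   # Chebyshev radii
    r_u = np.sqrt(2 * np.diag(Cu) / delta)
    return np.max(np.abs(mu_a - mu_u) - Lam - r_a - r_u)

# positive value certifies collision probability below delta at t
print(gamma_cheby(np.zeros(2), np.array([3.0, 0.0]),
                  0.01 * np.eye(2), 0.01 * np.eye(2), 1.0, 0.05))
\end{verbatim}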
%
For the special case of two dimensions, we can derive a less conservative alternative criterion function based on a tighter two-dimensional Chebyshev-type bound \cite{whittle_chebyshev}:
\begin{thm}[Alternative collision criterion function] \label{def:collcritfct2d}
Let spatial dimensionality be $D = 2$. Choosing
$r^{\agiii}_i(t) := \sqrt{\frac{1}{2\delta^\agi}}\, \sqrt{ C_{ii}^\agiii(t) +\frac{ \sqrt{C_{ii}^\agiii(t) C_{jj}^\agiii(t)
(C_{ii}^\agiii(t) C_{jj}^\agiii(t) - (C_{ij}^\agiii(t))^2) }}{C_{jj}^\agiii(t)}}$
($\agiii \in \{\agi,\agii\}, i \in \{1,2\}, j \in \{1,2\} - \{i\}$) in Eq. \ref{eq:critfctgeneric} yields a valid distribution-independent criterion function. That is, $\gamma^{\agi,\agii}(t) >0 \Rightarrow \Pr [\mathfrak C^{\agi,\agii}(t)] < \delta^\agi$.
\end{thm}
A proof sketch and a Lipschitz constant (for non-zero uncertainty) are provided in the appendix. Note, the Lipschitz constant we have derived therein becomes infinite in the limit of vanishing variance. In that case, the presence of negative criterion values can be tested based on the sign of the minimum of the criterion function, which can be found by employing a global optimiser. Future work will investigate to what extent H\"older continuity, instead of Lipschitz continuity, can be leveraged to yield an algorithm similar to the one provided in Sec. \ref{sec:adaptiveLipshubertstyle}.
\subsubsection{Multi-agent case.} Let $\agi \in \agset$ and let $\agset' \subset \agset$ with $\agi \notin \agset'$ be a subset of agents. We define the event that $\agi$ collides with at least one of the agents in \agset' at time $t$ as $\mathfrak C^{\agi,\agset'}(t) := \{ (\state^\agi(t),\state^\agii(t)) | \exists \agii \in \agset': \norm{\state^\agi(t)-\state^\agii(t)}_2 \leq \Lambda^{\agi,\agii} \} = \bigcup_{\agii \in \agset'} \mathfrak C^{\agi,\agii}$.
By union bound, $\Pr[ \mathfrak C^{\agi,\agset'}(t)] \leq \sum_{\agii \in \agset'} \Pr[ \mathfrak C^{\agi,\agii}(t)] $.
\begin{thm} [Multi-Agent Criterion] Let $\gamma^{\agi,\agii}$ be valid criterion functions defined w.r.t. collision bound $\delta^\agi$.
We define the \emph{multi-agent collision criterion function} $\Gamma^{\agi,\agset'}(t) := \min_{\agii \in \agset'} \gamma^{\agi,\agii}(t)$. If $\Gamma^{\agi,\agset'}(t) > 0$ then the collision probability with \agset' remains below $\delta^\agi |\agset'|$. That is, $\Pr[ \mathfrak C^{\agi,\agset'}(t)] < \delta^\agi |\agset'|.$
\label{thm:mascritfct}
\end{thm}
\begin{proof}
Let $\agi \in \agset$ and let $\agset' \subset \agset$ with $\agi \notin \agset'$ be a subset of agents. Recall that $\mathfrak C^{\agi,\agset'}(t) = \bigcup_{\agii \in \agset'} \mathfrak C^{\agi,\agii}(t)$.
We have established that $\gamma^{\agi,\agii}(t) >0$ implies $\Pr [\mathfrak C^{\agi,\agii}(t)] < \delta^\agi$. Now, let $\Gamma^{\agi, \agset'}(t) > 0$. Hence, $\forall \agii \in \agset': \gamma^{\agi,\agii}(t) >0$. Thus, $\forall \agii \in \agset': \Pr[ \mathfrak C^{\agi,\agii}(t)] < \delta^\agi$. Therefore, $\sum_{\agii \in \agset'} \Pr[ \mathfrak C^{\agi,\agii}(t)] < \abs{\agset'} \delta^\agi$. By the union bound, $\Pr[ \mathfrak C^{\agi,\agset'}(t)] \leq \sum_{\agii \in \agset'} \Pr[ \mathfrak C^{\agi,\agii}(t)]$. Consequently, we have $\Pr[ \mathfrak C^{\agi,\agset'}(t)] < \abs{\agset'} \delta^\agi$. q.e.d.
\end{proof}
Moreover, $\Gamma^{\agi,\agset'}$ is Lipschitz if the constituent functions $\gamma^{\agi,\agii}$ are (see Appendix \ref{sec:derlipno}).
%
%
%
%
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[width = 3.9cm, clip, trim = 3cm 9.5cm 4cm 10cm]
{plotcritfct_chebyshev_tinyvar_diam1.pdf}
\end{subfigure}%
\begin{subfigure}
\centering
\includegraphics[width = 3.9cm, clip, trim = 3cm 9.5cm 4cm 10cm]
{plotcritfct_chebyshev_varpt1_diam1.pdf}
\end{subfigure}
%
\begin{subfigure}
\centering
\includegraphics[width = 3.9cm, clip, trim = 3cm 9.5cm 4cm 10cm]
{plotcritfctwhittle_varpt1_diam1.pdf}
\end{subfigure}
\caption{Criterion function values (as per Eq. \ref{eq:critfctChebyshev_1dim_infiniteradii_better}) as a function of $\norm{\expect{\state^\agii} - \expect{\state^\agi}}_\infty$ and with $\delta =0.05$, $\Lambda^{\agi,\agii} =1$. Left: variances $C^\agi = C^\agii = \text{diag}(.00001,.00001)$. Centre: variances $C^\agi = C^\agii = \text{diag}(.1,.1)$. Right: variances $C^\agi = C^\agii = \text{diag}(.1,.1)$ and with improved criterion function (as per Thm. \ref{def:collcritfct2d}).
}
\label{fig:whittlecritfct}
\end{figure*}
Our distribution-independent collision criterion functions have the virtue that they work for all distributions -- not only the omnipresent Gaussian. Unfortunately, distribution-independence is gained at the price of conservativeness (cf. Fig. \ref{fig:whittlecritfct}). In our experiments in Sec. \ref{sec:sims}, the collision criterion function as per Thm. \ref{def:collcritfct2d} is utilized as an integral component of our collision avoidance mechanisms. The results suggest that the conservativeness of our detection module does not entail false-alarm rates high enough for the distribution-independent approach to be considered impractical. That said, whenever distributional knowledge is available, it can be converted into a less conservative criterion function. One could then use our derivations as a template to generate refined criterion functions using Eq. \ref{eq:critfctgeneric} with adjusted radii $r_i$, $r_j$, reflecting the distribution at hand.
\section{Collision Avoidance}
In this section we outline the core ideas of our proposed approach to multi-agent collision avoidance.
After specifying the agent's dynamics and formalizing the notion of a single-agent plan, we define the multi-agent planning task. Then we describe how conflicts, picked up by our collision prediction method, can be resolved. In Sec. \ref{sec:coord} we describe the two coordination approaches we consider for generating conflict-free plans.
\textbf{I) Model (example).} We assume the system contains a set $\agset$ of agents indexed by $ \agi \in \{1,...,| \agset\ | \}$. Each agent \agi's associated plant has a probabilistic state trajectory following stochastic controlled $D$-dimensional state dynamics (we consider the case $D=2$) in the continuous interval of (future) time $I=(t_0,t_f]$. We desire to ask agents to adjust their policies to avoid collisions. Each policy gives rise to a stochastic belief over the trajectory resulting from executing the policy.
For our method to work, all we require is that the trajectory's mean function $m:I \to \mathbb R^D$ and covariance matrix function $\Sigma : I \to \mathbb R^{D \times D}$ are evaluable for all times $t \in I$.
A prominent class for which closed-form moments can be easily derived are linear stochastic differential equations (\textit{SDE}s). For instance, we consider the SDE
\begin{equation}
\label{eq:linSDEcontrolledplant1}
d\state^\agi(t) = K \bigl(\xi^\agi(t) - \state^\agi(t)\bigr) dt + B \, dW
\end{equation} where $K, B \in \mathbb R^{D \times D}$ are matrices, $\state^\agi: I \to \mathbb R^D$ is the state trajectory, and $W$ is a vector-valued Wiener process. Here, $u(\state^\agi; \xi^\agi) :=K( \xi^\agi - \state^\agi )$ could be interpreted as the control policy of a linear feedback-controller parametrised by $\xi^\agi$. It regulates the state to track a desired trajectory $\xi^\agi(t)= \zeta_0^\agi \chi_{\{0\}} (t)+\sum_{i=1}^{H^\agi} \zeta_i^\agi \chi_{\tau_i^\agi} (t)$ where $\chi_{\tau_i}: \mathbb R \to \{0,1\}$ denotes the indicator function of the half-open interval $\tau_{i}^\agi = (t_{i-1}^\agi, t_{i}^\agi] \subset [0,T^\agi]$ and each $\zeta^\agi_i \in \mathbb R^D$
is a \textit{setpoint}. If $K$ is positive definite, the agent's state trajectory is determined by the setpoint sequence $p^\agi = (t^\agi_i,\zeta^\agi_i)_{i=0}^{H^\agi}$ (aside from the random disturbances), which we will refer to as the agent's \emph{plan}.
For example, plan $p^\agi:= \bigl( (t_0,\state_0^\agi ), (t_f,\state_f^\agi ) \bigr)$ could be used to regulate agent $\agi$'s \textit{start state} $\state_0^\agi$ to a given \emph{goal state} $\state_f^\agi$ between times $t_0$ and $t_f$. For simplicity, we assume the agents are always initialized with plans of this form before coordination commences.
One may interpret a setpoint as some way to alter the stochastic trajectory. Below, we will determine setpoints that modify a stochastic trajectory to reduce collision probability while maintaining low expected cost. From the vantage point of policy search, $\xi^\agi$ is agent $\agi$'s policy parameter that has to be adjusted to avoid collisions.
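For intuition, a minimal Euler--Maruyama simulation sketch of Eq. \ref{eq:linSDEcontrolledplant1} (Python/NumPy; the gains, noise level and plan are arbitrary and, for simplicity, each setpoint is held from its timestamp onward):
\begin{verbatim}
# Sketch: Euler-Maruyama simulation of dx = K (xi(t) - x) dt + B dW
# with a piecewise-constant plan of (time, setpoint) pairs.
import numpy as np

K, B = 2.0 * np.eye(2), 0.1 * np.eye(2)
plan = [(0.0, np.zeros(2)), (1.0, np.array([1.0, 1.0]))]

def setpoint(t):  # setpoint active at time t
    return next(z for (ti, z) in reversed(plan) if t >= ti)

dt, T = 1e-3, 2.0
rng = np.random.default_rng(2)
x = np.zeros(2)
for k in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal(2)
    x = x + K @ (setpoint(k * dt) - x) * dt + B @ dW
print(x)  # regulated towards the last setpoint (1, 1)
\end{verbatim}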
%
\textbf{II) Task.} Each agent $\agi$ desires to find a sequence of setpoints $(p^\agi)$ such that (i) it moves from its start state $x_0^\agi$ to its goal state $x_f^\agi$ along a low-cost trajectory and (ii) such that along the trajectory its plant (with diameter $\Lambda^\agi$) does not collide with any other agent's plant in state space with at least a given probability $1-\delta \in (0,1)$.
\textbf{III) Collision resolution}. An agent seeks to avoid collisions by adding new setpoints to its plan until the collision probability of
the resulting state trajectory drops below threshold $\delta$.
For choosing these new setpoints we consider two methods \textbf{WAIT} and \textbf{FREE}.
In the first method the agents insert a time-setpoint pair $(t,x_0^\agi)$ into the previous plan $p^\agi$.
Since this aims to cause the agent to wait at its start location $x^\agi_0$ we will call the method {\scshape WAIT}. It is possible that
multiple such insertions are necessary until collisions are avoided. Of course, if a higher-priority agent decides to traverse through $x_0^\agi$,
this method is too rigid to resolve a conflict.
In the second method the agent optimizes for the time and location of the new setpoint. Let $p^\agi_{\uparrow(t,s)} $ be the
plan updated by insertion of time-setpoint pair $(t,s) \in I \times \mathbb R^D$. We propose to choose the candidate setpoint $(t,s)$ that
minimizes an objective given by the weighted sum of the expected cost entailed by executing the updated plan $p^\agi_{\uparrow(t,s)}$ and a hinge-loss collision penalty
$c_{coll}^\agi(p^\agi_{\uparrow(t,s)}) := \lambda \,\max\{0, -\min_t \Gamma^\agi(t)\} $. Here, $\Gamma^\agi$
is computed under the assumption that $p^\agi_{\uparrow(t,s)} $ is executed, and $\lambda \gg 0$ determines the extent to which collisions are penalized. Since the new setpoint can be chosen freely in time and
state-space, we refer to the method as {\scshape FREE}.
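A minimal sketch of the resulting objective (Python; \texttt{exp\_cost} and \texttt{crit\_min} are hypothetical placeholders for the expected plan cost and for $\min_t \Gamma^\agi(t)$ under the candidate plan):
\begin{verbatim}
# Sketch: FREE objective for a candidate setpoint (t, s); exp_cost
# and crit_min are assumed to be supplied by the planning layer.
def free_objective(t, s, plan, exp_cost, crit_min, lam=1e3):
    new_plan = sorted(plan + [(t, s)], key=lambda p: p[0])
    return exp_cost(new_plan) + lam * max(0.0, -crit_min(new_plan))
\end{verbatim}
The candidate $(t,s)$ minimizing this objective can then be sought with a global optimizer, e.g. one of the Lipschitz-based methods discussed above.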
\subsection{Coordination} \label{sec:coord}
We will now consider how to integrate our collision detection and avoidance methods into a coordination framework that determines who needs to avoid whom and at what stage of the coordination process. Such decisions are known to significantly impact the \textit{social cost} (i.e. the sum of all agents' individual costs) of the agent collective.\\
\textbf{Fixed-priorities (FP).}
As a baseline method for coordination we consider a basic fixed-priority method (e.g. \cite{erdmann_movingobj:87,Bennewitz01Exploiting}).
Here, each agent has a unique ranking (or priority) according to its index $\agi$ (i.e. agent 1 has highest priority, agent $|\agset|$ lowest). When all higher-ranking agents are done planning, agent $\agi$ is informed of their planned trajectories, which it has to avoid with a probability greater than $ 1-\delta$. This can be done by repeatedly invoking the collision detection and resolution methods described above until no further collisions with higher-ranking agents are found.
\textbf{Lazy Auction Protocol (AUC).}
While the FP method is simple and fast, the rigidity of the fixed ranking can lead to sub-optimal social cost and coordination success. Furthermore, its sequential nature does not take advantage of the parallelization a distributed method could offer. To alleviate this, we propose to revise the ranking flexibly on a case-by-case basis. In particular, the agents are allowed to compete for the right to gain passage (e.g. across a region where a collision was detected) by submitting bids in the course of an auction. The structure of the approach is outlined in Alg. \ref{alg:lazyauctions}.
\IncMargin{-1em}
\begin{algorithm}
\begin{small}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwData{Constraints}{Constraints}
\SetKwData{Collisions}{Collisions} \SetKwData{flag}{flag} \SetKwData{C}{$\mathcal C$} \SetKwData{winner}{winner} \SetKwData{tcoll}{$t_{\text{coll}}$}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwFunction{Resolve}{Resolve}
\SetKwFunction{Broadcast}{Broadcast} \SetKwFunction{Planner}{Planner} \SetKwFunction{Auction}{Auction}
\SetKwFunction{Receive}{Receive} \SetKwFunction{Avoid}{Avoid}
\SetKwFunction{DetectCollisions}{CollDetect}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Agents $\agi \in \agset$, cost functions $c^\agi$, dynamics, initial start and goal states, initial plans $p^1,...,p^{|\agset|}$ .}
\Output{collision-free plans $p^1,...,p^{|\agset|}$.}
\BlankLine
\Repeat{$\forall \agi \in \agset: \flag^\agi =0 $}
{
\For{$\agi \in \agset$}{[
\flag$^\agi,\C^\agi,\tcoll ]\leftarrow$ \DetectCollisions$^\agi (\agi, \agset-\{\agi\})$\\
\If{$\flag^\agi =1$}
{
$\winner \leftarrow \Auction(\C^\agi \cup \{\agi\},\tcoll )$\\
\ForEach{$\agii \in (\C^\agi \cup \{\agi\})-\{\winner\} $}{
$p^\agii \leftarrow \Avoid^\agii((\C^\agi \cup \{\agi\})-\{\agii\},\tcoll )$\\
$\Broadcast^\agii$ ($p^\agii$)
}
}
}
}
\caption{Lazy auction coordination method (AUC) (written in a sequentialized form). Collisions are resolved by choosing new setpoints to enforce collision avoidance.
$\mathcal C^\agi$: set of agents detected to be in conflict with agent $\agi$. flag$^\agi$: collision detection flag ($=0$ iff no collision is detected). $t_{\text{coll}}$: earliest time at which a collision was detected. Avoid: collision resolution method updating the plan by a single new setpoint according to WAIT or FREE.}
\label{alg:lazyauctions}
\end{small}
\end{algorithm}
Assume an agent $\agi$ detects a collision at a particular time step $t_{\text{coll}}$ and invites the set of agents $\mathcal C^\agi = \{ \agii | \gamma^{\agi,\agii} (t_{\text{coll}}) \leq 0\}$ to join an auction to decide who needs to avoid whom. In particular, the auction determines a winner who is not required to alter his plan. The losing agents need to insert a new setpoint into their respective plans designed to avoid all other agents in $\mathcal C^\agi$ while keeping the plan cost function low.
The idea is to design the auction rules as a heuristic method for minimizing the social cost of the ensuing solution. To this end, we define the bids such that their magnitude is proportional to a heuristic estimate of the expected regret for losing and not gaining passage. That is, agent $\agi$ submits a bid $b^\agi = \mathfrak l^\agi - \mathfrak s^\agi$. Magnitude $\mathfrak l^\agi$ is defined as $\agi$'s anticipated cost $c_{plan}^\agi(p^\agi_{\uparrow(t,s)})$ for the event that the agent does not secure ``the right of passage'' and has to create a new setpoint $(t,s)$ (according to (III)) tailored to avoid all other agents engaged in the current auction. On the other hand, $\mathfrak s^\agi := c_{plan}^\agi(p^\agi)$ is the cost of the unchanged plan $p^\agi$.
If there is a tie among multiple agents, the agent with the lowest index among the highest bidders wins.
Acknowledging that $\mathfrak s^{winner} + \sum_{\agi\neq winner} \mathfrak l^\agi $
is an estimated social cost (based on current beliefs of trajectories)
after the auction, we see that the winner determination
rule greedily attempts to minimize social cost: for all $\agii$, $b^{winner}
\geq b^{\agii} \, \Leftrightarrow \, \mathfrak s^\agii + \sum_{\agi\neq
\agii} \mathfrak l^{ \agi} \geq \mathfrak s^{winner} + \sum_{\agi\neq winner} \mathfrak l^{\agi}$.
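A minimal sketch of the bidding and winner-determination rule; \texttt{cost\_if\_avoiding} and \texttt{cost\_unchanged} are hypothetical callables returning $\mathfrak l^\agi$ and $\mathfrak s^\agi$, respectively.
\begin{verbatim}
def run_auction(participants, cost_if_avoiding, cost_unchanged):
    # bid b^i = l^i - s^i; the highest bidder wins, ties broken by
    # the lowest index among the highest bidders
    bids = {i: cost_if_avoiding(i) - cost_unchanged(i)
            for i in participants}
    best = max(bids.values())
    winner = min(i for i, b in bids.items() if b == best)
    losers = [i for i in participants if i != winner]
    return winner, losers

# usage: agent 2 has the largest regret for losing, so it wins passage
winner, losers = run_auction(
    [1, 2, 3],
    cost_if_avoiding={1: 5.0, 2: 9.0, 3: 4.0}.get,
    cost_unchanged={1: 2.0, 2: 3.0, 3: 2.0}.get)
print(winner, losers)  # 2 [1, 3]
\end{verbatim}
Selecting the maximal bid is exactly what makes the estimated post-auction social cost $\mathfrak s^{winner} + \sum_{\agi \neq winner}\mathfrak l^\agi$ minimal among the participants.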
\begin{figure*}[t!]
\vspace{-1em}
\begin{tabular}{lll}
\includegraphics[width = 3.5cm, clip, trim = 3.5cm 9.5cm 4.5cm 9cm]{init_sim.pdf}
& \includegraphics[width = 3.5cm, clip, trim = 3.5cm 9.5cm 4.5cm 9cm]{FPWAITsim.pdf}
&\includegraphics[width = 3.5cm, clip, trim = 3.5cm 9.5cm 6cm 9cm]{AUCWAITsim.pdf} \\
\end{tabular}
\caption{EXP1. Draws from uncoordinated agents' plans (left), after coordination and collision resolution with methods {\scshape FP-WAIT} (centre) and {\scshape AUC-WAIT} (right).}
\label{Tab:EXP1corridor}
\end{figure*}
\begin{table*}[bht]
\centering
\begin{small}
\begin{tabular}{@{}lcccccc@{}}
\toprule
& \multicolumn{3}{c}{Experiment $1$} &
\multicolumn{3}{c}{Experiment $2$} \\
\cmidrule(r){2-4} \cmidrule(l){5-7}
$Quantity$ & NONE & AUC-WAIT& FP-WAIT &NONE& AUC-FREE &FP-FREE \\
\midrule
A & 78 & 0 & 0 & 51 & 0 & 0 \\
{B} & 13.15 & 13.57 & 12.57 & 14.94 & 16.22 & 18.13\\
{C} & 0.05 & 0.04 & 25.8 & 0.05 & 0.05 & 0.05 \\
{D}& 0 & 6 & 3 & 0 & 4 & 4\\
\bottomrule
\end{tabular}
\end{small}
\caption{Quantities estimated based on 100 draws from SDEs simulating executions of the different plans in EXP1 and EXP2.
\textbf{A}: estimated collision probability $[\%]$; \textbf{B}: averaged path length away from goal; \textbf{C}: averaged sqr. dist. of final state to goal; \textbf{D}: number of collision resolution rounds.
Notice that our collision avoidance methods succeed in preventing collisions. In EXP1 the FP-WAIT method failed to reach its first goal in time, which is reflected in the \emph{sqr. distance to goal} measure. Note that the discrepancies in avg. path length are relatively low due to convexity effects and the contribution of state noise to the path lengths.}
\label{Tab:data}
\end{table*}
\begin{figure*}[thb!]
\vspace{-1em}
\begin{tabular}{lll}
\includegraphics[width = 3.9cm, clip, trim = 3.5cm 9cm 4.5cm 9cm]{init_simexp2.pdf}
& \includegraphics[width = 3.9cm, clip, trim = 3.5cm 9cm 4.5cm 9cm]{fp_sim.pdf}
&\includegraphics[width = 3.9cm, clip, trim = 3.5cm 9cm 4.5cm 9cm]{auc_sim.pdf} \\
\end{tabular}
\caption{EXP2. Draws from uncoordinated agents' plans (left), after coordination and collision resolution with methods {\scshape FP-FREE} (centre) and {\scshape AUC-FREE} (right). }
\label{Tab:EXP2}
\end{figure*}
\section{Simulations}\label{sec:sims} As a first test, we simulated three simple multi-agent scenarios, \textit{EXP1}, \textit{EXP2} and \textit{EXP3}.
Each agent's dynamics were an instantiation of an SDE of the form of Eq. \ref{eq:linSDEcontrolledplant1}.
We set $\delta$ to achieve collision avoidance with certainty greater than $95 \%$. Collision prediction was based on the improved criterion function as per Thm. \ref{def:collcritfct2d}. During collision resolution with the FREE method
each agent $\agi$ assessed a candidate
plan $p^\agi$ according to cost function
$c_{plan}^\agi(p^\agi) = w_1 \, c_{traj}^\agi(p^\agi) + w_2 \, c_{miss}^\agi(p^\agi) + w_3 \, c_{coll}^\agi(p^\agi) $.
Here $c_{traj}^\agi$ is a heuristic to penalize expected control energy or path length; in the second summand,
$c_{miss}^\agi (p^\agi)= \norm{\state^\agi(t_f) - \state^\agi_f}^2$ penalizes expected deviation from the goal state; the third term
$c_{coll}^\agi(p^\agi)$ penalizes collisions (cf. III ). The weights are design parameters which we set to
$w_1 = 10, w_2 = 10^3$ and $w_3 = 10^6$, emphasizing avoidance of mission failure and collisions. Note that, if our method were to be deployed in
a receding horizon fashion, the parameters could also be adapted online using standard learning techniques such as no-regret algorithms
\cite{littlestone89weighted,Srinivas2010}.
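As a sketch (with hypothetical component callables), the composite cost reads:
\begin{verbatim}
import numpy as np

def plan_cost(plan, c_traj, c_miss, c_coll,
              w1=10.0, w2=1e3, w3=1e6):
    # c_plan = w1*c_traj + w2*c_miss + w3*c_coll, weights as in the text
    return w1 * c_traj(plan) + w2 * c_miss(plan) + w3 * c_coll(plan)

# usage with toy components; the miss term is ||x(t_f) - x_f||^2
x_f = np.array([5.0, 5.0])
c_miss = lambda p: float(np.sum((p["x_tf"] - x_f) ** 2))
print(plan_cost({"x_tf": np.array([5.1, 5.0])},
                c_traj=lambda p: 1.0, c_miss=c_miss,
                c_coll=lambda p: 0.0))  # 10 + 1e3*0.01 = 20.0
\end{verbatim}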
\begin{figure*}[thb!]
\centering
\vspace{-1em}
\begin{tabular}{cc}
\includegraphics[scale =.4, clip, trim = 3cm 9cm 3cm 9cm]{circ5Aplan2dAUCFREEinitial2}
& \includegraphics[scale =.4, clip, trim = 3cm 9cm 3cm 9cm]{circ5AsimAUCFREEcoord}
\end{tabular}
\caption{Ex. of EXP3 with 5 agents. Draws from uncoordinated agents' plans (left), after coordination and collision resolution with methods
{\scshape AUC-FREE} (right). } \label{Tab:EXP35agents}
\end{figure*}
\begin{figure*}[thb!]
\vspace{-1em}
\begin{tabular}{ccc}
\includegraphics[width = 4.6cm, clip, trim = 6cm 11.5cm 8cm 13cm]{441.pdf}
&\includegraphics[width = 4.6cm, clip, trim = 6cm 11.5cm 9cm 14cm]{443.pdf}
\end{tabular}
\caption{Recorded results for EXP3 with 1 to 6 agents. Note, all collisions were successfully avoided.} \label{Tab:exp3barplots}
\end{figure*}
\textbf{EXP1.} Collision resolution was done with the WAIT method to update plans. Draws from the SDEs with the initial plans of the agents
are depicted in Fig. \ref{Tab:EXP1corridor} (left). The curves represent 20 noisy trajectories of agents 1 (red) and 2 (blue).
Each curve is a draw from the stochastic differential dynamics obtained by simulating the execution of the given initial plan.
The trajectories were simulated with the Euler-Maruyama method for a time interval of $I = [0 s ,2 s]$.
The spread of the families of curves is due to the random disturbances each agent's controller had to compensate for during runtime.
Agent 1 desired to control the state from start state $\state_0^1 =(5,10)$ to goal $\state_f^1 =(5,5)$.
Agent 2 desired to move from start state $\state_0^2 =(5,0)$ via intermediate goal $\state_{f_1}^2=(5,7)$ (at 1s) to final goal
state $\state_{f_2}^2=(0,7)$. While the agents meet their goals under the initial plans, their execution would imply a high probability of
colliding around state $(5,6)$ (cf. Fig. \ref{Tab:EXP1corridor} (left), Tab. \ref{Tab:data}). Coordination with fixed priorities
(1 (red) $>$ 2 (blue)) yields conflict-free plans (Fig. \ref{Tab:EXP1corridor} (centre)). However, agent 2 is forced to wait too long at its start
location to be able to reach intermediate waypoint $\state_{f_1}^2$ in time and therefore decides to move directly to its second goal.
This results in a high social cost due to missing one of the designated goals (Tab. \ref{Tab:data}). By contrast, the auction method is flexible enough
to reverse the ranking at the detected collision point, causing agent 1 to wait instead of agent 2 (Fig. \ref{Tab:EXP1corridor} (right)).
Thereby, agent 2 is able to reach both of its goal states in time. This success is reflected by low social cost (see Tab. \ref{Tab:data}).
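For completeness, an Euler--Maruyama rollout of a generic controlled linear SDE $\mathrm d x = (A x + B u)\,\mathrm d t + \sigma\,\mathrm d W$ can be sketched as follows; the matrices, feedback gain and step size are illustrative assumptions rather than the parameters used in EXP1.
\begin{verbatim}
import numpy as np

def euler_maruyama(x0, A, B, u, sigma, t_end=2.0, dt=0.01, seed=0):
    # simulate dx = (A x + B u(t, x)) dt + sigma dW on [0, t_end]
    rng = np.random.default_rng(seed)
    x, traj = np.array(x0, dtype=float), []
    for k in range(int(t_end / dt)):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + (A @ x + B @ u(k * dt, x)) * dt + sigma @ dw
        traj.append(x.copy())
    return np.array(traj)

# usage: noisy closed-loop motion towards goal (5, 5)
A, B, sigma = np.zeros((2, 2)), np.eye(2), 0.1 * np.eye(2)
goal = np.array([5.0, 5.0])
traj = euler_maruyama([5.0, 10.0], A, B,
                      lambda t, x: 3.0 * (goal - x), sigma)
print(traj[-1])  # close to the goal, up to residual noise
\end{verbatim}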
\begin{figure}[ht]
\centering
\includegraphics[scale=.4, clip, trim = 3cm 8.7cm 3cm 9cm]{critfcts_jammedcorridor}
\caption{EXP1. Criterion functions for collision detection of agent 2 before and after coordination. The first graph accurately warns of a collision before coordination (as indicated by negative values), whereas the one corresponding to collision-free trajectories lies in the positive half-space as desired.}
\label{fig:CriterionFunctionsForCollisionDetectionOfAgent2BeforeAndAfterCoordination}
\end{figure}
\textbf{EXP2.} The setup was analogous to EXP1 but with three agents and different start and goal states as depicted in Fig. \ref{Tab:EXP2}.
Furthermore, collisions were avoided with the FREE method with 10 random initializations of the local optimizer.
Coordination of plans with fixed priorities (1 (red) $>$ 2 (blue) $>$ 3 (green)) caused agent 2 to avoid agent 1 by moving to the left.
Consequently, agent 3 now had to temporarily leave its start and goal state to get out of the way (see Fig. \ref{Tab:EXP2} (centre)).
With two agents moving to avoid collisions social cost was relatively high (see Tab. \ref{Tab:data}). During coordination with the auction-based
method agent 2 first chose to avoid agent 1 (as in the FP method). However, losing the auction to agent 3 at a later stage of coordination,
agent 2 decided to finally circumvent 1 by arcing to the right instead of to the left. This allowed 3 to stay in place (see Tab. \ref{Tab:data}).
\textbf{EXP3.} Next, we conducted a sequence of experiments for varying numbers of agents, $|\agset|=1,\ldots,7$.
In each experiment all agents' start locations were placed on a circle. Their respective goals
were placed on the opposite ends of the circle. The eigenvalues of the feedback gain matrices of each agent were drawn at random from a uniform distribution on the range [2,7].
An example situation for an experiment with 5 agents is depicted in Fig. \ref{Tab:EXP35agents}.
Note that, despite this setting being close to the worst case (i.e. almost all agents try to traverse a common, narrow corridor), the coordination overhead is moderate (see Fig. \ref{Tab:exp3barplots}, right) and all collisions were successfully avoided (see Fig. \ref{Tab:exp3barplots}, left).
\section{ Conclusions}
This work considered multi-agent planning under stochastic uncertainty and non-convex chance-constraints
for collision avoidance.
In contrast to pre-existing work, we did not need to rely on prior space or time-discretisation. This was achieved by deriving criterion functions with the property that the collision probability is guaranteed to be below a freely definable threshold $\delta \in (0,1)$ if the criterion function attains no negative values. Thereby, stochastic collision detection is reduced to deciding whether such negative values exist. For Lipschitz criterion functions, we provided an algorithm for making this decision rapidly.
We described a general procedure for deriving criterion functions and presented two such functions based on Chebyshev-type bounds.
The advantage of using Chebyshev inequalities is their independence of the underlying distribution. Therefore, our approach is applicable to any stochastic state noise model for which the first two moments can be computed at arbitrary time steps. In particular, this applies to models with state-dependent uncertainty and non-convex chance constraints which, to the best of our knowledge, have not been successfully approached in the multi-agent control literature.
Nonetheless, future work could build on our results and derive less conservative criterion functions by using more problem-specific probabilistic inequalities. For instance, in simple cases such as additive Gaussian noise, tighter bounds can be given \cite{Blackmore2006} and used in Eq. \ref{eq:critfctgeneric}.
To enforce collision avoidance, our method modified the agent's plans until no collisions could be detected. To coordinate the detection and avoidance efforts of the agents, we employed an auction-based as well as a fixed-priority method.\\
Our experiments are a first indication that our approach can succeed in finding collision-free plans with high certainty, with the number of required coordination rounds scaling mildly in the number of agents. While in its present form the coordination mechanism does not come with a termination guarantee, in none of our simulations have we encountered an infinite loop. For graph routing, \cite{ArmsTR:2011} provides a termination guarantee for the lazy auction approach under mild assumptions. Current work considers whether their analysis can be extended to our continuous setting. Moreover, if required, our approach can be combined with a simple stopping criterion that terminates the coordination attempt when a computational budget is expended or an infinite loop is detected.
The computation time within each coordination round depends heavily on the time required for finding a new setpoint and for collision detection. This involves minimizing $(t,s) \mapsto c_{plan}^\agi(p^\agi_{\uparrow(t,s)})$ and $c_{coll}^\agi$, respectively. The worst-case complexity depends on the choice of cost functions, their domains and the chosen optimizer. Fortunately, we can draw on a plethora of highly advanced global optimisation methods (e.g. \cite{Shubert:72,direct:93}) guaranteeing rapid optimization success.
In terms of execution time, we can expect considerable speed-ups from an implementation in a compiled language. Furthermore, the collision detection and avoidance methods are based on global optimization and thus would be highly amenable to parallel processing -- this could especially benefit the auction approach.
While our exposition was focussed on the task of defining setpoints of feedback-controlled agents, the developed methods can be readily applied to other policy search settings, where the first two moments of the probabilistic beliefs over the trajectories (that would result from applying the found policies) can be computed.
\begin{small}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
In recent years the WIMP paradigm has become increasingly unpopular due to the stringent limits being placed on the spin-independent direct detection~(SIDD) scattering cross section of dark matter~(DM) with nuclei by experiments such as XENON1T~\cite{Aprile:2017iyp}. Analogously, the null direct search results at the LHC, and the so far very Standard Model~(SM) like nature of $h_{125}$, have led to a general pessimism in our field regarding the existence of an extended Higgs sector. However, if certain relationships between model parameters are fulfilled, it is easy to evade both $h_{125}$ considerations and dark matter scattering limits, while obtaining an observationally consistent dark matter relic density. Ultimately we are in search of the ultraviolet~(UV) completion of the SM, which we expect to be based on symmetries at the UV scale. In such a case it is not difficult to imagine that low-scale physics may show up as having certain ``fine-tuned" relations. Keeping an agnostic attitude towards such symmetries, one can nevertheless investigate the requirements on the parameters of a model such that our current lack of experimental signals is not due to the absence of new physics~(NP) at the weak scale, but rather because certain relationships exist, making NP challenging to discover with conventional means.
In Sec.~\ref{sec:EFT} I present a minimal scenario for obtaining a consistent relic density and a suppressed SIDD cross-section from an extended Higgs sector and an additional SM singlet playing the role of DM, using the language of EFT. I show that these requirements can be easily fulfilled without any unnatural hierarchies in the model parameters or physical properties. Sec.~\ref{sec:LHC} outlines LHC search strategies and their prospects which may be used to probe such a scenario. I then show in Sec.~\ref{sec:NMSSM} how such a model is mapped onto the Next to Minimal Supersymmetric Standard Model~(NMSSM). I summarize in Sec.~\ref{sec:sum}.
\section{EFT for Higgs Couplings to Dark Matter}\label{sec:EFT}
\begin{figure}[t]
\hspace*{7mm}
\includegraphics*[width=.95\linewidth]{Couplings.pdf}
\caption{\label{eftCoup}
EFT parameters and couplings of DM to the CP-even and CP-odd Higgs bosons required to obtain the correct thermal relic density while concurrently satisfying SIDD constraints, for $\tan\beta = 2$, $m_\chi = 300\,$GeV, $m_{H^{\rm NSM}} = m_{A^{\rm NSM}} = 500\,$GeV, and decoupled singlet states. The orange shaded region bounded by solid and dashed lines represents the CP-even Higgs boson couplings consistent with the SIDD bounds, while the blue and black ellipses denote the couplings of DM to the (neutral) Goldstone mode and the heavy CP-odd Higgs boson yielding $\Omega h^2\sim0.12$ for CP-even Higgs couplings denoted by the corresponding solid or dashed lines, with thick and thin lines denoting two different solutions~\cite{Baum:2017enm}.}
\end{figure}
\begin{figure}[t]
\hspace*{7mm}
\includegraphics*[width=.95\linewidth]{Parameters.pdf}
\caption{\label{Pars} Values of the EFT parameters consistent with the couplings shown in Fig.~\ref{eftCoup}. Dashed and solid lines, as well as the shaded areas shown in this panel are in one-to-one correspondence with similar
lines and areas shown in the left panel~\cite{Baum:2017enm}.
}
\end{figure}
I will discuss a model with a SM singlet Majorana fermion DM $\chi$, which has no renormalizable interactions with SM particles. In order to couple DM to the SM, we consider the Higgs sector of a Type II Two Higgs doublet model~(2HDM): $H_u$ and $H_d$.
Assuming, as usual, that both Higgs doublets acquire vacuum expectation values~(vevs), $\langle H_d \rangle = v_d$, $\langle H_u \rangle = v_u$, with $(v_d^2+v_u^2) = (174\mathrm{~GeV})^2$ and $\tan\beta = v_u/v_d$, I define the Higgs basis~\cite{Georgi:1978ri, Donoghue:1978cj, gunion2008higgs, Lavoura:1994fv, Botella:1994cs, Branco99, Gunion:2002zf} such that the mass eigenstate $H^{\rm SM}$ associated with the observed 125 GeV Higgs boson has completely Standard Model couplings, i.e.\ the SM vev is acquired by the field corresponding to the neutral component of $H^{\rm SM}$, hence $\langle H^{\rm SM}\rangle = \sqrt{2} v$ and $\langle H^{\rm NSM}\rangle = 0$:
\begin{eqnarray}
H^{\rm SM} &= \sqrt{2} {\rm Re} \left( \sin\beta H_u^0 + \cos\beta H_d^0 \right), \label{eq:Hbasis1}
\\ G^0 &= \sqrt{2} {\rm Im} \left( \sin\beta H_u^0 - \cos\beta H_d^0 \right),
\\ H^{\rm NSM} &= \sqrt{2} {\rm Re} \left( \cos\beta H_u^0 - \sin\beta H_d^0 \right),
\\ A^{\rm NSM} &= \sqrt{2} {\rm Im} \left( \cos\beta H_u^0 + \sin\beta H_d^0 \right),
\label{eq:Hbasis-1}
\end{eqnarray}
In addition, we impose that there are no explicit mass terms or scales, and hence the Lagrangian is scale invariant. The absence of explicit scale dependence could originate from a $Z_3$ symmetry. In such a situation, a natural way to generate the mass $m_\chi$ and the scale of NP $\mu$ is via the vev of a singlet $S= \langle S \rangle+\frac{1}{\sqrt{2}} \left(H^S + i A^S \right)$. Hence, without loss of generality we can define $m_\chi = 2 \kappa\langle S \rangle$ and $\mu = \lambda \langle S \rangle$, where $\kappa$ and $\lambda$ are dimensionless parameters.
Assuming that $d > 4$ terms
originate from a theory where a heavier $SU(2)$-doublet Dirac fermion with mass $\mu$ has been integrated out, we can write all the allowed $d=6$ operators which would arise from integrating out such a field, describing the interactions of a Majorana fermion $\chi$ with the two Higgs doublets $H_u$, $H_d$. Ignoring the charged gauge boson interactions, we get~\cite{Baum:2017enm}
\begin{eqnarray} \label{eq:EFTmu}
&&\mathcal{L} =~ - \delta \frac{\chi\chi}{\mu} \left( H_u \!\cdot\! H_d \right)\left( 1 - \frac{\lambda \hat{S}}{\mu} \right) \nonumber \\
&&- \kappa S \chi\chi \left(1 + \xi \frac{H_d^\dagger H_d + H_u^\dagger H_u }{| \mu |^2} \right) + {\rm h.c.}\nonumber \\
&&+ \frac{\alpha}{|\mu|^2}\left\{\chi^\dagger H_u^\dagger \bar{\sigma}^\mu \left[ i \partial_\mu - \frac{g_1}{s_W} (T_3 - Q s^2_W) Z_\mu \right] (\chi H_u) \right. \nonumber \\
&& + \left. \chi^\dagger H_d^\dagger \bar{\sigma}^\mu \left[ i \partial_\mu - \frac{g_1}{s_W} (T_3 - Q s^2_W) Z_\mu \right] (\chi H_d) \right\} , \nonumber \\
\end{eqnarray}
where $S = \mu/\lambda + \hat{S}$, $Q$ and $T_3$ are the charge and weak isospin operators, $s_W \equiv \sin \theta_W$ with the weak mixing angle $\theta_W$, and $g_1 = e/\cos \theta_W$ is the hypercharge coupling.
From Eqs.~(\ref{eq:Hbasis1}) and~(\ref{eq:EFTmu}), the coupling of the DM particles to the SM-like Higgs is
\begin{equation} \label{eq:gxxHSM1}
g_{\chi\chi h} \simeq g_{\chi\chi H^{\rm SM}} = \frac{\sqrt{2} v}{\mu} \left[\delta \sin 2\beta - \frac{(\xi -\alpha)m_\chi}{ \mu^*} \right] .
\end{equation}
The SIDD scattering cross-section is mediated by the $t$-channel exchange of the CP-even scalars. Generically, the SM-like Higgs gives the dominant contribution.
However, a blind spot, at which the coupling of $H^{\rm SM}$ to pairs of DM cancels, occurs for
\begin{equation}\label{eq:bs}
\sin 2 \beta = \frac{(\xi-\alpha) m_\chi}{\mu^* \delta}\;.
\end{equation}
In proximity to such a blind spot, there can be further suppression of the SIDD due to interference with the other CP-even Higgs bosons.
Note that the couplings to the other Higgs states, including the Goldstone modes, are not necessarily suppressed. In fact there can be significant couplings of the extended Higgs sector states to DM, mediating the annihilation cross-section needed for obtaining an observationally consistent relic density. It turns out that $\Omega h^2 \sim 0.12$ can be easily achieved by the $s$-channel annihilation of the DM particles into $t\bar{t}$ mediated by $G^0$ for the region of masses we are investigating.
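As a quick numerical check of Eqs.~(\ref{eq:gxxHSM1}) and~(\ref{eq:bs}), the sketch below evaluates the DM--$h_{125}$ coupling for real parameters and verifies that it vanishes once $\mu$ is tuned onto the blind spot; all numerical values are illustrative.
\begin{verbatim}
import numpy as np

v = 174.0  # GeV

def g_xxh(delta, xi, alpha, m_chi, mu, tan_beta):
    # sqrt(2) v / mu * [delta sin(2b) - (xi - alpha) m_chi / mu]
    sin2b = 2 * tan_beta / (1 + tan_beta ** 2)
    return np.sqrt(2) * v / mu * (delta * sin2b
                                  - (xi - alpha) * m_chi / mu)

tan_beta, delta, xi, alpha, m_chi = 2.0, 0.5, 0.3, 0.1, 300.0
sin2b = 2 * tan_beta / (1 + tan_beta ** 2)       # = 0.8
mu = (xi - alpha) * m_chi / (sin2b * delta)      # blind-spot value of mu
print(mu, g_xxh(delta, xi, alpha, m_chi, mu, tan_beta))  # 150.0, ~0
\end{verbatim}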
An example of couplings of the Higgs sector states to DM for which the SIDD is suppressed while simultaneously obtaining a consistent relic density is shown in Fig.~\ref{eftCoup}. The corresponding EFT parameters are shown in Fig.~\ref{Pars}. The mass spectrum is fixed to the labeled values. Changing these values would change the precise numerical values shown, but the qualitative behavior remains the same. I stress that neither the couplings nor the EFT parameters take any extreme values. While specific relationships between parameters have to be fulfilled, the resulting scenario does not appear to be particularly difficult to achieve.
I have discussed how to obtain a WIMP with a thermally produced relic density whose direct detection is suppressed, in conjunction with a Higgs sector which is aligned so that the $h_{125}$ phenomenology is completely SM-like.
The couplings between the Higgs states do not play a role in the scenario discussed so far. However, the details of the Higgs scalar potential may be relevant for the discovery prospects of such a scenario, and some possibilities will be discussed in the next sections.
\section{Higgs Phenomenology \& LHC prospects}\label{sec:LHC}
The scalar potential for a 2HDM + S is given in Refs.~\cite{Carena:2015moc, Baum:2018zhf}. The generic potential is described by 27 arbitrary parameters and at first glance appears difficult to analyze. However, the 125 GeV Higgs mass and its SM-like couplings enable us to constrain these significantly. In particular, we showed that most of the relevant phenomenology can be parameterized mostly in terms of physical parameters such as masses and mixing angles~\cite{Baum:2018zhf}.
I highlight first a few conditions that alignment imposes on the phenomenology. Most importantly, alignment forbids the NSM- or S-like CP-even Higgs bosons from coupling to pairs of $h_{125}$ or vector bosons~($W$ or $Z$). Additionally, the couplings of the CP-odd states to $h_{125}$ and $Z$ are also forbidden. Instead, there can be interesting {\it Higgs cascade decays} of the heavy Higgs bosons to final states involving only {\it one} $h_{125}$ {\it or} a $Z$, such as $(H^{\rm{NSM}} \to H^{\rm{S}} H^{\rm{SM}})$ or $(A^{\rm{NSM}} \to H^{\rm{S}} Z)$. The singlets couple only to DM or to the SM particles via their mixing with the other states. Hence, depending on the mixing angles and the arbitrary coupling to the DM, such decays could result in $h_{125}$ or $Z$ plus visible or invisible signatures.
We collected all the current search results and projections available for the relevant decays, as well as performed detailed collider simulations where needed, to obtain the projection for the reach at the LHC with 3000 fb$^{-1}$ of data. I present an example of the reach we obtain for exemplary scenarios in Figs.~\ref{reach_mix} and~\ref{reach_mass}. As can be seen, combining the different searches for the various Higgs cascade decay modes provides coverage of most of the parameter space at low values of $\tan\beta$, a region which is generally challenging to probe~\cite{Gori:2016zto}.
\begin{figure}[tbh]
\hspace*{1mm}
\includegraphics*[width=.95\linewidth]{2HDMS_reach_mixing}
\caption{\label{reach_mix}
Regions of 2HDM+S parameter space within the future reach of the different Higgs Cascade search modes as indicated in the legend at the LHC with $L = 3000\,{\rm fb}^{-1}$ of data. The colored regions show the accessible regions via the various signatures in the plane of the singlet fraction of the parent Higgs bosons $(S_H^{\rm S})^2$ vs $(P_A^{\rm S})^2$~\cite{Baum:2018zhf}. }
\end{figure}
\begin{figure}[tbh]
\hspace*{1mm}
\includegraphics*[width=.95\linewidth]{2HDMS_reach_masses}
\caption{\label{reach_mass}
Same as Fig.~\ref{reach_mix}, but with the reach shown in the plane of the masses of the daughter Higgs bosons produced in the Higgs Cascades, $m_h$ vs $m_a$. The remaining parameters are fixed to the values indicated in the labels~\cite{Baum:2018zhf}.}
\end{figure}
\section{NMSSM Interpretation}\label{sec:NMSSM}
The Next to Minimal Supersymmetric Standard Model~(NMSSM) is a well motivated extension of the SM.
The NMSSM has a 2HDM + Singlet~(S) scalar sector, analogous to the Higgs sector assumed in the previous section. What makes the NMSSM particularly interesting is that a Higgs boson with a mass of 125 GeV and SM-like couplings is easily and naturally obtained~\cite{Carena:2015moc}. The NMSSM provides two SM-singlet Majorana fermions which may play the role of DM, the singlino~(superpartner of the singlet Higgs) and the bino~(superpartner of the hypercharge gauge boson). Due to SUSY relations, phenomenological considerations of the $h_{125}$ correlate the masses of all the scalars as well as those of the singlinos and the Higgsinos, leading to a consistent scenario where the entire Higgs sector as well as the DM candidates can be $\sim\mathcal{O}(\mathrm{few}\times 100)$~GeV.
The DM-Higgs EFT discussed in Sec.~\ref{sec:EFT} can be trivially mapped to the NMSSM in the following two regions~\cite{Baum:2017enm}:
\begin{itemize}
\item For Singlino DM, we can map the couplings in Eq.~(\ref{eq:EFTmu}) directly to those in the NMSSM via
\begin{equation} \label{eq:pmapS}
\delta = -\alpha \to -\lambda^2~, \lambda \to \lambda~, \kappa \to \kappa~, \xi \to 0~.
\end{equation}
The mapping above leads to the blind spot condition [cf. Eq.~(\ref{eq:bs})]
\begin{equation}\label{eq:bsSin}
\sin 2\beta = m_\chi/\mu~.
\end{equation}
\item In contrast to the singlino, the bino couples to different combinations of the Higgs doublets and the singlet. Such interactions would be obtained by writing down the EFT for the Higgs doublets and the singlet transforming under the $Z_3$, while assuming the Majorana fermion $\chi$ transforms trivially and has a Majorana mass $m_\chi = M_1$.
Keeping this in mind, we can map the couplings of the bino to those in the EFT, Eq.~(\ref{eq:EFTmu}), via
\begin{equation} \label{eq:pmapB}
\delta = \alpha \to \frac{g_1^2}{2}~, \quad \lambda \to \lambda~, \quad \kappa = \xi \to 0~.
\end{equation}
The blind spot condition for the bino region is then
\begin{equation}\label{eq:bsBin}
\sin 2\beta = - m_\chi/\mu~.
\end{equation}
\end{itemize}
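As a quick check of the two blind-spot conditions above, the following sketch returns the value of $\mu$ at which the SIDD blind spot occurs for each DM candidate; the inputs are illustrative.
\begin{verbatim}
def blind_spot_mu(m_chi, tan_beta, dm="singlino"):
    # singlino: sin(2b) = +m_chi/mu;  bino: sin(2b) = -m_chi/mu
    sin2b = 2 * tan_beta / (1 + tan_beta ** 2)
    sign = 1 if dm == "singlino" else -1
    return sign * m_chi / sin2b

print(blind_spot_mu(300.0, 2.0))               # +375 GeV
print(blind_spot_mu(300.0, 2.0, dm="bino"))    # -375 GeV
\end{verbatim}
The opposite signs reflect that the singlino and bino blind spots sit at $m_\chi/(\mu \sin 2\beta) = +1$ and $-1$, respectively, cf.\ the horizontal axis of Fig.~\ref{NMSSM_BS}.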
\begin{figure}[tbh]
\hspace*{1mm}
\includegraphics*[width=.95\linewidth]{DD_BS_norescale.png}
\caption{\label{NMSSM_BS}
The SIDD cross section $\sigma_p^{\rm SI}$
vs. $m_\chi/(\mu \sin 2\beta)$, where the blind-spot conditions are satisfied for $m_\chi/(\mu \sin 2\beta) = +1 (-1)$ for the singlino (bino) DM case~\cite{Baum:2017enm}.}
\end{figure}
We ran numerical scans using \texttt{NMSSMTools} and \texttt{MicrOmegas}, validating our expectation that DM with suppressed SIDD can be obtained due to the presence of blind spots as dictated by Eqs.~(\ref{eq:bsSin}) and (\ref{eq:bsBin}). The results are shown in Fig.~\ref{NMSSM_BS}. We also observed that, while the SIDD can easily be suppressed while obtaining the correct relic density, there is no such suppression mechanism for the spin-dependent direct detection~(SDDD) cross section. In fact, while the limits for SDDD are certainly much weaker, near-future prospects may allow us to probe most of the region of parameter space with very suppressed SIDD. This is shown in Fig.~\ref{SISD}, where the SDDD is seen to be at most two orders of magnitude below current limits.
\begin{figure}[tbh]
\hspace*{1mm}
\includegraphics*[width=.95\linewidth]{DD_SIvsSD_constraints.png}
\caption{\label{SISD}
SIDD vs. SDDD cross section in units of the respective observed limit for the same points. For SIDD cross section, at each respective DM mass we use the stronger of the two limits from XENON1T and PandaX-II. For SDDD scattering we use the more constraining of the current bounds for either SDDD scattering of neutrons from LUX~\cite{Akerib:2017kat}, or SDDD scattering of protons from PICO-60~\cite{Amole:2017dex}. To guide the eye we indicate the current bounds with thin dashed lines; points lying in the lower left quadrant satisfy all current direct detection bounds~\cite{Baum:2017enm}.}
\end{figure}
\begin{figure}[h!]
\hspace*{-1mm}
\includegraphics*[width=.95\linewidth]{reach_standard_vs_cascades_all}
\caption{\label{reach_NMSSM}
We present our results for the cases where at least one of the heavy Higgs bosons $H$ or $A$ is lighter than 1\,TeV, color coded according to the Higgs cascade channel with the largest signal strength as indicated in the legend. The $x$-axis shows the largest signal strength out of all conventional Higgs searches,
and the $y$-axis shows the largest signal strength out of the Higgs cascade searches. Note that for the Higgs cascade modes we use the projected sensitivity for $L=3000\,{\rm fb}^{-1}$ of data, while for the conventional searches we use the best current limit~\cite{newNMSSM}. }
\end{figure}
As mentioned earlier, the NMSSM Higgs sector is very predictive due to the presence of a SM-like $h_{125}$. We investigated the collider phenomenology associated with the Higgs sector in detail. The discovery prospects for the scenarios with at least one Higgs boson lighter than 1 TeV are shown in Fig.~\ref{reach_NMSSM}~\cite{newNMSSM}. This shows in particular that the Higgs cascade decay channels discussed in the previous section can provide complementary probes to the standard search channels, bringing most of the interesting parameter regions within reach of the high luminosity LHC.
\section{Summary}\label{sec:sum}
\label{sec:summ}
There has been no compelling evidence for the presence of NP at the weak scale since the Higgs discovery in 2012. This has led to widespread pessimism in our field regarding both the WIMP paradigm and the most popular SUSY models. In these proceedings I have presented a simple scenario where vanilla WIMP DM with a thermal relic density can easily evade the stringent SIDD bounds. Using an EFT formulation I present parameter relations such that the coupling of DM to $h_{125}$ can be suppressed while maintaining enough DM-SM interactions for thermal equilibrium to be maintained. Given that the DM candidate is assumed to be a SM singlet, coupled with the constraints imposed on an extended Higgs sector by the alignment of the Higgs vacuum expectation value, standard search strategies at the LHC may be insensitive to NP. However, Higgs cascade channels, which are unsuppressed due to the presence of the singlet scalars, may provide an additional handle, allowing us to probe much of the relevant parameter region at the LHC.
I then present the mapping of the EFT to the NMSSM, showing that the EFT expectations are borne out using sophisticated numerical packages available. I also show the direct detection and LHC prospects for the parameter regions discussed.
I stress that the consistent region of parameters we obtain, using both the EFT and its mapping to the NMSSM,
does not require any extreme choices. Indeed, this region of parameters would be considered quite ``natural". That said, specific correlations between various parameters are needed. However, considering physics from the UV perspective, it may very well be that GUT scale symmetries broken near the weak scale would show up in low energy physics as strange cancellations or relationships between parameters. Nature may well have chosen to put NP at an energy scale out of our foreseeable reach, which would be our misfortune. However, it seems as likely that there may be NP at the weak scale, and we may have to be more creative to find it.
\section*{Acknowledgements}
I am grateful to the organizers
of ``FPCapri2018" and ``The Future of BSM Physics" for the
conference and workshop, and MITP for partial support
during my stay in Capri. I also acknowledge support from the U.S. Department of Energy under
Contract No. DESC0007983.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{author sec:1}
In this data-driven era, the amount and complexity of the available data grow at an almost incredible speed. Therefore, there is a great need to develop novel tools to cope with such complex data structures. Whereas the first statistical techniques were designed only to manage either quantitative or qualitative data, we can now find statistical procedures to handle functional data (see for instance Arribas-Gil and M\"{u}ller~\citeyear{Arribas2014}; Febrero-Bande and Gonz\'{a}lez-Manteiga~\citeyear{Febrero2013}; Jacques and Preda~\citeyear{jacques}), fuzzy-valued data (see, for instance, Ferraro and Giordani~\citeyear{Ferraro2013}; Gonz\'{a}lez-Rodr\'{\i}guez \emph{et al.}~\citeyear{Gonzalez2012}; Coppi \emph{et al.}~\citeyear{Coppi2012}), incomplete/missing data (see, for instance, Bianco \emph{et al.}~\citeyear{bianco}; Ferraty \emph{et al.}~\citeyear{ferraty}; Lin~\citeyear{lin}; Zhao \emph{et al.}~\citeyear{zhao}), and several other types of data.
Interval-valued data are a type of complex data that requires specific statistical techniques to analyze them. Interval-valued data may arise for different reasons. In some cases the underlying random variable is intrinsically interval-valued, e.g. the daily fluctuation of the systolic blood pressure.
In other cases, there is an underlying real-valued variable, but to preserve a level of confidentiality respondents are only asked to indicate the interval containing their value, e.g. their salary.
It may also happen that the real-valued measurement is only partially known due to certain limitations, such as is the case for interval censored data.
Finally, aggregation of a typically large dataset may lead to e.g. interval-valued symbolic data which include interval variation and structure.
The $d_\theta$-median considered here does not make any assumption about the source of the interval-valued data. In particular, it does not matter whether the random experiment that generated the data involves an underlying observable real-valued random variable or not.
An important remark is that the space of intervals is only a semilinear space, but not a linear space due to the lack of the opposite of an interval.
Therefore, although intervals can be identified with two-dimensional vectors (with first component the mid-point/centre and second component the nonnegative spread/radius), it is not advisable to treat them as regular bivariate data. Indeed, common assumptions for multivariate techniques do not hold in this case.
Statistical procedures for random interval-valued data have already been proposed in the literature for different purposes, such as regression analysis (e.g Gil \emph{et al.}~\citeyear{gil2002,Gil2007}; Gonz\'{a}lez-Rodr\'{\i}guez \emph{et al.}~\citeyear{Gonzalez2012}; Blanco-Fern\'{a}ndez \emph{et al.}~\citeyear{Blanco2011,Blanco2013}; Lima Neto \emph{et al.}~\citeyear{Lima2011}; Fagundes \emph{et al.}~\citeyear{fagundes2013}; Giordani~\citeyear{Giordani2014}); testing hypotheses (e.g. Montenegro \emph{et al.}~\citeyear{Montenegro2008}; Nakama \emph{et al.}~\citeyear{Nakama2010}; Gonz\'{a}lez-Rodr\'{\i}guez \emph{et al.}~\citeyear{Gonzalez2012}), clustering (e.g. De Carvalho \emph{et al.}~\citeyear{Carvalho2006}; D'Urso \emph{et al.}~\citeyear{DUrso2006,DUrso2011,DUrso2014}; Giusti and Grassini~\citeyear{Giusti2008}; Da Costa \emph{et al.}~\citeyear{DaCosta2013}, etc.), principal component analysis (e.g. Billard and Diday~\citeyear{Billard2003}; D'Urso and Giordani~\citeyear{DUrso2004}; Makosso-Kallyth and Diday~\citeyear{Makosso2012}, etc.), modelling distributions (see Brito and Duarte Silva~\citeyear{Brito2012}; Sun and Ralescu~\citeyear{SunRalescu2014}).
One of the most commonly used location measures is the Aumann-type mean (see Aumann \citeyear{Aumann1965}). It is indeed supported by numerous valuable properties, including laws of Large Numbers, and is also coherent with the interval arithmetic. The main disadvantage is that it is strongly influenced by outliers and data changes, which makes this measure not always suitable as a summary of the distribution of a random interval. This drawback is in fact inherited from the standard real/vectorial-valued case. In the real case, the most popular robust alternative to the mean is the median.
For multivariate data the spatial median (also called the $L_1$-median, as introduced by Weber~\citeyear{Weber1909}) is a popular robust alternative to estimate the center of the multivariate data.
The spatial median is defined as the point in multivariate space with minimal average Euclidean distance to the observations. For more details and extensions, see for instance Gower~(\citeyear{gower1974}), Brown~(\citeyear{Brown}), Milasevic and Ducharme~(\citeyear{ducharme1987}), (Cadre \citeyear{cadre2001}), Roelant and Van Aelst~(\citeyear{Roelant}), Debruyne \emph{et al.}~(\citeyear{Debruyne}), Fritz \emph{et al.}~(\citeyear{Fritz}), Zuo~(\citeyear{Zuo}).
Sinova and Van Aelst (2014) adapted the spatial median to interval-valued data by using a suitable $L^2$ metric on this space (see also Sinova et al. \citeyear{sinova2013}). They used the versatile generalized metric introduced by Bertoluzza \emph{et al.} (\citeyear{bertoluzza1995}; see also Gil \emph{et al.}~\citeyear{gil2002}; Trutschnig \emph{et al.}~\citeyear{trutschnig2009}).
The resulting $d_\theta$-median estimator has been shown to be robust, with a high breakdown point and good finite-sample properties. In this paper we show another important property of the estimator, namely its strong consistency.
The rest of this paper is organized as follows: in Section 2 the basic concepts related to the interval-valued space, interval arithmetic and metric for intervals will be introduced, as well as the usual location measure. In Section 3, the $d_\theta$-median for random intervals and its main properties are recalled. The strong consistency of the $d_\theta$-median is proven in Section 4. Finally, some concluding remarks are presented in Section 5.
\section{The $d_\theta$-median of a random interval}
\label{author_sec:3}
Let $\mathcal K_c(\mathbb R)$ denote the class of nonempty compact intervals. Any interval $K$ in the space $\mathcal K_c(\mathbb R)$ can be characterized in terms of either its infimum and supremum, $K = [\inf K, \sup K]$, or its mid-point and spread or radius, $K=[{\hbox {\rm mid}}\, K-{\hbox {\rm spr}}\, K, {\hbox {\rm mid}}\, K +{\hbox {\rm spr}}\, K]$, where
$${\hbox {\rm mid}}\, K = \frac{\inf K + \sup K}{2}, \quad {\hbox {\rm spr}}\, K = \frac{\sup K - \inf K}{2} \geq 0.$$
The usual interval arithmetic provides the addition, i.e. $K + K' = [\inf K + \inf K', \sup K+\sup K']$
with $K,K'\in\mathcal K_c(\mathbb R)$ and the product by a scalar, i.e.
$\gamma\cdot K = [\gamma\cdot \mathrm{mid}\,K - |\gamma|\cdot \mathrm{spr}\,K,\gamma\cdot \mathrm{mid}\,K + |\gamma|\cdot \mathrm{spr}\,K]$ with $K\in\mathcal K_c(\mathbb R)$ and
$\gamma \in \mathbb R$.
With these two operations the space $\mathcal K_c(\mathbb R)$ is semilinear, but not linear due to the lack of a difference of intervals. Therefore, statistical techniques for interval-valued data are based on distances.
To measure the distance between two interval-valued observations, we consider the \emph{$d_\theta$ metric} introduced by Bertoluzza \emph{et al.}~(\citeyear{bertoluzza1995}), which can be defined as (see Gil \emph{et al.}~\citeyear{gil2002}):
$$d_\theta(K,K')=\sqrt{({\hbox {\rm mid}}\, K-{\hbox {\rm mid}}\, K')^2+\theta\cdot({\hbox {\rm spr}}\, K-{\hbox {\rm spr}}\, K')^2},$$
where $K,K'\in \mathcal K_c(\mathbb R)$ and $\theta\in(0,\infty)$.
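Operationally, the metric is straightforward to compute from the $({\hbox {\rm mid}},{\hbox {\rm spr}})$ representation; a minimal Python sketch:
\begin{verbatim}
import math

def d_theta(K, Kp, theta=1.0):
    # K and Kp are intervals given as (mid, spr) pairs
    (m1, s1), (m2, s2) = K, Kp
    return math.sqrt((m1 - m2) ** 2 + theta * (s1 - s2) ** 2)

# usage: [1, 3] has (mid, spr) = (2, 1); [2, 6] has (4, 2)
print(d_theta((2.0, 1.0), (4.0, 2.0)))  # sqrt(4 + 1) ~ 2.2361
\end{verbatim}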
Following the general random set approach, a
\emph{random interval} is usually defined as a Borel measurable mapping $X:\Omega\rightarrow\mathcal K_c(\mathbb R)$, where $(\Omega,\mathcal A,P)$ is a probability space, measurability being considered with respect to $\mathcal A$ and the Borel $\sigma$-field on $\mathcal K_c(\mathbb R)$ generated by the topology induced by
the $d_\theta$ metric. As a consequence of the Borel measurability, crucial concepts in probabilistic and inferential developments, such as the (induced) distribution of a random interval or the stochastic independence of random intervals, are well-defined.
One of the most used location measures is the \emph{Aumann-type mean value}. It is defined, if it exists, as the interval $E[X]=[E(\inf X),E(\sup X)]$ or $E[X]=[E({\hbox {\rm mid}}\, X)-E({\hbox {\rm spr}}\, X),E({\hbox {\rm mid}}\, X)+E({\hbox {\rm spr}}\, X)]$ (both expressions are equivalent). Moreover, it is the Fr\'echet expectation with respect to the $d_\theta$ metric, i.e., it is the unique interval that minimizes, over $K\in\mathcal K_c(\mathbb R)$, the expression $E[(d_\theta(X,K))^2]$.
As a robust alternative to the Aumann-type mean, Sinova and Van Aelst (2014) proposed the $d_\theta$-median as measure of location, which is defined as follows.
\begin{definition} The \emph{$d_{\theta}$-median(s)} of a random interval $X:\Omega\rightarrow \mathcal K_c(\mathbb R)$ is(are) the interval(s) $\mathrm{M}_\theta[X]\in \mathcal K_c(\mathbb R)$ such that
$$E(d_{\theta}(X,\mathrm{M}_\theta[X])) = \min_{K\in\mathcal K_c(\mathbb R)}E(d_{\theta}(X,K)),$$
whenever the involved expectations exist.
\end{definition}
Analogously, the sample $d_\theta$-median statistic is defined as follows.
\begin{definition}
Let $(X_1,\ldots,X_n)$ be a simple random sample from a random interval $X:\Omega\rightarrow \mathcal K_c(\mathbb R)$ with realizations $\mathbf{x}_n=(x_1,\ldots,x_n)$. The \emph{sample $d_{\theta}$-median} (or medians) $\widehat{\mathrm{M}_\theta[X]}_n$ is (are) the random interval that takes, for $\mathbf{x}_n$, the interval value(s) $\widehat{\mathrm{M}[\mathbf{x}_n]}$ that is (are) the solution(s) of the following optimization problem:
$$\begin{array}{l}\displaystyle{\min_{K\in\mathcal K_c(\mathbb R)}} \frac{1}{n}\sum_{i=1}^n d_{\theta}(x_i,K)
\\ \displaystyle{=\min_{(y,z)\in \mathbb R\times [0,\infty)}}\frac{1}{n}\sum_{i=1}^n \sqrt{({\hbox {\rm mid}}\, x_i-y)^2+\theta\cdot({\hbox {\rm spr}}\, x_i-z)^2}\end{array}$$
\noindent where $K$, $y$ and $z$ depend on $\mathbf{x}_n$ (which has been omitted from the notation for the sake of simplicity) and the fixed value $\theta$.
\end{definition}
Sinova and Van Aelst (2014) showed the existence of the sample $d_\theta$-median estimator and its uniqueness whenever not all the two-dimensional sample points $\{({\hbox {\rm mid}}\, x_i,{\hbox {\rm spr}}\, x_i)\}_{i=1}^n$ are collinear. Moreover, its robustness was shown through its
finite sample breakdown point (Donoho and Huber~\citeyear{donoho1983}), which is given by $$\mathrm{fsbp}\big(\widehat{\mathrm{M}_\theta[X]}_n,\mathbf{x}_n,d_\theta\big)=\frac{1}{n}\cdot\left\lfloor\frac{n+1}{2}\right\rfloor,\vspace{-0.25cm}$$ where $\lfloor\cdot\rfloor$ denotes the floor function.
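The sample $d_\theta$-median has no closed form, but the defining objective is convex in $(y,z)$ (a mean of Euclidean norms of affine maps) and can be minimized numerically; a sketch using a bounded quasi-Newton solver follows, where the data are illustrative and a Weiszfeld-type iteration could be used instead.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def sample_dtheta_median(mids, sprs, theta=1.0):
    # minimize (1/n) sum_i sqrt((mid_i - y)^2 + theta (spr_i - z)^2)
    # over y in R and z >= 0
    mids, sprs = np.asarray(mids, float), np.asarray(sprs, float)
    obj = lambda p: np.mean(np.sqrt((mids - p[0]) ** 2
                                    + theta * (sprs - p[1]) ** 2))
    x0 = [np.median(mids), np.median(sprs)]   # robust starting point
    res = minimize(obj, x0, method="L-BFGS-B",
                   bounds=[(None, None), (0.0, None)])
    return res.x  # (mid, spr) of the sample d_theta-median

mids, sprs = [0.0, 1.0, 2.0, 50.0], [1.0, 1.2, 0.8, 1.0]  # one outlier
print(sample_dtheta_median(mids, sprs))  # mid stays near 1, spr near 1
\end{verbatim}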
\section{Consistency of the sample $d_\theta$-median}
\label{author_sec:4}
In this section we investigate the strong consistency of the sample $d_\theta$-median under general conditions.
\begin{theorem}\label{consistency}
Let $X$ be a random interval associated with a probability space $(\Omega,\mathcal A, P)$ such that the $d_\theta$-median exists and is unique. Then, the sample $d_\theta$-median is a strongly consistent estimator of the $d_\theta$-median, that is,
$$\underset{n \rightarrow \infty}{\lim}d_\theta(\widehat{\mathrm{M}_\theta[X]_n},M_\theta[X])=0 \quad \text{a.s.} [P].$$
\end{theorem}
\noindent{\emph{Proof.}}
Sufficient conditions for the strong consistency of an estimator are given in Huber~(\citeyear{huber}). We will check that these conditions, detailed below, are satisfied in our case:
\begin{itemize}
\item The parameter set ($\mathbb R \times [0,\infty)$ in our case, with the topology induced by the $d_\theta$-metric) is a locally compact space with a countable base and $(\Omega,\mathcal A,P)$ is a probability space.
\end{itemize}
Let $\rho(\omega,(y,z))$ be the following real-valued function on $\Omega\times(\mathbb R\times [0,\infty))$:
$$\begin{array}{rccl}
\rho: & \Omega \times (\mathbb R \times [0,\infty)) & \longrightarrow & \mathbb R \\
& (\omega,(y,z)) & \longmapsto & \displaystyle{d_\theta(X(\omega),[y-z,y+z])}.
\end{array}$$
\begin{itemize}
\item Assuming that $\omega_1,\omega_2 \ldots$ are independent $\Omega$-valued random elements with
\noindent common probability distribution $P$, the sequence of functions $\{T_n\}_{n\in \mathbb N}$, defined as $T_n(\omega_1,\ldots,\omega_n)=\widehat{\mathrm{M}_\theta[(X(\omega_1),\ldots,X(\omega_n))]}_n$, satisfies that \vspace{-0.3cm}
{\small $$\frac{1}{n}\sum_{i=1}^n d_\theta(X(\omega_i),T_n(\omega_1,\ldots,\omega_n))-\inf_{(y,z) \in \mathbb R\times [0,\infty)}\frac{1}{n} \sum_{i=1}^n d_\theta (X(\omega_i),[y-z,y+z])\underset{n}{\longrightarrow} 0\vspace{-0.2cm}$$}\vspace{-0.2cm}
\noindent almost surely (obviously because of the definition of the sample $d_\theta$-median).
\end{itemize}\vspace{0.15cm}
\noindent \emph{Assumption (A-1)} For each fixed $(y_0,z_0)\in \mathbb R\times [0,\infty)$, the function \vspace{-0.1cm}
$$\begin{array}{rccll}
\rho_0: & \Omega & \longrightarrow & \mathbb R &\\
& \omega & \longmapsto & \displaystyle{\rho(\omega,(y_0,z_0))}&\displaystyle{=d_\theta(X(\omega),[y_0-z_0,y_0+z_0])}\\[0.8ex]
& & & &\displaystyle{=\sqrt{({\rm mid}\, X(\omega)-y_0)^2+\theta \cdot ({\rm spr}\, X(\omega)-z_0)^2}}\vspace{-0.1cm}
\end{array}$$
is $\mathcal A$-measurable and separable in Doob's sense: there is a P-null set $N$ and a countable subset $S\subset \mathbb R \times [0,\infty)$ such that for every open set $U\subset \mathbb R \times [0,\infty)$ and every closed interval $A$, the sets \vspace{-0.2cm}
$$V_1=\{\omega: \rho(\omega,(y,z))\in A, \forall (y,z)\in U\}\vspace{-0.1cm}$$
$$V_2=\{\omega:\rho(\omega,(y,z))\in A,\forall (y,z)\in U\cap S\}$$
differ by at most a subset of $N$.\vspace{0.15cm}
\noindent \emph{Assumption (A-2)} The function $\rho$ is a.s. lower semicontinuous in $(y_0,z_0)$, that is,
$$\underset{(y,z)\in U}{\inf}\rho(\omega,(y,z))\longrightarrow \rho(\omega,(y_0,z_0)),$$
as the neighborhood $U$ of $(y_0,z_0)$ shrinks to $\{(y_0,z_0)\}$.\vspace{0.15cm}
\noindent \emph{Assumption (A-3)} There is a measurable function $a: \Omega \rightarrow \mathbb R$ such that \vspace{-0.1cm}
$$E[\rho(\omega,(y,z))-a(\omega)]^-<\infty \quad \text{ for all } (y,z)\in \mathbb R \times [0,\infty),\vspace{-0.2cm}$$
$$E[\rho(\omega,(y,z))-a(\omega)]^+<\infty \quad \text{ for some } (y,z)\in \mathbb R \times [0,\infty).$$
Thus, $\gamma((y,z))=E[\rho(\omega,(y,z))-a(\omega)]$ is well-defined for all $(y,z)$.\vspace{0.15cm}
\noindent \emph{Assumption (A-4)} There is a $(y_0,z_0)\in \mathbb R \times [0,\infty)$ such that $\gamma ((y,z))$
\noindent $>\gamma((y_0,z_0))$ for all $(y,z)\neq (y_0,z_0).$\vspace{0.15cm}
\noindent \emph{Assumption (A-5)} There is a continuous function $b((y,z))>0$ such that
\begin{itemize}
\item for some integrable $h$, $$\underset{(y,z)\in \mathbb R\times [0,\infty)}{\inf}\frac{\rho(\omega,(y,z))-a(\omega)}{b((y,z))}\geq h(\omega).$$
\item the following condition is satisfied: $$\underset{(y,z)\rightarrow \infty}{\liminf} b((y,z))>\gamma((y_0,z_0)).$$
\item it is also fulfilled that: $$E\left[\underset{(y,z)\rightarrow \infty}{\liminf} \frac{\rho(\omega,(y,z))-a(\omega)}{b((y,z))}\right]\geq 1.$$
\end{itemize}
We now verify these conditions of Huber:\vspace{0.15cm}
\noindent \emph{(A-1)} For each fixed $(y_0,z_0)\in \mathbb R\times [0,\infty)$, the function $\rho_0$ is $\mathcal A$-measurable (because ${\rm mid}\, X$ and ${\rm spr}\, X$ are measurable functions since $X$ is a random interval) and separable in Doob's sense: choosing $S=\mathbb Q \times (\mathbb Q \cap [0,\infty))$ as countable subset, for every open set $U \subset \mathbb R \times [0,\infty)$ and every closed interval A, it will be seen that the sets
$$V_1=\{\omega: \rho_0(\omega)\in A, \forall (y,z)\in U\},\,\,V_2=\{\omega: \rho_0(\omega)\in A,\forall (y,z)\in U\cap S\}$$
coincide. Obviously, $V_1\subseteq V_2$. By \emph{reductio ad absurdum}, it is now supposed that $V_2\cap V_1^c\neq \emptyset$. Let $\omega_0 \in V_2\cap V_1^c$:
\begin{itemize}
\item Since $\omega_0 \in V_2$, $\rho(\omega_0,(y,z))\in A$ for all $(y,z)\in U\cap S$;\vspace{0.1cm}
\item Since $\omega_0 \in V_1^c$, there exists $(y_0,z_0)\in U$ such that $\rho(\omega_0,(y_0,z_0))\in A^c$. $A^c$ is an open set, so there exists a ball of radius $r>0$ such that $$(\rho(\omega_0,(y_0,z_0))-r,\rho(\omega_0,(y_0,z_0))+r)\subseteq A^c.$$
\end{itemize}
Notice now that, for a fixed $\omega \in \Omega$, the function
$$\begin{array}{rccll}
\rho_\omega: & \mathbb R \times [0,\infty) & \longrightarrow & \mathbb R &\\
& (y,z) & \longmapsto & \displaystyle{\rho(\omega,(y,z))}&\displaystyle{=\sqrt{({\rm mid}\, X(\omega)-y)^2+\theta \cdot ({\rm spr}\, X(\omega)-z)^2}}
\end{array}$$ is continuous. Therefore, $\rho_{\omega_0}^{-1}(\rho(\omega_0,(y_0,z_0))-r,\rho(\omega_0,(y_0,z_0))+r)$ is an open set of $\mathbb R \times [0,\infty)$ and $U\cap \rho_{\omega_0}^{-1}(\rho(\omega_0,(y_0,z_0))-r,\rho(\omega_0,(y_0,z_0))+r)\neq \emptyset$ too. $S$ is a dense set of $\mathbb R \times [0,\infty)$, so $$U\cap \rho_{\omega_0}^{-1}(\rho(\omega_0,(y_0,z_0))-r,\rho(\omega_0,(y_0,z_0))+r) \cap S \neq \emptyset.\vspace{-0.1cm}$$
\noindent Let $(y',z')\in U\cap \rho_{\omega_0}^{-1}(\rho(\omega_0,(y_0,z_0))-r,\rho(\omega_0,(y_0,z_0))+r) \cap S$. Then, $(y',z')\in U\cap S$, so $\rho(\omega_0,(y',z'))\in A$. But also, \vspace{-0.15cm} $$\rho(\omega_0,(y',z'))\in (\rho(\omega_0,(y_0,z_0))-r,\rho(\omega_0,(y_0,z_0))+r) \subset A^c.\vspace{-0.15cm}$$ This is a contradiction, so the conclusion is that $V_2\subseteq V_1$.\vspace{0.15cm}
\noindent \emph{(A-2)} Indeed, it will be proved for all $\omega \in \Omega$. Let $\omega$ be any element of $\Omega$ and let $(y_0,z_0)$ be any (fixed) point of $\mathbb R \times [0,\infty)$.
First, notice that, for any decreasing sequence of neighborhoods $\{U_n\}_{n\in \mathbb N}$ of $(y_0,z_0)$, i.e. such that $U_n \supseteq U_{n+1}$ for all $n$, $$\left\{\underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z])\right\}_{n\in \mathbb N}$$ is a monotonically increasing sequence. Furthermore, this sequence is bounded since
$$\underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z]) \leq d_\theta(X(\omega),[y_0-z_0,y_0+z_0])$$
for all $n\in \mathbb N$ because $\displaystyle{(y_0,z_0)\in \cap_{n\in \mathbb N}U_n}$. Therefore, the sequence converges to its supremum, which will be $ d_\theta(X(\omega),[y_0-z_0,y_0+z_0])$.
By \emph{reductio ad absurdum}, suppose that there is a smaller upper bound \vspace{-0.05cm}$$c= d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-\varepsilon,$$
for some $\varepsilon >0$. Let $U_{n_0}$ denote a neighborhood of $(y_0,z_0)$ satisfying $U_{n_0}\subseteq B((y_0,z_0),\frac{\varepsilon}{2})$. Then, it can be seen that $$c < \underset{(y,z)\in U_{n_0}}{\inf}d_\theta(X(\omega),[y-z,y+z]),$$ so $c$ cannot be the supremum. Indeed, using the triangular inequality,
$$\underset{(y,z)\in U_{n_0}}{\inf}d_\theta(X(\omega),[y-z,y+z])\geq \underset{(y,z)\in B((y_0,z_0),\frac{\varepsilon}{2})}{\inf}d_\theta(X(\omega),[y-z,y+z])$$ $$ \geq \underset{(y,z)\in B((y_0,z_0),\frac{\varepsilon}{2})}{\inf}\left[d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-d_\theta([y-z,y+z],[y_0-z_0,y_0+z_0])\right]$$
$$=d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-\underset{(y,z)\in B((y_0,z_0),\frac{\varepsilon}{2})}{\sup}d_\theta([y-z,y+z],[y_0-z_0,y_0+z_0])$$
$$> d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-\varepsilon = c.$$
Now this result will be extended to general sequences $\{U_n\}_{n \in \mathbb N}$. Consider the suprema and the infima radii reached in every neighborhood, namely,
$$r_n = \underset{(y,z)\in U_n}{\sup}d_\theta([y_0-z_0,y_0+z_0],[y-z,y+z]),$$
$$s_n = \underset{(y,z)\in U_n}{\inf}d_\theta([y_0-z_0,y_0+z_0],[y-z,y+z]).$$
It is known that $r_n \underset{n}{\longrightarrow} 0$, since $\{U_n\}_{n \in \mathbb N}$ shrinks to $\{(y_0,z_0)\}$. Moreover, $s_n \underset{n}{\longrightarrow} 0$ as $0\leq s_n \leq r_n$ for all $n\in \mathbb N$.
Let $\varepsilon$ be any nonnegative number. As $r_n\underset{n}{\longrightarrow} 0$, there exists $n_1 \in \mathbb N$ such that for all $n>n_1$, $r_n<\varepsilon$. Then, $U_n \subseteq B((y_0,z_0),r_n)$ and
$$\underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z])\geq \underset{(y,z)\in B((y_0,z_0),r_n)}{\inf}d_\theta(X(\omega),[y-z,y+z])$$
$$\geq d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-\underset{(y,z)\in B((y_0,z_0),r_n)}{\sup}d_\theta([y_0-z_0,y_0+z_0],[y-z,y+z])$$
$$>d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-\varepsilon.$$
Analogously, as $s_n\underset{n}{\longrightarrow} 0$, there exists $n_2 \in \mathbb N$ such that for all $n>n_2$, $s_n<\varepsilon$. Therefore, $U_n \supseteq B((y_0,z_0),s_n)$ and
$$\underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z]) \leq \underset{(y,z)\in B((y_0,z_0),s_n)}{\inf}d_\theta(X(\omega),[y-z,y+z])$$
$$\leq d_\theta(X(\omega),[y_0-z_0,y_0+z_0])+\underset{(y,z)\in B((y_0,z_0),s_n)}{\inf}d_\theta([y-z,y+z],[y_0-z_0,y_0+z_0])$$
$$<d_\theta(X(\omega),[y_0-z_0,y_0+z_0])+\varepsilon.$$
So for any $\varepsilon >0$, there exists $n_0=\max\{n_1,n_2\}$, such that for all $n>n_0$,
$$d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-\varepsilon < \underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z])$$ $$<d_\theta(X(\omega),[y_0-z_0,y_0+z_0])+\varepsilon,$$
that is to say,
$$\left|\underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[y_0-z_0,y_0+z_0])\right|<\varepsilon,$$
so the sequence $\left\{\underset{(y,z)\in U_n}{\inf}d_\theta(X(\omega),[y-z,y+z])\right\}_{n\in \mathbb N}$ converges to
\noindent $d_\theta(X(\omega),[y_0-z_0,y_0+z_0]).$\vspace{0.15cm}
\noindent \emph{(A-3)} Let $a$ be the measurable function (see (A-1)):\vspace{-0.2cm}
$$\begin{array}{rccll}
a: & \Omega & \longrightarrow & \mathbb R &\\
& \omega & \longmapsto & \displaystyle{d_\theta(X(\omega),[0,0])=\sqrt{({\rm mid}\, X(\omega))^2+\theta \cdot ({\rm spr}\, X(\omega))^2.}}\vspace{-0.1cm}
\end{array}$$
Fixed any arbitrary $(y,z)\in \mathbb R \times [0,\infty)$,
$$E[\rho(\omega,(y,z))-a(\omega)]^-$$$$=\int_\Omega -\min \{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0]),0\}\, dP(\omega)$$
$$=\int_{\scriptsize{\begin{array}{l}\{\omega \in \Omega\,:\, d_\theta(X(\omega),[0,0])\\>d_\theta(X(\omega),[y-z,y+z])\}\end{array}}}\big[d_\theta(X(\omega),[0,0])-d_\theta(X(\omega),[y-z,y+z])\big] dP(\omega).$$
By the triangular inequality,
$$\leq \int_{\scriptsize{\begin{array}{l}\{\omega \in \Omega\,:\, d_\theta(X(\omega),[0,0])\\>d_\theta(X(\omega),[y-z,y+z])\}\end{array}}}\big[d_\theta(X(\omega),[y-z,y+z])+d_\theta([y-z,y+z],[0,0])$$$$-d_\theta(X(\omega),[y-z,y+z])\big] dP(\omega)$$
$$= d_\theta([y-z,y+z],[0,0])\cdot P\big(\omega : d_\theta(X(\omega),[0,0])>d_\theta(X(\omega),[y-z,y+z])\big) < \infty.$$
Analogously,
$$E[\rho(\omega,(y,z))-a(\omega)]^+$$$$=\int_\Omega \max \{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0]),0\}\, dP(\omega)$$
$$=\int_{\scriptsize{\begin{array}{l}\{\omega \in \Omega\,:\, d_\theta(X(\omega),[0,0])\\\leq d_\theta(X(\omega),[y-z,y+z])\}\end{array}}}\big[d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])\big] \, dP(\omega).$$
By the triangle inequality, this is
$$\leq \int_{\scriptsize{\begin{array}{l}\{\omega \in \Omega\,:\, d_\theta(X(\omega),[0,0])\\ \leq d_\theta(X(\omega),[y-z,y+z])\}\end{array}}}\big[d_\theta(X(\omega),[0,0])+d_\theta([0,0],[y-z,y+z])$$$$-d_\theta(X(\omega),[0,0])\big] dP(\omega)$$
$$= d_\theta([0,0],[y-z,y+z])\cdot P\big(\omega : d_\theta(X(\omega),[0,0])\leq d_\theta(X(\omega),[y-z,y+z])\big) < \infty.$$
So the second inequality also holds for all $(y,z)\in \mathbb R\times [0,\infty)$ in this case.\vspace{0.15cm}
\noindent \emph{(A-4)} The $d_\theta$-median exists and is unique, so that \vspace{-0.1cm}
$$(\hbox {\rm mid}\, M_\theta[X], \hbox {\rm spr}\, M_\theta[X])=\arg \underset{(y,z)\in \mathbb R\times [0,\infty)}{\min}E\left[d_\theta(X(\omega),[y-z,y+z])\right]$$
$$=\arg \underset{(y,z)\in \mathbb R\times [0,\infty)}{\min} E\left[d_\theta(X(\omega),[y-z,y+z])\right]-E\left[d_\theta(X(\omega),[0,0])\right]$$$$=\arg \underset{(y,z)\in \mathbb R\times [0,\infty)}{\min} E\left[d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])\right]$$$$=\arg \underset{(y,z)\in \mathbb R\times [0,\infty)}{\min}\gamma((y,z))$$\vspace{0.1cm}
and $(y_0,z_0):=({\hbox {\rm mid}}\, M_\theta[X], {\hbox {\rm spr}}\, M_\theta[X])$ fulfills this assumption.\vspace{0.15cm}
\noindent \emph{(A-5)} There is a continuous function $b((y,z))>0$ \vspace{-0.1cm}
$$\begin{array}{rccll}
b: & \mathbb R\times [0,\infty) & \longrightarrow & \mathbb R &\\
& (y,z) & \longmapsto & \displaystyle{d_\theta([y-z,y+z],[0,0])+1}\vspace{-0.1cm}
\end{array}$$
such that
\begin{itemize}
\item for the integrable function $h(\omega):=-1$, $$\underset{(y,z)\in \mathbb R\times [0,\infty)}{\inf}\frac{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])}{d_\theta([y-z,y+z],[0,0])+1}\geq -1$$
since, by the triangle inequality,
$$\underset{(y,z)\in \mathbb R\times [0,\infty)}{\inf}\frac{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])}{d_\theta([y-z,y+z],[0,0])+1}$$
$$\geq \underset{(y,z)\in \mathbb R\times [0,\infty)}{\inf}\frac{d_\theta(X(\omega),[0,0])-d_\theta([y-z,y+z],[0,0])-d_\theta(X(\omega),[0,0])}{d_\theta([y-z,y+z],[0,0])+1}$$
$$=\underset{(y,z)\in \mathbb R\times [0,\infty)}{\inf}\frac{-d_\theta([y-z,y+z],[0,0])}{d_\theta([y-z,y+z],[0,0])+1}\geq -1.$$\vspace{0.15cm}
\item the following condition is satisfied: $$\underset{(y,z)\rightarrow \infty}{\liminf} b((y,z))>\gamma((y_0,z_0)).$$\vspace{0.15cm}
Let $\{(y_n,z_n)\}\subset \mathbb R\times [0,\infty)$ be any sequence with $(y_n,z_n)\underset{n}{\longrightarrow}\infty$ (i.e., $d_\theta([y_n-z_n,y_n+z_n],[0,0])\underset{n}{\longrightarrow}\infty$) and set $$M=E\left[d_\theta(X(\omega),[y_0-z_0,y_0+z_0])-d_\theta(X(\omega),[0,0])\right]=\gamma((y_0,z_0))\in \mathbb R,$$ where $(y_0,z_0)$ is the minimizer found in (A-4). Then, there exists $n_0\in \mathbb N$ such that for all $n\geq n_0$,
$$d_\theta([y_n-z_n,y_n+z_n],[0,0])>M.$$
So, for all $n\geq n_0$,
$$\underset{k\geq n}{\inf}b((y_k,z_k))=\underset{k\geq n}{\inf}\left(d_\theta([y_k-z_k,y_k+z_k],[0,0])+1\right)\geq M+1.$$
Finally,
$$\underset{n\rightarrow \infty}{\liminf}b((y_n,z_n))=\underset{n\rightarrow \infty}{\lim}(\underset{k\geq n}{\inf} b((y_k,z_k)))\geq M+1>M=\gamma((y_0,z_0)).$$
\item it is also fulfilled that: $$E\left[\underset{(y,z)\rightarrow \infty}{\liminf} \frac{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])}{b((y,z))}\right]\geq 1.$$
Let us show that $$\underset{(y,z)\rightarrow \infty}{\liminf} \frac{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])}{d_\theta([y-z,y+z],[0,0])+1}\geq 1,$$
from which the result follows.
$$\underset{(y,z)\rightarrow \infty}{\liminf} \frac{d_\theta(X(\omega),[y-z,y+z])-d_\theta(X(\omega),[0,0])}{d_\theta([y-z,y+z],[0,0])+1}$$
$$=\underset{n\rightarrow \infty}{\lim}\left(\underset{k\geq n}{\inf} \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\right)$$ for any fixed $\omega\in\Omega$. The sequence $$\left\{\underset{k\geq n}{\inf} \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\right\}_{n\in\mathbb N}$$ is monotonically increasing and bounded above by $1$: for all $k\in \mathbb N$, by the triangle inequality,
$$\frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}$$ $$\leq \frac{d_\theta([y_k-z_k,y_k+z_k],[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\leq 1.$$
So it converges to its supremum:
$$\underset{n\rightarrow \infty}{\lim}\left(\underset{k\geq n}{\inf} \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\right)$$
$$=\underset{n}{\sup}\left(\underset{k\geq n}{\inf} \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\right)\,.$$
Let us finally show that this supremum is at least $1$. By \emph{reductio ad absurdum}, suppose that
$$\underset{n}{\sup}\left(\underset{k\geq n}{\inf} \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\right)=1-\varepsilon,$$
for some $\varepsilon >0$. This yields a contradiction, since there exists $n^*\in \mathbb N$ such that
$$\underset{k\geq n^*}{\inf} \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}>1-\varepsilon$$
since for all $k\geq n^*$, $$\frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\geq 1-\frac{\varepsilon}{2}>1-\varepsilon$$
as we will show now. Recall that $(y_n,z_n)\underset{n}{\longrightarrow}\infty$, so for all $M\in \mathbb R$, there exists $n^*\in \mathbb N$ such that for all $n\geq n^*$, $d_\theta([y_n-z_n,y_n+z_n],[0,0])>M$. Therefore,
$$d_\theta([y_n-z_n,y_n+z_n],X(\omega))\geq d_\theta([y_n-z_n,y_n+z_n],[0,0])-d_\theta(X(\omega),[0,0])$$$$>M-d_\theta(X(\omega),[0,0]).$$
Taking $M:=\frac{2}{\varepsilon}-1+\frac{4}{\varepsilon}\cdot d_\theta(X(\omega),[0,0])\in \mathbb R$ (for the fixed arbitrary $\omega\in\Omega$), we can easily check that $1-\frac{\varepsilon}{2}$ is a lower bound of the sequence $$\left\{ \frac{d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])}{d_\theta([y_k-z_k,y_k+z_k],[0,0])+1}\right\}_{k\geq n^*}.$$
For any $k\geq n^*$,
$$d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])$$
$$=\left(1-\frac{\varepsilon}{2}\right)d_\theta(X(\omega),[y_k-z_k,y_k+z_k])+\frac{\varepsilon}{2}d_\theta(X(\omega),[y_k-z_k,y_k+z_k])$$
$$-d_\theta(X(\omega),[0,0])$$
$$\geq \left(1-\frac{\varepsilon}{2}\right)d_\theta([y_k-z_k,y_k+z_k],[0,0])-\left(1-\frac{\varepsilon}{2}\right)d_\theta(X(\omega),[0,0])$$
$$+\frac{\varepsilon}{2}d_\theta(X(\omega),[y_k-z_k,y_k+z_k])-d_\theta(X(\omega),[0,0])$$
$$=\left(1-\frac{\varepsilon}{2}\right)d_\theta([y_k-z_k,y_k+z_k],[0,0])+\frac{\varepsilon}{2}d_\theta(X(\omega),[y_k-z_k,y_k+z_k])$$
$$-\left(2-\frac{\varepsilon}{2}\right)d_\theta(X(\omega),[0,0])$$
$$>\left(1-\frac{\varepsilon}{2}\right)d_\theta([y_k-z_k,y_k+z_k],[0,0])+\frac{\varepsilon}{2}\left(\frac{2}{\varepsilon}-1+\Big(\frac{4}{\varepsilon}-1\Big)d_\theta(X(\omega),[0,0])\right)$$
$$-\left(2-\frac{\varepsilon}{2}\right)d_\theta(X(\omega),[0,0])=\left(1-\frac{\varepsilon}{2}\right)d_\theta([y_k-z_k,y_k+z_k],[0,0])+1-\frac{\varepsilon}{2}$$
$$\hspace{1.9cm}=\left(1-\frac{\varepsilon}{2}\right)\big(d_\theta([y_k-z_k,y_k+z_k],[0,0])+1\big).\hspace{2.4cm}\square$$
\end{itemize}
\section{Concluding remarks}
\label{author_sec:8}
This paper complements the study of the properties of the
$d_{\theta}$-median as a robust estimator of the center of a random interval by establishing its strong consistency, which is one of the most basic and important properties of an estimator. We obtained this result by showing that all the sufficient conditions of Huber (\citeyear{huber}) are fulfilled. These results open the door to further developing robust statistical inference for random intervals based on the $d_{\theta}$-median, such as the development of hypothesis testing procedures.
\begin{acknowledgements}
The authors are grateful to Mar\'ia \'Angeles Gil for her helpful suggestions to improve this paper. The research by Beatriz Sinova was partially supported by the Spanish Ministry of Science and Innovation Grant MTM2009-09440-C02-01. She has also been supported by the Ayuda del Programa de FPU AP2009-1197 from the Spanish Ministry of Education, the Ayuda para Estancias Breves del Programa FPU EST12/00344, an Ayuda de Investigaci\'on 2011 from the Fundaci\'on Banco Herrero, and three Short Term Scientific Missions associated with the COST Action IC0702. The research by Stefan Van Aelst was supported by a grant of the Fund for Scientific Research-Flanders (FWO-Vlaanderen) and by IAP research network grant nr. P7/06 of the Belgian government (Belgian Science Policy). Their financial support is gratefully acknowledged.
\end{acknowledgements}
\section{Introduction}
A specific model of nonlinear electrodynamics was proposed by Born
and Infeld in 1934 \cite{BI}, founded on a principle of finiteness,
namely, that a satisfactory theory should avoid physical
quantities becoming infinite. The Born-Infeld model was devised mainly to remedy the fact that the standard picture of a point particle possesses an infinite self-energy, and consisted of
placing an upper limit on the electric field strength and
considering a finite electron radius.
Later, Pleba\'{n}ski presented other examples of nonlinear
electrodynamic Lagrangians \cite{Pleb}, and showed that the
Born-Infeld theory satisfies physically acceptable requirements. A
further discussion of these properties can be found in Ref.
\cite{Birula}.
Furthermore, a recent revival of nonlinear electrodynamics has
been verified, mainly due to the fact that these theories appear
as effective theories at different levels of string/M-theory, in
particular, in D$p-$branes and supersymmetric extensions, and
non-Abelian generalizations (see Ref. \cite{Witten} for a review).
Much interest in nonlinear electrodynamic theories has also been
aroused in applications to cosmological models, in particular, in
explaining the inflationary epoch and the late accelerated
expansion of the universe \cite{Novello,Moniz}. In this
cosmological context, an inhomogeneous and anisotropic nonsingular
model for the universe, with a Born-Infeld field was studied
\cite{Sal-Breton}, the effects produced by nonlinear
electrodynamics in spacetimes conformal to Bianchi metrics were
further analyzed \cite{Sal-Breton2}, and geodesically complete
Bianchi spaces were also found \cite{Sal-Breton3}. Homogeneous and
isotropic cosmological solutions governed by the non-abelian
Born-Infeld Lagrangian \cite{DGZZ}, and anisotropic cosmological
spacetimes, in the presence of a positive cosmological constant
\cite{Vollick}, were also extensively analyzed.
In fact, it is interesting to note that the first {\it exact}
regular black hole solution in general relativity was found within
nonlinear electrodynamics \cite{Garcia,Garcia2}, where the source
is a nonlinear electrodynamic field satisfying the weak energy
condition, and the Maxwell field is reproduced in the weak limit.
It was also shown that general relativity coupled to nonlinear
electrodynamics leads to regular magnetic black holes and
monopoles \cite{Bronnikov1}, and regular electrically charged
structures, possessing a regular de Sitter center
\cite{Dymnikova}, and the respective stability of these solutions
was further explored in Ref. \cite{Breton-BH}.
Recently, an alternative model to black holes was proposed, in
particular, the gravastar picture \cite{gravastar}, where there is
an effective phase transition at or near where the event horizon
is expected to form, and the interior is replaced by a de Sitter
condensate. The gravastar model has no singularity at the origin
and no event horizon, as its rigid surface is located at a radius
slightly greater than the Schwarzschild radius. In this context, a
gravastar model within nonlinear electrodynamics, where the
interior de Sitter solution is substituted with a Born-Infeld
Lagrangian, was also found. This solution was denoted as a
Born-Infeld phantom gravastar \cite{Bilic}.
Relatively to wormhole spacetimes \cite{Morris,Visser}, an
important and intriguing challenge is the quest to find a
realistic matter source that will support these exotic geometries.
The latter are supported by {\it exotic matter}, involving a
stress energy tensor that violates the null energy condition
(NEC), i.e., $T_{\mu\nu}k^\mu k^\nu \geq 0$, where $T_{\mu\nu}$ is
the stress-energy tensor and $k^{\mu}$ any null vector. Several
candidates have been proposed in the literature, for instance, to
cite a few, null energy condition violating massless conformally
coupled scalar fields supporting self-consistent classical
wormholes \cite{barcelovisserPLB99}; the extension of the
Morris-Thorne wormhole with the inclusion of a cosmological
constant \cite{LLQ-PRD}; and more recently, the theoretical
realization that wormholes may be supported by exotic cosmic
fluids, responsible for the accelerated expansion of the universe,
such as phantom energy \cite{phantomWH} and the generalized
Chaplygin gas \cite{ChaplyginWH}. It is also interesting to note
that an effective wormhole geometry for an electromagnetic wave
can appear as a result of the nonlinear character of the field
\cite{Novello2}.
In Ref \cite{Arell-Lobo}, evolving $(2+1)$ and $(3+1)-$dimensional
wormhole spacetimes, conformally related to the respective static
geometries, within the context of nonlinear electrodynamics were
also explored. It was found that for the specific
$(3+1)-$dimensional spacetime, the Einstein field equation imposes
a contracting wormhole solution and the obedience of the weak
energy condition. Furthermore, in the presence of an electric
field, the latter presents a singularity at the throat. However, a
regular solution was found for a pure magnetic field. For the
$(2+1)-$dimensional case, it was also found that the physical
fields are singular at the throat. Thus, taking into account the
principle of finiteness, that a satisfactory theory should avoid
physical quantities becoming infinite, one may rule out evolving
$(3+1)-$dimensional wormhole solutions, in the presence of an
electric field, and the $(2+1)-$dimensional case coupled to
nonlinear electrodynamics.
In this work we shall be interested in exploring the possibility
that nonlinear electrodynamics may support static, spherically
symmetric and stationary, axisymmetric traversable wormhole
geometries. In fact, Bronnikov \cite{Bronnikov1,Bronnikov2} showed that this is not possible for static and spherically symmetric $(3+1)-$dimensional wormholes, and we shall briefly reproduce and
confirm this result. We further consider the $(2+1)-$dimensional
case, which proves to be extremely interesting, as the principle
of finiteness is imposed, in order to obtain regular physical
fields at the throat. We shall next analyze the $(2+1)$ and
$(3+1)-$dimensional stationary and axisymmetric case \cite{Teo}
coupled to nonlinear electrodynamics.
This paper is outlined in the following manner: In Sec.
\ref{StaticWH} we analyze $(2+1)$ and $(3+1)-$dimensional static
and spherically symmetric wormholes coupled with nonlinear
electrodynamics, and in Sec. \ref{RotWH} rotating traversable
wormholes in the context of nonlinear electrodynamics are studied.
In Sec. \ref{Conclusion} we conclude.
\section{Static and spherically symmetric
wormholes}\label{StaticWH}
\subsection{$(2+1)-$dimensional wormhole}
In this Section, we shall be interested in $(2+1)-$dimensional
general relativity coupled to nonlinear electrodynamics. We will
use geometrized units throughout this work, i.e., $G=c=1$. The
respective action is given by
\begin{equation}
S=\int \sqrt{-g}\left[\frac{R}{16\pi}+L(F)\right]\,d^3x \,,
\end{equation}
where $R$ is the Ricci scalar and $L(F)$ is a gauge-invariant
electromagnetic Lagrangian, which we shall leave unspecified at
this stage, depending on the invariant $F$ given by
$F=\frac{1}{4}F^{\mu\nu}F_{\mu\nu}$.
$F_{\mu\nu}=A_{\nu,\mu}-A_{\mu,\nu}$ is the electromagnetic field.
Note that the factor $1/16\pi$, in the action, is maintained to
keep the parallelism with $(3+1)-$dimensional
theory~\cite{Garcia4}.
In Einstein-Maxwell theory, the Lagrangian is defined as
$L(F)\equiv -F/4\pi$, but here we consider more general choices of
electromagnetic Lagrangians, however, depending on the single
invariant $F$. It is perhaps important to emphasize that we do not
consider the case where $L$ depends on the invariant $G \equiv
\frac{1}{4}F_{\mu\nu}{}^*F^{\mu\nu}$, where $*$ denotes the Hodge
dual with respect to $g_{\mu\nu}$.
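As a concrete example of such a Lagrangian (written here with a maximal field strength parameter $\beta$, in a form chosen to be consistent with the weak-field limit; conventions vary in the literature), the Born-Infeld model \cite{BI} corresponds to
$$L(F)=\frac{\beta^2}{4\pi}\left(1-\sqrt{1+\frac{2F}{\beta^2}}\right)\,,$$
which reduces to the Einstein-Maxwell Lagrangian $L(F)=-F/4\pi$ for weak fields, $|F|\ll \beta^2$.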
Varying the action with respect to the gravitational field
provides the Einstein tensor
\begin{equation}
G_{\mu\nu}=8\pi
(g_{\mu\nu}L-F_{\mu\alpha}F_{\nu}{}^{\alpha}\,L_{F}) \,,
\end{equation}
where $L_F\equiv d L/d F$. Clearly, the stress-energy tensor is
given by
\begin{equation}
T_{\mu\nu}=g_{\mu\nu}\,L(F)-F_{\mu\alpha}F_{\nu}{}^{\alpha}\,L_{F}\,,
\label{stress-energy}
\end{equation}
where the Einstein field equation is defined as $G_{\mu\nu}=8\pi
T_{\mu\nu}$.
The variation of the action with respect to the electromagnetic
potential $A_\mu$, yields the electromagnetic field equations
\begin{eqnarray}
\left(F^{\mu\nu}\,L_{F}\right)_{;\mu}&=&0 \,,
\label{em-field}
\end{eqnarray}
where the semi-colon denotes a covariant derivative.
The spacetime metric representing a spherically symmetric and
static $(2+1)-$dimensional wormhole is given by
\begin{equation}
ds^2=-e ^{2\Phi(r)}\,dt^2+\frac{dr^2}{1- b(r)/r}+r^2 \, d\phi ^2
\label{metricwormhole}\,,
\end{equation}
where $\Phi(r)$ and $b(r)$ are functions of the radial coordinate,
$r$. $\Phi(r)$ is denoted as the redshift function, for it is
related to the gravitational redshift; $b(r)$ is called the form
function \cite{Morris}. The radial coordinate has a range that
increases from a minimum value at $r_0$, corresponding to the
wormhole throat, to $\infty$.
For the wormhole to be traversable, one must demand the absence of
event horizons, which are identified as the surfaces with
$e^{2\Phi}\rightarrow 0$, so that $\Phi(r)$ must be finite
everywhere. A fundamental property of wormhole physics is the
flaring out condition, which is deduced from the mathematics of
embedding, and is given by $(b-b'r)/b^2>0$ \cite{Morris,Hochberg}.
Note that at the throat $b(r_0)=r=r_0$, the flaring out condition
reduces to $b'(r_0)<1$. The condition $(1-b/r)>0$ is also imposed.
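As a simple illustration (using a form function that reappears below), $b(r)=r_0^2/r$ meets all of these requirements for $r>r_0$:
$$b'(r)=-\frac{r_0^2}{r^2}\,,\qquad b'(r_0)=-1<1\,,\qquad \frac{b-b'r}{b^2}=\frac{2r}{r_0^2}>0\,,\qquad 1-\frac{b}{r}=1-\frac{r_0^2}{r^2}>0\,.$$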
Taking into account the symmetries of the geometry, we shall
consider the following electromagnetic tensor
\begin{equation}
F_{\mu\nu}=E(r)(\delta^t_\mu \delta^r_\nu-\delta^r_\mu
\delta^t_\nu)+B(r)(\delta^\phi_\mu \delta^r_\nu-\delta^r_\mu
\delta^\phi_{\nu}) \label{em-tensor}\,.
\end{equation}
Note that the only non-zero terms for the electromagnetic tensor
are the following $F_{tr}=-F_{rt}=E(r)$ and $F_{\phi
r}=-F_{r\phi}=B(r)$.
The invariant $F=F^{\mu\nu}F_{\mu\nu}/4$ is given by
\begin{eqnarray}
F=-\frac{1}{2}\;\left(1-\frac{b}{r}\right)\,\left[e^{-2\Phi}\,
E^2(r)-\frac{1}{r^2}\,B^2(r)\right]
\,.
\end{eqnarray}
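Note, for later reference, that in the purely magnetic case ($E=0$) this invariant reduces to
$$F=\frac{1}{2r^2}\left(1-\frac{b}{r}\right)B^2(r)\,,$$
which vanishes at the throat for a regular $B(r)$.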
The electromagnetic field equation, Eq. (\ref{em-field}), provides
the following relationships
\begin{eqnarray}
e^{-\Phi}\,\left(1-\frac{b}{r}\right)^{1/2}E\,L_{F}&=&\frac{C_e}{r}
\,,
\label{emf:electric} \\
\frac{1}{r}\,\left(1-\frac{b}{r}\right)^{1/2}B\,L_{F}&=&C_m\,e^{-\Phi}
\,. \label{emf:magnetic}
\end{eqnarray}
where the constants of integration $C_e$ and $C_m$ are related to
the electric and magnetic charge, $q_e$ and $q_m$, respectively.
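These relations follow (as a brief sketch) from writing Eq. (\ref{em-field}) as $\left(\sqrt{-g}\,F^{\mu\nu}L_{F}\right)_{,\mu}=0$, with $\sqrt{-g}=e^{\Phi}\left(1-b/r\right)^{-1/2}r$; the $\nu=t$ component then gives
$$\partial_r\!\left[\sqrt{-g}\,F^{rt}L_{F}\right]=0
\;\;\Longrightarrow\;\;
r\,e^{-\Phi}\left(1-\frac{b}{r}\right)^{1/2}E\,L_{F}={\rm const}\,,$$
which is Eq. (\ref{emf:electric}) with the constant identified as $C_e$; the $\nu=\phi$ component yields Eq. (\ref{emf:magnetic}) analogously.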
The mathematical analysis and the physical interpretation will be
simplified using a set of orthonormal basis vectors. These may be
interpreted as the proper reference frame of a set of observers
who remain at rest in the coordinate system $(t,r,\phi)$, with
$(r,\phi)$ fixed.
Now, the non-zero components of the Einstein tensor,
$G_{\hat{\mu}\hat{\nu}}$, in the orthonormal reference frame, are
given by
\begin{eqnarray}
G_{\hat{t}\hat{t}}&=&\;\frac{b'r-b}{2r^3} \label{Gtt}\,,\\
G_{\hat{r}\hat{r}}&=&\left(1-\frac{b}{r}\right) \frac{\Phi'}{r} \label{Grr}\,,\\
G_{\hat{\phi}\hat{\phi}}&=& \left(1-\frac{b}{r}\right)\left[\Phi
''+ (\Phi')^2- \frac{b'r-b}{2r(r-b)}\Phi' \right] \label{Gpp}\,.
\end{eqnarray}
The Einstein field equation, $G_{\hat{\mu}\hat{\nu}}=8\pi
\,T_{\hat{\mu}\hat{\nu}}$, requires that the Einstein tensor be
proportional to the stress-energy tensor, so that in the
orthonormal basis the latter must have an identical algebraic
structure as the Einstein tensor components,
$G_{\hat{\mu}\hat{\nu}}$, i.e., Eqs. (\ref{Gtt})-(\ref{Gpp}).
Recall that a fundamental condition in wormhole physics is the
violation of the NEC, which is defined as
$T_{\mu\nu}k^{\mu}k^{\nu} \geq 0$, where $k^\mu$ is {\it any} null
vector. Considering the orthonormal reference frame with
$k^{\hat{\mu}}=(1,\pm 1,0)$, we have
\begin{equation}\label{NECthroat}
T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}=
\frac{1}{8\pi}\,\left[\frac{b'r-b}{r^3}+
\left(1-\frac{b}{r}\right) \frac{\Phi '}{r} \right] \,.
\end{equation}
Using the flaring out condition of the throat, $(b-b'r)/2b^2>0$
\cite{Morris,Visser}, and considering the finite character of
$\Phi(r)$, we verify that evaluated at the throat the NEC is
violated, i.e.,
$T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}<0$. Matter that
violates the NEC is denoted as {\it exotic matter}.
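Explicitly, since $b(r_0)=r_0$, the second term in Eq. (\ref{NECthroat}) vanishes at the throat and the flaring out condition yields
$$T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}\Big|_{r_0}=\frac{b'(r_0)-1}{8\pi\,r_0^2}<0\,.$$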
The only non-zero components of $T_{\hat{\mu}\hat{\nu}}$, taking
into account Eq. (\ref{stress-energy}), are
\begin{eqnarray}
T_{\hat{t}\hat{t}}&=&-L-e^{-2\Phi}\left(1-\frac{b}{r}\right)\,E^2\,L_{F}
\,,
\label{TttNLE} \\
T_{\hat{r}\hat{r}}&=&L+e^{-2\Phi}\left(1-\frac{b}{r}\right)\,E^2\,L_{F}
\nonumber \\
&&-\left(1-\frac{b}{r}\right)\,\frac{B^2}{r^2}\,L_{F} \,,
\label{TrrNLE} \\
T_{\hat{\phi}\hat{\phi}}&=&L-\left(1-\frac{b}{r}\right)\,\frac{B^2}{r^2}\,L_{F}
\,. \label{TppNLE}
\end{eqnarray}
We need to impose the conditions
$|e^{-2\Phi}(1-b/r)E^2L_{F}|<\infty$ and
$|(1-b/r)B^2L_{F}|<\infty$ as $r\rightarrow r_0$, to ensure the
regularity of the stress-energy tensor components.
Note that the Lagrangian may be obtained from the following
relationship:
$L=T_{\hat{\phi}\hat{\phi}}-T_{\hat{t}\hat{t}}-T_{\hat{r}\hat{r}}$,
and using the Einstein field equation, is given by
\begin{eqnarray}\label{Lag}
L&=&\frac{1}{8\pi} \Bigg\{\left(1-\frac{b}{r}\right)\Big[\Phi ''+
(\Phi')^2-\frac{\Phi'}{r}
\nonumber \\
&&- \frac{b'r-b}{2r(r-b)}\Phi' \Big] -\frac{b'r-b}{2r^3} \Bigg\}
\,.
\end{eqnarray}
However, from the metric (\ref{metricwormhole}) we verify the
following zero components of the Einstein tensor:
$G_{\hat{t}\hat{r}}=0$, $G_{\hat{r}\hat{\phi}}=0$ and
$G_{\hat{t}\hat{\phi}}=0$. Thus, through the Einstein field
equation, a further restriction may be obtained from
$T_{\hat{t}\hat{\phi}}=0$, i.e.,
\begin{equation}
T_{\hat{t}\hat{\phi}}=-\frac{1}{r}E(r)B(r)e^{-\Phi}(1-b/r)\,L_F
\,,
\end{equation}
which imposes that $E(r)=0$ or $B(r)=0$, considering the
non-trivial case of $L_{F}$ non-zero. It is rather interesting that $E(r)$ and $B(r)$ cannot coexist in the present $(2+1)-$dimensional case.
For the specific case of $B(r)=0$, from Eqs.
(\ref{Gtt})-(\ref{Grr}) and Eqs. (\ref{TttNLE})-(\ref{TrrNLE}), we
verify the following condition
\begin{equation}
\Phi'=-\frac{b'r-b}{2r(r-b)}\,,
\end{equation}
which may be integrated to yield the solution
\begin{equation}
e^{2\Phi}=\left(1-\frac{b}{r}\right) \,.
\end{equation}
This corresponds to a non-traversable wormhole solution, as it
possesses an event horizon at the throat, $r=r_0$.
Now, consider the case of $E(r)=0$ and $B(r)\neq 0$. For this case
we have $T_{\hat{r}\hat{r}}=T_{\hat{\phi}\hat{\phi}}$, and the
respective Einstein tensor components, Eqs.
(\ref{Grr})-(\ref{Gpp}), provide the following differential
equation
\begin{equation}\label{diffeq}
\frac{\Phi'}{r}=\Phi ''+ (\Phi')^2- \frac{b'r-b}{2r(r-b)}\Phi' \,.
\end{equation}
Considering a specific choice of $b(r)$ or $\Phi(r)$, one may, in
principle, obtain a solution for the geometry. Equation
(\ref{diffeq}) may be formally integrated to yield the following
general solution
\begin{equation}\label{gen:Phi}
\Phi(r)=\ln\left[C_1 \int
r\left(1-\frac{b(r)}{r}\right)^{-1/2}\,dr + C_2 \right] \,,
\end{equation}
where $C_1$ and $C_2$ are constants of integration.
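As a quick consistency check, writing $\Phi=\ln u$ turns Eq. (\ref{diffeq}) into $u''/u'=1/r+\frac{b'r-b}{2r(r-b)}$, and indeed
$$u'=C_1\, r\left(1-\frac{b}{r}\right)^{-1/2}\;\Longrightarrow\;
\frac{u''}{u'}=\frac{1}{r}-\frac{1}{2}\,\frac{d}{dr}\ln\left(1-\frac{b}{r}\right)=\frac{1}{r}+\frac{b'r-b}{2r(r-b)}\,.$$
For instance, consider a constant form function, $b(r)=r_0$, so that from Eq.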
consider a constant form function, $b(r)=r_0$, so that from Eq.
(\ref{gen:Phi}), we deduce
\begin{eqnarray}\label{Phi:sol1}
\Phi(r)&=&\ln\Big\{C_2+\frac{C_1}{8}\Big[2\sqrt{r(r-r_0)}\,(2r+3r_0)
\nonumber \\
&&+3r_0^2\ln{\left(r-r_0/2+\sqrt{r(r-r_0)}\right)}\Big]\Big\} \,,
\end{eqnarray}
which at the throat reduces to
\begin{equation}
\Phi(r_0)=\ln\left[C_2+\frac{3C_1r_0^2}{8}\ln\left(\frac{r_0}{2}\right)\right]\,.
\end{equation}
To obtain a regular solution at the throat, we impose the
condition: $C_2+(3C_1r_0^2/8)\ln(r_0/2)>0$.
Consider for instance $b(r)=r_0^2/r$, then Eq. (\ref{gen:Phi})
provides the solution
\begin{eqnarray}\label{Phi:sol2}
\Phi(r)&=&\ln\Bigg\{C_2+\frac{C_1}{2}\Bigg[r\sqrt{r^2-r_0^2}
\nonumber \\
&&+r_0^2\ln{\left(r+\sqrt{r^2-r_0^2}\right)}\Bigg]\Bigg\} \,,
\end{eqnarray}
which at the throat, reduces to
\begin{equation}
\Phi(r_0)=\ln\left[C_2+\frac{C_1r_0^2}{2}\ln(r_0)\right]\,.
\end{equation}
Once again, to ensure a regular solution, we need to impose the
following condition: $C_2+(C_1r_0^2/2)\ln(r_0)>0$. Note that these
specific solutions are not asymptotically flat, however, they may
be matched to an exterior vacuum spacetime, much in the spirit of
Refs. \cite{LLQ-PRD,wormhole-shell}.
However, a subtlety needs to be pointed out. Consider Eqs.
(\ref{Gtt})-(\ref{Grr}) and (\ref{TttNLE})-(\ref{TrrNLE}), from
which we deduce
\begin{equation}\label{dphi}
\Phi'=-\frac{b'r-b}{2r(r-b)}-\frac{8\pi B^2}{r} \,L_{F}\,.
\end{equation}
Now, taking into account Eq. (\ref{emf:magnetic}), we find the
following relationships for the magnetic field, $B(r)$, and for
$L_{F}$
\begin{equation}\label{magnetic-field}
B(r)=-\frac{e^{\Phi}}{8\pi
C_m}\left[\frac{b'r-b}{2r^2(1-b/r)^{1/2}}+\left(1-\frac{b}{r}\right)^{1/2}\Phi'
\right] \,,
\end{equation}
and
\begin{equation}\label{magnetic-LF}
L_{F}=-\frac{8\pi C_m^2 r\,
e^{-2\Phi}}{\frac{b'r-b}{2r^2}+(1-b/r)\,\Phi'} \,,
\end{equation}
respectively. Considering that the redshift $\Phi$ be finite
throughout the spacetime, one immediately verifies that the
magnetic field $B(r)$ is singular at the throat, which is
transparent considering the first term in square brackets in the
right hand side of Eq. (\ref{magnetic-field}). This is an
extremely troublesome aspect of the geometry, as in order to
construct a traversable wormhole, singularities appear in the
physical fields. This aspect is in contradiction to the model
construction of nonlinear electrodynamics, founded on a principle
of finiteness, that a satisfactory theory should avoid physical
quantities becoming infinite \cite{BI}. Thus, one should impose
that these physical quantities be non-singular, and in doing so,
we verify that the general solution corresponds to a
non-traversable wormhole geometry. This may be verified by
integrating Eq. (\ref{dphi}), which yields the following general
solution
\begin{equation}
e^{2\Phi}=\left(1-\frac{b}{r} \right)\exp \left(-16\pi\int
\frac{B^2}{r}\,L_{F} \,dr \right) \,.
\end{equation}
We have considered the factor $|B^2L_{F}|<\infty$ as $r\rightarrow
r_0$, to ensure the regularity of the term in the exponential.
However, this solution corresponds to a non-traversable wormhole
solution, as it possesses an event horizon at the throat, $b=r=r_0$.
One may also prove the non-existence of $(2+1)-$dimensional static
and spherically symmetric traversable wormholes in nonlinear
electrodynamics, through an analysis of the NEC violation. In the
context of nonlinear electrodynamics, and taking into account Eqs.
(\ref{TttNLE})-(\ref{TrrNLE}), we verify
\begin{equation}
T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}
=-\left(1-\frac{b}{r}\right)\,\frac{B^2}{r^2}\,L_{F}
\,,
\end{equation}
which evaluated at the throat, considering the regularity of $B$
and $L_{F}$, is identically zero, i.e.,
$T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}|_{r_0} =0$. The
NEC is not violated at the throat, so that the flaring-out
condition is not satisfied, showing, therefore, the non-existence
of $(2+1)-$dimensional static and spherically symmetric
traversable wormholes in nonlinear electrodynamics.
\subsection{$(3+1)-$dimensional wormhole}
The action of $(3+1)-$dimensional general relativity coupled to
nonlinear electrodynamics is given by
\begin{equation}
S=\int \sqrt{-g}\left[\frac{R}{16\pi}+L(F)\right]\,d^4x \,,
\end{equation}
where $R$ is the Ricci scalar and the gauge-invariant
electromagnetic Lagrangian, $L(F)$, depends on a single invariant
$F$ \cite{Pleb,Pleb2}, defined by $F\equiv
\frac{1}{4}F^{\mu\nu}F_{\mu\nu}$, as before. We shall not consider
the case where $L$ depends on the invariant $G \equiv
\frac{1}{4}F_{\mu\nu}{}^*F^{\mu\nu}$, as mentioned in the
$(2+1)-$dimensional case.
Varying the action with respect to the gravitational field
provides the Einstein field equations $G_{\mu\nu}=8\pi
T_{\mu\nu}$, where the stress-energy tensor is given by
\begin{equation}
T_{\mu\nu}=g_{\mu\nu}\,L(F)-F_{\mu\alpha}F_{\nu}{}^{\alpha}\,L_{F}\,.
\label{4dim-stress-energy}
\end{equation}
Taking into account the symmetries of the geometry, the only
non-zero compatible terms for the electromagnetic tensor are
$F_{tr}=E(x^\mu)$ and $F_{\theta\phi}=B(x^\mu)$.
The spacetime metric representing a spherically symmetric and
static $(3+1)-$dimensional wormhole takes the form \cite{Morris}
\begin{equation}
ds^2=-e ^{2\Phi(r)}\,dt^2+\frac{dr^2}{1- b(r)/r}+r^2 \,(d\theta
^2+\sin ^2{\theta} \, d\phi ^2) \label{4metricwormhole}\,.
\end{equation}
The non-zero components of the Einstein tensor, given in an
orthonormal reference frame, are given by
\begin{eqnarray}
G_{\hat{t}\hat{t}}&=& \,\frac{b'}{r^2} \label{rhoWH} \,, \\
G_{\hat{r}\hat{r}}&=& -\frac{b}{r^3}+2 \left(1-\frac{b}{r}
\right) \frac{\Phi'}{r} \label{prWH} \,, \\
G_{\hat{\phi}\hat{\phi}}&=&G_{\hat{\theta}\hat{\theta}}=
\left(1-\frac{b}{r}\right)\Bigg[\Phi ''+ (\Phi')^2-
\frac{b'r-b}{2r(r-b)}\Phi'
\nonumber \\
&&\hspace{1.2cm}-\frac{b'r-b}{2r^2(r-b)}+\frac{\Phi'}{r} \Bigg]
\label{ptWH}\,.
\end{eqnarray}
It is a simple matter to prove that for this geometry, the NEC is
identical to Eq. (\ref{NECthroat}), and is also violated at the
throat, i.e.,
$T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}<0$.
The relevant components for the stress-energy tensor, regarding
the analysis of the NEC, are the following
\begin{eqnarray}
T_{\hat{t}\hat{t}}&=&-L-e^{-2\Phi}\left(1-\frac{b}{r}\right)\,E^2\,L_{F}
\,,
\label{4TttNLE} \\
T_{\hat{r}\hat{r}}&=&L+e^{-2\Phi}\left(1-\frac{b}{r}\right)\,E^2\,L_{F}
\,.
\label{4TrrNLE}
\end{eqnarray}
Analogously with the $(2+1)-$dimensional case, we will consider
$|e^{-2\Phi}(1-b/r)\,E^2\,L_{F}|<\infty$, as $r\rightarrow r_0$,
to ensure that the stress-energy tensor components are regular.
From Eqs. (\ref{rhoWH})-(\ref{prWH}) and Eqs.
(\ref{4TttNLE})-(\ref{4TrrNLE}), we verify the following condition
\begin{equation}
\Phi'=-\frac{b'r-b}{2r(r-b)}\,,
\end{equation}
which may be integrated to yield the solution $e^{2\Phi}=(1-b/r)$,
rendering a non-traversable wormhole solution, as it possesses an
event horizon at the throat, $r=r_0$.
Note that the NEC, for the stress-energy tensor defined by
(\ref{4dim-stress-energy}), is identically zero for arbitrary $r$,
i.e., $T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}=0$. In
particular this implies that the flaring-out condition of the
throat is not satisfied, showing, therefore, the non-existence of
$(3+1)-$dimensional static and spherically symmetric traversable
wormholes coupled to nonlinear electrodynamics. The analysis
outlined in this Section is consistent with that of Refs.
\cite{Bronnikov1,Bronnikov2}, where it was pointed out that
nonlinear electrodynamics, with any Lagrangian of the form $L(F)$,
coupled to general relativity cannot support static and
spherically symmetric $(3+1)-$dimensional traversable wormholes.
The impediment to the construction of traversable wormholes may be
overcome by considering a non-interacting anisotropic distribution
of matter coupled to nonlinear electrodynamics. This may be
reflected by the following superposition of the stress-energy
tensor
\begin{equation}
T_{\mu\nu}=T_{\mu\nu}^{\rm fluid}+T_{\mu\nu}^{\rm NED} \,,
\end{equation}
where $T_{\mu\nu}^{\rm NED}$ is given by Eq.
(\ref{4dim-stress-energy}), and $T_{\mu\nu}^{\rm fluid}$ is
provided by
\begin{equation}
T_{\mu\nu}^{\rm fluid}=(\rho+p_t)U_\mu \, U_\nu+p_t\,
g_{\mu\nu}+(p_r-p_t)\chi_\mu \chi_\nu \,.
\end{equation}
$U^\mu$ is the four-velocity and $\chi^\mu$ is the unit spacelike
vector in the radial direction. $\rho(r)$ is the energy density,
$p_r(r)$ is the radial pressure measured in the direction of
$\chi^\mu$, and $p_t(r)$ is the transverse pressure measured in
the orthogonal direction to $\chi^\mu$.
Now, the NEC takes the form
\begin{eqnarray}
T_{\hat{\mu}\hat{\nu}}k^{\hat{\mu}}k^{\hat{\nu}}&=&\rho(r)+p_r(r)
\nonumber \\
&=&\frac{1}{8\pi}\,\left[\frac{b'r-b}{r^3}+
\left(1-\frac{b}{r}\right) \frac{\Phi '}{r} \right] \,,
\end{eqnarray}
which evaluated at the throat, reduces to the NEC violation
analysis of Ref. \cite{Morris}, i.e., $\rho+p_r<0$.
\section{Stationary and axisymmetric wormholes}\label{RotWH}
\subsection{$(2+1)-$dimensional wormhole}
We now analyze nonlinear electrodynamics coupled to a stationary
axisymmetric $(2+1)-$dimensional wormhole geometry. The stationary
character of the spacetime implies the presence of a time-like
Killing vector field, generating invariant time translations. The
axially symmetric character of the geometry implies the existence
of a spacelike Killing vector field, generating invariant
rotations with respect to the angular coordinate $\phi$. Consider
the metric
\begin{equation}\label{rwhmn}
ds^2=-N^2dt^2+\frac{dr^2}{1-b/r}+r^2K^2(d\phi-\omega\,dt)^2 \,,
\end{equation}
where $N, K, \omega$ and $b$ are functions of $r$. $\omega(r)$ may
be interpreted as the angular velocity $ d\phi/ dt$ of a particle.
$N$ is the analog of the redshift function in Eq.
(\ref{metricwormhole}) and is finite and nonzero to ensure that
there are no event horizons. We shall also assume that $K(r)$ is a
positive, nondecreasing function of $r$ that determines the proper
radial distance $R$, i.e.,
\begin{equation}
R\equiv rK\,,\qquad R'>0\,.
\end{equation}
To transform to an orthonormal reference frame, the one-forms in
the orthonormal basis transform as
$\tilde{\Theta}^{\hat{\mu}}=\Lambda^{\hat{\mu}}{}_{\nu}\,\Theta^\nu$.
The metric (\ref{rwhmn}) can be diagonalized
\begin{equation}\label{rwhmo}
ds^2=-(\Theta^{\hat{t}})^2+(\Theta^{\hat{r}})^2+(\Theta^{\hat{\phi}})^2
\,,
\end{equation}
by means of the tetrad
\begin{eqnarray}\label{tet2}
\Theta^{\hat{t}}&=&Ndt \,, \\
\Theta^{\hat{r}}&=&(1-b/r)^{-1/2}dr \,, \\
\Theta^{\hat{\phi}}&=&rK(d\phi-\omega dt) \,.
\end{eqnarray}
Now, $\Lambda^{\mu}{}_{\hat{\alpha}} \;
\Lambda^{\hat{\alpha}}{}_{\nu} = \delta^{\mu}{}_{\nu}$ and
$\Lambda^{\mu}{}_{\hat{\nu}}$ is defined as
\begin{equation}
(\Lambda^{\mu}{}_{\hat{\nu}})=\left[
\begin{array}{ccc}
1/N&0&0 \\
0&(1-b/r)^{1/2}&0 \\
\omega/N&0&(rK)^{-1}
\end{array}
\right] \label{tranfs3}\,.
\end{equation}
From the latter transformation, one may deduce the orthonormal
basis vectors, ${\bf
e}_{\hat{\mu}}=\Lambda^{\nu}{}_{\hat{\mu}}\,{\bf e}_{\nu}$, given
by
\begin{eqnarray}
{\bf e}_{\hat{t}}&=&\frac{1}{N}\,{\bf e}_{t}+\frac{\omega}{N}\,{\bf e}_{\phi} \,, \\
{\bf e}_{\hat{r}}&=&\left(1-\frac{b}{r}\right)^{1/2}\,{\bf e}_{r} \,, \\
{\bf e}_{\hat{\phi}}&=&\frac{1}{rK}\,{\bf e}_{\phi} \,.
\end{eqnarray}
Using the fact that ${\bf e}_{\alpha}\cdot{\bf
e}_{\beta}=g_{\alpha\beta}$, we have ${\bf e}_{\hat{\mu}}\cdot{\bf
e}_{\hat{\nu}}=g_{\hat{\mu}\hat{\nu}}=\eta_{\hat{\mu}\hat{\nu}}$.
Using the Einstein field equation,
$G_{\hat\mu\hat\nu}=8\pi\,T_{\hat\mu\hat\nu}$ and taking into
account the null vector $k^{\hat\mu}\,=(1,\pm 1,0)$, we obtain the
following relationship, at the throat
\begin{equation}\label{gnec}
T_{\hat\mu\hat\nu}k^{\hat\mu}\,k^{\hat\nu} =-\frac{1-b'}{16\pi
r_0^2K}(K+r_0K')\,.
\end{equation}
We verify that the NEC is clearly violated because the conditions
$K>0$ and $K'>0$ are imposed by construction \cite{Teo}, so that
the metric (\ref{rwhmn}) can describe a wormhole type solution.
\bigskip
Now for nonlinear electrodynamics, we consider the stress energy
tensor given by Eq. (\ref{stress-energy}), where the nonzero
components of the electromagnetic tensor are
\begin{equation}\label{2emtcomp}
F_{tr}=-F_{rt}, \quad F_{t\phi}=-F_{\phi t}, \quad F_{\phi
r}=-F_{r\phi}\,,
\end{equation}
which are only functions of the radial coordinate $r$. Then, using
the orthonormal reference frame, the NEC takes the following form
\begin{equation}\label{2setnec}
T_{\hat\mu\hat\nu}k^{\hat\mu}\,k^{\hat\nu} =
-\left[N^2\left(1-\frac{b}{r}\right)F_{tr}^2+F_{t\phi}^2\right]\frac{L_{F}}{r^2K^2N^2}
\,,
\end{equation}
which, at the throat, reduces to
\begin{equation}\label{2setnect}
T_{\hat\mu\hat\nu}k^{\hat\mu}\,k^{\hat\nu}\big|_{r_0} =
-\frac{1}{r_0^2K^2N^2}F_{t\phi}^2\,L_{F} \,.
\end{equation}
From this relationship, we verify that the NEC is violated at the
throat only if the derivative $L_{F}$ is positive and $F_{t\phi}$
is nonzero. The latter condition, $F_{t\phi}\neq 0$, is imposed to
have a compatibility of Eqs. (\ref{gnec}) and (\ref{2setnect}).
However, note that from the metric (\ref{rwhmn}) we verify the
following zero components of the Einstein tensor: $G_{tr}=0$ and
$G_{r\phi}=0$, implying that $T_{tr}=0$ and $T_{r\phi}=0$. These
stress-energy tensor components are given by
\begin{eqnarray}
T_{tr}&=&-(F_{tr}\,g^{t \phi}+F_{r \phi}\, g^{\phi\phi})\,F_{t
\phi}\,L_{F}\,, \\
T_{r\phi}&=&-(F_{tr}\, g^{tt}+F_{r \phi}\, g^{t\phi})\,F_{t
\phi}\,L_{F}\,.
\end{eqnarray}
From these conditions, considering that the derivative $L_{F}$ be
finite and positive and the non-trivial case $F_{t\phi}\neq 0$, we
find that
\begin{equation}
(g^{t\phi})^2=g^{tt}g^{\phi\phi} \,.
\end{equation}
From the above imposition we deduce $N=0$, implying the presence
of an event horizon, showing the non-existence of
$(2+1)-$dimensional stationary and axially symmetric traversable
wormholes coupled to nonlinear electrodynamics.
\subsection{$(3+1)-$dimensional wormhole}
Now, consider the stationary and axially symmetric
$(3+1)-$dimensional spacetime, and analogously to the previous
case, it possesses a time-like Killing vector field, which
generates invariant time translations, and a spacelike Killing
vector field, which generates invariant rotations with respect to
the angular coordinate $\phi$. We have the following metric
\begin{equation}\label{3rwh}
ds^2=-N^2dt^2+e^{\mu}\,dr^2+r^2K^2[d\theta^2+\sin^2\theta(d\phi-\omega\,dt)^2]
\end{equation}
where $N$, $K$, $\omega$ and $\mu$ are functions of $r$ and
$\theta$~\cite{Teo}. $\omega(r,\theta)$ may be interpreted as the
angular velocity $ d\phi/ dt$ of a particle that falls freely from
infinity to the point $(r,\theta)$.
For simplicity, we shall consider the definition \cite{Teo}
\begin{equation}
e^{-\mu(r,\theta)}=1-\frac{b(r,\theta)}{r}\,,
\end{equation}
which is well suited to describe a traversable wormhole.
Assume that $K(r,\theta)$ is a positive, nondecreasing function of
$r$ that determines the proper radial distance $R$, i.e., $R\equiv
rK$ and $R_r>0$ \cite{Teo}, as for the $(2+1)-$dimensional case.
We shall adopt the notation that the subscripts $_r$ and
$_{\theta}$ denote the derivatives in order of $r$ and ${\theta}$,
respectively \cite{Teo}.
We shall also write down the contravariant metric tensors, which
will be used later, and are given by
\begin{eqnarray}\label{contravar}
&&g^{tt}=-\frac{1}{N^2}\,, \quad
g^{rr}=\left(1-\frac{b}{r}\right)\,, \quad
g^{\theta\theta}=\frac{1}{r^2K^2}\,,
\nonumber \\
&&g^{\phi\phi}=\frac{N^2-r^2\omega^2K^2\sin^2\theta}{r^2N^2K^2\sin^2\theta}
\,, \quad g^{t\phi}=-\frac{\omega}{N^2}\,.
\end{eqnarray}
Note that an event horizon appears whenever $N=0$~\cite{Teo}. The
regularity of the functions $N$, $b$ and $K$ is imposed, which
implies that their $\theta$ derivatives vanish on the rotation
axis, $\theta=0,\,\pi$, to ensure a non-singular behavior of the
metric on the rotation axis. The metric (\ref{3rwh}) reduces to
the Morris-Thorne spacetime metric (\ref{metricwormhole}) in the
limit of zero rotation and spherical symmetry
\begin{eqnarray}
&N(r,\theta)\rightarrow{\rm e}^{\Phi(r)},\quad
b(r,\theta)\rightarrow b(r)\,,
\\
&K(r,\theta)\rightarrow1\,, \quad \omega(r,\theta)\rightarrow0\,.
\end{eqnarray}
In analogy with the Morris-Thorne case, $b(r_0)=r_0$ is identified
as the wormhole throat, and the factors $N$, $K$ and $\omega$ are
assumed to be well-behaved at the throat.
The scalar curvature of the space-time (\ref{3rwh}) is extremely
messy, but at the throat $r=r_0$ simplifies to
\begin{eqnarray}\label{rotWHRicciscalar}
R&=&-\frac{1}{r^2K^2}\left(\mu_{\theta\theta}
+\frac{1}{2}\mu_\theta^2\right)
-\frac{\mu_\theta}{Nr^2K^2}\,\frac{(N
\sin\theta)_\theta}{\sin\theta}
\nonumber \\
&&-\frac{2}{Nr^2K^2}\,\frac{(N_{\theta}
\sin\theta)_\theta}{\sin\theta}
-\frac{2}{r^2K^3}\,\frac{(K_\theta \sin\theta)_\theta}{\sin\theta}
\nonumber \\
&&+e^{-\mu}\,\mu_r\,\left[\ln(Nr^2K^2)\right]_r
+\frac{\sin^2\theta\,\omega_\theta^2}{2N^2}
\nonumber \\
&&+\frac{2}{r^2K^4}\,(K^2+K_\theta^2) \,.
\end{eqnarray}
The only troublesome terms are the ones involving the terms with
$\mu_\theta$ and $\mu_{\theta\theta}$, i.e.,
\begin{equation}
\mu_\theta=\frac{b_\theta}{(r-b)}\,, \qquad \mu_{\theta\theta}
+\frac{1}{2}\mu_\theta^2=\frac{b_{\theta\theta}}{r-b}
+\frac{3}{2}{b_\theta{}^2\over(r-b)^2}\,.
\end{equation}
Note that one needs to impose that $b_\theta=0$ and
$b_{\theta\theta}=0$ at the throat to avoid curvature
singularities. This condition shows that the throat is located at
a constant value of $r$.
Thus, one may conclude that the metric (\ref{3rwh}) describes a
rotating wormhole geometry, with an angular velocity $\omega$. The
factor $K$ determines the proper radial distance. $N$ is the
analog of the redshift function in Eq. (\ref{4metricwormhole}) and
is finite and nonzero to ensure that there are no event horizons
or curvature singularities. $b$ is the shape function which
satisfies $b\leq r$; it is independent of $\theta$ at the throat,
i.e., $b_\theta=0$; and obeys the flaring out condition $b_r<1$.
\medskip
In the context of nonlinear electrodynamics, we consider the
stress energy tensor defined in Eq. (\ref{4dim-stress-energy}).
The nonzero components of the electromagnetic tensor are
\begin{eqnarray}\label{3emtcomp}
&& F_{tr}=-F_{rt}\,,\quad F_{t\theta}=-F_{\theta t}\,,\quad F_{t\phi}=-F_{\phi t}\,,
\\
&& F_{\phi r}=-F_{r\phi}\,,\quad F_{r\theta}=-F_{\theta r}\,,\quad F_{\theta\phi}=-F_{\phi\theta}\,,
\end{eqnarray}
which are functions of the radial coordinate $r$ and the angular
coordinate $\theta$.
The analysis is simplified using an orthonormal reference frame,
with the following orthonormal basis vectors
\begin{eqnarray}
{\bf e}_{\hat{t}}&=&\frac{1}{N}\,{\bf e}_{t}+\frac{\omega}{N}\,{\bf e}_{\phi} \,, \\
{\bf e}_{\hat{r}}&=&\left(1-\frac{b}{r}\right)^{1/2}\,{\bf e}_{r} \,, \\
{\bf e}_{\hat{\theta}}&=&\frac{1}{rK}\,{\bf e}_{\theta} \,, \\
{\bf e}_{\hat{\phi}}&=&\frac{1}{rK\sin\theta}\,{\bf e}_{\phi} \,.
\end{eqnarray}
Now the Einstein tensor components are extremely messy, but assume
a more simplified form using the orthonormal reference frame and
evaluated at the throat. They have the following non-zero
components
\begin{eqnarray}
G_{\hat{t}\hat{t}}&=&-\frac{(K_\theta
\sin\theta)_\theta}{r^2K^3\sin\theta}
-\frac{\omega_\theta^2\,\sin^2\theta}{4N^2}
+e^{-\mu}\,\mu_r\,\frac{(rK)_r}{rK}
\nonumber \\
&&+\frac{K^2+K_\theta^2}{r^2K^4}
\,, \label{rotGtt}
\\
G_{\hat{r}\hat{r}}&=&\frac{(K_\theta
\sin\theta)_\theta}{r^2K^3\sin\theta}
-\frac{\omega_\theta^2\,\sin^2\theta}{4N^2}
+\frac{(N_\theta \sin\theta)_\theta}{Nr^2K^2\sin\theta}
\nonumber \\
&&-\frac{K^2+K_\theta^2}{r^2K^4}
\,,
\\
G_{\hat{r}\hat{\theta}}&=&\frac{e^{-\mu/2}\,\mu_\theta\,(rKN)_r}{2Nr^2K^2}
\,,
\\
G_{\hat{\theta}\hat{\theta}}&=& \frac{N_\theta(K
\sin\theta)_\theta}{Nr^2K^3\sin\theta}
+\frac{\omega_\theta^2\,\sin^2\theta}{4N^2}
\\ \nonumber
&& -\frac{\mu_r\,e^{-\mu}(NrK)_r}{2NrK}
\,,
\\
G_{\hat{\phi}\hat{\phi}}&=&
-\frac{\mu_r\,e^{-\mu}\,(NKr)_r}{2NKr}-\frac{3\sin^2\theta\,\omega_\theta^2}{4N^2}
\nonumber \\
&&+\frac{N_{\theta\theta}}{Nr^2K^2}-\frac{N_{\theta}K_{\theta}}{Nr^2K^3}
\,, \label{rotGphiphi}
\\
G_{\hat{t}\hat{\phi}}&=&\frac{1}{4N^2K^2r}\;\Big(6NK\,\omega_{\theta}\,\cos\theta
+2NK\,\sin \theta\,\omega_{\theta \theta}
\nonumber \\
&&-\mu_{r}e^{-\mu}r^2NK^3\,\sin\theta\; \omega_{r}
+4N\,\omega_\theta\,\sin\theta\,K_\theta
\nonumber \\
&& -2K\,\sin\theta\,N_{\theta}\,\omega_{\theta} \Big) \,.
\label{rotGtphi}
\end{eqnarray}
Note that the component $G_{\hat{r}\hat{\theta}}$ is zero at the
throat, however, we have included this term, as it shall be
helpful in the analysis of the stress-energy tensor components,
outlined below.
Using the Einstein field equation, the components
$T_{\hat{t}\hat{t}}$ and $T_{\hat{i}\hat{j}}$ have the usual
physical interpretations, and in particular,
$T_{\hat{t}\hat{\phi}}$ characterizes the rotation of the matter
distribution. It is interesting to note that constraints on the
geometry, placing restrictions on the stress energy tensor needed
to generate a general stationary and axisymmetric spacetime, were
found in Ref. \cite{Berg}. Taking into account the Einstein tensor
components above, the NEC at the throat is given by
\begin{eqnarray}\label{NEC}
8\pi\,T_{\hat{\mu} \hat{\nu}}k^{\hat{\mu}} k^{\hat{\nu}}&=&{\rm
e}^{-\mu}\mu_r{(rK)_r\over rK}
-{\omega_\theta{}^2\sin^2\theta\over2N^2}
\nonumber \\
&&+{(N_\theta\sin\theta)_\theta\over(rK)^2N\sin\theta}\,.
\end{eqnarray}
Rather than reproduce the analysis here, we refer the reader to
Ref. \cite{Teo}, where it was shown that the NEC is violated in
certain regions, and is satisfied in others. Thus, it is possible
for an infalling observer to move around the throat, and avoid the
exotic matter supporting the wormhole. However, it is important to
emphasize that one cannot avoid the use of exotic matter
altogether.
Using the stress-energy tensor, Eq. (\ref{4dim-stress-energy}), we
verify the following relationship
\begin{eqnarray}\label{3setnec}
T_{\hat\mu\hat\nu}k^{\hat\mu}k^{\hat\nu} &=&
-\Big[F_{t\phi}^2+\sin^2\theta(F_{t\theta}+\omega
F_{\phi\theta})^2
\nonumber \\
&&\hspace{-2.0cm}+\left(1-\frac{b}{r}\right)N^2(F_{\phi
r}^2+\sin^2\theta
F_{r\theta}^2)\Big]\frac{L_{F}}{r^2K^2N^2\sin^2\theta} \,,
\end{eqnarray}
which evaluated at the throat reduces to
\begin{equation}\label{3setnect}
T_{\hat\mu\hat\nu}k^{\hat\mu}k^{\hat\nu}= -[F_{t\phi}^2
+\sin^2\theta (F_{t\theta}+\omega F_{\phi\theta})^2]
\frac{L_{F}}{r_0^2K^2N^2\sin^2\theta} \,.
\end{equation}
Note that for this expression to be compatible with Eq.
(\ref{NEC}), $L_{F}$ may be either positive, negative or zero.
\bigskip
The non-zero components of the Einstein tensor are precisely the
components expressed in Eqs. (\ref{rotGtt})-(\ref{rotGtphi}), so
that through the Einstein field equation, we have the following
zero components for the stress energy tensor in the
$(3+1)-$dimensional case: $T_{tr}=T_{t\theta}=T_{\phi
r}=T_{\phi\theta}=0$. Thus, taking into account this fact, and
considering that $L_{F}$ is regular, we obtain the following
relationships
\begin{eqnarray}
g^{\theta\theta}F_{t\theta}F_{r\theta}&=&F_{t\phi}(g^{t\phi}F_{tr}+g^{\phi\phi}F_{\phi
r}) \,, \label{1}
\\ \label{2}
-g^{\theta\theta}F_{\phi\theta}F_{r\theta}&=&F_{t\phi}(g^{tt}F_{tr}+g^{t\phi}F_{\phi
r}) \,, \\ \label{3}
-g^{rr}F_{tr}F_{r\theta}&=&F_{t\phi}(g^{t\phi}F_{t\theta}+g^{\phi\phi}F_{\phi\theta})
\,,
\\ \label{4}
g^{rr}F_{\phi
r}F_{r\theta}&=&F_{t\phi}(g^{tt}F_{t\theta}+g^{t\phi}F_{\phi\theta})
\,.
\end{eqnarray}
Now, rewriting Eqs. (\ref{1})-(\ref{2}) in order of $F_{t\theta}$
and $F_{\phi\theta}$, respectively, and introducing these in Eq.
(\ref{3}), we finally arrive at the following relationship
\begin{equation}
-g^{rr}N^2\sin^2\theta\,F_{r\theta}^2=F_{t\phi}^2 \,,
\end{equation}
from which we obtain that $(1-b/r)<0$, implying that $r<b$ for all
values of $r$ except at the throat. However, this restriction is
in clear contradiction with the definition of a traversable
wormhole, where the condition $(1-b/r)>0$ is imposed.
The same restriction can be inferred from the electromagnetic
field equations $(F^{\mu\nu}L_{F})_{;\mu}=0$, which provide the
following relationships
\begin{eqnarray}\label{3eqLFt}
&& F^{rt}[\ln(L_{F})]_{,r}+F^{\theta
t}[\ln(L_{F})]_{,\theta}=-F^{\mu t}{}_{;\mu} \,,
\\
\label{3eqLFphi} &&
F^{r\phi}[\ln(L_{F})]_{,r}+F^{\theta\phi}[\ln(L_{F})]_{,\theta}
=-F^{\mu\phi}{}_{;\mu} \,.
\end{eqnarray}
These can be rewritten in the following manner
\begin{eqnarray}
&&[\ln(L_{F})]_{,r}=\frac{F^{\theta
t}F^{\mu\phi}{}_{;\mu}-F^{\theta \phi}F^{\mu
t}{}_{;\mu}}{F^{rt}F^{\theta\phi}-F^{r \phi}F^{\theta t}} \,,
\\
&&[\ln(L_{F})]_{,\theta}=\frac{F^{r\phi}F^{\mu t
}{}_{;\mu}-F^{rt}F^{\mu \phi}{}_{;\mu}}{F^{rt}F^{\theta\phi}-F^{r
\phi}F^{\theta t}} \,.
\end{eqnarray}
Now, a crucial point to note is that to have a solution, the term
in the denominator, $F^{rt}F^{\theta\phi}-F^{r\phi}F^{\theta t}$,
should be non-zero, and can be expressed as
\begin{eqnarray}\label{1p}
F^{rt}F^{\theta\phi}-F^{r\phi}F^{\theta
t}&=&g^{rr}g^{\theta\theta}\left[g^{tt}g^{\phi\phi}-(g^{t\phi})^2\right]
\times
\nonumber \\
&&\times (F_{tr}F_{\phi\theta}-F_{t\theta}F_{\phi r }) \,.
\end{eqnarray}
However, using the Eqs. (\ref{1})-(\ref{4}), we may obtain an
alternative relationship, given by
\begin{equation}\label{2p}
F^{rt}F^{\theta\phi}-F^{r\phi}F^{\theta
t}=\left(g^{rr}g^{\theta\theta}\frac{F_{r\theta}}{F_{t\phi}}\right)^2
(F_{tr}F_{\phi\theta}-F_{t\theta}F_{\phi r}) \,.
\end{equation}
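Explicitly, equating the right-hand sides of Eqs. (\ref{1p}) and (\ref{2p}), and assuming the common factor $F_{tr}F_{\phi\theta}-F_{t\theta}F_{\phi r}$ to be non-zero, one is left with $g^{tt}g^{\phi\phi}-(g^{t\phi})^2=g^{rr}g^{\theta\theta}\,F_{r\theta}^2/F_{t\phi}^2$, which, upon using the inverse metric components (\ref{contravar}), reads
$$-\frac{1}{N^2r^2K^2\sin^2\theta}=\left(1-\frac{b}{r}\right)\frac{1}{r^2K^2}\,\frac{F_{r\theta}^2}{F_{t\phi}^2}\,.$$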
Since the left-hand side is strictly negative, we obtain that $(1-b/r)<0$,
implying that $r<b$ for all values of $r$ except at the throat,
which, as before, is in clear contradiction with the definition of
a traversable wormhole.
If we consider the individual cases of $F_{tr}=0$, $F_{\phi r}=0$,
$F_{t \theta}=0$ or $F_{\phi \theta}=0$, separately, it is a
simple matter to verify that Eqs. (\ref{1})-(\ref{4}) impose the
restriction $N^2<0$, which does not satisfy the wormhole
conditions.
For the specific case of $F_{t \phi}=0$ (with $F_{r\theta} \neq
0$), the restrictions $F_{tr}=F_{\phi r}=F_{t \theta}=F_{\phi
\theta}=0$ are imposed. Taking into account these impositions one
verifies that from Eq. (\ref{3setnect}), we have
$T_{\hat\mu\hat\nu}k^{\hat\mu}k^{\hat\nu}=0$, which is not
compatible with the geometric conditions imposed by Eq.
(\ref{NEC}).
Considering $F_{r \theta}=0$ (with $F_{t\phi} \neq 0$), one
readily verifies from Eqs. (\ref{1})-(\ref{4}) the existence of an
event horizon, i.e., $N=0$, rendering the wormhole geometry
non-traversable.
The specific case of $F_{t\phi}=0$ and $F_{r\theta}=0$ also obeys
Eqs. (\ref{1})-(\ref{4}), and needs to be analyzed separately. To
show that this case is also in contradiction to the wormhole
conditions at the throat, we shall consider the following
stress-energy tensor components, for $F_{t\phi}=0$ and
$F_{r\theta}=0$
\begin{eqnarray}
T_{\hat{t}\hat{t}}&=&-L-\frac{(1-b/r)}{N^2}(F_{tr}+\omega F_{\phi
r })^2L_F
\\ \nonumber
&&-\frac{1}{N^2r^2K^2}(F_{t\theta}+\omega F_{\phi \theta})^2L_F
\,,
\label{SET1} \\
T_{\hat{r}\hat{r}}&=&L+\frac{(1-b/r)}{N^2}(F_{tr}+\omega F_{\phi r
})^2L_F
\\ \nonumber
&&-\frac{(1-b/r)}{r^2K^2\sin^2\theta}F_{\phi r}^2\,L_F \,,
\label{SET2} \\
T_{\hat{\theta}\hat{\theta}}&=&L+\frac{1}{N^2r^2K^2}(F_{t\theta}+\omega
F_{\phi \theta})^2L_F
\\ \nonumber
&&-\frac{1}{r^4K^4\sin^2\theta}F_{\phi\theta}^2\;L_F \,,
\label{SET3} \\
T_{\hat{\phi}\hat{\phi}}&=&L-\frac{(1-b/r)}{r^2K^2\sin^2\theta}F_{\phi
r }^2\;L_F
\\ \nonumber
&&-\frac{1}{r^4K^4\sin^2\theta}F_{\phi\theta}^2\;L_F \,.
\label{SET4}
\end{eqnarray}
An important result deduced from the above components is the
following
\begin{equation}\label{SETcond}
T_{\hat{t}\hat{t}}+T_{\hat{r}\hat{r}}+T_{\hat{\theta}\hat{\theta}}-
T_{\hat{\phi}\hat{\phi}}=0
\end{equation}
for all points in the geometry.
Now, considering the Einstein tensor components, Eqs.
(\ref{rotGtt})-(\ref{rotGphiphi}), evaluated at the throat, along
the rotation axis, and using the Einstein field equation, we
verify the following relationship
\begin{equation}\label{Einstein-cond}
G_{\hat{t}\hat{t}}+G_{\hat{r}\hat{r}}+G_{\hat{\theta}\hat{\theta}}-
G_{\hat{\phi}\hat{\phi}}=\frac{e^{-\mu}\mu_r(rK)_r}{rK} \,,
\end{equation}
which is non-vanishing at the throat: indeed, $e^{-\mu}\mu_r|_{r_0}=(b_r-1)/r_0<0$ by the flaring-out condition, while $(rK)_r>0$. We have taken into account that the functions $N$ and $K$ are regular, so that their $\theta$ derivatives vanish along the rotation axis, $\theta=0,\;\pi$, as emphasized above. Therefore, through the Einstein field equation,
we verify that relationship (\ref{Einstein-cond}) is not
compatible with condition (\ref{SETcond}), and therefore rules out
the existence of rotating wormholes for this specific case of
$F_{t\phi}=0$ and $F_{r\theta}=0$.
\section{Conclusion}\label{Conclusion}
In this work we have explored the possibility of the existence of
$(2+1)$ and $(3+1)-$dimensional static, spherically symmetric and
stationary, axisymmetric traversable wormholes coupled to
nonlinear electrodynamics. For the static and spherically
symmetric wormhole spacetimes, we have found the presence of an
event horizon, and that the NEC is not violated at the throat,
proving the non-existence of these exotic geometries within
nonlinear electrodynamics. It is perhaps important to emphasize
that for the $(2+1)-$dimensional case we found an extremely
troublesome aspect of the geometry, as in order to construct a
traversable wormhole, singularities appear in the physical fields.
This particular aspect of the geometry is in clear contradiction
to the model construction of nonlinear electrodynamics, founded on
a principle of finiteness, that a satisfactory theory should avoid
physical quantities becoming infinite \cite{BI}. Thus, imposing
that the physical quantities be non-singular, we verify that the
general solution corresponds to a non-traversable wormhole
geometry. We also point out that the non-existence of
$(3+1)-$dimensional static and spherically symmetric traversable
wormholes is consistent with previous results \cite{Bronnikov1}.
For the $(2+1)-$dimensional stationary and axisymmetric wormhole,
we have verified the presence of an event horizon, rendering a
non-traversable wormhole geometry. Relatively to the
$(3+1)-$dimensional stationary and axially symmetric wormhole
geometry, we have found that the field equations impose specific
conditions that are incompatible with the properties of wormholes.
Thus, for the general classes of solutions outlined above, we have shown the non-existence of traversable wormholes within the context of nonlinear electrodynamics. Nevertheless, it is
important to emphasize that regular magnetic time-dependent
traversable wormholes do exist coupled to nonlinear
electrodynamics \cite{Arell-Lobo}.
In the analysis outlined in this paper, we have considered general
relativity coupled to nonlinear electrodynamics, with the
gauge-invariant electromagnetic Lagrangian $L(F)$ depending on a
single invariant $F$ given by $F \sim F^{\mu\nu}F_{\mu\nu}$. An
interesting issue to pursue would be the inclusion, in addition to
$F$, of another electromagnetic field invariant $G \sim
\,^*F^{\mu\nu}F_{\mu\nu}$. This latter inclusion would possibly
add an interesting analysis to the solutions found in this paper.
\section*{Acknowledgements}
We thank Ricardo Garc\'{i}a Salcedo and Nora Bret\'{o}n for
extremely helpful comments and suggestions.
\section{Introduction}
\IEEEPARstart{T}{ransmit} precoding (TPC) is a channel-adaptive technique of precompensating the deleterious channel effects about to be encountered based on the knowledge of channel state information (CSI) at the transmitter (CSIT) \cite{MIMO}. Given the limited bandwidth of control channels in practical communication systems, typically codebook-based TPC schemes relying on a low-rate CSI-feedback are used \cite{LTE,5G}. The pivotal design aspects are the codebook design and the CSI-entry selection criterion. The simplest codebook design relies on selecting a specific antenna subset \cite{Antenna_s1,Antenna_s2}. By contrast, the Fourier codebook proposed in \cite{UP_DFT} appropriately rotates the transmit signal in a high-dimensional complex space. Furthermore, the authors of \cite{UP_STBC} and \cite{UP_SMS} transform the codebook design into packing subspaces into the Grassmann manifold relying on the projection two-norm and Fubini-Study distances, respectively. As for the CSI selection criterion, the popular capacity criterion or the maximum-likelihood (ML) criterion \cite{UP_SMS} may be used for selecting the TPC matrix from the codebook. However, these codebooks and their selection criteria were designed for uncoded MIMO systems with an emphasis on the MIMO detection performance. In reality, coded MIMO systems have to be used, where we focus on the performance of the decoded bits.
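For concreteness, one common Fourier-type construction (not necessarily the exact design of \cite{UP_DFT}; the details vary across specific designs, so this is only indicative) obtains each codeword by selecting a subset of columns of the scaled $N_t\times N_t$ DFT matrix, where $N_t$ denotes the number of transmit antennas,
$$[\mathbf{F}]_{m,n}=\frac{1}{\sqrt{N_t}}\,e^{j2\pi mn/N_t}\,,\qquad m,n=0,\ldots,N_t-1\,,$$
so that every codeword is semi-unitary.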
In this context, the polar-coded MIMO (PC-MIMO) systems proposed by Dai \emph{et al.} \cite{PCMIMO} have been shown to closely approach the capacity of MIMO systems with the aid of successive interference cancellation (SIC), outperforming their turbo/LDPC-coded MIMO counterparts. The PC-MIMO system of \cite{PCMIMO} was designed for fast-fading channels without exploiting the CSIT. However, harnessing the knowledge of CSIT is capable of further improving the performance. Since the polarization effect of data streams is an important factor influencing the performance \cite{PCMIMO}, the PC-MIMO TPC should be designed on the basis of explicitly exploiting the polarization effect.
Table \ref{pe} boldly contrasts our novel contributions to the state-of-the-art, both in terms of the selection criterion and of the polarization effect. The polarization effect is introduced by the successive cancellation (SC) structure. Bit-polarization was first proposed by Ar{\i}kan \cite{arikan} for designing polar codes. Then, bit-polarization was extended to symbol polarization and a $2^m$-ary multilevel polar-coded modulation scheme was proposed in \cite{Polar_coded_modulation_seidl}. Furthermore, Dai \emph{et al.} designed the PC-MIMO \cite{PCMIMO} using antenna polarization. Inspired by these papers, we conceive data stream polarization to design a unitary precoding scheme.
\begin{table*}[t]
\renewcommand\arraystretch{0.9}
\centering
\vspace{-0em}
\caption{Boldly contrasting our contributions to the state-of-the-art papers.}
\label{pe}
\begin{tabular}{|p{2.8cm}|c|c|c|c|c|c|c|c|}
\hline
&2004\cite{Antenna_s1,Antenna_s2}&2005\cite{UP_STBC}&2005\cite{UP_SMS}&2009\cite{arikan}&2013\cite{Polar_coded_modulation_seidl}&2018\cite{PCMIMO}&2020\cite{SoSCL}&This work\\
\hline
SNR criterion&\checkmark&&\checkmark&&&&&\\
\hline
ML criterion&&\checkmark&\checkmark&&&&&\\
\hline
Capacity criterion&&&\checkmark&&&&&\checkmark\\
\hline
Polarization criterion&&&&&&&&\checkmark\\
\hline
Bit polarization&&&&\checkmark&\checkmark&\checkmark&\checkmark&\checkmark\\
\hline
Symbol polarization&&&&&\checkmark&\checkmark&&\\
\hline
Antenna polarization&&&&&&\checkmark&\checkmark&\\
\hline
Data stream polarization&&&&&&&&\checkmark\\
\hline
\end{tabular}
\end{table*}
In this compact letter, a unitary polar TPC is proposed for improving the performance of PC-MIMO systems. Since the polarization of the substreams directly affects the PC-MIMO performance \cite{PCMIMO}, the proposed polar TPC scheme stems from the \emph{polarization criterion} used for maximizing the polarization effect, which constitutes a radical departure from the traditional TPC design. Given the codebook, the TPC matrix selection comprises two steps. In the first step, a basic TPC matrix is selected for maximizing the capacity. In the second step, we post-multiply the basic matrix by a unitary matrix, which is specifically designed for maximizing the polarization of substreams without eroding the capacity optimized by the basic TPC matrix.
Moreover, the optimal polar TPC of the PC-MIMO system is derived under the polarization criterion and a method to design the polar TPC codebook is proposed based on the DFT TPC.
Our simulation results illustrate that the proposed polar TPC scheme outperforms the state-of-the-art DFT TPC scheme.
\emph{Notational Conventions}: In this letter, scalars are denoted by lowercase letters (e.g., $x$).
The calligraphic characters, such as ${\cal X}$, are used to denote sets. The bold capital letters, such as $\mathbf{X}$, denote matrices.
The $j$-th column of matrix $\mathbf{X}$ is written as ${\mathbf{X}}_j$ and $\mathbf{X}_i^j$ represents the matrix $\left[{\mathbf{X}}_i,\cdots,{\mathbf{X}}_j\right]$.
The element in the $i$-th row and the $j$-th column of matrix $\mathbf{X}$ is written as $X_{i,j}$. ${\mathbf{X}}^T$ and ${\mathbf{X}}^*$ are used to denote the transpose and the conjugate transpose of ${\mathbf{X}}$, respectively.
The bold lowercase letters (e.g., ${\bf{x}}$) are used to denote column vectors. Notation ${{x}_i^j}$ denotes the column subvector $(x_i,\cdots,x_j)^T$ and $x_i$ denotes the $i$-th element of ${\bf{x}}$.
Given an index set ${\cal A}$, $x_{\cal A}$ is a subvector composed of $x_i$, $i \in {\cal A}$.
We use ${\cal U}(M_T, M)$ to denote the set of $M_T \times M$ matrices with orthonormal columns, ${\bf I}_M$ to denote an $M \times M$ identity matrix, $\lambda_i({\bf X})$ to denote the $i$-th smallest singular value of $\bf X$ and $diag(x_1,\cdots,x_M)$ to denote an $M \times M$ diagonal matrix.
Throughout this letter, $\log \left( \cdot \right)$ means ``base 2 logarithm''.
\section{Preliminaries}
\subsection{Polar-Coded MIMO System}
In this section, we introduce the PC-MIMO system of \cite{PCMIMO} by intrinsically amalgamating it with a unitary TPC scheme. The $K$ information bits are first encoded and modulated into QPSK symbols, which are then precoded by the codebook and transmitted via the MIMO channel using $M_T$ transmit antennas, $M_R$ receive antennas and $M$ bitstreams within $N$ time slots. We focus on block-fading channels, where the channels remain constant for $N$ time slots.
At the transmitter, the source sequence $u_1^{2MN}$ composed of $u_{\cal A}$ and $u_{{\cal A}^c}$ with information set ${\cal A}$ of size $|{\cal A}| = K$ and code rate $R = \frac{K}{{2MN}}$ is demultiplexed into $M$ different bitstreams and each bitstream is fed into a polar encoder. Then, the $2N$-dimensional encoded sequence $v_{1+2N(i-1)}^{2Ni}$, $1 \le i \le M$, is mapped into an $N$-dimensional modulated sequence $s_{1+N(i-1)}^{Ni}$ using QPSK modulation. Next, the $M \times N$ symbol matrix ${\bf{S}} = {[ {s_1^N,s_{N + 1}^{2N}, \cdots ,s_{N\left( {M - 1} \right) + 1}^{NM}} ]^T}$ is multiplied by an $M_T \times M$ TPC matrix $\bf F$, producing the transmit signal matrix ${\bf X} = \sqrt {\frac{{{E_s}}}{M}} {\bf{FS}}$, where $E_s$ is the total transmit energy. Hence, the received signal matrix $\bf Y$ at the output of block fading channels is
\begin{equation}\label{symbol_matrix}
{\bf{Y}} = {\bf HX} + {\bf{Z}} = \sqrt {\frac{{{E_s}}}{M}} {\bf HF}{\bf S} + {\bf{Z}},
\end{equation}
where $\bf H$ is the channel response matrix having i.i.d. entries in ${\cal CN}(0,1)$
and the elements of $\bf Z$ are i.i.d. complex circular Gaussian random variables with $z_{i,j} \sim {\cal CN}(0,N_0)$. Perfect channel estimation is assumed at the receiver.
To simplify the analysis, we rewrite the system model (\ref{symbol_matrix}) by omitting the time slot as
\begin{equation}\label{symbol_matrix_simplify}
{\bf{y}} = \sqrt {\frac{{{E_s}}}{M}} {{\bf{HF}}}{\bf{s}} + {\bf{z}}.
\end{equation}
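As a minimal illustration of (\ref{symbol_matrix_simplify}), the following numpy sketch generates one received vector. All parameter values, the random placeholder TPC and the variable names are ours and merely illustrative; they are not part of the letter's specification.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M_T, M_R, M = 4, 4, 3   # illustrative antenna/substream counts
Es, N0 = 1.0, 0.5       # illustrative energy and noise level

# Channel with i.i.d. CN(0,1) entries
H = (rng.standard_normal((M_R, M_T))
     + 1j * rng.standard_normal((M_R, M_T))) / np.sqrt(2)

# Placeholder TPC: any M_T x M matrix with orthonormal columns
F, _ = np.linalg.qr(rng.standard_normal((M_T, M))
                    + 1j * rng.standard_normal((M_T, M)))

# One unit-energy QPSK symbol per substream
bits = rng.integers(0, 2, size=(M, 2))
s = ((1 - 2*bits[:, 0]) + 1j*(1 - 2*bits[:, 1])) / np.sqrt(2)

# y = sqrt(Es/M) H F s + z, with z ~ CN(0, N0)
z = np.sqrt(N0/2) * (rng.standard_normal(M_R)
                     + 1j * rng.standard_normal(M_R))
y = np.sqrt(Es/M) * H @ F @ s + z
\end{verbatim}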
At the receiver, a joint multistage detection and decoding receiver is used, which is similar to the SC decoding rules of polar codes \cite{arikan,Lajos1,Lajos2}. Hence, other SC-like decoding schemes, such as the successive cancellation list (SCL) decoder \cite{talvardyscl, niuscl,SoSCL}, the successive cancellation stack decoder \cite{SCS,RCSCS} and the CRC-aided SCL (CA-SCL) decoder \cite{niu_CASCL}, can also be used in the PC-MIMO system to improve the performance.
The MIMO detection order proceeds from substream $1$ to $M$. A substream is first demodulated into bit log-likelihood ratios (LLRs) and the LLRs are then fed into the polar decoder. Then, the decoded bitstream is re-encoded by the polar encoder to regenerate the QPSK symbols. After the bits in the substream have been estimated, they are fed back to the MIMO detector in order to perform interference cancellation.
\subsection{Unitary Precoding}
The receiver selects a TPC matrix $\bf F$ from the codebook set ${\cal F}$ with $|{\cal F}| = 2^B$, where ${\cal F} \subset {\cal U}(M_T, M)$ and $B$ bits of feedback are available. The DFT-based TPC designed for spatial multiplexing systems in \cite{UP_STBC, UP_DFT} is formulated as:
\begin{equation}\label{DFTcodebook}
{\cal F} = \left\{{\bf F}_{\rm DFT}, {\bf\Theta}{\bf F}_{\rm DFT}, \cdots, {\bf\Theta}^{2^B-1}{\bf F}_{\rm DFT}\right\},
\end{equation}
where the entry of ${\bf F}_{\rm DFT}$ at $(k,l)$ is $\frac{1}{\sqrt {M_T}} e^{i\left(\frac{2\pi}{M_T} \right)kl}$ and ${\bf\Theta}$ is the diagonal matrix
\begin{equation}\label{Theta}
{\bf{\Theta }} = diag\left({e^{i\left( {2\pi /{2^B}} \right){a_1}}}, \cdots, {e^{i\left( {2\pi /{2^B}} \right){a_{{M_T}}}}} \right).
\end{equation}
In (\ref{Theta}), the vector ${\bf a} = [a_1,\cdots,a_{M_T}]$ is:
\begin{equation}\label{vector_a}
{\bf a} = \mathop {\arg \max }\limits_{{\cal Z}} \mathop {\min }\limits_{1 \le l \le {2^B} - 1} d\left( {{{\bf{F}}_{{\rm{DFT}}}},{{\bf{\Theta }}^l}{{\bf{F}}_{{\rm{DFT}}}}} \right),
\end{equation}
where ${\cal Z} = \left\{ {\bf a}\in {\mathbb Z}^{M_T}|0 \le a_k \le 2^B-1, \forall k\right\}$ and $d({\bf A}, {\bf B}) = \frac{1}{{\sqrt 2 }}\left\| {{\bf{A}}{{\bf{A}}^*} - {\bf{B}}{{\bf{B}}^*}} \right\|$. Then, random testing of the values of ${\bf a} \in {\cal Z}$ is used to optimize the cost function for training the codebook.
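A hedged numpy sketch of this construction is given below. The random-search budget \texttt{trials} and the use of the Frobenius norm inside $d(\cdot,\cdot)$ are our assumptions, since neither is fixed by the construction above.
\begin{verbatim}
import numpy as np

def dft_codebook(M_T, M, B, trials=2000, seed=0):
    """Random-search DFT codebook of (3)-(5): 2^B rotated
    versions of F_DFT (trials is an assumed search budget)."""
    rng = np.random.default_rng(seed)
    k = np.arange(M_T)
    # Entry (k,l) of F_DFT: exp(i 2 pi k l / M_T) / sqrt(M_T)
    F_dft = np.exp(1j*2*np.pi*np.outer(k, np.arange(M))/M_T) \
            / np.sqrt(M_T)

    def dist(A, B_):  # d(A,B) = ||AA* - BB*|| / sqrt(2)
        return np.linalg.norm(A @ A.conj().T
                              - B_ @ B_.conj().T) / np.sqrt(2)

    best_a, best_val = None, -1.0
    for _ in range(trials):
        a = rng.integers(0, 2**B, size=M_T)
        th = np.exp(1j*2*np.pi*a/2**B)   # diagonal of Theta
        val = min(dist(F_dft, (th**l)[:, None] * F_dft)
                  for l in range(1, 2**B))
        if val > best_val:
            best_a, best_val = a, val
    th = np.exp(1j*2*np.pi*best_a/2**B)
    return [(th**l)[:, None] * F_dft for l in range(2**B)]
\end{verbatim}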
The capacity maximization criterion is used to select the TPC matrix $\bf F$ from $\cal F$ yielding:
\begin{equation}\label{capacitySC}
{\bf{F}} = \mathop {\arg \max }\limits_{{\bf{F}} \in {{\cal F}}} I\left( {{\bf y};{\bf s}|{\bf{HF}}} \right),
\end{equation}
where $I\left( {{\bf y};{\bf s}|{\bf{HF}}} \right) = \log \det \left( {{{\bf{I}}_M} + \frac{{{E_s}}}{{M{N_0}}}{{\bf{F}}^*}{{\bf{H}}^*}{\bf{HF}}} \right)$ is the capacity of the unitary TPC-aided system.
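In code, the exhaustive search of (\ref{capacitySC}) amounts to evaluating one log-determinant per codeword; a brief self-contained sketch follows (Gaussian-input capacity expression, our naming):
\begin{verbatim}
import numpy as np

def capacity(H, F, Es, N0):
    """log2 det(I_M + Es/(M N0) F*H*HF), the metric in (6)."""
    M = F.shape[1]
    A = np.eye(M) + (Es/(M*N0)) * (F.conj().T @ H.conj().T
                                   @ H @ F)
    return np.linalg.slogdet(A)[1] / np.log(2)

def select_by_capacity(H, codebook, Es, N0):
    """Pick the codeword maximizing the capacity criterion."""
    return max(codebook, key=lambda F: capacity(H, F, Es, N0))
\end{verbatim}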
\section{Polar Precoding}
In this section, we first illustrate the polarization effect of substreams. Then, the polarization criterion is provided and the optimal unquantized TPC satisfying this criterion is derived. Finally, the method of designing the polar TPC codebook is proposed.
\subsection{Polarization Effect of Substreams}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering{\includegraphics[scale=1]{capacity_data_stream.pdf}}
\caption{The capacity of substreams relying on DFT TPC for $M_T = 8$, $M_R = 8$ and $M = 6$ at $\frac{{{E_s}}}{{{N_0}}} = 0$dB.}\label{polarization_effect}
\end{figure}
Given ${\bf G} = {\bf HF}$, the system model (\ref{symbol_matrix_simplify}) is simplified to
\begin{equation}\label{symbol_matrix_simplify_G}
{\bf{y}} = \sqrt {\frac{{{E_s}}}{M}} {{\bf{G}}}{\bf{s}} + {\bf{z}}.
\end{equation}
Then, due to the SC structure at the receiver, the system model conditioned on the already detected substreams $1$ to $(i-1)$ is formulated as:
\begin{equation}\label{symbol_matrix_ith_substream}
\underbrace {{\bf{y}} - \sqrt{\frac{{{E_s}}}{M}}{\bf G}_1^{i-1}s_1^{i - 1}}_{ = :{\bf{y}}_i} = \sqrt {\frac{{{E_s}}}{M}} {{\bf G}_i^M}s_i^M + {\bf{z}}.
\end{equation}
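The cancellation step of (\ref{symbol_matrix_ith_substream}) is a single matrix-vector update; a hedged helper (our naming, $1$-based substream index $i$) reads:
\begin{verbatim}
import numpy as np

def sic_residual(y, G, s_hat, i, Es):
    """y_i of (7): subtract the contribution of the first
    i-1 re-encoded substream decisions s_hat[0..i-2]."""
    M = G.shape[1]
    return y - np.sqrt(Es/M) * G[:, :i-1] @ s_hat[:i-1]
\end{verbatim}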
According to the chain rule of mutual information, the capacity of the PC-MIMO system is decomposed into
\begin{equation}\label{chain_rule}
I\left( {{\bf{y}};{\bf{s}}|{\bf{HF}}} \right) = \sum\limits_{i = 1}^M {\underbrace {I\left( {{\bf{y}};{s_i}|{\bf{HF}},s_1^{i - 1}} \right)}_{ = :{I_i}}}
\end{equation}
and $I_i$ is the capacity of the $i$-th substream with SC structure, which is calculated by
\begin{equation}\label{Ii}
\begin{aligned}
{I_i} &= I\left( {{\bf{y}};s_i^M|{\bf{HF}},s_1^{i - 1}} \right) - I\left( {{\bf{y}};s_{i + 1}^M|{\bf{HF}},s_1^i} \right)\\
&= I\left( {{\bf{y}}_i;s_i^M|{\bf{G}}_i^M} \right) - I\left( {{\bf{y}}_{i + 1} ;s_{i + 1}^M|{\bf{G}}_{i + 1}^M} \right).
\end{aligned}
\end{equation}
Similar to \cite{arikan, PCMIMO}, the SC structure also introduces the polarization effect of substreams, i.e., the capacity difference among $I_i, i=1,\cdots, M$. Fig. \ref{polarization_effect} is an example illustrating the polarization effect for $M_T = 8$, $M_R = 8$ and $M = 6$ at $\frac{{{E_s}}}{{{N_0}}} = 0$dB using DFT TPC. In Fig. \ref{polarization_effect}, the capacities of the $M = 6$ substreams are increasing from $I_1$ to $I_6$.
Specifically, $I_1$, $I_2$ and $I_3$ are lower than the average capacity, while $I_4$, $I_5$ and $I_6$ are higher.
Thus, a capacity difference occurs among $I_1$ to $I_6$ and the polarization effect is introduced by the SC structure.
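The substream capacities of (\ref{Ii}) can be reproduced numerically. The sketch below uses the Gaussian-input expression $\log\det({\bf I} + \frac{E_s}{MN_0}{\bf G}^*{\bf G})$ for each tail mutual information, which is an assumption on our part; the finite QPSK alphabet of the letter would require Monte Carlo evaluation instead.
\begin{verbatim}
import numpy as np

def substream_capacities(H, F, Es, N0):
    """I_1..I_M of (9) as differences of tail capacities
    C(G_i^M) = log2 det(I + Es/(M N0) (G_i^M)* G_i^M)."""
    G = H @ F
    M = G.shape[1]

    def cap(Gsub):  # note: the SNR factor keeps the full M
        if Gsub.shape[1] == 0:
            return 0.0
        A = np.eye(Gsub.shape[1]) \
            + (Es/(M*N0)) * Gsub.conj().T @ Gsub
        return np.linalg.slogdet(A)[1] / np.log(2)

    tails = [cap(G[:, i:]) for i in range(M + 1)]
    return [tails[i] - tails[i+1] for i in range(M)]
\end{verbatim}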
\subsection{Polarization Criterion}
In PC-MIMO systems, a more pronounced polarization leads to better performance at identical capacity \cite{PCMIMO}.
Thus, maximizing the system capacity
\begin{equation}\label{max_cap}
{\bf F} = \mathop {\arg \max }\limits_{{\bf F} \in {\cal F}} I\left( {\bf{y};\bf{s}|{\bf{HF}}} \right)
\end{equation}
and simultaneously maximizing the polarization effect among the substreams
\begin{equation}\label{max_polar}
{\bf{F}} = \mathop {\arg \max }\limits_{{\bf{F}} \in {{\cal F}}} {\sum\limits_{i = 1}^M {{{\left( {{I_i} - \bar I} \right)}^2}} }
\end{equation}
are both necessary for our polar TPC, where $\bar I$ is the average capacity of the $M$ substreams, i.e., $\bar I = \frac{{I\left( {{\bf{y}};{\bf{s}}|{\bf{HF}}} \right)}}{M}$.
However, it is challenging to directly find a suitable $\bf F$ satisfying both (\ref{max_cap}) and (\ref{max_polar}).
Then, since $I\left( {\bf{y};\bf{s}|{\bf{HF}}} \right)$ remains unchanged when $\bf F$ is post-multiplied by a unitary matrix, $\bf F$ is factorized into the product of two matrices ${\bf W} \in {\cal U}(M_T,M)$ and ${\bf Q} \in {\cal U}(M,M)$, and we have
\begin{equation}\label{cap_unchanged}
I\left( {\bf{y};\bf{s}|{\bf{HF}}} \right) = I\left( {\bf{y};\bf{s}|{\bf{HWQ}}} \right) = I\left( {\bf{y};\bf{s}|{\bf{HW}}} \right),
\end{equation}
where ${\bf F} = {\bf WQ}$.
Based on \eqref{cap_unchanged}, we can find a matrix ${\bf W}$ for maximizing $I\left( {\bf{y};\bf{s}|{\bf{HW}}} \right)$, which is equivalent to maximizing $I\left( {\bf{y};\bf{s}|{\bf{HF}}} \right)$.
When $\bf W$ is determined, $I\left( {\bf{y};\bf{s}|{\bf{HF}}} \right)$ remains unchanged for $\forall{\bf Q} \in {\cal U}(M,M)$. Thus, $\bf Q$ can be used for maximizing the polarization effect without affecting the system capacity.
Hence, ${\bf F} = {\bf WQ}$ can satisfy both (\ref{max_cap}) and (\ref{max_polar}). The polarization criterion is defined as
\begin{equation}\label{criterion_new}
\left\{
\begin{aligned}
\bf{W} &= \mathop {\arg \max }\limits_{{\bf{W}} \in {\cal W}} I\left( {\bf{y};\bf{s}|{\bf{HW}}} \right)\\
{\bf{Q}} &= \mathop {\arg \max }\limits_{{\bf{Q}} \in {{\cal Q}}} {\sum\limits_{i = 1}^M {{{\left( {{I_i} - \bar I} \right)}^2}} },
\end{aligned}
\right.
\end{equation}
where ${\cal W} \subset {\cal U}(M_T,M)$ and ${\cal Q} \subset {\cal U}(M,M)$ are the codebooks for $\bf W$ and $\bf Q$, respectively.
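A compact, self-contained sketch of this two-step selection is given below; it again relies on the Gaussian-input capacity expression (our assumption), and all names are illustrative.
\begin{verbatim}
import numpy as np

def polar_tpc_select(H, W_codebook, Q_codebook, Es, N0):
    """Two-step selection of (12): W by capacity, then Q by
    the polarization metric sum_i (I_i - Ibar)^2."""
    M = Q_codebook[0].shape[0]

    def cap(G):  # log2 det(I + Es/(M N0) G*G), fixed M
        if G.shape[1] == 0:
            return 0.0
        A = np.eye(G.shape[1]) + (Es/(M*N0)) * G.conj().T @ G
        return np.linalg.slogdet(A)[1] / np.log(2)

    # Step 1: the basic matrix W maximizes the capacity
    W = max(W_codebook, key=lambda Wc: cap(H @ Wc))

    # Step 2: Q maximizes the polarization; the capacity
    # cap(H @ W @ Q) is unchanged for any unitary Q
    def polarization(Q):
        G = H @ W @ Q
        tails = [cap(G[:, i:]) for i in range(M + 1)]
        I = np.array([tails[i] - tails[i+1]
                      for i in range(M)])
        return float(np.sum((I - I.mean())**2))

    Q = max(Q_codebook, key=polarization)
    return W @ Q
\end{verbatim}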
\subsection{Optimal Unquantized TPC}
According to the polarization criterion, the system model (\ref{symbol_matrix_simplify}) is transformed into
\begin{equation}\label{symbol_matrix_criterion}
{\bf{y}} = \sqrt {\frac{{{E_s}}}{M}} {\bf HWQ}{\bf s} + {\bf{z}}.
\end{equation}
Let the singular value decomposition of a matrix $\bf A$ be given by
\begin{equation}\label{SVD}
{\bf A} = {\bf U}_{\bf A}{\bf \Sigma}_{\bf A}{\bf V}_{\bf A}^*,
\end{equation}
where ${\bf U}_{\bf A}$ and ${\bf V}_{\bf A}$ are unitary matrices and ${\bf \Sigma}_{\bf A}$ is a diagonal matrix with $\lambda_k({\bf A})$ denoting the $k$-th smallest singular value of $\bf A$ at entry $(k,k)$.
Then, based on (\ref{symbol_matrix_criterion}), we first derive ${\bf Q}_{opt} \in {\cal U}(M,M)$ that maximizes the polarization effect with $\bf W$.
\begin{lemma}\label{lemma1}
The optimal TPC matrix ${\bf Q}_{opt} \in {\cal U}(M,M)$ with $\bf W$ is ${\bf Q}_{opt} = {\bf V}_{\bf HW}$.
\end{lemma}
\begin{IEEEproof}
For the system model (\ref{symbol_matrix_ith_substream}), we have ${\bf G}_i^M = {\bf HW}{\bf Q}_i^M$. In \cite{UP_SMS}, it has been proved that ${\bf Q}_i^M = {{\bf V}_{\bf HW}}_i^M$ can maximize $I\left( {{\bf{y}}_i;s_i^M|{\bf{G}}_i^M} \right)$, where ${{\bf V}_{\bf HW}}_i^M$ is a matrix constructed from the last $(M-i+1)$ columns of ${\bf V}_{\bf HW}$.
Thus, ${\bf Q}_{opt}$ maximizes $I\left( {{\bf{y}}_i;s_i^M|{\bf{G}}_i^M} \right) =
\sum\nolimits_{k = i}^M {{I_k}}, i=1,\cdots,M$.
Let $I_k$ denote the capacity of the $k$-th substream optimized by ${\bf Q}_{opt}$, with $I_1 \le I_2 \le \cdots \le I_M$.
We recast the proof as an optimization problem with linear constraints as follows:
\begin{equation}\label{LP}
\begin{aligned}
{\max}~& f\left(x_1,x_2, \cdots, x_M\right) = {\sum\limits_{k = 1}^M {{{\left( {{x_k} - \bar I} \right)}^2}} },\\
{\rm{s. t.}~}& {\sum\limits_{k = i}^M {x_k} } \le \sum\limits_{k = i}^M {{I_k}}, 2 \le i \le M,\\
&{\sum\limits_{k = 1}^M {x_k} } = M\bar I.
\end{aligned}
\end{equation}
Then, since $f\left(x_1,x_2, \cdots, x_M\right)$ is a convex function, its maximum over the feasible region is attained at a vertex of the boundary, namely at $x_k = I_k$, $1 \le k \le M$.
Thus, ${\bf Q}_{opt} = {\bf V}_{\bf HW}$ is the optimal TPC matrix with $\bf W$.
\end{IEEEproof}
According to Lemma \ref{lemma1}, we can readily derive the optimal TPC matrix ${\bf F}_{opt}$ for satisfying the polarization criterion.
\begin{lemma}\label{lemma2}
The optimal TPC matrix ${\bf F}_{opt} \in {\cal U}(M_T,M)$ is a matrix constructed from the last $M$ columns of ${\bf V}_{\bf H}$.
\end{lemma}
\begin{IEEEproof}
In \cite{UP_SMS}, it has been shown that ${\bf F}_{opt}{\bf Q}$ is the optimal TPC maximizing $I\left( {{\bf{y};\bf{s}}|{{\bf HF}_{opt}{\bf Q}}} \right)$, where ${\bf F}_{opt}$ is composed of the last $M$ columns of ${\bf V}_{\bf H}$ and $\forall {\bf Q} \in {\cal U}(M,M)$. Then, according to Lemma \ref{lemma1}, the optimal polar TPC associated with fixed ${\bf F}_{opt}$ is
${\bf Q}_{opt} = {\bf V}_{{\bf HF}_{opt}} = {\bf I}_M$. Thus, the optimal polar TPC matrix is ${\bf F}_{opt}$, which is constructed from the last $M$ columns of ${\bf V}_{\bf H}$.
\end{IEEEproof}
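Numerically, ${\bf F}_{opt}$ follows from a single SVD. In the sketch below we reverse numpy's descending singular-value order so that the retained columns match the letter's ascending convention, i.e., the last $M$ columns of ${\bf V}_{\bf H}$; the ordering choice is ours and does not affect the spanned subspace.
\begin{verbatim}
import numpy as np

def optimal_polar_tpc(H, M):
    """F_opt of Lemma 2: right-singular vectors of H for its
    M largest singular values, in ascending-sigma order."""
    _, _, Vh = np.linalg.svd(H)  # sigma descending in numpy
    V = Vh.conj().T
    return V[:, :M][:, ::-1]
\end{verbatim}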
\subsection{Polar TPC Codebook Design}
The codebook of the polar TPC is ${\cal F} = \left\{{\bf F}|{\bf F = WQ},{\bf W}\in{\cal W}, {\bf Q}\in{\cal Q}\right\}$ with $|{\cal F}| = 2^{B}$, $|{\cal W}| = 2^{B_1}$, $|{\cal Q}| = 2^{B_2}$ and $B = B_1 + B_2$. The codebook design is divided into two steps, which are summarized as follows:
\begin{enumerate}
\item $\cal W$ is designed by the DFT TPC of \cite{UP_STBC, UP_DFT}, i.e.,
\begin{equation}\label{WDFTcodebook}
{\cal W} = \left\{{\bf W}_{\rm DFT}, {\bf\Theta}_{\bf W}{\bf W}_{\rm DFT}, \cdots, {\bf\Theta}_{\bf W}^{2^{B_1}-1}{\bf W}_{\rm DFT}\right\},
\end{equation}
where the entry of ${\bf W}_{\rm DFT}$ at $(k,l)$ is
$\frac{1}{\sqrt {M_T}} e^{i\left(\frac{2\pi}{M_T} \right)kl}$ and the diagonal matrix ${\bf\Theta}_{\bf W}$ is
\begin{equation}\label{WDFT_Theta}
{\bf{\Theta }_{\bf W}} = diag\left({e^{i\left( {2\pi /{2^{B_1}}} \right){a_1}}}, \cdots, {e^{i\left( {2\pi /{2^{B_1}}} \right){a_{{M_T}}}}} \right).
\end{equation}
The vector ${\bf a} = [a_1,\cdots,a_{M_T}]$ in (\ref{WDFT_Theta}) is
\begin{equation}\label{vector_a_W}
{\bf a} = \mathop {\arg \max }\limits_{{\cal Z}} \mathop {\min }\limits_{1 \le l \le {2^{B_1}} - 1} d\left( {{{\bf{W}}_{{\rm{DFT}}}},{{\bf{\Theta }}_{\bf W}^l}{{\bf{W}}_{{\rm{DFT}}}}} \right).
\end{equation}
\item $\cal Q$ is also designed by the DFT TPC, i.e.,
\begin{equation}\label{QDFTcodebook}
{\cal Q} = \left\{{\bf Q}_{\rm DFT}, {\bf\Theta}_{\bf Q}{\bf Q}_{\rm DFT}, \cdots, {\bf\Theta}_{\bf Q}^{2^{B_2}-1}{\bf Q}_{\rm DFT}\right\}.
\end{equation}
Here, the entry of ${\bf Q}_{\rm DFT}$ at $(k,l)$ is $\frac{1}{\sqrt {M}} e^{i\left(\frac{2\pi}{M} \right)kl}$ and the diagonal matrix ${\bf\Theta}_{\bf Q}$ is
\begin{equation}\label{QDFT_Theta}
{\bf{\Theta }_{\bf Q}} = diag\left(1,{e^{i\left( {2\pi /{2^{B_2}}} \right)}}, \cdots, {e^{i\left( {2\pi /{2^{B_2}}} \right){\left(M-1\right)}}} \right).
\end{equation}
\end{enumerate}
For polar TPC, the optimization of $B_1$ and $B_2$ is important. In this paper, $B_1$ and $B_2$ are selected empirically and we just provide a compact insight into their optimization.
According to the polarization criterion (\ref{criterion_new}), $B_1$ and $B_2$ affect the capacity and the polarization effect, respectively.
Then, a higher $B_1$ or a lower $B_2$ leads to a higher capacity but a weaker polarization effect, and vice versa.
Explicitly, both factors have an influence on the PC-MIMO performance. Thus, the polar TPC codebook has to strike a trade-off between the capacity and the polarization effect, and both $B_1$ as well as $B_2$ should be optimized.
\section{Performance Evaluation}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering{\includegraphics[scale=1]{capacity.pdf}}
\caption{The capacity of substreams for the fixed channel response of (\ref{H}) and different TPCs, where $M_T = 3$, $M_R = 3$ and $M = 2$.}\label{capacity}
\end{figure}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering{\includegraphics[scale=1]{BLER_fixed_H.pdf}}
\caption{The BLER of PC-MIMO systems using different TPC schemes, where $M_T = 3$, $M_R = 3$, $M = 2$, $N = 64$ and $R = 1/4$.}\label{BLER1}
\end{figure}
In this section, we first provide the capacity of the substreams for the fixed channel matrix
\begin{equation}\label{H}
{\bf{H}} = \left[ {\begin{array}{*{20}{c}}
{0.61 - 0.92i}&{-0.93 + 0.56i}&{-1.24 + 0.35i}\\
{0.93 - 1.30i}&{-0.21 - 0.15i}&{-0.51 - 0.60i}\\
{0.01 + 0.35i}&{-0.64 - 0.44i}&{0.78 + 0.04i}
\end{array}} \right].
\end{equation}
Then, the block error rate (BLER) performance of the proposed polar TPC is provided for the channel response in (\ref{H}). Finally, we provide the BLER performance of polar TPC under block-fading channels. The PC-MIMO system is constructed by the Gaussian approximation (GA) \cite{GA_Trifonov}.
The polarization criterion maximizes both the capacity and the polarization effect simultaneously.
To allow the system performance to approach the capacity, ML detection is considered.
Furthermore, since the polarization effect is catalyzed by the SIC structure of the PC-MIMO system, ML-SIC detection is used in this paper.
Fig. \ref{capacity} shows the capacity of the substreams for different TPC schemes and for the fixed channel response in (\ref{H}), where $M_T = 3$, $M_R = 3$ and $M = 2$. In Fig. \ref{capacity}, $I_1$ and $I_2$ are the capacities of the first and the second substreams, respectively. We can observe that by introducing the TPC matrix $\bf Q$, the polarization effect of the polar TPC for $B_1 = 3$ and $B_2 = 1$ is higher than that of the DFT TPC with $B = 3$. Thus, the proposed polar TPC enhances the polarization effect among the substreams, which improves the BLER of the PC-MIMO system shown in Fig. \ref{BLER1}.
Then, the polar TPC using the optimal TPC ${\bf Q}_{opt}$ also shows a more significant polarization effect than the polar TPC with $B_1 = 3$ and $B_2 = 1$. Similarly, the more significant polarization effect improves the BLER in Fig. \ref{BLER1} as well.
Fig. \ref{BLER1} illustrates the BLER of PC-MIMO systems for different TPC schemes, where $M_T = 3$, $M_R = 3$, $M = 2$, $N = 64$ and $R = 1/4$.
ML-SIC detection and SC decoding are used for the PC-MIMO system.
Then, in order to make the comparison fair, the performance of the DFT and polar TPCs having an identical number of feedback bits is provided, i.e., $B = 4$ for the DFT TPC, and $B_1 = 3$ as well as $B_2 = 1$ for the polar TPC. In Fig. \ref{BLER1}, we can first observe that the GA bound, widely used in \cite{PCMIMO, GA_Trifonov}, is still an upper bound of the performance of PC-MIMO TPC schemes under SC decoding. Moreover, the GA bound coincides with the corresponding BLER performance in the high signal-to-noise ratio (SNR) regions.
Furthermore, as expected, both the DFT and the polar TPCs outperform the ``no-TPC'' system. Hence, TPC efficiently improves the performance of PC-MIMO. Additionally, since the proposed polar TPC has better polarization effect than the DFT TPC, it has about $0.45$dB performance gain at BLER $10^{-4}$.
Moreover, due to the better polarization effect shown in Fig. \ref{capacity}, the performance of the polar TPC relying on the optimal TPC ${\bf Q}_{opt}$ achieves about $0.4$dB gain over the polar TPC with $B_1 = 3$ and $B_2 = 1$ at BLER $10^{-4}$. Therefore, better polarization leads to a better PC-MIMO performance using the proposed polar TPC instead of other known TPCs.
Fig. \ref{BLER2} provides the BLER of PC-MIMO systems using CA-SCL decoding \cite{niu_CASCL} and polar TPC under block-fading channels, where $M_T = 4$, $M_R = 4$, $M = 3$, $N = 128$ and $R = 1/2$. ML-SIC MIMO detection is used and the list size of the CA-SCL decoder is 8, where the 6-bit CRC of \cite{3GPP_5G_polar} is used. In Fig. \ref{BLER2}, the performance of the polar TPC using the optimal TPC ${\bf F}_{opt}$ is provided,
which can be treated as the best-case bound of the polar TPC, since ${\bf F}_{opt}$ maximizes the polarization effect of polar TPC. Then, we can observe that the performance of polar TPC using limited feedback is close to the performance of polar TPC using ${\bf F}_{opt}$ as $B_1$ increases.
Specifically, the polar TPC using $B_1 = 4$ and $B_2 = 1$ has almost identical BLER to that of ${\bf F}_{opt}$ in the high SNR regions. Thus, the polar TPC has the potential of approaching the optimal performance, despite the limited feedback.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering{\includegraphics[scale=1]{BLER_block_fading.pdf}}
\caption{The BLER of PC-MIMO systems using CA-SCL decoder and polar TPC, where $M_T = 4$, $M_R = 4$, $M = 3$, $N = 128$, $R = 1/2$ and the list size of CA-SCL is 8.}\label{BLER2}
\end{figure}
Fig. \ref{BLE_LDPC} and Fig. \ref{BLER_LDPC} portray our BER and BLER performance comparisons, respectively, where we have $M_T = 4$, $M_R = 4$, $M = 3$, $N = 64$ and $R = 1/3$.
For the PC-MIMO system, the CA-SCL decoder having a list size of 8 and 6-bit CRC \cite{3GPP_5G_polar} is used, where the MIMO detector is ML-SIC. For the low-density-parity-check (LDPC)-coded MIMO (LC-MIMO) system, the LDPC encoder and the rate-matching algorithm are those of 5G \cite{3GPP_5G_polar}, the sum-product algorithm having 25 iterations and layered scheduling are used for the LDPC decoder \cite{LinShu}, and the MIMO detector uses the linear minimum mean square error (LMMSE) algorithm.
In Fig. \ref{BLE_LDPC} and Fig. \ref{BLER_LDPC}, we can observe that the PC-MIMO system using polar TPC has better BER and BLER performance than the LC-MIMO system associated with DFT TPC. Specifically, at BER $10^{-4}$ and BLER $10^{-3}$, the PC-MIMO system has 1.6dB and 1.1dB performance gain over the LC-MIMO system, respectively.
Since the codebook is designed offline, selecting an appropriate precoding matrix from the codebook dominates the complexity of precoding.
The complexity of calculating the capacity is on the order of $O\left(M_TM_RM\right)$.
Hence, the complexity of the DFT TPC relying on the capacity criterion is $O\left(2^BM_TM_RM\right)$. For polar TPC, the complexities of selecting $\bf W$ and $\bf Q$ are $O\left(2^{B_1}M_TM_RM\right)$ and
$O\left(2^{B_2}M_TM_RM^2\right)$, respectively. Then, the complexity of polar TPC is $O\left((2^{B_1} + 2^{B_2}M )M_TM_RM\right)$.
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering{\includegraphics[scale=0.67]{BER_LDPC.pdf}}
\caption{The BER comparison between PC-MIMO system with polar TPC and LC-MIMO system with DFT TPC, where $M_T = 4$, $M_R = 4$, $M = 3$, $N = 64$ and $R = 1/3$. }\label{BLE_LDPC}
\end{figure}
\begin{figure}[t]
\setlength{\abovecaptionskip}{0.cm}
\setlength{\belowcaptionskip}{-0.cm}
\centering{\includegraphics[scale=0.67]{BLER_LDPC.pdf}}
\caption{The BLER comparison between PC-MIMO system with polar TPC and LC-MIMO system with DFT TPC, where $M_T = 4$, $M_R = 4$, $M = 3$, $N = 64$ and $R = 1/3$. }\label{BLER_LDPC}
\end{figure}
\section{Conclusion}
In this compact letter, we proposed a polar TPC for PC-MIMO systems relying on the new polarization criterion, which is quite different from other design criteria. Based on this new polarization criterion, the optimal TPC was derived and a method of designing the polar TPC codebook was proposed. Our simulation results illustrate that the proposed polar TPC outperforms its DFT-based counterpart.